HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI
20 Feb 2026 13:00h - 14:00h
Summary
The panel examined how heterogeneous compute and voice-first interfaces are reshaping AI adoption in India. Panelists noted that voice is the most natural UI and that AI experiences must remain consistent even with fluctuating network quality, which requires running inference locally on devices, with smartphones handling 10-billion-parameter models and glasses running sub-billion-parameter models [2-3][6-9][11-13].
Rizvi underscored the environmental stakes of AI, stressing that energy resources are finite, while highlighting India’s vibrant ecosystem of some 300 Gen-AI startups, sovereign large language models, and a strong application-layer focus. He also pointed to enterprise-scale bottlenecks in compute availability and connectivity that Cisco aims to address [14-19][22-24].
Arun Shetty identified three core impediments: insufficient infrastructure (power, compute, networking), security and safety of models, and data gaps. He argued that edge inferencing will become dominant, requiring fit-for-purpose solutions that combine edge, on-prem, and cloud resources while ensuring visibility to mitigate hallucinations, toxicity, and malicious tampering [43-48][49-53][54-58][60-62]. Dr. Kamakoti added that sovereign models are essential to thwart adversarial attacks, describing trust as a non-reflexive, non-symmetric, context-dependent relation that must be mathematically defined, and he emphasized that heterogeneous architectures are needed for dynamic threat detection such as advanced deep-packet inspection [51-55][56-63][64-66].
Gokul highlighted vertical-specific edge models that must operate within memory, I/O, thermal, and power limits. Noting India’s power-intensive data-center profile, he stressed the importance of improving PUE, using air-cooled racks where feasible, and adopting hybrid renewable/off-grid energy to support edge deployments that can “leapfrog” underserved regions while reducing total cost of ownership [67-73][74-78][78-82].
Durga concluded by advocating a holistic distribution of compute from devices through edge clouds to data centers, describing Qualcomm’s “hybrid AI” approach that leverages air-cooled carts for large models without always requiring liquid cooling, and the Minister reinforced that policy must secure power, water, land, and infrastructure to enable this distributed AI ecosystem, with welfare and happiness as the ultimate goals [90-98][99-103][139-145].
Overall, the discussion converged on the need for secure, energy-efficient, and heterogeneous AI infrastructure, backed by coordinated policy, to drive India’s next-generation digital transformation [89-95][101-103][139-145].
Keypoints
Major discussion points
– Heterogeneous compute & edge inference are essential for a seamless AI experience.
Durga emphasized that AI should remain “invariant to the quality of the communications” by running inference on devices when possible and leveraging edge-cloud and data-center resources as needed [8-13]. Dr. Kamakoti later linked this need to “heterogeneous architecture” for dynamic malware detection [54]. Gokul reinforced the push toward edge inferencing to reach locations with limited connectivity [66-71].
– Infrastructure constraints (power, compute, networking) must be addressed with fit-for-purpose solutions.
Arun Shetty listed the three impediments to AI adoption: power, compute, and networking, noting that “more inferencing happening at the edge” will shape future designs [43-48]. Gokul expanded on the power challenge, describing cooling limits, PUE targets, and the need for hybrid energy systems [70-78].
– Security, safety, trust, and sovereign models are critical barriers.
Shetty highlighted model hallucinations, toxicity, and the need for visibility across the stack [52-55]. Dr. Kamakoti stressed the importance of “sovereign models” to prevent adversarial attacks and discussed the mathematical foundations of trust [52-60]. Later, Shetty detailed practical guardrails such as asset discovery, vulnerability scanning, and policy enforcement [118-130].
– Environmental and energy-efficiency considerations underpin all technical choices.
Rizvi called out the “strong environmental aspect” of AI inference and the finite nature of energy [14-15]. Gokul quantified the power-to-cooling split in data centers, advocated for air-cooled racks where possible, and urged hybrid renewable/off-grid solutions to meet India’s growing demand [70-78].
Overall purpose / goal of the discussion
The panel aimed to map India’s AI landscape, identify the technical, infrastructural, security, and policy challenges of scaling generative AI, and outline coordinated actions, ranging from heterogeneous edge compute to sovereign model development and sustainable energy strategies, that enable responsible, enterprise-grade AI deployment across the country in the near term (2-year horizon).
Tone of the discussion
– Opening (Durga & Rizvi): Technical and forward-looking, highlighting opportunities of voice interfaces and heterogeneous compute.
– Middle (Shetty, Dr. Kamakoti, Gokul): Becomes more urgent and problem-focused, stressing concrete constraints (power, security, data gaps) and the need for collaborative, fit-for-purpose solutions.
– Closing (Rizvi, Minister): Shifts to a supportive, optimistic tone, emphasizing policy alignment, national welfare, and a collective commitment to “leapfrog” with AI while ensuring sustainability.
Overall, the conversation moved from an exploratory technical vision to a pragmatic roadmap anchored in security, energy, and policy considerations, ending on a hopeful note from the ministerial perspective.
Speakers
– Durga (Durga Malladi) – Speaker; associated with Qualcomm AI initiatives and workshops [S2][S3].
– Honorable Minister – Minister of State for Personnel, Public Grievances and Pensions; involved in administrative reforms and India’s science & innovation agenda [S4][S5][S6].
– Gokul (Gokul Subramaniam) – Speaker; contributor to discussions on heterogeneous compute and edge AI [S7].
– Arun Shetty – Cisco representative; discusses infrastructure, security, and AI adoption challenges [S8].
– Dr. Kamakoti (Prof. V. Kamakoti) – Professor; expert on AI security, sovereign models, and cybersecurity [S9].
– Rizvi (Kazim Rizvi) – Panel moderator and speaker; focuses on environmental aspects of AI and policy [S10].
Additional speakers:
– Mr. Vichetti – Mentioned by the Honorable Minister; role/title not specified in the transcript.
– Sarah (Intel) – Referred to for handing over gifts; affiliation with Intel, role not specified.
Opening – Voice-first, multilingual AI (Durga) [1-8]
Durga opened by stressing that voice-first interfaces are strategically vital for India’s multilingual population, describing voice as “the most natural user interface to devices around you” and noting the need to support 14 languages for native-language interaction. He argued that AI should move away from continuous typing or texting toward on-device inference so that the experience remains “invariant to the quality of the communications.” To deliver this, Durga called for heterogeneous compute capable of running a 10-billion-parameter multimodal model on a smartphone and a sub-billion-parameter model on smart glasses, with seamless fallback to edge-cloud or data-centre resources when connectivity permits [1-8].
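The device/edge/data-centre fallback Durga describes can be sketched as a simple routing decision. This is an illustrative sketch only, not Qualcomm's actual scheme; the tier names and capacity figures are taken loosely from the parameter counts quoted in the panel.

```python
# Hypothetical sketch of hybrid-AI routing: run inference on-device when
# possible, and fall back to edge-cloud or data-centre resources only when
# connectivity allows. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    max_params_b: float   # largest model (billions of params) the tier serves
    needs_network: bool

# Capacities loosely based on figures quoted in the panel:
# sub-1B on glasses, ~10B on a smartphone, 100-300B on edge-cloud servers.
TIERS = [
    Tier("glasses", 0.9, needs_network=False),
    Tier("smartphone", 10.0, needs_network=False),
    Tier("edge-cloud", 300.0, needs_network=True),
    Tier("data-centre", float("inf"), needs_network=True),
]

def route(model_params_b: float, network_ok: bool) -> str:
    """Pick the smallest tier that can serve the model right now."""
    for tier in TIERS:
        if model_params_b <= tier.max_params_b and (network_ok or not tier.needs_network):
            return tier.name
    return "unavailable"  # model too large and no connectivity
```

Because the on-device tiers have `needs_network=False`, a small model keeps working even with zero connectivity, which is the "invariant to the quality of the communications" property the panel emphasizes.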
Environmental & ecosystem context (Rizvi) [9-15]
Rizvi shifted the discussion to the broader ecosystem, warning that energy is finite and that “efficiently managing the energy requirements” of AI inference is a critical, often overlooked concern. He highlighted India’s vibrant generative-AI startup scene, with about 300 firms building on large language models, and the development of sovereign LLMs such as Sarvam to secure the application layer. Rizvi emphasized the importance of tackling enterprise-scale compute and network-connectivity bottlenecks, noting Cisco’s role in the ecosystem [9-15].
Cisco’s perspective – three impediments & fit-for-purpose architecture (Arun Shetty) [16-30]
Arun Shetty identified three core impediments to AI adoption: power, compute and networking. He projected national AI demand could reach 63 GW within a few years, observed tightening compute capacity, and argued that networking must evolve to support distributed workloads. Shetty advocated “fit-for-purpose” solutions that combine on-device, edge-cloud and on-premise resources, stating that more inferencing at the edge will reshape architecture. He positioned security and safety as an even larger challenge, calling for visibility across the stack to detect hallucinations, toxicity and malicious model manipulation. Shetty also stressed that Cisco cannot solve these problems alone and must work with ecosystem partners [16-30].
Security, sovereign models & formal trust (Dr Kamakoti) [31-45]
Dr Kamakoti expanded the security theme, insisting that sovereign models are essential to prevent adversarial attacks and data-poisoning. He introduced a formal notion of trust, describing it as “not reflexive, not symmetric, not transitive,” and emphasizing its context-dependence and temporal nature. He said a mathematical framework for trust must be built to underpin secure AI deployments. Kamakoti argued that heterogeneous architectures are required for dynamic deep-packet inspection and rapid malware-signature updates, because traditional signature-based inspection cannot keep pace with evolving threats. He also warned that models used in education must be carefully curated to avoid teaching harmful content [31-38][39-41][34-36].
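Kamakoti's point that trust fails all three equivalence-relation properties can be checked mechanically on a toy relation. The trust pairs below are invented for illustration, loosely echoing the examples he gave on stage.

```python
# Toy check that a trust relation need not be reflexive, symmetric, or
# transitive (i.e. it is not an equivalence relation). Pairs are invented.
trusts = {("Kamakoti", "Sarah"), ("Kamakoti", "Gokul"), ("Gokul", "Rizvi")}
people = {"Kamakoti", "Sarah", "Gokul", "Rizvi"}

def is_reflexive(rel, domain):
    # every element must trust itself
    return all((a, a) in rel for a in domain)

def is_symmetric(rel):
    # a trusts b must imply b trusts a
    return all((b, a) in rel for (a, b) in rel)

def is_transitive(rel):
    # a trusts b and b trusts c must imply a trusts c
    return all((a, d) in rel
               for (a, b) in rel for (c, d) in rel if b == c)
```

Here `is_reflexive`, `is_symmetric`, and `is_transitive` all return `False` for `trusts`: nobody trusts themselves, Sarah does not trust Kamakoti back, and Kamakoti trusts Gokul and Gokul trusts Rizvi without Kamakoti trusting Rizvi, matching the panel's examples.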
Edge-compute constraints, cooling, PUE & hybrid energy (Gokul) [46-60]
Gokul focused on practical constraints of edge deployment. He explained that vertical-specific models must operate within tight limits of memory, I/O, thermal dissipation and power. Citing India’s data-centre power profile, where roughly 40 % of consumption goes to cooling, he advocated improving Power Usage Effectiveness (PUE) by using air-cooled racks where feasible and reserving liquid cooling for densities above ~25 kW per rack. Gokul argued that a hybrid energy mix of renewable, grid and off-grid sources is required, noting that “pure renewable may not be enough; a hybrid mix … is required” to sustain growth and enable “leap-frogging” of AI services to remote regions lacking reliable connectivity [46-55][55-57][56-60].
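The 40/40/20 split Gokul quotes (cooling / compute / connectivity) gives a quick back-of-envelope PUE, defined as total facility power divided by IT-equipment power. Whether networking gear counts as IT load is a judgment call, so both readings are shown; the figures below simply restate the panel's split, not measured data.

```python
# Back-of-envelope PUE from the panel's 40/40/20 power split.
# PUE = total facility power / IT equipment power (ideal value is 1.0).
def pue(total_kw: float, it_kw: float) -> float:
    return total_kw / it_kw

total_kw = 100.0
compute_kw, connectivity_kw, cooling_kw = 40.0, 20.0, 40.0

pue_strict = pue(total_kw, compute_kw)                    # IT = compute only
pue_lenient = pue(total_kw, compute_kw + connectivity_kw) # IT includes network
```

On these numbers `pue_strict` is 2.5 and `pue_lenient` is about 1.67; either way, the gap to 1.0 shows why Gokul treats cooling efficiency as the main lever.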
Closing visions – Hybrid AI, guardrails, policy (Durga, Arun Shetty, Minister) [61-78]
In the closing round-table, Durga reiterated a holistic approach that distributes compute across devices, edge-cloud and data-centres. He described Qualcomm’s “Hybrid AI” strategy, which leverages air-cooled carts for large models (100-300 billion parameters) without always requiring liquid cooling, thereby reducing reliance on a single monolithic data-centre [61-64]. Arun Shetty returned to security, outlining concrete guardrails for enterprises: systematic discovery of “shadow AI” assets, vulnerability scanning of models, and enforcement of policies that block transmission of confidential data to unauthorised third-party services. He referenced industry standards such as NIST, MITRE and OWASP to frame a “secure AI factory” capable of defending against malicious manipulation using AI-driven security tools [65-69][70-73]. The Honourable Minister concluded by linking the technical agenda to national policy, stressing that the government must ensure provision of power, water and land to support AI infrastructure and framing the ultimate goal as “welfare for all, happiness for all,” echoing the panel’s repeated emphasis on energy availability as a primary bottleneck [74-78].
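The outbound-policy guardrail Shetty describes, blocking confidential data bound for unapproved third-party AI services, can be sketched as a simple allow-list plus content check. This is a minimal illustration, not Cisco's product behaviour; the host names and patterns are hypothetical.

```python
# Minimal sketch of a data-loss-prevention check for AI traffic: permit a
# request unless confidential-looking content is headed to an unapproved
# third-party service. Allow-list and patterns are illustrative only.
import re

APPROVED_SERVICES = {"internal-llm.example.com"}  # hypothetical first-party host

CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),          # marked documents
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-like numbers
]

def allow_request(host: str, prompt: str) -> bool:
    """Block confidential content going to services outside the allow-list."""
    if host in APPROVED_SERVICES:
        return True
    return not any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)
```

A real deployment would sit in a proxy or browser extension and combine this with the asset discovery and vulnerability scanning steps Shetty lists; pattern matching alone is only the last line of the "secure AI factory" he frames.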
Roadmap synthesis
Collectively, the panel agreed on a near-term roadmap (2-4 years) that couples distributed, energy-aware compute with robust, trust-centric security and sovereign data governance to enable responsible, enterprise-grade AI deployment in India. [61-78]
with them. 14 languages. Voice is the most natural user interface to devices around you. So the idea is not to actually keep typing and texting, but to use voice, in native languages, which actually works very nicely. And that means you have to make sure that the use cases are built on top of it. That’s what our focus is from a processor standpoint. One final note, given that I have maybe just one minute: another aspect of heterogeneous compute is disaggregation of compute within the network itself. What I mean by that is, at some point in time you might have extremely good connectivity to the network, and at some other point in time you might have zero connectivity to the network.
And the question to ask is: do you want your AI user experience to be invariant to the quality of the communications you have at that point in time, or do you want it to depend on it? Obviously, you want it to be invariant. That means you must have the ability to run inference directly on devices. Not that you want to do it all the time, but when you can, why not? Today we can run up to a 10-billion-parameter multimodal model, state of the art, on a smartphone, and a sub-1-billion-parameter model in your glasses, without necessarily charging the device the whole day; it’s once every 24 hours. So we’ve come a long way. Which means: use the data centers and the edge cloud as and when necessary; they have a role to play. At the same time, make sure we also build for the devices where the inference actually occurs and where users directly perceive it; that’s where the data originates. So it’s important to think about it that way.
Yeah, there’s also a very strong environmental aspect to this, which often goes unnoticed and undiscussed, but that element is also very important in terms of efficiently managing the energy requirements, because energy, as we also know, is finite. One thing that struck me was what was spoken about inference: a lot of what’s happening in India is also around inferencing models, right? In terms of the Gen AI story which we have, we have almost 300 Gen AI startups which are building on top of the large language models.
And India is definitely leading the way in terms of the application layer. There’s no doubt about that. Now, of course, with Sarvam and others, we are also building sovereign large language models, right? So, as Minister Vaishnaw has spoken about, every piece of the puzzle, we are there in terms of fitting that puzzle together. I’d like to come to Mr. Arun Shetty, sir, who is with Cisco. We just want to take it further from where Durga sir left off, in terms of enterprise adoption at scale. With Cisco, what are the challenges or bottlenecks you see in terms of compute availability and connectivity, and what is Cisco trying to do about them?
And I think that’s a really important thing to talk about.
Yeah, so as you know, we connect and protect the… This should be working, right? Yeah. As you know, we connect and protect even in the AI era, right? We started in the internet era, we came into the cloud, and now we are in this era. First of all, thank you very much for having me; it’s indeed a pleasure to be part of this esteemed panel. What I’ll do is summarize based on what others have spoken about, because those are real problems. The first of the three impediments for AI adoption is clearly infrastructure constraints, and we all spoke about it.
The first one is power. Power is a challenge and will be a challenge; I think the expectation is that it will require 63 gigawatts of power in a couple of years. Then compute is a problem; we did recognize that compute is becoming a problem. And then, as Kamakoti sir asked: Cisco is in networking, what are you doing in networking? Networking will be a problem too, and we need to see how to address it. Clearly it has to be fit-for-purpose solutions, because you don’t only build huge data centers; what we see is that in a couple of years there will be more inferencing happening at the edge. That’s how the world will move, and that’s why solutions have to be fit for purpose.

The second, bigger challenge is the security and safety aspect. That is something we need to pay a lot of attention to, because, as the adage says, you can’t trust what you can’t see. You need to have visibility across the stack, and you need to see whether the models we are using are the right models for us, or whether there is anything malicious in the models themselves, vulnerabilities in those models. The security and safety aspect becomes very important because models hallucinate and you can inject toxicity into a model. Those are the challenges we need to address. If you look at the models, they were all built using public data: text, voice, and video. However, the enterprises and the government have the best data sets, so why can’t we use those data sets?

The third impediment is the data gap: I need high-quality, accessible, and manageable data. We can build GPTs using that, what we could call a machine GPT, use it for training and inferencing, and get a lot of quality use of AI. Without data, which is the fuel for AI today, you can’t really move forward on AI. Those are the typical three problems, and the ways we are looking at addressing them are: first, I will not be able to build a huge data center for a specific use case, so take a use case and see how fast I can provide that infrastructure, a comprehensive secure AI factory or secure infrastructure, whether in the data center or at the edge, so that people can focus on building the use cases or applications on top of it. The second is the safety and security aspect and how we can build the defense mechanisms. And the third is the data. Those are the three problems Cisco is trying to address, along with ecosystem partners of course, because this is not a problem you can solve alone. Thank you.
Yeah, I don’t know if my mic… okay, it’s okay. I’ll take off from the security point you have spoken about and come to Dr. Kamakoti. On the clock it shows seven, but on my watch it shows 15, so I’ll go by my watch. Dr. Kamakoti, I’d like to focus on critical infra and public systems here. As you know, with the advent of AI, we’re going to use it across these sectors as well. So how important do you see heterogeneous compute as being in terms of contributing to national resilience, to safeguard and ensure that our critical infrastructure and public systems are secure as well?
So today, the type of things we need to do for each one of these applications, the type of inferencing, the type of response time we need, as Shetty mentioned, is going to be different. I hope all of you have seen Yes, Prime Minister; they always say “need to know,” right? Now what happens is, if I make a model that has understood the entire data, and it is used by someone, does that someone need to know that data? That’s a very important question. That’s where the entire aspect of cybersecurity comes in, and that’s why we are all saying that we need to have sovereign models.
As he rightly pointed out, with adversarial AI we can poison the whole thing and make it tell things that should not be told, or need not be told. This is something we need to look at very closely from a security point of view: I do an inferencing and my training data set goes for a toss; that’s number one. We need to have something for education at least. As the director of one of the premier institutes in the country, my worry is that for education, just as we have a censor board for movies, we should make models into which certain details alone are fed. See, it is a bachcha, a child, right? Whatever you teach it is what it will tell you back, probably a little more generative on top of that. That’s number one. Number two, coming back to Cisco itself: you do deep packet inspection, and basically you do it with signatures. Today the whole story is changing dynamically; malware can change its signature. That’s going to be the biggest challenge now, and for the sort of inferencing they’ll have to do, they have to bring a different architecture, and that will be a heterogeneous architecture. So ultimately, you know, the trust component, I always repeat this, and I’ll finish with this in my one minute.
So, trust, you know, friends: if you want to define A as equivalent to B, an equivalence relation in discrete mathematics should satisfy three properties: reflexive, symmetric, transitive. Trust is not reflexive: I don’t trust myself sometimes. Trust is not symmetric: I trust Sarah; Sarah may not trust me. Trust is not transitive: I trust Gokul, Gokul trusts you, I may not trust you. In addition, trust is context dependent: I trust you on something, I don’t trust you on something else. And it is temporal: in the morning I trust you, in the evening I don’t trust you.
So, right? So the main thing is, we have to build that mathematics of well-defined trust. If you go to some of these search engines and ask to define trust, you get a million hits. So that is going to be the most important part. Specifically with heterogeneous compute, we will have certain different types of security issues: something which A can send, something which originates because of A. And that’s where all of us, edge, connectivity, server, all three have to work together, and we will teach and he’ll put policy.
But both of you are also playing an important role in terms of policy. Dr. Kamakoti, you’re a very influential and important figure in India’s AI policies; of course, lots to learn from you. Gokul, very quickly I’d like to come to you. Just in terms of practical deployment models: what are the examples you’ve seen which demonstrate that we are moving towards heterogeneous compute, and what needs to be done to get there?
So I started off with workload and I’ll go back to the same thing. One of the things we’re looking at, and it’s critical, is to see what vertical really needs what kind of domain-specific models, then apply that as much as possible as edge inferencing, and contend with the walls that prevent AI from working efficiently: primarily memory, connectivity, I/O, thermal, and power. From an edge-inferencing standpoint, there are quite a few things being done, be it the education segment, where you want more translation, data availability, transcription, so that knowledge is imparted with the right data at the lowest power that’s meaningful for the student.
And more importantly, when we talk security, it’s not only about protecting data and models; we keep talking about data and models, but protecting the user is even more fundamental, and we must ensure that happens. The second thing is applying it to other verticals, be it small and medium business; I think there is a great opportunity there, where edge inferencing and compute with the right kind of power can translate into businesses actually using AI more effectively. The last aspect I want to touch upon is power. As we go from one gig to nine or ten gig in the next five years in this country, we have to realize that India is challenged by three physical things we cannot run away from: land, water, and power. These are very important aspects that will drive how we set up our infrastructure. Of a hundred percent of the energy that comes into a data center, forty percent goes into cooling, forty percent into compute, and twenty percent into connectivity. And there is this famous metric, the PUE, the power usage effectiveness.
It has to be as close to one as possible: all the power you supply should go to the most important thing, which is the compute, not to the cooling. There are a lot of technologies being explored for how much you can air-cool per rack; that was okay up to about 25 kilowatts, and as you start to get to 100, you have to use liquid cooling, and then there’s the question of how we set that infrastructure up. For a country like India, it’s absolutely important to look at what hybrid energy solutions we can go with, because pure renewable may not be able to address it. You’ll have to have something that is stable, and be able to do something off-grid, so that you reduce the dependency on getting data from the data centers, and push as much as possible to the edge, because edge is all about reach.
How can I take it to places across the country where there is no access to connectivity? It’s about how can I leapfrog? How can I leapfrog with verticals that have not used technology as much? We’ve always done a leapfrogging in India, and this is a great moment for us, and total cost of ownership. Those are the big areas.
Thank you, Gokul. As we approach the end of the panel, I’d like to go to Durga and Mr. Shetty for closing remarks and the way forward. To both of you, I’ll pose this question in terms of the next two to four years, because in the AI age we don’t think too far ahead; we can’t do five-year or 10-year planning. I think two-year planning is sufficient. So what enterprise outcomes are you both looking at? Maybe we can start with Durga, in terms of defining India’s access to compute, access to infrastructure and capacity, and also building in scale, cost efficiency, and energy efficiency.
So I’ll keep it brief. What I’m looking forward to, with all the conversations here and in other parts of the world as well, where the problems are somewhat similar, is the ability to distribute compute across the entire network. So think of a combination of inference that runs in devices to the largest extent possible, plus edge cloud and on-prem servers, where a lot of the localized processing can be done. And these can be done in air-cooled carts, by the way. The point that was made earlier is absolutely relevant: you don’t necessarily need liquid cooling all the time. You can use air-cooled servers running up to 100-to-300-billion-parameter models, which are getting pretty sophisticated.
That’s the edge cloud. And as you go deeper from there, you have the data centers. That mitigates the overall requirements of what you need in a data center. So instead of concentrating the entire compute in one single location and building for just that alone, a holistic approach of devices plus edge cloud plus data center is probably what we are looking forward to. At Qualcomm, we call it hybrid AI. It’s not just a marketing slogan; it is something we truly believe in. Thank you.
Since the infrastructure part has been addressed, let me talk a little bit more about the safety and security aspects. One of the things we need to understand about these modern models is that they are very intricate and very complex. They are also non-deterministic: if you give an input, the output will not necessarily be the same, unlike a standard application. So what should one be doing? There are two aspects: safety and security, and I’ll touch upon why it is important to know the difference. Safety is all about this: we want the models to work in a certain way, but they are not working the way we want them to.
That is the first part; that’s where the toxicity and hallucination challenges come in. The second part is security, wherein a bad actor from outside can change the behavior of the model. So we need to be careful about both. What should one be doing? For example, I think Kamakoti sir also said that users need to be secure, right? It is essential that organizations, or the country, build that. Which means if I’m accessing ChatGPT and sending some confidential info, the system should stop me. When I’m accessing a third-party application, the system should be smart enough to stop me, saying you can’t share that information; it’s not allowed for you to share.
That’s something which is already happening in organizations today. The second part is first-party applications: I’m building an application and using a model. The organization should be able to scan all of my AI assets, because one of the biggest challenges for enterprises is shadow AI applications; they don’t know what people are doing. So I need to clearly know what my assets are. Number one, I detect or discover all my assets. Next, I scan and ensure that the models and applications I’m using are not vulnerable. If something is vulnerable, then I need to put guardrails around it or fix those problems.
And similarly, organizations such as NIST, MITRE, and OWASP are telling us there are a lot of risks associated with this, and we need to ensure we stop them. That is Cisco’s focus: to see how we can use AI to defend against all this malice and the vulnerabilities we see. Thank you so much.
I think with this we’ll close the panel, but I’d like to invite the Honorable Minister once again for his very quick closing remarks. You’ve kept us highly motivated to build on this. You’ve heard us over the last hour; what are your thoughts? We’d love to hear your closing address.
Thank you, Rizvi. It's a great pleasure to be here with the eminent Padma Shri awardee Professor Kamakoti, and with Gokul, Durga Prasad, and Mr. Vichetti, who shared their truly professional experience of how, as a policymaker, one should view these things, especially power, electricity, water, and land, and how we should be well equipped to provide all of this wherever the eminent panelists here would think of setting up. The primary challenge they have posed before me is to provide all these things; we are here to provide the rest. And thanks once again for a very apt introduction and a very apt dialogue here.
Ultimately, all of us, me as a policymaker and all of you technocrats and innovators, have to keep in mind that the basic agenda for this AI Impact Summit is welfare for all, happiness for all. Thank you for inviting me. Thank you so much.
With this, we will close the panel. I'd like to thank all our panelists and invite my colleague Sarah from Intel to hand over the gifts. But first, we'll have a group photo. Thank you.
Event“Durga stressed that voice‑first interfaces are strategically vital for India’s multilingual population and that voice is the most natural way for users to interact with devices, requiring support for native languages.”
The knowledge base notes that Durga emphasizes voice as the most natural interaction method and the importance of supporting native languages for India’s linguistic diversity [S2].
“Durga highlighted the need for heterogeneous compute capable of running a 10‑billion‑parameter multimodal model on a smartphone and a sub‑billion‑parameter model on smart glasses, with fallback to edge‑cloud or data‑centre resources.”
While the knowledge base discusses the broader goal of distributing inference across devices and the network, it does not specify model sizes; it adds context about the overall strategy of edge-to-cloud compute distribution [S11].
“Rizvi mentioned sovereign LLMs such as Sarvam being developed in India to secure the application layer.”
Sarvam is referenced as an Indian sovereign model that aims to demonstrate world-class capabilities built locally [S93].
“Arun Shetty identified power, compute and networking as the three core impediments to AI adoption in India and warned that national AI demand could reach 63 GW within a few years.”
A Cisco executive is quoted in the knowledge base as stating that there is insufficient power, compute and network bandwidth globally, aligning with the three impediments described [S65].
“Shetty advocated “fit‑for‑purpose” solutions that combine on‑device, edge‑cloud and on‑premise resources for AI inference.”
The knowledge base highlights a vision of distributing compute across the entire network, supporting a hybrid on-device and edge approach, which adds nuance to the fit-for-purpose architecture concept [S11].
The panel shows strong consensus on four pillars: (1) a distributed, heterogeneous compute fabric (Hybrid AI); (2) power and energy efficiency as the dominant infrastructural constraint; (3) the necessity of security, safety and trust mechanisms; (4) the strategic importance of sovereign data and models. These agreements cut across technical, commercial and policy domains.
High consensus – the speakers from industry, academia and government repeatedly reinforce the same priorities, indicating that future AI strategies in India are likely to be coordinated around distributed compute architectures, sustainable energy provisioning, robust security frameworks and sovereign data ecosystems.
The panel largely converged on the need for distributed, edge‑centric AI architectures, sustainable power management, and sovereign data/models. Divergences emerged around the practicality of large on‑device inference versus existing power and cooling constraints, and between a formal, theoretical trust framework versus pragmatic security guardrails.
Moderate – while participants share common goals (resilient AI, energy efficiency, security), they differ on the feasibility of device‑level large models and on the preferred security paradigm. These differences suggest that policy and industry efforts must balance ambitious technical visions with realistic infrastructure planning and develop both theoretical and practical security standards.
The discussion was driven forward by a series of escalating insights that moved from a technical premise (Durga’s edge inference) to broader systemic challenges (Rizvi’s energy and sovereignty), a structured problem taxonomy (Arun Shetty’s three impediments), a foundational trust framework (Dr. Kamakoti), and finally to concrete, resource‑aware deployment strategies (Gokul) and actionable security road‑maps (Arun Shetty’s closing). Each of these pivotal comments acted as a turning point, expanding the scope, deepening the analysis, and aligning the participants toward a shared vision of resilient, secure, and sustainable AI infrastructure for India.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.