HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI

20 Feb 2026 13:00h - 14:00h

Session at a glance

Summary, key points, and speakers overview

Summary

This panel discussion focused on heterogeneous computing and AI infrastructure challenges in India, featuring experts from Qualcomm, Cisco, IIT Madras, and Intel, along with a government minister. The central theme revolved around distributing AI compute across different layers – from edge devices to data centers – to create more efficient and resilient AI systems.


Durga Malladi from Qualcomm emphasized the importance of running AI inference directly on devices, noting that smartphones can now handle 10 billion parameter models while smart glasses can run sub-1 billion parameter models. He advocated for “hybrid AI” that seamlessly distributes computing between devices, edge cloud, and data centers based on connectivity and requirements. The discussion highlighted voice interfaces in native languages as a key application area, with support for 14 languages mentioned.


Arun Shetty from Cisco identified three major impediments to AI adoption: infrastructure constraints (power, compute, and networking), security and safety concerns, and data gaps. He stressed that enterprises and governments possess the best datasets but need secure, fit-for-purpose solutions. The security aspect was particularly emphasized, noting challenges like model hallucination, toxicity injection, and the need for comprehensive visibility across AI systems.


Professor Kamakoti discussed the critical importance of trust in AI systems, explaining that mathematical definitions of trust are complex and context-dependent. He emphasized the need for sovereign AI models and robust cybersecurity measures, particularly for critical infrastructure and public systems. Energy efficiency emerged as a crucial concern, with discussions about power usage effectiveness (PUE) and the need for hybrid energy solutions. The panelists concluded that India’s AI future depends on collaborative efforts to address infrastructure, security, and energy challenges while leveraging the country’s strengths in application development and diverse datasets.


Key points

Major Discussion Points:

Heterogeneous Computing and Distributed AI Infrastructure: The panel extensively discussed the need for distributed computing across devices, edge cloud, and data centers rather than concentrating all compute in single locations. This includes running inference on smartphones (up to 10 billion parameter models) and smart glasses to reduce dependency on network connectivity and data centers.


Infrastructure Constraints and Resource Management: Significant focus on three critical bottlenecks – power consumption (with projections of 63 gigawatts needed), compute availability, and networking challenges. The discussion emphasized energy efficiency, with data centers requiring 40% of power for cooling, 40% for computing, and 20% for connectivity, highlighting the need for better power usage effectiveness (PUE).


Security and Safety in AI Systems: Comprehensive discussion on AI security challenges including model vulnerabilities, adversarial AI, data poisoning, and the need for “shadow AI” detection in enterprises. The panel distinguished between safety issues (models not working as intended) and security threats (external actors changing model behavior).


Data Quality and Sovereign AI Models: Emphasis on the importance of high-quality, accessible datasets for AI development, with particular focus on India’s need for sovereign large language models using local data rather than relying solely on public datasets used by global models.


Practical Applications and India’s AI Ecosystem: Discussion of India’s growing AI landscape with 300+ Gen AI startups, focus on application layer development, and the need for localized solutions including voice interfaces in 14 Indian languages and domain-specific models for various verticals.


Overall Purpose:

The discussion aimed to explore India’s path toward building robust, secure, and efficient AI infrastructure through heterogeneous computing approaches, addressing both technical challenges and policy considerations for scaling AI adoption across enterprises and public systems.


Overall Tone:

The discussion maintained a professional, collaborative, and optimistic tone throughout. Panelists demonstrated mutual respect and built upon each other’s points constructively. The tone was forward-looking and solution-oriented, with participants sharing practical insights from their respective domains while acknowledging shared challenges. The minister’s closing remarks reinforced the positive, collaborative atmosphere by emphasizing the partnership between policymakers and technologists for societal welfare.


Speakers

Speakers from the provided list:


Kazim Rizvi – Moderator/Host of the panel discussion


Prof. V. Kamakoti – Professor and Director of a premier educational institution in India, involved in India’s AI policies, expertise in cybersecurity and trust in AI systems


Arun Shetty – Representative from Cisco, expertise in networking, connectivity, AI infrastructure, and AI safety/security


Gokul Subramaniam – Representative from Intel, expertise in edge computing, AI deployment models, vertical-specific AI applications, and infrastructure optimization


Durga Malladi – Representative from Qualcomm, expertise in processors, heterogeneous computing, AI inference on devices, and hybrid AI solutions


Sridhar Babu – Honorable Minister, policymaker focused on providing infrastructure support (power, electricity, water, land) for AI development


Additional speakers:


Sarah – Representative from Intel (mentioned only briefly at the end for gift presentation)


Full session report

Comprehensive analysis and detailed insights

This panel discussion on heterogeneous computing and AI infrastructure in India brought together leading experts from industry, academia, and government to address critical challenges and opportunities in the country’s AI development. Moderated by Kazim Rizvi, the panel featured Durga Malladi from Qualcomm, Arun Shetty from Cisco, Professor V. Kamakoti from IIT Madras, Gokul Subramaniam from Intel, and Minister Sridhar Babu, creating a convergence of technical expertise and policy perspectives.


The Shift Towards Distributed AI Infrastructure

Durga Malladi from Qualcomm opened with a compelling vision for distributed computing that challenges conventional AI infrastructure thinking. His central principle—that AI user experience should remain consistent regardless of network connectivity—established the framework for reimagining AI deployment. This necessitates running inference directly on devices rather than relying solely on centralized cloud processing.


Malladi demonstrated the feasibility of this approach with impressive technical achievements: modern smartphones can handle up to 10 billion parameter multimodal models, while smart glasses can efficiently run sub-1 billion parameter models with 24-hour battery life. These capabilities represent a significant leap in edge computing power, enabling sophisticated AI applications to function independently of network connectivity.


The concept of “hybrid AI” emerged as Qualcomm’s strategic approach, distributing computing across devices, edge cloud infrastructure, and traditional data centers based on specific workload requirements. This optimization across the computing continuum moves away from forcing all AI processing through centralized bottlenecks.


Voice interfaces exemplified this distributed approach’s practical applications. Malladi emphasized voice as “the most natural user interface,” particularly important for native language interaction. Supporting 14 languages requires heterogeneous processors capable of handling diverse linguistic and cultural contexts, benefiting from localized processing that understands specific user environments.


Infrastructure Constraints and Energy Challenges

Arun Shetty from Cisco identified three critical impediments to AI adoption in India: infrastructure constraints encompassing power, compute, and networking; security and safety concerns; and significant data gaps. The power challenge emerged as particularly acute, with projections that AI infrastructure will require substantial energy scaling in coming years.


Gokul Subramaniam from Intel highlighted three physical constraints India cannot circumvent: land, water, and power. His analysis revealed that in data centers, 40% of energy goes to cooling, 40% to computing, and 20% to connectivity. This breakdown emphasizes the importance of achieving optimal Power Usage Effectiveness (PUE) ratios, where maximum energy goes to actual computing rather than supporting infrastructure.
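The PUE arithmetic behind the 40/40/20 split can be sketched in a few lines. This is an illustrative calculation, not measured data-center figures; the split and the choice of what counts as "IT load" are assumptions for the example.

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
# An ideal facility has PUE = 1.0 (every watt reaches the IT load).

def pue(total_kw: float, it_kw: float) -> float:
    """Return PUE given total facility power and IT-equipment power."""
    return total_kw / it_kw

total = 100.0             # all power entering the facility, in kW
compute = 0.40 * total    # 40% to compute, per the panel's breakdown
cooling = 0.40 * total    # 40% to cooling
connectivity = 0.20 * total  # 20% to connectivity

# If only compute counts as IT load:
print(round(pue(total, compute), 2))                 # 2.5
# If networking gear is counted as IT load as well:
print(round(pue(total, compute + connectivity), 2))  # 1.67
```

Either way, the panel's point holds: the closer PUE gets to 1.0, the more of each incoming watt does useful computing rather than feeding cooling overhead.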


The cooling challenge becomes complex as compute requirements scale, with different cooling solutions needed for varying power densities. For India, with its diverse climate conditions, this requires region-specific solutions accounting for local environmental factors.


Subramaniam emphasized the leapfrogging opportunity this presents for India, noting that edge computing can reach areas without traditional connectivity infrastructure, potentially democratizing access to AI capabilities across the country’s diverse geographic and economic landscape.


Security and Safety: Understanding the Distinction

Arun Shetty made a crucial distinction between safety and security concerns in AI systems. Safety issues involve models not working as intended—including hallucination, toxicity, and unpredictable behavior. Security concerns involve external actors deliberately changing model behavior through adversarial attacks or data poisoning.


This distinction has profound implications for risk mitigation strategies. Safety requires internal controls and model validation, while security demands external threat detection and defensive mechanisms. The non-deterministic nature of AI models complicates both challenges, as consistent input-output relationships cannot be guaranteed.


Professor Kamakoti provided a mathematical framework for understanding trust in AI systems, referencing the TV show “Yes Prime Minister” to illustrate that trust is neither reflexive, symmetric, nor transitive. Trust is context-dependent and temporal, varying based on circumstances and changing over time. This complexity necessitates new approaches to AI security that account for trust’s nuanced, contextual nature.
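Professor Kamakoti's observation can be made concrete: model trust as a set of (truster, trustee) pairs and check the three equivalence-relation properties. The people and pairs below echo his own examples but are otherwise illustrative assumptions.

```python
# Trust as a binary relation: a set of (truster, trustee) pairs.

def is_reflexive(rel, people):
    """Every person trusts themselves."""
    return all((p, p) in rel for p in people)

def is_symmetric(rel):
    """If a trusts b, then b trusts a."""
    return all((b, a) in rel for (a, b) in rel)

def is_transitive(rel):
    """If a trusts b and b trusts c, then a trusts c."""
    return all((a, d) in rel
               for (a, b) in rel for (c, d) in rel if b == c)

people = {"me", "sarah", "gokul", "you"}
trust = {("me", "sarah"),   # I trust Sarah; Sarah may not trust me
         ("me", "gokul"),
         ("gokul", "you")}  # Gokul trusts you; I do not trust you

print(is_reflexive(trust, people))  # False: ("me", "me") is absent
print(is_symmetric(trust))          # False: ("sarah", "me") is absent
print(is_transitive(trust))         # False: ("me", "you") is absent
```

All three checks fail, which is exactly the point: trust does not behave like mathematical equivalence, so security frameworks cannot treat it as such.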


Shetty briefly mentioned the challenge of “shadow AI” in enterprises, where organizations lack visibility into AI applications their employees use, creating potential security vulnerabilities and compliance risks.


Data Sovereignty and Quality

The discussion revealed significant opportunities for India to leverage its unique datasets while addressing quality and accessibility challenges. Shetty observed that while most global AI models train on publicly available data, enterprises and governments possess superior datasets that could enable more effective AI applications.


Kazim Rizvi noted that India has approximately 300 GenAI startups building on large language models while simultaneously developing sovereign models. This dual strategy leverages global AI advances while building indigenous capabilities, balancing innovation speed with strategic autonomy.


Professor Kamakoti suggested incorporating “need to know” principles into AI models, similar to security clearance systems, enabling appropriate responses based on user authorization levels while maintaining functionality for authorized users.
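The "need to know" idea could be gated the way security-clearance systems do it. The sketch below is hypothetical: the clearance levels, labels, and `answer` function are illustrative assumptions, not a mechanism described in the session.

```python
# Hypothetical need-to-know gate in front of a model's responses:
# a caller only receives an answer if their clearance level covers
# the sensitivity label of the underlying data.

CLEARANCE = {"public": 0, "internal": 1, "restricted": 2}

def answer(question: str, data_label: str, user_level: str) -> str:
    """Answer only when the user's clearance covers the data label."""
    if CLEARANCE[user_level] >= CLEARANCE[data_label]:
        return f"answer to: {question}"
    return "not authorized for this information"

print(answer("plant layout?", "restricted", "internal"))
# -> not authorized for this information
print(answer("opening hours?", "public", "internal"))
# -> answer to: opening hours?
```

A production system would attach such labels to training and retrieval data rather than to individual questions, but the principle is the same: authorization is checked before the model's knowledge is exposed.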


Practical Applications and Strategic Opportunities

Gokul Subramaniam highlighted specific AI applications in education, including real-time translation and transcription services that could transform learning experiences. These domain-specific models optimized for educational content could provide personalized learning and adaptive content delivery, functioning effectively even in areas with limited connectivity.


The education sector represents a particularly promising area for distributed AI deployment, potentially democratizing access to high-quality educational resources across India’s diverse geographic regions.


Small and medium businesses also represent significant opportunities for edge AI deployment, making advanced AI capabilities accessible to organizations that previously couldn’t afford sophisticated cloud-based solutions.


Policy Support and Collaborative Framework

Minister Sridhar Babu’s participation highlighted critical policy support for India’s AI infrastructure development. His commitment to providing adequate power, electricity, water, and land infrastructure represents essential government backing for private sector AI initiatives.


The minister emphasized “welfare for all, happiness for all” as the ultimate goal of AI implementation, providing important ethical grounding that ensures AI development serves broader social goals rather than purely technical or commercial objectives.


Future Outlook

The panelists outlined a vision for India’s AI future that balances ambitious technical goals with practical implementation challenges. The hybrid AI approach represents a pragmatic path forward, enabling incremental deployment of AI capabilities across the computing continuum without requiring massive upfront investments in centralized infrastructure.


The development of sovereign AI models represents both a technical challenge and strategic opportunity, requiring sustained investment in data infrastructure, model development capabilities, and human capital to compete globally while serving specifically Indian needs.


Energy efficiency improvements offer significant opportunities for reducing environmental impact while controlling operational costs. The combination of edge computing capabilities with strategic data center deployment could optimize India’s AI infrastructure development within existing resource constraints.


Conclusion

This panel discussion illuminated the complex challenges facing India’s AI infrastructure development while highlighting significant opportunities for innovation and leadership. The shift towards heterogeneous, distributed computing represents a fundamental reimagining of AI deployment that could serve diverse user needs while respecting infrastructure constraints and security requirements.


India’s unique position—combining technical talent, diverse datasets, a vibrant startup ecosystem, and supportive policy environment—positions the country to lead in this new paradigm. The collaborative spirit evident in this discussion, where technical experts, policymakers, and industry leaders work toward common goals, provides a compelling framework for navigating the complex challenges ahead while maximizing AI’s transformative potential for all citizens.


The vision articulated by the panelists of AI systems that serve all citizens, respect sovereignty and security requirements, and operate efficiently within India’s constraints offers a roadmap for the country’s AI future that balances innovation with practical implementation realities.


Session transcript

Complete transcript of the session
Durga Malladi

with them. 14 languages. Voice is the most natural user interface to devices around you. So the idea is not to actually keep typing and texting, but it’s about the usage of voice, but in native languages, which actually work very nicely. And that means that you have to make sure that the use cases are built on top of it. So that’s what our focus is from a processor standpoint. One final note, and given that I have maybe just one minute: another aspect of heterogeneous compute is disaggregation of compute within the network itself. What I mean by that is, at some point in time, you might have extremely good connectivity to the network. And at some other point in time, you might have zero connectivity to the network.

And the question to ask is: do you want your AI user experience to be invariant to the quality of the communications that you have at that point in time, or do you want it to depend on it? Obviously, you want it to be invariant. That means you must have the ability to run inference directly on devices. Not that you want to do it all the time, but when you can, why not? Today we can run up to a 10-billion-parameter multimodal model, state of the art, on a smartphone, and a sub-1-billion-parameter model in your glasses, without necessarily charging the device the whole day — it’s once every 24 hours. So we’ve come a long way in that. Which means: use the data centers, use the edge cloud as and when necessary; they have a role to play. At the same time, make sure that we also build for devices where the inference actually occurs and users directly perceive it — that’s where the data originates. So it’s important to think about it that way.

Kazim Rizvi

Yeah, there’s also a very strong environmental aspect to this, which often goes unnoticed and undiscussed, but that element is also very important in terms of efficiently managing the energy requirements, because energy, as we also know, is finite. One thing that struck me in what you spoke about was inferencing — a lot of what’s happening in India is also around inferencing models, right? So, in terms of the Gen AI story which we have, we have almost 300 Gen AI startups which are building on top of the large language models.

And India is definitely leading the way in terms of the application layer; there’s no doubt about that. Now, of course, with Sarvam and others, we are also building sovereign large language models, right? So, as Minister Vaishnaw has spoken about, every piece of the puzzle — we are there in terms of fitting that puzzle together. I’d like to come to Mr. Arun Shetty, sir, who is with Cisco. We just want to take it further from where Durga sir left off, in terms of talking about enterprise adoption at scale. With Cisco, what are the challenges or bottlenecks which you see in terms of compute availability and connectivity, what is Cisco trying to do, and what do you see generally?

And I think that’s a really important thing to talk about.

Arun Shetty

Yeah, so as you know, we connect and protect the… This should be working, right? Yeah, yeah, yeah. As you know, we connect and protect even in the AI era, right? We started in the internet, we came into the cloud, and we are in this era. First of all, thank you very much for having me, and it’s indeed a pleasure to be representing this esteemed panel. So I think what I’ll do is I’ll summarize based on what others have spoken, actually, and I think those are real problems. The first one is clearly the three impediments for AI adoption is one is clearly infrastructure constraints, and we all spoke about it, and they all spoke about it.

The first one is the power. Power is a challenge and will be a challenge — I think it is expected to be 63 gigawatts of power that will be required in a couple of years. And then compute is a problem; we did recognize that compute is becoming a problem. And then Kamakoti sir did ask: Cisco is in networking, what are you doing in networking? Networking will be a problem, actually, and we need to see how to address it. Clearly it has to be fit-for-purpose solutions, because you don’t only build huge data centers. What we see is that in a couple of years there will be more inferencing happening at the edge; that’s how the world will move, and that’s why solutions have to be fit for purpose, for sure.

The second, bigger challenge we have is the security and safety aspect. That is something we need to pay a lot of attention to, because as the adage says, if you can’t see it, you can’t trust it — you can’t trust something you can’t see. So you need to have visibility across the stack, and you need to see whether the models we are using are the right models for us, or whether there is anything malicious in the models themselves — vulnerabilities in the model. The security and safety aspect becomes very, very important because the models hallucinate, and you can inject toxicity into a model. Those are the challenges we need to address. So I think it is very important to build our own models. If you look at the models, all of them were built using public data — text, voice, and video data. However, the enterprises and the government have the best datasets, so why can’t we use those datasets?

So the third impediment we have today is the data gap, and the data gap is essentially this: I need high-quality, accessible, and manageable data. We can build GPTs using that — what we can call a machine GPT — and use it for training and for inferencing, and we get a lot of quality use of AI. Without data, which is the fuel for AI today, you can’t really move forward on AI. I think these are the typical three problems, and the ways we are looking at addressing them are these. One: I will not be able to build a huge data center for a specific use case, so take a use case and see how fast I can give that infrastructure — a comprehensive, secure AI factory or secure infrastructure, whether in the data center or at the edge — so that people can focus on building the use cases or applications on top of it. The second is the safety and security aspect, and how we can do the defense mechanisms. And the third is the data. These are the three problems Cisco is trying to address, along with ecosystem partners of course, because this is not a problem you can solve alone. Thank you.

Kazim Rizvi

Yeah — I don’t know if my mic... okay, it’s okay. I’ll take it from the security point which you have spoken about, and I’ll come to Dr. Kamakoti. I think on the clock it shows seven, but on my watch it shows 15, so I’ll go by my watch. Dr. Kamakoti, I would like to focus on critical infrastructure and public systems here. As you know, with the advent of AI, we’re going to use it across these sectors as well. So how important do you see heterogeneous compute in terms of contributing to national resilience, to safeguard and to ensure that our critical infrastructure and public systems are secure as well?

Prof. V. Kamakoti

So today, the type of things that we need to do for each one of these actions — the type of inferencing, the type of response time we need, as Shetty mentioned — is going to be different. I hope all of you have seen Yes, Prime Minister; they always say, need to know, right? Now what happens is, if I am going to make a model that has understood the entire data, then the model has that data, and if it is used by someone, the question is: does that someone need to know that data? That’s a very important question. So that’s where the entire aspect of cybersecurity comes in. And that’s why we are all saying that we need to have sovereign models.

As he rightly pointed out, we can have adversarial AI; we can poison the whole thing and make it tell things that should not be told, or need not be told. This is something we need to look at very carefully from a security point of view, where I do inferencing and my training dataset goes for a toss — number one. We need to have something for education at least. As a director of one of the premier institutions in the country, my worry is that for education — like how we have a censor board for movies — we should make models into which certain details alone are fed. See, the model is a bachcha (a child), right? Whatever you teach it, it will tell you back, probably a little more generatively. So this is number one. Number two, again coming back to Cisco itself: you do deep packet inspection, and basically you do it with some signatures. Today the whole story is changing dynamically — the malware can change its signature. That’s going to be the biggest challenge now, and for the sort of inferencing they are going to do, they will have to bring a different architecture, and that will be a heterogeneous architecture. So, ultimately, as you see — the trust component, I always repeat this, and I’ll finish with this in my one minute.

So, trust — friends, if you want to define A as equivalent to B, that’s the definition, right? If you want to define A, you have to come up with B, which is equivalent to A. In discrete mathematics, an equivalence relation should satisfy three properties: reflexive, symmetric, transitive. Trust is not reflexive — I don’t trust myself sometimes. Trust is not symmetric — I trust Sarah; Sarah may not trust me. Trust is not transitive — I trust Gokul, Gokul trusts you, I may not trust you. In addition, trust is context dependent: I trust you on something, I don’t trust you on something else. It is temporal: morning I trust you, evening I don’t trust you.

So, right? So, the main thing is, we have to build that mathematics to define trust. And if you go to some of these search engines and look up a definition of trust, you get a million hits for that. So that is going to be the most important part. Specifically on heterogeneous compute, we will have certain different types of security issues — something which A can cause, something which originates because of A. And that’s where all of us — edge, connectivity, server, all three — have to work together. And we will teach, and he’ll put policy.

Kazim Rizvi

But both of you are equally playing an important role in terms of policy. Dr. Kamakoti, you’re also a very influential and important figure in India’s AI policies — of course, lots to learn from you. Gokul, very quickly I’d like to come to you: in terms of practical deployment models, what are the examples you’ve seen which demonstrate that we are moving towards heterogeneous compute, and what needs to be done to get there?

Gokul Subramaniam

So I started off with workload, and I’ll go back to the same thing. One of the things that we’re looking at, and it’s critical, is to see what vertical really needs what kind of domain-specific models, and then try to apply that as much as possible as edge inferencing, and contain the walls that prevent AI from working efficiently — primarily memory, connectivity, IO, thermal, and power. From an edge inferencing standpoint, there are quite a few things being done, be it in the education segment, where you want more translation, data being available, and transcription, so that the knowledge is imparted with the right data at the lowest power that’s meaningful for the student.

And more importantly, when we talk security, it’s not only about protecting the data and the models — we keep talking data and models — it’s protecting the user. That’s even more fundamental, and how you can ensure that that happens. The second thing is applying it to other verticals, be it small and medium business. I think there is a great opportunity there, where edge inferencing and putting compute with the right kind of power can translate into businesses actually using AI more effectively. The last aspect I want to touch upon is power. As we go from one gig to nine or ten gig in the next five years in the country, we have to realize that India is challenged by three physical things we cannot run away from: land, water, and power. These are very important aspects that will drive how we set up our infrastructure. Of a hundred percent of the power energy that comes into a data center, forty percent goes into cooling, forty percent into your compute, and twenty percent into connectivity. And there is this famous metric that you use, the PUE, the power usage effectiveness.

It has to be as close to one as possible — all the power that you give goes to the most important thing, which is the compute, not to the cooling and other things. And there are a lot of technologies being played with, with respect to how much you can air-cool per rack. That was okay up to about 25 kilowatts; as you start to get to 100, you have to use liquid cooling, and then there’s the question of how we can set that infrastructure up. And for a country like India, it’s absolutely important to look at what hybrid energy solutions we can go with, because pure renewable alone may not be able to address it. You’ll have to have something that is stable, and be able to do something off-grid, so that you don’t have that dependency of getting the data from the data centers, and push as much as possible to the edge, because edge is all about reach.

How can I take it to places across the country where there is no access to connectivity? It’s about how can I leapfrog? How can I leapfrog with verticals that have not used technology as much? We’ve always done a leapfrogging in India, and this is a great moment for us, and total cost of ownership. Those are the big areas.

Kazim Rizvi

Thank you, Gokul. And I think as we are approaching the end of the panel, I’d sort of like to go to Durga and Dr. Shetty also in terms of closing remarks and the way forward. So to both of you, I’ll pose this question in terms of the next two to four years, because I think in the AI age, we don’t think too far ahead. We can’t do five-year planning or 10-year planning; I think two-year planning is sufficient. So what enterprise outcomes are you both looking at? Maybe we can start with Durga in terms of defining India’s access to compute, access to infrastructure, capacity, and also sort of building in scale, cost efficiency and energy efficiency.

Durga Malladi

So I’ll keep it brief. I think what I’m looking forward to, with all the conversations here and in other parts of the world as well, where the problems are somewhat similar, is the ability to distribute compute across the entire network. So think of a combination of inference that runs in devices to the largest extent that’s possible; edge cloud and on-prem servers, where a lot of the localized processing can be done — and these can be done in air-cooled racks, by the way. The point that was made earlier is absolutely relevant: you don’t necessarily need liquid cooling all the time. You can use air-cooled racks and air-cooled servers running up to 100-to-300-billion-parameter models, which are getting pretty sophisticated.

That’s the edge cloud. And as you go deeper from there onwards, then you have the data centers. It then mitigates the overall requirements of what you need in a data center. And instead of, therefore, concentrating the entire compute in one single location and then building it for just that alone, a holistic approach of devices, edge cloud, plus data center is probably what we are looking forward to. From Qualcomm, we call it as hybrid AI. It’s not just a marketing slogan, but it is something that we truly believe in. Thank you.

Arun Shetty

Since the infrastructure part has been addressed, let me talk a little bit more about the safety and security aspects. One of the things we need to understand about modern models is that they are very intricate and very complex. They are also non-deterministic: unlike a standard application, the same input will not necessarily produce the same output. So what should one be doing? There are two aspects, safety and security, and I’ll just touch upon why it is important to distinguish them. Safety is about the models not working the way we want them to work.

That is the first part. That’s where the toxicity and hallucination challenges come in. The second part is security, where a bad actor from outside can change the behavior of the model. We need to be careful about both. So what should one be doing? For example, as Professor Kamakoti also said, users themselves need to be secure, and it is essential that organizations, or the country, build for that. This means that if I’m accessing ChatGPT and sending confidential information, the system should stop me. When I’m accessing a third-party application, the system should be smart enough to stop me and say that I’m not allowed to share that information.

That’s something which is already happening in organizations today. The second part is first-party applications: I’m building an application and I’m using a model. Now the organization should be able to scan what all of its AI assets are, because one of the biggest challenges for enterprises is shadow AI applications; they don’t know what people are doing. So first, I detect or discover all my assets. Next, I scan them and ensure that the models and applications I’m using are not vulnerable. If they are vulnerable, then I need to put guardrails around them or fix those problems.

And similarly, organizations like NIST, MITRE, and OWASP are already telling us that there are a lot of risks associated with this, and we need to ensure that we stop them. That is Cisco’s focus: to see how we can use AI to defend against all this malice and the vulnerabilities we see. Thank you so much.

Kazim Rizvi

I think with this we’ll close the panel, but I’d like to invite the Honorable Minister once again for his very quick closing remarks, which will leave us highly motivated to build on this. You’ve heard us in the last one hour. What are your thoughts? We’d love to hear your closing address.

Sridhar Babu

Thank you, Rizvi. It’s a great pleasure to be here with the eminent Padma Shri awardee Professor Kamakoti, Gokul, Durga Prasad, and Mr. Shetty, sharing their truly professional experience, and to consider how, as a policymaker, I should view these things, especially in terms of power, electricity, water, and land, and how we should be well equipped to provide all of these wherever the eminent panelists here are thinking of building. The primary challenge they have posed to me is to provide all these things; we are here to provide the rest. And thanks once again for a very apt introduction and a very apt dialogue here.

Ultimately, all of us, me as a policymaker and all of you technocrats and innovators, have to remember that the basic agenda for this AI Impact Summit is welfare for all, happiness for all. Thank you for inviting me. Thank you so much.

Kazim Rizvi

With this, we will have to close the panel. I’d like to thank all our panelists and also invite our colleague Sarah from Intel to hand over the gifts. But first we’ll have a group photo. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (14)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Additional Context (medium)

“Modern smartphones now capable of running 10 billion parameter multimodal models”

The knowledge base provides context on AI model parameters, explaining that larger parameter counts (like 70B vs 7B models) generally provide better understanding of complex topics and more accurate responses, though they require more computational power and are more expensive to use [S22]. This helps contextualize the significance of running 10B parameter models on mobile devices.

Additional Context (high)

“AI infrastructure will require 63 gigawatts of power in the coming years”

The knowledge base confirms the massive energy demands of AI infrastructure, noting that data centres now consume approximately 2% of global electricity and by 2025, energy demand from data centres is expected to double to 1,000 terawatt-hours annually-equivalent to Japan’s electricity consumption [S26]. This provides broader context for the 63 gigawatt figure mentioned.

Additional Context (medium)

“Support for 14 languages as a critical application area for multilingual AI systems”

The knowledge base emphasizes the importance of linguistic diversity in AI systems, noting that current AI models are often based on limited datasets primarily from Western sources, and communities are working to develop AI solutions that reflect their cultural and knowledge heritage [S26]. It also highlights how language barriers affect global participation and representation [S39].

Additional Context (medium)

“Voice represents the most natural user interface for diverse populations”

The knowledge base supports this through examples of accessibility technology, showing how AI-powered voice interfaces and multimodal systems are being used to serve users with different abilities and literacy levels, such as AI glasses that can describe visual scenes through voice output [S23].

Confirmed (high)

“Infrastructure constraints including power and compute limitations as critical impediments to AI adoption”

The knowledge base confirms compute access as a major bottleneck, noting that training a single large language model can cost upwards of USD 100 million, effectively excluding entire regions from AI development [S43]. It describes this as a ‘compute divide’ that limits AI innovation in the Global South.

External Sources (92)
S1
G. V. G. Krishnamurty — G. V. G. Krishnamurty
S2
Arvind Ganesan — Arvind Ganesan
S3
Kevon Swift — https://diplo-media.s3.eu-central-1.amazonaws.com/2023/09/Kevon-Swift-Head-Shot.png Mr Kevon Swift is the Head of Public…
S4
Dhrupad Mathur — Dhrupad Mathur
S5
Kaarika Das — https://diplo-media.s3.eu-central-1.amazonaws.com/2023/09/Kaarika-Das-1.jpg Ms Kaarika Das is a PhD candidate in Economi…
S6
World Economic Forum – Global Coalition for Digital Safety | IGF 2023 Side Event — https://www.intgovforum.org/en/content/enhancing-digital-safety-the-world-economic-forum-global-coalitions-collaborative…
S7
DC-SIG Involving Schools of Internet Governance in achieving SDGs | IGF 2023 — Satish Babu Speech speed 203 words per minute …
S8
Internet Engineering Task Force Open Forum | IGF 2023 Town Hall #32 — Suresh Krishnan Speech speed 210 words per minute …
S9
Abhilash Babu Vinayak — Abhilash Babu Vinayak is a communications consultant in the development sector and lives in Bangalore. He has successful…
S10
Global Internet Governance Academic Network Annual Symposium | Part 1 | IGF 2023 Day 0 Event #112 — So what instances of risk exposure on social… media did you see, and how would you recommend tackling this issue? Than…
S11
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — These insights can be valuable for policymakers, industry leaders, and other stakeholders in developing effective and re…
S12
Amir Kiyaei — Amir Kiyaei
S13
Shekhar Shah — Shekhar Shah
S14
Arvin Kamberi — Arvin Kamberi Multimedia Coordinator, DiploFoundation https://dig.watch/wp-content/uploads/arvin.jpg An expert in remote…
S15
World Economic Forum – Global Coalition for Digital Safety | IGF 2023 Side Event — https://www.intgovforum.org/en/content/enhancing-digital-safety-the-world-economic-forum-global-coalitions-collaborative…
S16
P. V. Balakrishnan — P. V. Balakrishnan
S17
Connecting the Unconnected in the field of Education Excellence, Cyber Security & Rural Solutions and Women Empowerment in ICT — We call it Administrator USO. Niraj Verma: Thank you, distinguished guests, participants, including those who have con…
S18
Amrita Choudhury — Amrita Choudhury Director, CCAOI https://dig.watch/wp-content/uploads/amrita1.jpg Ms Amrita Choudhury is Director of CCA…
S19
Survive the AI jargon tsunami: Find shelter in your mother tongue — Practical next steps  Explain concepts in your native language first. Describe what a large language model does (predi…
S20
The MANAV manifesto: Reclaiming agency for the majority — In practice, this means prioritising voice-to-action interfaces that allow a person to navigate complex public services …
S21
AI in Practice: Real-world applications explained — Agentic AI: When AI takes action Traditional AI systems are reactive, which means they respond to your questions or re…
S22
Understanding the language of modern AI — Now that we’ve explored how AI came to be, from biological neurons to artificial neural networks and the transformer rev…
S23
Seeing, moving, living: AI’s promise for accessible technology — At Expo 2025 in Osaka, a bionic hand called RYO demonstrated movements so natural that it could handle tofu without crus…
S24
AI for Good Global Summit — Therefore, to utilize the information, a feature reuse mechanism is proposed for better performance of IOL prediction. E…
S25
State of Play: Chips / DAVOS 2025 — As Yin Fan concluded, it’s important to ensure that chips do not become a flashpoint in a 21st century Cold War, emphasi…
S26
The year of AI clarity: 10 AI Forecasts for 2025 — How can we protect ourselves from the risks of inaccurate predictions? We have three suggestions: Avoid overreacting: …
S27
AI and international peace and security: Key issues and relevance for Geneva — This, in turn, may lead to more confidence in both cyber and data security. Beyond the virtual sphere, AI could play a r…
S28
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — And the third one is data set, the lack of availability of these data sets. In terms of infrastructure, we think there i…
S29
Data centre power demand set to triple by 2035 — Data centre electricity use is forecast to surge almost threefold by 2035. BloombergNEF reported that global facilities …
S30
Living in an Unruly World: The Challenges We Face — China will show the biggest growth in energy consumption – more than the entire OECD area. Second will be India. Togethe…
S31
The Glasgow environment summit: A new paradigm? — India resisted setting an overall reduction in carbon emissions. Instead it had set an ‘emissions-intensity’ target, pro…
S32
Part 8: ‘Maths doesn’t hallucinate: Harnessing AI for governance and diplomacy’ — We are driven by emotions, sensory experiences, and our understanding of the world. AI, on the other hand, navigates a l…
S33
AI for Good Innovation Factory Grand Finale 2025 — The system aims to reduce the $30 billion annual cost of anesthesia complications. Evidence Problem scale: ’60 milli…
S34
Why is Shadow AI dangerous for diplomats? — In addition, visualisations can fix certain interpretations of data as “the” narrative, sometimes oversimplifying comple…
S35
Can we test for trust? The verification challenge in AI — And it is very critical, and many have said this before me on this stage, to avoid overfitting loose concepts like trust…
S36
America’s AI Action Plan — Increasingly powerful general-purpose models show promise in formulating hypotheses and designing experiments. These nas…
S37
Jua Kali AI: Bottom-up algorithms for a Bottom-up economy — As artificial intelligence (AI) becomes a cornerstone of the global economy, AI’s foundations must be anchored in commun…
S38
Multilingualism — https://dig.watch/wp-content/uploads/multilingualism-novi-72-dpi-DORADJEN-FINAL-2021.png AI and multilingualism There a…
S39
AI as a tech ally in saving endangered languages — Technology here supports capacity development and revitalisation efforts already underway, rather than replacing them. …
S40
WS #119 AI for Multilingual Inclusion WS #119 AI for Multilingual Inclusion Session report Speak…
S41
‘The elephant in the AI room’: Does more computing power really bring more useful AI? — This week, in the conference rooms of the AI Impact Summit in New Delhi, a large elephant will be lurking. It’s an eleph…
S42
Prosperity Through Data Infrastructure — Biewald argues that the constraint in model production lies not in energy availability but in the ability to build the n…
S43
WS #462 Bridging the Compute Divide a Global Alliance for AI — Evidence Global Digital Compact endorsed in September at UN General Assembly; includes objective 2 on digital economy …
S44
Challenging the status quo of AI security — Standards can help ensure that security safeguards are properly implemented and that agents behave as intended. Eviden…
S45
Is AI the key to nuclear renaissance? — AI and hunger for energy We have already written about the revolutionary changes that artificial intelligence (AI) bri…
S46
Powering the Technology Revolution / Davos 2025 — Major Discussion Point Major Discussion Point 4: Workforce and Education Needs for AI in Energy AAnne BouverotSpeech …
S47
Cloud computing and data localisation: Lessons on jurisdiction — For many countries, the specific locus of citizen and other data for jurisdictional purposes is the data’s actual locati…
S49
Cloud computing: what goes on up there? — I’m reading two unrelated articles, both on cloud computing. The first article describes Apple’s launch of iCloud during…
S50
Cloud computing: Opportunities and issues for developing countries — https://www.diplomacy.edu/wp-content/uploads/2021/06/IGCBP2010_2011_Goundar.pdf https://www.diplomacy.edu/wp-content/upl…
S51
WSIS Action Line C2 Information and communication infrastructure — Thanks. Thank you. Thank you. AArchana G. GulatiSpeech speed116 words per minuteSpeech length484 wordsSpeech …
S52
America’s AI Action Plan — • Convene, under the auspices of OMB, a cohort of agencies with High Impact Service Providers to pilot and increase the …
S53
What policy levers can bridge the AI divide? — rather than viewing their size as a limitation. This challenges conventional assumptions about AI development requiring …
S54
Diplomatic policy analysis — Overdependence on algorithms without critical human oversight can lead to biased or incomplete conclusions, particularly…
S55
AI diplomacy — Privacy and data protection are particularly pertinent, given that AI systems often require massive datasets, which can …
S56
AI supremacy: One, two, three, go! — Legend has it that in the 13th century Dominican friar Albertus Magnus built an artificial ‘brazen head’ that could talk…
S57
Prosperity Through Data Infrastructure — Biewald argues that the constraint in model production lies not in energy availability but in the ability to build the n…
S58
Online trust: between competences and intentions — Trust (or the lack thereof) is a frequent theme in public debates. It is often seen as a monolithic concept. However, we…
S59
The Internet and trust — Do we need a new social contract for the online era, which will re-establish the relationship of trust between citizens …
S60
20 Keywords for the Digital 2020s: A Digital Policy Prediction Dictionary — Yet as questions of trust have become ubiquitous in digital discussions, clarity about what they in fact refer to has be…
S61
Is AI the key to nuclear renaissance? — AI and hunger for energy We have already written about the revolutionary changes that artificial intelligence (AI) bri…
S62
The year of AI clarity: 10 AI Forecasts for 2025 — Knowledge inclusion Contributing to knowledge diversity, innovation, and learning on the internet. The rise of AI brin…
S63
Artificial intelligence: policy implications — The field of artificial intelligence (AI) has seen significant advances over the past few years, in areas such as smart …
S64
AI to support China’s social welfare system — China is stepping up the use of AI and big data in elderly and social care as it seeks to address economic challenges po…
S65
Ethics and AI | Part 6 — The European Union AI Act: calling a spade a spade The EU Artificial Intelligence Act Another “first” is the Euro…
S66
Building trust for beneficial AI: Trustworthy systems — In broad terms, perceptions of what is fair were similar among respondents. However, differences came in with regard to …
S67
AI in 2026: Learning to live with powerful systems — The past few years have been defined by astonishment. Each new AI release seemed to arrive faster than society could abs…
S68
Local, Everywhere: The blueprint for a Humanitarian AI transformation — Done this way, AI will not only help preserve knowledge but can also increase the security and resilience of digital sol…
S69
Navigating the AI maze: How to choose the right AI platform or tool — Despite its advantages, Financial AI Agent has distinct limitations: Domain specificity: Performance may degrade sign…
S70
How David outwits Goliath in the age of AI? — From bigger is better to smaller is smarter. Last week, as OpenAI touted its USD 500 billion ambitions in a high-profi…
S71
Survive the AI jargon tsunami: Find shelter in your mother tongue — Practical next steps  Explain concepts in your native language first. Describe what a large language model does (predi…
S72
AI as a tech ally in saving endangered languages — According to the United Nations, an indigenous language disappears roughly every two weeks. UNESCO estimates that nearly…
S73
Multilingualism — https://dig.watch/wp-content/uploads/multilingualism-novi-72-dpi-DORADJEN-FINAL-2021.png AI and multilingualism There a…
S74
AI in Practice: Real-world applications explained — Agentic AI: When AI takes action Traditional AI systems are reactive, which means they respond to your questions or re…
S75
‘The elephant in the AI room’: Does more computing power really bring more useful AI? — This week, in the conference rooms of the AI Impact Summit in New Delhi, a large elephant will be lurking. It’s an eleph…
S76
The year of AI clarity: 10 AI Forecasts for 2025 — Additionally, smaller models require less energy for training and operations, making them more environmentally friendly….
S77
Prosperity Through Data Infrastructure — He highlights that companies like Qualcomm are creating interesting takes on the deployment of models. This suggests tha…
S78
WS #462 Bridging the Compute Divide a Global Alliance for AI — Evidence Global Digital Compact endorsed in September at UN General Assembly; includes objective 2 on digital economy …
S79
The AI Pareto Paradox: More computing power – diminishing AI impact?  — For the last few years, the tech world has been locked in a high-stakes arms race for raw computing power. The prevailin…
S80
AI and international peace and security: Key issues and relevance for Geneva — This, in turn, may lead to more confidence in both cyber and data security. Beyond the virtual sphere, AI could play a r…
S81
Challenging the status quo of AI security — Standards can help ensure that security safeguards are properly implemented and that agents behave as intended. Eviden…
S82
Is AI the key to nuclear renaissance? — AI and hunger for energy We have already written about the revolutionary changes that artificial intelligence (AI) bri…
S83
Powering the Technology Revolution / Davos 2025 — Major Discussion Point Major Discussion Point 4: Workforce and Education Needs for AI in Energy AAnne BouverotSpeech …
S84
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — As you’ve heard from other panelists already, this is happening in many ways. AI applications can enable sustainability….
S85
Addressing AI-driven Socio-economic Inequality — In March 2020, entrepreneur and billionaire Elon Musk stated that ‘AI is far more dangerous than nukes’. This wasn’t he …
S86
The New Delhi AI Summit: Inclusive rhetoric, fractured reality — In the words of UN Secretary General, António Guterres, “If we want AI to serve humanity, policy cannot be built on gues…
S87
Qualcomm brings new AI power to mobile chips — Qualcomm is integrating advanced AI technology from its laptop processors into mobile phone chips. The new Snapdragon 8 …
S88
DeepSeek: Some trade-related aspects of the breakthrough  — DeepSeek has become a buzzword, and well-deservingly so. The Chinese company released a large language model (LLM) which…
S89
Rights of persons with disabilities — There are approximately 1 billion people around the world, or 15% of the world’s population, who experience some form of…
S90
Digital accessibility in Kenya after COVID-19 — When the COVID-19 outbreak was first announced in Kenya in March 2020, panic began as people saw the effects of curfews …
S91
We all need accessibility. To everything. Really. — It’s below zero here in Wisconsin (USA). It’s gone up to -16ᐤC as I write. And even though our +30 cm of snow has been m…
S92
The European Accessibility Act is here: What it means, and why it matters — This shows that inclusive design has wide-reaching, everyday value. Captions on videos assist not only those who are dea…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Durga Malladi
4 arguments, 197 words per minute, 538 words, 163 seconds
Argument 1
Voice interfaces in native languages require heterogeneous computing solutions with proper use cases built on top
EXPLANATION
Durga Malladi argues that voice is the most natural user interface to devices, and the focus should be on using voice in native languages rather than typing and texting. This approach requires building specific use cases on top of heterogeneous computing infrastructure to work effectively.
EVIDENCE
He mentions that their technology supports 14 languages [1], emphasizes that voice is the most natural user interface to devices [2], and explains that the idea is not to keep typing and texting but to use voice in native languages which work very nicely [3]. He states that this means use cases must be built on top of the technology [4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of native language interfaces is emphasized in [S19] which advocates for explaining AI concepts in native languages first and developing AI teams with native speakers. The MANAV framework in [S20] prioritizes voice-to-action interfaces in local languages as central design challenges for AI governance in the majority world.
MAJOR DISCUSSION POINT
Heterogeneous Computing and AI Infrastructure
AGREED WITH
Gokul Subramaniam, Kazim Rizvi
Argument 2
AI user experience should be invariant to network connectivity quality, requiring on-device inference capabilities
EXPLANATION
Malladi contends that users should have consistent AI experiences regardless of their network connectivity quality. This requires the ability to run AI inference directly on devices rather than relying solely on cloud-based processing.
EVIDENCE
He explains that at some point you might have extremely good connectivity to the network, and at other times zero connectivity [7-8]. He poses the question whether you want your AI user experience to be invariant to the quality of communications at that point in time, answering that obviously you want it to be invariant [9-11]. This means you must have the ability to run inference directly on devices, not that you want to do it all the time, but when you can [12].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The distinction between cloud-based and on-device AI processing is detailed in [S21], which explains that on-device processing offers immediate benefits since it works without internet connectivity and provides faster response times. The MANAV framework in [S20] emphasizes designing for intermittent connectivity as a central challenge for AI systems serving the majority world.
MAJOR DISCUSSION POINT
Heterogeneous Computing and AI Infrastructure
Argument 3
Modern smartphones can run 10 billion parameter multimodal models, glasses can run sub-1 billion parameter models
EXPLANATION
Malladi demonstrates the current capabilities of edge devices in running sophisticated AI models. He shows that significant AI processing can now be performed locally on consumer devices without requiring constant connectivity to data centers.
EVIDENCE
He states that today they can run a state-of-the-art 10 billion parameter multimodal model on a smartphone, and a sub-1 billion parameter model in glasses, without necessarily charging the device the whole day, just once every 24 hours [13]. He notes that they’ve come a long way in this capability [13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The relationship between model parameters and capabilities is explained in [S22], which describes how parameter counts (7B, 70B) determine AI processing capacity and that smaller models respond faster while larger models require more computational power. The trend toward smaller, more efficient models is noted in [S26], which discusses how smaller models like DeepSeek are increasingly outperforming larger ones while being much more cost-effective.
MAJOR DISCUSSION POINT
Heterogeneous Computing and AI Infrastructure
Argument 4
Distributed compute across devices, edge cloud, and data centers is the future approach (hybrid AI)
EXPLANATION
Malladi advocates for a holistic approach that distributes AI computing across multiple layers – from devices to edge cloud to data centers. This hybrid approach reduces the overall requirements for centralized data centers while providing more flexible and efficient AI processing.
EVIDENCE
He describes looking forward to the ability to distribute compute across the entire network, thinking of a combination of inference that runs on devices to the largest extent possible [90-91]. He mentions edge cloud and on-premises servers for localized processing that can be done in air-cooled racks [92-93], noting you don’t necessarily need liquid cooling all the time and can run models of up to 100 to 300 billion parameters [95-96]. He explains this mitigates overall data center requirements: instead of concentrating the entire compute in one location, a holistic approach of devices, edge cloud, plus data center is what they’re looking forward to [99-100]. At Qualcomm, they call it hybrid AI, which is not just a marketing slogan but something they truly believe in [101-102].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hybrid AI approaches are detailed in [S21] which explains how modern AI applications use hybrid approaches, handling simple tasks on-device for speed and privacy while using cloud-based processing for complex tasks. Cloud-edge collaborative intelligence is discussed in [S24] as vital for optimizing resource utilization and enabling real-time decision-making by leveraging strengths of both cloud and edge computing.
MAJOR DISCUSSION POINT
Heterogeneous Computing and AI Infrastructure
AGREED WITH
Arun Shetty
DISAGREED WITH
Arun Shetty
A
Arun Shetty
8 arguments, 179 words per minute, 1219 words, 407 seconds
Argument 1
Edge inferencing will become more prevalent, requiring fit-for-purpose solutions rather than huge centralized data centers
EXPLANATION
Shetty argues that the future of AI will see more processing happening at the edge rather than in large centralized facilities. This shift requires developing specific solutions tailored to different use cases and deployment scenarios.
EVIDENCE
He states that in a couple of years you will see more inferencing happening at the edge, and that’s how the world will move [44]. He emphasizes that solutions have to be fit for purpose because you cannot rely only on huge data centers [44]. He mentions that he will not be able to build a huge data center for a specific use case, so you take a use case and then see how fast you can provide that infrastructure [44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift toward edge processing is supported in [S24] which discusses cloud-edge collaborative intelligence and binary trained networks for industrial edge intelligence that achieve high accuracy with minimal memory consumption. Edge computing capabilities are also detailed in [S21] regarding on-device AI processing benefits.
MAJOR DISCUSSION POINT
Heterogeneous Computing and AI Infrastructure
AGREED WITH
Durga Malladi
DISAGREED WITH
Durga Malladi
Argument 2
Three main impediments to AI adoption: infrastructure constraints (power, compute, networking), security/safety, and data gaps
EXPLANATION
Shetty identifies three critical barriers preventing widespread AI adoption. These include physical infrastructure limitations, security vulnerabilities, and lack of quality data for training models.
EVIDENCE
He clearly states that the three impediments to AI adoption are infrastructure constraints, including power, compute, and networking, which everyone spoke about [43-44]. He mentions that power is and will remain a challenge, with 63 gigawatts of power requirement expected in a couple of years [44]. He identifies compute as becoming a problem and networking as an area that needs addressing [44]. The second, bigger challenge is the security and safety aspects [44], and the third impediment is the data gap: the need for high-quality, accessible, and manageable data [44].
MAJOR DISCUSSION POINT
Infrastructure Constraints and Energy Efficiency
Argument 3
Power requirements will reach 63 gigawatts in coming years, presenting significant challenges
EXPLANATION
Shetty highlights the massive power requirements that AI infrastructure will demand, citing specific projections that demonstrate the scale of energy challenges facing AI deployment.
EVIDENCE
He specifically mentions an expected 63 gigawatts of power requirement in a couple of years [44], identifying power as a challenge that will continue to be a challenge [44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Massive power requirements for AI infrastructure are confirmed in [S29], which reports that data centre electricity use is forecast to surge almost threefold by 2035, with global facilities expected to consume around 106 gigawatts. Energy consumption growth is also discussed in [S30], noting that China and India will show the biggest growth in energy consumption.
MAJOR DISCUSSION POINT
Infrastructure Constraints and Energy Efficiency
AGREED WITH
Kazim Rizvi, Gokul Subramaniam
Argument 4
Security and safety are critical as models can hallucinate and be injected with toxicity
EXPLANATION
Shetty emphasizes that AI models present unique security challenges because they can produce unreliable outputs and can be manipulated by malicious actors. This requires new approaches to securing AI systems beyond traditional cybersecurity measures.
EVIDENCE
He explains that security and safety aspects are very important because, as the adage says, you can’t trust what you can’t see, so you need visibility across the stack [44]. He notes that you need to see whether the models you are using are the right models, or whether there is anything malicious in the models themselves, vulnerabilities in the model [44]. He states that models hallucinate and that toxicity can be injected into a model, challenges that need to be addressed [44].
MAJOR DISCUSSION POINT
Security and Trust in AI Systems
AGREED WITH
Prof. V. Kamakoti, Gokul Subramaniam
DISAGREED WITH
Prof. V. Kamakoti, Gokul Subramaniam
Argument 5
Need visibility across the entire stack and verification that models are appropriate and not malicious
EXPLANATION
Shetty argues for comprehensive monitoring and verification systems that can assess AI models and applications across all layers of the technology stack to ensure they are secure and functioning as intended.
EVIDENCE
He emphasizes the need for visibility across the stack, and for checking whether the models being used are the right models or whether there is anything malicious in the models themselves, or vulnerabilities in them [44]. He mentions the importance of building defense mechanisms against malice and vulnerabilities [44].
MAJOR DISCUSSION POINT
Security and Trust in AI Systems
Argument 6
High-quality, accessible, and manageable datasets are essential for building effective AI models
EXPLANATION
Shetty contends that the quality and accessibility of training data is a fundamental requirement for successful AI implementation. Without proper datasets, organizations cannot effectively leverage AI technologies.
EVIDENCE
He identifies the third impediment as the data gap – essentially the need for high-quality, accessible, and manageable data [44]. He explains that you can build GPTs using that data for inferencing and training to get quality use of AI [44]. He emphasizes that without data, which is the fuel for AI today, you can’t really move forward on AI [44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The critical importance of quality datasets is emphasized in [S28] and [S24], which discuss how lack of availability of datasets is a major barrier, and how large amounts of data exist but very small amounts are actually useful for training models. The challenge of domain-specific datasets is noted in [S24] where developers had to gather and annotate their own datasets for manufacturing applications.
MAJOR DISCUSSION POINT
Data and Model Development
Argument 7
Enterprises and governments have the best datasets that should be utilized instead of relying only on public data
EXPLANATION
Shetty suggests that organizations should leverage their own proprietary data rather than depending solely on publicly available datasets used to train general-purpose models. This approach can lead to more relevant and effective AI applications.
EVIDENCE
He notes that all the existing models were built using public data – text, voice, and video – whereas enterprises and governments have the best datasets, and asks why those datasets cannot be used [44]. He suggests building machine GPTs using enterprise and government datasets for training and inferencing [44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
This perspective is supported in [S28] which discusses how countries need to build datasets for AI applications and notes that if digital transformation is a challenge, many countries don’t have the data they would need for AI algorithms. The importance of leveraging institutional data is also implied in [S24] discussions about industry-specific know-how and datasets.
MAJOR DISCUSSION POINT
Data and Model Development
DISAGREED WITH
Prof. V. Kamakoti
Argument 8
Organizations need to detect shadow AI applications and scan for vulnerabilities in AI assets
EXPLANATION
Shetty warns about the proliferation of unauthorized AI applications within organizations and the need for systematic discovery and security assessment of all AI assets to maintain security and compliance.
EVIDENCE
He explains that one of the biggest challenges for enterprises is shadow AI applications – they don’t actually know what people are doing [125]. He states that organizations need to know clearly what all their assets are, first detecting and discovering them [127]. Next, they should scan to ensure that the models and applications in use are not vulnerable, and if they are, put guardrails around them or fix the problems [128-129]. He mentions that NIST, MITRE, and OWASP all warn of many risks associated with AI that need to be stopped [131].
MAJOR DISCUSSION POINT
Security and Trust in AI Systems
K
Kazim Rizvi
3 arguments, 183 words per minute, 839 words, 275 seconds
Argument 1
Energy management is crucial as energy resources are finite, with strong environmental implications
EXPLANATION
Rizvi emphasizes the environmental and sustainability aspects of AI infrastructure deployment. He argues that the finite nature of energy resources requires careful consideration of how AI systems consume power.
EVIDENCE
He mentions that there is a very strong environmental aspect to AI that often goes unnoticed and undiscussed, but that this element is very important in terms of efficiently managing energy requirements [14]. He notes that energy, as we know, is finite [14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Environmental concerns about AI energy consumption are detailed in [S29] regarding massive projected increases in data center power demand, and in [S30] which discusses global energy consumption patterns and CO2 emissions continuing to grow. The finite nature of energy resources and environmental implications are central themes in climate discussions in [S31].
MAJOR DISCUSSION POINT
Infrastructure Constraints and Energy Efficiency
AGREED WITH
Arun Shetty, Gokul Subramaniam
Argument 2
India has 300 Gen AI startups building on large language models, leading in application layer development
EXPLANATION
Rizvi highlights India’s significant presence in the generative AI ecosystem, particularly in developing applications that build upon existing large language models. This demonstrates India’s strength in the application layer of AI development.
EVIDENCE
He states that in terms of the Gen AI story, India has almost 300 Gen AI startups building on top of large language models [15]. He affirms that India is definitely leading the way in the application layer, with no doubt about that [16-17].
MAJOR DISCUSSION POINT
Data and Model Development
Argument 3
Sovereign large language models are being developed to ensure data sovereignty and security
EXPLANATION
Rizvi points to India’s efforts in developing its own large language models rather than relying solely on foreign-developed models. This approach aims to maintain control over data and ensure national security in AI applications.
EVIDENCE
He mentions that with Sarvam and others, India is also building sovereign large language models [18]. He references Minister Vaishnav speaking about every piece of the puzzle and how India is fitting that puzzle together [19-20].
MAJOR DISCUSSION POINT
Data and Model Development
AGREED WITH
Durga Malladi, Gokul Subramaniam
G
Gokul Subramaniam
5 arguments, 186 words per minute, 572 words, 183 seconds
Argument 1
Workload-specific domain models should be applied at edge with focus on memory, connectivity, IO, thermal and power constraints
EXPLANATION
Subramaniam advocates for developing AI models tailored to specific industry verticals and deploying them at the edge. This approach requires careful consideration of technical constraints including memory limitations, connectivity issues, input/output capabilities, thermal management, and power consumption.
EVIDENCE
He emphasizes identifying which domain-specific models each vertical really needs, then applying them as much as possible through edge inferencing [67-68]. He identifies the need to contain the walls that prevent AI from working efficiently – primarily memory, connectivity, IO, thermal and power [69]. He cites the education segment, where translation and transcription make more data available so that knowledge is imparted with the right data at the lowest power that is meaningful for the student [70-71].
MAJOR DISCUSSION POINT
Heterogeneous Computing and AI Infrastructure
AGREED WITH
Durga Malladi, Kazim Rizvi
Argument 2
India faces physical constraints in land, water, and power that will drive infrastructure setup decisions
EXPLANATION
Subramaniam identifies three fundamental physical limitations that India must navigate when developing AI infrastructure. These constraints will significantly influence how and where AI computing facilities are established in the country.
EVIDENCE
He states that as India goes from one gigawatt to nine or ten gigawatts in the next five years, the country is challenged by three physical things it cannot run away from: land, water and power [72]. He emphasizes that these are very important aspects that will drive how infrastructure is set up [72].
MAJOR DISCUSSION POINT
Infrastructure Constraints and Energy Efficiency
AGREED WITH
Kazim Rizvi, Arun Shetty
Argument 3
Data centers require 40% power for cooling, 40% for compute, 20% for connectivity – need better power usage effectiveness
EXPLANATION
Subramaniam breaks down the power consumption in data centers to highlight inefficiencies in current designs. He advocates for improving power usage efficiency to direct more energy toward actual computing rather than supporting infrastructure.
EVIDENCE
He explains that of one hundred percent of the energy that comes into a data center, forty percent goes into cooling, forty percent into compute and twenty percent into connectivity [72]. He mentions the well-known metric PUE (power usage effectiveness), which has to be as close to one as possible, meaning all the power you supply goes to the most important thing – the compute – rather than to cooling and overhead [73-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Power efficiency challenges in data centers are discussed in [S24] regarding the need for power-efficient AI deployment and in [S29] which notes that rising AI workloads are pushing utilization rates higher and contributing to soaring demand that affects regional electricity prices.
MAJOR DISCUSSION POINT
Infrastructure Constraints and Energy Efficiency
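The PUE arithmetic behind Subramaniam's 40/40/20 breakdown can be sketched in a few lines. The split is the panel's; the function and the two accounting conventions shown (standard PUE counts networking as IT load; the panel's framing counts only compute) are illustrative assumptions.

```python
# Minimal sketch of the PUE (power usage effectiveness) metric applied to
# the 40/40/20 split cited in the session. Variable names are ours.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power; the ideal is 1.0."""
    return total_facility_kw / it_load_kw

total = 100.0                                  # all power entering the data center
cooling, compute, connectivity = 40.0, 40.0, 20.0

# Standard PUE counts networking gear as part of the IT load:
print(pue(total, compute + connectivity))      # ~1.67
# Counting only compute, as in the panel's framing of power going to
# "the most important thing":
print(pue(total, compute))                     # 2.5
```

Either way the figure is well above the ideal of 1.0, which is the panel's point about inefficiency.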
Argument 4
Air-cooled solutions can work up to 25 kilowatts per rack, liquid cooling needed beyond 100 kilowatts
EXPLANATION
Subramaniam provides specific technical thresholds for cooling technologies in data centers. This information helps inform infrastructure planning decisions based on power density requirements.
EVIDENCE
He states that many technologies are being explored around how much you can air-cool per rack, which was adequate up to about 25 kilowatts [75]. He notes that as you approach 100 kilowatts, you have to use liquid cooling, which raises the question of how to set that infrastructure up [75].
MAJOR DISCUSSION POINT
Infrastructure Constraints and Energy Efficiency
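The cooling thresholds Subramaniam cites amount to a simple decision rule. A minimal sketch follows; the 25 kW and 100 kW figures are from the session, but the function, its labels, and the treatment of the in-between band are our assumptions.

```python
# Hypothetical rack-cooling selector based on the thresholds cited in the
# session: air cooling is adequate up to roughly 25 kW per rack, and by
# around 100 kW liquid cooling becomes necessary. The transition band's
# label is an illustrative assumption, not the panel's.

def cooling_for_rack(kw_per_rack: float) -> str:
    if kw_per_rack <= 25:
        return "air"
    if kw_per_rack < 100:
        return "enhanced air / evaluate liquid"  # transition band (assumption)
    return "liquid"

print(cooling_for_rack(20))    # air
print(cooling_for_rack(120))   # liquid
```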
Argument 5
Protecting users is more fundamental than just protecting data and models
EXPLANATION
Subramaniam argues that AI security should prioritize user protection above data and model security. This represents a user-centric approach to AI safety that goes beyond traditional data protection measures.
EVIDENCE
When discussing security, he states that it’s not only about protecting data and models, but protecting the user is even more fundamental and how you can ensure that happens [71-72].
MAJOR DISCUSSION POINT
Security and Trust in AI Systems
AGREED WITH
Arun Shetty, Prof. V. Kamakoti
DISAGREED WITH
Arun Shetty, Prof. V. Kamakoti
P
Prof. V. Kamakoti
4 arguments, 170 words per minute, 611 words, 215 seconds
Argument 1
Trust is not reflexive, symmetric, or transitive – it’s context-dependent and temporal, requiring new mathematical frameworks
EXPLANATION
Kamakoti provides a mathematical analysis of trust to demonstrate why traditional approaches to defining trust are inadequate for AI systems. He argues that trust has unique properties that require new mathematical and conceptual frameworks for AI security.
EVIDENCE
He explains that in discrete mathematics, an equivalence relation defining A as equivalent to B must satisfy three properties: reflexive, symmetric, and transitive [55-56]. He then demonstrates that trust fails all three: trust is not reflexive because ‘I don’t trust myself sometimes’ [57], not symmetric because ‘I trust Sarah, Sarah may not trust me’ [58], and not transitive because ‘I trust Gokul, Gokul trusts you, I may not trust you’ [59]. He adds that trust is context-dependent – ‘I trust you on something, I don’t trust you on something else’ [60-61] – and temporal – ‘morning I trust you, evening I don’t trust you’ [62]. He concludes that this mathematics has yet to be built, and that searching for a definition of trust yields a million hits [63].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The complexity of trust in AI systems is explored in [S35], which discusses how concepts like trust and trustworthiness risk being overfitted onto systems where they remain underspecified, and emphasizes the need for explicit, contestable standards rather than treating concepts like safety as uniform.
MAJOR DISCUSSION POINT
Security and Trust in AI Systems
AGREED WITH
Arun Shetty, Gokul Subramaniam
DISAGREED WITH
Arun Shetty, Gokul Subramaniam
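Kamakoti's counterexamples can be checked mechanically: a relation is an equivalence relation only if it is reflexive, symmetric, and transitive, and a trust relation built from his examples fails all three. The toy relation below is invented purely to exhibit his counterexamples.

```python
# Toy check that a trust relation need not be an equivalence relation.
# The pairs encode Kamakoti's examples: I trust Sarah, I trust Gokul,
# Gokul trusts You; nothing else.

trusts = {("I", "Sarah"), ("I", "Gokul"), ("Gokul", "You")}
people = {"I", "Sarah", "Gokul", "You"}

def reflexive(rel, domain):
    # Everyone would have to trust themselves.
    return all((p, p) in rel for p in domain)

def symmetric(rel):
    # Every (a, b) would need a matching (b, a).
    return all((b, a) in rel for (a, b) in rel)

def transitive(rel):
    # Every chain (a, b), (b, d) would need a direct (a, d).
    return all((a, d) in rel for (a, b) in rel for (c, d) in rel if b == c)

print(reflexive(trusts, people))  # False: "I don't trust myself sometimes"
print(symmetric(trusts))          # False: I trust Sarah; Sarah may not trust me
print(transitive(trusts))         # False: I trust Gokul, Gokul trusts You,
                                  # but (I, You) is absent
```

Context dependence and temporality would require tagging each pair with a topic and a timestamp, which is exactly the richer mathematics Kamakoti says has yet to be built.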
Argument 2
Different types of security issues will emerge from heterogeneous architectures requiring collaborative solutions
EXPLANATION
Kamakoti warns that heterogeneous computing will introduce new categories of security vulnerabilities that don’t exist in traditional systems. These will require coordinated responses across different stakeholders and technology layers.
EVIDENCE
He states that heterogeneous computing specifically will give rise to certain different types of security issues originating from that architecture itself [63]. He emphasizes that the edge, connectivity, and server layers – all three – have to work together [63].
MAJOR DISCUSSION POINT
Security and Trust in AI Systems
Argument 3
Critical infrastructure and public systems require sovereign models to prevent adversarial AI and data poisoning
EXPLANATION
Kamakoti argues that sensitive government and infrastructure systems need domestically developed AI models to prevent foreign interference and malicious manipulation. This is essential for national security and system integrity.
EVIDENCE
He references the ‘need to know’ principle from Yes Prime Minister, questioning whether someone should have access to data if a model has understood entire datasets [47-50]. He explains that’s where cybersecurity comes in and why sovereign models are needed [51-52]. He warns about adversarial AI that can poison the whole thing and make it teach or tell things that should not be told or need not be told [53]. He mentions concerns about where inferencing is done and training datasets going for a toss [54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of sovereign AI models is supported in [S34] which discusses how diplomatic services need in-house AI systems based on open-source models to maintain local control of data and models, and how commercial AI platforms create strategic vulnerabilities where sensitive reasoning may become accessible to foreign authorities.
MAJOR DISCUSSION POINT
Security and Trust in AI Systems
DISAGREED WITH
Arun Shetty
Argument 4
Models for education should be filtered appropriately, similar to movie rating systems
EXPLANATION
Kamakoti suggests that AI models used in educational settings should have content filtering mechanisms similar to how movies are rated for different audiences. This ensures age-appropriate and educationally suitable content delivery.
EVIDENCE
As director of a premier institution, he expresses worry about education, suggesting that, just as there are certification boards for movies, models should be built into which only certain details are fed [54]. He notes that a child (bacha) will tell you back whatever you teach it, probably doing a little more generative work on top [54].
MAJOR DISCUSSION POINT
Data and Model Development
S
Sridhar Babu
2 arguments, 141 words per minute, 166 words, 70 seconds
Argument 1
Policymakers must provide adequate power, electricity, water, and land resources to support AI infrastructure
EXPLANATION
Sridhar Babu acknowledges the government’s responsibility to provide essential infrastructure resources that enable AI development. He positions policymakers as facilitators who must ensure adequate provision of basic resources for technological advancement.
EVIDENCE
He mentions how, as a policymaker, they should view things especially in terms of power, electricity, water and land [139]. He states that the primary challenge posed to them is to provide all these things, and that they are there to provide whatever else remains [141-142].
MAJOR DISCUSSION POINT
Policy and National Resilience
Argument 2
The ultimate goal of AI implementation should be welfare and happiness for all citizens
EXPLANATION
Sridhar Babu emphasizes that AI development should serve broader social purposes rather than just technological advancement. He frames AI as a tool for improving citizen welfare and overall societal well-being.
EVIDENCE
He concludes that ultimately, as a policymaker and with all technocrats and innovators, they have to think that the basic agenda for this AI impact term is welfare for all, happiness for all [144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
This welfare-focused approach aligns with the MANAV framework’s emphasis on ‘Sarvajana Hitaya’ (welfare for all) discussed in [S20], which measures intelligent systems not by how efficiently they process data, but by how meaningfully they expand human dignity and serve the public good.
MAJOR DISCUSSION POINT
Policy and National Resilience
Agreements
Agreement Points
Distributed computing across devices, edge, and cloud is the optimal approach
Speakers: Durga Malladi, Arun Shetty
Distributed compute across devices, edge cloud, and data centers is the future approach (hybrid AI)
Edge inferencing will become more prevalent, requiring fit-for-purpose solutions rather than huge centralized data centers
Both speakers advocate for distributing AI processing across multiple layers rather than relying solely on centralized data centers. Malladi promotes hybrid AI with inference running on devices, edge cloud, and data centers [90-102], while Shetty emphasizes that more inferencing will happen at the edge with fit-for-purpose solutions rather than huge data centers [44].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with established cloud computing frameworks that emphasize resource pooling, rapid elasticity, and measured service across distributed infrastructure [S47]. The technical efficacy of cloud computing does not depend upon data location but enables dynamic processes and online services across multiple locations [S47].
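The hybrid-AI placement logic both speakers describe can be sketched as a routing rule: run inference where the model fits and connectivity allows. The parameter-count tiers echo figures from the session summary (smart glasses handling sub-1-billion-parameter models, smartphones around 10 billion); the function, its names, and its thresholds are our illustrative assumptions, not any vendor's API.

```python
# Illustrative hybrid-AI placement rule: prefer the most local tier that
# can host the model, falling back to edge cloud / data center only when
# the model is too large for the device and connectivity is available.

def place_inference(params_billions: float, connected: bool) -> str:
    if params_billions < 1:
        return "wearable"                    # e.g. smart glasses
    if params_billions <= 10:
        return "smartphone"                  # on-device, works offline
    if connected:
        return "edge-cloud or data center"   # larger models need the network
    raise RuntimeError("model too large for on-device and no connectivity")

print(place_inference(0.5, connected=False))  # wearable
print(place_inference(7.0, connected=False))  # smartphone
print(place_inference(70.0, connected=True))  # edge-cloud or data center
```

The point of such a rule is the one Malladi makes: the user experience stays invariant to connectivity for anything that fits on the device.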
Power and energy constraints are critical challenges for AI infrastructure
Speakers: Kazim Rizvi, Arun Shetty, Gokul Subramaniam
Energy management is crucial as energy resources are finite, with strong environmental implications
Power requirements will reach 63 gigawatts in coming years, presenting significant challenges
India faces physical constraints in land, water, and power that will drive infrastructure setup decisions
All three speakers recognize power as a fundamental constraint. Rizvi emphasizes that energy is finite with environmental implications [14], Shetty cites specific projections of 63 gigawatts power requirements [44], and Subramaniam identifies power as one of three physical constraints India cannot escape [72].
POLICY CONTEXT (KNOWLEDGE BASE)
This concern is well-documented in policy discussions, with AI significantly increasing energy consumption as data centres now consume approximately 2% of global electricity, comparable to the airline industry [S62]. By 2025, energy demand from data centres is expected to double, reaching 1,000 terawatt-hours annually [S62]. The global data centre industry contributes 1-2% of global greenhouse gas emissions [S61].
Security and trust are fundamental concerns requiring new approaches
Speakers: Arun Shetty, Prof. V. Kamakoti, Gokul Subramaniam
Security and safety are critical as models can hallucinate and be injected with toxicity
Trust is not reflexive, symmetric, or transitive – it’s context-dependent and temporal, requiring new mathematical frameworks
Protecting users is more fundamental than just protecting data and models
All speakers emphasize security challenges but from different angles. Shetty focuses on model vulnerabilities and hallucination [44], Kamakoti provides mathematical analysis showing trust’s complex nature [55-63], and Subramaniam prioritizes user protection over data protection [71-72].
POLICY CONTEXT (KNOWLEDGE BASE)
Trust in digital systems has been identified as a critical policy issue, with trust being essential for economic success and social functioning [S58]. The challenge involves both technical security and trust in human operators of systems, particularly following revelations about data surveillance [S58]. Policy frameworks emphasize the need for transparency, accountability, and new social contracts for the digital era [S59].
Domain-specific and localized AI solutions are essential
Speakers: Durga Malladi, Gokul Subramaniam, Kazim Rizvi
Voice interfaces in native languages require heterogeneous computing solutions with proper use cases built on top
Workload-specific domain models should be applied at edge with focus on memory, connectivity, IO, thermal and power constraints
Sovereign large language models are being developed to ensure data sovereignty and security
Speakers agree on the need for tailored AI solutions. Malladi emphasizes voice interfaces in native languages with specific use cases [1-4], Subramaniam advocates for domain-specific models for different verticals [67-71], and Rizvi highlights India’s development of sovereign models [18-20].
POLICY CONTEXT (KNOWLEDGE BASE)
This reflects emerging policy trends toward purpose-built AI systems tailored to specific domains and local contexts [S67]. National AI strategies increasingly emphasize the importance of local data and cultural contexts, with initiatives to create diverse datasets and promote local AI innovation [S62]. The shift recognizes that not every problem requires the most expansive system available [S67].
Similar Viewpoints
Both speakers advocate for moving AI processing closer to users. Malladi argues for on-device inference to ensure consistent user experience regardless of connectivity [7-12], while Shetty predicts more edge inferencing in the future [44].
Speakers: Durga Malladi, Arun Shetty
AI user experience should be invariant to network connectivity quality, requiring on-device inference capabilities
Edge inferencing will become more prevalent, requiring fit-for-purpose solutions rather than huge centralized data centers
Both emphasize the need for comprehensive security measures against malicious AI. Shetty calls for visibility across the stack to detect malicious models [44], while Kamakoti warns about adversarial AI and data poisoning requiring sovereign models [51-54].
Speakers: Arun Shetty, Prof. V. Kamakoti
Need visibility across the entire stack and verification that models are appropriate and not malicious
Critical infrastructure and public systems require sovereign models to prevent adversarial AI and data poisoning
Both speakers highlight the massive power requirements and inefficiencies in current AI infrastructure. Subramaniam breaks down data center power usage showing only 40% goes to actual computing [72-74], while Shetty projects enormous future power needs [44].
Speakers: Gokul Subramaniam, Arun Shetty
Data centers require 40% power for cooling, 40% for compute, 20% for connectivity – need better power usage effectiveness
Power requirements will reach 63 gigawatts in coming years, presenting significant challenges
Unexpected Consensus
User protection as primary security concern
Speakers: Gokul Subramaniam, Prof. V. Kamakoti
Protecting users is more fundamental than just protecting data and models
Models for education should be filtered appropriately, similar to movie rating systems
Both speakers unexpectedly prioritize user protection over traditional data security concerns. Subramaniam explicitly states that protecting users is more fundamental than protecting data and models [71-72], while Kamakoti suggests content filtering for educational AI similar to movie ratings [54].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with comprehensive AI regulatory frameworks like the EU AI Act, which emphasizes human-centric AI development and protection of fundamental rights [S65]. The Act promotes human agency and oversight as key requirements, ensuring AI systems do not hamper fundamental rights [S65]. Privacy and data protection are particularly pertinent given AI systems’ reliance on massive datasets [S55].
Mathematical and systematic approach to trust
Speakers: Prof. V. Kamakoti, Arun Shetty
Trust is not reflexive, symmetric, or transitive – it’s context-dependent and temporal, requiring new mathematical frameworks
Organizations need to detect shadow AI applications and scan for vulnerabilities in AI assets
Unexpectedly, both speakers advocate for systematic, rigorous approaches to trust and security. Kamakoti provides mathematical analysis of trust properties [55-63], while Shetty calls for systematic asset discovery and vulnerability scanning [125-131].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks recognize trust as multifaceted, involving both competences and intentions, with different gradations and elements [S60]. Research shows that algorithmic decision-making faces challenges in achieving societal agreement on moral reasoning and causal mechanisms [S66]. The development of technical measures and verification practices represents systematic approaches to building trust [S67].
Welfare-focused AI development
Speakers: Sridhar Babu, Gokul Subramaniam
The ultimate goal of AI implementation should be welfare and happiness for all citizens
Protecting users is more fundamental than just protecting data and models
Both speakers unexpectedly emphasize human welfare over technical metrics. Sridhar Babu frames AI’s purpose as welfare and happiness for all [144], while Subramaniam prioritizes user protection as fundamental [71-72].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with national AI strategies that emphasize AI for social welfare, such as China’s initiative to use AI and big data in elderly and social care to address demographic challenges [S64]. The humanitarian sector is developing bottom-up AI approaches anchored in humanitarian principles to support community-centered solutions [S68]. Policy frameworks increasingly emphasize human-centric AI that respects dignity and supports human agency [S68].
Overall Assessment

Strong consensus on distributed computing approaches, power/energy constraints, security challenges, and need for localized AI solutions. Unexpected agreement on user-centric security and welfare-focused development.

High level of consensus across technical and policy perspectives, indicating mature understanding of AI infrastructure challenges and shared vision for sustainable, secure, and inclusive AI deployment in India.

Differences
Different Viewpoints
Approach to AI infrastructure deployment – centralized vs distributed
Speakers: Durga Malladi, Arun Shetty
Distributed compute across devices, edge cloud, and data centers is the future approach (hybrid AI)
Edge inferencing will become more prevalent, requiring fit-for-purpose solutions rather than huge centralized data centers
While both speakers agree on moving away from purely centralized data centers, they emphasize different aspects. Malladi advocates for a comprehensive hybrid AI approach that distributes compute across devices, edge cloud, and data centers [90-102], while Shetty focuses more specifically on edge inferencing becoming prevalent and the need for fit-for-purpose solutions rather than huge data centers [44]. Malladi presents a more holistic three-tier architecture, whereas Shetty emphasizes the practical shift toward edge processing.
POLICY CONTEXT (KNOWLEDGE BASE)
This reflects ongoing policy debates about data localization versus distributed cloud computing. While technical efficacy doesn’t depend on data location, legal regimes create constraints, with some countries pursuing data localization for national security and sovereignty reasons [S47]. Recent developments show a shift toward more distributed, cost-effective AI deployment models challenging centralized approaches [S70].
Primary focus areas for AI security and trust
Speakers: Arun Shetty, Prof. V. Kamakoti, Gokul Subramaniam
Security and safety are critical as models can hallucinate and be injected with toxicity
Trust is not reflexive, symmetric, or transitive – it’s context-dependent and temporal, requiring new mathematical frameworks
Protecting users is more fundamental than just protecting data and models
The speakers approach AI security from different angles. Shetty focuses on practical security challenges like model hallucination and toxicity injection, emphasizing the need for visibility across the stack and verification of models [44]. Kamakoti takes a more theoretical approach, arguing that trust itself needs new mathematical frameworks because it doesn’t follow traditional mathematical properties [55-63]. Subramaniam prioritizes user protection over data and model protection [71-72]. These represent different philosophical and practical approaches to the same security challenge.
Data strategy for AI model development
Speakers: Arun Shetty, Prof. V. Kamakoti
Enterprises and governments have the best datasets that should be utilized instead of relying only on public data
Critical infrastructure and public systems require sovereign models to prevent adversarial AI and data poisoning
While both speakers advocate for using non-public data, their reasoning differs significantly. Shetty argues that enterprises and governments should use their own datasets because they have better data than the public datasets used to train existing models [44]. Kamakoti emphasizes sovereign models primarily for security reasons, to prevent adversarial AI and data poisoning in critical infrastructure [47-54]. Shetty’s focus is on data quality and effectiveness, while Kamakoti’s is on national security and preventing foreign interference.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions highlight significant disparities in data access even among major tech companies, with inherent biases in data collection affecting AI model training [S57]. National AI strategies emphasize data as the engine behind AI development, with some countries highlighting access to quality datasets while others focus on data sovereignty and governance frameworks [S56]. The availability and quality of data is seen as a driving force behind AI advancement [S56].
Unexpected Differences
Fundamental approach to defining and implementing trust in AI systems
Speakers: Arun Shetty, Prof. V. Kamakoti
Need visibility across the entire stack and verification that models are appropriate and not malicious
Trust is not reflexive, symmetric, or transitive – it’s context-dependent and temporal, requiring new mathematical frameworks
This disagreement is unexpected because both speakers are addressing AI security and trust, but they approach it from fundamentally different perspectives. Shetty takes a practical, engineering approach focusing on visibility and verification systems [44], while Kamakoti argues for completely rethinking the mathematical foundations of trust itself [55-63]. Kamakoti’s mathematical deconstruction of trust suggests that traditional security approaches may be fundamentally flawed, which challenges Shetty’s more conventional security framework approach.
Overall Assessment

The speakers show broad consensus on major challenges (power, security, edge computing) but differ significantly in their proposed solutions and philosophical approaches. Key disagreements center on infrastructure architecture (hybrid vs edge-focused), security approaches (practical vs theoretical), and data strategies (quality vs sovereignty).

Moderate disagreement level with significant implications for AI policy and implementation. While speakers agree on fundamental challenges, their different approaches could lead to incompatible solutions. The theoretical vs practical divide in security approaches, and the quality vs sovereignty tension in data strategies, represent fundamental philosophical differences that could impact how AI systems are designed and deployed. These disagreements suggest need for more integrated approaches that combine practical engineering solutions with theoretical frameworks and balance data quality with security concerns.

Partial Agreements
All three speakers agree that edge computing and on-device processing are important for AI deployment. However, they emphasize different aspects: Malladi focuses on ensuring consistent user experience regardless of connectivity [7-12], Shetty emphasizes the practical shift toward edge inferencing [44], and Subramaniam concentrates on technical constraints and domain-specific applications [67-69]. They share the goal of distributed processing but approach it from different technical and user experience perspectives.
Speakers: Durga Malladi, Arun Shetty, Gokul Subramaniam
AI user experience should be invariant to network connectivity quality, requiring on-device inference capabilities
Edge inferencing will become more prevalent, requiring fit-for-purpose solutions rather than huge centralized data centers
Workload-specific domain models should be applied at the edge, with focus on memory, connectivity, IO, thermal, and power constraints
All speakers acknowledge power and energy as critical challenges for AI infrastructure. Shetty cites a specific projection of 63 gigawatts of power demand [44], Subramaniam breaks down current inefficiencies in data center power usage [72-74], and Rizvi emphasizes the environmental implications of finite energy resources [14]. They agree on the problem but focus on different aspects – Shetty on scale, Subramaniam on efficiency, and Rizvi on sustainability.
Speakers: Arun Shetty, Gokul Subramaniam, Kazim Rizvi
Power requirements will reach 63 gigawatts in coming years, presenting significant challenges
Data centers require 40% of their power for cooling, 40% for compute, and 20% for connectivity – better power usage effectiveness is needed
Energy management is crucial as energy resources are finite, with strong environmental implications
Takeaways
Key takeaways
Heterogeneous computing is essential for AI deployment, requiring compute distributed across devices, edge cloud, and data centers rather than centralized solutions
AI user experience should be invariant to network connectivity quality, necessitating on-device inference capabilities
The three main impediments to AI adoption are infrastructure constraints (power, compute, networking), security and safety issues, and data quality gaps
Energy efficiency is critical, with power requirements expected to reach 63 gigawatts, requiring hybrid energy solutions and better power usage effectiveness
Security and trust frameworks need fundamental redesign, as traditional trust models do not apply to AI systems – trust is context-dependent, temporal, and non-transitive
India is leading in AI application-layer development with 300 Gen AI startups, while also developing sovereign large language models for data security
Edge inferencing will become more prevalent, with modern smartphones capable of running 10 billion parameter models and smart glasses running sub-1 billion parameter models
Organizations need visibility across AI assets to detect shadow AI applications and scan for vulnerabilities
High-quality enterprise and government datasets should be utilized instead of relying solely on public data for model training
Resolutions and action items
Policymakers committed to providing adequate power, electricity, water, and land resources to support AI infrastructure development
Focus on building fit-for-purpose solutions rather than massive centralized data centers
Develop air-cooled solutions for racks up to 25 kilowatts and liquid cooling for higher-capacity requirements
Build sovereign AI models to ensure national security and prevent adversarial AI attacks
Implement guardrails and scanning systems to protect against malicious AI applications and vulnerabilities
Create filtered AI models for specific sectors such as education, similar to movie rating systems
Unresolved issues
How to mathematically define and implement trust frameworks for AI systems that account for context dependency and temporal variation
Specific mechanisms for detecting and preventing shadow AI applications across organizations
Detailed strategies for managing the transition from current infrastructure to hybrid AI computing models
Standards and protocols for ensuring interoperability between device, edge cloud, and data center components in heterogeneous systems
Regulatory frameworks for sovereign AI models and data governance
Specific technical solutions for dynamic malware signature detection in AI-powered cybersecurity systems
Implementation timeline and resource allocation for scaling from current infrastructure to the projected 63 gigawatt power requirement
Suggested compromises
A hybrid AI approach combining on-device inference, edge cloud processing, and data center compute rather than choosing one solution
Use air-cooled solutions where possible (up to 25 kW) and liquid cooling only when necessary, balancing efficiency and cost
Leverage both public and private datasets while maintaining security and sovereignty requirements
Implement graduated security measures based on use-case sensitivity rather than uniform high-security approaches
Balance centralized data centers and distributed edge computing based on specific workload requirements
Combine renewable and stable energy sources for data centers rather than relying solely on renewable energy
Thought Provoking Comments
Do you want your AI user experience to be invariant to the quality of the communications that you have at that point in time? Or do you want it to depend on it? Obviously, you want it to be invariant. That means you must have the ability to run inference directly on devices.
This comment reframes the entire AI infrastructure discussion by challenging the assumption that AI must be cloud-dependent. It introduces the concept of ‘invariant user experience’ regardless of connectivity, which is particularly relevant for a country like India with varying connectivity quality.
This comment established the foundation for the entire discussion about distributed computing and edge inference. It shifted the conversation from traditional centralized AI models to a more nuanced understanding of hybrid AI systems, influencing subsequent speakers to address edge computing, power efficiency, and distributed infrastructure.
Speaker: Durga Malladi
Trust is not reflexive, I don’t trust myself sometimes. Trust is not symmetric, I trust Sarah, Sarah may not trust me. Trust is not transitive… Trust is context dependent… It is temporal, morning I trust you, evening I don’t trust you.
This mathematical deconstruction of trust is profound because it challenges fundamental assumptions about AI security and governance. By applying discrete mathematics principles to trust, Kamakoti reveals why traditional security models may be inadequate for AI systems.
This comment elevated the security discussion from technical implementation details to philosophical and mathematical foundations. It provided a framework for understanding why AI security is inherently complex and influenced the conversation toward more nuanced approaches to AI governance and safety.
Speaker: Prof. V. Kamakoti
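Kamakoti’s point can be made concrete by treating trust as a binary relation and checking the standard relational properties. The sketch below is illustrative only (not from the session); the toy trust relation and all names are hypothetical, chosen to echo the examples in the quote.

```python
# Illustrative sketch: trust as a binary relation that fails the usual
# relational properties Kamakoti lists. The toy data below is hypothetical.

def is_reflexive(nodes, rel):
    """Reflexive: every node trusts itself, i.e. (a, a) in rel for all a."""
    return all((a, a) in rel for a in nodes)

def is_symmetric(rel):
    """Symmetric: if a trusts b, then b trusts a."""
    return all((b, a) in rel for (a, b) in rel)

def is_transitive(rel):
    """Transitive: if a trusts b and b trusts c, then a trusts c."""
    return all((a, c) in rel
               for (a, b) in rel
               for (b2, c) in rel
               if b == b2)

# Toy relation echoing the quote: "I trust Sarah, Sarah may not trust me."
nodes = {"me", "sarah", "vendor"}
trust = {("me", "sarah"), ("sarah", "vendor")}

print(is_reflexive(nodes, trust))  # False: "I don't trust myself sometimes"
print(is_symmetric(trust))         # False: Sarah does not trust me back
print(is_transitive(trust))        # False: I do not automatically trust Sarah's vendor
```

Context dependency and temporality would further require tagging each edge with a context and a validity window, which is why conventional access-control relations (which are typically at least transitive within a role) do not carry over directly to AI trust.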
India is challenged by three physical things that we cannot run away from: land, water and power… almost 100% of your power energy that comes into a data center, 40% goes into cooling, 40% into your computer and 20% on connectivity.
This comment grounds the theoretical AI discussion in harsh physical realities specific to India’s constraints. The specific breakdown of power usage in data centers provides concrete metrics that policymakers and technologists must consider.
This shifted the conversation from abstract technological possibilities to practical implementation challenges. It influenced subsequent discussions about hybrid energy solutions, edge computing as a necessity rather than preference, and the importance of leapfrogging technologies in the Indian context.
Speaker: Gokul Subramaniam
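The quoted 40/40/20 split maps directly onto power usage effectiveness (PUE = total facility power / IT equipment power). The back-of-envelope check below is an illustrative assumption of ours, not a calculation from the session; the result depends on whether networking gear is counted as IT load (the usual PUE convention) or only the compute is.

```python
# Back-of-envelope PUE under the quoted split: 40% cooling, 40% compute,
# 20% connectivity. PUE = total facility power / IT equipment power.
# Illustrative only; the session did not state a PUE figure.

cooling, compute, connectivity = 0.40, 0.40, 0.20

# Usual convention: networking gear counts as IT equipment.
pue_it_incl_network = 1.0 / (compute + connectivity)
# Stricter reading: only compute counts as IT load.
pue_compute_only = 1.0 / compute

print(round(pue_it_incl_network, 2))  # 1.67
print(round(pue_compute_only, 2))     # 2.5
```

Either reading is far from the ~1.1 achieved by the most efficient hyperscale facilities, which is why the cooling share dominates the efficiency discussion.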
Safety is all about, we want the models to work in a certain way but it is not working in that certain way… Security is wherein a bad actor from outside can change the behavior of the model. So we need to be careful about both things.
This clear distinction between AI safety and security is crucial because these terms are often conflated. Shetty’s differentiation helps clarify that these are separate but equally important challenges requiring different approaches.
This comment provided conceptual clarity that allowed the discussion to address both internal AI failures (safety) and external threats (security) systematically. It influenced the conversation toward comprehensive AI governance frameworks rather than focusing on just one aspect.
Speaker: Arun Shetty
The enterprises, the government has the best data sets, so why can’t we use those data sets… without data which is the fuel for the AI today you can’t really move forward on the AI.
This insight challenges the dominance of public data-trained models and highlights an untapped resource for building more relevant and sovereign AI systems. It suggests a path for India to develop competitive AI capabilities using its own institutional data.
This comment redirected the discussion toward data sovereignty and the potential for India to build competitive AI systems using government and enterprise datasets. It influenced thinking about how India could reduce dependence on foreign AI models while leveraging its own data assets.
Speaker: Arun Shetty
Overall Assessment

These key comments fundamentally shaped the discussion by moving it beyond technical specifications to address systemic challenges and opportunities. Malladi’s invariant-user-experience concept established the distributed computing theme that ran throughout the panel. Kamakoti’s mathematical analysis of trust provided philosophical depth to the security discussions. Subramaniam’s physical constraints grounded the conversation in practical realities, while Shetty’s distinction between safety and security and his emphasis on data sovereignty provided strategic clarity. Together, these insights created a comprehensive framework for understanding AI infrastructure not just as a technical challenge, but as a complex interplay of mathematical, physical, economic, and geopolitical factors specific to India’s context.

Follow-up Questions
How can we build mathematical frameworks to properly define and implement trust in AI systems, given that trust is not reflexive, symmetric, or transitive, and is context-dependent and temporal?
This is critical for developing secure AI systems, especially for critical infrastructure and public systems, as current trust models are inadequate for AI security requirements
Speaker: Prof. V. Kamakoti
How can we effectively detect and prevent shadow AI applications within organizations that employees are using without IT department knowledge?
This represents a significant security risk as organizations cannot protect against threats they cannot see or monitor
Speaker: Arun Shetty
What specific hybrid energy solutions can India implement to support AI infrastructure given the constraints of land, water, and power?
This is essential for scaling AI infrastructure in India while addressing physical resource limitations and achieving better power usage efficiency
Speaker: Gokul Subramaniam
How can we develop domain-specific models for different verticals while maintaining edge inferencing capabilities within power, memory, and thermal constraints?
This is crucial for practical AI deployment across various industries while managing resource limitations
Speaker: Gokul Subramaniam
What guardrails and scanning mechanisms need to be developed to protect against adversarial AI and model poisoning attacks?
This addresses the security vulnerabilities in AI models that can be exploited by bad actors to change model behavior
Speaker: Arun Shetty
How can enterprises and governments leverage their proprietary datasets to build more effective GPT models compared to those trained on public data?
This could lead to more accurate and relevant AI applications by utilizing high-quality, domain-specific data that organizations possess
Speaker: Arun Shetty
What specific architectures and technologies are needed for dynamic malware detection as signatures change, particularly for deep packet inspection systems?
Traditional signature-based security approaches are becoming inadequate as malware becomes more sophisticated and adaptive
Speaker: Prof. V. Kamakoti
How can we implement age-appropriate AI models for education that filter content similar to movie rating systems?
This is important for protecting students and ensuring AI educational tools provide appropriate content for different age groups
Speaker: Prof. V. Kamakoti

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.