Scaling AI for Billions: Building Digital Public Infrastructure
20 Feb 2026 18:00h - 19:00h
Session at a glance
Summary
This panel discussion explored the intersection of artificial intelligence and cybersecurity, examining both how AI can enhance security measures and how AI systems themselves need protection. The conversation featured experts from government, private sector, and cybersecurity companies discussing the dual nature of AI as both an opportunity and a challenge for security professionals.
Daisy Chittilapilly from Cisco emphasized that AI follows the pattern of previous technologies in presenting both benefits and risks, noting that while AI promises better security management at machine scale, it also introduces new vulnerabilities like model jailbreaking and data poisoning. G. Narendra Nath highlighted the unprecedented speed of AI adoption compared to previous technological revolutions, creating challenges for both enterprises and national security, particularly because adversaries can leverage AI more effectively than defensive users.
A.S. Lakshminarayanan from Tata Communications warned that organizations are “running towards a cliff” by implementing AI on already fragile digital infrastructure, arguing that enterprises need an “AI operating system” with proper governance and trust layers rather than focusing solely on individual AI applications. Richard Marko stressed the importance of understanding AI agent operations and maintaining oversight of automated processes to ensure security.
The panelists agreed that AI is fundamentally changing cybersecurity from protecting systems and data to protecting decision-making and trust itself. They emphasized the need for new assessment frameworks, better capacity building across sectors, and a shift from viewing AI as merely an automation tool to recognizing it as a technology that scales decisions. The discussion concluded with expectations that AI-native companies will emerge to disrupt existing business models, while nations must balance AI adoption for competitive advantage with protection against its adverse effects.
Keypoints
Major Discussion Points:
– AI’s Dual Role in Cybersecurity: The discussion extensively covered how AI serves both as a solution for cybersecurity challenges (enabling better threat detection and management at machine scale) and as a source of new risks (model poisoning, jailbreaking, data leakage, and sophisticated attacks by adversaries).
– Infrastructure Fragility and Readiness Gap: Multiple panelists emphasized that current digital infrastructure is already fragile, and organizations are rushing to implement AI without adequate foundational security measures. There’s a significant gap between AI adoption ambitions and actual organizational readiness in terms of data strategy, compute capacity, and security frameworks.
– Speed of AI Adoption vs. Security Preparedness: Unlike previous technological revolutions that allowed time for gradual adaptation, AI is being adopted at “breakneck speed” without sufficient time to understand and mitigate adversarial effects, creating unprecedented risks for enterprises and national infrastructure.
– Need for New Frameworks and Governance: The discussion highlighted the necessity for new assessment frameworks, AI operating systems, and governance structures. This includes corporate AI responsibility, trust and verification mechanisms, and the evolution from traditional cybersecurity to protecting decision-making processes and maintaining system trust.
– Strategic Transformation and Future Outlook: Panelists discussed how AI represents a paradigm shift from scaling transactions to scaling decisions, requiring new business models, talent approaches, and the emergence of AI-native companies that could disrupt existing industries within the next five years.
Overall Purpose:
The discussion aimed to explore the intersection of AI and cybersecurity from multiple perspectives – examining both how AI can enhance cybersecurity capabilities and how AI introduces new security challenges. The panel sought to address current readiness gaps, discuss strategic implications for enterprises and national infrastructure, and envision the future landscape of AI-enabled security.
Overall Tone:
The discussion maintained a balanced but cautionary tone throughout. While panelists acknowledged the tremendous opportunities AI presents for cybersecurity (describing it as creating a “level playing field” between attackers and defenders), there was a consistent undercurrent of concern about the pace of adoption outstripping security preparedness. The tone was professional and analytical, with experts sharing both optimistic possibilities and serious warnings about infrastructure fragility and the need for more thoughtful, systematic approaches to AI implementation.
Speakers
Speakers from the provided list:
– Samrat Kishor – Moderator/Host of the discussion
– Daisy Chittilapilly – Works at Cisco, focuses on AI and cybersecurity infrastructure, networking and security solutions
– G. Narendra Nath – Government official working on national security and cybersecurity policy, involved with CERT India and cybersecurity frameworks
– A. S. Lakshminarayanan – Executive at Tata Communications (TataCom), focuses on digital infrastructure and AI operating systems for enterprises
– Richard Marko – Expert in cybersecurity resilience, AI risks, and digital security systems
– Dharshan Shanthamurthy – Works with a cybersecurity company, provides consulting and thought leadership for large enterprises and government officials
– Pradeep Sekar – Cybersecurity expert working with enterprise leaders and boards on AI risk management and strategic cybersecurity
Additional speakers:
None – all speakers mentioned in the transcript are included in the provided speaker list.
Full session report
This comprehensive panel discussion at a technology summit explored the complex intersection of artificial intelligence and cybersecurity, examining both the transformative opportunities and significant risks that AI presents to digital security infrastructure. The conversation brought together experts from government, telecommunications, cybersecurity, and technology consulting to address what moderator Samrat Kishor framed as the dual challenge of “AI for cybersecurity” and “cybersecurity for AI.”
The Dual Nature of AI in Cybersecurity
The discussion opened with Daisy Chittilapilly from Cisco establishing a fundamental framework for understanding AI’s role in cybersecurity. She emphasised that AI follows the pattern of previous technologies in presenting both opportunities and challenges, but noted its unique potential to redefine how humanity lives, works, and plays. Chittilapilly highlighted that cybersecurity has long struggled to manage threats at human scale, making AI’s promise of machine-scale security management particularly compelling. However, she cautioned that AI simultaneously introduces new vulnerabilities, including model jailbreaking, confidential information leakage, data poisoning, and inherent vulnerabilities in open-source models.
This dual nature became a recurring theme throughout the discussion, with Dharshan Shanthamurthy from the cybersecurity sector providing a particularly optimistic perspective. He argued that AI represents a historic opportunity to level the playing field between attackers and defenders, noting that cybersecurity has traditionally been asymmetric, with intruders needing to succeed only once whilst defenders must succeed consistently. Shanthamurthy described AI as enabling the identification of “needles in haystacks” and transforming security operations centres from requiring constant human monitoring to automated, agent-driven processes. He emphasised that India is positioned at a “sweet spot” between AI and cybersecurity, creating opportunities to develop world-class talent and capabilities.
Infrastructure Fragility and the Readiness Gap
A critical concern emerged around the fragility of existing digital infrastructure and organisations’ readiness for AI implementation. A.S. Lakshminarayanan from Tata Communications delivered perhaps the most sobering assessment, warning that organisations are “fast running towards the cliff” by implementing AI on already fragile digital foundations. He used a powerful metaphor, stating that enterprises “can’t build a skyscraper with a foundation of a bungalow,” which became a reference point throughout the discussion.
Lakshminarayanan detailed how AI will exponentially increase network traffic, particularly east-west traffic, through numerous API calls and long-lived sessions that will strain edge infrastructure. He argued that the excitement around AI has overshadowed the critical need to strengthen foundational digital infrastructure before adding AI capabilities.
This concern was reinforced by Chittilapilly’s reference to Cisco’s AI readiness research, which revealed significant gaps in enterprise preparedness. Despite widespread enthusiasm for AI deployment among Indian enterprises, substantial readiness challenges exist across data strategies, compute capacity, threat understanding, and innovation infrastructure.
Unprecedented Speed and Strategic Asymmetries
G. Narendra Nath, representing the national cybersecurity perspective, highlighted a crucial difference between AI and previous technological revolutions: the unprecedented speed of adoption. Unlike earlier technologies that allowed gradual integration and time to understand both beneficial and adversarial applications, AI adoption is occurring at “breakneck speed” with widespread enterprise willingness to adopt AI tools immediately.
Nath identified a critical strategic asymmetry: adversarial actors, including nation-states and malicious enterprises, are more motivated and focused in their AI implementation than defensive users. Whilst organisations adopt AI primarily for productivity and efficiency gains, adversaries dedicate significant effort and thought to leveraging AI more effectively for malicious purposes. This creates a dangerous imbalance that must be addressed through conscious effort and strategic planning.
He also noted technical challenges unique to AI systems, particularly the lack of separation between control and data planes that characterises traditional systems. In AI, data itself becomes control, making systems vulnerable to model poisoning through inputs and creating drift over time that makes system behaviour unpredictable and non-deterministic.
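Because data itself acts as control, the behavioural drift Nath describes can only be caught by continuous monitoring of model outputs. As a minimal sketch (not something described in the session), a divergence check between a baseline output distribution and a recent window can flag when a model has stopped behaving as expected; the labels, threshold, and KL-based test are all illustrative assumptions:

```python
import math
from collections import Counter

def distribution(labels):
    """Normalize a list of model output labels into a probability distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) over the union of observed labels; eps guards missing keys."""
    keys = set(p) | set(q)
    return sum(p.get(k, eps) * math.log(p.get(k, eps) / q.get(k, eps)) for k in keys)

def drift_alert(baseline_outputs, recent_outputs, threshold=0.1):
    """Flag when recent model behaviour diverges from the accepted baseline."""
    return kl_divergence(distribution(recent_outputs),
                         distribution(baseline_outputs)) > threshold

# A stable model: recent outputs mirror the baseline mix.
baseline = ["allow"] * 90 + ["deny"] * 10
steady   = ["allow"] * 88 + ["deny"] * 12
drifted  = ["allow"] * 55 + ["deny"] * 45   # behaviour has shifted

print(drift_alert(baseline, steady))    # False
print(drift_alert(baseline, drifted))   # True
```

In practice the monitored quantity would be richer than a label mix, but the principle is the same: define acceptable behaviour statistically, then alert when live behaviour drifts away from it.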
Risk Management Frameworks and Governance
Pradeep Sekar introduced a comprehensive framework for understanding AI risks through three distinct lenses: compliance risk (meeting regulatory requirements like the EU AI Act), operational risk (assessing model reliability and service provider dependencies), and strategic risk (understanding financial impact of AI-driven attacks on organisational reputation and customer relationships). He noted that whilst most organisations focus on compliance, few have progressed to strategic risk assessment, which requires quantifying potential financial impacts and communicating these to boards in business terms.
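The strategic lens, expressing exposure in financial terms a board can act on, can be illustrated with the classic annualised loss expectancy calculation (ALE = single-loss expectancy × annual rate of occurrence). The scenarios and figures below are invented for illustration and do not come from the session:

```python
def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """Classic risk-quantification formula: expected yearly loss for one scenario."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical AI-driven attack scenarios: (loss per incident in $, incidents/year)
scenarios = {
    "deepfake executive fraud": (2_000_000, 0.5),
    "model data leakage":       (500_000, 2.0),
    "poisoned training data":   (5_000_000, 0.1),
}

# Rank scenarios by expected annual loss, highest first, for board reporting.
for name, (sle, aro) in sorted(
        scenarios.items(),
        key=lambda kv: -annualized_loss_expectancy(*kv[1])):
    print(f"{name}: ${annualized_loss_expectancy(sle, aro):,.0f}/year")
```

Even this crude quantification forces the conversation Sekar describes: risk stated as money per year rather than as a compliance checkbox.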
Sekar also argued that cybersecurity must evolve beyond protecting systems and data to protecting decision-making and trust. He introduced the concept of measurable trust through provenance, authenticity, and verification mechanisms, suggesting that future security systems will need to assess and rate the trustworthiness of transactions and communications in real-time.
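One way to picture measurable trust is as a scoring function that combines provenance, authenticity, and verification signals into a single rating that drives a handling decision. The signal names, weights, and thresholds below are hypothetical illustrations, not a description of any real system:

```python
from dataclasses import dataclass

@dataclass
class TrustSignals:
    """Per-message signals, each in [0, 1]. Names are illustrative, not a standard."""
    provenance: float    # confidence in the claimed origin (e.g. signed sender identity)
    authenticity: float  # content integrity checks (signatures, watermark detection)
    verification: float  # out-of-band confirmation (cross-channel or registry lookup)

def trust_score(s: TrustSignals, weights=(0.4, 0.35, 0.25)) -> float:
    """Weighted combination of the three signals into a single 0-1 trust rating."""
    wp, wa, wv = weights
    return wp * s.provenance + wa * s.authenticity + wv * s.verification

def triage(score: float) -> str:
    """Map a score to an illustrative handling decision."""
    if score >= 0.8:
        return "deliver"
    if score >= 0.5:
        return "flag for review"
    return "quarantine"

signed_msg = TrustSignals(provenance=0.95, authenticity=0.9, verification=0.8)
deepfake   = TrustSignals(provenance=0.2, authenticity=0.1, verification=0.0)

print(triage(trust_score(signed_msg)))  # deliver
print(triage(trust_score(deepfake)))    # quarantine
```

A real-time system would compute such scores per transaction or communication, which is the shift from protecting data at rest to protecting trust itself.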
The moderator suggested evolving from corporate social responsibility to “corporate AI responsibility,” where organisations take explicit ownership of their AI systems’ actions and impacts.
The Need for New Operating Systems and Architectures
Lakshminarayanan proposed the concept of an “AI operating system” as a comprehensive solution, arguing that organisations should move beyond evaluating individual large language models to implementing systematic frameworks with distinct layers: context (bringing together relevant information), agentic (enabling autonomous actions), and trust/governance (controlling what agents can and cannot do).
This architectural approach addresses the fundamental challenge of making AI knowledge actionable whilst maintaining control and oversight. Without such governance layers, organisations cannot safely leverage AI models’ capabilities or ensure responsible AI behaviour.
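A minimal sketch of what such a trust and governance layer might look like, assuming a default-deny allow-list with per-session quotas (the action names and policy shape are invented for illustration):

```python
# Policy: only listed actions may run, each with a per-session quota.
# "transfer_funds" is deliberately absent, so it is always denied.
ALLOWED_ACTIONS = {
    "read_document": {"max_per_session": 100},
    "send_summary":  {"max_per_session": 5},
}

class GovernanceLayer:
    """Gate that every agent action must pass before it executes."""

    def __init__(self, policy):
        self.policy = policy
        self.usage = {}  # per-session counters

    def authorize(self, action: str) -> bool:
        """Permit an action only if policy allows it and its quota is not exhausted."""
        rule = self.policy.get(action)
        if rule is None:
            return False  # not on the allow-list: deny by default
        used = self.usage.get(action, 0)
        if used >= rule["max_per_session"]:
            return False  # quota exhausted
        self.usage[action] = used + 1
        return True

gov = GovernanceLayer(ALLOWED_ACTIONS)
print(gov.authorize("read_document"))   # True
print(gov.authorize("transfer_funds"))  # False
```

The design choice that matters is deny-by-default: the agent can use the models underneath freely, but only within actions the enterprise has explicitly configured, which is exactly the control Lakshminarayanan argues organisations must take into their own hands.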
Chittilapilly provided insights into how AI is fundamentally changing infrastructure requirements, leading to a complete rewiring and restacking of enterprise infrastructure. The traditional approach of building security as bolt-on solutions is becoming obsolete, requiring instead system resilience built into all layers of both infrastructure and AI stacks.
Sectoral Challenges and National Perspectives
From a national perspective, Nath highlighted the uneven readiness across different sectors, creating both a “cybersecurity divide” and an “AI divide.” Whilst the financial sector has achieved relative maturity in cybersecurity practices, sectors like healthcare show similar enthusiasm for AI adoption despite significantly lower cybersecurity maturity. This creates new vulnerabilities in critical infrastructure.
He outlined government initiatives to address these challenges, including the development of assessment frameworks for AI systems, capacity building programmes, and leveraging existing institutional frameworks like CERT India and sectoral regulators. The use of regulatory sandboxes in financial and telecommunications sectors provides mechanisms for safely testing AI technologies before production deployment.
Human Factors and Implementation Challenges
Richard Marko brought attention to the human element in AI security, noting that people remain the weakest link in cybersecurity and that AI amplifies this vulnerability. He highlighted how AI-generated deep fakes and sophisticated social engineering make it increasingly difficult for humans to distinguish legitimate communications from scams. Additionally, AI agents performing tasks on behalf of users create new risks through potentially unsupervised actions.
The discussion also revealed talent challenges, with observations that younger professionals often adapt more readily to AI paradigms than experienced professionals, creating additional workforce considerations for organisations implementing AI systems.
Future Outlook and Strategic Implications
The panel identified AI as representing a paradigm shift from scaling transactions (the focus of previous technologies) to scaling decisions. This distinction requires organisations to rethink culture, talent development, and operational approaches. Lakshminarayanan argued that the decisions made in the next five years regarding AI strategy will determine organisational health for decades to come, particularly for his company’s infrastructure planning.
From a national perspective, Nath emphasised that AI adoption represents a competitive advantage that countries cannot afford to ignore, as other nations and enterprises will leverage AI for business improvement. However, this must be balanced with protection against AI’s adverse effects and careful management of dependencies created by AI integration into critical infrastructure.
Assessment and Governance Development
A significant gap identified throughout the discussion was the lack of adequate assessment frameworks for AI systems. Nath highlighted ongoing government efforts to develop frameworks that evaluate both security and functional aspects of AI systems before deployment, including the ability to distinguish between cybersecurity incidents and AI system malfunctioning or poor design.
Both government and private sector initiatives are developing these capabilities, with emphasis on making frameworks accessible to enterprises across sectors and providing clear guidance on security requirements and best practices.
Conclusion
The panel discussion revealed a complex landscape where AI’s transformative potential in cybersecurity is matched by significant implementation challenges and new risk categories. The consensus emerged that whilst AI offers unprecedented opportunities to improve security capabilities and level the playing field between attackers and defenders, successful implementation requires fundamental changes in infrastructure, governance, and strategic thinking.
The conversation highlighted the critical importance of building proper foundations before implementing AI capabilities, developing comprehensive governance frameworks, and addressing the speed mismatch between AI adoption and security preparedness. The discussion emphasised the need for coordinated action across sectors and between public and private stakeholders to address capacity building, assessment frameworks, and strategic risk management.
The panel concluded with recognition that India is well-positioned to develop world-class capabilities at the intersection of AI and cybersecurity, provided that proper attention is paid to building robust foundations and governance frameworks for this transformative technology.
Session transcript
The context is, have you overdone it? Right? When we talk about AI and cybersecurity, these two areas, how do they come together? There’s AI for cybersecurity, and there is cybersecurity for AI. Right? So what we’re going to do is, we’re going to discuss both aspects. We’re going to at least try. So, you know, the first question, and I’d like to actually point it to Ms. Daisy, you know, what has changed, you know, if you were to look at the larger picture, the big picture, you know, in terms of AI coming into cybersecurity? What has changed?
I think it’s what happens with all technologies, and AI is no different in that sense. It is, of course, as we’ve been hearing over the last few days, a technology that will redefine humanity and how we live, work, play, all of that. But one thing that it has in common with all of the other technologies that have come before it is that it’s both an opportunity and a challenge. And it’s particularly true when it comes to the security space. So on one side, there is the promise that, you know, for some time now, with the advent of technologies, the number of things getting connected, all of our lives going phygital, the threat landscape has, of course, expanded, and threats have become more and more complex and complicated.
And for some time now, we’ve not been able to manage cybersecurity at human scale. So machine scale was, you know, a lot of tooling was already in that space. So there is the promise with AI that you can manage security better. So there is definitely that opportunity. But at the same time, there is the recognition, like Dario Amodei said on the main stage yesterday, that his biggest concern, and all of our concerns, is that AI brings a set of risks, not all of which we have fully grasped. And there are a lot of them that we know of at this point in time today. So both of these, so it’s also, like I said, that commonality is there with all technologies that came before it.
It is both an opportunity and a challenge. Because we’ve got to protect models from being jailbroken. We’ve got to make sure that the models don’t leak our confidential information or poison our data. We’ve got to remember that most of these are open source models, and they come with inherent vulnerabilities, so how do we detect them? So we’ve got to think about securing AI as well.
Absolutely, and very rightly said. So it’s becoming a fundamental part of the infrastructure that is being then used to build applications. So earlier I think the perspective that changed was that we were looking at AI just at the application layer, but it’s gone much below in the infrastructure. It’s got embedded into the kind of systems which are now getting created and deployed. So we’re looking at AI as a way to make sure that we’re not just building a system that is just going to be running on AI.
So, and that is where I’d like to bring in, Narendra, you, your perspectives on what are you seeing in terms of national security? You know, is it something which is giving us a spike, a blip, something which you can discuss, disclose here?
Yeah, I mean, it’s required to be discussed. That’s one thing that’s definite. No, one, you know, I take the points that you’ve said. One thing about all the other technological revolutions, as you said, is that, you know, there was a time frame over which that seeped into the system. Okay, and then we had time to look at how do I use it beneficially and also to look at the adversarial effects of it and how do I mitigate those things. Case of AI is that it’s really happening at a breakneck speed. And there’s also an adoption, a willingness to adopt into enterprises of the different AI tools that are there. So that is where the scary part is there.
And the other is the adversarial part of AI. Though you use AI for cyber security, the issue is that there are nation states, or big enterprises which are adversarial enterprises, which would be using AI as a tool, and they have got a lot of motivation to put in effort and thought process into how do I use it more effectively. Whereas the persons who are actually using AI for their own benefit are looking at how do I improve my productivity, how do I improve my efficiency; that’s the focus area that they are in. So this is where there is a disconnect, and this has to be really bridged, and that’s where the problem is.
The summit actually, in one way, is helping people become conscious about some of the measures that have to be taken. That is one part. The other is the difference between other systems and this, and this is a little technical, in the sense that in the other systems we have a separate control plane and a separate data plane. There we could actually provide access limits to the control plane. But here the data itself is the control. So you have that poisoning of models happening through the inputs that are there, so you could have a drift, and over a period of time you will find that the model will not be behaving as you would expect it to behave, and it’s also not very deterministic. So there are challenges in how do I protect AI systems to see that they give me consistent results after a period of time. Then there is also a lack of clarity about what is a cyber security issue there and what is an issue of malfunctioning or poor design of an AI system; that lack of clarity also results in the challenges that are there.
I think these are the preliminary thoughts that I have. So at the national scale, the issue is that when you have multiple entities at the enterprise scale, and the financial sector, the telecom sector, the power sector and all of them, adopting AI, the effect of compromises on the critical information infrastructure is something that would actually make us wake up and then see what could be done. Those are issues that are there.
Excellent pointers, sir. Excellent pointers. And I think since you brought in the private sector and the way they’ve evolved and they’re also subjected to these risks which are evolving in nature. I’d like to bring in Lakshmi, sir, from Tata here. So, sir, a lot of infrastructure is being built, connected, communicated using what you’re building for the nation. So, how are you seeing the paradigm shift from let’s say how it used to be before AI was commoditized and everyday technology. It used to be the labs. Now it’s out in everybody’s hands. So what is the change that you are seeing and the impact you’re seeing on critical infrastructure?
A. S. Lakshminarayanan:
I don’t think people have woken up to the fact that they are fast running towards the cliff. Because I genuinely think that the digital infrastructure in enterprises today is already fragile. And we know that from an enterprise security point of view, there are so many attacks that are happening. And we know that there are huge issues when it comes to, for example, now we talk more about IT, OT security; the operational technology in factories was never in the purview of IT security. And, you know, security today and digital infrastructure in general is still very fragile. It’s islands of different OEM technologies and many, many things. And, you know, it is a major issue.
Now, on top of this fragility, you add AI. And this fragility is going to be multiplied 100 times. It comes over, right, on many, many kinds of platforms, because AI is going to increase the network traffic, especially the east-west traffic, by, again, multifold. The number of API calls that somebody… And we all are saying, oh, I’ll embed AI at the edge of the device, and if I have a banking application, I’ll do that, but nobody has thought through. If you put inferencing at the edge, the number of API calls these have to do is tremendous, and these API calls are long-lived sessions. They’re not traditional API calls.
So the edge infrastructure is going to come under tremendous strain. So that’s why I’m saying that in all our excitement of AI, and I’m very passionate and excited about AI, I genuinely feel that people are not looking at the foundations properly. So that is very fragile, and that is one point I want to make. The second point about this is I would like to expand the scope of this discussion. It’s not about AI and cybersecurity alone. It’s also about a broader trust question. I think we all know, you know, whether a message is fake or real, you don’t know. Apply that in the enterprise context. And there was a talk about, you know, model drifts and so on and so forth.
So what we at TataCom are doing, one is to protect the digital infrastructure through many, many things that we can do. And the unfortunate part is I don’t think enterprises have woken up to the fact that they have to do it. So I tell them that you can’t build a skyscraper with a foundation of a bungalow, which is what they’re trying to do. But when it comes to the drift and the trust part of it, I do believe that enterprises require an AI operating system. And what we mean by that AI operating system is something that brings the context together, because LLMs will provide the knowledge. To make that knowledge into actionable intelligence, you need the context layer, you need the agentic layer, and more importantly, you need to have a trust and governance layer which will control what an agent will do or will not do.
And if I take that control in my hands, and say that I will configure and ensure this agent will do something or not do something, I can make use of the models underneath a lot more intelligently. So I think rather than focusing on whether this LLM is good or that LLM is good and so on, this AI operating system is what is required for people to build an application which will ensure that all of these are governed properly.
Sir, that’s a great point. In fact, I was having a conversation a few days back, and I was saying that that from the time of corporate social responsibility, it’s time to evolve to corporate AI responsibility, where corporates start talking about how they’re controlling and owning the actions of the AI that they’re building and deploying. Great perspective, sir. Thank you very much. At this point, I’d like to bring in Richard to sort of continue the talk about digital infrastructure and resilience. So how has resilience in your perspective evolved when we talk about AI risks to cybersecurity and vice versa?
Well, the question of resilience is a complex question. So I will bring a few aspects that I think are very important. So it is well understood that there are a lot of people who are in the industry and that people are typically the weakest link in cybersecurity. the reason is that we as human beings we were not evolved to deal with machines, computers and so on and most of us don’t have really deep technical knowledge about how systems work and so on so we are to a big extent dependent on relatively superficial understanding and so we are more easy to be tricked by different social engineering tricks and so on. Now with AI this is becoming a big issue because how you can distinguish a scam from a real communication when the scam communication looks exactly like the real communication I’m talking about deep fakes and so on so this is one aspect of the risk connected directly with people.
The other aspect is that we want AI to empower people to do things, more things, and make them in an easier way. So we have those agents and we give them, or we want to give them, some commands, like do this for me or that for me. But we don’t understand all the steps that the agent will take on our behalf when performing those tasks, and in each of those tasks there can be a risk factor involved without us knowing. Like, if you want to perform this action, you will need to have those additional tools to achieve that. And where do you get those additional tools? If AI decides on your behalf, these are the tools you need, software packages, whatever it is, and they get to your computer without this being supervised, then this is a problem. So we have to be very careful. Where I’m heading is that resilience here is really protecting, or paying attention to, details.
What is actually happening? What is running in the background? How are your commands transferred to the agents? Is there a possibility for them to be intercepted, to be modified? So it was difficult and complex even before the advent of the new AI agentic approach. Now it’s becoming even more important to really go into all the details. And we just heard from Lakshmi that he sees that we are moving towards a cliff. Well, it depends on us, of course. We want to go fast. We want to employ. We are all excited about AI, but maybe sometimes we need to slow down a little bit and make sure that the pieces are in place and cyber security is not overlooked.
Excellent, excellent perspectives. And I think an offshoot to that question can be to Ms. Daisy, which is what are you seeing as changing when you’re talking about digital infrastructure and especially the connectivity which it needs because you’re at Cisco, right? And here is something which is connecting a lot of things to a lot of other things. So how are you seeing changes happening, especially when you talk about resilience and what’s going on inside digital infrastructure?
So I think Lakshmi touched on a very important point of the underlying, the fragility of the underlying infrastructure. And that is something that I want to reiterate. For the past few years, we’ve been publishing an AI readiness index. And the good news is that we are as ready as everybody else. The bad news is maybe we’re not as ready as we think we are, which is the point Lakshmi is making, right? 90% of… just under 1,000 large enterprises that we spoke to in India want to deploy agents this year. Forty percent want that agent to work alongside a human being, but only about two-thirds of those enterprises really have a data layer, data strategy, a data platform, and a data governance strategy.
Only about one-fourth have the compute capacity they need. Only about one-third are able to understand AI threats and deal with them. And less than one-fifth have the innovation engine to think about building and scaling and maintaining AI applications and use cases. So clearly there is this ambition versus reality gap which we have to solve for. That’s not a problem as long as we all know that that’s where it is, and they were acutely aware of this issue. Thank you. The other thing is that AI is essentially leading to a rewiring and restacking of the enterprise. It’s not just networks, it’s compute, it’s silicon; I know, you know, at the national level silicon security is a conversation. So all this resiliency which we used to build almost like a bolt-on at the top, and particularly we used to think of it only as cyber resiliency, is now a system resilience which is built into all layers of the infrastructure stack and all layers of the AI stack. And that’s why at Cisco, since you asked me a network-specific question, we used to deal with connectivity largely as connectivity, and now the persona of that end port, which connects to an end device that might be doing inferencing or sits in the data center, has to be such that on one side it will be a switch or a router, but on the other side it will also be a security defense point.
So this ability of building a special grade of security appliances and putting them in various parts of the network is fast becoming an outdated idea. What we've got to do is break security into a number of virtual instances that can go wherever you want the security policy to be. So it becomes a very virtual, distributed mesh rather than hardware. Yes, there will be hardware; I'm not saying it will go away. But there is this ability to infuse security into the fabric, and networks tend to be the all-pervasive fabric. That's the way, at least at Cisco, we're thinking about it. So these domains of networking and security are crashing together, and secure networking is the conversation in the network space particularly.
The other part about this is the performance requirement, which Lakshmi also alluded to. AI will put pressure on the underlying infrastructure. In a way it's an exponential technology, so the demands it creates on its underlying layers are also exponential. So we've got to build almost a new category of technology: silicon, systems, applications, everything. A new category has to be built, and we have to build it in new ways; you cannot build it the way we built things in the past. Applications is an interesting one. We used to give an input and expect the same output on the other side. But now, if you are going to deploy AI models, this thing is probabilistic.
And I referred to this earlier. You want to get it to a degree of assurance, because in a financial application or a very important citizen-service application, you give an input and the output has to be deterministic, yet you're using at the core of it a probabilistic technology. So that refinement takes a whole lot of work. It's rethinking at all layers: from silicon to software to systems, you have to rethink everything. Every rule we have to rethink.
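The pattern Daisy describes, wrapping a probabilistic model so that a critical application still sees deterministic behaviour, can be sketched as a validation-and-fallback layer. This is a minimal illustration, not Cisco's implementation; the model stub, the JSON contract, and the amount bound are all hypothetical.

```python
import json

def model_stub(prompt: str) -> str:
    # Stand-in for a probabilistic model call; a real model may
    # return malformed or out-of-policy output on any given run.
    return '{"approved": true, "amount": 120.0}'

def guarded_decision(prompt: str, retries: int = 3) -> dict:
    """Enforce a deterministic contract around a probabilistic core:
    validate structure and bounds, retry on failure, and fall back
    to a safe default rather than propagate bad output."""
    for _ in range(retries):
        raw = model_stub(prompt)
        try:
            out = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry
        if (isinstance(out.get("approved"), bool)
                and isinstance(out.get("amount"), (int, float))
                and 0 <= out["amount"] <= 10_000):
            return out  # passes the deterministic contract
    # Safe default: never approve on unverifiable model output.
    return {"approved": False, "amount": 0.0}

print(guarded_decision("approve payment of 120 for invoice 42"))
```

The design choice is that the application layer only ever sees outputs that satisfy the contract; the probabilistic variance is absorbed by the retry loop and the conservative fallback.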
Excellent, excellent. And since you brought in that perspective of rethinking, reimagining, and how we're using AI in the operating system of the company, I'd like to bring in Dharshan here. So Dharshan, you do a lot of great work in creating thought-leadership content as well as doing consulting work for very large companies. Of course, there are CXOs and a very highly ranked official of the government sitting here, but what are the other CXOs thinking about when it comes to AI? Is it still a compliance thing, or has it percolated into strategy?
First of all, thank you. I'll probably add some context to whatever I've heard so far. My view is that any technology disruption brings in two emotions: hope as well as fear. And I'm sure the other panelists have rightfully covered the fear construct of AI in cyber safety, and rightly so; no disputing that truth. But there is a huge hope component for a hardcore deep-tech cybersecurity company like ours: I see a lot of opportunities. And we as a country, India, are at the sweet spot at the intersection of AI and cybersecurity. This topic is very aptly crafted, because I think it's a huge opportunity for us to utilize.
And I'll tell you why. Cybersecurity has so far been a very asymmetric equation. The intruders have always had an advantage over the defenders, or anyone who's defending a network, because they just need to get one thing right and we need to get everything right. So it's always been asymmetric. But with AI, all of a sudden we are on a level playing field from a technology standpoint to identify a needle in the haystack. One classic use case can be an agentic security operations center. Because at the end of the day, if you have ever visited a security operations center, it is 24x7: an analyst looking at a screen, an almost inhuman job, so to speak.
But today, with AI, you've got a level playing field, because we've seen those kinds of use cases being deployed at our SOC, where even a shift handover is done by an agent. So there are a lot of real use cases, and I'm on the hope side: there's a lot of opportunity today. And second, in terms of talent: we have a lot of youngsters sitting in this room who are looking to grow. We have spoken so much about other services and other areas evaporating in terms of job opportunities. I think we can create the world's cybersecurity talent in combination with AI, because cybersecurity and AI are not two different fields. Cybersecurity needs AI, and AI needs cybersecurity.
So I think we are at a very opportune time to really ride this wave and create world-class talent which can address these challenges. Now, on the second part you just asked about: that's what we are hearing from CXOs globally, since we deal with a lot of people in the payment ecosystem. CXOs obviously have the same construct of hope versus fear. Being a CISO or a CIO, there is an amount of fear that comes in, because these are real problems. For example, deepfakes or spear-phishing attacks have become more robust with AI. But one of the key things that we are trying to explain is: yes, those are things that you need to address, no doubt, but can you also look at how you can take advantage of AI?
And Lakshmi rightly pointed out, how do you have an AI operating system? Similarly, we talk about how you can have an AI security operating system, where you have a playbook on how to leverage AI rather than being on the defensive. Those are my views, Samrat.
Excellent, excellent views, and thank you very much for those perspectives. I'm glad that I still see people coming in; this is an interesting session, and some people are standing as well. I would like to bring in Pradeep now. Pradeep, as a follow-on to what I just asked Dharshan: while AI is percolating into strategy a bit, do you think that we should have a dedicated function within an organization, and what are you seeing currently, not just in India but elsewhere as well?
Yeah, thank you for that. Probably adding on to what Dharshan said: I don't mind the hope-and-fear framing, because being in the cybersecurity space, both of them add to what we can do for the industry as a whole, and for the country as a whole, if you will. When we look at it strategically, when we talk to leaders and boards at companies in India and across the world, predominantly when the conversation is about AI, the topic goes towards innovation, competitiveness, and the ability to bring in productivity gains. What often gets missed is that AI is quietly reshaping the risk equation within the enterprise. So cybersecurity can no longer be just about protecting systems and data.
Now, don't get me wrong: cybersecurity is still needed to identify all the systems within your enterprise, and beyond into the extended enterprise, as well as to protect the data on all of these systems. But it needs to evolve into something more, given the AI landscape, which is, I love how Lakshmi put it, going to be about trust. So going forward, how can cybersecurity evolve to start protecting decision-making and trust? Because trust is starting to become measurable: through provenance, through authenticity, as well as through verification. All of these mechanisms are going to come in, in a way that lets us identify, measure, rate, risk-rank, and call out whether a particular transaction, whether it's a payment approval or an executive communication, is trustworthy or not.
And then accordingly, the agent or the system that's allowing the transaction to go through allows it or not. So that's something that we're seeing, and AI in this context is a force multiplier on both sides. For us as defenders, we are seeing, like Dharshan said, how we are able to detect and identify threats at a scale and speed that we have never seen before. And if you ask, okay, is it going to completely revamp how we run SOCs? A little, yes. It's not going to replace all the analysts, but for certain tasks we have already started seeing, with Microsoft and its Security Copilot, how it can automate tasks.
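The gating idea Pradeep describes, where a transaction is allowed or blocked based on measurable trust, can be sketched as a simple scoring function over the three signals he names: provenance, authenticity, and verification. The weights, the threshold, and the escalation band below are purely illustrative assumptions, not an industry standard.

```python
def trust_score(signals: dict) -> float:
    """Combine provenance, authenticity, and verification signals
    (each assumed to be normalised to [0, 1]) into one trust score.
    The weights are illustrative only."""
    weights = {"provenance": 0.4, "authenticity": 0.35, "verification": 0.25}
    return sum(weights[k] * float(signals.get(k, 0.0)) for k in weights)

def gate_transaction(signals: dict, threshold: float = 0.7) -> str:
    """Allow, escalate to human review, or block a transaction
    (payment approval, executive communication) based on its score."""
    score = trust_score(signals)
    if score >= threshold:
        return "allow"
    if score >= threshold - 0.2:
        return "escalate"  # borderline: route to an analyst
    return "block"

# A payment approval with signed provenance but weak verification:
print(gate_transaction({"provenance": 1.0, "authenticity": 0.9, "verification": 0.3}))
```

The three-way outcome mirrors the panel's point that agents should not be all-or-nothing: borderline trust goes to a human rather than being silently allowed or dropped.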
Like different agents doing different tasks; we're already starting to see that. But in addition, it's also helping attackers on the other side of the equation: it is industrializing disruption at scale. Think phishing. Think social engineering. All of this manipulation is now happening at an unprecedented scale. That's going to continue, and you're going to see it continue for, let's say, the next few years, because that's where we're headed in terms of AI-aided phishing and manipulation, and how this is going to impact the industry as a whole. That's pretty much how the tectonic shift is happening across the board.
So, working with leaders and board members, we are looking at how to frame these risks, and here we usually see three lenses. One is compliance risk: am I complying with the EU AI Act? Am I complying with the DPDP Act or other sectoral guidance? It's more of a check-the-box approach; it maybe helps me protect against regulatory exposure, but not against systemic risk, like what Ms. Daisy was saying. The second angle, which some companies have started to move towards, is operational risk, where the boards are starting to ask: are the models I am using reliable?
Are they safe? Are they trustworthy? And what is the risk if a particular model, or the service provider behind it, goes down? That's the operational-risk angle we're seeing more of. The third angle, which I think very few companies take today, is the strategic-risk angle: being able to call out, if there's an AI-driven attack, say an identity attack, that is impacting the reputation of my organization with my customers, what is my exposure in financial terms? These are questions that boards will need to start asking. We are already getting those questions from leaders: how are we able to measure those, and how do you quantify risk in financial terms?
And be able to convey that to the board as well, because at the end of the day that's what boards are concerned with: being answerable to the stakeholders and shareholders.
That's great, and those are some interesting lenses that you put on the whole conversation. Sir, I'd like to bring you in now from your vantage point. When we talk about India's DPI, we are implementing AI into systems which cater to healthcare, to telecom, across the citizen supply chain, if you will. So how do we make sure that the AI deployments we're doing are secure, and what capabilities do we have to make sure they're taking care of the risks that the fellow panelists highlighted?
The financial sector, for example, is mature. But take, say, the health sector. It's not as mature as the others. Yet if you look at the enthusiasm of the health sector to adopt AI, you'll find that the level of enthusiasm is similar to that in the other sectors. So that is a big challenge. We've been engaging with the health sector, for example; we've had recent meetings to ask how we can improve the cybersecurity posture of that sector. So that's a big challenge, actually. We had the digital divide. We have a cybersecurity divide. And now we are going to have this AI divide across enterprises in different sectors.
So that is a challenge that needs to be addressed. That, I think, is the capacity-building part, along with coming up with frameworks that people have access to and that help them understand what is really required to be done. And you talked of assessment. When an enterprise comes with an AI system: is it secure? Is it doing the work it's supposed to do? We don't have those assessment frameworks now. The testing and assessment part is important, and creating that infrastructure so that people could go and test and assess is an important part. The department of DRD has come up with an ETI framework, if you're aware of it.
Similarly, from our office we also funded a project, around a year back; it started in November 2024. We funded it to come up with an assessment framework for AI systems. One part is the security aspect, and the other is the functional aspect: somebody claims that this AI system does something; how do you actually assess that? So one part is capacity building, and the other is having the frameworks in place. One good thing about this country is that we have an institutional framework that has been established over a period of time, especially for cybersecurity, like CERT-In and the NCIIPC.
The sectoral regulators have also come up with sandboxing regulations, in the sense that if you want to try out something new, you have regulations that help you do so. In the financial sector you have the RBI sandbox, and the telecom sector also has this mechanism. So people can start using these sandboxes to prove technologies, applications, and use cases, and that will help them to actually understand how things really work before they deploy in production. That, I think, would help going forward.
Awesome. Thank you, sir. I think it's enlightening and enriching for all of us here to know your perspective, especially what the government is doing. I'd like to bring in Lakshmi sir from Tata for the next question. So, sir, if we reconvene here five years from now, what are we going to be talking about? What did we do? What did we get right?
A. S. Lakshminarayanan:
I think these discussions are very healthy, whether we look at AI with a positive lens or with a fear lens. Let me make two comments. One is on the question of assessment. We ourselves at Tata Communications asked the question: where do we want to be five years from now? And I made a statement that the next five years will determine the health of the company for the next 50 years, because the technologies are moving very fast. For an assessment framework, we studied a lot of material and didn't find something good, so we developed a framework ourselves, where on one axis we plotted capability.
It includes talent. It includes the platform, which is why I said there is no point in doing individual use cases in an organization; how many use cases will you do? You need a platform approach, which is where we said an AI operating system is required. So that is maturing. On one axis we are going to plot capability: talent, even culture. I don't know whether people have appreciated that AI is a very different paradigm. Even now in the discussions I see people talking about how AI can help automate things and do things faster. No, that's not what AI will do. While the previous technologies of cloud and internet helped companies to scale transactions, AI is going to scale decisions. And when you're scaling decisions, you need to think of a different paradigm altogether, yet we are still talking in the old paradigm of what tasks can be automated and how. So this is a new paradigm. On the capability axis, the culture dimensions have to be thought through carefully, and talent likewise; I find some of the younger talent easier to train on AI than some of the older, unfortunately. So the whole talent and capability equation is one axis, on which we're going to plot ourselves. The other axis is outcomes: what outcomes do you really want to deliver with AI?
And there, outcomes could be more on efficiency, on revenue enhancement, or on trust and customer satisfaction. All those outcomes need to be plotted. I must admit, we ourselves are somewhere in the lower quadrant, and I hope we as a company will move to the top quadrant. That needs to be defined and visualized; only then can you move towards it. That's what we're driving the company towards, with all the platform development we're doing and the strengthening of our infrastructure for enterprises, and we've shared some of these assessments with our customers as well. So that is one: I hope most people would see themselves moving towards the top quadrant in five years' time.
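The two-axis framework Lakshminarayanan sketches (capability versus outcomes, with organizations aiming to move from the lower to the top quadrant) can be expressed as a simple classifier. The score range, midpoint split, and quadrant labels are assumptions for illustration; Tata Communications' actual framework is not public here.

```python
def quadrant(capability: float, outcomes: float, midpoint: float = 0.5) -> str:
    """Place an organisation on two axes: capability (talent, platform,
    culture) and outcomes (efficiency, revenue, trust). Scores are
    assumed normalised to [0, 1]; the midpoint split is illustrative."""
    cap = "high" if capability >= midpoint else "low"
    out = "high" if outcomes >= midpoint else "low"
    return {
        ("high", "high"): "top quadrant: capable and delivering outcomes",
        ("high", "low"): "capable but outcomes not yet defined",
        ("low", "high"): "outcomes on a thin capability foundation",
        ("low", "low"): "lower quadrant: ambition vs reality gap",
    }[(cap, out)]

print(quadrant(0.3, 0.2))  # roughly where many enterprises sit today
```

Plotting every business unit through a function like this is what turns the framework from a slide into something trackable year over year.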
The second thing that I worry about, in the context of strategy, is this: in the previous technology wave, when we had the internet and cloud, new business models came about. We had intermediaries, the Booking.coms and others, who disintermediated many, many people, or fintechs who came and did things better than the larger banks. It was only later that the larger entities woke up to the fact that these players were going to eat their lunch. That's what happened in the previous wave of technology.
In AI, I think a similar disruption is waiting to happen. We don't know where, when, and what. But if a strategy does not think about what disruptions are going to happen, we will have missed the bus. So five years from now, I would expect a new class of AI-native companies out there disrupting the existing business models. Those are the two things I would expect to happen in five years.
Fabulous. And sir, one last question to you. If you were to give me a call five years from now and say, "Samrat, this is how nation-states have changed," what would that be?
See, one is that adoption of AI is a competitive advantage, and I have said this elsewhere also. That's why you have to adopt AI; you don't have any other choice, because other nations and other enterprises are going to adopt it and look at how to do business better. So going down the line, you will find that we will have adopted AI, and this conference is very good for that, five years down the line. The other is protecting yourself from the adverse effects of AI, because it's a very powerful tool. As was pointed out, in just one year we have seen so much development happen that we do not know where this is really going to lead us. So the thing is for us to be on our toes and to look at how this technology is going to affect the way we do business and how we run our countries; and then the development of capacity and capability, identifying the dependencies we take on when this technology is adopted, and seeing how to mitigate the dangers of those dependencies. This is where I think the thought process would be, and this is the road map for the next five years for us.
Thank you, thank you very much, sir, and thank you to all the panelists for taking time out and agreeing to do this for the audience. I see the room is full, and a lot of people are waiting on the sides as well. Thank you all for paying attention. Please put your hands together for the esteemed panel that we have here. We have to conclude this panel only for paucity of time; otherwise we could have gone on. Thank you very much. Thank you.
Daisy Chittilapilly
Speech speed
165 words per minute
Speech length
1068 words
Speech time
386 seconds
AI as both a cybersecurity tool and a new attack surface
Explanation
Daisy highlights that AI promises better security management but also creates new vulnerabilities, especially because humans cannot manage cybersecurity at scale. She stresses the need to secure AI models against open‑source flaws, data leakage and poisoning.
Evidence
“So there is the promise with AI that you can manage security better” [1]. “And for some time now, we’ve not been able to manage cybersecurity at human scale” [5]. “So we’ve got to think about securing AI as well” [14]. “We’ve got to make sure that most of these are open source models, that they come with inherent vulnerabilities, so how do we detect them?” [17]. “We’ve got to make sure that the models don’t leak our confidential information or poison our data” [18].
Major discussion point
AI as both a cybersecurity tool and a new attack surface
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Deep embedding of AI into digital infrastructure
Explanation
Daisy points out that AI will stress existing infrastructure and that security must become a virtual, distributed mesh rather than a set of hardware appliances. She calls for breaking security policies into virtual instances that can follow AI workloads wherever they run.
Evidence
“AI will put pressure on the underlying infrastructure” [12]. “So it becomes a very virtual distributed mesh rather than hardware” [55]. “And the point we’ve got to do is we’ve got to break it into a number of virtual instances that can go wherever you want the security policy to be” [56]. “So this ability of building special grade of security appliances and putting them in various parts of the network is fast becoming an outdated idea” [57].
Major discussion point
Deep embedding of AI into digital infrastructure and its operational impact
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Risk management frameworks and AI readiness
Explanation
Daisy notes that many enterprises lack the capacity to understand and mitigate AI‑related threats, citing low percentages of compute capacity, data strategy, and innovation capability. She references Cisco’s AI readiness index as a way to gauge this gap.
Evidence
“For the past few years, we’ve been publishing an AI readiness index” [86]. “Only about one-third are able to understand AI threats and deal with them” [89]. “Forty percent want that agent to work alongside a human being, but only about two-thirds of those enterprises really have a data layer, data strategy, a data platform, and a data governance strategy” [90]. “And less than one-fifth have the innovation engine to think about building and scaling and maintaining AI applications and use cases” [91].
Major discussion point
Risk management frameworks, regulatory approaches, and strategic foresight
Topics
Capacity development | Artificial intelligence | The enabling environment for digital development
G. Narendra Nath
Speech speed
186 words per minute
Speech length
1261 words
Speech time
405 seconds
AI as both a cybersecurity tool and a new attack surface
Explanation
Nath stresses that nation-states and large adversarial enterprises are using AI offensively, leading to rapid model poisoning and other attacks that outpace defensive measures.
Evidence
“So at the national scale the issue is that when you have multiple entities … adopting AI the effect on compromises and the critical information machine infrastructure is something that would actually make us wake up” [6]. “though you use AI for cyber security but the issue is that there are nation states or big enterprises which are adversarial enterprises which would be using AI as a tool for doing it” [10]. “but here the data itself is the control so you have that poisoning of models … model will not be behaving as you would expect” [16]. “And the other is the adversarial part of the AI is that” [33]. “Case of AI is that it’s really happening at a breakneck speed” [35].
Major discussion point
AI as both a cybersecurity tool and a new attack surface
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Deep embedding of AI into digital infrastructure and its operational impact
Explanation
He notes that existing IT/OT environments are already fragile and that AI multiplies this fragility, creating an AI divide across sectors and magnifying infrastructure weakness.
Evidence
“Now, on top of this fragility, you add AI” [15]. “Because I genuinely think that that the digital infrastructure in enterprises today are already fragile” [52]. “And this fragility is going to be multiplied 100 times” [54]. “And now we are going to have this AI divide that’s going to be there across enterprises in different sectors” [53].
Major discussion point
Deep embedding of AI into digital infrastructure and its operational impact
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Risk management frameworks, regulatory approaches, and strategic foresight
Explanation
Nath describes capacity-building initiatives such as sandbox environments, assessment frameworks, and sector-specific regulations that help enterprises test AI safely before production deployment.
Evidence
“So if you’re aware, you know, the testing and assessment part is an important part, and creating that infrastructure so that people could go and then test and assess, that is an important part” [73]. “So I think people start using these sandboxes to prove technologies, to prove applications, to prove use cases” [83]. “So I think one is the capacity building part, and the other is the, you know, having the frameworks in place is good” [84]. “and also the sectoral regulators also come up with sandboxing regulations” [85]. “So I think these like in the financial sector, you have the RBI sandboxing” [87].
Major discussion point
Risk management frameworks, regulatory approaches, and strategic foresight
Topics
The enabling environment for digital development | Capacity development | Artificial intelligence
A. S. Lakshminarayanan
Speech speed
162 words per minute
Speech length
1324 words
Speech time
488 seconds
Deep embedding of AI into digital infrastructure and its operational impact
Explanation
Lakshminarayanan explains that AI has moved from the application layer to the core of infrastructure, dramatically increasing east‑west traffic and stressing edge inference workloads with long‑lived API sessions.
Evidence
“So earlier I think the perspective that changed was that we were looking at AI just at the application layer, but it’s gone much below in the infrastructure” [41]. “So it’s becoming a fundamental part of the infrastructure that is being then used to build applications” [42]. “Now, on top of this fragility, you add AI” [15]. “because AI is going to increase the network traffic, especially the east -west traffic, by, again, multifold” [37]. “If I put an inference there, or if you put an inferencing at the edge, the number of API calls these have to do is tremendous, and these API calls are long -lived sessions” [50]. “So the edge infrastructure is going to come under tremendous strain” [51].
Major discussion point
Deep embedding of AI into digital infrastructure and its operational impact
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Governance, trust, and the need for an AI operating system
Explanation
He argues that enterprises need an AI operating system that provides context, agentic control, and a trust‑governance layer to manage model drift and ensure safe AI actions.
Evidence
“But when it comes to the drift and the trust part of it, I do believe that enterprises require, require an AI operating system” [45]. “You need a platform approach, which is where we said AI operating system is required” [49]. “To make that knowledge into actionable intelligence, you need the context layer, you need the agentic layer, and more importantly, you need to have a trust and governance layer which will control what an agent will do or will not do” [61]. “and what we mean by that AI operating system is something that brings the context together because LLMs will provide the knowledge” [62]. “So I think rather than focusing on whether this LLM is good … this AI operating system is what is required for people to build an application which will ensure that all of these are governed properly” [63].
Major discussion point
Governance, trust, and the need for an AI operating system
Topics
Artificial intelligence
Talent development and future outlook
Explanation
Lakshminarayanan foresees a new class of AI‑native companies that will disrupt existing business models and stresses the need for talent pipelines and cultural shifts to scale AI‑driven decisions.
Evidence
“So five years from now, I would expect a new class of companies who are AI native, who are out there going to disrupt the existing business model” [92]. “AI, I don’t know whether people have appreciated this is a very different paradigm … talent appropriate, I find some of the younger talent are more easy to train on AI than some of the older” [93]. “And I made a statement that the next five years will determine the health of the company for the next 50 years because the technologies are moving very fast” [98]. “So those are the two things I would expect in five years to happen” [99].
Major discussion point
Talent development and future outlook
Topics
Talent development | Artificial intelligence | The digital economy
Richard Marko
Speech speed
137 words per minute
Speech length
463 words
Speech time
201 seconds
AI as both a cybersecurity tool and a new attack surface
Explanation
Marko warns that AI‑generated deep‑fakes make it hard to distinguish legitimate from fraudulent communications, exploiting the fact that humans are the weakest link in cybersecurity and are prone to social engineering.
Evidence
“Now with AI this is becoming a big issue because how you can distinguish a scam from a real communication when the scam communication looks exactly like the real communication I’m talking about deep fakes and so on” [25]. “So it is well understood that there are a lot of people who are in the industry and that people are typically the weakest link in cybersecurity” [26]. “the reason is that we as human beings we were not evolved to deal with machines…” [27]. “Think social engineering” [28]. “That’s going to continue, and you’re going to see it continue for, let’s say, the next few years because that’s where we’re headed in terms of AI-aided phishing” [31].
Major discussion point
AI as both a cybersecurity tool and a new attack surface
Topics
Building confidence and security in the use of ICTs | Artificial intelligence | Human rights and the ethical dimensions
Dharshan Shanthamurthy
Speech speed
174 words per minute
Speech length
590 words
Speech time
202 seconds
AI as both a cybersecurity tool and a new attack surface
Explanation
Dharshan highlights that AI strengthens deep‑fakes and spear‑phishing, while also emphasizing the symbiotic relationship between AI and cybersecurity.
Evidence
“For example, deep fakes or spear phishing attacks have become more robust with AI” [13]. “They actually, cyber security needs AI and AI needs cyber security” [2]. “I think we can create the world’s cyber security talent combination with AI because cyber security and AI are not two different fields” [4].
Major discussion point
AI as both a cybersecurity tool and a new attack surface
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Governance, trust, and the need for an AI operating system
Explanation
He proposes an AI security operating system and a playbook to shift from a defensive stance to proactive AI leverage.
Evidence
“Similarly, we talk about how you can have an AI security operating system, right?” [7]. “Which you should have a playbook on how to leverage AI rather than being on the defense player” [65]. “But one of the key things that we are trying to explain is that, yes, those are things that you need to address, no doubt, but can you also look at how you can take advantage of those AI?” [82].
Major discussion point
Governance, trust, and the need for an AI operating system
Topics
Artificial intelligence | The enabling environment for digital development
Talent development and future outlook
Explanation
Dharshan asserts that AI levels the playing field, enabling broader talent to detect threats and encouraging a cultural shift toward AI‑driven security operations.
Evidence
“But with AI, now all of a sudden we are at the level playing field from a technology standpoint to identify a needle in the haystack” [95]. “But today, with AI, now you’ve got a level playing field because we’ve seen those kinds of use cases being deployed at our SOC, where even a shift handover is done by an agent” [96]. “Cybersecurity has so far been a very asymmetric equation” [32].
Major discussion point
Talent development and future outlook
Topics
Talent development | Artificial intelligence
Pradeep Sekar
Speech speed
177 words per minute
Speech length
829 words
Speech time
280 seconds
Risk management frameworks, regulatory approaches, and strategic foresight
Explanation
Pradeep outlines three lenses for AI risk—compliance, operational, and strategic—and stresses that boards must ask trust‑related questions to quantify risk.
Evidence
“So as I would say working with leaders and board members, we are looking at how to look at these risks and how to frame these risks, and here usually we see three lenses” [74]. “So one is the compliance risk, which is am I complying with the EU AI Act, right?” [78]. “The second angle which some companies have started to move towards is the operational risk, right?” [81]. “So that’s the operational risk angle that we’re seeing more of” [79]. “Now these are questions that boards would start to need to ask because we are starting to ask those questions and get those questions from leaders…” [71].
Major discussion point
Risk management frameworks, regulatory approaches, and strategic foresight
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Governance, trust, and the need for an AI operating system
Explanation
He argues that trust can be quantified through provenance, authenticity and verification, and that cybersecurity must evolve to protect decision‑making and trust, not just data.
Evidence
“Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verification” [67]. “that we are able to identify, measure, rate, risk, rank, and call out whether this particular transaction … is trustworthy or not” [68]. “It’s going to be about trust” [69]. “So going forward, can cybersecurity, how can it evolve to start protecting decision‑making and trust?” [59]. “Now, cybersecurity, right, so can no longer be just about protecting systems and the data” [58].
Major discussion point
Governance, trust, and the need for an AI operating system
Topics
Artificial intelligence | Human rights and the ethical dimensions | Building confidence and security in the use of ICTs
Samrat Kishor
Speech speed
175 words per minute
Speech length
1101 words
Speech time
375 seconds
Deep embedding of AI into digital infrastructure and its operational impact
Explanation
Samrat notes that AI has moved from the application layer to become a foundational part of infrastructure, influencing system design and deployment.
Evidence
“So earlier I think the perspective that changed was that we were looking at AI just at the application layer, but it’s gone much below in the infrastructure” [41]. “So it’s becoming a fundamental part of the infrastructure that is being then used to build applications” [42]. “It’s got embedded into the kind of systems which are now getting created and deployed” [44].
Major discussion point
Deep embedding of AI into digital infrastructure and its operational impact
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Governance, trust, and the need for an AI operating system (corporate AI responsibility)
Explanation
He proposes evolving from corporate social responsibility to corporate AI responsibility, where organizations own and control the actions of their deployed AI systems.
Evidence
“In fact, I was having a conversation a few days back, and I was saying that from the time of corporate social responsibility, it’s time to evolve to corporate AI responsibility, where corporates start talking about how they’re controlling and owning the actions of the AI that they’re building and deploying” [66].
Major discussion point
Governance, trust, and the need for an AI operating system
Topics
Artificial intelligence | Human rights and the ethical dimensions
Risk management frameworks for AI deployments
Explanation
Samrat asks how to ensure AI deployments are secure and that risks highlighted by panelists are addressed.
Evidence
“So how do we make sure that the AI deployments that we’re doing and what capabilities do we have to make sure that these deployments are secure and they’re taking care of the risks that the fellow panelists highlighted?” [23].
Major discussion point
Risk management frameworks, regulatory approaches, and strategic foresight
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Agreements
Agreement points
Infrastructure fragility and inadequate preparation for AI implementation
Speakers
– Daisy Chittilapilly
– A. S. Lakshminarayanan
– G. Narendra Nath
Arguments
There’s an ambition versus reality gap – 90% of enterprises want to deploy AI agents, but only a fraction have adequate data strategy, compute capacity, or AI threat understanding
Digital infrastructure in enterprises is already fragile, and adding AI will multiply this fragility by 100 times due to increased network traffic and API calls
There’s a cybersecurity divide across sectors, with varying levels of maturity in adopting AI while maintaining security posture
Summary
All three speakers agree that current digital infrastructure is inadequately prepared for AI implementation, with significant gaps between AI adoption ambitions and actual readiness across enterprises and sectors
Topics
Building confidence and security in the use of ICTs | Artificial intelligence | Information and communication technologies for development
Need for comprehensive AI governance and operating systems
Speakers
– A. S. Lakshminarayanan
– G. Narendra Nath
– Pradeep Sekar
Arguments
Enterprises require an AI operating system with context, agentic, and trust/governance layers to control what agents can and cannot do
Assessment frameworks for AI systems are needed to evaluate both security and functional aspects before deployment
Cybersecurity must evolve from protecting systems and data to protecting decision-making and trust through measurable mechanisms
Summary
These speakers converge on the need for comprehensive frameworks and operating systems that go beyond traditional approaches to govern AI implementation with proper trust, assessment, and control mechanisms
Topics
Artificial intelligence | The enabling environment for digital development | Data governance
AI as both opportunity and threat in cybersecurity
Speakers
– Daisy Chittilapilly
– Dharshan Shanthamurthy
– Pradeep Sekar
Arguments
AI presents both opportunities and challenges in cybersecurity, similar to previous technologies but at unprecedented scale and speed
AI creates a level playing field between defenders and attackers, offering hope for better threat detection and automated security operations
AI industrializes disruption at scale, particularly in phishing and social engineering attacks
Summary
All speakers acknowledge AI’s dual nature in cybersecurity – providing powerful defensive capabilities while simultaneously enabling more sophisticated attacks
Topics
Building confidence and security in the use of ICTs | Artificial intelligence
Rapid pace of AI adoption creates unique challenges
Speakers
– G. Narendra Nath
– A. S. Lakshminarayanan
– Richard Marko
Arguments
AI adoption is happening at breakneck speed without adequate time to assess and mitigate adversarial effects, unlike previous technological revolutions
AI represents a paradigm shift from scaling transactions to scaling decisions, requiring different cultural and talent approaches
Resilience requires attention to details and understanding of background processes, sometimes necessitating slowing down implementation
Summary
These speakers agree that AI’s unprecedented speed of adoption creates unique challenges requiring careful consideration and sometimes deliberate slowing of implementation to ensure proper security measures
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Capacity development
Similar viewpoints
Both speakers from major technology companies (Cisco and Tata) emphasize the fundamental inadequacy of current infrastructure approaches and the need for integrated, distributed security solutions rather than traditional bolt-on security measures
Speakers
– Daisy Chittilapilly
– A. S. Lakshminarayanan
Arguments
Networks must integrate security as a distributed mesh rather than separate appliances, with secure networking becoming the primary focus
Digital infrastructure in enterprises is already fragile, and adding AI will multiply this fragility by 100 times due to increased network traffic and API calls
Topics
Building confidence and security in the use of ICTs | Information and communication technologies for development
Both speakers emphasize the strategic imperative of AI adoption while stressing the need for comprehensive risk management that goes beyond basic compliance to address operational and strategic concerns
Speakers
– G. Narendra Nath
– Pradeep Sekar
Arguments
AI adoption is a competitive advantage at national level, but countries must also protect against adverse effects and identify dependencies
Organizations need to move beyond compliance-focused approaches to address operational and strategic AI risks
Topics
Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs
Both speakers focus on the human vulnerability aspect of AI-enhanced attacks, emphasizing how AI amplifies existing human weaknesses in cybersecurity through more sophisticated social engineering and unsupervised agent actions
Speakers
– Richard Marko
– Pradeep Sekar
Arguments
People remain the weakest link in cybersecurity, and AI agents performing tasks on behalf of users create new risks through unsupervised actions
AI industrializes disruption at scale, particularly in phishing and social engineering attacks
Topics
Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society | Artificial intelligence
Unexpected consensus
Need to slow down AI implementation despite competitive pressures
Speakers
– Richard Marko
– A. S. Lakshminarayanan
– G. Narendra Nath
Arguments
Resilience requires attention to details and understanding of background processes, sometimes necessitating slowing down implementation
The next five years will determine organizational health for the next 50 years, with AI-native companies potentially disrupting existing business models
AI adoption is happening at breakneck speed without adequate time to assess and mitigate adversarial effects, unlike previous technological revolutions
Explanation
Despite the competitive pressures and transformative potential of AI, multiple speakers from different backgrounds (cybersecurity expert, telecom executive, and government official) converge on the counterintuitive need to deliberately slow down implementation to ensure proper foundations and security measures
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | The enabling environment for digital development
Corporate responsibility evolution for AI era
Speakers
– Samrat Kishor
– A. S. Lakshminarayanan
Arguments
Corporate AI responsibility should evolve from corporate social responsibility, with companies taking ownership of their AI actions
Enterprises require an AI operating system with context, agentic, and trust/governance layers to control what agents can and cannot do
Explanation
The moderator’s suggestion for corporate AI responsibility finds unexpected alignment with the industry executive’s technical proposal for AI governance systems, suggesting convergence between ethical frameworks and practical implementation needs
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Overall assessment
Summary
The speakers demonstrate remarkable consensus on key challenges: infrastructure inadequacy, the dual nature of AI in cybersecurity, need for comprehensive governance frameworks, and the unprecedented speed of AI adoption requiring careful management. Despite coming from different sectors (government, private industry, cybersecurity), they align on both the transformative potential and significant risks of AI implementation.
Consensus level
High level of consensus with strong implications for coordinated action. The agreement across diverse stakeholders suggests these challenges are universally recognized and require collaborative solutions spanning public-private partnerships, new regulatory frameworks, and fundamental rethinking of digital infrastructure approaches. The consensus particularly around slowing implementation despite competitive pressures indicates mature understanding of the risks involved.
Differences
Different viewpoints
Pace of AI implementation versus security preparedness
Speakers
– G. Narendra Nath
– A. S. Lakshminarayanan
– Richard Marko
– Dharshan Shanthamurthy
Arguments
AI adoption is happening at breakneck speed without adequate time to assess and mitigate adversarial effects, unlike previous technological revolutions
Digital infrastructure in enterprises is already fragile, and adding AI will multiply this fragility by 100 times due to increased network traffic and API calls
Resilience requires attention to details and understanding of background processes, sometimes necessitating slowing down implementation
AI creates a level playing field between defenders and attackers, offering hope for better threat detection and automated security operations
Summary
While Narendra Nath, Lakshminarayanan, and Marko emphasize the need to slow down AI implementation due to infrastructure fragility and security risks, Dharshan presents a more optimistic view focusing on AI’s opportunities for cybersecurity improvement
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | The enabling environment for digital development
Focus on fear versus hope in AI cybersecurity discourse
Speakers
– Dharshan Shanthamurthy
– G. Narendra Nath
– A. S. Lakshminarayanan
Arguments
AI creates a level playing field between defenders and attackers, offering hope for better threat detection and automated security operations
AI adoption is happening at breakneck speed without adequate time to assess and mitigate adversarial effects, unlike previous technological revolutions
Digital infrastructure in enterprises is already fragile, and adding AI will multiply this fragility by 100 times due to increased network traffic and API calls
Summary
Dharshan emphasizes the hopeful aspects and opportunities AI brings to cybersecurity, while Narendra Nath and Lakshminarayanan focus more on the risks and challenges that need immediate attention
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Capacity development
Unexpected differences
Role of human factors in AI security
Speakers
– Richard Marko
– Dharshan Shanthamurthy
Arguments
People remain the weakest link in cybersecurity, and AI agents performing tasks on behalf of users create new risks through unsupervised actions
AI creates a level playing field between defenders and attackers, offering hope for better threat detection and automated security operations
Explanation
While both speakers acknowledge AI’s impact on cybersecurity, Marko emphasizes increased human vulnerability and risks from AI agents acting without supervision, whereas Dharshan focuses on AI’s potential to automate and improve security operations, representing fundamentally different views on human-AI interaction in security contexts
Topics
Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society | Artificial intelligence
Overall assessment
Summary
The main areas of disagreement center around the appropriate pace of AI implementation, the balance between optimism and caution in AI adoption, and the role of human factors in AI security. While all speakers acknowledge both opportunities and risks in AI cybersecurity, they differ significantly in their emphasis and proposed approaches.
Disagreement level
Moderate disagreement level with significant implications for AI policy and implementation strategies. The disagreements reflect different risk tolerances and implementation philosophies that could lead to substantially different approaches to AI governance and cybersecurity frameworks at organizational and national levels.
Partial agreements
Partial agreements
All speakers agree that current enterprise infrastructure and frameworks are inadequate for AI deployment, but they propose different solutions – Daisy focuses on infrastructure readiness gaps, Lakshminarayanan proposes AI operating systems, and Narendra Nath emphasizes assessment frameworks
Speakers
– Daisy Chittilapilly
– A. S. Lakshminarayanan
– G. Narendra Nath
Arguments
There’s an ambition versus reality gap – 90% of enterprises want to deploy AI agents, but only a fraction have adequate data strategy, compute capacity, or AI threat understanding
Enterprises require an AI operating system with context, agentic, and trust/governance layers to control what agents can and cannot do
Assessment frameworks for AI systems are needed to evaluate both security and functional aspects before deployment
Topics
Artificial intelligence | The enabling environment for digital development | Building confidence and security in the use of ICTs
All speakers agree that AI requires fundamental changes in how organizations approach technology and security, but they focus on different aspects – Daisy on network architecture, Pradeep on trust mechanisms, and Lakshminarayanan on organizational paradigms
Speakers
– Daisy Chittilapilly
– Pradeep Sekar
– A. S. Lakshminarayanan
Arguments
Networks must integrate security as a distributed mesh rather than separate appliances, with secure networking becoming the primary focus
Cybersecurity must evolve from protecting systems and data to protecting decision-making and trust through measurable mechanisms
AI represents a paradigm shift from scaling transactions to scaling decisions, requiring different cultural and talent approaches
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Organizational Strategy and Risk Management
Takeaways
Key takeaways
AI in cybersecurity presents a dual nature – offering opportunities for better threat detection and automated security operations while simultaneously creating new attack vectors and risks at unprecedented scale and speed
Current digital infrastructure is fragile and unprepared for AI implementation, with most enterprises lacking adequate data strategy, compute capacity, and AI threat understanding despite ambitious deployment plans
AI fundamentally changes cybersecurity from protecting systems and data to protecting decision-making and trust, requiring new frameworks that can measure and verify authenticity and provenance
Organizations need AI operating systems with integrated context, agentic, and governance layers rather than implementing isolated AI use cases
The paradigm shift involves scaling decisions rather than just transactions, requiring new cultural approaches, talent development, and risk management strategies
India is positioned advantageously at the intersection of AI and cybersecurity to develop world-class talent and leverage emerging opportunities
Assessment frameworks for AI systems are critically needed to evaluate both security and functional aspects before production deployment
Corporate AI responsibility should emerge as a new standard, with organizations taking ownership of their AI systems’ actions and impacts
Resolutions and action items
Development of assessment frameworks for AI systems by government agencies (an ongoing project funded by the National Cyber Security Coordinator’s office, underway since November 2024)
Implementation of AI operating systems with trust and governance layers by enterprises like Tata Communications
Utilization of regulatory sandboxes (RBI, telecom sector) for testing AI technologies before production deployment
Building secure networking infrastructure with distributed security mesh rather than separate appliances
Capacity building initiatives across sectors, particularly in less mature sectors like healthcare
Development of institutional frameworks leveraging existing cybersecurity infrastructure (CERT India, CIPC)
Unresolved issues
How to bridge the cybersecurity maturity divide across different sectors while maintaining uniform AI adoption enthusiasm
Specific mechanisms for measuring and quantifying AI-related strategic risks in financial terms for board-level decision making
How to balance the speed of AI adoption with necessary security precautions without losing competitive advantage
Standardization of AI assessment frameworks across industries and regions
Managing the talent gap between younger AI-native workers and experienced professionals in traditional paradigms
Identifying and preparing for potential business model disruptions from AI-native companies
Addressing the fundamental challenge of applying deterministic security requirements to probabilistic AI technologies
Developing effective governance mechanisms for AI agents performing unsupervised tasks on behalf of users
Suggested compromises
Slowing down AI implementation when necessary to ensure proper security foundations are in place, despite competitive pressures
Using regulatory sandboxes as a middle ground to test AI technologies safely before full production deployment
Implementing AI operating systems that provide governance controls while still enabling AI innovation and productivity gains
Building virtual distributed security mesh that balances the need for pervasive security with infrastructure flexibility
Adopting a platform approach for AI implementation rather than individual use cases to balance efficiency with governance
Focusing on capability building and outcome definition simultaneously rather than pursuing either in isolation
Thought provoking comments
I don’t think people have woken up to the fact that they are fast running towards the cliff. Because I genuinely think that the digital infrastructure in enterprises today are already fragile… So that’s why I’m saying that in all our excitement of AI, I’m very passionate and excited about AI, but I genuinely feel that people are not looking at the foundations… So you can’t build a skyscraper with a foundation of a bungalow, which is what they’re trying to do.
Speaker
A. S. Lakshminarayanan
Reason
This metaphor powerfully reframes the entire AI adoption conversation from one of opportunity to one of foundational risk. It challenges the prevailing excitement about AI by highlighting that enterprises are building advanced AI capabilities on inherently fragile digital infrastructure.
Impact
This comment fundamentally shifted the discussion’s tone and became a recurring theme. Multiple subsequent speakers referenced this ‘fragility’ concept, with Daisy citing their AI readiness index showing the ‘ambition versus reality gap’ and other panelists building on the infrastructure concerns. It moved the conversation from theoretical AI benefits to practical implementation challenges.
Case of AI is that it’s really happening at a breakneck speed… the issue is that there are nation states or big enterprises which are adversarial enterprises which would be using AI as a tool for doing it and they have got a lot of motivation to put in effort and thought process into how do I use it more effectively. Then the persons who are actually using AI for their own benefit… So this is where there is a disconnect and this has to be really bridged.
Speaker
G. Narendra Nath
Reason
This insight reveals a critical asymmetry in AI adoption – that malicious actors are more motivated and focused in their AI implementation than defensive users, creating a dangerous imbalance. It’s a sophisticated analysis of the strategic dynamics at play.
Impact
This comment introduced the concept of motivational asymmetry that influenced later discussions. Dharshan later built on this by discussing how AI could level the playing field for defenders, directly addressing this imbalance. It shifted the conversation from technical challenges to strategic and motivational ones.
I think rather than focusing on whether this LLM is good or that LLM is good and so on, this AI operating system is what is required for people to build an application which will ensure that all of these are governed properly… enterprises require an AI operating system… you need the context layer, you need the agentic layer, and more importantly, you need to have a trust and governance layer.
Speaker
A. S. Lakshminarayanan
Reason
This concept of an ‘AI operating system’ with distinct layers (context, agentic, trust/governance) provides a concrete architectural framework for managing AI complexity. It moves beyond abstract concerns to propose a systematic solution.
Impact
This framework became a reference point for other speakers. Dharshan later mentioned ‘AI security operating system’ as a parallel concept, and the layered approach influenced discussions about governance and control. It elevated the conversation from problem identification to solution architecture.
Cybersecurity has so far been a very asymmetric equation. The intruders have always had an advantage over the defenders… But with AI, now all of a sudden we are at the level playing field from a technology standpoint to identify a needle in the haystack.
Speaker
Dharshan Shanthamurthy
Reason
This reframes AI as potentially solving one of cybersecurity’s fundamental problems – the asymmetric advantage of attackers. It provides a hopeful counterpoint to the fear-based discussions while acknowledging the historical challenge.
Impact
This comment provided crucial balance to the discussion, shifting from predominantly risk-focused to opportunity-focused. It directly responded to earlier concerns about adversarial AI use and influenced Pradeep’s later discussion about AI as a ‘force multiplier’ for both sides.
AI is quietly reshaping the risk equation within the enterprise… cybersecurity can no longer be just about protecting systems and the data… it needs to evolve into something more… how can it evolve to start protecting decision-making and trust? Because trust is starting to become measurable… through provenance, through authenticity, as well as verification.
Speaker
Pradeep Sekar
Reason
This insight fundamentally redefines cybersecurity’s scope from protecting static assets to protecting dynamic processes like decision-making. The concept of ‘measurable trust’ introduces a new paradigm for security thinking.
Impact
This expanded definition of cybersecurity influenced the discussion’s scope, connecting to earlier points about trust and governance. It provided a bridge between technical security measures and business decision-making, elevating the strategic importance of the conversation.
AI is going to scale decisions and when you’re scaling decisions you need to think of a different paradigm all together, and we are still talking in the old paradigm of what tasks can be automated… this is a new paradigm.
Speaker
A. S. Lakshminarayanan
Reason
This distinction between ‘scaling transactions’ (previous technologies) versus ‘scaling decisions’ (AI) provides a fundamental conceptual framework for understanding AI’s unique impact. It challenges conventional thinking about AI as merely an automation tool.
Impact
This paradigm shift concept influenced the final portions of the discussion about future disruptions and strategic thinking. It provided a lens for understanding why traditional approaches to technology adoption may be insufficient for AI.
Overall assessment
These key comments transformed the discussion from a surface-level exploration of AI and cybersecurity to a deep, multi-layered analysis of systemic challenges and paradigm shifts. Lakshminarayanan’s infrastructure fragility metaphor set a sobering tone that grounded the entire conversation in practical reality, while his later insights about AI operating systems and decision-scaling provided constructive frameworks for moving forward. The interplay between pessimistic realism (infrastructure fragility, adversarial asymmetry) and optimistic pragmatism (leveling the playing field, measurable trust) created a balanced, nuanced discussion. The comments built upon each other, with speakers referencing and expanding on previous insights, creating a cohesive narrative that evolved from problem identification to solution frameworks to paradigm recognition. The discussion successfully bridged technical, strategic, and policy perspectives, largely due to these pivotal insights that elevated the conversation beyond typical AI hype or fear-mongering.
Follow-up questions
How do we develop effective assessment frameworks for AI systems that cover both security and functional aspects?
Speaker
G. Narendra Nath
Explanation
There’s a critical need for standardized frameworks to test and assess AI systems before deployment, as current assessment infrastructure is lacking
How can we bridge the cybersecurity maturity divide across different sectors adopting AI at similar rates?
Speaker
G. Narendra Nath
Explanation
Different sectors have varying levels of cybersecurity maturity, yet similar enthusiasm for AI adoption, creating uneven risk exposure
What new business models and disruptions will emerge from AI-native companies in the next five years?
Speaker
A. S. Lakshminarayanan
Explanation
Understanding potential market disruptions is crucial for strategic planning, as AI may create new intermediaries similar to how internet/cloud technologies did
How do we distinguish between cybersecurity issues and AI system malfunctioning or poor design?
Speaker
G. Narendra Nath
Explanation
The lack of clarity between security breaches and system design flaws creates challenges in proper incident response and remediation
How can enterprises build the necessary infrastructure capacity to handle AI’s exponential demands on networks and systems?
Speaker
Daisy Chittilapilly and A. S. Lakshminarayanan
Explanation
Current digital infrastructure is fragile and may not support the increased network traffic, API calls, and computational demands of AI systems
How do we make AI decision-making deterministic in critical applications while using inherently probabilistic technology?
Speaker
Daisy Chittilapilly
Explanation
Financial and citizen service applications require predictable outputs, but AI models are probabilistic by nature, creating a fundamental challenge
What mechanisms are needed to measure and quantify trust in AI systems for enterprise decision-making?
Speaker
Pradeep Sekar
Explanation
Trust is becoming measurable through provenance and verification, but enterprises need concrete methods to assess and rate the trustworthiness of AI-driven decisions
How can we develop comprehensive capacity building programs to address the AI divide across different sectors?
Speaker
G. Narendra Nath
Explanation
Different sectors have varying levels of readiness for AI adoption, requiring targeted education and framework development
What are the specific dependencies created by AI adoption and how can we mitigate the risks of those dependencies?
Speaker
G. Narendra Nath
Explanation
Understanding and managing dependencies is crucial for national security and business continuity as AI becomes more integrated into critical systems
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.