Scaling AI for Billions_ Building Digital Public Infrastructure

20 Feb 2026 18:00h - 19:00h

Scaling AI for Billions_ Building Digital Public Infrastructure

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel opened by framing AI and cybersecurity as a two-way relationship, with AI being used to protect systems while security is needed to safeguard AI models themselves [1-4]. Daisy highlighted that AI brings both an opportunity to manage increasingly complex threats at machine scale and a set of risks such as model jail-breaking, data leakage and vulnerabilities in open-source models [12-23]. Samrat noted that AI has moved from the application layer into core infrastructure, making it a fundamental component of system design [25-28]. Narendra warned that the rapid, “breakneck” adoption of AI gives adversarial nation-states and enterprises powerful new tools, while the lack of a separate control plane makes models vulnerable to drift and poisoning, creating national-scale security challenges across sectors [40-44][48-52].


Lakshmi argued that today’s digital infrastructure is already fragile and that AI will amplify this fragility by vastly increasing east-west traffic and long-lived API calls at the edge, stressing networks and platforms [61-70][71-76]. She proposed an “AI operating system” that layers context, agentic control and governance to ensure trust and prevent model misuse [87-90]. Richard added that human users remain the weakest link, especially as deep-fakes blur the line between legitimate and malicious communications, and that resilience now requires detailed visibility into AI-driven actions and careful pacing of deployments [99-110]. Daisy reinforced the gap between enterprise ambition and readiness, noting that most large firms lack data strategy, compute capacity, and the ability to understand AI-related threats, and she called for a shift from hardware-centric security appliances to a virtual, distributed security mesh that accommodates AI’s probabilistic nature [119-151].


Dharshan emphasized the dual emotions of hope and fear, pointing out that AI levels the playing field for defenders through SOC agents and can also create new talent pipelines, while CXOs must balance regulatory compliance with operational and strategic AI risks [158-176][184-191]. Pradeep expanded on this by describing three risk lenses-compliance, operational (model reliability and trust), and strategic (reputation and financial impact)-and stressed that AI acts as a force multiplier for both attackers and defenders [223-235]. Narendra highlighted the need for capacity building, assessment frameworks and sandbox regulations to evaluate AI security before production deployment, leveraging existing institutional structures such as CERT-India and sectoral sandboxes [241-270]. Looking ahead, Lakshmi outlined a self-developed assessment framework that plots capability (talent, culture, platform) against desired outcomes (efficiency, revenue, trust), and warned that AI-native companies will likely disrupt existing business models within five years [282-315]. The discussion concluded that while AI promises transformative benefits for cybersecurity, realizing them requires coordinated governance, trust mechanisms, infrastructure redesign, and strategic foresight to mitigate emerging risks [91-93][318-322].


Keypoints

Major discussion points


AI is both a security tool and a new attack surface.


The panel opened by distinguishing “AI for cybersecurity” and “cybersecurity for AI” and highlighted that AI brings opportunity (e.g., scaling security operations) and challenge (e.g., model jail-breaks, data leakage, poisoning) [3][8-13][21-24].


Speed of adoption outpaces risk mitigation, creating a geopolitical “arms race.”


Narendra emphasized that AI is being adopted at a “breakneck speed” and that nation-states and adversarial enterprises are already weaponising it, widening the gap between defenders’ productivity gains and attackers’ capabilities [40-45][46-49].


Current digital infrastructure is fragile, and AI amplifies that fragility.


Lakshmi warned that enterprises are “running towards the cliff” because existing IT/OT systems are already weak; AI multiplies the strain (e.g., massive east-west traffic, long-lived API sessions at the edge) and demands a fundamentally new “AI operating system” with trust and governance layers [61-70][71-76][84-90].


Governance, trust, and risk-assessment frameworks are essential for responsible AI deployment.


The need for an AI operating system that embeds context, agents, and a trust/governance layer was reiterated, and Pradeep outlined three risk lenses-compliance, operational, and strategic-that boards must adopt to evaluate AI-driven decisions and trustworthiness [84-90][206-214][222-232].


Future outlook: AI will reshape talent, business models, and national strategy, but only if organizations act now.


Dharshan highlighted the “hope” side-AI leveling the defender-attacker playing field and creating new talent pipelines-while Lakshmi and others warned that without a clear assessment framework and strategic foresight, AI-native disruptors will overtake incumbents within five years [158-170][184-190][278-315].


Overall purpose / goal of the discussion


The panel was convened to examine the dual impact of artificial intelligence on cybersecurity-both as an enabler for defending systems and as a new vulnerability vector-and to surface practical, strategic, and policy-level actions (governance models, risk frameworks, infrastructure redesign, talent development) that governments, enterprises, and regulators should adopt to harness AI responsibly while mitigating its emerging threats.


Overall tone and its evolution


Opening (0:00-2:00): Curious and exploratory, with participants framing AI as a transformative opportunity.


Mid-section (2:00-10:00): The tone shifts to cautionary urgency, emphasizing rapid adoption, adversarial use, and the fragility of existing infrastructure.


Later segment (10:00-20:00): Becomes constructive and solution-focused, introducing concepts such as AI operating systems, trust layers, and new governance models.


Closing (20:00-38:00): Moves toward a forward-looking, balanced optimism-recognising risks but also highlighting strategic opportunities, talent development, and the need for proactive planning over the next five years.


Overall, the conversation progresses from inquisitive optimism to measured concern, then to pragmatic recommendations, ending on a cautiously hopeful note about shaping AI-driven cybersecurity futures.


Speakers

Samrat Kishor


Expertise: AI, cybersecurity, digital infrastructure (moderator)


Role/Title: Moderator/Host of the panel discussion[S7]


Daisy Chittilapilly


Expertise: AI, cybersecurity, networking, digital transformation


Role/Title: Cisco representative (speaker on AI and resilience)


G. Narendra Nath


Expertise: National security, cybersecurity policy, AI governance


Role/Title: Government official involved in national security and cybersecurity frameworks (CERT India, DRD)[S2]


Dharshan Shanthamurthy


Expertise: Cybersecurity, AI, deep-tech consulting, thought leadership


Role/Title: Leader at a hardcore deep-tech cybersecurity company, consultant and thought-leader for large enterprises and government officials[S1]


Pradeep Sekar


Expertise: AI risk management, cybersecurity strategy, regulatory compliance


Role/Title: Panelist (cybersecurity professional)


Richard Marko


Expertise: Cybersecurity resilience, AI-enabled threats, human factors in security


Role/Title: Speaker (cybersecurity expert)


A. S. Lakshminarayanan


Expertise: Digital infrastructure, AI operating systems, trust & governance in AI


Role/Title: Executive at Tata Communications (referred to as “Lakshmi, sir, from Tata”)[S9]


Additional speakers:


Ms. Zazie


Expertise:


Role/Title:


Full session reportComprehensive analysis and detailed insights

1. Opening & framing – Samrat Kishor opened the panel by framing artificial intelligence (AI) and cybersecurity as a two-way relationship: AI can be deployed for cybersecurity, while cybersecurity is required for AI itself. He asked Daisy Chittilapilly about the big-picture changes that AI is bringing to security [1-4][6-7].


2. Opportunity & challenge – Daisy Chittilapilly (Cisco) explained that, as with any new technology, AI is simultaneously an opportunity and a challenge. The expanding cyber-threat landscape, driven by ever-greater connectivity and “fidgetal” lives, has out-grown human-scale defence, prompting a shift to machine-scale tools [12-14]. AI promises to improve security management at that scale [15-16], yet it also introduces novel risks: models can be jail-broken, confidential data may leak, and open-source models carry inherent vulnerabilities that must be detected and mitigated [21-24].


3. AI as infrastructure – Samrat noted that AI has moved from being a mere application-layer add-on to becoming a fundamental component of the technology stack, embedded in the systems that organisations design, build and operate [25-30][31-33]. He then turned to G. Narendra Nath for a national-security perspective.


4. National-security perspective (Narendra)


– AI adoption is occurring at “breakneck speed”, outpacing the development of safeguards, and nation-states as well as large adversarial enterprises are already weaponising AI [40-45].


– Unlike traditional systems that separate control- and data-planes, AI models use the data itself as the control plane, making them vulnerable to poisoning, drift and non-deterministic behaviour; over time a model can “drift” and stop behaving as expected, blurring the line between a cyber-security incident and a poor AI design [48-52].


– The rapid spread across finance, telecom, power and other critical sectors raises systemic risk [55-60].


5. Critical-infrastructure view (Lakshmi) – Samrat asked A. S. Lakshminarayanan (Lakshmi) about the state of existing digital infrastructure. She argued that enterprises are “running towards the cliff” because current IT/OT systems are already weak, and AI will multiply that fragility roughly a hundred-fold by dramatically increasing east-west traffic and long-lived API sessions at the edge [61-70][71-76].


– To address this, she proposed an AI operating system composed of a context layer, an agentic layer and a trust/governance layer, enabling organisations to turn LLM-derived knowledge into governed, actionable intelligence [84-90].


– Lakshmi warned that AI will scale decisions, not just transactions, and that, just as booking.com and fintechs disrupted incumbents after the internet wave, a new class of AI-native companies will likely upend existing business models[308-315].


6. Corporate-AI-responsibility – Samrat linked the discussion to the need for “corporate AI responsibility”, likening it to corporate social responsibility (CSR) as a governance imperative.


7. Resilience & human factor (Richard) – Richard Marko highlighted the human factor as the weakest link. Deep-fakes and AI-generated phishing make it harder to distinguish legitimate from malicious communications [98-100], and true resilience now requires granular visibility into what AI agents are doing in the background, how commands are transferred, and whether they can be intercepted or altered [101-105][106-110].


8. Digital-infrastructure readiness (Daisy)


– Daisy presented Cisco’s AI readiness index, revealing a stark ambition-versus-reality gap: about 90 % of just under 1 000 large Indian enterprises plan to deploy AI agents this year, yet only two-thirds have a data strategy, one-fourth possess sufficient compute capacity, and less than one-fifth understand AI-related threats [118-123].


– She argued that traditional, hardware-centric security appliances are becoming obsolete; security must become virtual, distributed and embedded in the network fabric, rewiring the entire stack (silicon, compute, networking) to accommodate AI’s probabilistic nature, which demands new rules for applications that can no longer guarantee deterministic outputs [124-132][136-151].


9. Hope vs. fear (Dharshan) – Dharshan Shanthamurthy described the emotional duality surrounding AI. He noted that AI levels the playing field for defenders-e.g., SOC agents can automate shift handovers-and creates new talent pipelines for a deep-tech cybersecurity workforce [158-176]. He called for an AI security operating system or playbook that enables organisations to proactively leverage AI rather than merely react to threats [184-191].


10. Board-level risk lenses (Pradeep) – Pradeep Sekar outlined three risk lenses for board-level oversight: compliance risk (e.g., EU AI Act, sectoral regulations), operational risk (model reliability, availability, trust) and strategic risk (reputation and financial impact of AI-driven attacks) [222-236]. He cited Microsoft’s Security Co-pilot as a concrete example of AI automating SOC tasks [212-215], and warned that attackers can industrialise phishing and social-engineering at unprecedented scale [216-219].


11. Government capacity-building (Narendra)


– Narendra highlighted a “digital-AI divide” across sectors and stressed the need for capacity-building and assessment frameworks.


– Existing institutional mechanisms such as CERT-India, CIPC, and sector-specific sandboxes-RBI’s sandbox for finance and the telecom regulator’s sandbox-can be leveraged to test AI systems before production [241-247].


– He announced a government-funded project (started November 2024) to develop AI-security assessment standards, alongside the ETI framework, to provide systematic evaluation of AI deployments [260-267][268-270].


12. Five-year outlook (Lakshmi) – Lakshmi described Tata Communications’ internal assessment framework, which plots capability (talent, culture, platform) against outcomes (efficiency, revenue, trust) on a two-axis matrix [282-295][300-304]. She warned that the next five years will determine the long-term health of companies, as AI-native disruptors reshape markets [308-315].


13. Nation-state perspective (Narendra – final) – In response to Samrat’s final question, Narendra asserted that AI will be a competitive advantage for nations that adopt it responsibly. He emphasized the urgency of mitigating adverse effects through capacity building, clear frameworks and a five-year roadmap[318-322].


14. Consensus & disagreements – Across the discussion, the panel reached strong consensus on three core themes: (1) AI’s dual nature as both a security enabler and a new attack surface; (2) the necessity of a layered AI governance model-often termed an AI operating system or AI security operating system/playbook; and (3) the urgency of developing assessment frameworks, sandboxes and capacity-building programmes [84-90][190-192][206-210][91-93].


However, disagreements emerged regarding the primary mitigation route: Daisy advocated for a virtual, distributed security mesh embedded in the network fabric [124-132], whereas Narendra emphasised procedural safeguards such as assessment frameworks and regulatory sandboxes [241-247][260-267][268-270]. A second divergence concerned capacity-building focus: Daisy highlighted a universal enterprise-wide AI readiness gap [118-123], while Narendra called for sector-specific initiatives [242-252][254-259]. Finally, Richard’s human-centric view of resilience contrasted with Lakshmi’s infrastructure-centric emphasis [98-100][68-76].


15. Closing – Samrat thanked the panel and the audience, underscoring the need for coordinated governance that blends corporate AI responsibility with board-level risk lenses, while simultaneously investing in talent, infrastructure, and robust assessment mechanisms [91-93][318-322]. This balanced outlook reflects cautious optimism: if acted upon, AI’s transformative potential can be harnessed without compromising cybersecurity resilience.


Session transcriptComplete transcript of the session
Samrat Kishor

The context is, have you overdone it? Right? When we talk about AI and cybersecurity, these two areas, how do they come together? There’s AI for cybersecurity, and there is cybersecurity for AI. Right? So what we’re going to do is, we’re going to discuss both aspects. We’re going to at least try. So, you know, the first question, and I’d like to actually point it to Ms. Zazie, you know, what has changed, you know, if you were to look at the larger picture, the big picture, you know, in terms of AI coming into cybersecurity? What has changed?

Daisy Chittilapilly

I think as what happens with all technologies, and AI is no different in that sense. It is, of course, as we’ve been hearing over the last few days, a technology that will redefine humanity and how we live, work, play, all of that. But one thing that it hasn’t come, and with all of the other technologies, that have come before it is that it’s both an opportunity and a challenge. And it’s particularly true when it comes to the security space. So on one side, there is the promise that, you know, for some time now, with the advent of technologies, number of things getting connected, all of our lives going fidgetal, cyber threats have become, the landscape has, of course, expanded, and threats have become more and more complex and complicated.

And for some time now, we’ve not been able to manage cybersecurity at human scale. So machine scale was, you know, a lot of tooling was already in that space. So there is the promise with AI that you can manage security better. So there is definitely that opportunity. But at the same time, there is the recognition, like Dario Amadai said on the main stage yesterday, that his biggest concern and all of our concerns is that AI brings a set of risks, which not all of us have. And there are a lot of them that we know of at this point in time today. So both of these, so it’s also, like I said, that commonality is there with all technologies that came before it.

It is both a challenge and an opportunity and a challenge. Because we’ve got to protect models from being jailbroke. We’ve got to make sure that the models don’t leak our confidential information or poison our data. We’ve got to make sure that most of these are open source models, that they come with inherent vulnerabilities, so how do we detect them? So we’ve got to think about securing AI as well.

Samrat Kishor

Absolutely, and very rightly said. So it’s becoming a fundamental part of the infrastructure that is being then used to build applications. So earlier I think the perspective that changed was that we were looking at AI just at the application layer, but it’s gone much below in the infrastructure. It’s got embedded into the kind of systems which are now getting created and deployed. So we’re looking at AI as a way to make sure that we’re not just building a system that is just going to be running on AI. We’re also looking at AI as a way to make sure that we’re not just building a system that is just going to be running on AI. So we’re looking at AI as a way to make sure that we’re not just building a system that is going to be running on AI.

So we’re looking at AI as a way to make sure that we’re not just building a system that is going to be running on AI. So, and that is where I’d like to bring in, Narendra, you, your perspectives on what are you seeing in terms of national security? You know, is it something which is giving us a spike, a blip, something which you can discuss, disclose here?

G. Narendra Nath

Yeah, I mean, it’s required to be discussed. That’s one thing that’s definite. No, one, you know, I take the points that you’ve said. One thing about all the other technological revolutions, as you said, is that, you know, there was a time frame over which that seeped into the system. Okay, and then we had time to look at how do I use it beneficially and also to look at the adversarial effects of it and how do I mitigate those things. Case of AI is that it’s really happening at a breakneck speed. And there’s also an adoption, a willingness to adopt into enterprises of the different AI tools that are there. So that is where the scary part is there.

And the other is the adversarial part of the AI is that. though you use AI for cyber security but the issue is that there are nation states or big enterprises which are adversarial enterprises which would be using AI as a tool for doing it and they have got a lot of motivation to put in effort and thought process into how do I use it more effectively. Then the persons who are actually using AI for their own benefit, in terms of they are looking at how do I improve my productivity, how do I improve my efficiency, that’s the focus area that they are in. So this is where there is a disconnect and this has to be really bridged and that’s where the problem is.

The summit actually in one way it’s helping people become conscious about some of the measures that have to be taken. That is one part. The other is the difference between other systems and this is a little technical in the sense that in the other systems we have a separate control plane and a separate data plane. There we could actually control they provide access limits to the control plane. but here the data itself is the control so you have that poisoning of models happening through the inputs that are there so you could have a drift and over a period of time you will find that the model will not be behaving as you would expect it to behave and it’s not also very deterministic so there are challenges in how do I protect it now there’s AI systems to see that it gives me the consistent results after a period of time then there is also lack of clarity about what is the cyber security issue there and what is the issue of malfunctioning or a poor design of an AI system that lack of clarity also results in the challenges that are there.

I think these are the preliminary thoughts that I have. So at the national scale the issue is that when you have multiple entities at the enterprise scale and financial sector, the telecom sector and all of them and the power sector adopting AI the effect on compromises and the critical information machine infrastructure is something that would actually make us wake up and then see that what could be done. Those are issues that are there.

Samrat Kishor

Excellent pointers, sir. Excellent pointers. And I think since you brought in the private sector and the way they’ve evolved and they’re also subjected to these risks which are evolving in nature. I’d like to bring in Lakshmi, sir, from Tata here. So, sir, a lot of infrastructure is being built, connected, communicated using what you’re building for the nation. So, how are you seeing the paradigm shift from let’s say how it used to be before AI was commoditized and everyday technology. It used to be the labs. Now it’s out in everybody’s hands. So what is the change that you are seeing and the impact you’re seeing on critical infrastructure?

A. S. Lakshminarayanan

I don’t think people have woken up to the fact that they are fast running towards the cliff. Because I genuinely think that that the digital infrastructure in enterprises today are already fragile. And we know that from an enterprise security point of view, there are so many attacks that are happening. And we know that there are huge issues when it comes to, for example, now we more talk about IT, OT security, the operational technology in factories were never in the purview of IT security. And there are, you know, security today and digital infrastructure in general is still very fragile. It’s islands of different OEM technologies and many, many things. And, you know, I don’t want to, you know, it is a major issue.

Now, on top of this fragility, you add AI. And this fragility is going to be multiplied 100 times. It comes over, right, on many, many kinds of platforms. because AI is going to increase the network traffic, especially the east -west traffic, by, again, multifold. The number of API calls that somebody… And we all are saying, oh, I’ll embed AI at the edge of the device, and if I have a banking application, I’ll do that, but nobody has thought through. If I put an inference there, or if you put an inferencing at the edge, the number of API calls these have to do is tremendous, and these API calls are long -lived sessions. They’re not traditional API calls.

So the edge infrastructure is going to come under tremendous strain. So that’s why I’m saying that in all our excitement of AI, I’m very passionate and excited about AI, but I genuinely feel that people are not looking at the foundations and… …properly. So that is very fragile, and that is one point I want to make. The second point about this is I would like to expand the scope of this discussion. It’s not about AI and cybersecurity alone. It’s also about a broader trust question. I think we all know, you know, whether fake, the messages, you don’t know. Apply that in the enterprise context. And there was a talk about, you know, model drifts and so on and so forth.

So what we at Tadacom are doing, one is to protect the digital infrastructure through many, many things that we can do. And the unfortunate part is I don’t think enterprises have woken up to the fact that they have to do it. So I tell them that you can’t build a skyscraper with a foundation of a bungalow, which is what they’re trying to do. But when it comes to the drift and the trust part of it, I do believe that enterprises require, require an AI operating system. and what we mean by that AI operating system is something that brings the context together because LLMs will provide the knowledge. To make that knowledge into actionable intelligence, you need the context layer, you need the agentic layer, and more importantly, you need to have a trust and governance layer which will control what an agent will do or will not do.

And if I take that control in my hands, and say that I will configure and ensure this agent will do something or not do something, I can make use of the models underneath a lot more intelligently. So I think rather than focusing on whether this LLM is good or that LLM is good and so on, this AI operating system is what is required for people to build an application which will ensure that all of these are governed properly.

Samrat Kishor

Sir, that’s a great point. In fact, I was having a conversation a few days back, and I was saying that that from the time of corporate social responsibility, it’s time to evolve to corporate AI responsibility, where corporates start talking about how they’re controlling and owning the actions of the AI that they’re building and deploying. Great perspective, sir. Thank you very much. At this point, I’d like to bring in Richard to sort of continue the talk about digital infrastructure and resilience. So how has resilience in your perspective evolved when we talk about AI risks to cybersecurity and vice versa?

Richard Marko

Well, the question of resilience is a complex question. So I will bring a few aspects that I think are very important. So it is well understood that there are a lot of people who are in the industry and that people are typically the weakest link in cybersecurity. the reason is that we as human beings we were not evolved to deal with machines, computers and so on and most of us don’t have really deep technical knowledge about how systems work and so on so we are to a big extent dependent on relatively superficial understanding and so we are more easy to be tricked by different social engineering tricks and so on. Now with AI this is becoming a big issue because how you can distinguish a scam from a real communication when the scam communication looks exactly like the real communication I’m talking about deep fakes and so on so this is one aspect of the risk connected directly with people.

The other aspect is that we want AI to empower people to do things, more things and make them in an easier way so we have those agents and we give them some or we want to give them some commands like do this for me or that for me but we don’t understand all the steps that the agent will take on our behalf when performing those tasks and each of those tasks can be there can be a risk factor involved without us knowing like if you want to perform this action you will need to have those additional tools to achieve that and where you get those additional tools if AI decides on your behalf these are the tools you need software packages, whatever it is and they get to your computer without this being supervised then this is a problem so and we have to be very careful and we have to be very careful where I’m heading, like resilience here is really protecting or paying attention to details.

What is actually happening? What is running in the background? How are your commands transferred to the agents? Is there a possibility for them to be intercepted, to be modified? So it’s even it was difficult and complex even before advent of the new AI agentic approach. Now it’s becoming even more important to really go into all the details and we just heard from Lakshmi that he sees that we are moving towards a cliff. Well, depends on us of course. We want to go fast. We want to employ. We are all excited about AI but we maybe sometimes we need to slow down a little bit and make sure that the pieces are in the place and cyber security is not overlooked.

Samrat Kishor

Excellent, excellent perspectives. And I think an offshoot to that question can be to Ms. Daisy, which is what are you seeing as changing when you’re talking about digital infrastructure and especially the connectivity which it needs because you’re at Cisco, right? And here is something which is connecting a lot of things to a lot of other things. So how are you seeing changes happening, especially when you talk about resilience and what’s going on inside digital infrastructure?

Daisy Chittilapilly

So I think Lakshmi touched on a very important point of the underlying, the fragility of the underlying infrastructure. And that is something that I want to reiterate. For the past few years, we’ve been publishing an AI readiness index. And the good news is that we are as ready as everybody else. The bad news is maybe we’re not as ready as we think we are, which is the point Lakshmi is making, right? 90 % of… just under 1 ,000 large enterprises that we spoke to in India want to deploy agents this year. Forty percent want that agent to work alongside a human being, but only about two -thirds of those enterprises really have a data layer, data strategy, a data platform, and a data governance strategy.

Only about one -fourth have the compute capacity they need. Only about one -third are able to understand AI threats and deal with them. And less than one -fifth have the innovation engine to think about building and scaling and maintaining AI applications and use cases. So clearly there is this ambition versus reality gap which we have to solve for. That’s not a problem as long as we all know that that’s where it is, and they were acutely aware of this issue. Thank you. the other thing is AI is essentially leading to what this means is that we are rewiring and restacking the enterprise it’s not just networks it’s compute it’s silicon I know you know at the national level silicon security is a conversation so all this resiliency which we used to build almost like a bolt on at the top and particularly we used to think of it only as cyber resiliency it’s a system resilience which is built into all layers of the infrastructure stack all layers of the AI stack and that’s why at Cisco since you asked me a network specific question we used to have we used to deal with connectivity largely as connectivity and now we know the persona of that end port that connects to an end device that might be doing an inferencing or it’s in the data center that persona has to be that it will be on one side it will be a switch or a router but on the other side it will also be a security defense point.

So this ability of building special grade of security appliances and putting them in various parts of the network is fast becoming an outdated idea. And the point we’ve got to do is we’ve got to break it into a number of virtual instances that can go wherever you want the security policy to be. So it becomes a very virtual distributed mesh rather than hardware. Yes, there will be hardware. I’m not saying it will go away. But this ability to infuse it into the fabric and networks tend to be the all pervasive fabrics. That’s the way at least at Cisco we’re thinking about it. So this domains of networking and security are crashing together. So secure networking is like the conversation in the network space particularly.

The other part about this is the performance requirement which also Lakshmi alluded to. AI will put pressure on the underlying infrastructure. In a way it’s an exponential technology. The demands it will create on its underlying layers is also exponential. So we’ve got to almost build a new category of technology. Silicon systems, applications, everything. A new category has to be built and we have to build it in new ways. You cannot build it in the ways of how we built it in the past. Applications is an interesting one. We used to give an input, expect the same output on the other side. But now if you are going to deploy AI models, this thing is probabilistic.

And I refer to it. So you want to get it to a degree of assistance so that you cannot expect in a financial application or a very important citizen service application, you give an input and the output has to be deterministic. But you’re using at the core of it a probabilistic technology. So that refinement also takes a whole lot of work. So it’s rethinking in ways in all layers. of the, from silicon to software to systems, you have to rethink everything. Every rule we have to rethink.

Samrat Kishor

Excellent, excellent. And since you brought in that perspective of rethinking, reimagining, and how we’re using AI in the operating system of the company, I’d like to bring in Darshan here. So Darshan, you do a great work, you do a lot of great work in creating thought leadership content as well as doing consulting work for very large companies. Of course, there are CXOs and a very highly ranked official of the government sitting here, but then what are the other six CXOs thinking about when it comes to AI? Is it still a compliance thing, or has it percolated into strategy?

Dharshan Shanthamurthy

First of all, thank you. I’ll probably add some context to whatever I’ve heard so far. So first of all, my views is any technology disruption brings in two emotions, right? So hope as well as fear. And I’m sure all the other panelists have rightfully covered the fear construct of AI in cyber safety. And rightly so. No disputing that truth. But there is a huge hope component from a cybersecurity company like for us because we are a hardcore deep tech cybersecurity company. I see a lot of opportunities. And we as a country, India, can also be, we are at the sweet spot between intersection between AI and cybersecurity. And this topic is very aptly crafted because I think it’s a huge opportunity for us to also utilize.

And I’ll tell you why. Cybersecurity has so far been a very asymmetric equation. The intruders have always had an advantage over the defenders or anyone who’s actually defending a network because they just need to get one thing right and we need to get everything right. So it’s always been asymmetric. But with AI, now all of a sudden we are at the level playing field from a technology standpoint to identify a needle in the haystack. For example, one classic use case. Can be an agent. security operations center. Because at the end of the day, if you have ever visited a security operations center, it is a 24 bar 7 someone, analyst looking at a screen and almost an inhuman job, so to speak.

But today, with AI, now you’ve got a level playing field because we’ve seen those kinds of use cases being deployed at our SOC, where even a shift handover is done by an agent. So a lot of real use cases. So I’m on the hope side. There’s a lot of opportunities that today we have. And second, in terms of talent, I mean, we have a lot of youngsters sitting in this room who are looking to grow. We have spoken so much about other services, other areas evaporating in terms of job opportunities. I think we can create the world’s cyber security talent combination with AI because cyber security and AI are not two different fields. They actually, cyber security needs AI and AI needs cyber security.

So I think we are at a very, very opportunistic opportune time for us. to really ride this wave and create world -class talent which can address. So now on the second part which you just spoke about, that’s what we are hearing at the CXOs globally since we deal with a lot of people in the payment ecosystem. CXOs obviously have the same construct of hope versus fear, right? So some are obviously being a CISO or a CIO. There is amount of fear that is also coming in because these are real problems, right? For example, deep fakes or spear phishing attacks have become more robust with AI. But one of the key things that we are trying to explain is that, yes, those are things that you need to address, no doubt, but can you also look at how you can take advantage of those AI?

And Lakshmi rightly pointed out, how do you have an AI operating system? Similarly, we talk about how you can have an AI security operating system, right? Which you should have a playbook on how to leverage AI rather than being on the defense player. So those are the… Those are my views. Samarath.

Samrat Kishor

Excellent, excellent views and thank you very much for those perspectives and I’m glad that I still see people coming in, you know, this is an interesting session and some people standing as well. So I would like to bring in Pradeep now. Pradeep, you know, as a follow on to what I just asked Darshan, here is something which is, you know, at the top and we’re saying, you know, while it is percolating into strategy a bit, do you think that we should have a dedicated function within an organization and what are you seeing currently not just in India but elsewhere as well?

Pradeep Sekar

Yeah, thank you for that. So probably adding on to what Dharshan said, right, I don’t mind the hope and the fear thing because being in cyber security space, both of them do add more to what we can do, right, for the industry as a whole, for the country as a whole, if you would. When we look at strategically, when we talk to, let’s say, leaders and boards at companies in India across the world, predominantly when the conversation is about AI, the topic goes towards innovation, competitiveness, and ability to bring in, let’s say, productivity gains, right? What often gets missed is that AI is quietly reshaping the risk equation within the enterprise, right? Now, cybersecurity, right, so can no longer be just about protecting systems and the data.

Now, don’t get me wrong, right? Cybersecurity is still needed to be able to identify all the systems within your enterprise, enterprise beyond the extended enterprise, as well as be able to protect the data that is on all of these systems. But it needs to evolve into something more, given the AI landscape, which is, I love how Lakshmi put it, right? It’s going to be about trust. So going forward, can cybersecurity, how can it evolve to start protecting decision -making and trust? Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verification. Now, all of these mechanisms are going to come in, in a way. that we are able to identify, measure, rate, risk, rank, and call out whether this particular transaction that you’re doing, whether it’s a payment approval or it’s an executive communication, is trustworthy or not.

And then accordingly, the agent of the system that’s allowing the transaction to go through allows it or not, right? So that’s something that we’re seeing, and AI in this context is a force multiplier, right, on both sides. For us as defenders, we are seeing, like Darshan said, how we are able to detect, identify threats at scale and speed that we have never seen before, right? And definitely, right, bringing in, again, it’s not going to, so if you ask, okay, is it going to completely revamp how we do and run SOCs? A little yes. It’s not going to replace all the analysts, but definitely in terms of certain tasks that we are doing, we already started seeing Microsoft with its security co -pilot, how it can automate tasks, right?

Like different agents doing different tasks, so we’re already starting to see that. Now, but in addition to that, it’s also helping attackers on the other side of the equation, which is it is industrializing disruption at scale. Think fishing. Think social engineering. Now, all of these manipulation, now it’s happening at an unprecedented scale. That’s going to continue, and you’re going to see it continue for, let’s say, the next few years because that’s where we’re headed in terms of air -aided phishing. I would say, yeah, definitely manipulation and how this is going to impact the industry as a whole. Now, I would say that’s pretty much how all of these, the shift is, the tectonic shift is happening, right, across.

So as I would say working with leaders and board members, we are looking at how to look at these risks and how to frame these risks, and here usually we see three lenses. So one is the compliance risk, which is am I complying with the EU AI Act, right? Am I complying with the TDPDP or other sectoral guidance? So it’s more of a check -in -the -box approach. Maybe helps with me in protecting against regulatory exposure, but not with systemic risk. like what Ms. Daisy was saying. The second angle which some companies have started to move towards is the operational risk, right? Where the boards are starting to ask, the models am I using? Is it reliable?

Is it safe? Is it trustworthy? And what is the risk if this particular model, a service provider who’s providing that model goes down? So that’s the operational risk angle that we’re seeing more of. The third angle which I think it’s very few companies doing today is probably the strategic risk angle, right? Where in being able to call out if I’m using this particular, if there’s an AI -driven attack, identity attack, right? That is reducing or impacting the reputation of my organization with my customers, what is my exposure in financial terms? Now these are questions that boards would start to need to ask because we are starting to ask those questions and get those questions from leaders in order to how are we able to measure those and how do you quantify risk in financial terms?

And be able to convey that to the board as well because that’s what at the end of the day boards are concerned in being able to. the stakeholders and shareholders.

Samrat Kishor

That’s great and those are some interesting lenses that you put to the whole conversation. Sir, I’d like to bring you in now from your vantage point. When we talk about India’s DPI, we are implementing AI into systems which cater to healthcare, which cater to telecom, across the citizen supply chain, if you will. So how do we make sure that the AI deployments that we’re doing and what capabilities do we have to make sure that these deployments are secure and they’re taking care of the risks that the fellow panelists highlighted?

G. Narendra Nath

The financial sector, for example, is mature. But let’s say, take the health sector. It’s not as mature as others. But if you look at the enthusiasm, for example, of the health sector to adopt AI, you’ll find that the level of enthusiasm is similar to what is there in the other sectors. So that is a big challenge. We’ve been engaging with the health sector, for example. We’ve had recent meetings also to say that, how do I improve the cybersecurity posture of that sector? So that’s a big challenge, actually. So we had this digital divide. We have a cybersecurity divide that’s there. And now we are going to have this AI divide that’s going to be there across enterprises in different sectors.

So that is a challenge that is required to be addressed. That, I think, is the capacity building part. And also coming up with frameworks where people have access to that framework and understand what is really required to be done. And you talked of assessment. When an enterprise is coming with the AI system, is it secure? Is it doing the work? Is it doing the work it’s supposed to do? So we don’t have those assessment frameworks now. So if you’re aware, you know, the testing and assessment part is an important part, and creating that infrastructure so that people could go and then test and assess, that is an important part. The department of DRD has come up with an ETI framework, if you’re aware of it.

Similarly, from our office also, we funded a project, and still it’s around a year back. It started in November of 2024, we funded the project for coming with an assessment framework for AI systems. So that one is the security aspect of that, and the other is, of course, the functional aspect, you know, that also. In the sense that somebody claims that this AI system does something. How do you actually assess that? So I think one is the capacity building part, and the other is the, you know, having the frameworks in place is good. One thing good about this country is that we have an institutional framework that’s been established, especially because of cybersecurity over the period of time, like we have got the CERT India and CIPC, or their institutional framework.

and also the sectoral regulators also come up with sandboxing regulations in the sense that if you want to try out something new, you have these regulations that helps you to try out something new. So I think these like in the financial sector, you have the RBI sandboxing, the telecom sector also this mechanism. So I think people start using these sandboxes to prove technologies, to prove applications, to prove use cases and that will help them to actually understand how it really works before they deploy in production. That I think would help going forward.

Samrat Kishor

Awesome. Thank you, sir. And I think it’s enlightening and enriching for all of us here to know your perspective especially what the government is doing towards it. I’d like to bring in Lakshmi, sir from Tata here for the next question. So, sir, if we reconvene five years from now here, what are we going to be talking about? What did we do? What did we get right?

A. S. Lakshminarayanan

I think this discussions are very healthy. whether AI with a positive lens or with a fear lens. I think we need to – I – on two comments. One is, you know, the question on assessment. We ourselves in TataCom, when we asked ourselves the question, where do you want to be five years from now? And I made a statement that the next five years will determine the health of the company for the next 50 years because the technologies are moving very fast. So for an assessment framework, we developed a framework ourselves. We studied a lot of material. We didn’t find something good. So we developed an assessment framework where on one axis we plotted the capability. You know, it includes talent.

It includes the platform, which is when I said, look, no point in doing individual use cases in an organization. How many use cases will you do? You need a platform approach, which is where we said AI operating system is required. So that is maturing. So one – on one axis. On one axis, we are going to plot the capability. So it’s talent, even culture. AI, I don’t know whether people have appreciated this is a very different paradigm even now in the discussions I see people talking about how AI can help automate things and do things faster no, that’s not what AI will do AI, you know while the previous technologies of cloud and internet have helped companies to scale transactions AI is going to scale decisions and when you’re scaling decisions you need to think of a different paradigm all together, and we are still talking in the old paradigm of what tasks can be automated and how it can be done so this is a new paradigm, so in the capability axis the culture dimensions would have to be thought through carefully and talent appropriate, I find some of the younger talent are more easy to train on AI than some of the older, unfortunately so I think the whole talent and capability equation is one axis, and we’re going to plot ourselves and the other axis is on the outcomes what outcomes do you really want to deliver with AI?

And there, you know, outcomes could be more on efficiency. Outcomes could be more on the revenue enhancement. Outcomes could be more on the trust and the customer satisfaction. All those outcomes need to be plotted. I must admit, we ourselves are somewhere in the lower quadrant, and I hope we as a company will move to the top quadrant. And that needs to be defined, and that needs to be visualized. And only then you can move towards that. And that’s what we’re driving the company towards. And all the platform development that we’re doing, strengthening our infrastructure for enterprises, and we’ve shared some of these assessments to our customers as well. So that is one. So I hope that most people would see themselves moving towards the top quadrant in five years’ time.

The second thing that I worried about in the context of strategy is, again, you know, when people talk about AI and strategy, and I think that’s something that’s been really important to me, and I think that’s something that’s been really important to me, And I believe that, like in the previous technology when we had Internet and cloud, there were new business models that came about. So we had intermediaries coming, the booking .coms and others who disintermediated many, many people, or fintechs who came and did things better than the larger banks. And only when the larger entities woke up to the fact that these people are going to eat their lunch. And that’s what happened in the previous wave of technology.

In the AI, I think similar disruption is waiting to happen. Don’t know where and when and what. But if a strategy does not think about that, as to what disruptions are going to happen, we would have missed the bus. So five years from now, I would expect a new class of companies who are AI native, who are out there going to disrupt the existing business model. So those are the two things I would expect in five years to happen.

Samrat Kishor

Fabulous. And sir, one last question to you. If you were to give me a call five years from now and say, Samrat, this is how… nation states have changed what would that be?

G. Narendra Nath

See one is AI and I have talked somewhere else also, adoption of AI is a competitive advantage so that’s why you have to adopt AI and you don’t have any other because there are other nations who are going to adopt, there are other enterprises going to adopt AI and they are going to try to look at how do I do business better so that way going down you will find that we would adopt AI and this conference is very good for that five is down the line the other is protecting yourself from the adverse effects of AI because it’s a very powerful tool and then it’s just a thought process but I think as pointed out just one year we have found that such a lot of development has happened we do not know where this is really going to lead us so the thing is for us to be on our toes and to actually look through that how is this technology going to affect the way we do business and how we run our countries and then also and then you know this development of capacity capability and identify the dependencies that we have when this technology is adopted and try to see that how do I mitigate the dangers of those dependencies this is where I think the thought process would be and this is where I think the road map for the next five years for us.

Samrat Kishor

Thank you, thank you very much sir and thank you all the panelists for taking time out and agreeing to do this for the audience I see the room is full and a lot of people waiting on the sides as well thank you all for paying attention please put your hands together for the esteemed panel that we have here together we have to conclude this panel only for the paucity of time otherwise we could have gone on thank you very much Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (41)
Factual NotesClaims verified against the Diplo knowledge base (7)
Confirmedhigh

“AI can be deployed for cybersecurity while cybersecurity is required for AI.”

The knowledge base highlights the dual nature of AI in security, noting it can both enhance defenses and introduce new risks, confirming the two-way relationship described [S34].

Confirmedhigh

“AI is simultaneously an opportunity and a challenge; the expanding cyber‑threat landscape has out‑grown human‑scale defence, prompting a shift to machine‑scale tools.”

Reports describe escalating threats in scale, sophistication and frequency, and stress the need for AI-driven tools to keep pace, supporting the opportunity-challenge framing [S119] and [S120] and the human-capacity gap [S121].

Confirmedhigh

“AI introduces novel risks such as model jail‑breaking, confidential data leakage, and vulnerabilities in open‑source models.”

Open-source model risks and broader AI security concerns are documented, including potential data exposure and model manipulation [S122] and [S123]; agentic AI behaviours that can act independently are also noted [S114].

Additional Contextmedium

“AI has moved from being a mere application‑layer add‑on to becoming a fundamental component of the technology stack.”

AI is described as a technology that will redefine how societies work and is being embedded as core infrastructure, indicating its shift from peripheral to foundational status [S1] and its rapid advancement [S128].

Confirmedhigh

“AI adoption is occurring at “breakneck speed”, outpacing the development of safeguards, and nation‑states as well as large adversarial enterprises are already weaponising AI.”

UN remarks emphasize AI’s rapid pace and the urgency of governance, while other sources note that legislation is being drafted at breakneck speed, reflecting concerns about weaponisation and insufficient safeguards [S67] and [S129].

Additional Contextmedium

“AI models use the data itself as the control plane, making them vulnerable to poisoning, drift and non‑deterministic behaviour.”

The knowledge base discusses AI model vulnerabilities such as data poisoning and model drift, underscoring the control-plane nature of data in AI systems [S34].

Confirmedhigh

“The rapid spread of AI across finance, telecom, power and other critical sectors raises systemic risk.”

Threats to critical infrastructure and the systemic nature of AI-driven risks are highlighted in discussions on cyber-national security intersections [S120] and [S126].

External Sources (129)
S1
Scaling AI for Billions_ Building Digital Public Infrastructure — -Dharshan Shanthamurthy- Works with a cybersecurity company, provides consulting and thought leadership for large enterp…
S2
Scaling AI for Billions_ Building Digital Public Infrastructure — -G. Narendra Nath- Government official working on national security and cybersecurity policy, involved with CERT India a…
S3
Scaling AI for Billions_ Building Digital Public Infrastructure — – G. Narendra Nath- Pradeep Sekar – Richard Marko- Pradeep Sekar
S4
Scaling AI for Billions_ Building Digital Public Infrastructure — – Daisy Chittilapilly- A. S. Lakshminarayanan- G. Narendra Nath – Daisy Chittilapilly- Dharshan Shanthamurthy- Pradeep …
S5
Scaling AI for Billions_ Building Digital Public Infrastructure — – Richard Marko- Dharshan Shanthamurthy
S6
Event page with the recording — – **Marko Markovic**: Role/title not mentioned. Appears to be a travel guide content creator or host, providing detailed…
S7
Scaling AI for Billions_ Building Digital Public Infrastructure — -Samrat Kishor- Moderator/Host of the discussion
S8
Announcement of New Delhi Frontier AI Commitments — -Bharat: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified …
S9
Scaling AI for Billions_ Building Digital Public Infrastructure — – A. S. Lakshminarayanan- G. Narendra Nath- Pradeep Sekar – Daisy Chittilapilly- A. S. Lakshminarayanan- G. Narendra Na…
S10
Thousands of companies vulnerable to cyberattacks due to exploited flaw in open-source AI framework, researchers find — Security analysts havewarned about actively exploitinga contentious vulnerability within the widely-used open-source AI …
S11
Hackers exploit AI: The hidden dangers of open-source models — As AI adoption grows, security experts warn that malicious actors are finding new ways to exploitvulnerabilities in open…
S12
Generative AI presents the biggest data-risk challenge in history — Cybersecurity specialistswarnthat generative AI systems, such as large language models, are creating a data risk frontie…
S13
https://dig.watch/event/india-ai-impact-summit-2026/scaling-ai-for-billions_-building-digital-public-infrastructure — And I refer to it. So you want to get it to a degree of assistance so that you cannot expect in a financial application …
S14
AI as critical infrastructure for continuity in public services — “I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole …
S15
How AI Is Transforming Indias Workforce for Global Competitivene — And I think there is a pretty big gap. Actually, I think that gap is good for workforce. Because no matter what the capa…
S16
Employees embrace AI but face major training and trust gaps — SnapLogic haspublished new researchhighlighting how AI adoption reshapes daily work across industries while exposing tru…
S17
How AI Is Transforming Diplomacy and Conflict Management — “people developing these models don’t even have full legibility over how they’re working”[72]. “adversarial negotiations…
S18
WS #279 AI: Guardian for Critical Infrastructure in Developing World — Daniel Lohrman: Yeah, but I cannot, the video is not started, so I don’t know if you can see me, but I can certainly s…
S19
Internet Governance Forum 2024 — However, the session also discussed the inherent risks associated with AI systems. Daniel Lohrmann emphasized concerns s…
S20
Building the Next Wave of AI_ Responsible Frameworks & Standards — And this is, you can see up here on the screen, the QR code, and you can scan the QR code and then you’ll get access to …
S21
Advancing Scientific AI with Safety Ethics and Responsibility — Both speakers agree that evaluation should occur before deployment rather than after, with Speaker 1 emphasizing socio-t…
S22
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Adisa recommends following the UK and Singapore model of creating regulatory sandboxes where innovators can test AI syst…
S23
Signature Panel: Building Cyber Resilience for Sustainable Development by Bridging the Global Capacity Gap — In adherence to its national strategy, Ireland actively participates in initiatives like EU Cybernet, aimed at bolsterin…
S24
Agenda item 6 — Brunei Darussalam:Thank you, Mr. Chair. I extend my delegation’s gratitude to you for your ongoing guidance in this work…
S25
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/5/OEWG 2025 — This comment helped refocus the discussion on the importance of capacity building as a foundational element for inclusiv…
S26
360° on AI Regulations — In conclusion, the analysis reveals that AI regulation is guided by existing laws, and there is a complementary nature b…
S27
Generative AI: Steam Engine of the Fourth Industrial Revolution? — The adoption of newer technologies is not limited to a specific industry and is prevalent across all sectors. Currently,…
S28
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Moret argues that private sector companies have a responsibility to actively prevent AI systems from being used to viola…
S29
Bridging the AI innovation gap — This was identified as a critical need but requires further research into specific skill gaps and capacity building requ…
S30
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S31
Laying the foundations for AI governance — Dawn Song: Yeah, that’s a great question. I think in AI safety and security, we are facing huge challenges. The field is…
S32
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — A proactive approach to cybersecurity, global cooperation, and the shared responsibility of being cyber ready are crucia…
S33
Emerging Shadows: Unmasking Cyber Threats of Generative AI — Dr. Yazeed Alabdulkarim:Yeah, regulations are basically a controversial topic because many believe that it’s challenging…
S34
Challenging the status quo of AI security — – Sounil Yu- Babak Hodjat AI technology has two sides: it can enhance security measures and help improve existing secur…
S35
Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion — Cybersecurity | Infrastructure Rosenworcel argues that the rapid expansion of connected devices creates software vulner…
S36
The opportunity costs of an arms race — Conflict can easily erupt due tomisinterpreted intent.This is one aspect, among many, at which multilateral forums on se…
S37
Modern Diplomacy — This concern has been discussed earlier in relation to the Gulf War. IT greatly enhances the role of brainpower in rel…
S38
Keynote-Roy Jakobs — “Innovation and governance must advance together With speed Because trust determines adoption … If they move at differ…
S39
(Day 5) General Debate – General Assembly, 79th session: morning session — Murat Nurtleu – Kazakhstan: Mr. President, Mr. Secretary General, Excellencies, ladies and gentlemen, let me first cong…
S40
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S41
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Adamma Isamade: Okay, so I’ll be very brief, the truth is, I hope your boss is not watching. Ah, my boss is always watch…
S42
WS #283 AI Agents: Ensuring Responsible Deployment — As the session reached its time limit (with Prendergast noting the final 10 minutes), the discussion revealed both the p…
S43
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — Context-based analysis and stakeholder engagement are crucial for effective risk assessment
S44
How AI Drives Innovation and Economic Growth — Kremer argues that while there are forces that may widen gaps, AI has significant potential to narrow development dispar…
S45
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Authorities and independent media will lag behind while malicious actors remain behind. one step ahead. Accountability w…
S46
IBM CEO’s take on AI’s influence on the business landscape — IBM’s CEO, Arvind Krishna, has left no room for doubt – AI is set to revolutionize the business world. Earlier this year…
S47
Shaping the Future AI Strategies for Jobs and Economic Development — These key comments transformed what could have been a superficial discussion about AI benefits into a sophisticated anal…
S48
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/4/OEWG 2025 — Malawi: Thank you, Chair. Thank you, Chair, for giving us the floor. Malawi acknowledges the critical role of capacit…
S49
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The discussion revealed a common theme across different contexts: the gap between policy ambition and implementation cap…
S50
Building the AI-Ready Future From Infrastructure to Skills — And Manhattan Project, about 65 % of the entire funding of Manhattan Project was at Oak Ridge National Laboratory. And i…
S51
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Development | Legal and regulatory Evidence-Based Policymaking and Research Integration Part of the roadmap emphasizes…
S52
S53
Building Trustworthy AI Foundations and Practical Pathways — “Frontier risks are risks which are very, very difficult to observe, right?”[59]. “There are social risks which are easi…
S54
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — – Practical, actionable recommendations based on risk assessment Chris Martin: And guys, I know this seems daunting. …
S55
Keynote Adresses at India AI Impact Summit 2026 — The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India partner…
S56
Advancing Scientific AI with Safety Ethics and Responsibility — -Global South Perspectives and Adaptation: A significant focus was placed on how emerging scientific powers can shape AI…
S57
Scaling AI for Billions_ Building Digital Public Infrastructure — The conversation highlighted the critical importance of building proper foundations before implementing AI capabilities,…
S58
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This distinction has profound implications for risk mitigation strategies. Safety requires internal controls and model v…
S59
NSA’s AISC releases guidance on securing AI systems — The National Security Agency’s Artificial Intelligence Security Center (NSA AISC) hasintroducednew guidelines to bolster…
S60
AI Development Beyond Scaling: Panel Discussion Report — Bengio highlights that current AI systems can develop sub-goals that weren’t chosen by humans and can go against instruc…
S61
Keeping AI in check — Societies should not be forgetful of the fact that technology is a product of the human mind and that the most intellige…
S62
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S63
AI Governance Dialogue: Steering the future of AI — Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance with someone who has a un…
S64
From principles to practice: Governing advanced AI in action — Chris Meserole: concerns possible global alignment? Well, first of all, it’s great to be here and just, you know, a wond…
S65
Challenging the status quo of AI security — – Sounil Yu- Babak Hodjat AI technology has two sides: it can enhance security measures and help improve existing secur…
S66
Can National Security Keep Up with AI? / Davos 2025 — AI technology has both beneficial and potentially harmful applications. This dual-use nature creates dilemmas and challe…
S67
9821st meeting — Ecuador:Mr. President, I thank the United States for convening this important meeting. I also thank the Secretary Genera…
S68
Panel discussion: International law, cyber-norms, CBMs, capacity building,institutional dialogue — Dr Katherine Getao, one of the esteemed panellists, highlighted the dual nature of digitalisation, presenting both signi…
S69
Agenda item 5: Day 1 Afternoon session — Australia:Thank you, Chair. The relevance and value of our open-ended working group relies upon us candidly exploring an…
S70
Agentic AI drives a new identity security crisis — New research from Rubrik Zero Labswarnsthat agentic AI is reshaping the identity landscape faster than organisations can…
S71
Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion — – Nadav Zafrir- Jill Popelka- Marc Murtra Cybersecurity | Infrastructure Rosenworcel argues that the rapid expansion o…
S72
Challenging the status quo of AI security — – Sounil Yu- Babak Hodjat AI technology has two sides: it can enhance security measures and help improve existing secur…
S73
Smart machines, dark intentions: UN urges global action on AI threats — The United Nations haswarnedthat terrorists could seize control of AI-powered vehicles to launch devastating attacks in …
S74
The opportunity costs of an arms race — Conflict can easily erupt due tomisinterpreted intent.This is one aspect, among many, at which multilateral forums on se…
S75
(Day 3) General Debate – General Assembly, 79th session: morning session — William Samoei Ruto – Kenya: Your Excellency, President of the 79th Session of the United Nations General Assembly, Amb…
S76
(Day 5) General Debate – General Assembly, 79th session: morning session — Murat Nurtleu – Kazakhstan: Mr. President, Mr. Secretary General, Excellencies, ladies and gentlemen, let me first cong…
S77
Interim Report: — 27. Other risks are more a product of humans than AI. Deep fakes and hostile information campaigns are merely the l ates…
S78
Global AI adoption rises quickly but benefits remain unequal — Microsoft’s AI Economy Institute hasreleased its 2025 AI Diffusion Report, detailing global AI adoption, innovation hubs…
S79
Scaling AI for Billions_ Building Digital Public Infrastructure — And the other is the adversarial part of the AI is that. though you use AI for cyber security but the issue is that ther…
S80
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Audience:Thank you for giving me the floor. My name is Ada Majalo. I’m coming from the Africa IGF as a MAG member. Very …
S81
Informal Stakeholder Consultation Session — So naturally, it amplifies the current structure that streams users’ controls over their data. It further strengthens a …
S82
WS #283 AI Agents: Ensuring Responsible Deployment — As the session reached its time limit (with Prendergast noting the final 10 minutes), the discussion revealed both the p…
S83
Open Forum #33 Building an International AI Cooperation Ecosystem — Sajid Rahman: Thank you, and good afternoon. You know, it’s a great pleasure to speak about something which is not only …
S84
Safe and Responsible AI at Scale Practical Pathways — Guardrails, Human‑in‑the‑Loop, and Risk‑Assessment Mechanisms Are Essential for Reliable Deployment
S85
Closing remarks – Charting the path forward — Al Mesmar highlights that as AI systems become more powerful, governing access to computational infrastructure and large…
S86
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion revealed significant consensus across diverse stakeholders on fundamental questions about AI standards. A…
S87
IBM CEO’s take on AI’s influence on the business landscape — IBM’s CEO, Arvind Krishna, has left no room for doubt – AI is set to revolutionize the business world. Earlier this year…
S88
AI will not replace people – but people who use AI will replace people who do not | IBM’s Report — According toIBM’s report, executives estimate that around 40% of their workforce will need to reskill due to implementin…
S89
GermanAsian AI Partnerships Driving Talent Innovation the Future — AI and digital technologies are reshaping how businesses operate faster than ever before. For companies the challenge is…
S90
How AI in 2026 will transform management roles and organisational design — In 2026, AI will transform management structures and automate tasks as companies strive to demonstrate real value. By 20…
S91
AI for equality: Bridging the innovation gap — Cherie Blair: Look at this place, it’s buzzing. It’s amazing. Cherie Blair: Well, I have to say I’m a bit of a techie e…
S92
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Anshul Sonak: ≫ Thanks, Yiping. Good morning. So, calling from Silicon Valley, this is a very interesting conversation. …
S93
Opening address of the co-chairs of the AI Governance Dialogue — Majed Sultan Al Mesmar: Bismillah ar-Rahman ar-Rahim. Excellencies, distinguished guests, colleagues, friends, As-salamu…
S94
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S95
OPENING SESSION | IGF 2023 — It involves both possibilities and risks and is a transformative technology. The Hiroshima AI process will aim to reflec…
S96
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S97
Webinar session — The discussion maintained a diplomatic and constructive tone throughout, with participants demonstrating nuanced thinkin…
S98
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S99
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S100
Agenda item 5: Day 2 Afternoon session — China has adopted a proactive stance towards developing and harmonising new norms within the cybersecurity sphere, showc…
S101
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — 25 ,000 people. And I think it’s possible. I think it’s possible to use the technology at the expense that it has reache…
S102
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Brandon Soloski: Okay, that’s interesting. I hear a little bit of a delay. Good idea. All right. Good afternoon, early…
S103
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh: That will really empower people globally. What do we expect from the Global Digital Compact to make this…
S104
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — These key comments fundamentally shaped the discussion by challenging conventional narratives about AI development and g…
S105
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S106
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S107
High Level Dialogue with the Secretary-General — The tone was largely serious and earnest, with participants speaking candidly about shortcomings in current youth engage…
S108
Global Risks 2025 / Davos 2025 — Kashim Shettima: Well, the word for crisis in the Chinese language is wei desu, wei stands for danger and desu for op…
S109
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S110
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Dennis Kenji Kipker:Yeah, thank you very much, Jeannie, and thank you for the possibility to speak here today. As a prof…
S111
Open Forum #3 Cyberdefense and AI in Developing Economies — # Expert Panel Discussion: Cyber Defence and Artificial Intelligence Challenges for Developing Economies Jose Cepeda: a…
S112
WS #31 Cybersecurity in AI: balancing innovation and risks — Dr. Alison: Okay. Thank you. So I speak from a personal perspective here. So I don’t know if, realistically, I don’t…
S113
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Helmut Reisinger:Yeah. Good afternoon, everybody. As-salamu alaykum. I am representing Palo Alto Networks. We are a cybe…
S114
How agentic AI is transforming cybersecurity — Cybersecurity is gaining a new teammate—one that never sleeps and acts independently.Agentic AIdoesn’t wait for instruct…
S115
Cutting through Cyber Complexity / DAVOS 2025 — Hoda Al Khzaimi highlights how AI and emerging technologies are rapidly changing the cybersecurity landscape. She argues…
S116
Beyond answers: How AI is redefining web communication for International Geneva — It may seem paradoxical to look backward when facing advanced technology. However, in an age where AI generates content …
S117
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Julie Sweet points out that despite the focus on AI scaling, the vast majority of data infrastructure work that companie…
S118
Agenda item 5: Day 2 Morning session — The country reflects on the UK’s own dealings with acknowledged Russian cyber interferences in its political system, dee…
S119
Comprehensive Report: World Economic Forum Panel Discussion on Cybersecurity Resilience — Cyber threats are escalating in scale, sophistication, and frequency
S120
Opening of the session — The cyber threat landscape is rapidly evolving, with increasing sophistication and complexity of attacks targeting criti…
S121
Call for action: Building a hub for effective cybersecurity | IGF 2023 — There is a deep-rooted concern about the ever-expanding gap in the cybersecurity field. Despite technological advances, …
S123
Don’t waste the crisis: How AI can help reinvent International Geneva — Risks extend beyond breaches of confidential data. Everyday interactions—like querying AI platforms—can inadvertently ex…
S124
Building Indias Digital and Industrial Future with AI — As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain t…
S125
INTERNATIONAL CIIP HANDBOOK 2008 / 2009 — The establishment of these organizational units and their location within the government structures are influenced by va…
S126
WS #84 The Venn Intersection of Cyber and National Security — It led to a detailed discussion of India’s cybersecurity initiatives and frameworks, offering insights into how one nati…
S127
LEBANON NATIONAL CYBER SECURITY STRATEGY — –  Use proven and well-known security information feeds to compile the necessary information to build a reputation data…
S128
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S129
Dare to Share: Rebuilding Trust Through Data Stewardship | IGF 2023 Town Hall #91 — Many laws are being developed at a breakneck speed
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Daisy Chittilapilly
3 arguments165 words per minute1068 words386 seconds
Argument 1
AI offers machine‑scale security management but introduces model leakage, jail‑breaking, and open‑source vulnerabilities
EXPLANATION
Daisy explains that AI can handle security tasks at machine scale, addressing the growing complexity of cyber threats, but it also brings new risks such as models being jail‑broken, leaking confidential data, and the presence of vulnerabilities in open‑source AI models.
EVIDENCE
She notes that AI promises to manage security at machine scale, referencing existing tooling that already operates in this space (lines 14‑16). She then highlights specific risks: the need to protect models from jail‑breaking, prevent confidential information leakage, and detect vulnerabilities in open‑source models (lines 21‑24).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source AI tools have been shown to contain exploitable flaws, such as the vulnerability in the Ray framework, and researchers report malicious code being embedded in open-source models, confirming the risk of model leakage and jail-breaking [S10][S11][S12].
MAJOR DISCUSSION POINT
Dual nature of AI in cybersecurity
AGREED WITH
G. Narendra Nath, Dharshan Shanthamurthy, Pradeep Sekar
Argument 2
AI pressures every layer of the stack, requiring a shift from hardware‑centric security appliances to a virtual, distributed security mesh
EXPLANATION
Daisy argues that the traditional approach of placing dedicated security appliances at fixed network points is becoming outdated. Instead, security must be virtualised and distributed across the fabric, allowing policies to be applied wherever needed.
EVIDENCE
She describes the move from hardware‑centric appliances to breaking security policies into many virtual instances that can be placed anywhere in the network, creating a virtual distributed mesh rather than relying on fixed hardware (lines 124‑132).
MAJOR DISCUSSION POINT
AI integration into critical infrastructure and system fragility
DISAGREED WITH
G. Narendra Nath
Argument 3
There is a significant AI readiness gap in large enterprises, with many lacking data strategy, compute capacity, threat understanding, and innovation capability.
EXPLANATION
Daisy points out that while organisations are eager to deploy AI agents, most do not have the foundational data platforms, sufficient compute resources, or the ability to understand and mitigate AI‑related threats, creating a mismatch between ambition and reality.
EVIDENCE
She cites Cisco’s AI readiness index showing that only about two‑thirds of enterprises have a data layer and strategy, one‑fourth have adequate compute capacity, one‑third can understand AI threats, and less than one‑fifth possess an innovation engine to build and scale AI applications (lines 118-123).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Surveys reveal significant skill and trust gaps in AI adoption, with many organisations lacking data strategies, compute resources, and expertise, underscoring the readiness gap highlighted [S16][S23][S25][S29][S15].
MAJOR DISCUSSION POINT
Enterprise capability gaps for AI adoption
G
G. Narendra Nath
5 arguments186 words per minute1261 words405 seconds
Argument 1
Rapid AI adoption outpaces mitigation; adversarial nation‑states exploit AI, and data becomes the control plane leading to model poisoning
EXPLANATION
Narendra points out that AI is being adopted at breakneck speed, leaving little time for mitigation measures. He warns that nation‑states and adversarial enterprises can weaponise AI, and because data now acts as the control plane, models are vulnerable to poisoning and drift.
EVIDENCE
He notes the breakneck speed of AI adoption and the willingness of enterprises to adopt AI tools (lines 40‑42). He then explains that adversarial nation‑states are using AI as a tool, while data itself serves as the control plane, enabling model poisoning and drift over time (lines 48‑49).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI geopolitics note that nation-states are weaponising AI and that data now serves as a control plane, making models vulnerable to poisoning attacks [S17][S19][S33].
MAJOR DISCUSSION POINT
Dual nature of AI in cybersecurity
AGREED WITH
Daisy Chittilapilly, Dharshan Shanthamurthy, Pradeep Sekar
Argument 2
Announces the creation of AI assessment frameworks and sandboxing initiatives to evaluate security and functionality before production deployment
EXPLANATION
Narendra says that there is currently a lack of assessment frameworks for AI systems, and the government is funding projects to develop such frameworks, including an ETI framework and sandbox mechanisms to test AI before production.
EVIDENCE
He mentions the absence of assessment frameworks and the launch of a project funded in November 2024 to create an AI assessment framework, referencing the department of DRD’s ETI framework (lines 260‑267).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recent initiatives propose AI assessment frameworks and sandbox environments to test AI before production, as documented in emerging standards and pilot projects [S20][S21][S22].
MAJOR DISCUSSION POINT
Governmental frameworks, capacity building, and assessment mechanisms
AGREED WITH
Daisy Chittilapilly, Pradeep Sekar
DISAGREED WITH
Daisy Chittilapilly
Argument 3
Highlights existing institutional structures (CERT‑India, CIPC) and sector‑specific sandboxes (RBI, telecom) as foundations for capacity building
EXPLANATION
Narendra emphasizes that India already has institutional bodies like CERT‑India and CIPC, as well as sectoral sandbox regulators such as RBI for finance and telecom, which can be leveraged to test and validate AI technologies safely.
EVIDENCE
He cites the established institutional framework of CERT‑India and CIPC, and mentions sectoral sandbox mechanisms like RBI’s sandbox and telecom’s sandbox that help organisations trial new technologies (lines 268‑270).
MAJOR DISCUSSION POINT
Governmental frameworks, capacity building, and assessment mechanisms
Argument 4
Calls for continuous vigilance: adopting AI for competitive advantage while proactively mitigating dependencies and adverse effects
EXPLANATION
Narendra stresses that AI adoption is a competitive necessity, but nations must also guard against its adverse effects by building capacity, identifying dependencies, and mitigating risks associated with rapid AI deployment.
EVIDENCE
He states that AI adoption provides a competitive edge, but stresses the need to protect against adverse effects, develop capacity, and mitigate dependencies as AI evolves (lines 318‑322).
MAJOR DISCUSSION POINT
Future outlook and five‑year vision for AI and cybersecurity
Argument 5
AI adoption is uneven across sectors, with the health sector lagging behind, creating an AI divide that requires sector‑specific capacity building and assessment frameworks.
EXPLANATION
Narendra highlights that while sectors like finance are relatively mature, the health sector shows high enthusiasm but low maturity, leading to a divide that must be addressed through tailored capacity‑building initiatives and the development of assessment frameworks for AI systems.
EVIDENCE
He contrasts the maturity of the financial sector with the enthusiasm yet lower maturity of the health sector, describing a “digital divide” and an emerging “AI divide” across enterprises, and calls for sector‑specific capacity building and assessment frameworks (lines 242-252, 254-259).
MAJOR DISCUSSION POINT
Sectoral disparities in AI readiness and need for tailored frameworks
D
Dharshan Shanthamurthy
4 arguments174 words per minute590 words202 seconds
Argument 1
AI levels the playing field for defenders while also creating new asymmetric threats for attackers
EXPLANATION
Dharshan observes that AI gives defenders a technological parity with attackers, enabling large‑scale threat detection, yet it also equips adversaries with powerful tools to launch sophisticated attacks, maintaining an asymmetric threat landscape.
EVIDENCE
He notes that AI provides a level playing field for defenders, allowing them to identify needles in haystacks, while also empowering attackers to industrialise disruption at scale (lines 170‑171 and 162‑176).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies describe AI giving defenders parity with attackers while also enabling sophisticated, large-scale attacks, reflecting the dual asymmetric impact [S33][S27].
MAJOR DISCUSSION POINT
Dual nature of AI in cybersecurity
Argument 2
Calls for an AI security operating system/playbook that enables organizations to leverage AI defensively rather than merely reacting to threats
EXPLANATION
Dharshan advocates for a structured AI security operating system or playbook that guides organisations on how to proactively use AI for defence, shifting from a reactive posture to a strategic one.
EVIDENCE
He explicitly calls for an AI security operating system/playbook that organizations should have to leverage AI defensively (lines 190‑192).
MAJOR DISCUSSION POINT
Governance, trust, and the need for an AI operating system
AGREED WITH
A. S. Lakshminarayanan, Pradeep Sekar, Samrat Kishor
Argument 3
Notes AI as a force multiplier that can both empower SOCs and industrialise attacks, urging a balanced “hope vs. fear” approach
EXPLANATION
Dharshan highlights that AI can dramatically boost security operation centres (SOCs) by automating tasks, but the same technology also enables attackers to scale phishing and social‑engineering attacks, so a balanced perspective is required.
EVIDENCE
He describes AI empowering SOCs through automation (e.g., Microsoft security copilot) while also industrialising attacks such as AI‑driven phishing, emphasizing the need for a balanced hope‑versus‑fear stance (lines 162‑176).
MAJOR DISCUSSION POINT
Strategic risk management and organizational risk lenses
Argument 4
AI offers India a strategic opportunity to develop world‑class cybersecurity talent, positioning the country as a global leader in AI‑driven security.
EXPLANATION
Dharshan argues that the convergence of AI and cybersecurity can be leveraged to create a skilled talent pipeline, enabling India to harness AI for defence while also fostering a new generation of experts who can drive innovation in the sector.
EVIDENCE
He notes that AI can create a level playing field for defenders, and emphasizes the chance to build “world‑class talent” by combining cybersecurity and AI expertise, highlighting the hope side of AI for India (lines 158-183).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on India’s AI workforce transformation and programs to empower developing nations highlight AI as a lever for building world-class cybersecurity talent [S15][S32].
MAJOR DISCUSSION POINT
Talent development and national opportunity in AI‑enabled cybersecurity
A
A. S. Lakshminarayanan
3 arguments162 words per minute1324 words488 seconds
Argument 1
AI multiplies existing digital‑infrastructure fragility, especially at the edge, by vastly increasing east‑west traffic and long‑lived API sessions
EXPLANATION
Lakshminarayanan warns that current enterprise digital infrastructure is already fragile, and the addition of AI will amplify this fragility, particularly at the edge where AI inference will generate massive east‑west traffic and long‑lived API sessions.
EVIDENCE
He describes the fragility of today’s digital infrastructure and explains that AI will multiply this fragility 100‑fold, increasing east‑west traffic and long‑lived API sessions at the edge (lines 68‑76).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research points to AI increasing traffic loads and exposing fragility in edge infrastructures, raising concerns about digital-infrastructure resilience [S31][S18].
MAJOR DISCUSSION POINT
AI integration into critical infrastructure and system fragility
Argument 2
Proposes an AI operating system that unites context, agentic, and trust/governance layers to safely orchestrate LLM‑driven actions
EXPLANATION
Lakshminarayanan suggests building an AI operating system that combines a context layer, an agentic layer, and a trust/governance layer, enabling LLMs to produce actionable intelligence while being governed securely.
EVIDENCE
He outlines the three layers—context, agentic, and trust/governance—that together form an AI operating system to safely orchestrate LLM actions (lines 87‑90).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Frameworks calling for layered AI governance, combining context, agency, and trust components, are being drafted to orchestrate LLM actions safely [S20][S21].
MAJOR DISCUSSION POINT
Governance, trust, and the need for an AI operating system
AGREED WITH
Dharshan Shanthamurthy, Pradeep Sekar, Samrat Kishor
Argument 3
Projects that the next five years will define long‑term health, emphasizing the rollout of an AI operating system and the emergence of AI‑native companies that disrupt existing business models
EXPLANATION
Lakshminarayanan foresees the next five years as decisive for the health of enterprises, focusing on deploying an AI operating system and anticipating AI‑native firms that will disrupt current business models.
EVIDENCE
He discusses Tata Com’s internal assessment framework, the need for an AI operating system, and predicts AI‑native companies will disrupt existing models within five years (lines 278‑315).
MAJOR DISCUSSION POINT
Future outlook and five‑year vision for AI and cybersecurity
P
Pradeep Sekar
3 arguments177 words per minute829 words280 seconds
Argument 1
Emphasises protecting decision‑making and trust through provenance, authenticity, and verification mechanisms
EXPLANATION
Pradeep stresses that cybersecurity must evolve to safeguard not only systems and data but also the decision‑making process, using provenance, authenticity, and verification to ensure trust in AI‑driven transactions.
EVIDENCE
He explains that trust can be measured via provenance, authenticity, and verification, allowing organisations to assess whether a transaction is trustworthy before an AI‑driven agent acts (lines 206‑210).
MAJOR DISCUSSION POINT
Governance, trust, and the need for an AI operating system
AGREED WITH
A. S. Lakshminarayanan, Dharshan Shanthamurthy, Samrat Kishor
Argument 2
Introduces three risk lenses for boards: compliance (e.g., EU AI Act), operational (model reliability and availability), and strategic (reputation and financial impact of AI‑driven attacks)
EXPLANATION
Pradeep outlines three perspectives for board‑level risk management: compliance with regulations, operational concerns about model reliability and uptime, and strategic implications such as reputational and financial damage from AI‑driven attacks.
EVIDENCE
He details the three lenses—compliance (EU AI Act, TDPDP), operational (model reliability, service continuity), and strategic (reputation, financial impact)—and notes how boards are beginning to ask these questions (lines 222‑236).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory analyses outline compliance, operational, and strategic risk dimensions for AI, aligning with board-level lenses discussed in AI governance literature [S26][S20].
MAJOR DISCUSSION POINT
Strategic risk management and organizational risk lenses
AGREED WITH
G. Narendra Nath, Daisy Chittilapilly
Argument 3
AI acts as a force multiplier for both defenders and attackers, enhancing detection capabilities while also enabling large‑scale AI‑driven phishing and social‑engineering attacks.
EXPLANATION
Pradeep explains that AI dramatically improves the speed and scale at which security operations can detect threats, but the same technology empowers adversaries to automate and amplify phishing and other social‑engineering campaigns, intensifying the overall threat landscape.
EVIDENCE
He describes AI helping defenders to detect threats at unprecedented scale and speed, while also noting that attackers can industrialise disruption through AI‑driven phishing and social engineering (lines 212-219).
MAJOR DISCUSSION POINT
Dual impact of AI as a force multiplier in cybersecurity
R
Richard Marko
2 arguments137 words per minute463 words201 seconds
Argument 1
Highlights humans as the weakest link; deep‑fakes and AI‑generated phishing amplify social‑engineering risks
EXPLANATION
Richard points out that people remain the most vulnerable element in cybersecurity, and AI‑generated deep‑fakes and sophisticated phishing increase the effectiveness of social‑engineering attacks.
EVIDENCE
He states that humans are the weakest link and that AI makes it harder to distinguish scams from genuine communications, citing deep‑fakes and AI‑enhanced phishing as examples (lines 98‑100).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rise of deep-fake media and AI-generated phishing campaigns is documented as amplifying human-centric social-engineering threats [S33][S27].
MAJOR DISCUSSION POINT
Human factor, resilience, and evolving threat landscape
Argument 2
Stresses the need for granular visibility into AI agent actions, guarding against interception or manipulation of commands
EXPLANATION
Richard argues that organisations must have detailed insight into what AI agents are doing, ensuring that commands are not intercepted, altered, or executed without supervision.
EVIDENCE
He calls for visibility into what is running in the background, how commands are transferred, and whether they can be intercepted or modified, emphasizing the importance of detailed scrutiny (lines 101‑105).
MAJOR DISCUSSION POINT
Human factor, resilience, and evolving threat landscape
S
Samrat Kishor
2 arguments175 words per minute1101 words375 seconds
Argument 1
AI has moved from being an application‑layer add‑on to a fundamental component embedded throughout the infrastructure stack.
EXPLANATION
Samrat explains that AI is no longer just a feature on top of existing systems; it is now woven into the core infrastructure that underpins applications, changing how systems are designed and deployed.
EVIDENCE
He notes that AI is becoming a fundamental part of the infrastructure used to build applications and that the perspective has shifted from viewing AI only at the application layer to it being embedded deep in the infrastructure (lines 26-32).
MAJOR DISCUSSION POINT
Shift in AI integration within technology stacks
Argument 2
Enterprises should adopt a corporate AI responsibility framework, similar to corporate social responsibility, to own and control the actions of AI systems they deploy.
EXPLANATION
Samrat argues that organizations need to formalise accountability for AI, ensuring that AI behaviours are governed, transparent, and aligned with ethical standards, moving beyond mere compliance to proactive stewardship.
EVIDENCE
He likens the emerging need to “corporate AI responsibility” to traditional CSR, stating that corporates must discuss how they control and own the actions of the AI they build and deploy (lines 91-93).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multistakeholder discussions stress corporate responsibility for preventing AI misuse, advocating formal AI stewardship frameworks akin to CSR [S28][S26].
MAJOR DISCUSSION POINT
Governance and ethical stewardship of AI
Agreements
Agreement Points
AI brings both significant opportunities for security and new, serious risks, creating a dual‑nature dynamic.
Speakers: Daisy Chittilapilly, G. Narendra Nath, Dharshan Shanthamurthy, Pradeep Sekar
AI offers machine‑scale security management but introduces model leakage, jail‑breaking, and open‑source vulnerabilities Rapid AI adoption outpaces mitigation; adversarial nation‑states exploit AI, and data becomes the control plane leading to model poisoning AI levels the playing field for defenders while also creating new asymmetric threats AI acts as a force multiplier for both defenders and attackers, enhancing detection capabilities while also enabling large‑scale AI‑driven phishing and social‑engineering attacks
All four speakers note that AI can improve cyber-defence (e.g., machine-scale management, parity for defenders) but at the same time introduces novel threats such as model leakage, nation-state weaponisation, and AI-driven phishing, highlighting a clear opportunity-risk duality [8-21][40-44][48-49][170-176][212-219].
POLICY CONTEXT (KNOWLEDGE BASE)
This view reflects the widely-recognised dual-use nature of AI, highlighted in security analyses that note AI can both strengthen defenses and create novel threats [S65] and in broader policy discussions on AI’s dual-use challenges [S66].
A structured AI governance framework (often described as an AI operating system or security playbook) is essential to ensure trustworthy, controllable AI actions.
Speakers: A. S. Lakshminarayanan, Dharshan Shanthamurthy, Pradeep Sekar, Samrat Kishor
Proposes an AI operating system that unites context, agentic, and trust/governance layers to safely orchestrate LLM‑driven actions Calls for an AI security operating system/playbook that enables organizations to leverage AI defensively rather than merely reacting to threats Emphasises protecting decision‑making and trust through provenance, authenticity, and verification mechanisms Enterprises should adopt a corporate AI responsibility framework, similar to corporate social responsibility, to own and control the actions of AI systems they deploy
Each speaker stresses the need for a layered, policy-driven AI governance model-whether called an AI operating system, a security playbook, or corporate AI responsibility-to manage context, agency, and trust, and to keep AI actions under organisational control [87-90][190-192][206-210][91-93].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for a formal AI operating-system echo emerging policy guidance such as the NSA’s AI Security Center playbook and the call for structured, inclusive AI governance in international forums [S59][S62].
Developing assessment frameworks, capacity‑building initiatives, and clear risk‑management lenses is critical for safe AI deployment.
Speakers: G. Narendra Nath, Daisy Chittilapilly, Pradeep Sekar
Announces the creation of AI assessment frameworks and sandboxing initiatives to evaluate security and functionality before production deployment There is a significant AI readiness gap in large enterprises, with many lacking data strategy, compute capacity, threat understanding, and innovation capability Introduces three risk lenses for boards: compliance (e.g., EU AI Act), operational (model reliability and availability), and strategic (reputation and financial impact of AI‑driven attacks)
All three speakers call for formal mechanisms-assessment frameworks and sandboxes, readiness programmes to close capability gaps, and board-level risk lenses-to manage AI risks and build capacity across sectors [260-267][118-123][222-236].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on assessment frameworks and capacity-building aligns with recommendations from multistakeholder panels that stress evidence-based approaches, interdisciplinary skill development and governance foundations before deployment [S49][S51][S57].
Similar Viewpoints
Both emphasise the creation of a dedicated AI operating system/playbook that integrates context, agency, and governance to make AI deployments safe and controllable [87-90][190-192].
Speakers: A. S. Lakshminarayanan, Dharshan Shanthamurthy
Proposes an AI operating system that unites context, agentic, and trust/governance layers to safely orchestrate LLM‑driven actions Calls for an AI security operating system/playbook that enables organizations to leverage AI defensively rather than merely reacting to threats
Both the government representative and the private‑sector executive stress the importance of formal AI assessment frameworks and sandbox‑type testing to ensure secure AI roll‑out [260-267][283-295].
Speakers: G. Narendra Nath, A. S. Lakshminarayanan
Announces the creation of AI assessment frameworks and sandboxing initiatives to evaluate security and functionality before production deployment We developed an assessment framework ourselves… (internal AI operating system and capability matrix) to evaluate AI readiness
Unexpected Consensus
Alignment between a government official and a private‑sector leader on the need for sandbox‑based assessment frameworks for AI security.
Speakers: G. Narendra Nath, A. S. Lakshminarayanan
Announces the creation of AI assessment frameworks and sandboxing initiatives to evaluate security and functionality before production deployment We developed an assessment framework ourselves… (internal AI operating system and capability matrix) to evaluate AI readiness
While Narendra discussed national-level sandbox initiatives and the ETI framework, Lakshminarayanan, representing a major telecom company, independently reported building an internal assessment framework, showing an unexpected convergence of public-policy and private-sector approaches to AI risk assessment [260-267][283-295].
Overall Assessment

The panel shows strong convergence on three core themes: (1) AI’s dual nature of opportunity and risk; (2) the necessity of a layered AI governance/operating‑system model; (3) the urgent need for assessment frameworks, capacity building, and risk‑lens tools. These shared positions cut across government, industry, and academia, indicating a high degree of consensus on how AI should be integrated securely into critical infrastructure and enterprise practice.

High consensus – most speakers align on the same strategic priorities, suggesting that future policy and industry road‑maps are likely to co‑evolve around governance frameworks, capacity development, and balanced risk‑benefit perspectives.

Differences
Different Viewpoints
Different preferred mechanisms for securing AI systems
Speakers: Daisy Chittilapilly, G. Narendra Nath
AI pressures every layer of the stack, requiring a shift from hardware‑centric security appliances to a virtual, distributed security mesh Announces the creation of AI assessment frameworks and sandboxing initiatives to evaluate security and functionality before production deployment
Daisy argues that security should be embedded in the network fabric through a virtual, distributed mesh and an AI operating system that governs LLM actions (lines 124-132, 87-90) [124-132][87-90]. Narendra, by contrast, stresses the need for formal assessment frameworks and sandbox environments to test AI systems prior to deployment, emphasizing regulatory and testing approaches rather than architectural redesign (lines 260-267, 268-270) [260-267][268-270]. Both seek safer AI but diverge on whether the solution is architectural (virtual mesh) or procedural (assessment/sandbox).
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over internal safety controls versus external threat-detection mirrors analyses that distinguish safety-focused model validation from security-focused defensive measures [S58].
Focus of capacity‑building efforts – enterprise‑wide AI readiness versus sector‑specific AI divide
Speakers: Daisy Chittilapilly, G. Narendra Nath
There is a significant AI readiness gap in large enterprises, with many lacking data strategy, compute capacity, threat understanding, and innovation capability AI adoption is uneven across sectors, with the health sector lagging behind, creating an AI divide that requires sector‑specific capacity building and assessment frameworks
Daisy highlights a broad ambition-reality gap across enterprises, pointing to missing data layers, compute, and threat expertise (lines 118-123) [118-123]. Narendra focuses on sectoral disparities, noting that finance is mature while health is enthusiastic but immature, calling for tailored capacity-building and assessment frameworks for each sector (lines 242-252, 254-259) [242-252][254-259]. The disagreement lies in whether capacity-building should be pursued as a universal enterprise initiative or as targeted sector-specific programs.
Unexpected Differences
Human factor versus infrastructure‑centric view of AI risk
Speakers: Richard Marko, A. S. Lakshminarayanan
Highlights humans as the weakest link; deep‑fakes and AI‑generated phishing amplify social‑engineering risks AI multiplies existing digital‑infrastructure fragility, especially at the edge, by vastly increasing traffic and long‑lived API sessions
Richard stresses that the primary AI-related security challenge lies with people and social engineering (lines 98-100) [98-100], whereas Lakshminarayanan argues that the core problem is the fragility of the underlying digital infrastructure, which AI will exacerbate (lines 68-76) [68-76]. The contrast between a human-centric risk focus and an infrastructure-centric risk focus was not anticipated given the overall technical nature of the panel.
POLICY CONTEXT (KNOWLEDGE BASE)
The contrast between human-centric risk considerations and infrastructure-centric security aligns with studies on AI as critical public-service infrastructure that stress adoption barriers and human factors, as well as broader calls for human responsibility in AI governance [S52][S61].
Overall Assessment

The panel displayed broad consensus on AI’s dual nature as both opportunity and risk, but diverged on the primary pathways to secure AI—architectural redesign versus procedural assessment, enterprise‑wide versus sector‑specific capacity building, and differing governance mechanisms. These disagreements highlight the need for coordinated policy that integrates technical, regulatory, and organizational strategies.

Moderate to high: while participants share common goals (secure, trustworthy AI), they propose distinct, sometimes competing, approaches. This could lead to fragmented initiatives unless a harmonised framework that balances architectural, regulatory, and governance measures is established.

Partial Agreements
All three speakers agree that AI governance and trust are essential, but they differ on the primary mechanism: Lakshminarayanan suggests a layered AI operating system (lines 87-90) [87-90]; Daisy focuses on embedding security in the network fabric via a virtual mesh (lines 124-132) [124-132]; Pradeep stresses provenance‑based verification of AI‑driven decisions (lines 206-210) [206-210]. The shared goal is trustworthy AI, yet the implementation pathways diverge.
Speakers: A. S. Lakshminarayanan, Daisy Chittilapilly, Pradeep Sekar
Proposes an AI operating system that unites context, agentic, and trust/governance layers to safely orchestrate LLM‑driven actions AI pressures every layer of the stack, requiring a shift from hardware‑centric security appliances to a virtual, distributed security mesh Emphasises protecting decision‑making and trust through provenance, authenticity, and verification mechanisms
Samrat calls for a formal corporate AI responsibility model (lines 91-93) [91-93], while Pradeep proposes board‑level risk lenses (compliance, operational, strategic) to govern AI (lines 222-236) [222-236]. Both aim to embed AI governance at the organizational level, but Samrat emphasizes a responsibility framework, whereas Pradeep focuses on risk‑lens based oversight.
Speakers: Samrat Kishor, Pradeep Sekar
Enterprises should adopt a corporate AI responsibility framework, similar to corporate social responsibility, to own and control the actions of AI systems they deploy Introduces three risk lenses for boards: compliance, operational, and strategic
Takeaways
Key takeaways
AI presents both a powerful opportunity for scaling cybersecurity defenses and a new set of risks such as model leakage, jail‑breaking, data‑driven control‑plane attacks, and open‑source vulnerabilities. The speed of AI adoption outpaces the development of mitigation measures, creating a gap where adversarial nation‑states and sophisticated attackers can exploit AI tools. Existing digital‑infrastructure is already fragile; AI amplifies this fragility—especially at the edge—by dramatically increasing east‑west traffic and long‑lived API sessions. Security must shift from hardware‑centric, perimeter‑only appliances to a virtual, distributed security mesh that is embedded throughout the network and AI stack. A dedicated AI operating system (or AI security operating system) is needed to provide context, agentic control, and governance/trust layers for safe LLM‑driven actions. Human factors remain the weakest link; AI‑generated deep‑fakes and phishing heighten social‑engineering threats, demanding granular visibility into AI agent behavior. Boards should evaluate AI risk through three lenses: compliance (e.g., EU AI Act), operational (model reliability, availability), and strategic (reputation and financial impact of AI‑driven attacks). Governmental bodies are beginning to create assessment frameworks, sandbox environments, and capacity‑building programs to address AI security at national scale. The next five years are critical: organizations need to mature AI governance, talent, and platform capabilities, while AI‑native companies are expected to disrupt existing business models.
Resolutions and action items
Develop and adopt AI assessment frameworks for security and functional validation (suggested by G. Narendra Nath and A. S. Lakshminarayanan). Leverage sector‑specific sandboxes (RBI, telecom, etc.) to pilot AI solutions before production deployment. Create an AI operating system that integrates context, agentic, and trust/governance layers (proposed by A. S. Lakshminarayanan). Establish AI security operating system/playbooks to shift organizations from reactive defense to proactive, AI‑enabled security operations (highlighted by Dharshan Shanthamurthy). Invest in capacity building: talent development, data strategy, compute resources, and governance mechanisms (noted by Daisy Chittilapilly). Encourage corporate AI responsibility frameworks to govern AI actions and outcomes (raised by Samrat Kishor).
Unresolved issues
Specific standards and metrics for measuring AI trust, provenance, and authenticity across enterprises remain undefined. How to operationalize continuous monitoring of AI model drift and prevent long‑term degradation without clear industry guidelines. The exact mechanisms for integrating AI governance into existing IT/OT environments, especially at the edge, were not detailed. Methods for quantifying strategic AI risk (financial impact, reputation) for board reporting need further development. Coordination between national security agencies and private sector on threat intelligence sharing for AI‑enabled attacks was mentioned but not resolved.
Suggested compromises
Balance rapid AI adoption with deliberate security integration—adopt AI for competitive advantage while simultaneously investing in mitigation and governance (as advocated by multiple speakers). Combine hope (AI as a force multiplier for defenders) with fear (AI as a new attack vector) to drive balanced investment in both offensive and defensive capabilities. Transition from hardware‑only security appliances to a hybrid approach that includes virtual, distributed security functions, acknowledging current infrastructure constraints while moving toward a more flexible model.
Thought Provoking Comments
AI is both an opportunity and a challenge – we can use it to manage security at machine scale, but we also have to protect models from jailbreaking, data leakage, poisoning, and the inherent vulnerabilities of open‑source models.
She framed AI in cybersecurity as a dual‑edged sword, highlighting not just the promise of automation but the concrete new attack surfaces that AI introduces.
Set the stage for the entire panel by establishing the central tension; prompted other speakers (e.g., Narendra and Lakshmi) to discuss specific risks (model drift, edge‑infrastructure strain) and mitigation strategies.
Speaker: Daisy Chittilapilly
The adoption of AI is happening at breakneck speed, and while many enterprises use AI to boost productivity, nation‑states and adversarial enterprises are weaponising AI, creating a disconnect that must be bridged.
He highlighted the unprecedented rapidity of AI uptake and the parallel rise of sophisticated AI‑enabled threats, emphasizing a strategic gap between defenders and attackers.
Shifted the conversation from technical challenges to a geopolitical perspective; led to deeper discussion on national‑scale implications and the need for faster, coordinated responses.
Speaker: G. Narendra Nath
Digital infrastructure is already fragile; adding AI will multiply that fragility a hundredfold, especially by exploding east‑west traffic and long‑lived API sessions at the edge, demanding an AI operating system with context, agentic, and trust‑governance layers.
He quantified the systemic impact of AI on existing IT/OT ecosystems and introduced the concept of an AI operating system as a holistic governance framework.
Introduced a new architectural paradigm that redirected the discussion toward platform‑level solutions; other panelists (Daisy, Richard) referenced this when talking about network virtualization and resilience.
Speaker: A. S. Lakshminarayanan
Resilience now must protect not only the infrastructure but also the hidden actions of AI agents—understanding what runs in the background, how commands are transferred, and guarding against interception or modification.
He expanded the notion of resilience to include the opaque behavior of autonomous agents, linking technical risk to human factors like deep‑fakes.
Deepened the analysis of AI‑driven threats by adding a layer of operational opacity; reinforced Lakshmi’s call for trust and governance, and prompted Daisy to discuss virtualized security meshes.
Speaker: Richard Marko
AI levels the playing field for defenders, turning the historically asymmetric cyber‑war into a more balanced contest; we can now use AI agents in SOCs to automate shift handovers and other tasks.
He offered a hopeful counter‑narrative to the fear‑focused discourse, suggesting AI can restore parity between attackers and defenders and create new talent opportunities.
Shifted tone from risk‑centric to opportunity‑centric; inspired subsequent comments about building AI‑enabled security operations and the need for AI‑security operating systems.
Speaker: Dharshan Shanthamurthy
We should view AI risk through three lenses – compliance (e.g., EU AI Act), operational (model reliability, trust), and strategic (financial impact on reputation) – and translate these into quantifiable financial metrics for the board.
He provided a structured risk‑management framework that moves the conversation from abstract threats to concrete governance and reporting mechanisms.
Guided the panel toward actionable governance models; influenced Lakshmi’s discussion of assessment frameworks and Narendra’s mention of sandboxing and regulatory structures.
Speaker: Pradeep Sekar
AI will not just automate tasks; it will scale decisions, requiring a new paradigm where we assess capabilities (talent, culture, platform) against desired outcomes (efficiency, revenue, trust) on a two‑axis framework.
He introduced a strategic assessment matrix that reframes AI implementation as a capability‑outcome alignment problem rather than a collection of isolated use cases.
Provided a concrete roadmap for enterprises, prompting other speakers to reference the need for platform approaches and governance layers; set the tone for the forward‑looking “five‑year” vision.
Speaker: A. S. Lakshminarayanan (later in the discussion)
Overall Assessment

The discussion was driven forward by a series of pivotal insights that moved the panel from a broad framing of AI as a double‑edged technology to concrete, multi‑dimensional challenges and solutions. Daisy’s dual‑nature framing opened the floor, while Narendra’s warning about rapid, adversarial adoption shifted focus to national security. Lakshmi’s exposition of infrastructure fragility and the AI operating system concept introduced a new architectural paradigm, which Richard expanded into a deeper resilience narrative. Dharshan’s hopeful view of AI leveling the cyber‑defense playing field rebalanced the tone, and Pradeep’s three‑lens risk framework gave the conversation a practical governance structure. Finally, Lakshmi’s capability‑outcome matrix provided a strategic roadmap, tying together talent, platforms, and trust. Collectively, these comments redirected the dialogue from abstract concerns to actionable frameworks, influencing subsequent speakers and shaping a forward‑looking consensus on the need for holistic, governance‑driven AI integration in cybersecurity.

Follow-up Questions
How can we develop effective methods to protect AI models from jailbreaking, data leakage, and poisoning, particularly for open‑source models?
She highlighted the inherent vulnerabilities of open‑source AI models and the need for detection and mitigation techniques.
Speaker: Daisy Chittilapilly
What clear definitions and distinctions are needed between cybersecurity issues and AI malfunction or poor design, to avoid confusion in risk assessment?
He noted a lack of clarity on what constitutes a cybersecurity problem versus an AI design flaw, which hampers effective mitigation.
Speaker: G. Narendra Nath
How can comprehensive assessment frameworks be created for evaluating the security and functional integrity of AI systems before deployment?
Multiple participants mentioned the absence of standardized testing, certification, and assessment processes for AI deployments.
Speaker: G. Narendra Nath, A. S. Lakshminarayanan, Pradeep Sekar
What should an AI operating system look like, incorporating context, agentic, trust, and governance layers to safely manage LLMs and AI agents?
She advocated for an AI OS that provides governance and trust mechanisms to control AI behavior across applications.
Speaker: A. S. Lakshminarayanan
How can we ensure resilience by monitoring and securing the background processes, command pipelines, and agent interactions in AI‑driven workflows?
He emphasized the need to understand and protect the detailed steps an AI agent takes, including interception risks.
Speaker: Richard Marko
What strategies are needed to close the ambition‑versus‑reality gap in AI readiness, especially regarding data strategy, compute capacity, and AI threat awareness?
She presented data showing many enterprises lack essential foundations despite strong AI deployment ambitions.
Speaker: Daisy Chittilapilly
How can organizations quantify AI‑related operational and strategic risks, including trust, provenance, authenticity, and financial impact, for board‑level decision‑making?
He outlined three risk lenses (compliance, operational, strategic) and the need for metrics to translate AI risk into financial terms.
Speaker: Pradeep Sekar
What are the implications of AI‑induced increases in east‑west traffic, API call volume, and edge inference on the fragility of critical infrastructure?
She warned that AI will multiply network strain, especially at the edge, potentially overwhelming existing infrastructure.
Speaker: A. S. Lakshminarayanan
What capacity‑building initiatives and talent development programs are required to bridge the AI and cybersecurity skill gap across sectors such as health?
He highlighted a digital and AI divide, stressing the need for skilled personnel and training frameworks.
Speaker: G. Narendra Nath
How can sandbox regulatory frameworks be expanded and standardized to safely test AI innovations across diverse sectors?
He referenced existing sandboxes (RBI, telecom) and suggested broader use for AI experimentation before production rollout.
Speaker: G. Narendra Nath
What new business models and disruption patterns might emerge from AI‑native companies, and how should incumbents prepare?
She warned that AI could spawn a new class of disruptors, similar to past internet and fintech waves, requiring strategic foresight.
Speaker: A. S. Lakshminarayanan
What novel categories of technology (silicon, software, systems) need to be designed to meet the exponential performance demands of AI workloads?
She argued that existing hardware‑software stacks are insufficient for AI’s exponential growth, calling for re‑imagined technology stacks.
Speaker: Daisy Chittilapilly
How can corporations institutionalize ‘AI responsibility’ to govern and own the actions of AI systems they develop and deploy?
He introduced the concept of corporate AI responsibility, indicating a need for policies and frameworks to manage AI behavior.
Speaker: Samrat Kishor

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.