Scaling AI for Billions: Building Digital Public Infrastructure
20 Feb 2026 18:00h - 19:00h
Summary
The panel opened by framing AI and cybersecurity as a two-way relationship, with AI being used to protect systems while security is needed to safeguard AI models themselves [1-4]. Daisy highlighted that AI brings both an opportunity to manage increasingly complex threats at machine scale and a set of risks such as model jail-breaking, data leakage and vulnerabilities in open-source models [12-23]. Samrat noted that AI has moved from the application layer into core infrastructure, making it a fundamental component of system design [25-28]. Narendra warned that the rapid, “breakneck” adoption of AI gives adversarial nation-states and enterprises powerful new tools, while the lack of a separate control plane makes models vulnerable to drift and poisoning, creating national-scale security challenges across sectors [40-44][48-52].
Lakshmi argued that today’s digital infrastructure is already fragile and that AI will amplify this fragility by vastly increasing east-west traffic and long-lived API calls at the edge, stressing networks and platforms [61-70][71-76]. He proposed an “AI operating system” that layers context, agentic control and governance to ensure trust and prevent model misuse [87-90]. Richard added that human users remain the weakest link, especially as deep-fakes blur the line between legitimate and malicious communications, and that resilience now requires detailed visibility into AI-driven actions and careful pacing of deployments [99-110]. Daisy reinforced the gap between enterprise ambition and readiness, noting that most large firms lack data strategy, compute capacity, and the ability to understand AI-related threats, and she called for a shift from hardware-centric security appliances to a virtual, distributed security mesh that accommodates AI’s probabilistic nature [119-151].
Dharshan emphasized the dual emotions of hope and fear, pointing out that AI levels the playing field for defenders through SOC agents and can also create new talent pipelines, while CXOs must balance regulatory compliance with operational and strategic AI risks [158-176][184-191]. Pradeep expanded on this by describing three risk lenses: compliance, operational (model reliability and trust), and strategic (reputation and financial impact), and stressed that AI acts as a force multiplier for both attackers and defenders [223-235]. Narendra highlighted the need for capacity building, assessment frameworks and sandbox regulations to evaluate AI security before production deployment, leveraging existing institutional structures such as CERT-India and sectoral sandboxes [241-270]. Looking ahead, Lakshmi outlined a self-developed assessment framework that plots capability (talent, culture, platform) against desired outcomes (efficiency, revenue, trust), and warned that AI-native companies will likely disrupt existing business models within five years [282-315]. The discussion concluded that while AI promises transformative benefits for cybersecurity, realizing them requires coordinated governance, trust mechanisms, infrastructure redesign, and strategic foresight to mitigate emerging risks [91-93][318-322].
Keypoints
Major discussion points
– AI is both a security tool and a new attack surface.
The panel opened by distinguishing between “AI for cybersecurity” and “cybersecurity for AI” and highlighted that AI brings opportunity (e.g., scaling security operations) and challenge (e.g., model jail-breaks, data leakage, poisoning) [3][8-13][21-24].
– Speed of adoption outpaces risk mitigation, creating a geopolitical “arms race.”
Narendra emphasized that AI is being adopted at a “breakneck speed” and that nation-states and adversarial enterprises are already weaponising it, widening the gap between defenders’ productivity gains and attackers’ capabilities [40-45][46-49].
– Current digital infrastructure is fragile, and AI amplifies that fragility.
Lakshmi warned that enterprises are “running towards the cliff” because existing IT/OT systems are already weak; AI multiplies the strain (e.g., massive east-west traffic, long-lived API sessions at the edge) and demands a fundamentally new “AI operating system” with trust and governance layers [61-70][71-76][84-90].
– Governance, trust, and risk-assessment frameworks are essential for responsible AI deployment.
The need for an AI operating system that embeds context, agents, and a trust/governance layer was reiterated, and Pradeep outlined three risk lenses (compliance, operational, and strategic) that boards must adopt to evaluate AI-driven decisions and trustworthiness [84-90][206-214][222-232].
– Future outlook: AI will reshape talent, business models, and national strategy, but only if organizations act now.
Dharshan highlighted the “hope” side, with AI leveling the defender-attacker playing field and creating new talent pipelines, while Lakshmi and others warned that without a clear assessment framework and strategic foresight, AI-native disruptors will overtake incumbents within five years [158-170][184-190][278-315].
Overall purpose / goal of the discussion
The panel was convened to examine the dual impact of artificial intelligence on cybersecurity, both as an enabler for defending systems and as a new vulnerability vector, and to surface practical, strategic, and policy-level actions (governance models, risk frameworks, infrastructure redesign, talent development) that governments, enterprises, and regulators should adopt to harness AI responsibly while mitigating its emerging threats.
Overall tone and its evolution
– Opening (0:00-2:00): Curious and exploratory, with participants framing AI as a transformative opportunity.
– Mid-section (2:00-10:00): The tone shifts to cautionary urgency, emphasizing rapid adoption, adversarial use, and the fragility of existing infrastructure.
– Later segment (10:00-20:00): Becomes constructive and solution-focused, introducing concepts such as AI operating systems, trust layers, and new governance models.
– Closing (20:00-38:00): Moves toward a forward-looking, balanced optimism, recognising risks but also highlighting strategic opportunities, talent development, and the need for proactive planning over the next five years.
Overall, the conversation progresses from inquisitive optimism to measured concern, then to pragmatic recommendations, ending on a cautiously hopeful note about shaping AI-driven cybersecurity futures.
Speakers
– Samrat Kishor
Expertise: AI, cybersecurity, digital infrastructure (moderator)
Role/Title: Moderator/Host of the panel discussion[S7]
– Daisy Chittilapilly
Expertise: AI, cybersecurity, networking, digital transformation
Role/Title: Cisco representative (speaker on AI and resilience)
– G. Narendra Nath
Expertise: National security, cybersecurity policy, AI governance
Role/Title: Government official involved in national security and cybersecurity frameworks (CERT India, DRD)[S2]
– Dharshan Shanthamurthy
Expertise: Cybersecurity, AI, deep-tech consulting, thought leadership
Role/Title: Leader at a hardcore deep-tech cybersecurity company, consultant and thought-leader for large enterprises and government officials[S1]
– Pradeep Sekar
Expertise: AI risk management, cybersecurity strategy, regulatory compliance
Role/Title: Panelist (cybersecurity professional)
– Richard Marko
Expertise: Cybersecurity resilience, AI-enabled threats, human factors in security
Role/Title: Speaker (cybersecurity expert)
– A. S. Lakshminarayanan
Expertise: Digital infrastructure, AI operating systems, trust & governance in AI
Role/Title: Executive at Tata Communications (referred to as “Lakshmi, sir, from Tata”)[S9]
Additional speakers:
– Ms. Zazie
Expertise:
Role/Title:
1. Opening & framing – Samrat Kishor opened the panel by framing artificial intelligence (AI) and cybersecurity as a two-way relationship: AI can be deployed for cybersecurity, while cybersecurity is required for AI itself. He asked Daisy Chittilapilly about the big-picture changes that AI is bringing to security [1-4][6-7].
2. Opportunity & challenge – Daisy Chittilapilly (Cisco) explained that, as with any new technology, AI is simultaneously an opportunity and a challenge. The expanding cyber-threat landscape, driven by ever-greater connectivity and “phygital” lives, has outgrown human-scale defence, prompting a shift to machine-scale tools [12-14]. AI promises to improve security management at that scale [15-16], yet it also introduces novel risks: models can be jail-broken, confidential data may leak, and open-source models carry inherent vulnerabilities that must be detected and mitigated [21-24].
3. AI as infrastructure – Samrat noted that AI has moved from being a mere application-layer add-on to becoming a fundamental component of the technology stack, embedded in the systems that organisations design, build and operate [25-30][31-33]. He then turned to G. Narendra Nath for a national-security perspective.
4. National-security perspective (Narendra) –
– AI adoption is occurring at “breakneck speed”, outpacing the development of safeguards, and nation-states as well as large adversarial enterprises are already weaponising AI [40-45].
– Unlike traditional systems that separate control- and data-planes, AI models use the data itself as the control plane, making them vulnerable to poisoning, drift and non-deterministic behaviour; over time a model can “drift” and stop behaving as expected, blurring the line between a cyber-security incident and a poor AI design [48-52].
– The rapid spread across finance, telecom, power and other critical sectors raises systemic risk [55-60].
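Narendra’s point that the data itself acts as the control plane is why drift has to be watched continuously rather than caught at design time. As a hedged illustration (nothing the panel specified), a minimal drift check can compare the distribution of a model’s recent output scores against a baseline captured at deployment, using the Population Stability Index; the 0.2 alert threshold is a common rule of thumb, not a standard:

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two score samples.

    Compares the binned distribution of current model outputs against a
    baseline captured at deployment; larger values mean more drift.
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def histogram(xs):
        counts = [0] * bins
        for x in xs:
            idx = min(max(int((x - lo) / width), 0), bins - 1)
            counts[idx] += 1
        # Smooth empty bins so the log ratio stays finite.
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]

    return sum((c - b) * math.log(c / b)
               for b, c in zip(histogram(baseline), histogram(current)))

# Synthetic scores: the same distribution keeps the index near 0,
# while mass shifted towards 1.0 pushes it well past the 0.2 rule of thumb.
baseline = [i / 100 for i in range(100)]        # uniform on [0, 1)
stable   = [i / 100 for i in range(100)]
drifted  = [0.8 + i / 500 for i in range(100)]  # piled into the top bins
```

In practice the `current` sample would come from live inference telemetry; here the drifted sample is synthetic.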
5. Critical-infrastructure view (Lakshmi) – Samrat asked A. S. Lakshminarayanan (Lakshmi) about the state of existing digital infrastructure. He argued that enterprises are “running towards the cliff” because current IT/OT systems are already weak, and AI will multiply that fragility roughly a hundred-fold by dramatically increasing east-west traffic and long-lived API sessions at the edge [61-70][71-76].
– To address this, he proposed an AI operating system composed of a context layer, an agentic layer and a trust/governance layer, enabling organisations to turn LLM-derived knowledge into governed, actionable intelligence [84-90].
– Lakshmi warned that AI will scale decisions, not just transactions, and that, just as booking.com and fintechs disrupted incumbents after the internet wave, a new class of AI-native companies will likely upend existing business models [308-315].
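The “AI operating system” layering (context, agentic, trust/governance) is described only conceptually in the discussion. A minimal sketch of how a trust/governance layer could gate agent actions might look as follows; the class names, default-deny policy, and audit log are illustrative assumptions, not Tata Communications’ design:

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceLayer:
    """Trust layer: decides whether a proposed agent action is permitted.

    Policies map an action name to a predicate over its parameters, so the
    operator, not the model, holds the control plane. Unknown actions are
    denied by default.
    """
    policies: dict = field(default_factory=dict)

    def allow(self, action, params):
        check = self.policies.get(action)
        return bool(check and check(params))

@dataclass
class Agent:
    """Agentic layer: proposes actions but executes only what governance allows."""
    governance: GovernanceLayer
    audit_log: list = field(default_factory=list)

    def act(self, action, params):
        permitted = self.governance.allow(action, params)
        self.audit_log.append((action, params, permitted))  # full visibility
        return "executed" if permitted else "blocked"

# Hypothetical policy: fund transfers allowed only up to a configured limit.
gov = GovernanceLayer(policies={"transfer": lambda p: p.get("amount", 0) <= 1000})
agent = Agent(governance=gov)
```

The design choice mirrors Richard’s later point about visibility: every proposed action is logged whether or not it runs, so operators can audit what agents attempted, not just what succeeded.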
6. Corporate-AI-responsibility – Samrat linked the discussion to the need for “corporate AI responsibility”, likening it to corporate social responsibility (CSR) as a governance imperative.
7. Resilience & human factor (Richard) – Richard Marko highlighted the human factor as the weakest link. Deep-fakes and AI-generated phishing make it harder to distinguish legitimate from malicious communications [98-100], and true resilience now requires granular visibility into what AI agents are doing in the background, how commands are transferred, and whether they can be intercepted or altered [101-105][106-110].
8. Digital-infrastructure readiness (Daisy) –
– Daisy presented Cisco’s AI readiness index, revealing a stark ambition-versus-reality gap: about 90% of just under 1,000 large Indian enterprises plan to deploy AI agents this year, yet only about two-thirds have a data strategy, one-fourth possess sufficient compute capacity, one-third can understand and deal with AI-related threats, and less than one-fifth have the innovation engine to build and scale AI use cases [118-123].
– She argued that traditional, hardware-centric security appliances are becoming obsolete; security must become virtual, distributed and embedded in the network fabric, rewiring the entire stack (silicon, compute, networking) to accommodate AI’s probabilistic nature, which demands new rules for applications that can no longer guarantee deterministic outputs [124-132][136-151].
9. Hope vs. fear (Dharshan) – Dharshan Shanthamurthy described the emotional duality surrounding AI. He noted that AI levels the playing field for defenders-e.g., SOC agents can automate shift handovers-and creates new talent pipelines for a deep-tech cybersecurity workforce [158-176]. He called for an AI security operating system or playbook that enables organisations to proactively leverage AI rather than merely react to threats [184-191].
10. Board-level risk lenses (Pradeep) – Pradeep Sekar outlined three risk lenses for board-level oversight: compliance risk (e.g., EU AI Act, sectoral regulations), operational risk (model reliability, availability, trust) and strategic risk (reputation and financial impact of AI-driven attacks) [222-236]. He cited Microsoft’s Security Copilot as a concrete example of AI automating SOC tasks [212-215], and warned that attackers can industrialise phishing and social-engineering at unprecedented scale [216-219].
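The three lenses lend themselves to a simple board-level risk register. The sketch below uses the lens names from the discussion, but the 1-5 likelihood-times-impact scoring and the roll-up function are illustrative conventions of mine, not Pradeep’s framework:

```python
from collections import defaultdict
from dataclasses import dataclass

# Lens names come from the discussion; the scoring scheme is an assumption.
LENSES = ("compliance", "operational", "strategic")

@dataclass
class Risk:
    name: str
    lens: str          # one of LENSES
    likelihood: int    # 1 (rare) .. 5 (near certain)
    impact: int        # 1 (minor) .. 5 (severe)

    @property
    def score(self):
        return self.likelihood * self.impact

def board_summary(risks):
    """Roll the register up to the worst score per lens for a board view."""
    worst = defaultdict(int)
    for r in risks:
        if r.lens not in LENSES:
            raise ValueError(f"unknown lens: {r.lens}")
        worst[r.lens] = max(worst[r.lens], r.score)
    return dict(worst)

# Hypothetical entries, loosely echoing examples raised on the panel.
register = [
    Risk("EU AI Act gap analysis overdue", "compliance", 4, 3),
    Risk("Model drift in fraud scoring", "operational", 3, 4),
    Risk("Deep-fake brand impersonation", "strategic", 2, 5),
]
```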
11. Government capacity-building (Narendra) –
– Narendra highlighted a “digital-AI divide” across sectors and stressed the need for capacity-building and assessment frameworks.
– Existing institutional mechanisms such as CERT-India, CIPC, and sector-specific sandboxes-RBI’s sandbox for finance and the telecom regulator’s sandbox-can be leveraged to test AI systems before production [241-247].
– He announced a government-funded project (started November 2024) to develop AI-security assessment standards, alongside the ETI framework, to provide systematic evaluation of AI deployments [260-267][268-270].
12. Five-year outlook (Lakshmi) – Lakshmi described Tata Communications’ internal assessment framework, which plots capability (talent, culture, platform) against outcomes (efficiency, revenue, trust) on a two-axis matrix [282-295][300-304]. He warned that the next five years will determine the long-term health of companies, as AI-native disruptors reshape markets [308-315].
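The two-axis matrix can be made concrete with a small placement function. The capability and outcome dimensions follow the summary above; the quadrant labels, the 0.5 threshold, and the averaging of sub-scores are hypothetical choices for illustration, not Tata Communications’ actual framework:

```python
def quadrant(capability, outcome, threshold=0.5):
    """Place an AI initiative on a capability-vs-outcome matrix.

    capability: dict of talent/culture/platform scores in [0, 1]
    outcome:    dict of efficiency/revenue/trust scores in [0, 1]
    Each axis is the mean of its sub-scores (an assumed convention).
    """
    cap = sum(capability.values()) / len(capability)
    out = sum(outcome.values()) / len(outcome)
    if cap >= threshold and out >= threshold:
        return "scale now"
    if cap >= threshold:
        return "find better use cases"
    if out >= threshold:
        return "build capability first"
    return "watch and learn"

# Hypothetical self-assessment: strong capability, mixed outcomes.
position = quadrant(
    capability={"talent": 0.7, "culture": 0.6, "platform": 0.8},
    outcome={"efficiency": 0.9, "revenue": 0.4, "trust": 0.6},
)
```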
13. Nation-state perspective (Narendra – final) – In response to Samrat’s final question, Narendra asserted that AI will be a competitive advantage for nations that adopt it responsibly. He emphasized the urgency of mitigating adverse effects through capacity building, clear frameworks and a five-year roadmap [318-322].
14. Consensus & disagreements – Across the discussion, the panel reached strong consensus on three core themes: (1) AI’s dual nature as both a security enabler and a new attack surface; (2) the necessity of a layered AI governance model-often termed an AI operating system or AI security operating system/playbook; and (3) the urgency of developing assessment frameworks, sandboxes and capacity-building programmes [84-90][190-192][206-210][91-93].
However, disagreements emerged regarding the primary mitigation route: Daisy advocated for a virtual, distributed security mesh embedded in the network fabric [124-132], whereas Narendra emphasised procedural safeguards such as assessment frameworks and regulatory sandboxes [241-247][260-267][268-270]. A second divergence concerned capacity-building focus: Daisy highlighted a universal enterprise-wide AI readiness gap [118-123], while Narendra called for sector-specific initiatives [242-252][254-259]. Finally, Richard’s human-centric view of resilience contrasted with Lakshmi’s infrastructure-centric emphasis [98-100][68-76].
15. Closing – Samrat thanked the panel and the audience, underscoring the need for coordinated governance that blends corporate AI responsibility with board-level risk lenses, while simultaneously investing in talent, infrastructure, and robust assessment mechanisms [91-93][318-322]. This balanced outlook reflects cautious optimism: if acted upon, AI’s transformative potential can be harnessed without compromising cybersecurity resilience.
The context is, have you overdone it? Right? When we talk about AI and cybersecurity, these two areas, how do they come together? There’s AI for cybersecurity, and there is cybersecurity for AI. Right? So what we’re going to do is, we’re going to discuss both aspects. We’re going to at least try. So, you know, the first question, and I’d like to actually point it to Ms. Zazie, you know, what has changed, you know, if you were to look at the larger picture, the big picture, you know, in terms of AI coming into cybersecurity? What has changed?
I think, as happens with all technologies, AI is no different in that sense. It is, of course, as we’ve been hearing over the last few days, a technology that will redefine humanity and how we live, work, play, all of that. But one thing that it has in common with all of the other technologies that have come before it is that it’s both an opportunity and a challenge. And it’s particularly true when it comes to the security space. So on one side, there is the promise that, you know, for some time now, with the advent of technologies, the number of things getting connected, all of our lives going phygital, the cyber-threat landscape has, of course, expanded, and threats have become more and more complex and complicated.
And for some time now, we’ve not been able to manage cybersecurity at human scale. So machine scale, you know, a lot of tooling was already in that space. So there is the promise with AI that you can manage security better. So there is definitely that opportunity. But at the same time, there is the recognition, like Dario Amodei said on the main stage yesterday, that his biggest concern, and all of our concerns, is that AI brings a set of risks, not all of which we have a handle on. And there are a lot of them that we know of at this point in time today. So, like I said, that commonality is there with all technologies that came before it.
It is both an opportunity and a challenge. Because we’ve got to protect models from being jailbroken. We’ve got to make sure that the models don’t leak our confidential information or poison our data. We’ve got to recognise that most of these are open-source models that come with inherent vulnerabilities, so how do we detect them? So we’ve got to think about securing AI as well.
Absolutely, and very rightly said. So it’s becoming a fundamental part of the infrastructure that is being then used to build applications. So earlier, I think, the perspective that changed was that we were looking at AI just at the application layer, but it’s gone much below in the infrastructure. It’s got embedded into the kind of systems which are now getting created and deployed. So we’re looking at AI as a way to make sure that we’re not just building a system that is just going to be running on AI. And that is where I’d like to bring in, Narendra, your perspectives on what you are seeing in terms of national security. You know, is it something which is giving us a spike, a blip, something which you can discuss, disclose here?
Yeah, I mean, it’s required to be discussed. That’s one thing that’s definite. No, one, you know, I take the points that you’ve said. One thing about all the other technological revolutions, as you said, is that, you know, there was a time frame over which that seeped into the system. Okay, and then we had time to look at how do I use it beneficially and also to look at the adversarial effects of it and how do I mitigate those things. Case of AI is that it’s really happening at a breakneck speed. And there’s also an adoption, a willingness to adopt into enterprises of the different AI tools that are there. So that is where the scary part is there.
And the other is the adversarial part of AI: though you use AI for cyber security, the issue is that there are nation states, or big enterprises which are adversarial enterprises, which would be using AI as a tool, and they have got a lot of motivation to put in effort and thought process into how to use it more effectively than the persons who are actually using AI for their own benefit, who are looking at how do I improve my productivity, how do I improve my efficiency; that’s the focus area that they are in. So this is where there is a disconnect, and this has to be really bridged, and that’s where the problem is.
The summit actually, in one way, is helping people become conscious about some of the measures that have to be taken. That is one part. The other is the difference between other systems and this, and this is a little technical: in the other systems we have a separate control plane and a separate data plane, so we could actually control access limits to the control plane. But here the data itself is the control, so you have poisoning of models happening through the inputs that are there. So you could have a drift, and over a period of time you will find that the model will not be behaving as you would expect it to behave, and it’s also not very deterministic. So there are challenges in how do I protect AI systems to see that they give me consistent results after a period of time. Then there is also lack of clarity about what is a cyber security issue and what is an issue of malfunctioning or poor design of an AI system; that lack of clarity also results in the challenges that are there.
I think these are the preliminary thoughts that I have. So at the national scale, the issue is that when you have multiple entities at enterprise scale (the financial sector, the telecom sector, the power sector and all of them) adopting AI, the effect of compromises on the critical information infrastructure is something that would actually make us wake up and see what could be done. Those are issues that are there.
Excellent pointers, sir. Excellent pointers. And I think since you brought in the private sector and the way they’ve evolved, and they’re also subjected to these risks which are evolving in nature, I’d like to bring in Lakshmi, sir, from Tata here. So, sir, a lot of infrastructure is being built, connected, communicated using what you’re building for the nation. So how are you seeing the paradigm shift from, let’s say, how it used to be before AI was commoditized into everyday technology? It used to be in the labs; now it’s out in everybody’s hands. So what is the change that you are seeing and the impact you’re seeing on critical infrastructure?
I don’t think people have woken up to the fact that they are fast running towards the cliff. Because I genuinely think that the digital infrastructure in enterprises today is already fragile. And we know that from an enterprise security point of view, there are so many attacks that are happening. And we know that there are huge issues when it comes to, for example, what we now talk about more, IT/OT security; the operational technology in factories was never in the purview of IT security. Security today, and digital infrastructure in general, is still very fragile. It’s islands of different OEM technologies and many, many things. It is a major issue.
Now, on top of this fragility, you add AI. And this fragility is going to be multiplied 100 times, across many, many kinds of platforms, because AI is going to increase the network traffic, especially the east-west traffic, by, again, multifold. And we all are saying, oh, I’ll embed AI at the edge of the device, and if I have a banking application, I’ll do that, but nobody has thought it through. If you put inferencing at the edge, the number of API calls these have to do is tremendous, and these API calls are long-lived sessions. They’re not traditional API calls.
So the edge infrastructure is going to come under tremendous strain. So that’s why I’m saying that in all our excitement about AI, and I’m very passionate and excited about AI, I genuinely feel that people are not looking at the foundations properly. So that is very fragile, and that is one point I want to make. The second point is that I would like to expand the scope of this discussion. It’s not about AI and cybersecurity alone; it’s also about a broader trust question. I think we all know, you know, whether the messages are fake or real, you don’t know. Apply that in the enterprise context. And there was a talk about, you know, model drifts and so on and so forth.
So what we at Tata Communications are doing, one, is to protect the digital infrastructure through many, many things that we can do. And the unfortunate part is I don’t think enterprises have woken up to the fact that they have to do it. So I tell them that you can’t build a skyscraper on the foundation of a bungalow, which is what they’re trying to do. But when it comes to the drift and the trust part of it, I do believe that enterprises require an AI operating system. And what we mean by that AI operating system is something that brings the context together, because LLMs will provide the knowledge. To make that knowledge into actionable intelligence, you need the context layer, you need the agentic layer, and more importantly, you need to have a trust and governance layer which will control what an agent will do or will not do.
And if I take that control in my hands, and say that I will configure and ensure this agent will do something or not do something, I can make use of the models underneath a lot more intelligently. So I think rather than focusing on whether this LLM is good or that LLM is good and so on, this AI operating system is what is required for people to build an application which will ensure that all of these are governed properly.
Sir, that’s a great point. In fact, I was having a conversation a few days back, and I was saying that that from the time of corporate social responsibility, it’s time to evolve to corporate AI responsibility, where corporates start talking about how they’re controlling and owning the actions of the AI that they’re building and deploying. Great perspective, sir. Thank you very much. At this point, I’d like to bring in Richard to sort of continue the talk about digital infrastructure and resilience. So how has resilience in your perspective evolved when we talk about AI risks to cybersecurity and vice versa?
Well, the question of resilience is a complex question, so I will bring up a few aspects that I think are very important. It is well understood that there are a lot of people in the industry, and that people are typically the weakest link in cybersecurity. The reason is that we as human beings did not evolve to deal with machines, computers and so on, and most of us don’t have really deep technical knowledge about how systems work. So we are, to a big extent, dependent on a relatively superficial understanding, and so we are easier to trick with different social-engineering tricks. Now with AI this is becoming a big issue, because how can you distinguish a scam from a real communication when the scam communication looks exactly like the real communication? I’m talking about deep fakes and so on. So this is one aspect of the risk connected directly with people.
The other aspect is that we want AI to empower people to do more things, and to do them in an easier way. So we have those agents, and we give them, or we want to give them, commands like “do this for me” or “do that for me”, but we don’t understand all the steps that the agent will take on our behalf when performing those tasks. And in each of those tasks there can be a risk factor involved without us knowing: if you want to perform this action, you will need those additional tools to achieve it. And where do you get those additional tools? If AI decides on your behalf, “these are the tools you need”, software packages, whatever it is, and they get to your computer without this being supervised, then this is a problem. So we have to be very careful, and this is where I’m heading: resilience here is really about protecting, about paying attention to details.
What is actually happening? What is running in the background? How are your commands transferred to the agents? Is there a possibility for them to be intercepted, to be modified? It was difficult and complex even before the advent of the new AI agentic approach; now it’s becoming even more important to really go into all the details. And we just heard from Lakshmi that he sees that we are moving towards a cliff. Well, it depends on us, of course. We want to go fast. We want to deploy. We are all excited about AI, but maybe sometimes we need to slow down a little bit and make sure that the pieces are in place and cyber security is not overlooked.
Excellent, excellent perspectives. And I think an offshoot to that question can be to Ms. Daisy, which is what are you seeing as changing when you’re talking about digital infrastructure and especially the connectivity which it needs because you’re at Cisco, right? And here is something which is connecting a lot of things to a lot of other things. So how are you seeing changes happening, especially when you talk about resilience and what’s going on inside digital infrastructure?
So I think Lakshmi touched on a very important point: the fragility of the underlying infrastructure. And that is something that I want to reiterate. For the past few years, we’ve been publishing an AI readiness index. And the good news is that we are as ready as everybody else. The bad news is maybe we’re not as ready as we think we are, which is the point Lakshmi is making, right? 90% of the just under 1,000 large enterprises that we spoke to in India want to deploy agents this year. Forty percent want that agent to work alongside a human being, but only about two-thirds of those enterprises really have a data layer, a data strategy, a data platform, and a data governance strategy.
Only about one-fourth have the compute capacity they need. Only about one-third are able to understand AI threats and deal with them. And less than one-fifth have the innovation engine to think about building and scaling and maintaining AI applications and use cases. So clearly there is this ambition-versus-reality gap which we have to solve for. That’s not a problem as long as we all know that that’s where it is, and they were acutely aware of this issue. The other thing is that AI is essentially leading to a rewiring and a restacking of the enterprise. It’s not just networks; it’s compute, it’s silicon. I know, you know, at the national level silicon security is a conversation. So all this resiliency, which we used to build almost like a bolt-on at the top, and which we used to think of only as cyber resiliency, is now a system resilience which is built into all layers of the infrastructure stack, all layers of the AI stack. And that’s why at Cisco, since you asked me a network-specific question: we used to deal with connectivity largely as connectivity, and now we know the persona of that end port, the one that connects to an end device that might be doing inferencing, or that sits in the data center. That persona has to be, on one side, a switch or a router, but on the other side it will also be a security defense point.
So this ability of building a special grade of security appliances and putting them in various parts of the network is fast becoming an outdated idea. What we’ve got to do is break it into a number of virtual instances that can go wherever you want the security policy to be. So it becomes a very virtual, distributed mesh rather than hardware. Yes, there will be hardware; I’m not saying it will go away. But there is this ability to infuse it into the fabric, and networks tend to be the all-pervasive fabrics. That’s the way, at least at Cisco, we’re thinking about it. So these domains of networking and security are crashing together; secure networking is the conversation in the network space particularly.
The other part is the performance requirement, which Lakshmi also alluded to. AI will put pressure on the underlying infrastructure. It is an exponential technology, so the demands it creates on its underlying layers are also exponential. We've almost got to build a new category of technology: silicon, systems, applications, everything. A new category has to be built, and we have to build it in new ways; you cannot build it the way we built things in the past. Applications are an interesting one. We used to give an input and expect the same output on the other side. But now, if you are going to deploy AI models, the technology is probabilistic.
And I referred to this earlier. You want to get it to a degree of assurance, because in a financial application or a very important citizen-service application, you give an input and the output has to be deterministic. But at the core you are using a probabilistic technology. That refinement takes a whole lot of work. So it is rethinking all the layers, from silicon to software to systems; you have to rethink everything. Every rule we have to rethink.
Excellent, excellent. And since you brought in that perspective of rethinking, reimagining, and how we're using AI in the operating system of the company, I'd like to bring in Dharshan here. Dharshan, you do a lot of great work creating thought-leadership content as well as consulting for very large companies. Of course, there are CXOs and a very highly ranked official of the government sitting here, but what are the other CXOs thinking about when it comes to AI? Is it still a compliance thing, or has it percolated into strategy?
First of all, thank you. I'll probably add some context to what I've heard so far. My view is that any technology disruption brings in two emotions: hope as well as fear. I'm sure the other panelists have rightfully covered the fear construct of AI in cyber safety, and rightly so; no disputing that truth. But there is a huge hope component for a company like ours, because we are a hardcore deep-tech cybersecurity company, and I see a lot of opportunities. We as a country, India, are at the sweet spot at the intersection of AI and cybersecurity. This topic is very aptly crafted, because I think it is a huge opportunity for us to utilise.
And I'll tell you why. Cybersecurity has so far been a very asymmetric equation. The intruders have always had an advantage over the defenders, because they just need to get one thing right while we need to get everything right. But with AI, all of a sudden we are on a level playing field from a technology standpoint to identify the needle in the haystack. One classic use case can be an agentic security operations centre. Because at the end of the day, if you have ever visited a security operations centre, it is a 24x7 operation with an analyst looking at a screen, almost an inhuman job, so to speak.
But today, with AI, you've got a level playing field; we've seen those kinds of use cases deployed at our SOC, where even a shift handover is done by an agent. So there are a lot of real use cases, and I'm on the hope side: there are a lot of opportunities today. Second, in terms of talent: we have a lot of youngsters sitting in this room who are looking to grow. We have spoken so much about job opportunities evaporating in other services and other areas. I think we can create the world's cybersecurity talent in combination with AI, because cybersecurity and AI are not two different fields. Cybersecurity needs AI, and AI needs cybersecurity.
So I think we are at a very opportune time to really ride this wave and create world-class talent which can address these challenges. Now, on the second part you just asked about: that is what we are hearing from CXOs globally, since we deal with a lot of people in the payment ecosystem. CXOs obviously have the same construct of hope versus fear. For some, being a CISO or a CIO, there is an amount of fear coming in because these are real problems; for example, deepfakes or spear-phishing attacks have become more robust with AI. But one of the key things we are trying to explain is: yes, those are things you need to address, no doubt, but can you also look at how you can take advantage of AI?
And Lakshmi rightly pointed out, how do you have an AI operating system? Similarly, we talk about how you can have an AI security operating system, where you have a playbook on how to leverage AI rather than only playing defence. Those are my views, Samrat.
Excellent, excellent views, and thank you very much for those perspectives. I'm glad I still see people coming in; this is an interesting session, and some people are standing as well. I would like to bring in Pradeep now. Pradeep, as a follow-on to what I just asked Dharshan: this topic is at the top of mind, and we're saying it is percolating into strategy a bit. Do you think we should have a dedicated function within an organisation, and what are you seeing currently, not just in India but elsewhere as well?
Yeah, thank you for that. Probably adding on to what Dharshan said: I don't mind the hope-and-fear framing, because being in the cybersecurity space, both of them add to what we can do for the industry as a whole and for the country as a whole. When we look at it strategically, when we talk to leaders and boards at companies in India and across the world, the conversation about AI predominantly goes towards innovation, competitiveness, and the ability to bring in productivity gains. What often gets missed is that AI is quietly reshaping the risk equation within the enterprise. Cybersecurity can no longer be just about protecting systems and data.
Now, don't get me wrong: cybersecurity is still needed to identify all the systems within your enterprise and the extended enterprise beyond it, and to protect the data on all of those systems. But it needs to evolve into something more, given the AI landscape, which is, I love how Lakshmi put it, going to be about trust. So going forward, how can cybersecurity evolve to start protecting decision-making and trust? Because trust is starting to become measurable, through provenance, through authenticity, and through verification. All of these mechanisms are going to come in, in a way that lets us identify, measure, risk-rank, and call out whether a particular transaction you are doing, whether it is a payment approval or an executive communication, is trustworthy or not.
And then accordingly, the agent or the system that is allowing the transaction to go through allows it or not. That is something we are seeing, and AI in this context is a force multiplier on both sides. For us as defenders, we are seeing, like Dharshan said, how we can detect and identify threats at a scale and speed we have never seen before. Is it going to completely revamp how we run SOCs? A little, yes. It is not going to replace all the analysts, but for certain tasks we have already started seeing, with Microsoft's Security Copilot, how it can automate work.
Like different agents doing different tasks; we are already starting to see that. But in addition, AI is also helping attackers on the other side of the equation: it is industrialising disruption at scale. Think phishing. Think social engineering. All of this manipulation is now happening at an unprecedented scale, and you are going to see it continue for the next few years, because that is where we are headed in terms of AI-aided phishing and manipulation, and how this will impact the industry as a whole. That is pretty much how the tectonic shift is happening across the board.
So, working with leaders and board members, we look at how to frame these risks, and here we usually see three lenses. One is the compliance risk: am I complying with the EU AI Act? Am I complying with the DPDP Act or other sectoral guidance? It is more of a check-the-box approach. It may help protect me against regulatory exposure, but not against systemic risk, like what Ms. Daisy was saying. The second lens, which some companies have started to move towards, is operational risk, where boards are starting to ask: what models am I using? Are they reliable?
Are they safe? Are they trustworthy? And what is the risk if the service provider offering that model goes down? That is the operational-risk angle we are seeing more of. The third lens, which I think very few companies apply today, is the strategic-risk angle: being able to call out, if there is an AI-driven identity attack that is impacting the reputation of my organisation with my customers, what is my exposure in financial terms? These are questions boards will need to start asking; we are already getting these questions from leaders about how to measure and quantify risk in financial terms.
And to be able to convey that to the board as well, because at the end of the day that is what boards are concerned with: being answerable to the stakeholders and shareholders.
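Pradeep's three risk lenses can be sketched as a simple board-reporting structure. The Python sketch below is purely illustrative: the lens categories come from the discussion, but the item names, exposure figures, and helper functions are hypothetical assumptions, not any panelist's actual tooling.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLens(Enum):
    COMPLIANCE = "compliance"    # e.g. EU AI Act, DPDP, sectoral guidance
    OPERATIONAL = "operational"  # model reliability, safety, provider availability
    STRATEGIC = "strategic"      # reputational and financial exposure

@dataclass
class AIRiskItem:
    description: str
    lens: RiskLens
    financial_exposure: float = 0.0  # rough estimate, in currency units

def board_summary(items):
    """Aggregate AI risk items per lens for a board-level view."""
    summary = {lens: {"count": 0, "exposure": 0.0} for lens in RiskLens}
    for item in items:
        summary[item.lens]["count"] += 1
        summary[item.lens]["exposure"] += item.financial_exposure
    return summary

# Hypothetical register of AI risks for one enterprise.
items = [
    AIRiskItem("EU AI Act conformity gap", RiskLens.COMPLIANCE),
    AIRiskItem("Model provider outage", RiskLens.OPERATIONAL, 250_000),
    AIRiskItem("Deepfake identity attack on brand", RiskLens.STRATEGIC, 2_000_000),
]
print(board_summary(items)[RiskLens.STRATEGIC])
# → {'count': 1, 'exposure': 2000000.0}
```

The point of the sketch is the shift Pradeep describes: compliance items are check-the-box, while operational and strategic items carry quantified financial exposure, which is what boards ultimately ask for.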
That's great, and those are some interesting lenses you put on the whole conversation. Sir, I'd like to bring you in now, from your vantage point. When we talk about India's DPI, we are implementing AI into systems that cater to healthcare, to telecom, across the citizen supply chain, if you will. So how do we make sure that the AI deployments we are doing are secure, and what capabilities do we have to take care of the risks that the fellow panelists highlighted?
The financial sector, for example, is mature. But take the health sector: it is not as mature as others. Yet if you look at the enthusiasm of the health sector to adopt AI, you will find that the level of enthusiasm is similar to that in the other sectors. So that is a big challenge. We have been engaging with the health sector; we have had recent meetings to ask how to improve the cybersecurity posture of that sector. We had the digital divide; we have a cybersecurity divide; and now we are going to have an AI divide across enterprises in different sectors.
So that is a challenge that needs to be addressed. That, I think, is the capacity-building part, along with coming up with frameworks that people can access and understand what is really required to be done. And you talked of assessment. When an enterprise comes with an AI system: is it secure? Is it doing the work it is supposed to do? We do not have those assessment frameworks now. So the testing and assessment part is important, and creating that infrastructure so that people can go and test and assess is an important part. The department of DRD has come up with an ETI framework, if you are aware of it.
Similarly, from our office we also funded a project, around a year back; it started in November 2024. We funded the project to come up with an assessment framework for AI systems. One aspect of that is security, and the other is, of course, the functional aspect: somebody claims that this AI system does something, so how do you actually assess that? So one part is capacity building, and the other is having the frameworks in place. One good thing about this country is that we have an institutional framework that has been established over time, especially for cybersecurity, like CERT-In and NCIIPC.
And the sectoral regulators have also come up with sandboxing regulations, in the sense that if you want to try out something new, these regulations help you to do so. In the financial sector you have the RBI sandbox, and the telecom sector also has this mechanism. So people can start using these sandboxes to prove technologies, applications, and use cases, and that will help them actually understand how something works before they deploy it in production. That, I think, would help going forward.
Awesome. Thank you, sir. I think it is enlightening and enriching for all of us to know your perspective, especially what the government is doing. I'd like to bring in Lakshmi sir from Tata for the next question. Sir, if we reconvene here five years from now, what are we going to be talking about? What did we do? What did we get right?
I think these discussions are very healthy, whether we view AI with a positive lens or with a fear lens. Let me make two comments. One is on the question of assessment. We ourselves in TataCom, when we asked ourselves where we want to be five years from now, I made a statement that the next five years will determine the health of the company for the next 50 years, because the technologies are moving very fast. For an assessment framework, we studied a lot of material and did not find anything good, so we developed a framework ourselves, where on one axis we plotted capability. It includes talent.
It includes the platform, which is why I said there is no point in doing individual use cases in an organisation; how many use cases will you do? You need a platform approach, which is where we said an AI operating system is required, and that is maturing. So on one axis we plot capability: talent, and even culture. I don't know whether people have appreciated that AI is a very different paradigm. Even now in these discussions I see people talking about how AI can help automate things and do things faster. No, that is not what AI will do. While the previous technologies of cloud and the internet helped companies scale transactions, AI is going to scale decisions. And when you are scaling decisions, you need to think of a different paradigm altogether, yet we are still talking in the old paradigm of which tasks can be automated. So on the capability axis, the culture dimension has to be thought through carefully, and the talent has to be appropriate; I find some of the younger talent easier to train on AI than some of the older, unfortunately. So the whole talent-and-capability equation is one axis, on which we are going to plot ourselves. The other axis is outcomes: what outcomes do you really want to deliver with AI?
Outcomes could be more on efficiency, on revenue enhancement, or on trust and customer satisfaction. All those outcomes need to be plotted. I must admit we ourselves are somewhere in the lower quadrant, and I hope we as a company will move to the top quadrant. That needs to be defined and visualised; only then can you move towards it. That is what we are driving the company towards, with all the platform development we are doing and the strengthening of our infrastructure for enterprises, and we have shared some of these assessments with our customers as well. So that is one: I hope most people will see themselves moving towards the top quadrant in five years' time.
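Lakshmi's two-axis assessment can be sketched as a simple quadrant placement. Everything in the sketch below is an assumption for illustration: the 0-10 scale, the threshold, and the sample scores are hypothetical; only the axis dimensions (talent/culture/platform versus efficiency/revenue/trust) come from the talk, and this is not TataCom's actual framework.

```python
# Minimal sketch of a capability-vs-outcomes assessment matrix.
# The axis dimensions come from the discussion; the 0-10 scale,
# threshold, and sample scores are hypothetical.

CAPABILITY_DIMS = ("talent", "culture", "platform")
OUTCOME_DIMS = ("efficiency", "revenue", "trust")

def axis_score(scores, dims):
    """Average the 0-10 scores for one axis's dimensions."""
    return sum(scores[d] for d in dims) / len(dims)

def quadrant(scores, threshold=5.0):
    """Place an organisation into one of four quadrants."""
    cap = axis_score(scores, CAPABILITY_DIMS)
    out = axis_score(scores, OUTCOME_DIMS)
    if cap >= threshold and out >= threshold:
        return "top (high capability, high outcomes)"
    if cap >= threshold:
        return "capability-rich, outcomes lagging"
    if out >= threshold:
        return "outcomes ahead of capability"
    return "lower quadrant (early stage)"

# Hypothetical self-assessment for one organisation.
org = {"talent": 6, "culture": 4, "platform": 5,
       "efficiency": 3, "revenue": 2, "trust": 4}
print(quadrant(org))  # → capability-rich, outcomes lagging
```

The design choice mirrors the talk: capability and outcomes are scored separately, so an organisation can see whether it is investing in talent and platforms without yet converting that into efficiency, revenue, or trust.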
The second thing I worry about, in the context of strategy, is this. In the previous technology wave, when we had the internet and cloud, new business models came about. We had intermediaries coming in, the Booking.coms and others, who disintermediated many, many players, or fintechs who came and did things better than the larger banks. The larger entities woke up only when they realised these players were going to eat their lunch. That is what happened in the previous wave of technology.
In AI, I think a similar disruption is waiting to happen. We don't know where, when, or what. But if a strategy does not think about what disruptions are going to happen, we will have missed the bus. So five years from now, I would expect a new class of AI-native companies to be out there disrupting the existing business models. Those are the two things I would expect in five years.
Fabulous. And sir, one last question to you. If you were to give me a call five years from now and say, "Samrat, this is how nation-states have changed," what would that be?
See, one part is that adoption of AI is a competitive advantage, as I have said elsewhere. You have to adopt AI; you have no other choice, because other nations and other enterprises are going to adopt AI and look at how to do business better. So, five years down the line, you will find that we will have adopted AI, and this conference is very good for that. The other part is protecting yourself from the adverse effects of AI, because it is a very powerful tool. This is just a thought process, but as was pointed out, in just one year so much development has happened that we do not know where this is really going to lead us. So the thing is for us to be on our toes, to look at how this technology is going to affect the way we do business and run our countries, and then to develop capacity and capability, identify the dependencies we take on when this technology is adopted, and see how to mitigate the dangers of those dependencies. That, I think, is where the thought process would be, and that is the road map for the next five years for us.
Thank you, thank you very much, sir, and thank you to all the panelists for taking time out and agreeing to do this for the audience. I see the room is full, with a lot of people waiting on the sides as well. Thank you all for paying attention; please put your hands together for the esteemed panel we have here. We have to conclude this panel only for the paucity of time; otherwise we could have gone on. Thank you very much. Thank you.
“AI can be deployed for cybersecurity while cybersecurity is required for AI.”
The knowledge base highlights the dual nature of AI in security, noting it can both enhance defenses and introduce new risks, confirming the two-way relationship described [S34].
“AI is simultaneously an opportunity and a challenge; the expanding cyber‑threat landscape has out‑grown human‑scale defence, prompting a shift to machine‑scale tools.”
Reports describe escalating threats in scale, sophistication and frequency, and stress the need for AI-driven tools to keep pace, supporting the opportunity-challenge framing [S119] and [S120] and the human-capacity gap [S121].
“AI introduces novel risks such as model jail‑breaking, confidential data leakage, and vulnerabilities in open‑source models.”
Open-source model risks and broader AI security concerns are documented, including potential data exposure and model manipulation [S122] and [S123]; agentic AI behaviours that can act independently are also noted [S114].
“AI has moved from being a mere application‑layer add‑on to becoming a fundamental component of the technology stack.”
AI is described as a technology that will redefine how societies work and is being embedded as core infrastructure, indicating its shift from peripheral to foundational status [S1] and its rapid advancement [S128].
“AI adoption is occurring at “breakneck speed”, outpacing the development of safeguards, and nation‑states as well as large adversarial enterprises are already weaponising AI.”
UN remarks emphasize AI’s rapid pace and the urgency of governance, while other sources note that legislation is being drafted at breakneck speed, reflecting concerns about weaponisation and insufficient safeguards [S67] and [S129].
“AI models use the data itself as the control plane, making them vulnerable to poisoning, drift and non‑deterministic behaviour.”
The knowledge base discusses AI model vulnerabilities such as data poisoning and model drift, underscoring the control-plane nature of data in AI systems [S34].
The panel shows strong convergence on three core themes: (1) AI’s dual nature of opportunity and risk; (2) the necessity of a layered AI governance/operating‑system model; (3) the urgent need for assessment frameworks, capacity building, and risk‑lens tools. These shared positions cut across government, industry, and academia, indicating a high degree of consensus on how AI should be integrated securely into critical infrastructure and enterprise practice.
High consensus – most speakers align on the same strategic priorities, suggesting that future policy and industry road‑maps are likely to co‑evolve around governance frameworks, capacity development, and balanced risk‑benefit perspectives.
The panel displayed broad consensus on AI’s dual nature as both opportunity and risk, but diverged on the primary pathways to secure AI—architectural redesign versus procedural assessment, enterprise‑wide versus sector‑specific capacity building, and differing governance mechanisms. These disagreements highlight the need for coordinated policy that integrates technical, regulatory, and organizational strategies.
Moderate to high: while participants share common goals (secure, trustworthy AI), they propose distinct, sometimes competing, approaches. This could lead to fragmented initiatives unless a harmonised framework that balances architectural, regulatory, and governance measures is established.
The discussion was driven forward by a series of pivotal insights that moved the panel from a broad framing of AI as a double‑edged technology to concrete, multi‑dimensional challenges and solutions. Daisy’s dual‑nature framing opened the floor, while Narendra’s warning about rapid, adversarial adoption shifted focus to national security. Lakshmi’s exposition of infrastructure fragility and the AI operating system concept introduced a new architectural paradigm, which Richard expanded into a deeper resilience narrative. Dharshan’s hopeful view of AI leveling the cyber‑defense playing field rebalanced the tone, and Pradeep’s three‑lens risk framework gave the conversation a practical governance structure. Finally, Lakshmi’s capability‑outcome matrix provided a strategic roadmap, tying together talent, platforms, and trust. Collectively, these comments redirected the dialogue from abstract concerns to actionable frameworks, influencing subsequent speakers and shaping a forward‑looking consensus on the need for holistic, governance‑driven AI integration in cybersecurity.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.