Cyber Defenders in the Age of AI

21 Jan 2026 15:45h - 16:30h

Session at a glance

Summary

This World Economic Forum panel discussion focused on cybersecurity challenges in the age of artificial intelligence, featuring experts from the regulatory, technology, and telecommunications sectors. The conversation was moderated by James Harding and included Jessica Rosenworcel, former FCC chair now running the MIT Media Lab; Nadav Zafrir of Check Point Software Technologies; Jill Popelka of Darktrace; and Marc Murtra of Telefónica.


The panelists emphasized how AI has fundamentally transformed the cybersecurity landscape by dramatically increasing both the speed and sophistication of attacks. They shared personal experiences with AI-driven threats, particularly deepfake voice impersonations targeting CEOs, which have become increasingly common and convincing. Rosenworcel highlighted how the rapid expansion of connected devices creates more vulnerabilities that AI can exploit at unprecedented speed, while also offering new defensive capabilities.


A key theme was the blurring lines between state and non-state actors, as AI democratizes access to previously sophisticated attack capabilities. The discussion revealed how AI agents can now communicate with each other in ways that create new security vulnerabilities, with attackers moving faster than defenders because they face fewer regulatory constraints. The panelists stressed the importance of moving from traditional identity-based security to contextual, multi-layered approaches.


The conversation also addressed broader societal implications, including threats to information integrity and democratic processes, as demonstrated by AI-generated deepfakes of political figures. European technological sovereignty emerged as a concern, with participants noting Europe’s dependence on third-party cybersecurity technologies. When asked whether the next two years would bring greater safety or vulnerability, the panel was cautiously optimistic long-term while acknowledging significant near-term challenges, with most believing that collaborative defense efforts and human resilience would ultimately prevail over malicious actors.


Key points

Major Discussion Points:

AI-Enhanced Cyber Attacks and Vulnerabilities: The panel discussed how AI is dramatically increasing the speed, sophistication, and scale of cyber attacks. Examples included deepfake voice impersonations targeting CEOs, AI-powered email scams, and the ability for malicious actors to identify vulnerabilities at unprecedented speeds. The panelists noted that AI allows less skilled attackers to conduct sophisticated operations.


The Blurring Lines Between State and Non-State Actors: Participants explored how AI is making it increasingly difficult to distinguish between government-sponsored cyber attacks and criminal organizations. The democratization of AI tools means that capabilities once exclusive to nation-states are now accessible to smaller groups, complicating attribution and response strategies.


Defensive Capabilities and Collaboration: The discussion covered how AI can also enhance cybersecurity defenses through anomaly detection, autonomous response systems, and AI-powered security agents. Panelists emphasized the importance of public-private partnerships, information sharing between security companies, and the need for interoperability between different security solutions.


Geopolitical and Sovereignty Concerns: The conversation addressed cybersecurity as a matter of national security, particularly highlighting Europe’s dependence on non-European cybersecurity technologies. Panelists discussed the need for regional technological sovereignty and the development of domestic AI and cybersecurity capabilities.


Information Integrity and Trust in Digital Communications: The panel examined how AI-generated deepfakes and sophisticated impersonation attacks are undermining trust in digital communications, affecting everything from business operations to democratic processes and journalism. They discussed the challenge of maintaining information integrity in an age of easily manipulated content.


Overall Purpose:

The discussion aimed to educate the World Economic Forum audience about the evolving cybersecurity landscape in the age of AI, examining both the new threats and defensive opportunities that artificial intelligence presents. The panel sought to provide insights from regulatory, technological, and business perspectives while exploring the geopolitical implications of AI-enhanced cyber warfare.


Overall Tone:

The discussion maintained a serious but measured tone throughout, with the moderator explicitly stating his hope for an educational “lesson” rather than an argumentative debate. The tone was cautiously optimistic despite discussing significant threats – panelists acknowledged the severity of emerging risks while expressing confidence in human resilience and technological solutions. There were moments of levity (references to Marvel movies, James Bond analogies) that helped make complex technical topics more accessible. The conversation concluded on a generally hopeful note, with most panelists predicting that while the next two years may be challenging, the long-term outlook favors the “cyber defenders” through improved collaboration, technology, and human adaptability.


Speakers


James Harding – Moderator, The Observer (London)


Jessica Rosenworcel – Currently running the MIT Media Lab, former chair of the Federal Communications Commission (FCC)


Nadav Zafrir – Representative of Check Point Software Technologies (Israel), former head of Unit 8200 (Israeli intelligence)


Marc Murtra – Chair of Telefónica


Jill Popelka – CEO of Darktrace (UK cybersecurity company)


Audience – Various audience members who asked questions during the session



Additional speakers:


Liz Corbyn – European Broadcasting Union, representing public broadcasters across Europe


Daniela Tonella – CIO of ING, the Dutch digital bank


Full session report

Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion

Executive Summary

This World Economic Forum panel discussion, moderated by James Harding from The Observer, examined the transformative impact of artificial intelligence on cybersecurity. Harding opened by noting this was a “gentle” topic compared with President Trump’s address to the Forum an hour or so earlier, while acknowledging the self-selecting nature of the Davos audience interested in cybersecurity challenges.


The panel featured Jessica Rosenworcel, currently at MIT Media Lab and former chair of the Federal Communications Commission; Nadav Zafrir from Check Point Software Technologies and former head of Unit 8200 (Israeli intelligence); Jill Popelka from Darktrace; and Marc Murtra from Telefónica. The discussion revealed how AI has fundamentally altered the cybersecurity landscape, creating both unprecedented threats and new defensive opportunities while challenging the foundations of digital trust.


The AI-Driven Transformation of Cyber Threats

Acceleration and Sophistication of Attacks

The panelists unanimously agreed that artificial intelligence has dramatically transformed the cybersecurity threat landscape. Rosenworcel emphasized how AI enables malicious actors to identify vulnerabilities at unprecedented speed, noting the challenge of securing an expanding universe of connected devices. She shared a concrete example of AI’s impact, describing how President Biden’s voice was deepfaked for robocalls in New Hampshire, demonstrating the sophistication of current AI-enabled attacks.


Zafrir highlighted a critical asymmetry: attackers are moving faster than defenders because they face fewer regulatory constraints and demonstrate superior collaboration. He noted that capabilities previously exclusive to nation-states are now visible on the dark web, fundamentally changing the threat landscape.


Popelka provided a personal example, describing how her own voice had been deepfaked and used in an attempted attack while she was in a board meeting. She emphasized how these attacks have become increasingly common and convincing, representing a new category of social engineering that exploits both technological capabilities and human psychology.


Democratization of Advanced Attack Capabilities

A central theme was how AI democratizes access to previously sophisticated attack capabilities. Murtra explained that AI makes advanced techniques accessible to non-experts, fundamentally changing who can pose cybersecurity threats. Telefónica, with its 340 million customers, each a point of data and entry, faces this democratized threat landscape across a massive attack surface.


Zafrir provided insight into this democratization, noting that AI creates semantic connections between humans and machines through natural language interfaces. Individuals no longer need advanced mathematical or programming skills to write code or cause significant damage, which he suggested explains why “attackers are having fun” in the current environment.


Fundamental Security Model Inadequacies

The Challenge of Securing AI Agents

One of the most significant insights came from Zafrir’s analysis of how current security models fail to address AI agents effectively. He explained that while cybersecurity professionals understand how to secure humans and infrastructure, the critical problem lies in the interoperability between these different security approaches during the current transition period.


Zafrir described AI agents as “very naive” entities that “start crossing lanes” between different systems and security domains. Current security frameworks attempt to treat these agents as humans, applying identity-based security models, but AI agents lack human identity and operate fundamentally differently. This mismatch creates significant vulnerabilities.


To address this challenge, Zafrir suggested developing “guardian agents”—proprietary AI systems designed specifically to oversee and secure other AI agents, recognizing that securing AI systems may require AI-native security solutions.


Moving Beyond Traditional Authentication

The inadequacy of traditional security models extends beyond AI agents to broader questions about authentication. The panelists agreed that binary authentication approaches are insufficient for the AI era, advocating instead for contextual and layered identity verification systems that consider behavioral patterns and environmental factors.


Defensive Capabilities and Collaborative Approaches

AI-Enhanced Defense Strategies

Despite significant threats, panelists recognized AI’s defensive potential. Popelka emphasized anomaly detection based on understanding normal network behavior, arguing this provides better defense than traditional approaches that attempt to identify known threat patterns.


The defensive applications include autonomous response systems, intelligent threat detection, and AI-powered security agents that can operate at the same speed as AI-enhanced attacks. However, implementing these capabilities requires fundamental changes in organizational approaches to cybersecurity.


The Imperative for Collaboration

Strong consensus emerged around the necessity of enhanced collaboration, though panelists differed on specific mechanisms. Rosenworcel emphasized the need for governments to create “safe spaces” for public-private sector dialogue, noting that exponentially larger threats require different kinds of collaboration than previously existed.


Zafrir advocated for open platform collaboration between security companies, moving away from “closed garden consolidation” towards more interoperable approaches. He mentioned Check Point’s strategy of acquiring European AI startups, including British and Swiss companies, as part of building collaborative capabilities.


Popelka supported direct vulnerability sharing between companies, arguing that when organizations share information about discovered vulnerabilities, it strengthens the overall security ecosystem.


Geopolitical Implications and Sovereignty Concerns

European Technological Vulnerability

Murtra raised significant concerns about European cybersecurity sovereignty, noting that Europe lacks indigenous cybersecurity technology and deep expertise. He provided a specific example of a cyber attack on a London airport that required bringing in external forensic experts, illustrating Europe’s dependency on outside expertise.


This dependency creates strategic vulnerabilities, particularly problematic during conflicts involving state actors. The sovereignty concerns extend to broader questions about supply chain security and the ability to maintain critical infrastructure security during geopolitical tensions.


Attribution and Response Challenges

The discussion highlighted how AI complicates traditional approaches to cyber attack attribution. Audience members noted that AI helps attackers hide linguistic patterns and obfuscate their origins, making it increasingly difficult to determine attack sources. This attribution challenge has profound implications for diplomatic responses and legal accountability.


Information Integrity and Trust Challenges

Threats to Democratic Processes and Journalism

Liz Corbyn from the European Broadcasting Union raised concerns about AI-generated deepfakes of political figures and the challenges this poses for public broadcasters and journalism. The proliferation of convincing deepfake content threatens informed democratic discourse by making it difficult for citizens to distinguish authentic information from manipulated content.


The Erosion of Digital Trust

Perhaps the most profound concern was the potential erosion of trust in digital communication systems. Zafrir observed that the proliferation of AI-enhanced scams threatens trust itself, which he characterized as “the foundation of civilization.”


Daniela Tonella from ING questioned whether society might be approaching the end of digitally mediated communication due to increasing distrust from scams and impersonation. This concern reflects recognition that if people lose confidence in digital communications, it could fundamentally alter how societies and economies function.


Regulatory and Governance Challenges

AI Development Context

Rosenworcel provided crucial historical context, noting that AI development is occurring primarily in private markets by private actors, lacking the military contract guidelines that shaped previous revolutionary technologies. This difference means AI capabilities are being released without the structured oversight that characterized previous technological revolutions.


Harding expressed skepticism about AI’s democratizing potential, referencing historical patterns in which technology’s promises of democratization often failed to materialize. He drew an analogy to MI5 and MI6 perspectives on AI, though he noted the complexity of determining which intelligence services might have advantages in an AI-enhanced environment.


Legal Framework Inadequacies

The discussion highlighted significant gaps in legal frameworks for addressing AI-related cybersecurity challenges. Rosenworcel noted that legal systems must define machine activities and determine human accountability for AI actions—fundamental questions that current legal structures cannot adequately address.


Future Outlook and Prospects

Two-Year Outlook: Mixed Perspectives

When asked about prospects for the next two years, panelists expressed mixed views. Rosenworcel suggested that while the period would expose vulnerabilities, it also offers long-term opportunities for greater safety through technical advances and cultural adaptation to new trust models.


Popelka emphasized human resilience and collaborative problem-solving as ultimately leading to safer outcomes, arguing that the combination of technological innovation and human adaptability would eventually create more robust security environments.


Murtra characterized the situation as an arms race that remains balanced in the short term, with defenders likely to prevail long-term through improved collaboration and technology. Zafrir offered a “tale of two cities” perspective, suggesting the outcome depends on how effectively the cybersecurity community responds to current challenges.


Quantum Computing Considerations

The discussion briefly addressed quantum computing as a future challenge, though with less urgency than AI-related threats. Zafrir noted that while quantum computing will require new encryption approaches, much current data may lose relevance by the time quantum threats materialize.


However, Rosenworcel raised concerns about unequal access to quantum computing capabilities, suggesting that disparities in quantum readiness could create new vulnerabilities.


Areas of Consensus and Disagreement

Strong Agreement

Panelists demonstrated consensus on several fundamental points: AI dramatically increases the speed and sophistication of cyber attacks, traditional security models are inadequate for AI-driven environments, and collaborative approaches are essential for effective defense. All agreed that the current period represents a fundamental transition requiring new frameworks rather than incremental improvements.


Key Disagreements

Primary disagreements centered on approaches rather than problem identification. Harding questioned whether unrestricted AI development was wise from a security perspective, while Zafrir argued that controlling AI development is neither possible nor desirable, advocating instead for keeping pace with security solutions.


The panelists also differed on collaboration mechanisms, with varying emphasis on government-facilitated cooperation versus direct private sector collaboration.


Conclusion

This World Economic Forum discussion revealed the profound complexity of cybersecurity challenges in the AI era, extending beyond technical considerations to legal, social, economic, and geopolitical dimensions. While panelists agreed on the nature and severity of challenges, their discussion highlighted the need for multifaceted approaches combining technological innovation, regulatory adaptation, and international cooperation.


The conversation’s most significant contribution was recognizing that AI represents not merely an enhancement of existing cybersecurity challenges, but a fundamental transformation requiring new frameworks for understanding digital security. As Murtra noted using an “Avengers” metaphor, addressing these challenges requires bringing together the best capabilities from across the cybersecurity community.


The panelists’ mixed outlook for the next two years—acknowledging significant risks while maintaining cautious optimism about human adaptability and collaborative potential—reflects the genuine uncertainty surrounding this technological transition. The discussion ultimately emphasized that successfully navigating the AI-enhanced cybersecurity landscape will require unprecedented cooperation across sectors and boundaries, with the preservation of digital trust representing one of the fundamental challenges of our time.


Session transcript

James Harding

Observer here from London, and delighted, excited and strangely relieved to be talking about something as gentle and as innocuous as cyber security in the age of AI, on the back of President Donald Trump’s address to the World Economic Forum.

Who could have thought that a session about the unseen and profound potential disruptions to all of our lives, businesses and countries could feel like a kind of intellectual respite from the conversations of the last hour and a bit.

There are, I think, if you’ve been to Davos and you’ve been to the World Economic Forum many times and if you’re joining us and listening online, you’ll discover that there are basically two kinds of sessions, I think, at the WEF.

There are those where you entertain an argument and there are those where you get to have a lesson. And I hope that in this one, given that there are plenty of other places at this particular forum where you can go and have an argument, we’re really going to get a lesson. Because I think that just as many of us are wrapping our heads around the idea of cyber security, something that we’ve begun to understand personally and we’ve begun to understand in our organisations, we’re beginning to realise that AI completely changes the nature of that, and means that we face possibly greater threats, possibly more uneven threats, than we understood.

And so what I was hoping to do was to introduce our panel and ask each of you, we’ve got an extraordinary range of people, not just in terms of the businesses but our kind of geographic spread here. I always think that these audiences too, the rooms, are self-selecting. If you’re here, you’ve probably got an interest or an experience of this issue.

So I hope we’ll have time as we get towards the end of this conversation also to bring people in to talk about their experiences too. Jessica Rosenworcel is now running the MIT Media Lab, former chair, of course, of the Federal Communications Commission.

So I suppose you’ve seen these threats from a regulatory point of view, from a government point of view, but increasingly now from a sort of research and technology point of view. Nadav Zafrir, is it called Checkpoint Technologies?

Nadav Zafrir

Checkpoint Software Technologies.

James Harding

Checkpoint Software Technologies, forgive me, from Israel. Nadav, likewise, having you here at the sort of cutting edge of the software, I’m hoping that there’s a kind of, metaphorically speaking, a bank of software technologists behind you who can explain what we can and can’t do.

I come from the UK. Jill Popelka runs Darktrace, a company that is a point of pride in the UK as one of the forefronts in this area. Before we even knew that we had a problem, we knew that we had Darktrace, so we are delighted that you’re here, Jill.

And Marc Murtra is the Chair of Telefónica. I’m slightly embarrassed, Marc, because when I first came to Davos, I used to spend my time chasing after the CEO and the Chair of Telefónica, so now I’m glad I can just show up and sit on the panel. So thank you very much.

I hope you enjoy this conversation. And as I say, as we get towards the end, please catch my eye. We’ll start with some kind of questions and comments.

Jessica, why don’t you start? Would you just give us a sense of where you see attacks or where you’ve experienced attacks on cybersecurity, particularly augmented by artificial intelligence that makes you think, oh, we’ve got a bigger problem than I realised?

Jessica Rosenworcel

Well, when I was running the Federal Communications Commission, the thing that struck me most during my tenure was just the radical expansion of the number of connected devices and the data they produce.

It’s growing so fast. And every one of those devices, those connections… relies on software, and that software frequently goes to market with known vulnerabilities that might be small and vulnerabilities that are unknown.

And when you introduce AI into the system and you have malicious actors, they can use AI to identify where those problems are at unbelievable speed, unlike anything we’ve ever seen before. So it multiplies the possibilities for bad actors to take advantage of all those connections and the expanded attack surface that they provide. But I would also flip the script and say there are opportunities also for all of us to use AI to understand where those vulnerabilities are and fix them before they go to market, and that’s new too.

James Harding

So can I just ask about the vulnerability versus the strength of defense point on this? As you know, in London, the Thames flows through the middle of the city; on one side of the river is MI6, which tracks foreign spies, or sends out spies to look at foreign threats. On the other side is MI5, which tries to defend.

I did not know that, but I do now. And the thing that’s brilliant about the argument between MI5 and MI6 is they keep saying the other has it easy. So MI6, the James Bond guys, go: you know what, if you’re MI5, AI is a wonderful defensive capability because it can track unusual behavior.

And if you’re MI5, you look at MI6 and say those guys have got it easy because AI massively helps in asymmetric warfare. So I think my…

Jessica Rosenworcel

My point is they’re both right.

James Harding

Oh no, I thought you could act as an adjudicator of that.

Nadav Zafrir

All right, well, let’s come back to those vulnerabilities and defenses in a second. I’ll take your example about MI5 and MI6. I actually think that the ones having the most fun now are neither. The ones that are having real fun are GCHQ.

James Harding

Okay, go on. You know, GCHQ are the tech end, as you know, of security, and no one has ever accused them of having fun before.

Nadav Zafrir

You know, I used to run the equivalent in Israel, which is 8200, and I used to visit Cheltenham a lot. I left the service about 12 years ago, and I’ve never had any FOMO; I never looked back, until the last couple of years. Why? Because attackers have more fun, more possibilities, more capabilities than ever before. It’s just an incredible time, and the way I see cyber is as a sort of learning competition between offense and defense.

It’s always been like that, and it’s like that now. The hard part is that attackers are obviously moving faster than defenders. They don’t have regulations. They don’t have to go through procurement.

They collaborate better than we do. And so right now, looking at the attack side from the attackers’ lens and perspective, some of the things that we’ve already been seeing are exacerbated, and we need to take care of that. I actually think that at the heart of it is this: we know how to secure humans and infrastructure, and I think we’re building the tools to secure non-humans. The problem is interoperability, and the next few years are going to be chaotic because of that interoperability. So we need to focus on that, because, to Jessica’s point, we’re introducing new capabilities much faster than we understand what they can do and where they’re going. And think about these agents that are moving around us: everybody’s saying, I have ten agents working for every employee, I have a hundred agents, I have a thousand agents.

The thing about these agents is that they’re very naive at the end of the day, and they start crossing lanes, and we treat them as humans. So we try to secure them based on identity, but they don’t have a human identity, and they’re very naive. And that is going to change the way we need to look at security altogether.

But until we get there, we’re at a very fragile time.

James Harding

Just explain, why the last two years? What happened in the last two years? You know, obviously…

It’s just the sort of ChatGPT phenomenon.

Nadav Zafrir

All of a sudden, we have connected humans and machines through semantic language. And you don’t have to have a PhD in mathematics in order to write code, in order to change the world. And that’s why attackers are having fun.

But that changes everything.

James Harding

And I suppose what I’m trying to get at is one of the arguments that was made when OpenAI launched ChatGPT, and then the other AI services that were made available on a retail basis, was that enormous amounts of power were being handed over without really understanding what individuals, good and bad, would be able to do with it.

And the genie is out of the box, out of the bottle or whatever. So do you think that from a cyber security point of view, that kind of free market approach to AI was crazy?

Nadav Zafrir

I think it’s inevitable. I don’t think you can really control it. I don’t know from a regulator perspective what you think.

But I think that we can’t slow down science. We can’t slow down advancement. We just need to make sure that we run as fast as these technologies with security on top of that.

So, for example, when you have all these agents, you can’t have OpenAI securing OpenAI. You need to have a proprietary LLM to oversee that. We call it the guardian agent.

James Harding

Jessica, wait. Before I come to Jill, wait.

Jessica Rosenworcel

Well, a lot of our revolutionary technologies in the past, like the development of the internet, the development of aerospace, they started with a deep set of military contracts. And with those contracts, they got a stimulus and they got some guidelines. I think AI is different because it is taking place in private markets.

It’s being developed chiefly by private actors. And so those guidelines are being developed in real time and they’re not getting the same frameworks that some earlier technologies had when they reached the marketplace.

James Harding

Right, okay, that is ominous. Let’s come back to it in a second. Jill, will you kick off just by giving an example, either from within your life or Darktrace or, if you like, a client, where you thought: whoa, an AI-driven cyber breach, this is one of the experiences you’ve had.

Jill Popelka

Certainly, as we’ve already discussed, AI is increasing the velocity of attacks. It’s creating sophistication and complexity that we haven’t seen before in cyber attacks. And so we’ve seen deepfakes really start; we’ve been really interested in looking at deepfakes, and Nadav and I were just talking about this. I took the role of CEO of Darktrace about 18 months ago, and at my first board meeting, I walked out, very focused on doing my job, and one of my team members just had a blank stare on his face.

He said, you are not gonna believe this. He said, I’ve just gotten a voicemail from you requesting financial data and customer information and it sounded exactly like you. And he knew that I was in the board meeting, obviously, and he played it for me and that is shocking to hear your own voice requesting things from an executive.

And so I told this story yesterday in a room full of CEOs and what really surprised me was that they all said, that happened to me. Yes. That happened to me too.

So this is clearly a tactic that cyber attackers use, because CEOs in their early tenure haven’t yet established the communication channels, the trusted relationships; not everyone has your personal cell phone number, and so attackers can get away with these things.

So we have to, in forums like this, talk about these vulnerabilities, because we wouldn’t normally want, as CEOs, to stand up on stage and say, hey, I was the target of a deepfake impersonation, right? But if we don’t talk about it, then we’re missing some of the point of creating that human defense. Deepfakes are one of the things that we are looking at, and how do you really protect against them? Right now, it has to be a multilayered approach.

It has to be a multilayered approach to cybersecurity. And really working together, public and private partnerships are great, private-private partnerships are great, and some of us on stage share those, but you also have to have the human in that mix.

James Harding

Can we just, I hope we’ll get into the sort of harder end of this, but just on the soft end, Jill, so I was at a conference six months ago, a big banking conference, ashen-faced CEO disappears, comes back, says I’m sorry, we were in effect being extorted for several hundred million dollars, and what was interesting to me was that was then never shared in the room.

That would be, I mean maybe that’s material, there’s a reason for it, but I just wonder, how do you think CEOs, chairs, boards, should talk about this, because they end up looking vulnerable and risky?

Jill Popelka

I think it’s one of the times when we have to recognize that talking vulnerably is a strength, right, and trusting one another, because it is in some cases us against the bad guys, and the more that we can work together to ensure we understand how to protect, I mean we’ve each talked about the way that we look at protecting our organizations and our customers, how can we do that better together?

James Harding

Right, Marc, so firstly, so you had the same thing happen?

Marc Murtra

Yeah, similar thing, similar thing. The voice weirdly seemed like mine; the way they spoke didn’t. And they probably do that when you’re named, or they probably spam out automatically and see if somebody makes a mistake. It didn’t work in our case at all, but it was weird.

James Harding

And can I just ask, when you look at Telefonica customers who are finding themselves vulnerable in one way or another too, I mean I don’t know whether or not you think about this more as a sort of corporate challenge, cyber security, or a customer challenge?

Marc Murtra

So it’s an integral challenge, you know, and Telefonica manages 340 million customers or points of data and entry. And if we look at it whatever way we want to, if we look at everybody here, everybody has a mobile, probably their bank account numbers, where they’re going, who they speak to. So the same happens to Telefonica, will happen to The Observer and to any company, the security service, the pension systems, everything is out there, all right?

And a cyber attack can do, simplifying a lot, one of three things, eavesdrop, stop or manipulate. And we were talking about a manipulation, but think about a self-driving car or think about a drone or… So the possibilities are very big.

So what I would highlight is the following. So what we telecom operators do is we integrate third-party products and manage the security. And I think there are two levels of attacks, potential attacks.

You were saying it’s more fun to be an attacker and they don’t have regulations, but I would segment it into non-state actors and state actors. The reality is the best and the brightest usually work for large, interesting corporations and are not trying to steal or rob, though some of them do. So we are at one level of protection, I would say, from non-state actors, and at a different level of protection from state actors.

But where am I going with this? I would like to highlight a huge vulnerability that Europe has, taking into account the current situation: there is very little technology, very little cybersecurity, within Europe. What we’re doing is integrating third-party technologies.

So if we want to have autonomy, or if there is a problem someday with a state actor, we can find ourselves in the same situation we have with regards to the defense industry. If you don’t have technology, if you don’t have capacity, if you don’t have any deep know-how, it is a big problem.

For example, when there was a cyber attack at the London airport, the people that had to come in to do the forensics were from outside of Europe. And if we are going into an era of areas of influence, in Europe we had better start building cybersecurity know-how.

James Harding

Can we just talk about that for a minute? Because let’s just talk about vulnerabilities. And let’s start, if you like, at the country level with state actors.

And it’s actually interesting thinking kind of Israel, Europe, the US. You might all think of different state actors as the problem in those circumstances. So, Nadav, why don’t you go first?

When Israel thinks of state actors and cyber security, who’s the threat and where’s the vulnerability?

Nadav Zafrir

Well, there are some notorious centres of excellence, quote-unquote, for criminal activity. But I would say that I actually think that is sort of rear-view mirror, to be honest. Because the lines are blurring.

Between the state and the non-state. And this is where the power of AI becomes incredible. What we’re seeing on the dark web are capabilities that until a couple of years ago were really state-level capabilities in the hands of a few.

You know, we hear about these startups that have two or three people. The same thing is happening with the new attacker community so that things are blurring. We used to say that some people work for a notorious government and then they moonlight as attackers.

You don’t really even need that anymore. When you have access to this technology. And what they’re taking advantage of, to your point about each one of us has their, you know, our information and our email and our mobile.

And I’m not saying it’s impossible to secure it. All I’m saying is that when you look at the threat vector, you’ve got to realize, number one, that a lot of individuals, countries and companies that were sort of out of the curve, the curve is moving towards them.

And so they will need to invest more. And the second thing is that I’m sure you’re introducing all these new capabilities to your clients, which are really awesome. How many of you use AI to clear your inbox? How many of you still read your emails? You know, I don’t read my emails. You have your favorite Copilot or Bedrock or Gemini doing it for you.

Here’s the thing: now you’re sort of interoperating systems. You’ve got two machines writing emails that are sent into your inbox. You’ve got another agent reading it for you. And it’ll blow your mind how naive these models are. One machine just tells the other machine:

Hey, I want you to go into this data center and encrypt everything. This agent says, but I don’t have access. But wait a second, in Slack I have other agent friends who have access.

Some agent says, oh, I’ve got it. You need some help? Let me take you there. And so it’s really a transformational time, and I totally agree that from a sovereignty perspective it’s a new world order. To your point about the talk that we heard before, this world order is changing, and this arms race around AI is just in its initial phases.

James Harding

So, Jessica, can we just pick that up? But will you do it on the back of your comment about a world in which the private markets are driving AI and the guardrails aren’t there? Because what Nadav describes, when I hear that, is a world in which you multiply the vulnerabilities, but it becomes less and less clear which particular group or company or software provider is actually responsible, because they’re farming it out.

Jessica Rosenworcel

I co-sign everything he just said. Historically, I think you could see these clear divisions between malicious state actors and scam artists. We’re adding machines to the mix and we’re giving both of those communities unbelievable new tools.

They don’t have to be skilled like they used to have to be skilled, when they needed to be great at encryption and breaking things and understanding systems, because they can operate in ways that they never could before.

All of this is blurring into possibilities for fraud and disruption without the lines being nearly as clear as they used to be.

James Harding

You trained as a lawyer, didn’t you?

Jessica Rosenworcel

Yes, at one point.

James Harding

I guess what I’m saying is if I came to you in your old job as the chair of FCC and I said, look, can you just explain to me who is liable here in the world that Nadav described because you could legitimately say I didn’t ask my co-pilot, my Gemini, to tackle these other agents but it did.

Jessica Rosenworcel

I think governments are going to have two big problems with this going forward. First, how do we define those activities of the machine? Do we assume that a human is responsible?

Do we hold someone accountable for that? I think legal systems around the world are going to have to wrestle with that fact. Then I think there’s this other thing which is that governments generally look at critical infrastructure and say it’s got to be reliable, it’s got to be resilient.

If you fail to provide it in a way that’s reliable and resilient, I’m going to fine you. The provider of infrastructure has very little incentive to knock on the government’s door and say, I’m seeing this problem over here. And the thing about the government is that the government is sitting in a position where, if they knew that, they might be able to tell every other similar provider: be careful, watch out, figure out what’s going on.

And, at the risk of using this vocabulary, I actually think governments have to create safe spaces for those dialogues. Because what we are facing is exponentially larger and more confusing than ever before, and it’s going to require a different kind of collaboration and regulation than before.

James Harding

So, Jill, will you just talk about that? Because I suppose there are two suspicions I’ve got in response to what Jessica said. I think this session is called Cyber Defenders in the Age of AI.

It sounds like a Marvel movie. It sounds amazing, right? And you are of course those characters. But I think the citizen suspicion is that the people on offense are smarter, stronger, more mischievous than the people on defense, and that the Darktraces of the world are always having to play catch-up with the people on offense. And the second thing is that what Jessica just described, finding a place where people can collaborate and share information, is actually really difficult for you, because your competitive advantage is not necessarily sharing that information. So how do we have faith in the cyber defenders in the age of AI?

Jill Popelka

Well, let’s look at two things. One, historically, cybersecurity has been attacker-centric. We’ve been really focused on: who are the attackers? Where are the attackers?

What are they doing? Darktrace, since its inception, has looked at this differently. We look at an organization. We have an understanding of that organization’s network traffic, how it communicates, and then, with that as an established normal, we defend against anything that’s not normal. So we look at anomalies. We have anomaly detection, and we can do this for any organization, and we do it for many. That anomaly detection then runs up against a multi-layered AI threat model that checks to see: is this a cyber risk, and how do we need to prioritize it? And with de-identified customer data over the course of 15 years, we’ve been able to figure out pretty precisely what’s going on, and then we move on to autonomous response.

We’ve been able to figure out pretty precisely what’s going on and then we move on to autonomous response. And so in this world where AI is making the velocity of these threats just so quick, right, we can precisely respond and ensure that the organization stays safe by defining normal and then preventing anything that’s anomalous.

James Harding

So I understand that, but I suppose my question is, if you do get called in by the GCHQs, which of course you do and others, and they say to you, great, we really appreciate your identification of risk, your autonomous response, but we really need you to start sharing that with other organizations because they’re working with others, what happens then?

Jill Popelka

Well, it’s interesting, because Nadav and I just had a conversation right outside this room where we really want to work together because we are complementary solutions. And when you’re de-identifying data and when you’re fighting against the same thing, believe it or not, we work together in the background as well. If we see a vulnerability in another solution, we have a team that goes and tells that solution that the vulnerability exists.

Rather than coming and talking to the media, we’re going to go tell them so that they can shore that up. We’re smarter than you think, actually, James.

James Harding

I’m relieved, I’m relieved. Sorry, Nadav, did you want to say?

Nadav Zafrir

No, I mean, I agree. I think the dogma for the last decade or so used to be consolidation, consolidation, and a closed garden. Nobody’s going to figure this out by themselves.

The only way to go forward is an open platform, and we’ll build this open platform alternative where you can choose best of breed capabilities, where you can interoperate, where I’ll see something, Darktrace will see another thing.

We’ll find ways under the radar with our APIs to make sure that at the end of the day, the customer is secure.

James Harding

Can I, Marc, can I just ask you a bit about the geopolitics of this? Because, of course, the Telefonica customer base is, I think, to use the jargon of the times, spread across more than one hemisphere.

Marc Murtra

Yeah, yeah, that’s right.

James Harding

And in different hemispheres, it seems as though we’re beginning to have different models of AI, and I wonder whether that means we have different vulnerabilities or different system approaches to defense.

Marc Murtra

No, I would say that, in general, what we see is a symmetrical management of vulnerabilities. So we’re managing security in a similar way in Brazil as in Spain or in Germany. What is different is the use of the data that sits in there.

But, you know, if I might pull the string a little bit regarding geopolitics: on the blurring lines between state and non-state actors, I’m pretty sure we think the same, there might be some semantics about it, but there is a big difference with regards to the objectives, right?

A non-state actor will want to steal as much money as they can process, right? Because if suddenly somebody in the middle of Barcelona has 20 million euros, what are they going to do with those 20 million euros before the authorities, the police, come for them? But a state actor is going to have different objectives, right?

And you’ve got to defend the whole system, while they can choose wherever the system is vulnerable. There is good news that I think is implicit in what has been said. And let me give a very prosaic example: for example, if you want to eavesdrop on somebody.

So, until five years ago, the problem of eavesdropping on somebody, for the police or for a criminal, was that you needed five people, because it’s eight hours of listening and 99.9% of what you hear is absolutely useless.

And you could be there for three months listening, fall asleep for five minutes, and you’ve lost it. Now, with AI, you don’t need that. You can just record it, and that’s what a criminal can do.

But the advantage is that the good guys, if this is our Marvel universe, you know, the Avengers, we work together, better or worse, and the whole system is anti-fragile, as was being discussed here. Every vulnerability makes the system stronger.

And one day I will retire, one day everybody here will retire, but somebody will take over and keep making the system solid and safe. Whereas the non-state actors or the state actors, they’ll be here, they’re there, and one day they’ll go, or their regime will collapse, or somebody will take care of them.

James Harding

Can I ask, can we do a couple of minutes, if you like, on security inequality? I can never work out with technology like who gets the better end of the deal and it always promises to be a democratizing force and so far hasn’t generally turned out to be that. And so I suppose my question is, you started, Jill, by saying, look, I’ve been deepfaked and then, Marc, you said I’ve been deepfaked and now I sort of have a world in which all these CEOs have been deepfaked.

I’m thinking, oh, is it the case that actually people in general are not victims of deepfakes to the extent that wealthy, prominent people are, or is it the other way around? So, just within society, who’s more vulnerable?

Jessica Rosenworcel

You know, I don’t think it’s only a question of risk in economic environments. When I was running the Federal Communications Commission, I came in one morning and before I had my coffee, I learned that President Biden had called several thousand people in New Hampshire and urged them not to vote in the primary.

I heard it, sounded just like him. Incredibly cheap and easy to make that and distribute it. And we went to elaborate efforts to figure out who developed it, traced it back through the network, found the people on the system.

It took a lot of work. When I think about how cheap and easy it was to produce that, and the volume with which we can produce it, and the velocity with which we can push it onto our networks, it is an extraordinary challenge. But I don’t think it is just a question of risk in commercial environments or for CEOs. It’s a risk for the broader population, and even for voting and democracies. But here’s the good news, right?

Nadav Zafrir

Because the democratization works both ways, right? So, to your point about millions and millions of subscribers: all they have is a line, and they pay a subscription to Telefonica. Ten years ago, yes, they weren’t susceptible to very sophisticated attacks, but they couldn’t hire… You know, the latest thing: we can have an agent on every phone they’re carrying that has the latest and greatest policies that the US government had ten years ago. So that is being democratized as well. You know, we always used to say in security, when you used to ask, what’s the biggest problem in security?

You would say talent. We used to be missing two million people; we’re missing three million people. Well, what do you know: a lot of the things those people used to do, these new agents are really good at. And so now, for every one of your customers, we can embed ten different agents that work for the customer.

One of them will read the emails; one will be the guardian, you know, the guardian agent that will look after that. And honestly, you know, to your point about Europe, just as an anecdote: we looked at this at Checkpoint.

We wanted to build our own foundational model. We looked at the whole world, and we actually found two startups that were relevant. One was British, and the second was here in Zurich.

Marc Murtra

Really, both were European?

Nadav Zafrir

Both were European, and for us, we decided, so we talked to the Brits, but somebody else outbid us. We bought the company in Zurich. We already have 200 PhD researchers that are working, and they’re super passionate about this because you know what they’re concerned about?

Their ethos, sort of, is about privacy. And they’re creating a small language model, which we are now embedding into all of these. And, you know, we just had a meeting today with the Prime Minister of the Netherlands.

They also have like some universities that are pushing out these brilliant PhDs in mathematics. They don’t need cyber background. They need mathematics.

James Harding

So how does quantum fit into this conversation? Or can we come back in 2029 and talk about it then?

Nadav Zafrir

Well, I mean, I’m not an expert. I know that you probably, from the MIT perspective, have more to say about that than I do.

Jessica Rosenworcel

Now, why don’t you go ahead?

James Harding

Look, can I just give you my very dumb question: people seem to be saying it’s happening sooner, and that when it does happen, it will remake the whole world of cryptography.

Nadav Zafrir

So not really, not really. From a cyber perspective, I don’t think it’s as big a problem. The bigger problem is that people are recording now to use it later. That we cannot solve.

So if somebody else has all my data, a government has all my data, it’s all encrypted. They can’t use it. Maybe 10 years from now, they can use it.

Okay. I’m not super concerned about that. What we need to do right now is do something that we call quantum-ready encryption.

And quantum-ready encryption is available. And we can get ready for that. I honestly-

James Harding

We can, you’re saying?

Nadav Zafrir

I believe that we can.

James Harding

Jill?

Jessica Rosenworcel

That requires changes in bandwidth, and, you know, there’s the unequal access if people don’t actually have the same bandwidth for quantum computing globally.

Nadav Zafrir

Yeah, it’s that the encryption models we’re using now, the RSA model, are built for a non-quantum era. They’re based on factoring, right? And everything can be broken if you have enough time. Quantum computing shrinks time, literally, right? And if you shrink time, the current encryption doesn’t work. But there are other forms of encryption that we are deploying now.

James Harding

If I understand you correctly, what you’re saying is that by the time the quantum capability is in place and current encryption is no longer secure, a lot of the data will no longer be of interest to the people who have those capabilities.

That’s what I hope.

Nadav Zafrir

That’s what I hope, and if we get ready for that, I think we’ll be in okay shape. And, you know, I don’t think it’s ever… Some very narrow use cases, perhaps, in the next couple of years, but for general use we have a little more time, in my humble opinion.

James Harding

Can I, I want to bring people in. If you have thoughts or questions, please catch my eye and I’m going to bring you the mic. But I’m going to make sure that I come back at one point to pull all this together. So, just to warn you, when we finish I’m going to ask you whether, given everything you’ve said, we should come away from this optimistic or not. Sir?

Audience

Cybersecurity, at least at a technical level: it doesn’t actually matter who is doing the attacking. It could be Russians, it could be North Koreans; at the technical level, it doesn’t matter. But in terms of geopolitics, it matters a lot.

Yeah. But the thing is, with AI, maybe these attackers are getting very good at hiding their linguistic patterns, for example, or at these different measures to obfuscate and make sure we don’t know where those attacks are coming from. And in geopolitics, it matters whether it was the Russians doing it or the North Koreans. So how does AI play into attribution?

James Harding

I’m afraid Jessica. Yes, it’s harder. Yeah

Jessica Rosenworcel

But of course, you know, most security authorities have degrees of, you know, they follow trends and pattern recognition. So they do have some sense. But you’re right: public attribution has enormous consequences for geopolitics, and the information informing that attribution going forward may not be of the quality it was historically.

James Harding

And the confidence, presumably, of making that attribution decreases. Most politicians are confident. Yeah, that’s true. For my agencies, I hope, you know… Other thoughts, questions? Nadav, do you want to come in on that? Other thoughts or views? Don’t be shy, we finish in about six or seven minutes. Yes.

Audience

Thank you very much. My name is Liz Corbyn. I’m from the European Broadcasting Union, which represents public broadcasters around Europe. Some of the things that you were talking about, around the integrity of information and the integrity of content, obviously have really wide implications, and it’s something that we’re deeply concerned about. Everybody in this room who’s running a business depends on the information that they have. And I’m wondering what you see around the threat to the information ecosystem: how it impacts businesses and the decisions that they make, how they can protect themselves against this, and what the producers of these products, the big techs, should be doing to support the integrity of our news and information.

James Harding

Who wants to come in? Last Saturday, or the Saturday before last, we were trying to put together a package just on Iran, right? We laid out a table of 20, 25 pictures, two of them real. But it took us the afternoon to figure out which were real.

Jessica Rosenworcel

So you’re the journalist here, how would you respond to that?

James Harding

Fearfully, yeah, but what Liz is describing…

Audience

The point is that everybody relies on the information. So what should the big techs be doing? And where do you see that the development of this technology needs to go?

Nadav Zafrir

I think the question honestly depends, it depends what you’re looking at, right? So if you’re looking at deepfake, specifically video, if you invest in it, we are coming up with solutions that can vet what is fake and what’s not. That is possible, but in my opinion, to your point, it has nothing to do with cyber, the problem is what is the source of truth?

Yes, this is a real video, but how do you interpret it? And now politicians around the world are claiming some data, saying it’s that person saying it, but how do you verify that the data is real and not just made up? I think what that is doing is deteriorating trust, and in some way the eerie part about it is that trust is the foundation of a civilization.

So these things are happening at the same time, but again, just like you said, it has both sides, because the playing field is leveled. We, from a security side, from a data sovereignty side, can also harness these technologies to verify. Now the question is, do your readers, your public, your listeners, do they want it?

But that’s a different story.

James Harding

No, I think people generally do. The reason why I talk about security inequality… is that these things are expensive. I’m imagining, Jill, the people who are customers of Darktrace are often protecting very valuable communications networks and information. What Liz is talking about is what happens when information is there for the public good but is itself hugely susceptible to, what were your three things, eavesdropping, stopping…

Audience

I think education has to come into this as well, perhaps starting even at the elementary school level, certainly once kids have cell phones, in middle school, high school and with college students, because of the need to be skeptical. You were talking about what is the truth, or what is real data. The need to be skeptical, the need for validation: we’re teaching these kids the same way we did 50 years ago, and they’re now operating in a very different world. We need to help prepare them for that.

James Harding

Any other thoughts, questions, points of view? We’re just coming towards the end. Yes.

Audience

Thank you, fantastic conversation. Daniela Tonella, CIO of ING, the Dutch digital bank. Mostly a question for you: are we looking at the end of digitally mediated communication in general? Our clients are getting scammed, spammed; they don’t trust anything that comes in pretending to be us. And not the sophisticated users, the skeptical ones, but, you know, normal people on the street. Do we have to anticipate something for the end of digitally

James Harding

mediated communication. Great question, actually. Marc, why don’t you take that?

Marc Murtra

So, the easy answer and the hard answer. The easy answer is: look, hey, we just move data from side to side. It’s up to you to find the system.

And if anybody tries to manipulate what you do, once you send it to us, then it is our job to keep it safe. So that would be my first easy way out. The second is there is…

I mean, I think I know a lot of things, but some others I don’t know. But there are ways of destroying this, and I would come up with one: we all have our mobile phones on us all the time, and there are ways to certify that.

So based on that, I think the impersonation bubble can be destroyed, and that can be done precisely with a telecom system that doesn’t go through the internet. So I’m pretty sure there are many ways to destroy that, but…

James Harding

So, can I finish up? By the way, I love the question, because of all the emails we get from people. Now I’m answering fewer and fewer emails, and I can’t work out whether I’m clever or rude, you know, because I’m just not answering. It’s really confusing.

Well, I’m going to finish up by saying thank you, because it’s been an eye-opening conversation, but also, to be honest, I’m confused.

Because you could listen to the conversation we’ve just had and take, Jessica, your point about devices to begin with, or your point, Nadav, about the extent to which there are agents talking to other agents, or your point, Jill, Marc, that people are getting deepfaked but not talking about it publicly, that we’ve still got a taboo, and think actually we’re in for an age of much greater insecurity, that we’re more vulnerable.

Or you could take your point, Marc, that, you know, we could pop the impersonation bubble. We could take your point, Nadav, that actually we’ve got agents that can act as defences within all of our devices, and think we’re actually heading to an age that’s more likely to be safe, more likely to be secure.

So here’s my last question, which one is it? In the next two years, are we likely to be more or less safe when it comes to cyber attacks? Jessica.

Jessica Rosenworcel

The next two years. The next two years are going to expose a lot of vulnerabilities, but I think over the long term there’s an opportunity to be more safe if we work on technical trust in the digital age, and also cultural trust.

James Harding

Nadav.

Nadav Zafrir

I think it’s a tale of two cities.

It’s really the best of times and the worst of times. We’re going to see both. We’re going to see that, you know, impersonation killing what we know right now, but we’re going to change the way we look at identity and make it contextual.

We have limitless compute power. We have the ability to code everything with human language, and so we can check everything from a contextual basis. It won’t be, you know, binary.

Is this person or not? Because I know everything about that person. I can ask more questions.

I can do it, you know, extremely fast, and I can make identity contextual and layered, and so I think it’s both. The next couple of years, I agree, are going to be tough.

James Harding

Jill.

Jill Popelka

I’m going to go with more safe, because I believe in human resilience and brilliance. We are resilient, and we are working diligently together to solve this problem. And I do think it starts in the early days.

We have to change the way the education system works so it’s agile and nimble enough to continue to produce people who use these technologies for good, and to ensure that they continue to be safe for humanity.

James Harding

Jill, thank you. Marc.

Marc Murtra

First, I’m starting with a disclaimer. I could say things are going to be safer and go home. Of course.

Boom. So, a disclaimer: with security, it’s always very dangerous. I know you’re asking about two years, but over five, ten years, I would say safer, safer, safer.

That’s what I would say. In the next two years, if I had to say something, more or less equivalent. I think the arms race is more or less balanced, and I think the Avengers will win in the end.

James Harding

They tend to. That’s the way the movie goes. Ladies and gentlemen, please join me in thanking Marc and Jill and Nadav and Jessica.

Thank you very much. Great job. It was fun.


Jessica Rosenworcel

Speech speed

165 words per minute

Speech length

903 words

Speech time

327 seconds

AI enables malicious actors to identify vulnerabilities at unprecedented speed, multiplying attack possibilities

Explanation

Rosenworcel argues that the rapid expansion of connected devices creates software vulnerabilities, and AI allows malicious actors to identify and exploit these vulnerabilities at unbelievable speed unlike anything seen before. This multiplies the possibilities for bad actors to take advantage of expanded attack surfaces.


Evidence

The radical expansion of connected devices and data they produce, with software frequently going to market with known and unknown vulnerabilities


Major discussion point

AI’s Impact on Cybersecurity Threats and Vulnerabilities


Topics

Cybersecurity | Infrastructure


Agreed with

– Nadav Zafrir
– Jill Popelka
– Marc Murtra

Agreed on

AI dramatically increases the speed and sophistication of cyber attacks


AI development in private markets lacks the military contract guidelines that shaped earlier revolutionary technologies

Explanation

Rosenworcel contends that unlike previous revolutionary technologies like the internet and aerospace which developed with military contracts providing stimulant and guidelines, AI is being developed primarily by private actors in private markets. This means guidelines are being developed in real time without the same frameworks earlier technologies had.


Evidence

Historical comparison with internet and aerospace development which started with military contracts


Major discussion point

Regulatory and Governance Challenges in AI-Driven Cybersecurity


Topics

Legal and regulatory | Economic


Legal systems must wrestle with defining machine activities and determining human accountability for AI actions

Explanation

Rosenworcel identifies that governments will face challenges in defining machine activities and determining whether humans should be held responsible for AI actions. This represents a fundamental challenge for legal frameworks dealing with AI-driven activities.


Evidence

Example of AI agents potentially acting beyond human instructions or knowledge


Major discussion point

Regulatory and Governance Challenges in AI-Driven Cybersecurity


Topics

Legal and regulatory | Human rights


Agreed with

– Nadav Zafrir
– Jill Popelka

Agreed on

Traditional security models are inadequate for AI-driven threats


Governments need to create safe spaces for collaboration between public and private sectors to address exponentially larger threats

Explanation

Rosenworcel argues that traditional regulatory approaches where governments fine infrastructure providers for failures create disincentives for reporting problems. She advocates for safe spaces where providers can share threat information with governments, enabling broader protection across similar providers.


Evidence

Current system where infrastructure providers have little incentive to report problems to government due to potential fines


Major discussion point

Regulatory and Governance Challenges in AI-Driven Cybersecurity


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Nadav Zafrir
– Jill Popelka

Agreed on

Collaboration and information sharing are essential for effective cybersecurity defense


The next two years will expose vulnerabilities but offer long-term opportunities for greater safety through technical and cultural trust

Explanation

Rosenworcel predicts that the immediate future will reveal many vulnerabilities in current systems, but believes that over the long term there are opportunities to achieve greater safety. This requires building both technical trust in digital systems and cultural trust in society.


Major discussion point

Future Outlook and Quantum Computing Implications


Topics

Cybersecurity | Sociocultural



Nadav Zafrir

Speech speed

168 words per minute

Speech length

2072 words

Speech time

738 seconds

Attackers are moving faster than defenders because they lack regulations and procurement constraints, and collaborate better

Explanation

Zafrir describes cybersecurity as a learning competition between offense and defense, where attackers currently have advantages. Attackers move faster because they don’t face regulatory constraints or procurement processes, and they collaborate more effectively than defenders do.


Evidence

Comparison of attacker capabilities versus defender constraints in terms of regulations and procurement processes


Major discussion point

AI’s Impact on Cybersecurity Threats and Vulnerabilities


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Jessica Rosenworcel
– Jill Popelka
– Marc Murtra

Agreed on

AI dramatically increases the speed and sophistication of cyber attacks


Current security models treat AI agents as humans with identity-based security, but agents are naive and lack human identity

Explanation

Zafrir explains that organizations are introducing AI agents rapidly without understanding their capabilities, treating them with human-based identity security models. However, these agents are naive and don’t have human identity, creating fundamental security mismatches that will require new approaches.


Evidence

Examples of organizations claiming to have ten, hundred, or thousand agents per employee


Major discussion point

The Challenge of Securing AI Agents and Non-Human Systems


Topics

Cybersecurity | Infrastructure


Agreed with

– Jessica Rosenworcel
– Jill Popelka

Agreed on

Traditional security models are inadequate for AI-driven threats


The interoperability problem between human and non-human systems creates fragility during this transitional period

Explanation

Zafrir identifies interoperability as the core challenge in cybersecurity, arguing that while we know how to secure humans and infrastructure, we’re still building tools to secure non-humans. The next few years will be chaotic due to this interoperability gap.


Major discussion point

The Challenge of Securing AI Agents and Non-Human Systems


Topics

Cybersecurity | Infrastructure


AI agents can inadvertently collaborate across systems in ways that create new attack vectors

Explanation

Zafrir describes scenarios where AI agents can be manipulated to collaborate across different systems, with one agent requesting another to encrypt data centers, and agents helping each other gain access through platforms like Slack. This creates new, unexpected attack vectors through agent-to-agent communication.


Evidence

Specific example of agents communicating through email and Slack to gain unauthorized access to data centers


Major discussion point

The Challenge of Securing AI Agents and Non-Human Systems


Topics

Cybersecurity | Infrastructure


Lines are blurring between state and non-state actors as AI capabilities become more accessible

Explanation

Zafrir argues that traditional distinctions between state-level and criminal actors are becoming obsolete as AI democratizes sophisticated attack capabilities. What were once state-level capabilities are now accessible to small groups, similar to how startups can operate with just a few people.


Evidence

Comparison to startups operating with 2-3 people, and capabilities previously requiring state resources now available on dark web


Major discussion point

Geopolitical and State Actor Threats


Topics

Cybersecurity | Legal and regulatory


Open platform collaboration between security companies is essential, moving away from closed garden consolidation

Explanation

Zafrir advocates for abandoning the previous decade’s approach of consolidation and closed systems in favor of open platforms. He argues that no single entity can solve cybersecurity alone, requiring best-of-breed capabilities and interoperability between different security solutions.


Evidence

Reference to collaboration with Darktrace and API-based information sharing


Major discussion point

Defense Strategies and Collaborative Approaches


Topics

Cybersecurity | Infrastructure


Agreed with

– Jessica Rosenworcel
– Jill Popelka

Agreed on

Collaboration and information sharing are essential for effective cybersecurity defense


The deterioration of trust threatens the foundation of civilization itself

Explanation

Zafrir connects cybersecurity and information integrity challenges to broader societal concerns, arguing that the erosion of trust in information and systems threatens the fundamental basis of civilization. He sees this as happening simultaneously with technological advances that both create and potentially solve these problems.


Major discussion point

Information Integrity and Trust Challenges


Topics

Sociocultural | Human rights


Quantum computing will require quantum-ready encryption, but current data may lose relevance by the time quantum threats materialize

Explanation

Zafrir acknowledges that quantum computing will break current RSA encryption models by dramatically reducing the time needed for factoring, but argues that quantum-ready encryption is available now. He suggests that much current data being collected may lose its value by the time quantum capabilities become widespread.
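As a toy illustration of the underlying point (ours, not the panel’s): RSA’s security rests on the hardness of factoring the public modulus, which is exactly what a large-scale quantum computer running Shor’s algorithm would undermine. The numbers below are deliberately tiny so the “attack” is trivial.

```python
# Toy RSA example (illustrative numbers only): security rests on the
# hardness of factoring n = p * q. Shor's algorithm on a large quantum
# computer would make factoring feasible even at real-world key sizes.
p, q = 61, 53                    # toy primes; real RSA uses ~1024-bit primes
n = p * q                        # public modulus
phi = (p - 1) * (q - 1)
e = 17                           # public exponent
d = pow(e, -1, phi)              # private exponent (Python 3.8+ modular inverse)

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # legitimate decryption

# An attacker who can factor n recovers an equivalent private key:
f = next(i for i in range(2, n) if n % i == 0)  # trivial here, infeasible classically at scale
p2, q2 = f, n // f
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(cipher, d2, n) == msg  # decryption without ever being given d
```

The “quantum-ready” encryption Zafrir refers to swaps the factoring assumption for problems, such as lattice problems, believed hard even for quantum computers.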


Evidence

Explanation of how quantum computing shrinks time for breaking current encryption models based on factoring


Major discussion point

Future Outlook and Quantum Computing Implications


Topics

Cybersecurity | Infrastructure



Jill Popelka

Speech speed

198 words per minute

Speech length

802 words

Speech time

242 seconds

AI increases velocity and sophistication of attacks, with deepfake impersonation becoming a common CEO targeting tactic

Explanation

Popelka describes how AI has increased both the speed and complexity of cyber attacks, with deepfake voice impersonation becoming a particular threat to new CEOs. She shares her personal experience of being deepfaked within 18 months of taking her role, and notes that this is a common experience among CEOs.


Evidence

Personal experience of being deepfaked requesting financial data, and confirmation from other CEOs that they experienced similar attacks


Major discussion point

AI’s Impact on Cybersecurity Threats and Vulnerabilities


Topics

Cybersecurity | Economic


Agreed with

– Jessica Rosenworcel
– Nadav Zafrir
– Marc Murtra

Agreed on

AI dramatically increases the speed and sophistication of cyber attacks


Anomaly detection based on understanding normal network behavior provides better defense than attacker-centric approaches

Explanation

Popelka explains that Darktrace’s approach differs from traditional cybersecurity by focusing on understanding an organization’s normal network traffic and communications, then defending against anything anomalous. This approach runs against multi-layered AI threat models to assess cyber risk and prioritize responses.
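Darktrace’s actual models are proprietary; as a minimal sketch of the general idea (our illustration, not their algorithm), baseline-and-deviation detection can be as simple as learning normal traffic statistics and flagging observations that stray several standard deviations from them:

```python
# Minimal anomaly-detection sketch: learn a baseline of "normal" traffic,
# then flag values deviating by more than k standard deviations from it.
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn normal behavior from historical observations."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, k=3.0):
    """Flag anything more than k standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > k * sigma

normal_traffic = [100, 104, 98, 101, 99, 103, 97, 102]  # e.g. requests/minute
baseline = fit_baseline(normal_traffic)

assert not is_anomalous(105, baseline)  # within the learned normal range
assert is_anomalous(500, baseline)      # sudden spike flagged
```

Production systems model many features jointly and adapt the baseline over time, but the contrast with attacker-centric defense is the same: no signature of a known attack is needed to flag the spike.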


Evidence

Darktrace’s 15 years of de-identified customer data used to precisely identify and respond to threats


Major discussion point

Defense Strategies and Collaborative Approaches


Topics

Cybersecurity | Infrastructure


Multilayered defense approaches combining technology and human elements are necessary

Explanation

Popelka advocates for multilayered cybersecurity approaches that combine technological solutions with human awareness and response. She emphasizes the importance of public-private partnerships and private-private partnerships, while maintaining human involvement in the security mix.


Evidence

Discussion of the need for CEOs to speak vulnerably about attacks to strengthen collective defense


Major discussion point

Defense Strategies and Collaborative Approaches


Topics

Cybersecurity | Human rights


Agreed with

– Jessica Rosenworcel
– Nadav Zafrir

Agreed on

Traditional security models are inadequate for AI-driven threats


Vulnerability sharing between companies strengthens overall security ecosystem

Explanation

Popelka describes how security companies proactively share vulnerability information with each other rather than exploiting discoveries for competitive advantage. When Darktrace identifies vulnerabilities in other solutions, they directly inform those companies to help them fix the issues rather than publicizing them.


Evidence

Darktrace’s practice of directly notifying other companies about vulnerabilities rather than going to media


Major discussion point

Defense Strategies and Collaborative Approaches


Topics

Cybersecurity | Economic


Agreed with

– Jessica Rosenworcel
– Nadav Zafrir

Agreed on

Collaboration and information sharing are essential for effective cybersecurity defense


Human resilience and collaborative problem-solving will ultimately lead to safer outcomes

Explanation

Popelka expresses optimism about cybersecurity’s future based on human resilience and brilliance, emphasizing collaborative efforts to solve security problems. She advocates for changes to education systems to make them agile enough to produce people who can use AI technologies for good and maintain safety for humanity.


Major discussion point

Future Outlook and Quantum Computing Implications


Topics

Sociocultural | Development



Marc Murtra

Speech speed

147 words per minute

Speech length

1059 words

Speech time

430 seconds

AI democratizes both offensive and defensive capabilities, making sophisticated attacks accessible to non-experts

Explanation

Murtra explains how AI changes the landscape by removing the need for specialized skills in both attack and defense. He uses the example of eavesdropping, where previously criminals needed five people working eight-hour shifts to monitor communications, but now AI can automatically process and analyze all communications for relevant information.


Evidence

Specific example of eavesdropping operations requiring five people for continuous monitoring versus AI-automated analysis


Major discussion point

AI’s Impact on Cybersecurity Threats and Vulnerabilities


Topics

Cybersecurity | Economic


Agreed with

– Jessica Rosenworcel
– Nadav Zafrir
– Jill Popelka

Agreed on

AI dramatically increases the speed and sophistication of cyber attacks


Disagreed with

– James Harding
– Nadav Zafrir

Disagreed on

Optimism vs. pessimism about AI’s impact on cybersecurity democratization


Europe has significant vulnerability due to lack of indigenous cybersecurity technology and deep know-how

Explanation

Murtra argues that Europe faces particular cybersecurity challenges because it lacks domestic technology and deep cybersecurity expertise, instead relying on integrating third-party products. He warns that this creates vulnerabilities similar to defense industry dependencies, where external expertise is required even for incident response.


Evidence

Example of London airport cyber attack requiring forensic experts from outside Europe


Major discussion point

Geopolitical and State Actor Threats


Topics

Cybersecurity | Legal and regulatory


The cybersecurity arms race remains balanced in the short term, with defenders likely to prevail long-term

Explanation

Murtra distinguishes between state and non-state actors based on their objectives and capabilities, arguing that while the system must defend everywhere and attackers can choose vulnerable points, the collaborative nature of defenders creates an anti-fragile system. He believes that over time, the collective defense system will prove stronger than individual attackers.


Evidence

Comparison of non-state actors limited by their ability to process stolen money versus state actors with different objectives


Major discussion point

Future Outlook and Quantum Computing Implications


Topics

Cybersecurity | Geopolitical



Audience

Speech speed

161 words per minute

Speech length

460 words

Speech time

171 seconds

Attribution of attacks becomes more difficult as AI helps attackers hide linguistic patterns and obfuscate origins

Explanation

An audience member points out that while the technical aspects of cyber attacks may not depend on the attacker’s identity, geopolitical attribution matters significantly for policy responses. AI is making attribution more difficult by helping attackers hide linguistic markers and other identifying characteristics that previously helped determine attack origins.


Evidence

Distinction between technical response (which doesn’t require knowing attacker identity) and geopolitical response (which does require attribution)


Major discussion point

Geopolitical and State Actor Threats


Topics

Cybersecurity | Legal and regulatory


AI threatens the integrity of information ecosystems, making it difficult to distinguish real from fake content

Explanation

A representative from the European Broadcasting Union raises concerns about AI’s impact on information integrity, noting that businesses and decision-makers depend on reliable information. The challenge extends beyond individual security to the broader information ecosystem that supports business decisions and democratic processes.


Evidence

James Harding’s example of spending an afternoon trying to determine which 2 of 25 Iran-related images were real


Major discussion point

Information Integrity and Trust Challenges


Topics

Sociocultural | Human rights


Education systems need fundamental changes to prepare people for skeptical evaluation of digital information

Explanation

An audience member argues that education must evolve to teach skepticism and validation skills, starting from elementary school and continuing through when students get cell phones and beyond. Current educational approaches from 50 years ago are inadequate for preparing people to operate in today’s information environment.


Major discussion point

Information Integrity and Trust Challenges


Topics

Sociocultural | Development



James Harding

Speech speed

180 words per minute

Speech length

2477 words

Speech time

824 seconds

AI fundamentally changes the nature of cybersecurity, creating potentially more uneven threats than previously understood

Explanation

Harding argues that while many people are just beginning to understand cybersecurity personally and organizationally, AI completely transforms this landscape. He suggests that AI introduces new types of threats that may be more asymmetric and challenging than what we previously faced.


Major discussion point

AI’s Impact on Cybersecurity Threats and Vulnerabilities


Topics

Cybersecurity | Infrastructure


The free market approach to AI development may have been problematic from a cybersecurity perspective

Explanation

Harding questions whether the rapid, unrestricted release of AI capabilities like ChatGPT to the general public was wise from a security standpoint. He suggests that enormous amounts of power were handed over without understanding what individuals, both good and bad actors, would be able to do with these tools, essentially letting ‘the genie out of the bottle.’


Evidence

Reference to OpenAI’s launch of ChatGPT and other AI services made available on a retail basis


Major discussion point

Regulatory and Governance Challenges in AI-Driven Cybersecurity


Topics

Legal and regulatory | Economic


Disagreed with

– Nadav Zafrir

Disagreed on

Controllability of AI development and deployment


There is a concerning reluctance among business leaders to publicly discuss cybersecurity vulnerabilities

Explanation

Harding observes that even when executives face serious cyber threats, they often don’t share these experiences publicly, creating a taboo around vulnerability disclosure. He cites an example of a banking CEO who was being extorted for several hundred million dollars but never shared this information with other attendees at a conference, suggesting this secrecy may weaken collective defense efforts.


Evidence

Example of ashen-faced banking CEO who disappeared from conference due to extortion attempt but never shared the experience with other attendees


Major discussion point

Defense Strategies and Collaborative Approaches


Topics

Cybersecurity | Economic


AI-driven information manipulation threatens the foundation of journalism and decision-making

Explanation

Harding describes the practical challenges journalists face in distinguishing real from fake content in the AI era. He illustrates how even professional news organizations with resources and expertise struggle to verify information authenticity, spending entire afternoons trying to determine which images are real versus AI-generated.


Evidence

Personal example of his team spending an afternoon trying to identify which 2 out of 25 Iran-related pictures were real


Major discussion point

Information Integrity and Trust Challenges


Topics

Sociocultural | Human rights


The democratization promise of technology has historically not materialized, raising questions about AI’s impact on security inequality

Explanation

Harding expresses skepticism about whether AI will truly democratize cybersecurity capabilities or instead create new forms of inequality. He questions whether AI-driven threats disproportionately affect wealthy, prominent individuals or if they create broader vulnerabilities across society, noting that technology’s promise to be a democratizing force has generally not been fulfilled historically.


Evidence

Observation that prominent CEOs are frequently targeted by deepfake attacks


Major discussion point

AI’s Impact on Cybersecurity Threats and Vulnerabilities


Topics

Cybersecurity | Sociocultural


Disagreed with

– Nadav Zafrir
– Marc Murtra

Disagreed on

Optimism vs. pessimism about AI’s impact on cybersecurity democratization



Moderator

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

The session aims to provide educational insight rather than argumentative debate about AI-driven cybersecurity challenges

Explanation

The moderator establishes that this particular forum session is designed to be instructional, helping participants understand the complex intersection of AI and cybersecurity. They distinguish this from other World Economic Forum sessions that focus on debate and argument, positioning this as a learning opportunity about profound but often unseen disruptions to businesses and countries.


Evidence

Distinction between two types of WEF sessions: those for arguments versus those for lessons


Major discussion point

AI’s Impact on Cybersecurity Threats and Vulnerabilities


Topics

Cybersecurity | Development


Agreements

Agreement points

AI dramatically increases the speed and sophistication of cyber attacks

Speakers

– Jessica Rosenworcel
– Nadav Zafrir
– Jill Popelka
– Marc Murtra

Arguments

AI enables malicious actors to identify vulnerabilities at unprecedented speed, multiplying attack possibilities


Attackers are moving faster than defenders because they lack regulations and procurement constraints, and collaborate better


AI increases velocity and sophistication of attacks, with deepfake impersonation becoming a common CEO targeting tactic


AI democratizes both offensive and defensive capabilities, making sophisticated attacks accessible to non-experts


Summary

All speakers agree that AI fundamentally accelerates and enhances cyber attack capabilities, making threats more sophisticated and accessible to a broader range of actors


Topics

Cybersecurity | Infrastructure


Collaboration and information sharing are essential for effective cybersecurity defense

Speakers

– Jessica Rosenworcel
– Nadav Zafrir
– Jill Popelka

Arguments

Governments need to create safe spaces for collaboration between public and private sectors to address exponentially larger threats


Open platform collaboration between security companies is essential, moving away from closed garden consolidation


Vulnerability sharing between companies strengthens overall security ecosystem


Summary

Speakers share the consensus that traditional siloed approaches to cybersecurity are inadequate and that collaborative, open approaches are necessary to address AI-enhanced threats


Topics

Cybersecurity | Legal and regulatory


Traditional security models are inadequate for AI-driven threats

Speakers

– Jessica Rosenworcel
– Nadav Zafrir
– Jill Popelka

Arguments

Legal systems must wrestle with defining machine activities and determining human accountability for AI actions


Current security models treat AI agents as humans with identity-based security, but agents are naive and lack human identity


Multilayered defense approaches combining technology and human elements are necessary


Summary

Agreement that existing security frameworks designed for human actors and traditional systems are fundamentally mismatched for AI-driven environments


Topics

Cybersecurity | Legal and regulatory


Similar viewpoints

Both speakers view the changing nature of threat actors and maintain cautious optimism about long-term defensive capabilities despite short-term challenges

Speakers

– Nadav Zafrir
– Marc Murtra

Arguments

Lines are blurring between state and non-state actors as AI capabilities become more accessible


The cybersecurity arms race remains balanced in the short term, with defenders likely to prevail long-term


Topics

Cybersecurity | Geopolitical


Both express concerns about the governance and regulatory challenges posed by AI development outside traditional oversight frameworks

Speakers

– Jessica Rosenworcel
– Audience

Arguments

AI development in private markets lacks the military contract guidelines that shaped earlier revolutionary technologies


Attribution of attacks becomes more difficult as AI helps attackers hide linguistic patterns and obfuscate origins


Topics

Legal and regulatory | Cybersecurity


Both emphasize the critical role of human adaptation and education in addressing AI-driven security challenges

Speakers

– Jill Popelka
– Audience

Arguments

Human resilience and collaborative problem-solving will ultimately lead to safer outcomes


Education systems need fundamental changes to prepare people for skeptical evaluation of digital information


Topics

Sociocultural | Development


Unexpected consensus

Quantum computing is manageable with current preparation

Speakers

– Nadav Zafrir
– Jessica Rosenworcel

Arguments

Quantum computing will require quantum-ready encryption, but current data may lose relevance by the time quantum threats materialize


The next two years will expose vulnerabilities but offer long-term opportunities for greater safety through technical and cultural trust


Explanation

Despite the panel’s focus on AI-driven threats, both speakers surprisingly downplay quantum computing as an immediate concern, suggesting it’s a manageable challenge with available solutions


Topics

Cybersecurity | Infrastructure


AI democratizes defensive capabilities alongside offensive ones

Speakers

– Nadav Zafrir
– Marc Murtra
– Jill Popelka

Arguments

The deterioration of trust threatens the foundation of civilization itself


AI democratizes both offensive and defensive capabilities, making sophisticated attacks accessible to non-experts


Anomaly detection based on understanding normal network behavior provides better defense than attacker-centric approaches


Explanation

While acknowledging AI’s threat amplification, speakers unexpectedly agree that AI also democratizes defensive capabilities, potentially leveling the playing field rather than just favoring attackers


Topics

Cybersecurity | Economic


Overall assessment

Summary

The speakers demonstrate strong consensus on AI’s transformative impact on cybersecurity, the inadequacy of current security models, and the necessity of collaborative approaches. They agree on both the severity of emerging threats and the potential for AI to enhance defensive capabilities.


Consensus level

High level of consensus with nuanced differences in emphasis rather than fundamental disagreements. This suggests a mature understanding of the challenges and a shared recognition that traditional approaches must evolve. The implications are significant: the cybersecurity community appears aligned on the need for systemic changes in how we approach security in the AI era.


Differences

Different viewpoints

Optimism vs. pessimism about AI’s impact on cybersecurity democratization

Speakers

– James Harding
– Nadav Zafrir
– Marc Murtra

Arguments

The democratization promise of technology has historically not materialized, raising questions about AI’s impact on security inequality


The democratization works both ways. Millions and millions of subscribers who just have a line and pay a subscription to Telefonica weren’t susceptible to very sophisticated attacks ten years ago, but they also couldn’t hire the latest defenses. Now we can have an agent on every phone they’re carrying that has the latest and greatest policies the US government had ten years ago. So that is being democratized as well


AI democratizes both offensive and defensive capabilities, making sophisticated attacks accessible to non-experts


Summary

Harding expresses skepticism about AI’s democratizing potential based on historical technology patterns, while Zafrir and Murtra argue that AI genuinely democratizes both attack and defense capabilities, making advanced security accessible to ordinary users


Topics

Cybersecurity | Sociocultural | Economic


Controllability of AI development and deployment

Speakers

– James Harding
– Nadav Zafrir

Arguments

The free market approach to AI development may have been problematic from a cybersecurity perspective


I think it’s inevitable. I don’t think you can really control it. I don’t know from a regulator perspective what you think. But I think that we can’t slow down science. We can’t slow down advancement. We just need to make sure that we run as fast as these technologies, with security on top of that


Summary

Harding questions whether the unrestricted release of AI capabilities was wise from a security standpoint, while Zafrir argues that controlling AI development is neither possible nor desirable, advocating instead for keeping pace with security solutions


Topics

Legal and regulatory | Economic | Cybersecurity


Unexpected differences

Quantum computing timeline and threat assessment

Speakers

– James Harding
– Nadav Zafrir
– Jessica Rosenworcel

Arguments

So how does quantum fit into this conversation? Or can we come back in 2029 and talk about it then?


Quantum computing will require quantum-ready encryption, but current data may lose relevance by the time quantum threats materialize


That requires changes in bandwidth, and raises unequal-access concerns if people don’t actually have the same bandwidth for quantum computing globally


Explanation

Unexpectedly, there was disagreement about the urgency and practical implications of quantum computing threats. Harding seemed to suggest it might be a distant concern, Zafrir was relatively optimistic about managing the transition with quantum-ready encryption, while Rosenworcel raised concerns about unequal access creating new vulnerabilities. This disagreement was unexpected given their general alignment on other AI-related threats


Topics

Cybersecurity | Infrastructure


Overall assessment

Summary

The discussion revealed surprisingly few fundamental disagreements among speakers, with most conflicts centering on approaches rather than core problems. Main disagreements involved the controllability of AI development, the democratizing potential of AI for cybersecurity, and the timeline/urgency of quantum computing threats


Disagreement level

Low to moderate disagreement level. The speakers largely agreed on the nature and severity of AI-driven cybersecurity challenges, but differed on regulatory approaches, the feasibility of controlling AI development, and optimism about technology’s democratizing effects. This consensus on problems but divergence on solutions suggests the field may benefit from multiple complementary approaches rather than a single unified strategy




Takeaways

Key takeaways

AI fundamentally transforms cybersecurity by enabling both attackers and defenders to operate at unprecedented speed and scale


Current security models are inadequate for AI agents, which lack human identity but are treated as humans in security frameworks


The lines between state and non-state actors are blurring as AI democratizes sophisticated attack capabilities


Collaboration and information sharing between private companies and with government agencies is essential for effective defense


Europe faces significant cybersecurity vulnerability due to lack of indigenous technology and expertise


AI threatens information integrity and trust, which are foundational to civilization and democratic processes


Anomaly detection based on understanding normal behavior is more effective than traditional attacker-centric approaches


The next 2-5 years will be a critical transition period with increased vulnerabilities, but long-term prospects for safety are positive if proper measures are taken


Resolutions and action items

Develop quantum-ready encryption systems to prepare for future quantum computing threats


Create open platform architectures that allow best-of-breed security solutions to interoperate


Establish ‘guardian agents’ – proprietary AI systems to oversee and secure other AI agents


Reform education systems to teach digital skepticism and information validation skills from elementary through college levels


Build contextual and layered identity verification systems rather than binary authentication


Develop technical solutions for deepfake detection and content verification


Strengthen public-private partnerships and create safe spaces for vulnerability sharing


Unresolved issues

Legal frameworks for determining accountability when AI agents act autonomously


How to balance the democratization of AI capabilities with security risks


Attribution challenges as AI makes it easier for attackers to hide their origins and linguistic patterns


The fundamental question of whether digitally mediated communication can remain trustworthy


How to protect smaller organizations and individuals who cannot afford enterprise-level security solutions


The timeline and practical implementation of quantum-ready security measures


How to maintain information integrity in news and media when deepfakes become indistinguishable from reality


Suggested compromises

Accept that some vulnerabilities will be exposed in the short term while building long-term resilience


Focus on making systems ‘anti-fragile’ where each vulnerability discovered strengthens the overall system


Acknowledge that perfect security is impossible but work toward contextual, layered defense approaches


Balance the need for information sharing with competitive business interests through de-identified data sharing


Accept that the current transition period will be chaotic while building toward better interoperability standards


Thought provoking comments

I actually think that at the heart of that is we know how to secure humans and infrastructure, and I think we’re building the tools to secure non-humans. The problem is interoperability… The thing about these agents is that they’re very naive at the end of the day, and they start crossing lanes, and we treat them as humans. So we try to secure them based on identity, but they don’t have a human identity and they’re very naive. And that is going to change the way we need to look at security altogether.

Speaker

Nadav Zafrir


Reason

This comment fundamentally reframes the cybersecurity challenge by identifying that AI agents don’t fit existing security paradigms designed for humans. It introduces the concept that our security models are fundamentally mismatched to the new threat landscape, moving beyond surface-level concerns about AI-powered attacks to deeper architectural problems.


Impact

This shifted the conversation from discussing AI as simply a tool that enhances existing threats to recognizing it as something that requires entirely new security frameworks. It introduced the concept of ‘naive agents’ that became a recurring theme and helped explain why traditional identity-based security approaches are failing.


I think AI is different because it is taking place in private markets. It’s being developed chiefly by private actors. And so those guidelines are being developed in real time and they’re not getting the same frameworks that some earlier technologies had when they reached the marketplace.

Speaker

Jessica Rosenworcel


Reason

This observation provides crucial historical context by contrasting AI development with previous revolutionary technologies that had military/government oversight from the start. It explains why AI feels so uncontrolled and why regulatory frameworks are lagging behind technological capabilities.


Impact

This comment elevated the discussion from technical cybersecurity issues to broader questions of governance and societal risk management. It helped explain why the ‘genie is out of the bottle’ phenomenon feels so pronounced with AI and influenced subsequent discussions about the need for new forms of collaboration between public and private sectors.


All of a sudden, we have connected between humans and machines through semantic language. And any one of us doesn’t have to be a PhD in mathematics in order to write code, in order to change the world. And that’s why attackers are having fun.

Speaker

Nadav Zafrir


Reason

This succinctly captures the democratization paradox of AI – the same accessibility that empowers legitimate users also empowers malicious actors. It explains why the threat landscape has changed so dramatically in just two years since ChatGPT’s launch.


Impact

This comment helped the audience understand the fundamental shift in the cybersecurity landscape and why traditional barriers to sophisticated attacks have been lowered. It connected the technical capabilities of AI to the practical reality of increased threat vectors and influenced the discussion about security inequality.


I actually think governments have to create safe spaces for those dialogues, because what we are facing is exponentially larger and more confusing than ever before, and it’s going to require a different kind of collaboration and regulation than before

Speaker

Jessica Rosenworcel


Reason

This addresses a critical paradox in cybersecurity: organizations need to share threat information to defend collectively, but competitive and legal pressures discourage such sharing. The call for ‘safe spaces’ recognizes that traditional regulatory approaches are insufficient for the AI era.


Impact

This comment introduced the theme of necessary collaboration that ran through the rest of the discussion, influencing conversations about public-private partnerships and the need for new models of information sharing. It also set up the discussion about vulnerability in sharing versus strength in collaboration.


The lines are blurring. Between the state and the non-state… What we’re seeing on the dark web are capabilities that until a couple of years ago were really state-level capabilities in the hands of a few.

Speaker

Nadav Zafrir


Reason

This observation fundamentally challenges traditional threat attribution models and geopolitical frameworks for understanding cyberattacks. It suggests that the democratization of AI has collapsed the distinction between nation-state and criminal actors.


Impact

This comment significantly influenced the discussion about attribution challenges and geopolitical implications. It led to deeper exploration of how governments and organizations should respond when traditional categories of threats no longer apply, and it connected to later discussions about the difficulty of determining responsibility in AI-mediated attacks.


I think the question honestly depends, it depends what you’re looking at, right? So if you’re looking at deepfake, specifically video, if you invest in it, we are coming up with solutions that can vet what is fake and what’s not. That is possible, but in my opinion, to your point, it has nothing to do with cyber, the problem is what is the source of truth?… And I think what that is doing is deteriorating trust, and in some ways the eerie part about it is that trust is the foundation of a civilization.

Speaker

Nadav Zafrir


Reason

This comment elevates the discussion from technical cybersecurity concerns to fundamental questions about societal trust and the foundations of civilization. It distinguishes between technical solutions (detecting deepfakes) and the deeper epistemological crisis (determining truth), suggesting the latter is more threatening.


Impact

This was perhaps the most profound moment in the discussion, shifting focus from cybersecurity as a technical problem to cybersecurity as an existential challenge to social cohesion. It influenced the final optimistic/pessimistic assessments and connected cybersecurity to broader concerns about democracy and social stability.


Overall assessment

These key comments transformed what could have been a technical discussion about cybersecurity tools into a profound examination of how AI is reshaping the fundamental assumptions underlying digital security, governance, and social trust. The most impactful insights came from recognizing that AI doesn’t just enhance existing threats—it creates entirely new categories of vulnerability that existing frameworks cannot address. The discussion evolved from tactical concerns (how to defend against AI-powered attacks) to strategic ones (how to maintain societal trust and effective governance in an age where the distinction between human and machine actors is blurring). The participants’ willingness to acknowledge uncertainty and complexity, rather than offering simple solutions, made the conversation particularly valuable for understanding the true scope of the challenges ahead.


Follow-up questions

How should legal systems define liability when AI agents act autonomously without explicit human instruction?

Speaker

James Harding


Explanation

This addresses a fundamental gap in current legal frameworks as AI systems become more autonomous and make decisions that could cause harm or damage without direct human oversight.


What specific regulatory frameworks are needed for AI development happening primarily in private markets versus traditional government-contracted technology development?

Speaker

Jessica Rosenworcel


Explanation

This highlights the difference between AI development and previous revolutionary technologies that had military/government oversight from the beginning, suggesting new regulatory approaches may be needed.


How can Europe build indigenous cybersecurity capabilities and reduce dependence on third-party technologies from other regions?

Speaker

Marc Murtra


Explanation

This addresses a strategic vulnerability where Europe lacks deep cybersecurity know-how and relies on external forensic experts, which could be problematic in conflicts involving state actors.


What is the timeline and practical impact of quantum computing on current cybersecurity measures?

Speaker

James Harding


Explanation

While briefly discussed, the quantum threat to current encryption methods and the readiness of quantum-resistant solutions needs more detailed exploration given its potential to revolutionize cybersecurity.


How can attribution of cyberattacks be maintained when AI makes it easier for attackers to hide their linguistic and technical fingerprints?

Speaker

Audience member


Explanation

This is crucial for geopolitical responses to cyberattacks, as proper attribution is necessary for diplomatic and defensive actions, but AI is making this increasingly difficult.


What should big tech companies do to protect the integrity of information ecosystems and support trustworthy news and content?

Speaker

Liz Corbyn (European Broadcasting Union)


Explanation

This addresses the broader societal impact of AI-generated misinformation and the responsibility of technology companies in maintaining information integrity.


How should education systems be reformed to prepare students for a world where digital skepticism and validation skills are essential?

Speaker

Audience member


Explanation

This recognizes that current educational approaches are inadequate for preparing people to navigate an environment filled with AI-generated content and sophisticated digital deception.


Are we approaching the end of digitally mediated communication due to increasing distrust from scams and impersonation?

Speaker

Daniela Tonella (ING)


Explanation

This explores whether the proliferation of AI-powered scams and deepfakes might fundamentally undermine trust in digital communication channels.


How can contextual and layered identity verification systems be implemented to replace binary authentication methods?

Speaker

Nadav Zafrir


Explanation

This suggests a need for research into more sophisticated identity verification that goes beyond simple yes/no authentication to consider behavioral and contextual factors.


What are the specific technical solutions for detecting and preventing deepfake content, particularly in video format?

Speaker

Nadav Zafrir


Explanation

While mentioned as possible, the specific technical approaches and their effectiveness in combating increasingly sophisticated deepfake technology need further exploration.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.