Tech Transformed Cybersecurity: AI’s Role in Securing the Future

1 Nov 2023 12:30h - 12:55h UTC


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Ken Naumann

The speakers delved into the intersection of AI and cybersecurity, exploring several key aspects. They expressed concern about the potential manipulation and poisoning of AI systems by hackers: attackers continuously find new ways to access AI systems and manipulate their data, resulting in erratic or even malicious behavior. This highlights the alarming issue of AI systems becoming difficult to control once they have been manipulated.
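The data-poisoning risk described above can be illustrated with a toy detector. The sketch below is purely hypothetical (the feature values, flip rate, and nearest-centroid model are invented for illustration, not drawn from any speaker's product): an attacker who relabels malicious training samples as benign drags the learned decision boundary toward the malicious cluster, so more real attacks slip through undetected.

```python
import random

random.seed(0)

def sample(label, n):
    """1-D feature: benign (label 0) clusters near 0, malicious (1) near 5."""
    return [(random.gauss(5.0 * label, 1.0), label) for _ in range(n)]

def train_centroids(data):
    """Nearest-centroid model: one mean feature value per class."""
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return sums[0] / counts[0], sums[1] / counts[1]

def false_negative_rate(c_benign, c_malicious, malicious_test):
    """Fraction of malicious samples classified as benign."""
    misses = sum(abs(x - c_benign) < abs(x - c_malicious)
                 for x, _ in malicious_test)
    return misses / len(malicious_test)

clean = sample(0, 1000) + sample(1, 1000)

# Poisoning: the attacker relabels 80% of the malicious training
# samples as benign, dragging the learned "benign" centroid upward.
poisoned = sample(0, 1000)
for x, y in sample(1, 1000):
    poisoned.append((x, 0) if random.random() < 0.8 else (x, y))

malicious_test = sample(1, 2000)
fn_clean = false_negative_rate(*train_centroids(clean), malicious_test)
fn_poisoned = false_negative_rate(*train_centroids(poisoned), malicious_test)

print(f"false-negative rate, clean model:    {fn_clean:.3f}")
print(f"false-negative rate, poisoned model: {fn_poisoned:.3f}")
```

With the clean training set the boundary sits between the clusters and almost no attacks are missed; with the poisoned labels the miss rate rises by an order of magnitude, which is the "AI acting as a bad actor" failure mode the panel warns about.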

The analysis also highlighted the regulatory challenges associated with AI technology. It was noted that regulations and standards for AI often struggle to keep up with the rapid pace of technological development. The adoption of generative AI has surprised the speakers considerably over the last year and a half, emphasizing the need for regulations and standards to effectively oversee and ensure the responsible use of AI.

The discussion further addressed the importance of establishing standards for the role of AI in cyber activities. The cyber community was urged to collaborate and develop these standards to effectively harness AI’s potential in enhancing cybersecurity, shaping the ethical and safe implementation of AI in the cyber domain.

Additionally, the analysis explored the significance of secure cross-border data sharing for improving AI. The speakers highlighted the role of data sharing, emphasizing the need to share data across country borders securely. This step would optimize AI capabilities and enable greater global collaboration in AI-driven initiatives.

The analysis also examined the role of leadership in determining AI’s responsibilities. It was agreed that leaders need to make careful decisions about when to entrust more responsibility to AI technology. Safety, honesty, and the protection of current job holders were stressed as paramount considerations when integrating AI into various sectors.

Moreover, the analysis discussed differing perspectives on the timeline and approach to integrating AI into various roles. While some individuals believed AI could take over the analyst role in a short period of three to five years, others argued for a more measured and gradual process.

An interesting observation was made regarding the evolving role of cybersecurity specialists. It was suggested that their responsibilities might expand beyond protecting the environment to include safeguarding AI systems. This evolution reflects the increasing significance of cybersecurity in the context of AI technology.

In conclusion, the analysis highlighted the potential risks and challenges associated with AI and cybersecurity. The importance of addressing the manipulation and control of AI systems, bridging the gap between regulations and rapid technological advancement, establishing standards for AI in cyber activities, and promoting secure cross-border data sharing were emphasized. Additionally, the need for careful decision-making by leaders and the evolving role of cybersecurity specialists in protecting both the environment and AI systems were discussed.

Moderator – Massimo Marioni

Title: The Critical Role of AI in Securing the Future

Summary: The panel discussion titled “AI’s role in securing the future” focused on the importance of leveraging AI to identify and address cybersecurity vulnerabilities in a constantly evolving online landscape. The panelists stressed the need for advanced systems capable of early risk detection and effective communication to individuals.

With the rapid pace of technological advancements, integrating AI is crucial in enhancing online safety. The session highlighted how AI can proactively identify and resolve security issues before they cause significant harm. Dr. Helmut Reisinger, CEO of EMEA and LATAM at Palo Alto Networks, provided impressive examples of how AI is currently being used to address cybersecurity vulnerabilities.

However, Ken Naumann, CEO of NetWitness, discussed the challenges of manipulative tactics used to exploit AI systems. Understanding these tactics is critical in safeguarding the integrity and security of AI systems.

Looking ahead, the panel discussed the potential of AI to make cyberspace safer. They emphasized the importance of talent development to further advance AI capabilities. As AI evolves rapidly, individuals must receive adequate training and education to keep up with developments in the workplace.

The panel also addressed the complex issue of global collaboration in establishing regulations for AI. Despite differing opinions on AI usage, finding a way to set regulations is essential. The example of Italy's temporary move to ban ChatGPT highlighted the complexity of this challenge. The panel agreed that international cooperation is necessary to establish and enforce regulations across borders.

The session concluded with a discussion on striking a balance between promoting innovation and mitigating risks. The panelists, as senior leaders, offered insights on implementing rules to achieve this balance effectively.

In summary, the panel discussion emphasized the significant role of AI in identifying and mitigating cybersecurity vulnerabilities. It underscored the importance of talent development, global collaboration, and effective regulation to harness the potential of AI while managing associated risks. Safeguarding the future of digital security necessitates strategic implementation of AI technologies.

Sean Yang

The analysis focuses on the importance of AI governance and training in preparing for AI in the workplace. It emphasizes the need for different stakeholders to receive tailored training and awareness to effectively fulfill their responsibilities. This includes AI users, technical vendors or providers, government regulators, third-party certification bodies, and the public. Stakeholders must have a clear understanding of their roles and responsibilities in relation to AI.

Decision makers, such as executives who make policies and strategies, need to improve their awareness about AI and understand the risks associated with AI applications. A top-down approach to AI governance is often employed, where executives play a crucial role in making informed decisions. Therefore, it is necessary for decision makers to possess a comprehensive understanding of the risks associated with AI.

Furthermore, the analysis highlights the need to review and update traditional engineering concepts, such as software engineering, security engineering, and data engineering, in light of the rapid development of AI technology. The integration of AI into various industries necessitates the adaptation and improvement of existing concepts and practices.

The role of universities and educational institutions is also emphasized. It is noted that many universities still utilize outdated textbooks in their AI and software engineering courses. To bridge this gap and ensure that graduates have the necessary skills for the industry, universities should update their training materials and curriculum to align with current industry practices. This collaboration between industry and academia can help address the skills gap and ensure that graduates are well-prepared for the AI-driven workplace.

Another important point made in the analysis is that AI is a general enabling technology and should be viewed as such, rather than as a standalone product. The focus should not only be on AI technology itself but also on the management of its applications and scenarios. This highlights the need for AI governance to manage the entire AI lifecycle, from design to operations, to maximize its potential benefits and mitigate risks.

The analysis concludes with the assertion that AI is a people-oriented technology. It highlights the potential of AI to support and serve people, as well as the importance of AI governance in improving its applications. This perspective underscores the need for responsible and ethical development and deployment of AI to ensure positive impacts on society and individuals.

Overall, the analysis emphasizes the significance of AI governance and training in effectively preparing for AI in the workplace. It provides insights into the specific needs and responsibilities of different stakeholders, the importance of decision makers’ awareness of AI risks, the need to update traditional engineering concepts, the importance of collaboration between universities and industry, and the people-centric nature of AI. These insights can guide policymakers, businesses, and educational institutions in developing strategies and frameworks to harness the potential of AI while ensuring its responsible and beneficial use.

Helmut Reisinger

The analysis reveals several key points regarding the role of AI in cybersecurity. Firstly, AI is essential in dealing with the rapidly growing cyber threat landscape, as it enables faster detection and response. Palo Alto Networks, for example, detects 1.5 million new attacks daily, and with the use of AI the mean time to detect is reduced to just 10 seconds and the mean time to repair to one minute. This highlights the significant impact that AI can have in combating cyber threats.
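Mean time to detect (MTTD) and mean time to repair (MTTR) are simple averages over incident timestamps. A minimal sketch, with invented timestamps chosen to echo the 10-second and one-minute figures above:

```python
from datetime import datetime

# Hypothetical incident log: when the intrusion began, when it was
# detected, and when it was remediated. All timestamps are invented
# purely to illustrate how the metrics are computed.
incidents = [
    {"start":    datetime(2023, 11, 1, 9, 0, 0),
     "detected": datetime(2023, 11, 1, 9, 0, 12),
     "repaired": datetime(2023, 11, 1, 9, 1, 5)},
    {"start":    datetime(2023, 11, 1, 14, 30, 0),
     "detected": datetime(2023, 11, 1, 14, 30, 8),
     "repaired": datetime(2023, 11, 1, 14, 31, 2)},
]

def mean_seconds(deltas):
    deltas = list(deltas)
    return sum(d.total_seconds() for d in deltas) / len(deltas)

# Mean time to detect: attack start -> detection.
mttd = mean_seconds(i["detected"] - i["start"] for i in incidents)
# Mean time to repair: detection -> remediation.
mttr = mean_seconds(i["repaired"] - i["detected"] for i in incidents)

print(f"MTTD: {mttd:.1f}s, MTTR: {mttr:.1f}s")  # MTTD: 10.0s, MTTR: 53.5s
```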

It is argued that reliance on AI for cybersecurity is inevitable due to the speed, scale, and sophistication of threats. The time between infiltration and exfiltration of data was about 40 days in 2021 but fell to just five days last year, and attacker-side use of AI is expected to shrink it to a matter of hours, demonstrating why defenses must be able to respond at the same speed.

Additionally, machine learning and AI are regarded as crucial for cross-correlation in cybersecurity. By cross-correlating telemetry data across various aspects such as user identity, device identity, and application, machine learning algorithms can provide valuable insights for detecting and preventing cyber attacks.
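The cross-correlation idea can be sketched as a join of alerts from independent telemetry sources on a shared (user, device) key: an entity that shows up in several sources at once is treated as far more suspicious than any single alert. All names, sources, and the threshold below are illustrative assumptions, not Palo Alto Networks' actual implementation.

```python
from collections import defaultdict

# Illustrative alerts from three hypothetical telemetry sources,
# each keyed by user and device so they can be correlated across tools.
firewall_alerts = [
    {"user": "alice", "device": "laptop-7", "event": "beaconing to rare domain"},
]
endpoint_alerts = [
    {"user": "alice", "device": "laptop-7", "event": "unsigned binary executed"},
    {"user": "bob", "device": "laptop-2", "event": "unsigned binary executed"},
]
identity_alerts = [
    {"user": "alice", "device": "laptop-7", "event": "impossible-travel login"},
]

def correlate(*sources):
    """Group alerts by (user, device) across all telemetry sources."""
    by_entity = defaultdict(list)
    for source in sources:
        for alert in source:
            by_entity[(alert["user"], alert["device"])].append(alert["event"])
    return by_entity

correlated = correlate(firewall_alerts, endpoint_alerts, identity_alerts)
# An entity with corroborating alerts from >= 3 sources is escalated.
high_risk = {k: v for k, v in correlated.items() if len(v) >= 3}
print(high_risk)  # only ("alice", "laptop-7") crosses the threshold
```

A lone unsigned-binary alert (bob) stays below the bar, while the same alert corroborated by firewall and identity telemetry (alice) is escalated; in production the scoring would be a learned model over this joined data rather than a fixed count.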

The analysis also highlights the need to consolidate the security estate for end-to-end security. With around 3,500 technology providers, and medium to large enterprises using 20 to 30 different security tools on average, the cybersecurity sector is highly fragmented. These tools often do not intercommunicate, which hinders the effectiveness of security measures. Streamlining and integrating security tools is therefore important for comprehensive, cohesive protection against cyber threats.

Challenges arise with the use of open-source components in coding. While open-source coding is prevalent, with 80% of code created in the world utilising open-source components, the presence of malware in just one open-source library can have a significant snowball effect, compromising the security of the entire system. This highlights the need for caution and thorough security measures when working with open-source components.
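One common mitigation for the open-source risk described above is to pin each dependency to a cryptographic digest recorded when it was vetted, and to reject any downloaded artifact that no longer matches. A minimal sketch: the package name and contents below are made up, and real tooling would additionally consult vulnerability and malware advisory feeds.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Digest "pinned" when the dependency was first vetted (illustrative).
pinned = {
    "leftpad-1.0.tar.gz": sha256(b"def leftpad(s, n): return s.rjust(n)\n"),
}

def verify(name: str, downloaded: bytes) -> bool:
    """Reject any artifact whose digest differs from the vetted pin --
    a single tampered open-source library can compromise a whole build."""
    return sha256(downloaded) == pinned.get(name)

ok = verify("leftpad-1.0.tar.gz",
            b"def leftpad(s, n): return s.rjust(n)\n")
tampered = verify("leftpad-1.0.tar.gz",
                  b"def leftpad(s, n): return s.rjust(n)\nimport os  # backdoor\n")
print(ok, tampered)  # True False
```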

Furthermore, the analysis underscores the importance of considering regional regulations and governance in cybersecurity. While cybersecurity is a universal topic, different regions and countries may have varying standards and regulations. For example, Saudi Arabia has specific governance on where data needs to be stored. Adhering to and adapting to these regulations is crucial to ensuring compliance and maintaining the security of data.

The analysis suggests that convergence of global standards on cybersecurity, data governance, and AI regulation is expected in the future, although it may not happen immediately. This convergence would provide a unified framework for addressing cybersecurity challenges worldwide and supporting global collaboration.

Real-time and autonomous cybersecurity solutions are deemed crucial in the current landscape. As the time between infiltration and exfiltration of data shrinks, the ability to respond in real time becomes increasingly important. AI is seen as a prerequisite for highly automated cybersecurity solutions that can effectively detect and mitigate threats in real time.

It is highlighted that the effectiveness of AI in security is reliant on the quality of data it is trained on. Good data is essential for achieving the desired outcome of rapid detection and remediation. Therefore, organizations should ensure that they have access to the right telemetry data to maximize the effectiveness of AI in cybersecurity.

Policy makers are advised to encourage the growth of AI in cybersecurity while remaining aware of its risks. AI is a driver on both the defender and attacker side: a 910% increase in fake or vulnerable ChatGPT-like websites was observed in the months after ChatGPT's launch. Policies should therefore address the potential misuse of AI while promoting its benefits in enhancing cybersecurity.

Lastly, the analysis highlights the interdependence of cybersecurity and AI for the safety of digital assets. Both are crucial for providing real-time cybersecurity solutions: AI without cybersecurity, or cybersecurity without AI, will not be effective in protecting digital assets.

In conclusion, the analysis emphasizes the importance of AI in addressing the growing cyber threat landscape. It provides evidence of AI’s effectiveness in faster detection and response, cross-correlation in cybersecurity, and the consolidation of security measures. However, challenges with open-source components and regional regulations need to be considered. The convergence of global standards is expected in the long run, but real-time and autonomous cybersecurity solutions are currently crucial. The quality of data used to train AI is essential for its effectiveness, and policymakers should encourage AI growth while mitigating risks. Ultimately, the interdependence of cybersecurity and AI is crucial for safeguarding digital assets.

Session transcript

Moderator – Massimo Marioni:
AI’s role in securing the future. Dr. Helmut Reisinger, Chief Executive Officer, EMEA and LATAM, Palo Alto Networks. Ken Naumann, Chief Executive Officer, NetWitness. Sean Yang, Global Cybersecurity and Privacy Officer, Huawei. Massimo Marioni, Moderator, Europe Editor, Fortune. Hello everyone. Welcome to the panel titled AI’s role in securing the future. Now, in today’s world, where there are always new online dangers, we really need elite systems to warn us early about these risks. And technology is changing fast. That’s why AI has become super important in keeping us all safe online. This session is all about how AI can find and fix online security problems, and identify them before they cause great damage. So we’ll start off by asking Helmut: can you start by explaining how AI can be used to identify and mitigate cybersecurity vulnerabilities? And can you tell us about any cool ways that’s already been done?

Helmut Reisinger:
Yeah. Good afternoon, everybody. As-salamu alaykum. I am representing Palo Alto Networks. We are a cybersecurity specialist. And just to give you one number, we are detecting every day 1.5 million new attacks that have not been there before. Newly individually identifiable attacks. This cannot be done by humans. So AI is part of the solution. And we have been doing AI machine learning for more than eight years now. We did not start when ChatGPT, the generative AI, was announced. And it’s built across our different platforms. And why is that important? Because we believe that the threat landscape that you are facing here in the kingdom, in the region, but also globally, and this has been shared since the morning, is actually exponentially growing. And AI brings three dimensions to it. It’s gonna be more speedy, or it allows for more speed on the attack side. It allows for more scale. Ransomware as a service. Now you can even program it and get scale and speed. And it will allow for an even higher sophistication if you think about social engineering. And take this together with the ingredients that drive the threat landscape that is exposing you as public organizations, as enterprises here in the kingdom: geopolitics is a driver. A driver is your supercritical infrastructure that you have here supplying energy to the world. It’s the AI and digital transformation that you’re having. And with that, we believe you need to leverage AI on that. And how do we do that? We combine telemetry data of security from firewalls, networks, the cloud assets, and we provide it then into a security operations center solution that we provide. And that gives an outcome based on AI, which is basically 10 seconds mean time to detect and one minute mean time to repair. Because the topic is that the speed, the time between infiltration of an organization and exfiltration of data, is shrinking.
It was about 40, I think I heard it in the morning as well, somebody said it was two months that in the OT infrastructure people were wandering around. It was about 40 days in 2021. It’s been five days last year. And with AI, it’s gonna be a matter of hours. So in a nutshell, AI enables what we believe is the future, which is real-time cybersecurity and highly automated cybersecurity. Because we human beings, we cannot deal with all of that at the same time, in a borderless space.

Moderator – Massimo Marioni:
So AI can identify and nip these risks in the bud before they cause damage. Ken, on the flip side, what are some of the common tactics used to manipulate or poison AI systems that we need to be aware of?

Ken Naumann:
Yeah, I think many of the techniques being used now are really not that different from the typical techniques that everyday hackers use, right? These criminal organizations, or nation-states that are pointed in the wrong direction, are coming up with ways to access… Sorry, that’s a drone that’s going around. That came up in the last panel I did. Try to ignore it. Coming up with ways to access AI and poison the data: creating situations where AI starts to hallucinate, starts to actually act as a bad actor within an organization’s environment. And once that gets out into the wild, it’s really hard to bring back in. So as these organizations become more sophisticated and are able to access the data, control the AI, and manipulate these models, you are going to start to see AI that was deployed for the benefit of an organization take on a life of its own and actually turn against that organization. And hackers are working on that today.

Moderator – Massimo Marioni:
Now, looking ahead, what do you see as the future of AI in making cyberspace safer, Helmut?

Helmut Reisinger:
Well, if I take into account… By the way, that’s a good example here. It’s a very noisy drone. That’s easily identifiable. If you have digital threats, they are not as easily identifiable. And this is why what we at Palo Alto always do is cross-correlate with machine learning and AI. What do we cross-correlate? We cross-correlate telemetry data for cybersecurity, as I said, across firewalls, networks, cloud assets, and endpoints. And we cross-correlate the behavior, the user identity, the device identity, and the application. And out of this cross-correlation, which you need to do with machine learning and AI, you can apply the right models, and then you come to the outcomes of 10 seconds mean time to detect and one minute mean time to repair. So this cross-correlation is critical. And what we see, and I think this is a challenge for the whole of the cybersecurity industry that we are all representing here, is that today’s systems, or the industry itself, are very fragmented. There are 3,500 technology providers out there. On average, a medium to large enterprise in the kingdom, in Germany, in the United States is using between 20 to 30 different tools to protect its digital assets. But they don’t talk to each other. This is why we fundamentally believe, and Gartner is also saying this, that we need to help you on a modular basis to consolidate your security estate, so that you have end-to-end security in whichever cloud you have your workloads, and also from code to cloud. We heard the CEO of Aramco speaking about the importance of OT, and there’s a lot of code being created. The problem is that 80% of the code created in the world, also here in the kingdom, is using open-source components. Now, if one of these open-source libraries contains malware, you have a big snowball effect. And again here: identity, device, application, and behavior, cross-correlated with AI. This is the way to sort it.

Moderator – Massimo Marioni:
Sean, I can see you at the end there. Building a pool of talent is a key factor for progressing AI. So what kind of classes or training do people need to prepare for AI in the workplace, especially when AI keeps changing at such a rapid rate?

Sean Yang:
Yeah, thanks for asking. I think in recent days AI has suddenly gotten very hot. Every country is starting to work on AI and AI-related security, and I would like to see the GCF, and all the people working here, trying to improve the international consensus on AI governance. But if we’re talking about the real classes, to answer your question, first we need to think about what kind of structure we need to build. For AI governance we need different roles. Just as one of the speakers mentioned that cybersecurity is a team sport, the same is true for AI. We identify five roles. The first is the AI user, like the enterprise or anyone who applies AI to their product, their production, or their daily enterprise operations. The second is the technical vendors or AI providers. Then there are the government regulators, the third-party certification bodies, and also the public, because eventually AI’s applications will significantly impact their lives. Once we identify these multiple stakeholders, each of them needs to take their responsibility, and each needs different training and different awareness. Recently I found a very interesting thing. Two weeks ago we had a discussion at Singapore International Cyber Week about talent, and we said we have an over-knowledged but unskilled workforce, which means that getting knowledge is now very easy, but how to apply that knowledge in practice is the challenge. So from this point of view, I would like to see us fill three gaps. The first is how we can significantly improve decision makers’ awareness. If we’re talking about governance, it normally goes from the top down, which means the senior executives who decide the strategy and the policies need to have awareness about AI.
They may not need to know all the details, but they need to know what kind of risk lies behind AI’s applications. The second is the working level. A lot of the situation is pretty similar to cybersecurity. As Ken just mentioned, things like open-source software: to address these supply chain security issues, we need to review the traditional concepts like software engineering, security engineering, and data engineering. These are pretty traditional ideas, but now we have AI, so we have to review them and put a lot of new meaning and new concepts inside, which can consolidate the cornerstone of the basic abilities at the working or technology level to support the fast growth of digital transformation and AI applications. The third is training inside universities. Huawei worked together with 79 universities in China, and we figured out that a lot of universities are using very old textbooks. So we work together with 11 top universities to share our practice on software engineering capabilities with them, and in this way we train the trainers, the young teachers, as well as the young graduates, so that once they finish graduation they already understand the practice inside industry and can quickly catch up with it.

Moderator – Massimo Marioni:
Thank you. Now, another key issue is collaboration, not just in the workplace but across the world, and that’s a complex challenge. So can you explain how different countries can set rules for AI even when they’re not all necessarily aligned on how to use the technology? For instance, when ChatGPT first exploded, Italy wanted to ban it for a certain amount of time. So it’s a very complex challenge, and I’ve heard people say if you don’t have worldwide regulation over AI, you’ve got no regulation. Ken, do you agree with that sentiment?

Ken Naumann:
I do agree with that. You know, the adoption of generative AI has surprised me considerably over the last year, year and a half. And my belief is that the regulations, whether they’re on a country basis or on a worldwide basis, are going to be playing catch-up for the future, and I don’t think we’re ever going to totally catch up by coming up with a comprehensive set of regulations or standards. Things that I think we can do are things like what we’re doing today, where we’re sharing information, we’re sharing ideas, and I think the GCF has done a big service to the entire cyber community. Other things we can do are come up with standards as a community, not necessarily trying to get governments to cooperate with one another, but as a community of cyber professionals, on what the role of AI should be as it relates to cyber: standards around modeling, standards around data, the ability to share data across country borders, and coming up with safe and effective ways to do that. I think that’s going to be a big step in the right direction. And ultimately, the more data that can be shared, honestly and securely, the more likely we are going to be able to catch up with any bad use of the technology.

Moderator – Massimo Marioni:
Yeah. Helmut, what’s your take?

Helmut Reisinger:
Well, first of all, cybersecurity is a universal topic, because digitalization is happening everywhere, notably also here in the kingdom. On the other hand, we should not dream; we should be realistic that we will not have one standard across the globe tomorrow, which means we need to respect different ecosystems of digital-space regulation or cybersecurity regulation. For example, Sean is coming from Shenzhen. We at Palo Alto, our SASE solution is fully compliant as well for China-active businesses, which means if a German company active in China needs the same security across the globe, in Saudi Arabia, in China, as well as, for example, in Brazil, they get one standard, but it fits the local regulation that is needed. We need to adapt to that; that’s what we need to respect. Same here: you have a specific governance on data and where data needs to be stored in the kingdom. That’s where we need to simply adapt. On the other hand, I believe that some theaters in the world are setting the pace. We heard this morning from Barroso: Europe was probably setting the pace with GDPR. Europe was also quite fast when it comes to AI, talking about unacceptable-risk AI, sensitive AI, foundational-model AI, and then basically risk-free AI. Now, this week the U.S. has also issued the first executive order on AI. This will help to set the scene, to get the discussion going, and to get to a better level. And of course, AI regulation is kind of needed, because there is a big potential of using it for the dark side of the world, against your industry, your enterprise, your public-sector services that you want to provide. And I can see it also in Europe. You know, about one year ago President Biden issued an executive order as well on attack surface management, ASM, that is, you step outside and look into an enterprise: what are your risk areas and vulnerabilities?
And he forced every entity of the federal government of the U.S. to do an attack surface analysis every seven days. This is by far not the standard in Europe, but the closer you get to the Ukraine border, I can tell you, the Baltics, you heard the lady from Estonia this morning, the more alertness you have on that. So I think this will move step by step toward a standard, and I think the world will converge on that step by step. But again, let’s not dream; let’s be realistic.

Moderator – Massimo Marioni:
Now, you’re all senior leaders within your companies. What do you think are the most important things for leaders to think about when they’re making rules, in order to strike a balance between promoting innovation and safeguarding against potential risks? I’ll give you all a chance to answer here. So let’s start with Sean.

Sean Yang:
Thank you for the question. First of all, we need to say AI is not a product; AI is a general enabling technology. If you compare it with the last round of industrial revolution, like computer science, it changed everything, right? So from this point of view, if we’re talking about rules and governance, we probably don’t need to focus on the AI technology itself, but need to think about how we can build the rule structures and governance structures to manage the scenarios or the products. If we’re not talking about the application scenarios, then talking about AI governance has no meaning, because AI technology is evolving, right? It is changing, and if you base governance on a changing technology, sometimes it cannot generate anything concrete. That’s number one. Number two, we are facing a lot of challenges generated by AI, but first of all we need to say that AI eventually will support and serve people, so it is a people-oriented technology. The governance and rules therefore first of all need to improve the applications. That’s the reason why we created this kind of internal governance, which defines the intention, the principles, the scenarios, the products, and how we apply the technology inside a solution or a business situation. Whether it’s security by design, security by default, or security by operations, we need to pay attention to the overall lifecycle management of AI applications. That can bring more concrete meaning to AI governance.

Moderator – Massimo Marioni:
Ken, what’s the number one thing for leaders to think about when implementing AI?

Ken Naumann:
I think there’s a big decision coming up for technology leaders, especially developers of software in cyberspace. And that decision is when you turn over more and more responsibility to the AI technology. When does that shift happen? Right now, or in the immediate future, AI can serve as a very good co-pilot, but when does it actually become the pilot? I think it’s up to us as leaders of the organizations that are innovating around this technology to make that determination in a way that, A, is going to be safe for the people who adopt the technology; B, is honest, in terms of being able to recognize what the current state of evolution is around AI; and C, is done in a way that’s going to protect the people who are currently doing those jobs. And there’s a bit of a push-pull in the industry right now. Some people think that AI technology is going to take over the analyst role in a SOC within the next three to five years. Other people think that the steps that need to be taken before that happens need to be very measured, and that it needs to happen over a much more elongated period of time. The other thing I would bring up is: what is the role of a cybersecurity specialist ultimately going to be? Is it going to be protecting the environment, or protecting AI? To me, there is going to be a lot of process and procedures in terms of how you go about doing that, what technologies you use to do that, and making sure that we put all the building blocks in place before we ultimately turn over our security future to machines.

Moderator – Massimo Marioni:
Very well said. Helmut, last word, one minute.

Helmut Reisinger:
Well, if it’s true that, remember, the time between infiltration and exfiltration is shrinking heavily, the world will need to have real-time and autonomous, autonomous meaning highly automated, cybersecurity solutions, and that does not come without AI. It’s a prerequisite. So if this is a prerequisite, the innovation will be: how can we make the best use of AI? That is only possible if you have good data. So if you want to come to an outcome of 10 seconds mean time to detect and one minute mean time to remediate, you need to have the right telemetry data. Remember: device ID, as well as the endpoint telemetry data, and from the cloud, and then apply those algorithms. And I think policymakers are very well advised to give space and oxygen to the AI space, while on the other hand being aware and cognizant that AI is also a driver on the attacker side. Just to give you one final number: in the first seven months since ChatGPT was launched, our market-leading Unit 42, the threat intelligence unit that we have, noticed a 910% increase in fake/vulnerable ChatGPT-like websites being created as a trap for people and the public. So it’s important for societies, for enterprises, and for public organizations. I think AI without cybersecurity, or cybersecurity without AI, vice versa, will not work if we want to keep your digital assets safe in a real-time and autonomous cybersecurity version.

Moderator – Massimo Marioni:
Thank you very much. That wraps up our panel, everyone. There’s a 10-minute break before the start of the next panel. So there we go.

Helmut Reisinger

Speech speed

176 words per minute

Speech length

1607 words

Speech time

548 secs

Ken Naumann

Speech speed

165 words per minute

Speech length

796 words

Speech time

289 secs

Moderator – Massimo Marioni

Speech speed

160 words per minute

Speech length

547 words

Speech time

205 secs

Sean Yang

Speech speed

174 words per minute

Speech length

1030 words

Speech time

354 secs