Emerging Shadows: Unmasking Cyber Threats of Generative AI

2 Nov 2023 13:20h - 13:55h UTC


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Richard Watson

AI development has advanced rapidly, making IT faster and more accessible to individuals and organizations alike. However, this rapid progress has also raised concerns about the threats that come with AI technology.

One of the primary concerns is the potential for AI to make malware more convincing and to enable the creation of deepfakes. Malicious actors can leverage AI-powered techniques to create sophisticated, realistic cyber threats that pose significant risks to individuals and businesses. Deepfakes, in particular, can undermine trust and integrity by manipulating and fabricating audio and video content.

Businesses are incorporating AI into their operations faster than they can govern and monitor it, an effect Watson described as putting shadow IT "on steroids". The gap between business use of AI and the ability of IT and cybersecurity teams to manage it creates vulnerabilities, including the hijacking of AI and privacy risks. Data poisoning is a specific concern: deliberately manipulating the datasets used in AI models can trigger adverse effects in critical business processes.

Governance and risk management frameworks need to be updated to handle the complexities of AI in business settings. Organizations must address the unique challenges AI poses for privacy, accountability, and ethics. The integrity of the data used to train AI models is also crucial: models are only as good as their training data, and biases or errors in that data produce flawed, unreliable results.

Establishing trust in AI models is also vital. World Economic Forum research cited in the session found that four in ten adults are worried by AI-powered products, and that half trust companies that use AI less than companies that do not. Watson described EY's "confidence index", which scores AI-enabled processes on roughly seven or eight variables, including explainability, data privacy, and bias, as one way of building this trust.
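
As a purely illustrative sketch (not EY's actual confidence index, whose variables and weights are not published in this session), a trust score of the kind described could be a weighted aggregate of factors such as explainability, data privacy, and bias; the factor names, weights, and threshold below are assumptions.

```python
# Hypothetical illustration of an AI "trust score": a weighted aggregate of
# risk factors like those mentioned in the session. The factors and weights
# are assumptions for illustration, not a published methodology.

FACTOR_WEIGHTS = {
    "explainability": 0.25,   # can the model's decisions be explained?
    "data_privacy": 0.25,     # is training/inference data handled lawfully?
    "bias": 0.20,             # measured fairness across relevant groups
    "data_integrity": 0.15,   # provenance and poisoning controls on training data
    "human_oversight": 0.15,  # is there a review or override path?
}

def trust_score(ratings: dict[str, float]) -> float:
    """Combine per-factor ratings (0.0 = poor, 1.0 = strong) into one score."""
    missing = set(FACTOR_WEIGHTS) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    return sum(FACTOR_WEIGHTS[f] * ratings[f] for f in FACTOR_WEIGHTS)

if __name__ == "__main__":
    example = {
        "explainability": 0.4,
        "data_privacy": 0.8,
        "bias": 0.6,
        "data_integrity": 0.7,
        "human_oversight": 0.9,
    }
    score = trust_score(example)
    print(f"trust score: {score:.2f}")  # e.g. flag processes below 0.6 for extra review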

Furthermore, there are concerns about surrendering control to AI technology due to its immense knowledge and fast assimilation of new information. People worry about the potential misuse of AI in areas such as warfare and crime. Policy measures, such as President Biden’s executive order, have been introduced to address these risks and manage the responsible use of AI.

The field of AI and cybersecurity faces a significant talent gap. The demand for skilled professionals in these areas far exceeds the available supply. This talent gap presents a challenge in effectively addressing the complex cybersecurity threats posed by AI.

To tackle these challenges, organizations should create clear strategies and collaborate globally. Learning from global forums and collaborations can help shape effective strategies to address the risks and enhance cybersecurity practices. Organizations must take proactive steps and not wait for perfect conditions or complete knowledge to act. Waiting can result in missed opportunities to protect against the risks associated with AI.

Integrating AI into defence is necessary to combat the sharply rising volume of phishing attacks, where AI can play a crucial role in detection and prevention. However, operating models must be transformed so that AI is used effectively: for lower-risk cases such as phishing emails, handling should become a closed-loop activity that does not necessarily end with a human raising and working a ticket.
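
A minimal sketch of what such a risk-based, closed-loop triage step might look like is below; the classifier, thresholds, and actions are hypothetical placeholders rather than any specific vendor's workflow.

```python
# Hypothetical closed-loop phishing triage: high-confidence phishing is
# quarantined automatically, clear non-phishing is delivered, and only the
# ambiguous middle band raises a ticket for a human analyst.

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def phishing_score(email: Email) -> float:
    """Placeholder scorer; a real deployment would use a trained model or an
    external detection service instead of these toy keyword heuristics."""
    suspicious = ("verify your account", "urgent payment", "click here", "password")
    hits = sum(kw in email.body.lower() for kw in suspicious)
    return min(1.0, 0.3 * hits)

def triage(email: Email, auto_block: float = 0.8, auto_dismiss: float = 0.2) -> str:
    score = phishing_score(email)
    if score >= auto_block:
        return "quarantine"        # closed loop: no ticket, no human
    if score <= auto_dismiss:
        return "deliver"           # closed loop: released to the mailbox
    return "escalate_to_analyst"   # only the grey zone consumes analyst time

if __name__ == "__main__":
    msg = Email("it-support@example.com", "Action required",
                "Urgent payment overdue. Click here to verify your account password.")
    print(triage(msg))  # -> quarantine
```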

Deployed defensively, AI and generative AI can frustrate criminals and raise the cost of their activities. Criminal operations have business cases and metrics of their own; putting AI on the other end of, for example, a call centre scam disrupts those metrics and undermines the scam's cost-effectiveness.

In conclusion, while AI development has brought significant advancements and accessibility to IT, there are numerous challenges and risks associated with its use. These include more convincing cyber threats, governance and monitoring gaps, data integrity, trust-building, talent shortages, concerns about control, and the potential misuse of AI. Organizations must address these challenges, develop effective strategies, collaborate globally, and integrate AI into their own defences to ensure cybersecurity and the responsible use of AI technology.

Dr. Yazeed Alabdulkarim

The analysis highlights the escalating threat of cyber attacks and the strain on cybersecurity defences: a study cited in the session found that 94% of companies have experienced a cyber attack, and in 2023 adversaries are accelerating faster than defenders can keep up. Cybercriminal operations are adopting Software-as-a-Service (SaaS) style models and leveraging automation to scale their attacks. The availability of Malware-as-a-Service in the cybercrime economy further strengthens their ability to carry out attacks at greater volume and speed.

Generative AI is identified as a potential contributor to the intensification of this situation. It could be used to create self-adaptive malware that evades detection systems and to assemble knowledge useful for physical attacks, putting capabilities once limited to state actors within reach of non-state actors. This raises concerns about the future impact of generative AI on cybersecurity.

There are differing stances on the regulation of generative AI. Some argue for limiting its use, citing survey findings that about 85% of security officers attribute the 2023 rise in cyber attacks to generative AI. Others advocate using generative AI for defence and against its nefarious uses, arguing that considering threat actors and designing around the attack surface can turn the technology to defensive advantage.

Disinformation is identified as a significant issue associated with Generative AI. The ability of Generative AI to generate realistic fake content raises concerns about the spread of disinformation and its potential consequences.

On a positive note, generative AI can be used to analyse and respond to security alerts; a study cited in the session found that only 48% of security alerts are currently investigated. Employing generative AI here can help defensive measures keep pace with the increasing speed of attacks, and limiting the use of AI in cybersecurity would therefore be counterproductive: AI can fully analyse security alerts and help close the two-speed race between attackers and defenders.
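
As a rough sketch of this alert-analysis use case, an alert could be summarised into a prompt and handed to a language model for first-pass triage. The `llm_complete` callable below is a stand-in for whatever model API an organisation actually uses, so the interface, prompt structure, and severity scale shown are assumptions.

```python
# Hypothetical first-pass triage of a security alert with a language model.
# `llm_complete` is an injected stand-in for a real model API.

import json
from typing import Callable

def build_triage_prompt(alert: dict) -> str:
    return (
        "You are assisting a security operations centre.\n"
        "Given the alert below, reply with JSON containing: "
        '"severity" (low/medium/high), "likely_technique", and "recommended_action".\n\n'
        f"Alert:\n{json.dumps(alert, indent=2)}"
    )

def triage_alert(alert: dict, llm_complete: Callable[[str], str]) -> dict:
    raw = llm_complete(build_triage_prompt(alert))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Never auto-act on unparseable model output; fall back to a human queue.
        return {"severity": "unknown", "recommended_action": "escalate_to_analyst"}

if __name__ == "__main__":
    def fake_llm(prompt: str) -> str:  # stub so the sketch runs without any real API
        return json.dumps({"severity": "high",
                           "likely_technique": "credential stuffing",
                           "recommended_action": "lock account and reset credentials"})

    alert = {"source": "auth-gateway", "event": "50 failed logins in 60s", "user": "svc-backup"}
    print(triage_alert(alert, fake_llm))
```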

The analysis also highlights the incorporation of AI elements in emerging technologies. It is predicted that upcoming technologies will incorporate AI components, indicating the widespread influence of AI. However, there are concerns that fundamental threats associated with AI will also be present in these emerging technologies.

Understanding how AI models operate is emphasised as important: because current models are largely black boxes prone to hallucination, explainable AI is seen as crucial for addressing concerns and building trust in the technology.

Watermarking of AI output is proposed as a way to distinguish real content from fake. Voluntary watermarking by AI companies is viewed as insufficient on its own; authorities should also establish their own watermarking so that content from reliable sources can be verified, and anything else treated as potential misinformation or deepfake material.
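
Production watermarking of model output is typically statistical and embedded in the generated tokens themselves, which is beyond the scope of a short sketch. The toy example below only illustrates the simpler "authority-issued provenance mark" idea raised in the session, using a keyed signature; the single in-memory key is a deliberate simplification and an assumption, not a recommended design.

```python
# Toy illustration of an authority-issued provenance mark on published content.
# A trusted issuer attaches a tag so recipients can check the source; this is
# not how statistical AI-output watermarking works, only an analogy for
# verifiable provenance. Key handling here is deliberately simplified.

import hmac
import hashlib

SECRET_KEY = b"issuer-secret-key"  # in practice: an HSM-held or asymmetric signing key

def mark(content: str) -> str:
    return hmac.new(SECRET_KEY, content.encode("utf-8"), hashlib.sha256).hexdigest()

def verify(content: str, tag: str) -> bool:
    expected = hmac.new(SECRET_KEY, content.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    statement = "Official advisory: patch the affected service on all internet-facing hosts."
    tag = mark(statement)
    print(verify(statement, tag))                # True: content matches the issued mark
    print(verify(statement + " (edited)", tag))  # False: tampered content fails verification
```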

In conclusion, the analysis reveals the growing threat of cyber attacks and the need for stronger cybersecurity defenses. The impact of Generative AI on this situation is a subject of concern, with its potential to intensify attacks and contribute to the spread of disinformation. The regulation and use of Generative AI are topics of debate, with arguments made for limitations as well as for leveraging it in defense and combating nefarious activities. The incorporation of AI elements in emerging technologies raises both opportunities and concerns, while the understanding of AI models and the need for explainable AI should not be overlooked. Finally, watermarking on AI output has the potential to differentiate real content from fake and enhance reliability.

Dr. Victoria Baines

Data poisoning and technology evolution have emerged as significant concerns in cybersecurity. Data poisoning refers to the deliberate manipulation of training data so that a model produces outputs that deviate from the intended results; it can be an insidious, slow-burn attack that gradually corrupts the learning process of machine learning models. Influence operations by states or state-sponsored groups have worked in a similar way, seeding skewed information not to achieve an immediate outcome but to sow discord.
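
To make the mechanism concrete, here is a minimal, self-contained sketch (using scikit-learn on synthetic data, not any real-world dataset) of label flipping, one simple form of training-data poisoning, and its effect on model accuracy.

```python
# Label-flipping poisoning demo on synthetic data: train one model on clean
# labels and one on partially flipped labels, then compare test accuracy.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 20% of the training labels by flipping them.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```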

The rapid evolution of technology, particularly in artificial intelligence (AI), has created new opportunities for cybercriminals to exploit. AI has led to the replacement of humans with non-human agents in various domains, causing disruptions and potential threats. People have found ways to make bots go bad, and large language models have been repurposed for writing malware. This highlights the need for vigilance in harnessing technological advancements, as they can be exploited for malicious purposes.

The emergence of AI has also driven an evolution of cyber threats. New methods have appeared, such as gaming the safeguards of AI models and repurposing large language models, sold on dark-web forums, to write malware. The cybercriminal ecosystem itself may change as a result of AI advances, necessitating proactive measures to counter these evolving threats.

However, not all is bleak in the world of cybersecurity. AI and automation can play a vital role in alleviating the scale and stress issues faced by human operators. The current volume of alerts and red flags is already overwhelming for human teams, and a 2019 survey of UK and US C-suite cybersecurity executives found that 70% reported moderate to high stress levels. AI can help scale responses and relieve human operators from burnout, enabling them to focus on tasks they are proficient in, such as threat hunting.

It is worth noting that public perception of AI is often shaped by dystopian depictions in popular culture. The portrayal of AI in science fiction and dystopian narratives tends to create a negative perception. Interestingly, people are more inclined to show positivity towards “chatbots” rather than “Artificial Intelligence”. This demonstrates the influence of popular culture in shaping public opinion and highlights the need for accurate and balanced representation of AI in media.

In conclusion, data poisoning and technology evolution present significant challenges in the field of cybersecurity. The deliberate manipulation of training data and the exploitation of rapid technological advancements pose threats to the integrity and security of systems. However, AI and automation offer promising solutions to address scalability and stress-related issues, allowing human operators to focus on their core competencies. Moreover, it is important to educate the public about AI beyond dystopian depictions to foster a more balanced understanding of its potential and limitations.

Alexandra Topalian

A panel discussion was held to examine the cyber threats and opportunities presented by generative AI in the context of cybersecurity. The panel consisted of Richard Watson, Global Cyber Security Leader at EY; Professor Victoria Baines, Independent Cyber Security Researcher; Kevin Brown, Chief Operating Officer at NCC Group plc; and Dr. Yazeed Alabdulkarim, Chief Scientist of Emerging Technologies at SITE. Throughout the discussion, the participants highlighted the potential risks associated with the use of artificial intelligence, specifically generative AI, in the cyber world.

One of the key points discussed during the panel was the emergence of new cyber threats arising from AI. Richard Watson stressed the importance of identifying these risks as generative AI produces many types of content, such as visuals, text, and audio. The panelists also acknowledged the potential danger of data poisoning in relation to generative AI.

Professor Baines echoed Watson’s concerns about data poisoning, emphasising its significance in her research. She also delved into the evolving nature of cyber crimes as new technologies, like generative AI, continue to advance. The panelists then proceeded to explore how cyber criminals can exploit generative AI to develop more sophisticated and elusive cyber threats. They highlighted the potential convergence of generative AI with social engineering tactics, such as phishing, and how this combination could amplify the effectiveness of manipulative attacks.

Dr. Yazeed Alabdulkarim shed light on the scale of cybersecurity attacks and the impact of generative AI. He stressed the need for regulation and shared insights on how SITE advises organizations on staying ahead of cyber threats. The panelists discussed the challenges, including a talent gap, associated with implementing effective strategies for early detection and management of cyber threats, and Kevin Brown shared real-life examples of how organizations tackle them.

The threat of deepfakes, where AI-generated content is used to manipulate or fabricate media, was another topic explored during the panel. The participants discussed strategies for addressing this type of threat, with a focus on early detection. They also touched on the ethical boundaries of retaliating against cyber attackers based on psychological profiling, highlighting the importance of complying with the law.

Regarding opportunities, the panelists agreed that generative AI offers benefits in the field of data protection and cybersecurity. Professor Baines emphasized the potential positive aspects of generative AI, highlighting opportunities for enhanced cybersecurity and protection of sensitive information.

In conclusion, the panelists acknowledged the lasting impact of generative AI on the landscape of emerging technologies and its growing influence on cybersecurity. They recognized the advantages and challenges brought about by generative AI in the field. The discussion underscored the need for effective regulations, risk management approaches, and cybersecurity strategies to address the evolving cyber threats posed by generative AI.

Kevin Brown

Generative AI, a powerful technology with many applications, is now being used for criminal activity, raising concerns about its impact on cybersecurity and criminal behaviour. One key concern is that generative AI lowers the barrier to entry: criminals no longer need the technical background, tooling, and motivation previously required, which makes cybercrime harder for law enforcement agencies and organizations to prevent and mitigate.

Another major concern is that criminals have an advantage over organizations when it comes to adopting new AI technologies. Criminals can quickly launch and utilize new AI technologies without having to consider the regulatory and legal aspects that organizations are bound by. This first-mover advantage allows criminals to stay one step ahead and exploit AI technologies for their nefarious activities.

The emergence of technologies like deepfakes has also brought in a new wave of potential cyber threats. Deepfakes, which are manipulated or fabricated videos or images, have become more accessible and can be utilized in harmful ways. This poses a significant risk to individuals and organizations, as deepfakes can be used for social engineering attacks and to manipulate public opinion or spread misinformation.

Moreover, the use of large language models in artificial intelligence has raised concerns about data poisoning. Large language models can be manipulated and poisoned for a range of motivations beyond financial gain. This threatens the integrity and reliability of AI systems, as attackers can exploit vulnerabilities in the data used to train these models.

Additionally, generative AI has the potential to amplify the effectiveness of phishing and manipulative attacks. By using generative AI, criminals can increase the volume and quality of phishing attempts. This allows them to create phishing messages that are highly professional, relevant, and tailored to the targeted individual or business. As a result, generative AI professionalizes phishing, making it more difficult for individuals and organizations to detect and protect themselves against such attacks.

In conclusion, the increased use of generative AI for criminal activities has raised significant concerns about cybersecurity and criminal behavior. The technology has lowered the barrier for criminals to exploit it, giving them an advantage over organizations in adopting new AI technologies. Furthermore, the accessibility of technologies like deepfakes and the potential for data poisoning in large language models have added to the complexity of the cybersecurity landscape. Additionally, generative AI has the potential to amplify the effectiveness of phishing and manipulative attacks, making it harder to detect and defend against such cyber threats. It is crucial for policymakers, law enforcement agencies, and organizations to address these concerns and develop strategies to mitigate the negative impacts of generative AI on cybersecurity.

Session transcript

Panel: Richard Watson, Global Cyber Security Leader, EY; Dr. Yazeed Alabdulkarim, Chief Scientist, Emerging Technologies, SITE; Professor Victoria Baines, Independent Cyber Security Researcher; Kevin Brown, Chief Operating Officer, NCC Group plc. Moderator: Alexandra Topalian.

Alexandra Topalian:
Good afternoon everyone and welcome to this panel discussion on a very hot topic: Unmasking Cyber Threats of Generative AI. As we launch into a new era of technology, generative AI is producing different types of content: it is visual, it is text, it is audio. And so we are here today to discuss the threats, but also the opportunities, of generative AI for cyber security. So Richard, let's start with you since you are the closest to me. As you assist your EY clients in identifying the cyber risks that they face, what are some of the new cyber threats that are created by artificial intelligence?

Richard Watson:
Thanks Alex. AI is moving so quickly. Its rapid development has kind of democratized IT to some extent. And so a lot has been made around the threats: things around the velocity of AI, and particularly, when the technology gets into the hands of adversaries, how authentic malware and deepfakes can become, and so on. But one of the risks we've really been focused on at EY is just how quickly it moves from an organizational perspective. We've long known about this phenomenon of shadow IT. Well, AI almost puts shadow IT on steroids. And so what we're actually seeing is the business using AI every day, but the organization struggling to keep up with how to monitor that. You're getting a gap between business use of AI and how IT and cyber security can manage and monitor that. And as a result, you get all sorts of threats around things like data poisoning, around the hijacking of AI, and obviously the privacy risks and so on that creates. But really the challenge for organizations is how do you update your governance and your risk management to deal with the business's use of AI and some of the risks that creates for the organization.

Alexandra Topalian:
And what are some of these risks, if you can give like some more detailed examples?

Richard Watson:
Yeah, well, data poisoning being the first one. Obviously, AI models are only as good as the data used to train them. And increasingly, business processes are being built around that, things like next best action in a call center, or how to respond to certain threats in cyber security defense. If prompts are deliberately targeted to kind of poison the data, it can create adverse business reactions. So, you know, cyber security is about confidentiality, integrity and availability of data, and really this issue is around managing the integrity of data and then the consequential actions on business processes, which we're increasingly going to become reliant on as organizations automate their business processes with AI.

Alexandra Topalian:
Professor Baines, I saw you nodding. Do you also feel in your research, have you noticed that data is being poisoned?

Dr. Victoria Baines:
It’s certainly something that we are alerted to. I mean, data poisoning can be quite a slow burn attack in the sense of if you’re seeding skewed data, it might take a bit of time to come out in adverse outcomes. But if we think about influence operations over the last few years, some of those have been targeted, say by nation states or state sponsored groups, not necessarily to get an immediate outcome to vote for a particular candidate or political party, but to sow discord in a community. So that general sense of there being an adverse outcome for a particular group in society that has almost an indirect effect, just kind of disruption as much as anything. I mean, for me, artificial intelligence and the threats attached to generative AI, it’s also about just thinking in terms of what happens when we replace humans with non-human agents in a business. And there are a number of constants, I would say, when I do my futures work, and it’s based on a certain amount of time in law enforcement surrounded by badness. That is that over thousands of years, we know that there will always be people who want to harm other people and other people’s assets. And we know that technology is evolving at such an incredibly rapid rate. So those people will make use of the technology available to them. So yes, we’ve seen people trying to make bots go bad. And we’ve seen large language models like ChatGPT implementing safeguards so that you can’t write malware with ChatGPT, for instance. But interestingly, what we’ve seen spring out of that is people gaming that, repurposing large language models, selling them on the dark web, on kind of dark forums, precisely so that you can write malware. So I think it’s worth kind of broadening out and thinking, rather than it just being how this will affect my business right now, how it will change the cyber criminal ecosystem as well.

Alexandra Topalian:
And as new technologies emerge, do you find that the nature of the crimes is changing in your research?

Dr. Victoria Baines:
Yeah, I mean, this is what makes my job so exciting. It changes daily, hourly, particularly with advances in large language models. I think they have outstripped our expectations, haven’t they? Generally speaking, it’s always my default to say, well, most of the time it’s just old wine in new bottles. It’s just a different kind of attack vector for the cyber crime that we’ve already seen. But I do think data poisoning there is the exception. It’s a new kind of threat to skew that training data so that it produces something other than we’re intending in our use. That’s a new one for me.

Alexandra Topalian:
Right. Thank you. Mr. Brown, what are some of the ways in which generative AI can be exploited by cyber criminals to develop more sophisticated and evasive cyber threats?

Kevin Brown:
First of all, good afternoon, everybody. I think some of the bits have been pulled out already. What generative AI has introduced is a far lower barrier of entry into criminal activity. Before, perhaps, you had to have the technical background, the tooling, and the motivation; now we're seeing generative AI being used by a far wider range of actors. So whilst we talk about sophistication, I think it's the ease of access that I'm certainly starting to see more of. I think we also talk about what's emerging, what is hidden. It's something that is directly in front of all of us, and that's first-mover advantage. Now, in a commercial world, if you're looking to launch a new product, you're always trying to get the edge on your competitor. And that's no different with criminals: they don't have the risk profile that organizations have. They don't have to be looking at the explainability of the artificial intelligence. They don't have to be looking at the legalities, the regulatory side. It's a case of: we've developed something, let's launch it. So unfortunately, sitting on the good side of the fence, we're always going to be slightly behind the curve from that perspective. Some of the other areas just to highlight, and perhaps we can come back to this: obviously, social engineering is one that comes to the forefront, as well as the professionalization of deepfakes. We've talked about deepfakes for many years, but again, they've now become far more accessible. And then clearly, we're into the LLMs, the large language models, and how they can be manipulated and poisoned. We've got used and accustomed to there being a financial motivation; in fact, what we're seeing through data poisoning is a far wider range of motivations. Some of them may be short-term, but given the amount of elections and political change going on around the world, there are certainly going to be some slow-burn ones that are already happening.

Alexandra Topalian:
And the potential convergence of generative AI with social engineering tactics, how is this fusion, how could it amplify the effectiveness of phishing and other manipulative attacks?

Kevin Brown:
First of all, I think it's a massive impact. Certainly through our threat intelligence team at NCC Group, we've seen over a 1,000% increase already. And I have to say it with a bit of a smile on my face, because all of the phishing training, the phishing awareness programs that we've rolled out to all of our colleagues, we're teaching them to spot the obvious. With previous phishing attempts, you would look for spelling mistakes, you'd look for grammatical errors. Well, actually, what generative AI has done is just professionalize that. So not only have you now got this increased throughput and volume, all of the training that we've educated our colleagues on, you're almost putting that to one side, because you're now confronted by emails which have a lot more relevance, a lot more professionalism. And generative AI is also enabling a lot more targeted spear phishing, so that you can really start to add context to the phishing emails. You can talk about the industry. You can really give relevance to the business without too much work. So I think it's a real game changer, certainly from what we've seen.

Alexandra Topalian:
Thank you, Mr. Brown. Dr. Yazid, welcome. What is the impact of generative AI related to the scale of cybersecurity attacks?

Dr. Yazeed Alabdulkarim:
So, assalamu alaikum. Good afternoon, everybody. To understand the scale of generative AI, first we have to consider the current state. If we look at the current state in 2023, adversaries are accelerating and defenders are not able to keep up. It's basically a two-speed race. To add to that, a research study shows that 94% of companies have experienced a cyber attack in one way or another. So what's happening is that just as technology is moving to SaaS, software-as-a-service offerings, the cyber crime world is doing the same. SaaS models are appearing in the cyber crime economy as well; for example, we see malware as a service offered in the cyber crime market. And to add to this, the automation of the technology is enabling threat actors to accelerate the speed and the volume of attacks. With generative AI, it's expected that the situation will become more difficult, because now attackers will have more means to automate and to generate more intelligent attacks. For example, you could have an adversary creating self-adaptive malware, and that malware will be able to circumvent and remain undetected by detection systems. Another threat of generative AI is the assembly of knowledge. With generative AI, you could assemble knowledge that can be utilized for physical attacks. Usually, such physical attacks are limited to state-level violent actors, but now even non-state violent actors will be able to acquire that knowledge to launch a similar attack. And if we consider these risks as well, surveys show that about 85% of security officers believe that the rise in cyber security attacks that we have seen in 2023 is because of the use of generative AI.

Alexandra Topalian:
Thank you, doctor. And as an advisor for SAIT, what can we do in regards to the regulations that are being implemented?

Dr. Yazeed Alabdulkarim:
Yeah, regulations are basically a controversial topic, because many believe that it's challenging to enforce the constraints, that it's basically wishful thinking. But if we look at the initiatives, there are the initiatives by the United Nations, which is forming a High-Level Advisory Body on AI. Similarly, we have seen the recent U.S. executive order on the safe, secure and trustworthy use and development of AI. But when you consider regulations, there are basically two approaches. One approach is to have regulations that limit the use of generative AI to prevent it from getting into the hands of bad actors. However, this approach will end up hurting the openness of the technology as well as withholding it from good users. So I believe the best way to combat generative AI threats is by using it for defense and to outperform adversaries. If you do that, you'll be aligning with the second objective of regulations: instead of limiting the technology, we should utilize it and use it for defense, and we should design based on the attack surface. For example, if we consider that one of the main issues of generative AI is disinformation, we should understand that threat and then try to come up with defense mechanisms to mitigate the related risks.

Alexandra Topalian:
And how would you go about outperforming?

Dr. Yazeed Alabdulkarim:
Basically, one example: one of the main challenges related to cybersecurity is responding to alerts. A recent research study shows that only 48% of security alerts are investigated. So one way is to use generative AI to fully analyze these security alerts, and not only analyze them but potentially also respond to them. That way you will be able to address the two-speed race that I mentioned. As the adversaries are speeding up, we should do the same. We should utilize that technology and not limit it. And there are many use cases that can be addressed, as I mentioned, like the security alerts.

Alexandra Topalian:
Okay. Thank you, doctor. Well, generative AI is here to stay, correct? Richard, how would you best advise your customers on how they should deal with their risk management approach?

Richard Watson:
Yeah, I think Dr. Yazeed used a key word there, which is trust, and I think establishing trust in AI models is going to be key. The World Economic Forum has done some of the most recent studies in this space, and they found that four out of ten adults admitted that AI-powered products worry them, and that 50% of adults wouldn't trust companies who use AI as much as they trust companies that don't. In other words, there's a huge amount of suspicion, a lack of trust in AI, and so it's really incumbent on organizations to address that. One of the things we've done at EY to help combat this is the notion of a confidence index. We've got our data scientists and our cyber security professionals together to create essentially a framework and an algorithm for, you know, how do you determine if a piece of AI is trustworthy or not? It looks at things like explainability, data privacy, bias and so on, about seven or eight different variables, to essentially give a trust score to a process that is using AI. And if you look at some of the proposed regulation, like the European Union AI Act, that seems to be the way regulation is going to go as well. It's going to be a risk-based approach, based on some profiling of AI, that determines how much testing you need to do and how much disclosure you need to do. So I think providing some metric that helps create this trust will be really key for organizations. And then secondly, they'll need to update their risk management processes, because it's a case of the business that's using AI for business purposes, the organizational responsibility functions, audits and so on, and then the operational functions that are actually using the AI and maintaining the models, all coming together to manage this. It's a bit like the issue we had where privacy, data governance and cyber security had to come together to manage data. We've got that again, but with slightly different stakeholders and axes to worry about.

Alexandra Topalian:
And then with this issue of trust, there is also a very negative connotation that's come with artificial intelligence. Why do you think that is?

Richard Watson:
Yeah, I mean, I think people are just staggered, as Victoria said, around how quickly and how comprehensive this technology has become. AI has obviously been around for sort of 10-15 years, but the generative AI aspect, which sort of burst onto the stage in November when Microsoft acquired OpenAI, has, I think, shocked people with just how lucid this technology is, just how much it knows and how quickly it can assimilate new information. And people just aren't ready to surrender that level of control to technology and are worried about it. And again, if you look at President Biden's executive order that came out on Monday, pretty much the second bullet is about managing the risk of AI use for biological weapons creation. So all of these big, nasty problems are sort of immediately associated with AI, and I think that worries people.

Alexandra Topalian:
Or having a plane flown without a pilot. But Professor Baines suggests that there can be opportunities, right, when it comes to cybersecurity and data protection. Tell us a little bit about how you perceive that.

Dr. Victoria Baines:
To both of your points about the rhetoric of this: when you use that term AI, artificial intelligence, we immediately think of popular culture, we immediately think of science fiction. You can count on one hand the positive, blue-sky representations of AI in science fiction. It's all very dystopian, isn't it? And we're kind of inculcated with that sense that it's all going to go horribly wrong. But if you were to say to people, how do you feel about chatbots, they'd probably be a lot more positive, and they're interacting with them as if they're dealing with a customer service agent, even though they know there might not be a person on the end of the chat message. In terms of opportunities, actually, I'd quite like to pick up on what you were talking about in terms of the scale of the problem and all of those alerts that go unmanaged, because I do a certain amount of research on burnout in cyber security. As we all know, there aren't enough people working in incident response, there aren't enough people working in security operations, and in 2019 Nominet ran a survey of UK and US C-suite cyber security executives, and 70% of them said they were suffering from moderate to high stress. And I think we all recognise, you were talking about the alerts that go unnoticed or the alerts that don't get worked: where we are at the moment, the scale of the red flags that we already have is too much for incident response teams, for security operations centres. If what we're saying is that the scale is going to increase exponentially, we absolutely need an automated response, a certain amount of automated defence and incident response. Not just because it makes sense for the increasing scale, but because that's how we make best use of the humans that we have on our teams. It's how we keep them from quitting their jobs and going to work in something else. It's how we preserve their mental health and well-being. And, dare I say it, as someone who has worked these queues in the past, it's how you give humans tasks that they are good at: the threat hunting, that sense of what doesn't feel or smell quite right, which so far machine learning and AI is not particularly good at.

Alexandra Topalian:
Hmm, interesting. We definitely do have a talent gap there. Kevin, how would you best advise organizations on the strategies that they could adopt for early detection and management of cyber threats?

Kevin Brown:
There's a couple of things, just to pick up on what Richard and Vicki have said as well. The first thing may seem obvious, but it's to do something. I've met a number of clients that are almost in a state of paralysis: AI has been around for years, generative AI comes along, and they don't actually know what to do. And I think if we look across the globe, and this is why GCF as a forum is perfect for having these open discussions, it just reinforces that people are not alone. So my first bit of advice is actually to have a clear strategy. It can be a really basic strategy, but it gives you a purpose as to how you're going to approach the topic. It doesn't have to be about sophistication. Coming back to His Excellency the Minister of Education yesterday, who I thought was particularly refreshing, I really liked the point where he was talking about: if we're waiting for all of the boxes to be ticked on the clipboard, we've missed it. We've got to go with a risk-based approach, and that's how I advise organizations: have a strategy based upon what you know. But I think the issue that is most pressing, as Vicki just mentioned, is the skills gap. The advances of AI have been amazing in the last few years, but has it closed the gap? Two sessions ago on this stage we were talking about a gap of five million. That says to me we're not using it. So it's really about understanding the strategy, leveraging colleagues from across the globe, forums such as this, to help you form your strategy, but most importantly, do something.

Alexandra Topalian:
How do your clients deal with that skills gap? Can you give us some real-life examples?

Kevin Brown:
I'll take a great example: phishing. I've mentioned an increase of a thousand percent, and it comes back perhaps to what Richard mentioned as well around trust. You speak to clients who are trying to run a SOC: the volume of phishing attacks has gone through the roof. They've got AI, but their operating model is still the same. The methodology that has been adopted is to take the AI, but ultimately it still raises a ticket and ends with a human. So as opposed to asking, what is the closed-loop activity, where am I quite happy to take a little bit of risk? Phishing emails are one that should just be a closed-loop activity; there doesn't necessarily need to be a human in there. So I work with a lot of clients to transform operating models, because it's around people, process and technology, and that has to be the starting point.

Alexandra Topalian:
And I just want to pick up on a point you mentioned about deepfakes. What strategies do you recommend for those sorts of threats?

Kevin Brown:
Again, it naturally depends what industry, what sector you're in. It comes back to the basics of social engineering and recognizing that you've got to have additional controls in place. It comes back to what, from a security industry perspective, has been spoken about for years: defense in depth. So if you've got someone on to one of your call center agents, that can't be the only line of defense in terms of verification, of whether it really is Mr. Brown on the other end of the phone. You're going to have to have other verification methods as well. But what I will say, and Yazeed as well had a great point, is we have to put AI and generative AI up against it, because it's going to frustrate the criminals. And the moment you start to frustrate, the moment you slow down, actually, we're now raising the barrier of entry, we're raising the risk profile, and the cost for criminals to commit the crime is now going through the roof, and that's the position that we need to get to.

Alexandra Topalian:
So do you mean retaliation?

Kevin Brown:
No, not even retaliation. It's actually slowing down their process, because criminals, yes, they're criminals, but they've still got investment cases, they've got business cases. You look at some of the call center scams: they've got metrics around how many calls they've got to make, how many people they aim to hook a day. And the moment you start to put AI and generative AI on the other end of that call, you've just blown their metrics and their business case, and all of a sudden the cost of being involved in this criminal activity has just multiplied by X times.

Alexandra Topalian:
Okay, because we did have a cyber psychologist here yesterday who was discussing the concept of fighting back and, you know, retaliating based on the profiling that you do of the cyber attacker, and that's something where there's a fine line between breaking the rules and breaking the law. All right, Dr. Yazeed, my last question to you as we're running out of time: what would be the impact of generative AI on the spectrum of emerging technologies?

Dr. Yazeed Alabdulkarim:
Basically, as you see, all the upcoming emerging technologies will have AI components in them. So what does that mean? All the fundamental threats that come from AI will be present in these emerging technologies. So we need to address them, or at least evaluate, as I mentioned, the attack surface of generative AI and try to address the fundamentals, so that when the emerging technologies come up, where they relate to generative AI, we are not starting from zero; we at least have an edge there. For example, one of the initiatives that is coming up is explainable AI, and that's very crucial, because one of the ways of addressing the concerns is knowing exactly how the model operates, to explain the outcomes that are coming. Unfortunately, we're not there yet. That's why, when you have a model, you have these hallucinations coming up, because it's basically a black box. So explainable AI should hopefully help to address these concerns as well. And just to add one point regarding deepfakes, as my colleague mentioned: we have seen recently that most of the AI companies have voluntarily proposed to put watermarking on their output, so you'll be able to know whether it's coming from the model or not. I don't believe this will be sufficient. What I believe is more important, and it's back to the defense point that I mentioned, is that authorities should have their own watermarking, and that will give the ability to know that something is coming from a reliable source. Otherwise, it's basically misinformation or something that is a deepfake.

Alexandra Topalian:
Right. Well, thank you very much, panelists, for being with us today. It's definitely something that's not going to be going away anytime soon, but I do see a lot of benefits to generative AI as well as the downsides. Ladies and gentlemen, please put your hands together for this Emerging Shadows panel on generative AI. Thank you.

Speech statistics

Alexandra Topalian: speech speed 151 words per minute; speech length 702 words; speech time 279 secs

Dr. Victoria Baines: speech speed 172 words per minute; speech length 915 words; speech time 319 secs

Dr. Yazeed Alabdulkarim: speech speed 151 words per minute; speech length 1053 words; speech time 420 secs

Kevin Brown: speech speed 188 words per minute; speech length 555 words; speech time 177 secs

Richard Watson: speech speed 188 words per minute; speech length 1792 words; speech time 573 secs