Emerging Shadows: Unmasking Cyber Threats of Generative AI
2 Nov 2023 13:20h - 13:55h UTC
Event report
Moderator:
- Alexandra Topalian
Speakers:
- Richard Watson
- Dr. Yazeed Alabdulkarim
- Kevin Brown
- Dr. Victoria Baines
Disclaimer: This is not an official record of the GCF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the GCF YouTube channel.
Session report
Richard Watson
AI development has advanced rapidly, making the IT landscape faster and more accessible to individuals and organizations alike. However, this rapid progress has also raised concerns about the threats that accompany AI technology.
One of the primary concerns is the potential for AI to enhance the authenticity of malware and enable the creation of deepfakes. Malicious actors can leverage AI-powered techniques to create sophisticated and realistic cyber threats, which can pose significant risks to individuals and businesses. Deepfakes, in particular, have the potential to undermine trust and integrity by manipulating and fabricating audio and video content.
Businesses are increasingly incorporating AI into their operations, but many struggle to effectively govern and monitor its use. This poses a challenge, as the gap between the utilization of AI and the capabilities of IT and cybersecurity to manage it can result in vulnerabilities and risks. Data poisoning is a specific concern, as it can have adverse effects on critical business processes by deliberately targeting and manipulating datasets used in AI models.
The governance and risk management frameworks need to be updated to effectively handle the complexities of AI in business settings. Organizations must address the unique challenges posed by AI in terms of privacy, accountability, and ethics. Furthermore, the integrity of the data used to train AI models is crucial. AI models are only as good as the data they are trained on, and any biases or errors in the data can produce flawed and unreliable results.
Establishing trust in AI models is also vital. Many individuals have concerns about the use of AI and are hesitant to trust companies that heavily rely on this technology. The ability to explain AI decisions, protect data privacy, and mitigate bias are essential to building this trust.
Furthermore, there are concerns about surrendering control to AI technology due to its immense knowledge and fast assimilation of new information. People worry about the potential misuse of AI in areas such as warfare and crime. Policy measures, such as President Biden's executive order, have been introduced to address these risks and manage the responsible use of AI.
The field of AI and cybersecurity faces a significant talent gap. The demand for skilled professionals in these areas far exceeds the available supply. This talent gap presents a challenge in effectively addressing the complex cybersecurity threats posed by AI.
To tackle these challenges, organizations should create clear strategies and collaborate globally. Learning from global forums and collaborations can help shape effective strategies to address the risks and enhance cybersecurity practices. Organizations must take proactive steps and not wait for perfect conditions or complete knowledge to act. Waiting can result in missed opportunities to protect against the risks associated with AI.
Integration of AI is necessary to combat the increasing volume of phishing attacks. Phishing attacks have increased substantially, and AI can play a crucial role in detecting and preventing them. However, operating models must be transformed so that AI is integrated effectively, with human review closing the loop on each detection.
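The workflow described above, automated first-pass detection feeding a human review queue, can be pictured with a deliberately simplified sketch. The indicators and weights below are hypothetical; real deployments use trained models over far richer signals, but the shape of the closed loop is the same.

```python
import re

# Hypothetical phishing-triage scorer: each indicator carries a weight,
# and messages scoring above a threshold are escalated to a human analyst.
INDICATORS = [
    (re.compile(r"verify your account", re.I), 2),
    (re.compile(r"urgent|immediately|within 24 hours", re.I), 1),
    (re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}"), 3),  # link to a raw IP address
    (re.compile(r"password|credentials", re.I), 1),
]

def phishing_score(message: str) -> int:
    """Sum the weights of all indicators present in the message."""
    return sum(weight for pattern, weight in INDICATORS if pattern.search(message))

def triage(message: str, threshold: int = 3) -> str:
    """Route high-scoring messages to human review; deliver the rest."""
    return "escalate-to-analyst" if phishing_score(message) >= threshold else "deliver"

suspicious = "URGENT: verify your account at http://192.168.10.5/login"
benign = "Lunch at noon tomorrow?"
print(triage(suspicious))  # escalate-to-analyst
print(triage(benign))      # deliver
```

The human analyst's verdict on each escalated message would, in a real system, feed back into the detector, which is the "closed-loop activity" the report refers to.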
AI and generative AI have the potential to frustrate criminals and increase the cost of their activities. By utilizing AI technology, criminal activities can become more challenging and costly to execute. For example, applying AI and generative AI can disrupt the metrics and cost-effectiveness of certain criminal operations, such as call centre scams.
In conclusion, while AI development has brought significant advancements and accessibility to IT, there are numerous challenges and risks associated with its use. These challenges include the authenticity of cyber threats, governance and monitoring issues, data integrity, trust-building, talent gaps, control concerns, and the potential misuse of AI. Organizations must address these challenges, develop effective strategies, collaborate globally, and integrate AI into their operations to ensure cybersecurity and responsible use of AI technology.
Dr. Yazeed Alabdulkarim
The analysis highlights the escalating threat of cyber attacks and the challenges faced by cybersecurity defenses. This is supported by the fact that 94% of companies have experienced a cyber attack, and the current state in 2023 shows exponential growth in the rate of cyber attacks. Cybercriminals are adopting Software-as-a-Service (SaaS) models and leveraging automation technology to scale their attacks. The availability of Malware as a Service in the cybercrime economy further strengthens their ability to carry out attacks at a larger volume and faster pace.
Generative AI is identified as a potential contributor to the intensification of the cyber attack situation. It is suggested that Generative AI could be used to create self-adaptive malware and assemble knowledge useful for physical attacks. This raises concerns about the future impact of Generative AI on cybersecurity.
There are differing stances on the regulation of Generative AI. Some argue for limitations on its use, citing the belief that the rise of cyber attacks is due to the use of Generative AI. On the other hand, there are proponents of utilizing Generative AI for defense and combating its nefarious uses. They believe that considering threat actors and designing based on the attack surface can help leverage Generative AI for defensive purposes.
Disinformation is identified as a significant issue associated with Generative AI. The ability of Generative AI to generate realistic fake content raises concerns about the spread of disinformation and its potential consequences.
On a positive note, Generative AI can be used to analyze and respond to security alerts. It is suggested that employing Generative AI in this way can help speed up defensive measures to match the increasing speed of cyber attacks. Furthermore, it is argued that limiting the use of AI technology in cybersecurity would be counterproductive. Instead, AI can play a crucial role in fully analyzing security alerts and addressing the two-speed race in cybersecurity.
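Since, per the session, only about 48% of security alerts are ever investigated, one way to picture the automated first-pass analysis suggested here is a scoring-and-prioritization pass over the alert queue. In this sketch a simple hand-written scoring function stands in for a generative model; the field names and weights are hypothetical.

```python
import heapq

# Hypothetical alert-prioritization sketch: score every alert, then
# investigate highest-risk first so no alert goes entirely unexamined.
SEVERITY = {"low": 1, "medium": 3, "high": 5, "critical": 8}

def risk_score(alert: dict) -> int:
    score = SEVERITY.get(alert["severity"], 0)
    if alert.get("asset_critical"):   # alert touches a crown-jewel system
        score += 4
    if alert.get("repeat_source"):    # source seen in earlier incidents
        score += 2
    return score

def prioritize(alerts):
    """Yield alerts ordered from highest to lowest risk."""
    heap = [(-risk_score(a), i, a) for i, a in enumerate(alerts)]
    heapq.heapify(heap)
    while heap:
        _, _, alert = heapq.heappop(heap)
        yield alert

alerts = [
    {"id": "A1", "severity": "low"},
    {"id": "A2", "severity": "high", "asset_critical": True},
    {"id": "A3", "severity": "medium", "repeat_source": True},
]
print([a["id"] for a in prioritize(alerts)])  # ['A2', 'A3', 'A1']
```

A generative model would replace the hand-written `risk_score` with richer reasoning over the alert's full context, but the workflow, triage everything and spend analyst time where it matters, is the point being made.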
The analysis also highlights the incorporation of AI elements in emerging technologies. It is predicted that upcoming technologies will incorporate AI components, indicating the widespread influence of AI. However, there are concerns that fundamental threats associated with AI will also be present in these emerging technologies.
Understanding how AI models operate is emphasized as an important aspect in the field. The ability to explain AI models is crucial for addressing concerns and building trust in AI technology.
Watermarking on AI output is proposed as a potential solution to distinguish real content from fake. It is suggested that both AI companies and authorities should establish watermarking systems to ensure the reliability and authenticity of AI-generated content.
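One simple way to picture such a provenance scheme is a keyed signature attached to generated content, which anyone holding the key can verify. This is only an illustrative sketch: production AI watermarking typically embeds statistical signals in the output itself rather than attaching a tag, and the key name below is hypothetical.

```python
import hmac
import hashlib

# Illustrative provenance watermark: the generator signs its output with
# a secret key; a verifier holding the same key confirms both the source
# and that the content has not been altered since generation.
SECRET_KEY = b"provider-signing-key"  # hypothetical key held by the AI provider

def sign_output(content: str) -> str:
    """Produce a provenance tag for a piece of generated content."""
    return hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()

def verify_output(content: str, tag: str) -> bool:
    """Check whether the tag matches the content under the shared key."""
    return hmac.compare_digest(sign_output(content), tag)

article = "Generated summary of the session."
tag = sign_output(article)
print(verify_output(article, tag))              # True: genuine, untampered
print(verify_output(article + " edited", tag))  # False: content was altered
```

A scheme run by an authority, as proposed in the session, would work the same way with the authority rather than the AI company holding the signing key.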
In conclusion, the analysis reveals the growing threat of cyber attacks and the need for stronger cybersecurity defenses. The impact of Generative AI on this situation is a subject of concern, with its potential to intensify attacks and contribute to the spread of disinformation. The regulation and use of Generative AI are topics of debate, with arguments made for limitations as well as for leveraging it in defense and combating nefarious activities. The incorporation of AI elements in emerging technologies raises both opportunities and concerns, while the understanding of AI models and the need for explainable AI should not be overlooked. Finally, watermarking on AI output has the potential to differentiate real content from fake and enhance reliability.
Dr. Victoria Baines
Data poisoning and technology evolution have emerged as significant concerns in the field of cybersecurity. Data poisoning refers to the deliberate manipulation of training data to generate outputs that deviate from the intended results. This form of attack can be insidious, as it slowly corrupts the learning process of machine learning models. Furthermore, influence operations have been conducted to spread discord and misinformation.
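As a toy illustration of the data-poisoning attack defined above, consider a nearest-centroid classifier trained on one-dimensional feature data. All data and class names here are invented for the example; the point is only that injecting mislabeled training samples silently shifts the model's decision on a clean input.

```python
# Minimal data-poisoning illustration: injected mislabeled samples drag a
# class centroid across the decision boundary, flipping a clean prediction.

def centroid_classifier(samples):
    """Compute per-class centroids from (feature, label) pairs."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

# Clean training data: 'benign' clusters near 1.0, 'malicious' near 5.0.
clean = [(0.9, "benign"), (1.1, "benign"), (1.0, "benign"),
         (4.9, "malicious"), (5.1, "malicious"), (5.0, "malicious")]

# Poisoned copy: the attacker injects 'malicious'-labeled points deep in
# benign territory, pulling the malicious centroid toward the boundary.
poisoned = clean + [(1.5, "malicious")] * 6

test_point = 2.0
print(predict(centroid_classifier(clean), test_point))     # benign
print(predict(centroid_classifier(poisoned), test_point))  # malicious
```

The model trained on poisoned data looks intact, which is why the report calls the attack a slow burn: nothing fails loudly, the outputs just quietly deviate from what was intended.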
The rapid evolution of technology, particularly in artificial intelligence (AI), has created new opportunities for cybercriminals to exploit. AI has led to the replacement of humans with non-human agents in various domains, causing disruptions and potential threats. People have found ways to make bots go bad, and large language models have been repurposed for writing malware. This highlights the need for vigilance in harnessing technological advancements, as they can be exploited for malicious purposes.
The emergence of AI has also resulted in an evolution of cyber threats. Malware implementation has seen new methods and techniques, such as gaming AI models. The ecosystem of cybercriminals may undergo changes due to AI advancements, necessitating proactive measures to counter these evolving threats.
However, not all is bleak in the world of cybersecurity. AI and automation can play a vital role in alleviating the scale and stress issues faced by human operators. The current volume of alerts and red flags in cybersecurity is overwhelming for human teams. A 2019 survey revealed that 70% of cybersecurity executives experience moderate to high stress levels. AI can assist in scaling responses and relieving human operators from burnout, enabling them to focus on tasks they are proficient in, such as threat hunting.
It is worth noting that public perception of AI is often shaped by dystopian depictions in popular culture. The portrayal of AI in science fiction and dystopian narratives tends to create a negative perception. Interestingly, people are more inclined to show positivity towards "chatbots" rather than "Artificial Intelligence". This demonstrates the influence of popular culture in shaping public opinion and highlights the need for accurate and balanced representation of AI in media.
In conclusion, data poisoning and technology evolution present significant challenges in the field of cybersecurity. The deliberate manipulation of training data and the exploitation of rapid technological advancements pose threats to the integrity and security of systems. However, AI and automation offer promising solutions to address scalability and stress-related issues, allowing human operators to focus on their core competencies. Moreover, it is important to educate the public about AI beyond dystopian depictions to foster a more balanced understanding of its potential and limitations.
Alexandra Topalian
A panel discussion was recently held to examine the cyber threats and opportunities presented by generative AI in the context of cybersecurity. The panel consisted of Richard Watson, a Global Cyber Security Leader at EY, Professor Victoria Baines, an Independent Cyber Security Researcher, Kevin Brown, the Chief Operating Officer at NCC Group, PLC, and Dr. Yazeed Alabdulkarim, the Chief Scientist of Emerging Technologies at SITE. Throughout the discussion, the participants highlighted the potential risks associated with the use of artificial intelligence (AI), specifically generative AI, in the cyber world.
One of the key points discussed during the panel was the emergence of new cyber threats arising from AI. Richard Watson, an EY consultant, stressed the importance of identifying these risks and provided examples of how generative AI can be employed to produce various types of content such as visuals, text, and audio. The panelists also acknowledged the potential danger of data poisoning in relation to generative AI.
Professor Baines echoed Watson's concerns about data poisoning, emphasizing its significance in her research. She also delved into the evolving nature of cyber crimes as new technologies, like generative AI, continue to advance. The panelists then proceeded to explore how cyber criminals can exploit generative AI to develop more sophisticated and elusive cyber threats. They highlighted the potential convergence of generative AI with social engineering tactics, such as phishing, and how this combination could amplify the effectiveness of manipulative attacks.
Dr. Yazeed Alabdulkarim shed light on the scale of cybersecurity attacks and the impact of generative AI. He stressed the need for regulation and shared insights on how SITE advises organizations on staying ahead of cyber threats. The panelists discussed the challenges, including a talent gap, associated with implementing effective strategies for early detection and management of cyber threats. Kevin Brown shared real-life incidents to illustrate how organizations tackle these challenges.
The threat of deepfakes, where AI-generated content is used to manipulate or fabricate media, was another topic explored during the panel. The participants discussed strategies for addressing this type of threat, with a focus on early detection. They also touched on the ethical boundaries of retaliating against cyber attackers based on psychological profiling, highlighting the importance of complying with the law.
Regarding opportunities, the panelists agreed that generative AI offers benefits in the field of data protection and cybersecurity. Professor Baines emphasized the potential positive aspects of generative AI, highlighting opportunities for enhanced cybersecurity and protection of sensitive information.
In conclusion, the panelists acknowledged the lasting impact of generative AI on the landscape of emerging technologies and its growing influence on cybersecurity. They recognized the advantages and challenges brought about by generative AI in the field. The discussion underscored the need for effective regulations, risk management approaches, and cybersecurity strategies to address the evolving cyber threats posed by generative AI.
Kevin Brown
Generative AI, a powerful technology with various applications, is now being used for criminal activities, leading to concerns about its negative impacts on cybersecurity and criminal behavior. One key concern is that generative AI is lowering the barrier for criminals to exploit it. This means that criminals can easily leverage generative AI for illicit activities, making it more challenging for law enforcement agencies and organizations to prevent and mitigate cybercrime.
Another major concern is that criminals have an advantage over organizations when it comes to adopting new AI technologies. Criminals can quickly launch and utilize new AI technologies without having to consider the regulatory and legal aspects that organizations are bound by. This first-mover advantage allows criminals to stay one step ahead and exploit AI technologies for their nefarious activities.
The emergence of technologies like deepfakes has also brought in a new wave of potential cyber threats. Deepfakes, which are manipulated or fabricated videos or images, have become more accessible and can be utilized in harmful ways. This poses a significant risk to individuals and organizations, as deepfakes can be used for social engineering attacks and to manipulate public opinion or spread misinformation.
Moreover, the use of large language models in artificial intelligence has raised concerns about data poisoning. Large language models can be manipulated and poisoned, leading to a range of malicious motivations. This poses a threat to the integrity and reliability of AI systems, as attackers can exploit vulnerabilities in the data used to train these models.
Additionally, generative AI has the potential to amplify the effectiveness of phishing and manipulative attacks. By using generative AI, criminals can increase the volume and quality of phishing attempts. This allows them to create phishing messages that are highly professional, relevant, and tailored to the targeted individual or business. As a result, generative AI professionalizes phishing, making it more difficult for individuals and organizations to detect and protect themselves against such attacks.
In conclusion, the increased use of generative AI for criminal activities has raised significant concerns about cybersecurity and criminal behavior. The technology has lowered the barrier for criminals to exploit it, giving them an advantage over organizations in adopting new AI technologies. Furthermore, the accessibility of technologies like deepfakes and the potential for data poisoning in large language models have added to the complexity of the cybersecurity landscape. Additionally, generative AI has the potential to amplify the effectiveness of phishing and manipulative attacks, making it harder to detect and defend against such cyber threats. It is crucial for policymakers, law enforcement agencies, and organizations to address these concerns and develop strategies to mitigate the negative impacts of generative AI on cybersecurity.
Speakers
AT
Alexandra Topalian
Speech speed
151 words per minute
Speech length
702 words
Speech time
279 secs
DV
Dr. Victoria Baines
Speech speed
172 words per minute
Speech length
915 words
Speech time
319 secs
Arguments
Data poisoning is a significant concern
Supporting facts:
- Data poisoning is a slow burn attack
- Influence operations have been targeted to spread discord
Topics: Artificial Intelligence, Data Security
Technology evolution is rapid and can be exploited for malicious intents
Supporting facts:
- People have been making bots go bad.
- Large language models have been repurposed for writing malware.
Topics: Technology Evolution, Cyber Security
The nature of crimes is changing with the emergence of new technologies
Supporting facts:
- Dr. Victoria Baines' job is changing daily, hourly with advances in new technologies.
Topics: Cyber Crime, Technology Advancement
Data poisoning is a new kind of cyber crime
Supporting facts:
- Data poisoning is a new kind of threat where training data is skewed to produce a different output than intended.
Topics: Data poisoning, Large Language Models, Cyber Crime
AI and automation can alleviate the scale and stress issues in cyber security
Supporting facts:
- Current alerts and red flags in cyber security are too much for human teams
- A 2019 survey showed 70% of cyber security executives are suffering from moderate to high stress
- AI can help scale responses and keep human operators from quitting due to burnout
- AI can help free up humans for tasks they are adept at, such as threat hunting
Topics: artificial intelligence, cyber security, automation, incident response
DY
Dr. Yazeed Alabdulkarim
Speech speed
151 words per minute
Speech length
1053 words
Speech time
420 secs
Arguments
Cybersecurity threats are increasing and defenses are not keeping up
Supporting facts:
- 94% of companies have experienced a cyber attack
- the current state in 2023 shows exponential growth in the rate of cyber attacks
Topics: Cybersecurity, Defense, Generative AI
Cybercriminals are adopting SaaS models, scaling attacks with automation
Supporting facts:
- Malware as a Service is offered in the cybercrime economy
- Automation technology is accelerating the volume and speed of attacks
Topics: Cybercrime, SaaS, Automation
Generative AI might intensify the cyber attacks situation
Supporting facts:
- Generative AI could potentially create self-adaptive malware
- Generative AI helps to assemble knowledge useful for physical attacks
Topics: Generative AI, Cyber attacks, Malware
Regulations on generative AI can either limit its use or be used as a defense mechanism.
Supporting facts:
- The UN has formed an advisory body for AI
- Recent U.S. executive order about safe and secure use of AI
Topics: Artificial Intelligence, Regulations
One of the main issues with generative AI is disinformation.
Topics: Artificial Intelligence, Disinformation
Use of generative AI to analyze and respond to security alerts
Supporting facts:
- A research study shows that only 48% of security alerts are investigated
- Adversaries are speeding up attacks so defensive measures must also speed up
Topics: Cybersecurity, Generative AI
Emerging technologies will have AI elements
Supporting facts:
- All the upcoming emerging technologies will have the AI components
Topics: AI, Emerging Technologies
Fundamental threats from AI will be present in emerging technologies
Supporting facts:
- All fundamental threats that come from AI will be present in new technological advancements
Topics: AI Threats, Emerging Technologies
The aim is to understand how the AI model operates
Supporting facts:
- The initiative of explainable AI is coming up to know how the model operates
Topics: AI, Model Operations
AI should be explainable to counter concerns
Supporting facts:
- Explainable AI should help address concerns
Topics: AI, Explainable AI
Watermarking on AI output can help distinguish real from fake
Supporting facts:
- Most of the AI companies have voluntarily proposed to put watermarking on their output
- Authorities should have their own watermarking that will ensure the source is reliable
Topics: AI, Watermarking, Deep fake
KB
Kevin Brown
Speech speed
188 words per minute
Speech length
555 words
Speech time
177 secs
Arguments
Generative AI lowers the barrier to criminal activity
Supporting facts:
- Generative AI is now being used for a wider range of activities, including criminal activity, making it easier for criminals to exploit it
Topics: Generative AI, Cybercrime
Criminals have a first-mover advantage over organizations in using new AI technologies
Supporting facts:
- Criminals can quickly launch and use new AI technologies without having to consider the regulatory and legal aspects
Topics: AI exploitation, Cybercrime
Social engineering and deepfakes have become more accessible and can be used in harmful ways
Supporting facts:
- Technologies like deepfakes have become easier to manipulate, bringing in a new wave of potential cyber threats
Topics: Deepfakes, Social Engineering, Cyber Security
Large Language Models can be manipulated and poisoned
Supporting facts:
- The emergence of Artificial Intelligence has led to issues like data poisoning, which can have a wide range of motivations
Topics: Large Language Models, Data poisoning
Generative AI has the potential to amplify the effectiveness of phishing and manipulative attacks
Supporting facts:
- NCC Group has seen an increase of over 1,000% in phishing with the use of AI
- Generative AI professionalizes phishing by improving the grammar and spelling of phishing attempts
- Generative AI allows for more targeted spear phishing with relevant and professional content
Topics: Generative AI, Phishing, Social Engineering
Report
Generative AI, a powerful technology with various applications, is now being used for criminal activities, leading to concerns about its negative impacts on cybersecurity and criminal behavior. One key concern is that generative AI is lowering the barrier for criminals to exploit it.
This means that criminals can easily leverage generative AI for illicit activities, making it more challenging for law enforcement agencies and organizations to prevent and mitigate cybercrime. Another major concern is that criminals have an advantage over organizations when it comes to adopting new AI technologies.
Criminals can quickly launch and utilize new AI technologies without having to consider the regulatory and legal aspects that organizations are bound by. This first-mover advantage allows criminals to stay one step ahead and exploit AI technologies for their nefarious activities.
The emergence of technologies like deepfakes has also brought in a new wave of potential cyber threats. Deepfakes, which are manipulated or fabricated videos or images, have become more accessible and can be utilized in harmful ways. This poses a significant risk to individuals and organizations, as deepfakes can be used for social engineering attacks and to manipulate public opinion or spread misinformation.
Moreover, the use of large language models in artificial intelligence has raised concerns about data poisoning. Large language models can be manipulated and poisoned, leading to a range of malicious motivations. This poses a threat to the integrity and reliability of AI systems, as attackers can exploit vulnerabilities in the data used to train these models.
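The data-poisoning risk described above can be made concrete with a toy sketch, assumed here purely for illustration: a trivial word-count classifier changes its verdict on a message once an attacker injects mislabeled copies of a trigger phrase into its training data.

```python
from collections import Counter

def train(samples):
    """Count how often each word appears under each label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label by which class saw the message's words more often."""
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

clean = [("claim your free prize now", "spam"),
         ("free prize waiting for you", "spam"),
         ("meeting moved to monday", "ham"),
         ("lunch at noon", "ham")]

# An attacker injects mislabeled copies of the trigger phrase.
poison = [("free prize", "ham")] * 10

msg = "you won a free prize"
print(classify(train(clean), msg))           # spam
print(classify(train(clean + poison), msg))  # ham — poisoned data flips the label
```

Large language models are far more complex than this counter, but the mechanism the speakers describe is the same: the model faithfully learns whatever the training data says, including what an attacker planted.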
Additionally, generative AI has the potential to amplify the effectiveness of phishing and manipulative attacks. By using generative AI, criminals can increase the volume and quality of phishing attempts. This allows them to create phishing messages that are highly professional, relevant, and tailored to the targeted individual or business.
As a result, generative AI professionalizes phishing, making it more difficult for individuals and organizations to detect and protect themselves against such attacks. In conclusion, the increased use of generative AI for criminal activities has raised significant concerns about cybersecurity and criminal behavior.
The technology has lowered the barrier for criminals to exploit it, giving them an advantage over organizations in adopting new AI technologies. Furthermore, the accessibility of technologies like deepfakes and the potential for data poisoning in large language models have added to the complexity of the cybersecurity landscape.
Additionally, generative AI has the potential to amplify the effectiveness of phishing and manipulative attacks, making it harder to detect and defend against such cyber threats. It is crucial for policymakers, law enforcement agencies, and organizations to address these concerns and develop strategies to mitigate the negative impacts of generative AI on cybersecurity.
RW
Richard Watson
Speech speed
188 words per minute
Speech length
1792 words
Speech time
573 secs
Arguments
AI development is fast and has democratized IT
Supporting facts:
- AI is moving quickly
- AI has made IT more accessible
Topics: Artificial Intelligence, Information Technology
There are threats associated with the rapid development of AI
Supporting facts:
- Malware can become more authentic with AI
- Deepfakes are a potential threat
Topics: Artificial Intelligence, Cybersecurity
Businesses are using AI daily, but struggle to govern and monitor its use
Supporting facts:
- There is a gap between business use of AI and the ability of IT and cybersecurity to manage
- This can lead to threats such as data poisoning and hijacking of AI
Topics: Artificial Intelligence, Business, Risk Management
Updating governance and risk management to handle AI in business is a challenge
Supporting facts:
- Organizations struggle to keep up with business's use of AI
- This gap creates risks including privacy risks
Topics: Artificial Intelligence, Governance, Risk Management
AI models are only as good as the data used to train them.
Supporting facts:
- Increasingly business processes like next best action in a call center or cyber security defense responses are based on these AI models.
Topics: AI, Data Quality
Data poisoning can create adverse business reactions.
Supporting facts:
- If prompts are deliberately targeted to poison the data, it can have consequences on business processes.
Topics: Data poisoning, Business Impact
Cyber security focuses on confidentiality, integrity and availability of data.
Topics: Cyber security, Data Integrity
The integrity of data needs to be managed, especially when corporations automate their business processes with AI.
Supporting facts:
- AI models rely heavily on the integrity of the data used to train them.
Topics: AI, Data Integrity, Business Automation
Establishing trust in AI models is crucial
Supporting facts:
- 50% of adults would not trust companies that use AI as much as those that don't
- Four out of ten adults admitted that AI-powered products worry them
Topics: AI, Trust, AI Models
UI developed a framework and an algorithm to determine if a piece of AI is trustworthy
Supporting facts:
- It looks at things like explainability, data privacy, and bias
Topics: UI (Company), AI, Framework, Algorithm, Trust
Risk management processes need to be updated to manage AI
Supporting facts:
- Organisational responsibility, business use of AI, and operational functions that are actually using the AI need to be considered
Topics: AI, Risk Management
People are not ready to surrender a high level of control to AI technology due to its immense knowledge and fast assimilation of new information
Supporting facts:
- The generative AI aspect has shocked people due to its lucidity and fast information assimilation
- AI technology has been around for 10-15 years, with its comprehensive influence recently highlighted when Microsoft invested in OpenAI
Topics: Artificial Intelligence, Control, Trust
People associate AI with big nasty problems like crime and warfare, which adds to their worry
Supporting facts:
- People are worried about the misuse of AI in potentially biological warfare or weapons creation
- President Biden's executive order addresses managing the risk of AI use for such issues
Topics: Artificial Intelligence, Warfare, Risks
There is a significant talent gap in the field of AI and cybersecurity.
Supporting facts:
- The global forum GCF identified a gap of five million professionals in AI
Topics: AI, Cybersecurity, Talent Gap
Learning from global forums and collaborations can help shape strategies.
Topics: Strategy, Global Cooperation
Increase in phishing attacks leading to necessity of AI integration
Supporting facts:
- The volume of phishing attacks has increased by a thousand percent
- AI is utilized but the operating model remains the same
Topics: AI, Phishing attacks, Cybersecurity
A defense-in-depth strategy is essential for verifying calls
Supporting facts:
- Additional controls should be in place to verify caller identity; verification cannot rely solely on call center agents
Topics: Cybersecurity, Call center, Verification
Using AI and generative AI can frustrate criminals and increase the cost of their activities
Supporting facts:
- Applying AI and generative AI can disrupt criminals' business metrics, increasing their cost of operation and slowing down their processes
Topics: Artificial Intelligence, Cyber Crime, Generative AI
Report
AI development has rapidly advanced, leading to a faster and more accessible IT landscape. This development has made IT more accessible to individuals and organizations alike. However, this rapid progress has also raised concerns regarding the associated threats that come with AI technology.
One of the primary concerns is the potential for AI to enhance the authenticity of malware and enable the creation of deepfakes. Malicious actors can leverage AI-powered techniques to create sophisticated and realistic cyber threats, which can pose significant risks to individuals and businesses.
Deepfakes, in particular, have the potential to undermine trust and integrity by manipulating and fabricating audio and video content. Businesses are increasingly incorporating AI into their operations, but many struggle to effectively govern and monitor its use. This poses a challenge, as the gap between the utilization of AI and the capabilities of IT and cybersecurity to manage it can result in vulnerabilities and risks.
Data poisoning is a specific concern, as it can have adverse effects on critical business processes by deliberately targeting and manipulating datasets used in AI models. The governance and risk management frameworks need to be updated to effectively handle the complexities of AI in business settings.
Organizations must address the unique challenges posed by AI in terms of privacy, accountability, and ethics. Furthermore, the integrity of the data used to train AI models is crucial. AI models are only as good as the data they are trained on, and any biases or errors in the data can produce flawed and unreliable results.
Establishing trust in AI models is also vital. Many individuals have concerns about the use of AI and are hesitant to trust companies that heavily rely on this technology. The ability to explain AI decisions, protect data privacy, and mitigate bias are essential to building this trust.
Furthermore, there are concerns about surrendering control to AI technology due to its immense knowledge and fast assimilation of new information. People worry about the potential misuse of AI in areas such as warfare and crime. Policy measures, such as President Biden's executive order, have been introduced to address these risks and manage the responsible use of AI.
The field of AI and cybersecurity faces a significant talent gap. The demand for skilled professionals in these areas far exceeds the available supply. This talent gap presents a challenge in effectively addressing the complex cybersecurity threats posed by AI.
To tackle these challenges, organizations should create clear strategies and collaborate globally. Learning from global forums and collaborations can help shape effective strategies to address the risks and enhance cybersecurity practices. Organizations must take proactive steps and not wait for perfect conditions or complete knowledge to act.
Waiting can result in missed opportunities to protect against the risks associated with AI. Integration of AI is necessary to combat the increasing volume of phishing attacks. Phishing attacks have seen a substantial increase, and AI can play a crucial role in detecting and preventing these attacks.
However, operating models must be transformed to ensure effective integration of AI, with human involvement closing the loop. AI and generative AI also have the potential to frustrate criminals and increase the cost of their activities. By utilizing AI technology, criminal operations can be made more challenging and costly to execute.
For example, applying AI and generative AI can disrupt the metrics and cost-effectiveness of certain criminal operations, such as call centre scams. In conclusion, while AI development has brought significant advancements and accessibility to IT, there are numerous challenges and risks associated with its use.
These challenges include the authenticity of cyber threats, governance and monitoring issues, data integrity, trust-building, talent gaps, control concerns, and the potential misuse of AI. Organizations must address these challenges, develop effective strategies, collaborate globally, and integrate AI into their operations to ensure cybersecurity and responsible use of AI technology.