Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33

11 Oct 2023 06:15h - 07:45h UTC

Event report

Speakers and Moderators

Speakers
  • Amal El Fallah-Seghrouchni, Executive President, Moroccan International Center for Artificial Intelligence
  • Anastasiya Kazakova, Cyber Diplomacy Knowledge Fellow, DiploFoundation
  • Dennis-Kenji Kipker, Expert in Cybersecurity Law, University of Bremen
  • Jochen Michels, Head of Public Affairs Europe, Kaspersky, Europe
  • Noushin Shabab, Senior Security Researcher, Global Research and Analysis, Kaspersky, Australia
Moderators
  • Genie Sugene Gan, Head of Government Affairs, APAC, Kaspersky

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Martin Boteman

The discussion delves into the significance of identity in the realm of security and the crucial role that AI can play in safeguarding identities. It is acknowledged that, with the advancement of AI, data is more often personally identifiable than ever before, creating a need to address the complex relationship between identity and privacy.

One argument put forward is that security will require identity. The increasing personal identifiability of data, facilitated by AI, has made it imperative to establish and protect individual identities for the sake of security. This argument highlights the evolving nature of security in the digital age and the need to adapt to these changes.

On the other hand, a positive stance is taken towards the potential of AI in enhancing security with the identity factor. It is suggested that AI can aid in securing identities by leveraging its capabilities. The specifics of how AI can contribute to this aspect are not explicitly mentioned, but it is implied that AI can play a role in ensuring the authenticity and integrity of identities.

Furthermore, the discussion recognises the necessity to address the dichotomy between identity and privacy. While identity is essential for security purposes, safeguarding privacy is equally important. This creates a challenge in finding a balance between the two. The analysis raises the question of how to deal with this dichotomy in future endeavours, emphasizing the need for a thoughtful and nuanced approach.

Legal measures are acknowledged as an important consideration in the context of AI. However, it is argued that relying solely on legal frameworks is not enough. This underlines the complexity of regulating AI and the urgent need for additional measures to ensure the responsible and ethical use of the technology. The mention of the Algorithmic Accountability Act in the USA and the European Union's AI Act serves to highlight the efforts being made to address these concerns.

Overall, there is a positive sentiment regarding the potential of AI in enhancing security with the identity factor. The discussion reinforces the significance of ethical principles such as security by design and privacy by design when implementing AI solutions. It asserts that taking responsibility for AI and incorporating these principles into its development and deployment is essential.

More specific evidence or examples supporting these arguments would have strengthened the discussion. Nonetheless, it highlights the intersection of identity, privacy, AI, and security and emphasizes the need for responsible and balanced approaches in this rapidly evolving landscape.

Amal El Fallah-Seghrouchni, Executive President, Moroccan International Center for Artificial Intelligence

Artificial Intelligence (AI) has emerged as a powerful tool in the field of cybersecurity, with the potential to enhance and transform existing systems. By leveraging AI, common cybersecurity tasks can be automated, allowing for faster and more efficient detection and response to threats. AI can also analyze and identify potential threats in large datasets, enabling cybersecurity professionals to stay one step ahead of cybercriminals.
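
The kind of automated threat detection described here can be illustrated with a toy example (purely a sketch; real products use far richer statistical and machine learning models than this baseline z-score check, and the hosts and counts below are invented):

```python
import statistics

# Toy anomaly detector: flag hosts whose login-failure counts deviate
# strongly from the fleet baseline. Illustrative only.
def flag_anomalies(counts: dict, z_threshold: float = 1.5) -> list:
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0  # avoid division by zero
    return [host for host, c in counts.items()
            if (c - mean) / stdev > z_threshold]

logins = {"web-01": 3, "web-02": 5, "db-01": 4, "vpn-01": 250, "mail-01": 2}
suspicious = flag_anomalies(logins)  # only the outlier host is flagged
```

Automating this kind of baseline comparison across millions of events is what lets analysts focus on the handful of genuinely suspicious cases.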

The importance of AI in cybersecurity is further highlighted by its recognition as a national security priority. Organizations such as the National Science Foundation (NSF), National Science and Technology Council (NSTC), and National Aeronautics and Space Administration (NASA) have emphasized the significance of AI in maintaining the security of nations. This recognition demonstrates the growing global awareness of the role that AI can play in safeguarding critical infrastructure and sensitive data.

However, the use of AI in cybersecurity also raises concerns about the vulnerability of AI systems. Adversarial machine learning techniques can be deployed to attack AI systems, potentially compromising their effectiveness. It is crucial to regulate the use of AI in cybersecurity to mitigate these vulnerabilities and ensure the reliability and security of these systems.
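
A minimal sketch of what such an adversarial evasion attack can look like, against a hypothetical linear malware scorer (the model, weights, features, and threshold below are all invented for illustration; real adversarial machine learning targets far more complex models):

```python
# A toy linear malware scorer and a greedy evasion attack against it.
WEIGHTS = {"num_links": 0.8, "has_macro": 1.5, "entropy": 0.6}
THRESHOLD = 2.0  # score >= THRESHOLD means "flagged as malicious"

def score(sample: dict) -> float:
    """Linear score: weighted sum of the sample's features."""
    return sum(WEIGHTS[k] * sample[k] for k in WEIGHTS)

def evade(sample: dict, step: float = 0.1) -> dict:
    """Perturb the highest-impact nonzero feature downward until the
    sample slips under the detection threshold (a crude, gradient-like
    evasion in the spirit of adversarial machine learning)."""
    adv = dict(sample)
    while score(adv) >= THRESHOLD:
        k = max(WEIGHTS, key=lambda f: WEIGHTS[f] * (adv[f] > 0))
        adv[k] = max(0.0, adv[k] - step)
    return adv

malicious = {"num_links": 2.0, "has_macro": 1.0, "entropy": 1.2}
assert score(malicious) >= THRESHOLD  # detected before perturbation
evaded = evade(malicious)
assert score(evaded) < THRESHOLD      # evades after small feature changes
```

The attacker never needs to change the payload's behaviour, only the features the model observes, which is why such models need hardening and human oversight.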

Furthermore, AI is not only a tool for defending against cyber threats but can also be used to create new kinds of attacks. For example, AI-powered systems can be utilized for phishing, cyber extortion, and automated interactive attacks. The potential for AI to be used maliciously highlights the need for robust ethical and regulatory considerations in the development and deployment of AI systems in the cybersecurity domain.

Ethical and regulatory considerations are necessary to strike a balance between the power of AI and human control. Complete delegation of control to AI in cybersecurity is not recommended, as human oversight and decision-making are essential. Frameworks should be established to ensure the ethical use of AI and to address concerns related to privacy, data governance, and individual rights.

Initiatives aimed at differentiating between identifier and identity are being pursued to strengthen security and privacy measures. By avoiding the use of a unique identifier for individuals and instead associating sectorial identifiers with identity through trusted third-party certification, the risk of data breaches and unauthorized access is reduced.
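
One way such sector-specific identifiers could be derived (a sketch under assumed design choices, not the actual initiative's scheme) is with a keyed hash held by the trusted third party, so that identifiers issued to different sectors cannot be linked to each other:

```python
import hmac
import hashlib

# Hypothetical keyed derivation: the trusted third party holds the secret
# key and issues a different, unlinkable identifier per sector.
SECRET_KEY = b"held-only-by-the-trusted-third-party"

def sectorial_identifier(citizen_id: str, sector: str) -> str:
    """Derive a per-sector pseudonym; two sectors cannot correlate
    their identifiers without the third party's key."""
    msg = f"{sector}:{citizen_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

health_id = sectorial_identifier("alice-1984", "health")
tax_id = sectorial_identifier("alice-1984", "tax")
assert health_id != tax_id  # same person, unlinkable identifiers
assert health_id == sectorial_identifier("alice-1984", "health")  # stable
```

A breach in one sector then exposes only that sector's pseudonyms, not a master identifier usable everywhere.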

In addition to data protection, ethics in AI extend to considerations of dignity and human rights. It is essential to incorporate these ethical principles into the design and implementation of AI systems. Furthermore, informed consent and user awareness are crucial in ensuring that individuals understand the implications and potential risks associated with using generative AI systems.

Preserving dignity and human rights should be a priority in all systems, including those powered by AI. This encompasses a continuous debate and discussion in which the principles of ethics play a central role. Educating the population and working towards informed consent are important steps in achieving a balance between the benefits and potential harms of AI.

Accountability, privacy, and data protection are recognized as tools towards ensuring ethical practices. These principles should be integrated into the development and deployment of AI systems to safeguard individual rights and maintain public trust.

Overall, AI has the potential to revolutionize cybersecurity, but its implementation requires careful consideration of ethical, regulatory, and privacy concerns. While AI can enhance and transform the field of cybersecurity, there is a need for comprehensive regulation to address vulnerabilities. The differentiation between identifier and identity, as well as the emphasis on dignity and human rights, are important factors to consider in deploying AI systems. Promoting informed consent, user awareness, and ethical use of AI should be prioritized to maintain a secure and trustworthy digital environment.

Audience

During the discussion, the speakers delved into the implementation of ethical AI in the field of cybersecurity and raised concerns regarding its potential disadvantages when countering unethical adversarial AI. They emphasised that adversaries employing adversarial AI techniques are unlikely to consider ethical principles and may operate without any regard for the consequences of their actions.

The audience expressed apprehension about the practicality and effectiveness of using ethical AI in defending against unethical adversarial AI. They questioned whether the application of ethical AI would provide a sufficient response to the increasingly sophisticated and malicious tactics employed by adversaries. It was noted that engaging in responsive actions by deploying ethical AI to counter unethical adversarial AI might place defenders at a disadvantage, highlighting the complexity of the issue.

Given these concerns, the need for a thorough review of the application of ethical AI in response to unethical adversarial AI was acknowledged. There was specific emphasis on active cyber defence, which involves proactive measures to prevent cyber attacks and mitigate potential harm. The aim of the review is to ensure that the use of ethical AI is optimised and effectively aligned with the challenges posed by unethical adversarial AI.

These discussions revolved around the topics of Ethical AI, Adversarial AI, Cybersecurity, and Active Cyber Defence, all of which are highly relevant in today's digital landscape. The concerns raised during the discussion reflect the ongoing tension between the desire to uphold ethical principles and the practical challenges faced when countering adversaries who disregard those principles.

Furthermore, this discussion aligns with the Sustainable Development Goals (SDGs) 9 and 16, which emphasise the importance of creating resilient infrastructure, fostering innovation, promoting peaceful and inclusive societies, and ensuring access to justice for all. By addressing the ethical challenges associated with adversarial AI in cybersecurity, efforts can be made towards achieving these SDGs, as they are integral to building a secure and just digital environment.

Overall, the discussion underscored the need for careful consideration and evaluation of the application of ethical AI in response to unethical adversarial AI. Balancing the ethical dimension with the practical requirements of countering adversaries in the ever-evolving digital landscape is a complex task that warrants ongoing discussion and analysis.

Anastasiya Kazakova, Cyber Diplomacy Knowledge Fellow, DiploFoundation

Artificial Intelligence (AI) plays a crucial role in enhancing cybersecurity by improving threat detection and intelligence gathering. However, concerns have been raised regarding the autonomous nature of AI and its potential to make impactful decisions in everyday life. It is argued that AI should not operate solely autonomously, highlighting the importance of human oversight in guiding AI's decision-making processes.

A major issue faced in the field of AI is the anticipation of conflicting AI regulations being established by major markets, including the EU, US, and China. This potential fragmentation in regulations raises concerns about the limitations and hindered benefits of AI. It is important to have uniform regulations that promote the widespread use and opportunities of AI for different communities.

The challenge of defining AI universally is another issue faced by legislators. With AI evolving rapidly, it becomes increasingly difficult to encompass all technological advancements within rigid legal frameworks. Instead, the focus should be on regulating the outcomes and expectations of AI, rather than the technology itself. This flexible and outcome-driven approach allows for adaptable regulations that keep up with the dynamic nature of AI development.

In the realm of cybersecurity, the question arises of whether organizations should have the right to "hack back" in response to attacks. Most governments and industries agree that organizations should not have this right, as it can lead to escalating cyber conflicts. Instead, it is recommended that law enforcement agencies with the appropriate mandate step in and investigate cyberattacks.

The challenges faced in cyberspace are becoming increasingly sophisticated, requiring both technical and policy solutions. Addressing cyber threats necessitates identifying the nature of the threat, whether it is cyber espionage, an Advanced Persistent Threat (APT), or a complex Distributed Denial of Service (DDoS) attack. Hence, integrated approaches involving both technical expertise and policy frameworks are essential to effectively combat cyber threats.

Ethical behavior is emphasized in the field of cybersecurity. It is crucial for good actors to abide by international and national laws, even in their reactions to unethical actions. Reacting unethically to protect oneself can compromise overall security and stability. Therefore, ethical guidelines and considerations must guide actions in the cybersecurity realm.

The solution to addressing cybersecurity concerns lies in creativity and enhanced cooperation. Developing new types of response strategies and increasing collaboration between communities, vendors, and governments are vital. While international and national laws provide a foundation, innovative approaches and thinking must be utilized to develop effective responses to emerging cyber threats.

Regulations play an important role in addressing AI challenges, but they are not the sole solution. The industry can also make significant strides in enhancing AI ethics, governance, and transparency without solely relying on policymakers and regulators. Therefore, a balanced approach that combines effective regulations with industry initiatives is necessary.

Increased transparency in software and AI-based solution composition is supported. The initiative of a "software bill of materials" is seen as a positive step towards understanding the composition of software, similar to knowing the ingredients of a cake. Documenting data sources, collection methods, and processing techniques promotes responsible consumption and production.
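
A minimal illustration of what such a "bill of materials" might record for an AI-based product (the field names and entries here are hypothetical, loosely inspired by formats such as SPDX and CycloneDX):

```python
import json

# Hypothetical minimal bill of materials for an AI-based product: it lists
# the software components and the training-data provenance, much like the
# ingredients of a cake.
sbom = {
    "product": "example-endpoint-scanner",
    "version": "1.4.2",
    "components": [
        {"name": "openssl", "version": "3.0.13", "license": "Apache-2.0"},
        {"name": "onnxruntime", "version": "1.17.0", "license": "MIT"},
    ],
    "training_data": [
        {
            "source": "internal malware corpus",
            "collected": "2015-2023",
            "processing": "deduplicated, labels reviewed by analysts",
        }
    ],
}

print(json.dumps(sbom, indent=2))
```

Publishing such a document lets customers audit what is inside a product, which is the transparency goal the initiative describes.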

In conclusion, AI has a significant impact on cybersecurity, but it should not operate exclusively autonomously. Addressing challenges such as conflicting regulations, defining AI, the right to "hack back," and increasing sophistication of cyber threats requires a multidimensional approach that encompasses technical expertise, policy frameworks, ethical considerations, creativity, and enhanced cooperation. Effective regulations, industry initiatives, and transparency in software composition all contribute to a more secure and stable cyberspace.

Noushin Shabab, Senior Security Researcher, Global Research and Analysis, Kaspersky

Kaspersky, a leading cybersecurity company, has harnessed the power of artificial intelligence (AI) and machine learning to strengthen cybersecurity. It has integrated machine learning techniques into its products for many years, resulting in significant improvements.

Transparency is paramount when using AI in cybersecurity, according to Kaspersky. To achieve this, they have implemented a global transparency initiative and established transparency centers in various countries. These centers allow stakeholders and customers to access and review their product code, fostering trust and collaboration in the cybersecurity field.

While AI and machine learning have proven effective in cybersecurity, it is crucial to protect these systems from misuse. Attackers can manipulate machine learning outcomes, posing a significant threat. Safeguards and security measures must be implemented to ensure the integrity of AI and machine learning systems.

Kaspersky believes that effective cybersecurity requires a balance between AI and human control. While machine learning algorithms are adept at analyzing complex malware, human involvement is essential for informed decision-making and responding to evolving threats. Kaspersky combines human control with machine learning to ensure comprehensive cybersecurity practices.

Respecting user privacy is another vital consideration when incorporating AI in cybersecurity. Kaspersky has implemented measures such as pseudonymization, anonymization, data minimization, and personal identifier removal to protect user privacy. By prioritizing user privacy, Kaspersky provides secure and trustworthy solutions.
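
As an illustration of what pseudonymization and data minimization can look like in practice (a generic sketch; the field names, salt handling, and hash values are assumptions, not any vendor's actual telemetry pipeline):

```python
import hashlib

SALT = b"rotate-me-regularly"  # assumed salt-rotation policy

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a salted hash. Unlike full
    anonymization, the pseudonym stays stable for linking records."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:12]

def minimize(record: dict) -> dict:
    """Data minimization: keep only fields needed for threat analysis."""
    allowed = {"file_hash", "detection_name", "os_version"}
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "user_id": "alice@example.com",          # personal identifier: dropped
    "file_path": r"C:\Users\alice\doc.exe",  # may reveal identity: dropped
    "file_hash": "9f86d081884c7d65",         # hypothetical value
    "detection_name": "Trojan.Generic",
    "os_version": "10.0.19045",
}
telemetry = minimize(raw)
telemetry["device"] = pseudonymize(raw["user_id"])
assert "user_id" not in telemetry and "file_path" not in telemetry
```

The record that leaves the machine still supports threat analysis, but no longer carries the user's direct identifiers.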

Collaboration and open dialogue are emphasized by Kaspersky in the AI-enabled cybersecurity domain. They advocate for collective efforts and knowledge exchange to combat cyber threats effectively. Open dialogue promotes the sharing of insights and ideas, leading to stronger cybersecurity practices.

It is crucial to be aware of the potential misuse of AI by malicious actors. AI can facilitate more convincing social engineering attacks, like spear-phishing, which can deceive even vigilant users. However, Kaspersky highlights that advanced security solutions, incorporating machine learning, can identify and mitigate such attacks.

User awareness and education are essential to counter AI-enabled cyber threats. Kaspersky underscores the importance of educating users to understand and effectively respond to these threats. Combining advanced security solutions with user education is a recommended approach to tackle AI-enabled cyber threats.

In conclusion, Kaspersky's approach to AI-enabled cybersecurity encompasses leveraging machine learning, maintaining transparency, safeguarding systems, respecting user privacy, and promoting collaboration and user education. By adhering to these principles, Kaspersky aims to enhance cybersecurity practices and protect users from evolving threats.

Dennis Kenji Kipker, Expert in Cybersecurity Law, University of Bremen

The discussions revolve around the integration of artificial intelligence (AI) and cybersecurity. AI has already been used in the field of cybersecurity for automated anomaly detection in networks and to improve overall cybersecurity measures. The argument is made that AI and cybersecurity have been interconnected for a long time, even before the emergence of use cases like generative AI.

It is argued that special AI regulation specifically for cybersecurity is not necessary. European lawmakers are mentioned as leaders in cybersecurity legislation, using the term "state-of-the-art of technology" to define the compliance requirements for private companies and public institutions. Attacks using AI can be covered by existing national cybercrime legislation, without the need for explicit AI-specific regulation. Furthermore, it is highlighted that the development and security of AI is already addressed in legislation such as the European AI Act.

The need for clear differentiation in the regulation of AI and cybersecurity is emphasized. Different scenarios need different approaches, distinguishing between cases where AI is one of several technical means and cases where AI-specific risks need to be regulated.

The privacy risks associated with AI development are also acknowledged. High-impact privacy risks can arise during the development process and need to be carefully considered and addressed.

The struggles in implementing privacy laws and detecting violations are mentioned. It is suggested that more efforts are needed to effectively enforce privacy laws and detect violations in order to protect individuals' privacy.

While regulation of AI is deemed necessary, it is also suggested that it should not unnecessarily delay or hinder other necessary regulations. The European AI Act, with its risk classes, is mentioned as a good first approach to AI regulation.

The importance of cooperation between the state and industry actors is emphasized. AI is mainly developed by a few big tech players from the US, and there is a need for closer collaboration between the state and industry actors for improved governance and oversight of AI.

It is argued that self-regulation by industries alone is not enough. Establishing a system of transparency on a permanent legal basis is seen as necessary to ensure ethical and responsible AI development and deployment.

Additional resources and stronger supervision of AI are deemed necessary. Authorities responsible for the supervision of AI should be equipped with more financial and personnel resources to effectively monitor and regulate AI activities.

The need for human control in AI-related decision-making is emphasized. Official decisions or decisions made by private companies that can have a negative impact on individuals should not be solely based on AI but should involve human oversight and control.

Safety in AI development is considered paramount. It is emphasized that secure development practices are crucial to ensure the safety and reliability of AI solutions.

Lastly, it is acknowledged that while regulation plays a vital role, it alone cannot completely eliminate all the problems associated with AI. There is a need for a comprehensive approach that combines effective regulation, cooperation, resources, and human control to address the challenges and maximize the benefits of AI technology.

Jochen Michels, Head of Public Affairs Europe, Kaspersky

During the session, all the speakers were in agreement that the six ethical principles of AI use in cybersecurity are equally important. This consensus among the speakers highlights their shared understanding of the significance of each principle in ensuring ethical practices in the field.

Furthermore, the attendees of the session also recognized the importance of all six principles. The fact that these principles were mentioned by multiple participants indicates their collective acknowledgement of the principles' value. This shared significance emphasizes the need to consider all six principles when addressing the ethical challenges posed by AI in cybersecurity.

However, while acknowledging the equal importance of the principles, there is consensus among the participants that further multi-stakeholder discussion is necessary. This discussion should involve a comprehensive range of stakeholders, including industry representatives, academics, and political authorities. By involving all these parties, it becomes possible to ensure a holistic and inclusive approach to addressing the ethical implications of AI use in cybersecurity.

The need for this multi-stakeholder discussion becomes evident through the variety of principles mentioned in a poll conducted during the session. The diverse range of principles brought up by the attendees emphasizes the importance of engaging all involved parties to ensure comprehensive coverage of ethical considerations.

In conclusion, the session affirmed that all six ethical principles of AI use in cybersecurity are of equal importance. However, it also highlighted the necessity for further multi-stakeholder discussion to ensure comprehensive coverage and engagement of all stakeholders. This discussion should involve representatives from industry, academia, and politics to effectively address the ethical challenges posed by AI in cybersecurity. The session underscored the significance of partnerships and cooperation in tackling these challenges on a broader scale.

Moderator

The panel discussion on the ethical principles of AI in cybersecurity brought together experts from various backgrounds. Panelists included Professor Dennis Kenji Kipker, an expert in cybersecurity law from Germany; Professor Amal El Fallah-Seghrouchni, Executive President of the Moroccan International Center for Artificial Intelligence; Ms. Noushin Shabab, a Senior Security Researcher at Kaspersky in Australia; and Ms. Anastasiya Kazakova, a Cyber Diplomacy Knowledge Fellow at DiploFoundation in Serbia.

The panelists discussed the potential of AI to enhance cybersecurity but stressed the need for a dialogue on ethical principles. AI can automate common tasks and help identify threats in cybersecurity. Kaspersky detects 325,000 new malicious files daily and recognizes the role AI can play in transforming cybersecurity methods. However, AI systems in cybersecurity are vulnerable to attacks and misuse: adversarial techniques can be used to attack AI systems, and AI itself can be misused to create fake videos and AI-powered malware.

Transparency, safety, human control, privacy, and defense against cyber attacks were identified as key ethical principles in AI cybersecurity. The panelists emphasized the importance of transparency in understanding the technology being used and protecting user data. They also highlighted the need for human control in decision-making processes, as decisions impacting individuals cannot solely rely on AI algorithms.

The panelists and online audience agreed on the equal importance of these ethical principles and called for further discussions on their implementation. The moderator supported multi-stakeholder discussions and stressed the involvement of various sectors, including industry, research, academia, politics, and civil society, for a comprehensive and inclusive approach.

Plans are underway to develop an impulse paper outlining ethical principles for the use of AI in cybersecurity. This paper will reflect the discussion outcomes and be shared with the IGF community. Feedback from stakeholders will be gathered to further refine the principles. Kaspersky will also use the paper to develop their own ethical principles.

In summary, the panel discussion highlighted the ethical considerations of AI in cybersecurity. Transparency, safety, human control, privacy, and defense against cyber attacks were identified as crucial principles. The ongoing multi-stakeholder discussions and the development of an impulse paper aim to provide guidelines for different sectors and promote an ethical approach to AI in cybersecurity.

Speakers

Amal El Fallah-Seghrouchni
Speech speed: 128 words per minute
Speech length: 1741 words
Speech time: 817 secs

Anastasiya Kazakova
Speech speed: 170 words per minute
Speech length: 3082 words
Speech time: 1087 secs

Audience
Speech speed: 167 words per minute
Speech length: 130 words
Speech time: 47 secs

Dennis Kenji Kipker
Speech speed: 157 words per minute
Speech length: 1961 words
Speech time: 747 secs

Jochen Michels
Speech speed: 123 words per minute
Speech length: 82 words
Speech time: 40 secs

Martin Boteman
Speech speed: 158 words per minute
Speech length: 389 words
Speech time: 148 secs

Moderator
Speech speed: 162 words per minute
Speech length: 3553 words
Speech time: 1313 secs

Noushin Shabab
Speech speed: 125 words per minute
Speech length: 1371 words
Speech time: 658 secs