Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33

11 Oct 2023 06:15h - 07:45h UTC

Event report

Speakers and Moderators

Speakers
  • Amal El Fallah-Seghrouchni, Executive President, Moroccan International Center for Artificial Intelligence
  • Anastasiya Kazakova, Cyber Diplomacy Knowledge Fellow, DiploFoundation
  • Dennis-Kenji Kipker, Expert in Cybersecurity Law, University of Bremen
  • Jochen Michels, Head of Public Affairs Europe, Kaspersky, Europe
  • Noushin Shabab, Senior Security Researcher, Global Research and Analysis, Kaspersky, Australia
Moderators
  • Genie Sugene Gan, Head of Government Affairs, APAC, Kaspersky

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Martin Boteman

The discussion delves into the significance of identity in security and the crucial role that AI can play in safeguarding identities. It is acknowledged that, as AI advances, data can be tied to identifiable individuals more readily than ever before, creating a need to address the complex relationship between identity and privacy.

One argument put forward is that security will require identity. The increasing personal identifiability of data, facilitated by AI, has made it imperative to establish and protect individual identities for the sake of security. This argument highlights the evolving nature of security in the digital age and the need to adapt to these changes.

On the other hand, a positive stance is taken towards the potential of AI in enhancing security with the identity factor. It is suggested that AI can aid in securing identities by leveraging its capabilities. The specifics of how AI can contribute to this aspect are not explicitly mentioned, but it is implied that AI can play a role in ensuring the authenticity and integrity of identities.

Furthermore, the discussion recognises the necessity to address the dichotomy between identity and privacy. While identity is essential for security purposes, safeguarding privacy is equally important. This creates a challenge in finding a balance between the two. The analysis raises the question of how to deal with this dichotomy in future endeavours, emphasizing the need for a thoughtful and nuanced approach.

Legal measures are acknowledged as an important consideration in the context of AI. However, it is argued that relying solely on legal frameworks is not enough. This underlines the complexity of regulating AI and the urgent need for additional measures to ensure the responsible and ethical use of the technology. The mention of the Algorithmic Accountability Act in the USA and the European Union’s AI Act serves to highlight the efforts being made to address these concerns.

Overall, there is a positive sentiment regarding the potential of AI in enhancing security with the identity factor. The discussion reinforces the significance of ethical principles such as security by design and privacy by design when implementing AI solutions. It asserts that taking responsibility for AI and incorporating these principles into its development and deployment is essential.

It is worth noting that the expanded summary provides a comprehensive overview of the main points discussed. However, more specific evidence or examples supporting these arguments could have further strengthened the analysis. Nonetheless, the analysis highlights the intersection of identity, privacy, AI, and security and emphasizes the need for responsible and balanced approaches in this rapidly evolving landscape.

Amal El Fallah Seghrouchni

Artificial Intelligence (AI) has emerged as a powerful tool in the field of cybersecurity, with the potential to enhance and transform existing systems. By leveraging AI, common cybersecurity tasks can be automated, allowing for faster and more efficient detection and response to threats. AI can also analyze and identify potential threats in large datasets, enabling cybersecurity professionals to stay one step ahead of cybercriminals.
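
To make this concrete, the sketch below shows one common pattern behind such automation: an unsupervised anomaly detector trained on normal network traffic that flags outliers for analyst review. It is an illustrative toy rather than any vendor's actual pipeline, and the flow features and numbers are invented for the example.

```python
# Hypothetical anomaly detection on network flow telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic "normal" flows: [bytes_sent, bytes_received, duration_seconds]
normal_flows = rng.normal(loc=[5_000, 20_000, 30],
                          scale=[1_000, 5_000, 10],
                          size=(500, 3))
# Two exfiltration-like flows: huge uploads, tiny responses, short duration
suspicious_flows = np.array([[900_000, 1_000, 2],
                             [750_000, 500, 1]])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

# decision_function < 0 means "looks anomalous"; such flows would be queued
# for a human analyst rather than acted on autonomously.
for flow, score in zip(suspicious_flows,
                       detector.decision_function(suspicious_flows)):
    print(flow, "ANOMALY" if score < 0 else "ok", round(float(score), 3))
```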

The importance of AI in cybersecurity is further highlighted by its recognition as a national security priority. Organizations such as the National Science Foundation (NSF), National Science and Technology Council (NSTC), and National Aeronautics and Space Administration (NASA) have emphasized the significance of AI in maintaining the security of nations. This recognition demonstrates the growing global awareness of the role that AI can play in safeguarding critical infrastructure and sensitive data.

However, the use of AI in cybersecurity also raises concerns about the vulnerability of AI systems. Adversarial machine learning techniques can be deployed to attack AI systems, potentially compromising their effectiveness. It is crucial to regulate the use of AI in cybersecurity to mitigate these vulnerabilities and ensure the reliability and security of these systems.
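
As a hedged illustration of what adversarial machine learning can look like, the sketch below applies the well-known fast gradient sign method (FGSM) to a toy classifier: a small, crafted perturbation of the input pushes the model toward a wrong answer. The model and numbers are invented for the example and stand in for any AI-based detector.

```python
# Toy FGSM attack: perturb an input in the direction that increases the loss.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)  # stand-in for a deployed classifier head
x = torch.randn(1, 10, requires_grad=True)  # a legitimate input
true_label = torch.tensor([0])

loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

epsilon = 0.5  # perturbation budget, kept large here so the effect shows
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:    ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
# With enough budget the label typically flips even though the perturbed
# input stays close to the original, which is what makes such attacks hard
# to spot and why AI-based defenses themselves need protection.
```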

Furthermore, AI is not only a tool for defending against cyber threats but can also be used to create new kinds of attacks. For example, AI-powered systems can be utilized for phishing, cyber extortion, and automated interactive attacks. The potential for AI to be used maliciously highlights the need for robust ethical and regulatory considerations in the development and deployment of AI systems in the cybersecurity domain.

Ethical and regulatory considerations are necessary to strike a balance between the power of AI and human control. Complete delegation of control to AI in cybersecurity is not recommended, as human oversight and decision-making are essential. Frameworks should be established to ensure the ethical use of AI and to address concerns related to privacy, data governance, and individual rights.

Initiatives aimed at differentiating between identifier and identity are being pursued to strengthen security and privacy measures. By avoiding the use of a unique identifier for individuals and instead associating sectorial identifiers with identity through trusted third-party certification, the risk of data breaches and unauthorized access is reduced.
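
One way to picture this separation in code: a trusted third party derives a different pseudonymous identifier per sector from the same underlying identity, so records cannot be linked across sectors without the third party's keys. The sketch below is a hypothetical illustration; the key names and formats are invented, and a real deployment would rely on certified key management rather than hard-coded keys.

```python
# Hypothetical sector-scoped identifiers derived from one identity.
import hashlib
import hmac

SECTOR_KEYS = {  # in practice held only by the certifying third party
    "health": b"demo-key-health",
    "tax": b"demo-key-tax",
}

def sector_identifier(identity: str, sector: str) -> str:
    """Derive a stable identifier that is valid only within one sector."""
    return hmac.new(SECTOR_KEYS[sector], identity.encode(),
                    hashlib.sha256).hexdigest()[:16]

person = "citizen-123456789"
print("health ID:", sector_identifier(person, "health"))
print("tax ID:   ", sector_identifier(person, "tax"))
# Without the keys, the two identifiers cannot be linked, so a breach in
# one sector does not expose the same person's records in another.
```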

In addition to data protection, ethics in AI extend to considerations of dignity and human rights. It is essential to incorporate these ethical principles into the design and implementation of AI systems. Furthermore, informed consent and user awareness are crucial in ensuring that individuals understand the implications and potential risks associated with using generative AI systems.

Preserving dignity and human rights should be a priority in all systems, including those powered by AI. This encompasses a continuous debate and discussion in which the principles of ethics play a central role. Educating the population and working towards informed consent are important steps in achieving a balance between the benefits and potential harms of AI.

Accountability, privacy, and data protection are recognized as tools towards ensuring ethical practices. These principles should be integrated into the development and deployment of AI systems to safeguard individual rights and maintain public trust.

Overall, AI has the potential to revolutionize cybersecurity, but its implementation requires careful consideration of ethical, regulatory, and privacy concerns. While AI can enhance and transform the field of cybersecurity, there is a need for comprehensive regulation to address vulnerabilities. The differentiation between identifier and identity, as well as the emphasis on dignity and human rights, are important factors to consider in deploying AI systems. Promoting informed consent, user awareness, and ethical use of AI should be prioritized to maintain a secure and trustworthy digital environment.

Audience

During the discussion, the speakers delved into the implementation of ethical AI in the field of cybersecurity and raised concerns regarding its potential disadvantages when countering unethical adversarial AI. They emphasised that adversaries employing adversarial AI techniques are unlikely to consider ethical principles and may operate without any regard for the consequences of their actions.

The audience expressed apprehension about the practicality and effectiveness of using ethical AI in defending against unethical adversarial AI. They questioned whether the application of ethical AI would provide a sufficient response to the increasingly sophisticated and malicious tactics employed by adversaries. It was noted that engaging in responsive actions by deploying ethical AI to counter unethical adversarial AI might place defenders at a disadvantage, highlighting the complexity of the issue.

Given these concerns, the need for a thorough review of the application of ethical AI in response to unethical adversarial AI was acknowledged. There was specific emphasis on active cyber defence, which involves proactive measures to prevent cyber attacks and mitigate potential harm. The aim of the review is to ensure that the use of ethical AI is optimised and effectively aligned with the challenges posed by unethical adversarial AI.

These discussions revolved around the topics of Ethical AI, Adversarial AI, Cybersecurity, and Active Cyber Defence, all of which are highly relevant in today’s digital landscape. The concerns raised during the discussion reflect the ongoing tension between the desire to uphold ethical principles and the practical challenges faced when countering adversaries who disregard those principles.

Furthermore, this discussion aligns with the Sustainable Development Goals (SDGs) 9 and 16, which emphasise the importance of creating resilient infrastructure, fostering innovation, promoting peaceful and inclusive societies, and ensuring access to justice for all. By addressing the ethical challenges associated with adversarial AI in cybersecurity, efforts can be made towards achieving these SDGs, as they are integral to building a secure and just digital environment.

Overall, the discussion underscored the need for careful consideration and evaluation of the application of ethical AI in response to unethical adversarial AI. Balancing the ethical dimension with the practical requirements of countering adversaries in the ever-evolving digital landscape is a complex task that warrants ongoing discussion and analysis.

Anastasiya Kazakova

Artificial Intelligence (AI) plays a crucial role in enhancing cybersecurity by improving threat detection and intelligence gathering. However, concerns have been raised regarding the autonomous nature of AI and its potential to make impactful decisions in everyday life. It is argued that AI should not operate solely autonomously, highlighting the importance of human oversight in guiding AI’s decision-making processes.

A major issue faced in the field of AI is the anticipation of conflicting AI regulations being established by major markets, including the EU, US, and China. This potential fragmentation in regulations raises concerns about the limitations and hindered benefits of AI. It is important to have uniform regulations that promote the widespread use and opportunities of AI for different communities.

The challenge of defining AI universally is another issue faced by legislators. With AI evolving rapidly, it becomes increasingly difficult to encompass all technological advancements within rigid legal frameworks. Instead, the focus should be on regulating the outcomes and expectations of AI, rather than the technology itself. This flexible and outcome-driven approach allows for adaptable regulations that keep up with the dynamic nature of AI development.

In the realm of cybersecurity, the question arises of whether organizations should have the right to “hack back” in response to attacks. Most governments and industries agree that organizations should not have this right, as it can lead to escalating cyber conflicts. Instead, it is recommended that law enforcement agencies with the appropriate mandate step in and investigate cyberattacks.

The challenges faced in cyberspace are becoming increasingly sophisticated, requiring both technical and policy solutions. Addressing cyber threats necessitates identifying the nature of the threat, whether it is cyber espionage, an Advanced Persistent Threat (APT), or a complex Distributed Denial of Service (DDoS) attack. Hence, integrated approaches involving both technical expertise and policy frameworks are essential to effectively combat cyber threats.

Ethical behavior is emphasized in the field of cybersecurity. It is crucial for good actors to abide by international and national laws, even in their reactions to unethical actions. Reacting unethically to protect oneself can compromise overall security and stability. Therefore, ethical guidelines and considerations must guide actions in the cybersecurity realm.

The solution to addressing cybersecurity concerns lies in creativity and enhanced cooperation. Developing new types of response strategies and increasing collaboration between communities, vendors, and governments are vital. While international and national laws provide a foundation, innovative approaches and thinking must be utilized to develop effective responses to emerging cyber threats.

Regulations play an important role in addressing AI challenges, but they are not the sole solution. The industry can also make significant strides in enhancing AI ethics, governance, and transparency without solely relying on policymakers and regulators. Therefore, a balanced approach that combines effective regulations with industry initiatives is necessary.

Increased transparency in software and AI-based solution composition is supported. The initiative of a “software bill of materials” is seen as a positive step towards understanding the composition of software, similar to knowing the ingredients of a cake. Documenting data sources, collection methods, and processing techniques promotes responsible consumption and production.
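
To extend the cake analogy, a software bill of materials is essentially a machine-readable ingredients list. The sketch below builds a minimal SBOM-like document in the spirit of the CycloneDX format; the data-provenance properties at the end are a hypothetical extension illustrating the point about documenting data sources and processing, not an established standard.

```python
# Minimal, illustrative SBOM in a CycloneDX-like shape.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        # The "ingredients": third-party components the product ships with
        {"type": "library", "name": "numpy", "version": "1.26.4",
         "purl": "pkg:pypi/numpy@1.26.4"},
    ],
    # Hypothetical AI-provenance fields echoing the idea of documenting
    # data sources, collection methods, and processing techniques
    "properties": [
        {"name": "training-data-source", "value": "opt-in threat telemetry"},
        {"name": "data-processing", "value": "PII stripped, pseudonymized"},
    ],
}
print(json.dumps(sbom, indent=2))
```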

In conclusion, AI has a significant impact on cybersecurity, but it should not operate exclusively autonomously. Addressing challenges such as conflicting regulations, defining AI, the right to “hack back,” and increasing sophistication of cyber threats requires a multidimensional approach that encompasses technical expertise, policy frameworks, ethical considerations, creativity, and enhanced cooperation. Effective regulations, industry initiatives, and transparency in software composition all contribute to a more secure and stable cyberspace.

Noushin Shabab

Kaspersky, a leading cybersecurity company, has harnessed the power of artificial intelligence (AI) and machine learning to strengthen cybersecurity. They have integrated machine learning techniques into their products for many years, resulting in significant improvements.

Transparency is paramount when using AI in cybersecurity, according to Kaspersky. To achieve this, they have implemented a global transparency initiative and established transparency centers in various countries. These centers allow stakeholders and customers to access and review their product code, fostering trust and collaboration in the cybersecurity field.

While AI and machine learning have proven effective in cybersecurity, it is crucial to protect these systems from misuse. Attackers can manipulate machine learning outcomes, posing a significant threat. Safeguards and security measures must be implemented to ensure the integrity of AI and machine learning systems.

Kaspersky believes that effective cybersecurity requires a balance between AI and human control. While machine learning algorithms are adept at analyzing complex malware, human involvement is essential for informed decision-making and responding to evolving threats. Kaspersky combines human control with machine learning to ensure comprehensive cybersecurity practices.

Respecting user privacy is another vital consideration when incorporating AI in cybersecurity. Kaspersky has implemented measures such as pseudonymization, anonymization, data minimization, and personal identifier removal to protect user privacy. By prioritizing user privacy, Kaspersky provides secure and trustworthy solutions.
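
The sketch below illustrates, in simplified form, what two of these measures can look like in practice: pseudonymizing a direct identifier with a salted one-way hash, and minimizing URLs by dropping query strings and fragments, which often carry personal data. It is a generic illustration of the named techniques, not Kaspersky's actual implementation.

```python
# Illustrative pseudonymization and data minimization for telemetry records.
import hashlib
from urllib.parse import urlsplit, urlunsplit

SALT = b"rotate-me-regularly"  # hypothetical pseudonymization salt

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:12]

def minimize_url(url: str) -> str:
    """Keep only scheme, host, and path; query strings and fragments
    frequently contain tokens, emails, or session identifiers."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

record = {
    "user": pseudonymize("alice@example.com"),
    "url": minimize_url("https://shop.example/cart?email=alice@example.com"),
}
print(record)  # no direct identifiers left in the stored record
```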

Collaboration and open dialogue are emphasized by Kaspersky in the AI-enabled cybersecurity domain. They advocate for collective efforts and knowledge exchange to combat cyber threats effectively. Open dialogue promotes the sharing of insights and ideas, leading to stronger cybersecurity practices.

It is crucial to be aware of the potential misuse of AI by malicious actors. AI can facilitate more convincing social engineering attacks, like spear-phishing, which can deceive even vigilant users. However, Kaspersky highlights that advanced security solutions, incorporating machine learning, can identify and mitigate such attacks.
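
As a toy example of how machine learning can help flag such social engineering attempts, the sketch below trains a tiny bag-of-words classifier on a handful of invented messages. Production systems combine far richer signals (sender reputation, URLs, headers) and vastly larger datasets; this only shows the general shape of the approach.

```python
# Minimal text classifier for phishing-like messages (synthetic data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Your invoice is attached, please review",          # legitimate
    "Team lunch moved to 1pm tomorrow",                 # legitimate
    "URGENT: verify your password now or lose access",  # phishing
    "Your account is locked, click here to confirm",    # phishing
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = phishing

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(messages, labels)

# New message sharing vocabulary with the phishing examples
print(clf.predict(["Please confirm your password immediately"]))  # likely [1]
```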

User awareness and education are essential to counter AI-enabled cyber threats. Kaspersky underscores the importance of educating users to understand and effectively respond to these threats. Combining advanced security solutions with user education is a recommended approach to tackle AI-enabled cyber threats.

In conclusion, Kaspersky’s approach to AI-enabled cybersecurity encompasses leveraging machine learning, maintaining transparency, safeguarding systems, respecting user privacy, and promoting collaboration and user education. By adhering to these principles, Kaspersky aims to enhance cybersecurity practices and protect users from evolving threats.

Dennis Kenji Kipker

The discussions revolve around the integration of artificial intelligence (AI) and cybersecurity. AI has already been used in the field of cybersecurity for automated anomaly detection in networks and to improve overall cybersecurity measures. The argument is made that AI and cybersecurity have been interconnected for a long time, even before the emergence of use cases like generative AI.

It is argued that special AI regulation specifically for cybersecurity is not necessary. European lawmakers are mentioned as leaders in cybersecurity legislation, using the term “state-of-the-art of technology” to define the compliance requirements for private companies and public institutions. It is mentioned that attacks using AI can be covered by existing national cyber criminal legislation, without the need for explicit AI-specific regulation. Furthermore, it is highlighted that the development and security of AI is already addressed in legislation such as the European AI Act.

The need for clear differentiation in the regulation of AI and cybersecurity is emphasized. Different scenarios need different approaches, distinguishing between cases where AI is one of several technical means and cases where AI-specific risks need to be regulated.

The privacy risks associated with AI development are also acknowledged. High-impact privacy risks can arise during the development process and need to be carefully considered and addressed.

The struggles in implementing privacy laws and detecting violations are mentioned. It is suggested that more efforts are needed to effectively enforce privacy laws and detect violations in order to protect individuals’ privacy.

While regulation of AI is deemed necessary, it is also suggested that it should not unnecessarily delay or hinder other necessary regulations. The European AI Act, with its risk classes, is mentioned as a good first approach to AI regulation.

The importance of cooperation between the state and industry actors is emphasized. AI is mainly developed by a few big tech players from the US, and there is a need for closer collaboration between the state and industry actors for improved governance and oversight of AI.

It is argued that self-regulation by industries alone is not enough. Establishing a system of transparency on a permanent legal basis is seen as necessary to ensure ethical and responsible AI development and deployment.

Additional resources and stronger supervision of AI are deemed necessary. Authorities responsible for the supervision of AI should be equipped with more financial and personnel resources to effectively monitor and regulate AI activities.

The need for human control in AI-related decision-making is emphasized. Official decisions or decisions made by private companies that can have a negative impact on individuals should not be solely based on AI but should involve human oversight and control.

Safety in AI development is considered paramount. It is emphasized that secure development practices are crucial to ensure the safety and reliability of AI solutions.

Lastly, it is acknowledged that while regulation plays a vital role, it alone cannot completely eliminate all the problems associated with AI. There is a need for a comprehensive approach that combines effective regulation, cooperation, resources, and human control to address the challenges and maximize the benefits of AI technology.

Jochen Michels

During the session, all the speakers were in agreement that the six ethical principles of AI use in cybersecurity are equally important. This consensus among the speakers highlights their shared understanding of the significance of each principle in ensuring ethical practices in the field.

Furthermore, the attendees of the session also recognized the importance of all six principles. The fact that these principles were mentioned by multiple participants indicates their collective acknowledgement of the principles’ value. This shared significance emphasizes the need to consider all six principles when addressing the ethical challenges posed by AI in cybersecurity.

However, while acknowledging the equal importance of the principles, there is consensus among the participants that further multi-stakeholder discussion is necessary. This discussion should involve a comprehensive range of stakeholders, including industry representatives, academics, and political authorities. By involving all these parties, it becomes possible to ensure a holistic and inclusive approach to addressing the ethical implications of AI use in cybersecurity.

The need for this multi-stakeholder discussion becomes evident through the variety of principles mentioned in a poll conducted during the session. The diverse range of principles brought up by the attendees emphasizes the importance of engaging all involved parties to ensure comprehensive coverage of ethical considerations.

In conclusion, the session affirmed that all six ethical principles of AI use in cybersecurity are of equal importance. However, it also highlighted the necessity for further multi-stakeholder discussion to ensure comprehensive coverage and engagement of all stakeholders. This discussion should involve representatives from industry, academia, and politics to effectively address the ethical challenges posed by AI in cybersecurity. The session underscored the significance of partnerships and cooperation in tackling these challenges on a broader scale.

Moderator

The panel discussion on the ethical principles of AI in cybersecurity brought together experts from various backgrounds. Panelists included Professor Dennis-Kenji Kipker, an expert in cybersecurity law from Germany; Professor Amal El Fallah Seghrouchni, Executive President of the AI Movement at the Moroccan International Center for Artificial Intelligence; Ms. Noushin Shabab, a Senior Security Researcher from Kaspersky in Australia; and Ms. Anastasiya Kazakova, a Cyber Diplomacy Knowledge Fellow from DiploFoundation in Serbia.

The panelists discussed the potential of AI to enhance cybersecurity but stressed the need for a dialogue on ethical principles. AI can automate common tasks and help identify threats in cybersecurity. Kaspersky detects 325,000 new malicious files daily and recognizes the role AI can play in transforming cybersecurity methods. However, AI systems in cybersecurity are themselves vulnerable to attack and misuse: adversarial techniques can be turned against AI systems, and AI can be misused to create fake videos and AI-powered malware.

Transparency, safety, human control, privacy, and defense against cyber attacks were identified as key ethical principles in AI cybersecurity. The panelists emphasized the importance of transparency in understanding the technology being used and protecting user data. They also highlighted the need for human control in decision-making processes, as decisions impacting individuals cannot solely rely on AI algorithms.

The panelists and online audience agreed on the equal importance of these ethical principles and called for further discussions on their implementation. The moderator supported multi-stakeholder discussions and stressed the involvement of various sectors, including industry, research, academia, politics, and civil society, for a comprehensive and inclusive approach.

Plans are underway to develop an impulse paper outlining ethical principles for the use of AI in cybersecurity. This paper will reflect the discussion outcomes and be shared with the IGF community. Feedback from stakeholders will be gathered to further refine the principles. Kaspersky will also use the paper to develop their own ethical principles.

In summary, the panel discussion highlighted the ethical considerations of AI in cybersecurity. Transparency, safety, human control, privacy, and defense against cyber attacks were identified as crucial principles. The ongoing multi-stakeholder discussions and the development of an impulse paper aim to provide guidelines for different sectors and promote an ethical approach to AI in cybersecurity.

Session transcript

Moderator:
the meeting to order. Let me maybe just start by introducing all the speakers from the panel that we have today. We’ll start with my left. We have Professor Dennis Kenji Kipker from the University of Bremen. He’s an expert in cybersecurity law from Germany. And I have on my right Professor Amal, who is Executive President for the AI Movement, the Moroccan International Center for Artificial Intelligence, Morocco. And then on my far left would be Ms. Nushin, who is Senior Security Researcher, Global Research and Analysis Team from Kaspersky in Australia. And of course, on my far right, last but definitely not the least, Ms. Anastasia Kazakova, Cyber Diplomacy Knowledge Fellow from Diplo Foundation, flown in from Serbia. And myself, I am Genie Sugene Gan, Head of Government Affairs and Public Policy for Asia Pacific, Japan, Middle East, Turkey, and Africa regions from Kaspersky. Well, today’s workshop is titled Ethical Principles for the Use of AI in Cybersecurity. And of course, by way of a background and setting of the context, we basically are currently witnessing a rapid development of AI around the world for some time now. And it really has the potential to bring many benefits to the world as we have all probably experienced on a day to day basis, including enhancing the level of cybersecurity. AI algorithms help with rapid identification and response to security threats, and automate and enhance the accuracy of threat detection, for instance. And this is something that we experience in Kaspersky because we are a cybersecurity company. But of course, numerous general ethical principles and foundations for AI have already been developed by various stakeholders; for example, in 2021, UNESCO adopted the recommendations on the ethics of AI. However, the growing use of AI and machine learning components in cybersecurity makes ever more urgent the need for ethical principles of AI development, distribution, and utilization in this domain. Due to the particular opportunities, but also risks of AI in cybersecurity, there is a need for a broad dialogue on such specific ethical principles, and we felt today is a good opportunity for us to discuss that. And also for this reason, we at Kaspersky actually have developed initial ideas regarding aspects that should be taken into account there. And of course, these will be discussed in today’s workshop. So just to sort of run you through the structure of the workshop and what we plan to do in terms of our agenda today, we’re gonna start in a moment to run some survey with our audience today, including those who have dialed in online with two poll questions, which I’ll ask my colleague, Jochen, to pull out in a moment, followed by, you know, our speakers being asked the first round of questions, and then we’ll take some questions from the floor as well. And before we end the session today, so I promise, you know, our panel of speakers are really experts in their respective domains, and put together, we’re gonna expect some very good discussions. So without further ado, let me just invite Jochen, who is joining us online, and we should be able to see him to run the first online poll question. Yes, Jochen, we see you. Thank you.

Jochen Michels:
Yeah, I spotted the poll.

Moderator:
Yes, and we can hear you too. Very good. So the first question, Jochen will put up, is, in your opinion, is the use of AI in cybersecurity more likely to strengthen or weaken the level of protection? In your opinion, is the use of AI in cybersecurity more likely to strengthen or weaken the level of protection? Of course, we have got options for people who are participating in the poll. Of course, the first option is that it will strengthen protection. Second, it will weaken protection. And the third one is, in the name of democracy, we allow you to say you don’t know. So let’s just give this a moment and I will wait for Jochen to… Ah, looking good. Okay, in your opinion, is the use of AI in cybersecurity more likely to strengthen or weaken the level of protection? I think we have got 62% who have said that it will strengthen protection. Let me just write this down. 20% say that it will not, it will in fact weaken. And 20% have exercised their right to say that they don’t know. That’s good. And I think this is something that we will flesh out in a little bit with the presentations from our speakers. I would also want to just invite Jochen to put up the second poll question. I think we are ready to close this poll. Let’s pull out the second poll question. We only have two to start off before we get into the panel discussion. So let’s call up the second poll question. The second poll question. of the technique, but I will do so soon. Yeah, I will do so. The second poll question is what should prevail? Yeah, we see it. Thank you. The second question is what should prevail in AI regulation specifically for cybersecurity? Of course, the answers include, number one, it should be regulated as heavily as generative AI. Second, there is no need for regulation. Voluntary adherence is best. Ethical principles would do just good. And of course, the third option would be existing cybersecurity regulation needs to be updated to account for AI technologies. I’m not sure if the poll is working well with the online audience. Let’s hear from Jochen. It’s working, yes. Fantastic, thank you. I will wait some further seconds and then I will end the poll. Thank you. Okay. Interesting, interesting. What should prevail in AI regulations specifically for cybersecurity? Only a single choice was allowed and I think we’ve got 38% of our audience saying that it should be regulated as heavily as generative AI. Nobody selected, no need for regulation. So I think we have, well, at least some agreement there. And 63% are saying the existing cybersecurity regulation needs to be updated. That’s interesting. Let’s just park that aside for a while. I think, thank you, Jochen. We’ll have you back with us later on in today’s session. We can close the poll. Thank you, Jochen. Now, I think I’m going to be opening up some questions later on to our panelists, but I would first call on Nushin to perhaps, she’s got some slides for us also. Some slides, yeah. And I’ll just invite Nushin to please deliver some short remarks, her impulse speech on opportunities and risks of AI and cybersecurity and what ethical principles she feels should be developed to promote the opportunities and mitigate the risks. Nushin, please.

Noushin Shabab:
Okay, thanks, Jenny. I’m not sure if the slides, okay, great. So as my colleague perfectly stated and most of the audience agree, AI and in particular machine learning has actually helped to strengthen cybersecurity in a lot of ways. We have been using machine learning techniques in our products at Kaspersky for a long time. So it’s not something new for us, but as we have always had this concern about the ethical principles of using AI and machine learning in cybersecurity, we thought to use this opportunity to share a little bit about some of the basic principles that we believe that are important in, sorry, in the use of AI in cybersecurity. And we want to have a discussion today and yeah, maybe develop these principles further. Let me start with the first principle. So the first one is transparency. We believe that it’s important and it is the user’s rights to know if a cybersecurity solution has been using AI and machine learning and the companies, the service providers need to be transparent about the use of AI. We have a global transparency initiative. And as part of this initiative, we have transparency centers in different countries in the world. And the number is actually growing. We are opening more centers and in these centers, stakeholders and customers, enterprises, they can go and inspect and visit the centers and look at the code of our products, including how AI and machine learning has been used in our products. So we commit to being transparent and making sure that users know and consent to their data and their contribution to our network is transparent. And they are aware of that machine learning techniques are used in the products. Number two, safety. So when it comes to the use of AI and machine learning in real world, there are actually a lot of ways that these systems can be misused by malicious actors to make them make mistakes deliberately. So there are various techniques that the attackers can use to try to manipulate the outcome of machine learning systems and algorithms. That’s why we believe that having safety of the AI and machine learning systems in mind is very important. And towards this principle, we have a lot of security measures in place, like auditing our systems with machine learning, reducing the use of third party data sets for the training for machine learning systems. and also a lot of other techniques, such as making sure that we favor the cloud-based machine learning algorithms to the ones that are actually stored and deployed on the user system. Number three, human control. So we all agree that AI can help a lot in a lot of areas in cybersecurity. For example, in improving detection of malicious behavior, in anomaly analysis, and so on. But when it comes to sophisticated malwares, especially with advanced persistent threats, it’s very important to understand that these type of malwares, they mutate, they adopt different techniques, encryption, obfuscation, and so on, to actually bypass machine learning and AI systems. Because of this, we always have human control over our machine learning systems. And we believe that it’s important to have an expert that has good knowledge and understanding, and is backed by a big data set, big data of cyber threats, to supervise the outcome of machine learning algorithms. That’s how human control has been always there for the systems that we use machine learning for. Number four, privacy. 
When we talk about big data, and data from cyber threats, it always comes with some sort of information that can be considered as personal identifier data. So we believe that it’s users’ right to have privacy on their personal data. That’s why we have a lot of measures to make sure that the privacy of users are considered when it comes to machine learning algorithm, and the data that is used to train these algorithms. By many ways, like pseudonymizing, anonymizing, reducing the data collection from users, removing personal identifiable information from URLs, or other data that comes from user systems. Number five, develop for cybersecurity. So as our mission to create a safe world, we are committed to only use and provide services that work in defense. So along with this principle, we have the services that use machine learning and AI developed only for defensive practices. And we encourage other companies to join us in this principle too. Last but not least, that’s actually why we are here, and we have this discussion here. We are open for dialogue. We believe that it’s only through collaboration between various parties, and between everyone in the industry, and in government sector, that we can truly achieve the best result, the best protection for users and user data against cyber attacks and cyber threats. So that was it. Thank you.

Moderator:
Thank you very much, Noushin. I think that sort of, I hope, sets the stage and sets the tone for today’s discussion. For those who’ve just joined us, we are focusing our workshop today on discussing the ethical principles for the use of AI in cybersecurity. And also, I think I just want to take this time to hear from a more technical, scientific perspective from Amal: how can AI or machine learning techniques contribute to cybersecurity, which issues can emerge while using AI techniques for cybersecurity, and how can we solve these issues? I think you also have some slides, if we can put up some slides. Yes, we see them.

Amal El Fallah Seghrouchni:
Hello, everybody. I am very happy to talk about AI in cybersecurity. And I think that there is a need of regulation, as most people voted earlier. So, my presentation will be very short, even if there are a lot of points. But mainly, I would like to emphasize where AI can be used in cybersecurity, because the ethical problems come from the way we will use AI in cybersecurity. So, the context is that, as you all know, cybersecurity is a very huge problem for all software around. And in this presentation, as Genie said, I will address some points related to how AI is included in cybersecurity systems. So, as you know, Kaspersky detects like 325,000 new malicious files every day, and this comes from a FireEye report in 2017, so I think today there are much more. The problem with classical methods for cyber security is that there are slow detections and also slow neutralizations. And what we expect from AI is to enhance and transform cyber security methods by providing predictive intelligence and a long life cycle of the software. So the role of AI more specifically in cyber security is twofold. The first thing is that AI can automate common cyber security tasks like vulnerability management, threat detections, et cetera. And also thanks to AI, we can identify threats in large data sets that have not been analyzed manually. So as you can see, cyber security and AI is a national security priority by the NSF, NSTC and NASA today. So what I want to present is that there are two kinds of AI. The first boxes in the left represent what we call a blue AI. And in the right, you have the red AI. The blue AI presents some opportunities for cyber security. For example, AI will help to create smart cyber security. For example, effective security controls, automatic vulnerability discovery, et cetera. And also in the fourth point, by using AI, you can fight cyber criminals. For example, for fraud detections, analysis, intelligence encryption, fight against fake news, et cetera. And this is the good news for using AI in cyber security. But as you know, these techniques or these AI systems are also vulnerable and raise a lot of challenges like robustness, vulnerability of algorithms, of AI algorithms, and also some misuses of AI. For example, by creating fake videos, AI-powered malware, smarter social engineering attacks, et cetera. So AI for cyber security, I will go very fast, don’t worry, AI in the domain of cyber security will help in all these steps. And this is the NIST CSF framework: how to identify, understand your assets and resources; protect by developing and implementing appropriate protection measures; detect by identifying the occurrence of a cyber security event; respond by taking action if a cyber security event is detected; and finally restore, activities that aim to maintain resilience plans. So this is the lifecycle of cyber security, defensive cyber security, and AI can be used at all the stages of this lifecycle. So I can say that the ethical issues of using AI in cyber security can be studied through these five steps. For example, when you identify your assets, you should be sure that your resources are resilient, are not vulnerable; likewise to protect and detect, et cetera. So how do we implement all this by using AI techniques? I will not detail all the phases. But for example, in the identification step, we will use some tasks.
If I address some tasks of cybersecurity like fuzzing, pen testing, et cetera, the techniques of AI that I will be able to use, and they are used in practice today, are deep learning, reinforcement learning, deep learning and reinforcement learning for classification of bugs, and also some NLP and methods of machine learning. This means that all the problems that come with AI techniques will be found again in dealing with cybersecurity. So this is only the one step of identification, and we can deploy, I don’t have time, this is why I cut, but we can do the same for all these phases in cybersecurity. So now we can also use techniques from cybersecurity to secure AI systems or to make them more robust. And this is a challenge of real AI: robustness, vulnerability of AI algorithms. For example, there are very well known adversarial machine learning techniques that can be used to secure or to attack AI systems and algorithms. Also, this is why I say that adversarial AI attacks AI systems. AI cannot be made unconditionally safe, like any other technology. So we have to take care that our AI system used in cybersecurity will not be attacked by malicious attacks or something else. This is a very famous example in computer vision. If you look at the pictures, they are similar, but the AI system will detect different things. It’s just a question of changing one pixel sometimes in a picture, and you can have a different output. For example, in the right one, you can see a car, this is correct, but in the left one, you can see an ostrich. The system will recognize it, but people cannot, I mean, a human being cannot see the difference, but the machine learning algorithm will make the difference. Okay, so the last thing is misuse of AI, for example, by creating fake videos, they are very famous today, and AI-powered malware, smarter social engineering attacks, and so on. And I will end with this. So we know today that AI can create new kinds of cyber attacks for phishing, cyber extortion, automated interactive attacks, etc. For example, using generative AI in cyber extortion is something very common today. So the need of regulation is crucial, I mean, it’s very important. We inherit all the problems, the issues coming from software, but we have also some very specific problems for the cyber security domain. And AI will bring major ethical and regulatory challenges also in cyber security. So my conclusion is that we need ethical and regulatory consideration for cybersecurity systems. Delegation of control: we have to find a consensus between total human control and total autonomy of AI systems. Delegation of control will be granted with this sole objective and not towards total autonomy of the AI in cybersecurity. And cybersecurity actors are still looking for an adequate legal basis to conduct their daily testing practices for privacy and data governance, for example, in cybersecurity. Thank you for your attention.

Moderator:
Thank you very much. That was wonderful. And I’m already, I’m madly taking notes because I’m gonna have to synthesize all of this. But before I do so and really do a full-on panel discussion with perhaps some questions from the floor, I’d like to just pass the time over to Anastasia who will be talking with us about some of the current trends and reflections on AI policies in particular in the field of cybersecurity. And maybe an impulse statement by Anastasia on the chances and risks and the value of ethical principles.

Anastasiya Kazakova:
Thank you very much. It’s a pleasure to be here. I represent a civil society organization. I work on policy; before, I worked in the private sector as a cybersecurity expert, also focused on policy. And in my current work, we do also discuss with multi-stakeholders how the cyber norms for responsible behavior could be implemented. And while we are not solely focused on AI, we largely focus on the norms for responsible behavior in the context of international security and peace, in the context of overall cyber stability. AI policy is definitely getting more and more attention. So thanks so much to the previous speakers. I think we’ve seen indeed that AI already entered the world of cybersecurity, I think, quite many years ago; it helps to enhance detection and helps to collect intelligence for better analysis of the cyber threats. And I think many, if not all cybersecurity companies these days, especially advanced companies, do apply to some extent AI in the methods, how they deal with the threats and what the intelligence they produce for the customers. The big question, though: how does it work? What kind of data do the companies use for this? How does the AI actually work, which is still quite unknown for many, even those who develop AI, in terms of the mysterious black box: what happens there? If it makes decisions, how did it make a particular decision? So all these are really important questions; I think they are some of the key fundamental challenges, not only on the minds of the policymakers, but on the minds of the users, and also on the minds of those who develop AI and AI-based solutions. And in this regard, yeah, human control and retaining human control, as I think all the speakers have said already, is really fundamental. AI should not be autonomous, because we cannot allow something that we don’t know exactly or completely to somehow make such a big impact on our human life. And we see that humans are afraid of this, right? But even though the policymakers already have started talking and discussing how to make AI more predictable, transparent, ethical, the question is still: okay, we retain the human control, we give back the control to humans, those who develop, the academics who would like to see what the algorithms are, but the question is still quite challenging. How will this control be split up between actors, and which actors would be sort of on the table? Who would, in the end, retain the biggest control among humans? Would it be the developers of the AI, or the policymakers, or the academia? How to ensure that actually the data that has been collected on a massive scale for the AI is not monopolized by one actor or just a few actors in the market? How to make sure that academia, again, civil society, has access to analyze what kind of data, and under which policies, which processes, it’s been used, and that data protection and security risks are properly mitigated? So these are really, I think, open questions. They are really difficult questions. They are very contextual questions. If we speak about the AI in terms of the impacts for society, for the economy, for security, for international security, all of these questions will be, I think, decided in the particular context, and it’s really important, and it’s really challenging, therefore. One of the other challenges, I think, in all the emerging policies or even the regulations to make it more transparent, more ethical is, of course, to define AI. There’s, I think, no universal definition so far of what AI is.
And the policymakers, I think, have really struggled to carefully scope future laws to pin down what AI exactly entails in a particular context. So one of the aspects that’s really important for policymakers and for legislators is to make sure that the laws focus on the outcomes and expectations, but not the technology itself. It will actually help to make these laws more future-proof, and it will focus on what actually concerns people. People, I think, users, we as just ordinary users, we don’t want to know how the code is written for that. We do want to know how this code will impact our lives, how this will impact our security, how this will impact our jobs, or the community or society, on the broader scale. The other aspect that I also wanted to mention is that, even though currently we kind of, I think, frame lots of policies or regulations narrowly in the field of cybersecurity, in terms of AI and cybersecurity, here I would actually agree with the audience that participated in the poll. I think most of the people said that it’s rather the existing cybersecurity regulation that needs to be strengthened, rather than new regulation on AI in cybersecurity that needs to be developed. And I agree with this. I think that it’s really important to look broadly, on a more horizontal level: AI is one more technology, one more piece of code in the end, even though it’s a really complicated, fascinating, and difficult piece of code. But still, what it actually produces, which impacts it produces for different stakeholders. And in this regard, there are already emerging and existing laws to regulate the security of data in particular contexts, to regulate the security of the critical infrastructure, and so on. And AI, I believe, complicates the picture, but doesn’t require a new approach from scratch. So yes, it complicates the current picture a lot, and it requires innovative discussions, probably, but still we need to look, again, at the impacts, at what the technology gives to us. I also wanted to say that we do see the emergent discussions in terms of the impacts on international security and peace, likely within the UN and within the regional fora, but still they’re not as extensive as they should be. The problem is that still, I think, the international community and those who are engaged in these discussions, including diplomats, still lack substantial evidence of how many advanced AI tools, if they exist, can be used for both defensive and offensive purposes. Still, the knowledge is very limited. There’s a lot of secrecy about this. It’s knowledge that is not accessible to the broad public, or even to a limited group of academics, unfortunately. So there are, at the same time, growing interests and calls from the international community to produce sort of the rules of the road, how to regulate AI in terms of cybersecurity, especially where AI can be used in the military context, on the battlefield. And I think it’s really important, and hopefully we will see more dynamics. But so far, again, I highlight: to have these discussions more evidence-based and more substantive, we need to understand what kinds of tools are already out there and to increase the transparency in terms of the different types of actors that are involved in cyber activity.
And I would probably conclude on this question by saying that, overall, speaking of the regulations, I think it’s already evident that the large markets such as the EU, US, China, and other countries will probably pass conflicting regulations concerning AI quite soon. I think we heard yesterday from the US diplomat that the US is preparing an executive order on artificial intelligence soon. And the G7 leaders, they also have committed to establish a set of guiding rules for AI. So, we see the appetite, we see the appetite to actually split, to define the roles. Who will have the ultimate power to define the impacts of the AI in the future? Will it be the governments? Which governments? Will it be the vendors, the companies? And how to make sure that it’s not just the one or a few companies? The problem is that if it happens, and if more fragmentation happens in this field, as it happens overall in cybersecurity and in cyberspace, unfortunately, it will leave, I think, fewer opportunities for different communities to truly benefit from learning what AI could bring to us as the international community, as a society. There are still beliefs, I think, and hopes that vendors or organizations or companies could take a lead and organize sort of a consortium, and make a voluntary, self-regulation approach to be more transparent. And what we just heard from Kaspersky, I think, is a good initiative. We hear more and more initiatives, especially from companies extensively involved in AI, to be more active and say what kind of data they use and how they process this data. And I think there’s still a hope, an optimistic hope, that if this conversation continues, a bottom-up approach would lead. And in this regard, there will be more opportunities to avoid the risk of conflicting laws, of fragmentation in this field, and probably to make sure that the access to this technology, to the research, to the discussion, will still be much broader than just within the borders of one particular country or a few countries. But I think those are still open questions. There are many open questions, and, to some extent, all the emergent policies try to address this; in terms of the result, what conclusions will come, I think that’s an open question. So let’s see how optimistic or pessimistic humans will be in solving this. Thank you.

Moderator:
Thank you, Nastia. I would just want to finish off this, you know, preliminary round of remarks with inviting Dennis to sort of speak about, you know, can AI be legally regulated at all, given the current political and technical difficulties with the AI Act in Europe, for example, and aren’t we destroying innovation through legal, through over-regulation? So maybe I’ll just hand the time over to Dennis.

Dennis Kenji Kipker:
Yeah, thank you very much, Genie, and thank you for the possibility to speak here today. As a professor for cybersecurity law, I definitely have a legal perspective on the whole topic and on regulating AI. We definitely need to draw a clear line, as it was already noted by the previous speakers, because we are not talking just about general AI regulation here, but about a very specific use case. And that means an example, just a piece, a slice of a broad use case scenario. And in my opinion, AI and cybersecurity are two topics that have already come together a long time before use cases like generative AI became public in recent months. For example, AI is used with regard to cybersecurity in automated anomaly detection in networks. And I already wrote some publications about that six years ago. And this, of course, begs the question regarding this very specific use case: do we need a special AI regulation for cybersecurity in the future? And my answer with regard to that is quite clear. I would say no. This might be interesting, but to justify this, in my opinion, we need to differentiate again, because there are three different use case scenarios that we will have to talk about and that we’ll have to take a closer look onto. So the first one is: AI is used to improve cybersecurity. The second one: AI is used to compromise cybersecurity. And the third one: AI in general is being developed. And the first two scenarios are, from a legal perspective, quite easy to answer. So when AI is used to improve cybersecurity, it is technically one of several possible measures that can improve cyber resilience. For example, European lawmakers, who in my opinion currently lead the world in cybersecurity legislation, for example with the new Network and Information Security Directive that became effective in the beginning of this year, or, for example, also a draft version of the Cyber Resilience Act. We have a lot of upcoming cybersecurity regulation, and the point is, regarding this cybersecurity-specific legislation, the European lawmakers have so far avoided exclusively naming specific technologies to realize an appropriate level of cybersecurity and have instead used the general term state-of-the-art of technology, which is a general guideline in many legal regulations of technology, such as cybersecurity as well. So it means, for example, private companies, public institutions that implement cybersecurity have to fulfill the state-of-the-art of technology to be compliant with the legal rules. And this, in my opinion, as a lawyer, is very fitting, because a law will never be able to conclusively map all the technologies that will be developed in the future that are needed, especially here for cybersecurity, in a casuistic sense, due to all the rapid technological development that we have. And we have very fast development cycles, not only currently, but also in the future. And this is also widely accepted as opinion by the scientific community. The second use case scenario that I would like to mention, so that means when cyber attackers use AI to compromise IT systems, this is also not a specific AI cybersecurity scenario, because again, as with defending against cyber attacks, attackers may well use different technologies to successfully attack IT systems as well. And these are typically criminal offenses. And in many countries, in various countries all over the world, we have also cyber criminal law. And these criminal offenses in the national cyber criminal legislations are being interpreted.
And as a part of this legal interpretation, They already cover the use of AI as a technical means of attack without the need for explicit regulation. And now we come to the third point of this very short statement. The third aspect is not directly related to cyber security, but to the development of AI. We already heard some statements about development of AI, of how keeping AI secure when it is being developed. And of course, this is an important question that we also have to address from a legal perspective. But this development issue of AI cannot be considered a cyber security specific issue. So it requires a focus. And of course, it must be ensured, for example, as Amal mentioned, that AI systems are not themselves compromised at this very important stage. And that’s something that we’ve talked about in several panels during this conference. And this is also what the European AI Act, as a regulation that has also been mentioned already for several times, for example, seeks to achieve. When it explicitly in its draft version, that was made public last year, stipulates that AI itself must be cyber secure. Therefore, developers of AI must provide safeguards to prevent, for example, manipulation of training data sets or to prevent hostile input to manipulate an AI’s response. And this is also something I guess Amal mentioned. But this, in my opinion, is just one facet of secure and safe AI development and not really a use case for implementation of AI and cyber security. So to come to a conclusion as a result, in my opinion, the regulation… of AI and cyber security must clearly differentiate between scenarios in which AI is only one of several possible technical means and the regulation of AI specific risks themselves. I think this is an important point which has to be taken into the policy debate and into the future legal debate as well. Thank you.
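For readers who want a concrete picture of the automated anomaly detection Kipker mentions above, here is a minimal sketch, assuming scikit-learn's IsolationForest and entirely invented traffic features; it illustrates the technique, not any particular product:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy example: model "normal" network flows by three invented features
# (bytes sent, duration in seconds, distinct ports contacted), then
# flag a flow that deviates strongly from that baseline.
rng = np.random.default_rng(0)
normal_flows = rng.normal(loc=[500.0, 2.0, 3.0],
                          scale=[100.0, 0.5, 1.0],
                          size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

suspicious_flow = [[50_000.0, 30.0, 1.0]]  # hypothetical exfiltration-like flow
print(detector.predict(suspicious_flow))   # -1 means "anomalous"
```

This also fits Kipker's legal point: a statute does not need to name IsolationForest or any other specific algorithm; such a detector is simply one state-of-the-art means among several.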

Moderator:
Thank you very much, Dennis, for that. So, I think what we have heard so far, beginning with the ethical principles put forward by Noushin on transparency, safety, human control, privacy, defensive cybersecurity, and being open for dialogue, has pretty much been agreed upon in various different ways. First of all, we heard from Amal about the framework, the five steps of the defensive cybersecurity lifecycle: identifying, protecting, detecting, responding, and restoring, which dovetails with various aspects of the ethical principles put forward by Noushin on safety, human control, privacy, and defensive cybersecurity. Then we heard from Nastia about elements of transparency as well as the multi-stakeholder cooperation perspective, among other things. And Dennis highlighted some of the limitations of regulation and the need for overlaying ethical principles. We'll talk a little more about all of these in a short while, but I wanted to take this time to open the floor to some possible questions, because otherwise I am going to ask a round of questions. I see that there are no… I'm just going to ask Jochen if there are any questions from the online participants. Otherwise, I would be quite ready to launch into my round of questions. Yes, there is a question there in the room. Can I just ask you to take the mic? You have to turn on the mic. Push the button up. Thank you.

Audience:
All right. Thank you for the presentation as well. So, a question from my side, although the ethical question is more of a philosophical approach, for sure. When I look at cybersecurity, the adversary is going to use adversarial AI, and they don't care about ethics. Now, for us to defend: detection might be where we can apply the ethical approaches, but when we are talking about response, especially about active cyber defence and engaging in responsive actions, applying ethical AI to counter an unethical adversarial AI might actually put us at a disadvantage. I would like to hear your approaches or your thoughts on this as well.

Moderator:
All right, maybe I’ll ask Nastia to take the question. Thank you for that question, first of all.

Anastasiya Kozakova:
That's a good question. I think this question already existed before AI, right, overall. If an organization is being attacked, does that organization have the right, and the possibility, to hack back? There have been long discussions about hack-backs: whether they are illegal, whether they are lawful, whether they could be legitimate in a particular situation. I think in most countries, governments and industry came to the conclusion that organizations probably shouldn't have this right. So law enforcement, which has the mandate by law, should step in, and if the organization asks for this help, law enforcement or other specialized agencies can investigate and then decide what to do, depending on what type of actor the organization is dealing with. If it is cyber espionage, an APT, it is of course a matter of international security and of the relations between two or more countries, and it gets really critical; but if it is some really advanced, complicated DDoS, or something similar enhanced with AI, right, whether the organization has this right, I think it would be really risky to go in that direction. But overall, as you said, it is really philosophical how we define ethics in this regard, and why we as good actors need to be ethical when there are a lot of bad actors that behave unethically. Again, I think it is a really risky conversation to have, because we need to define what our goal is. Our goal is to enhance security for all, some sort of optimal collective security, and to enhance stability. Whether us, as good actors, behaving unethically to protect, even to protect ourselves, serves security and stability in the end, I think not likely. So we still need to abide by international law, domestic law, national law, and the rules overall, to make sure that if there is a bad actor acting as a bad actor, we stay on the side where we understand the limits of our actions. But I don't want to conclude on a pessimistic note, so on a hopeful note: the challenges that we see in cyberspace are of course getting more and more sophisticated, and they are not purely technical, right? And this is what makes them really difficult. If it were technical, the technical people would solve it. The problem is much more nuanced, sometimes requiring policy solutions or international security solutions. So in this regard, I think we, as humans trying to protect ourselves, need to be even more creative. Yes, that is difficult, but we have to do so. Be creative in terms of focusing on what we have already had for centuries: international law and, again, national law; but also be creative about how new types of responses could be developed, how we could enhance cooperation between communities, between vendors who could share knowledge and the outputs of their research, or even governments, despite the current geopolitical situation; how we could increase our chances to develop those creative solutions to address threats that are getting more and more complicated for us. Again, that is difficult, but I think there is a lot of hope that it will develop more and more, because I think we all want, in the end, security for us all.

Moderator:
Thanks for that. I thought I'd also pay some attention to the questions from our online participants. There was one question from Yasuki Saito, and I'd like to ask Noushin to take this question. It says: what do you think of using an LLM or ChatGPT to deceive human users and cause their PCs to be infected by malware? Is there any good way to avoid such things? Noushin?

Noushin Shabab:
Okay. I guess we heard from Amal about this particular type of attack, advanced social engineering enabled by AI, and this is a perfect example of using an AI system to craft a more convincing social engineering conversation, or an email or a message that looks very benign and doesn't raise any suspicion. This is just one example of how AI can be misused by malicious actors. But I would say that an advanced and sophisticated security solution, obviously with machine learning techniques implemented in it, can also help to identify a spear-phishing email or even a social engineering attack. Apart from having an advanced solution to protect users against such attacks, though, I would say that talking about and raising awareness of such attacks matters, because I'm sure that attackers, especially with the use of AI, can bypass a lot of defences. It would be much easier for them to understand the target environment, what software and what security measures are in place there, and to figure out a way to bypass that. So I would say something to complement an advanced solution would be education, for everyday users and also for employees in organizations, to understand the risks and understand how AI can help in making a more convincing conversation or a more convincing spear-phishing email, and to make sure that users are aware and don't fall victim.
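To make Shabab's point concrete, here is a minimal sketch of the kind of machine-learning component a security solution might embed to score emails, assuming scikit-learn and four invented training samples (far too few for real use):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy phishing scorer: learn word patterns from labelled emails,
# then output a probability that a new message is phishing.
emails = [
    "Your invoice is attached, please review",           # benign
    "Team lunch moved to Friday at noon",                # benign
    "Urgent: verify your password at this link now",     # phishing
    "Your account is locked, confirm credentials here",  # phishing
]
labels = [0, 0, 1, 1]  # 1 = phishing

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

# Probability that the new message belongs to the phishing class.
print(clf.predict_proba(["Please confirm your password via this link"])[:, 1])
```

A real product would train on millions of labelled messages and combine such scores with many other signals; the sketch only shows the shape of the approach.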

Moderator:
Thanks for that. So, just taking stock of what we have so far, from the poll, from the survey results, and also from the discussion: first of all, what we're hearing is that AI in cybersecurity has obviously produced a lot of benefits, and we can't run away from the use of AI in cybersecurity. But second of all, it comes with costs, right? There are impacts, there are unintended consequences. Just now Amal brought up some statistics from Kaspersky from several years ago about the number of new malicious files detected on a daily basis, and thanks for bringing that up. I thought I could give an update on those statistics: as of today, Kaspersky uncovers more than 400,000 new unique malicious files every day. That's astounding. And when I talk about new unique malicious files, one malware sample that infects, let's say, 10,000 computers does not count as 10,000; it is counted as one if it's the same malware. So while all of us are sitting here in this room for an hour and a half for this workshop, we're essentially talking about roughly 25,000 new unique malicious files uncovered by a single company like Kaspersky. That's astounding. So there are costs and there are benefits to the use of AI in cybersecurity that we need to be concerned about. And that brings me to the third point, which is the reason for the discourse we're having today. We discussed what the role of laws and regulations is, right? Then we start thinking about not just regulation, but what exactly we are regulating and why. We also hear discussions about conflicting regulations, which are beginning to surface globally. So what that brings us to is that there are limitations to regulations. And as a lawyer, I'm saying this: anything that is legal may also not be ethical. So do we then take a step forward and start thinking about ethical principles beyond just legal frameworks? That is, I think, where we are today. And I think we have a question from the floor. Sir, can I just ask that you take the mic and introduce yourself and give us your question? Thank you.
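As a sanity check on that back-of-the-envelope figure: at the stated rate of 400,000 new unique files per day, a 90-minute workshop corresponds to

$$400{,}000 \times \frac{1.5}{24} = 25{,}000 \ \text{new unique malicious files.}$$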

Martin Boteman:
Hi there. This is Martin Boteman. I've been talking with the DCIoT today as well. One of the complexities that comes up when you talk about AI and cybersecurity, and I agree with what has been said, is that security will require identity. And I can see that, specifically with AI, this has a dual impact. One is that data, thanks to AI, becomes even more often personally identifiable than before. The other is that AI can also help secure, as has been pointed out, maybe also with the identity factor. So how do you deal with the dichotomy between the need for identity going forward, which there is no way around, and, at the same time, privacy? This is part of your legal considerations, of course, as well as the ethical ones. Thank you.

Moderator:
OK. All right. I will leave Dennis to take this question.

Dennis Kenji Kipker:
Yeah, of course. When developing AI, we have high-impact privacy risks, and I think this is quite clear. Speaking from the European Union perspective, we have the General Data Protection Regulation, which covers personal data also when AI is being trained. But as I mentioned when it comes to the possibilities and also the problems of AI regulation, I think in general we need to move away from trying to regulate every conceivable scenario and risk. We definitely have risks, but this is not specific to AI; it concerns the whole technology sector. On the one hand, full technology regulation will, in my opinion, never be possible. On the other hand, administrative practice also raises the question of who should control and implement all these laws, because you will need a lot of resources. We see it with regard to data privacy authorities, I think not only in the European Union but all over the world, that they have problems, that they are struggling with the implementation of laws; there are always companies that are not compliant. And this, of course, is a question that is not AI-specific. Legally, it has long been proven that what matters is not the severity of sanctions after a certain kind of violation, but the likelihood that a violation will be detected, and I think this is where we need to work. What this means for AI, in my opinion, in the wake of the current hype we've seen since the beginning of this year, is that we should not fall into a, in my opinion, mindless regulatory debate that possibly ends up delaying or even torpedoing the really necessary regulation. We definitely need a core regulation, but we have to distinguish between things that are necessary and things that are not necessary for the first step. And in my opinion, the European AI Act, and of course its draft version with its different risk classes, is a good first approach for the time being, even if, of course, it needs to be revised again, because we have seen this year that some new risks have come up. And since AI is mainly developed not by states, but is currently in the hands of just a few big tech players, mostly coming from the US, the cooperation between the state and industry actors really needs to improve, and this is something we need to work on as well. Self-regulation by industry alone is, in my opinion, not enough. We need a system of transparency, and we need more cooperation, established on a permanent legal basis. And when we talk about ethical principles, which is also a part of this session, I think ethical principles can help, of course, but the authorities for the supervision of AI must be stronger. That means they will need more financial and personnel resources in the future so that we can tackle all these problems.

Moderator:
Thank you. I think I'll ask Professor Amal to also add on, and then Nastia can do so as well. Thank you.

Amal El Fallah Seghrouchini:
Thank you for the question. I'm trying to answer the question about identity and security. In fact, when we talk about security, we are naturally interested in the identity of the person we are trying to secure, for example. But there are some initiatives around the world whose purpose is to distinguish between the identifier and the identity of a person. And this is very interesting, because you can rely on a trusted third party to certify, for example, that a given person is associated with a given identifier, so we don't have access to the whole identity of the person. Another very nice initiative is to avoid having a single unique identifier for a person. This prevents anyone from getting a 360-degree view of the person. Instead, there are sectoral identifiers, each associated with the same identity, which is linked through a trusted third party to the person. You add all these layers to avoid direct access to a person and all of their data, because anonymisation of data alone is not enough today.
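A minimal sketch of the sectoral-identifier idea Seghrouchini describes, assuming a trusted third party that holds a secret key; all names and values here are illustrative, not any specific national scheme:

```python
import hmac
import hashlib

# Illustrative only: a trusted third party derives a different
# pseudonymous identifier per sector (health, tax, ...) from one
# internal identity record, so no single global ID ever circulates.
SECRET_KEY = b"held-only-by-the-trusted-third-party"  # hypothetical key

def sectoral_identifier(identity_record: str, sector: str) -> str:
    """Derive a stable, sector-specific pseudonym for one person."""
    message = f"{identity_record}:{sector}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

alice = "citizen-registry-record-42"  # internal identity, never shared
print(sectoral_identifier(alice, "health"))  # what health services see
print(sectoral_identifier(alice, "tax"))     # a different ID for the tax office
```

Because each sector sees only its own derived pseudonym, no party outside the trusted intermediary can join the records into the 360-degree profile she warns about.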

Anastasiya Kozakova:
I don't know if this already answers your question, but I'm also curious to know what you think as well. The questions that you ask are very specific, but they are really critical, of course. And I would probably offer a not-so-popular opinion: I believe that regulations are not the only solution. Quite often, I think, regulations can be really slow and not that effective at addressing the challenges we face, especially with AI; we still don't know how AI will impact us in a week, it is developing that rapidly. Regulations are important in terms of nudging developers, manufacturers of products, and tech companies to move in the right direction, using legal and regulatory action to put the right incentives on the market for them. But I still believe that the industry has the capacity and the ability to do a lot of really important things without policymakers and regulators being in the room. For software, there are a lot of initiatives going on around the software bill of materials, the SBOM. The idea is to increase transparency about the composition of the software that you're using. If you take a cake, you need to know what ingredients are in it, to make sure it will not do you any harm given your dietary specifics. The same logic applies to software. Even if you're a bigger company, you need detailed, updated, automated documentation that is machine-readable, so that you understand what components are in there and how you can use them, and, if there is a vulnerability, so that you can more easily find the component that could be exploited. I think the same logic could be applied to those who develop AI-based solutions: increase the transparency of the components that you use, and also increase the data documentation, documenting what types of data sources, collection methods, and processing techniques you apply. Yes, it will probably be useful only to the most advanced customers and large corporations, but these companies also have their own users, and I think that will have an indirect positive security effect for us all. It takes time, but I think it is maybe more agile to do this rather than wait for extensive regulation to be passed.
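For illustration, here is a sketch of what such machine-readable component documentation can look like, loosely following the shape of a CycloneDX SBOM; the component list and the in-house model name are made up:

```python
import json

# Illustrative, CycloneDX-shaped SBOM fragment: which components
# (and versions) a hypothetical AI-based product ships, so that a
# newly reported vulnerable component can be located automatically.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.13"},
        {"type": "library", "name": "numpy", "version": "1.26.4"},
        {"type": "machine-learning-model",
         "name": "phishing-classifier", "version": "2.1.0"},  # hypothetical
    ],
}

def affected(sbom: dict, name: str, bad_version: str) -> bool:
    """Check whether a reported vulnerable component/version is shipped."""
    return any(c["name"] == name and c["version"] == bad_version
               for c in sbom["components"])

print(json.dumps(sbom, indent=2))
print(affected(sbom, "openssl", "3.0.13"))  # True: time to patch
```

Note how the same documentation discipline extends to AI systems: models, data sources, and processing steps can be listed alongside code components, as Kazakova suggests.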

Moderator:
Thank you for that, Nastia. Your point about the software bill of materials really resonates with me, because that's also something we practice at Kaspersky for our software. It is important to know the ingredients of the cake you're about to eat. I think Professor Amal wants to add something, and perhaps you want to give a response later on as well.

Amal El Fallah Seghrouchini:
Yes. Because we have been talking about ethics from the beginning, and we haven't specified what we mean by ethics. I think ethics is not limited to data protection; we also have to consider dignity, to protect human rights. For example, when you detect some malicious attack, you should be careful with the origin of that attack, with fairness, privacy, and also informed consent. And my point is: what do we mean by informed consent? When people give some data, some information, and interact with a system, for example with generative AI systems, they are not aware of the consequences of the tool they use. They give consent, they think that they are informed, but in fact they are not, because most people are very far from technology, and most of them have no idea about cyber systems. So what do we mean by informed consent? How do we protect dignity in these situations?

Martin Boteman:
Thank you for that. It's not emotion, it's just my dry throat. What we ended up with this morning in the discussion was very much that, of course, legal isn't enough; legal is the last resort, in a way. So whereas we've been talking a lot about privacy and security by design, I think it's important to realize that in an AI context that is an extra challenge. But it's a challenge we're also facing elsewhere: thank you for the reference to the European Union's AI Act, and we're also aware that the Algorithmic Accountability Act is coming up in the USA. And you see that those are ways we may end up with AI being not just this magic, but something real and concrete that we can take responsibility for. I think that's an important element. So thank you for your answers. We don't know all the answers yet, I very much realize that, but the old principles of security by design and privacy by design remain important. We realize we live in a world where in some countries identity is there to protect you, and in others it may make you a victim. So thank you very much for your thoughts.

Moderator:
All right, thank you for that. I'm mindful of the time: we have about 11 minutes left, and I'm trying to economize the time we have left, not forgetting that we also still have one more survey for our online participants before we conclude today's discussion. So I just want to go down the row, maybe beginning with Noushin. It's the same question for all our speakers, so I'm going to ask each of you; maybe just try to keep your remarks short, one minute, two minutes max. Which are the two most important principles, in your view, that definitely need to be followed in cybersecurity? The two most important ethical principles.

Noushin Shabab:
Yeah, that's actually a very good question. The two most important principles for me are, I think, the two main points that have been discussed more than the other principles today. The first one is transparency: being transparent to users, and also to the rest of the community and the world, about what we do with user data, how we implement detections, and how we protect users, be it through a machine learning technique, an algorithm, or more traditional ways. And the second one is obviously privacy. We are in the cybersecurity industry, and we deal with targets and victims of cyber attacks. For us, protecting users is one of the most important aspects, and obviously, if we don't take care of the privacy of user data ourselves, it doesn't make much sense to try to protect it from cyber attackers, right? So I would say transparency and privacy for me.

Moderator:
Thank you very much, Noushin. I'll just go down the row to Dennis. I'm secretly hoping you will touch on some other principles, but there's of course the democratic…

Dennis Kenji Kipker:
Yeah, that's really a difficult question. So, to make a long story short, as a scientist I can say that even with paradigmatic events like AI, we should move to the level of factual argumentation. This is something I mentioned several times in my opening statement: we do not eliminate problems by regulation alone, and in my opinion anything else is an illusion, even if legislators and politicians might see it differently. In cybersecurity, we need to clearly align ourselves with the three scenarios of AI that I mentioned in my opening statement. And in terms of the principles, I find it very difficult to say that just two principles are relevant, because the use of AI, not only in cybersecurity but everywhere, has so many facets and different risks that we have not approached yet. But I think one of the most important things is that we have human control over decisions. This is clearly described with regard to the use of personal data, for example, or with regard to decisions of authorities, official decisions, or any kind of decisions of private companies that might have a negative impact on individuals: these decisions cannot be made based on AI alone. And in my opinion, the second important principle is that safety comes first. We have to distinguish between security and safety; I think this cannot be done here in a few minutes, but when it comes to AI we have a lot of use cases, and that means that security is connected very strongly with safety. We should take a close look at all these safety issues, because if AI is not developed securely, we cannot have safe solutions as a result. So, in my opinion, these would be the two most important principles, on top of the ones Noushin mentioned. Thank you.
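A minimal sketch of the human-control principle Kipker describes, with made-up thresholds and field names: decisions that could negatively affect an individual are never applied automatically, regardless of model confidence:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str              # who or what the decision concerns
    action: str               # e.g. "block_account", "quarantine_file"
    model_confidence: float   # the AI's confidence in its own call
    impacts_individual: bool  # could this negatively affect a person?

def route(decision: Decision) -> str:
    """Route an AI-proposed decision to a human or to automation."""
    if decision.impacts_individual or decision.model_confidence < 0.95:
        return "human_review"  # a person makes the final call
    return "auto_apply"        # low-impact, high-confidence: automate

print(route(Decision("user-17", "block_account", 0.99, True)))     # human_review
print(route(Decision("host-03", "quarantine_file", 0.99, False)))  # auto_apply
```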

Moderator:
That's great. Amal, would you like to give us the two most important principles in your view?

Amal El Fallah Seghrouchini:
If we are talking about principles of ethics, I think... yes, okay. Because we talk about ethics as if it were a stamp we put on a product, and ethics is not that. Ethics is a continuous debate and discussion about how things will go ahead. So in my opinion, the first thing we have to take care of is how to preserve dignity and human rights in all these systems. And the second is to work to reach informed consent with the populations that use these systems, which means we have to be very didactic in explaining things. For example, we have talked about accountability: accountability, privacy, data protection, all of these are tools towards the principles of ethics.

Anastasiya Kozakova:
So I guess I'm also expected to answer this question. I think none of these principles alone helps us, as users, or overall as those who live in cyberspace, to have a sufficient degree of security, right? Transparency alone: well, we know about a particular technology, what type of code it uses and all of that, we have the policies, but how does this actually help us to be more secure, and to feel more secure and stable in cyberspace? So none of these principles alone helps us to achieve what we want to achieve, but all together, and with many more, they could increase our chances of having this optimal security. But overall, I think we as humans need to be guided by the principle that we should avoid doing harm to each other and to others with any type of technology, and AI is of course no exception here.

Moderator:
Thank you, thank you for that. I think my secret wish sort of came true, and everyone touched on different principles. But now it leaves us to hear from our online audience as well. I'm just going to invite my colleague Jochen to pull up the final survey question, because I think it will be interesting to hear what we have from the online audience. Basically the question is: please mark from one to six the significance of each of the six ethical principles of AI use in cybersecurity, six being the most significant and one being, well, to you the least significant of all six. But I do agree with Nastia that everything comes together; it depends on how you formulate the principles, as Amal is whispering in my ear. Let us wait a few more seconds for the poll. In the meantime, I thought I'd say something about what we are going to do with the ethical principles that are currently at a draft proposal stage. Today we have heard ideas that were discussed, some new suggestions were made, and the proposals will be further developed; it doesn't just stop here. The goal is to develop a basis that can serve as a guideline for industry, research, academia, politics, and civil society in developing individual ethical principles. So after this session, we will be publishing an impulse paper on ethical principles for the use of AI in cybersecurity, which will also reflect the results of this discussion and will be made available to the IGF community. In addition, the paper will be sent to our stakeholders to gather complementary feedback, and Kaspersky will also further develop our own principles based on this paper and provide best practices for the cybersecurity industry that we're in. So now, thank you for putting up the results from the poll, Jochen. The question was: please mark from 1 to 6 the significance of each ethical principle of AI use in cybersecurity. Jochen, would you like to interpret these results for us, because there are many colors?

Jochen Michels:
Yes, there are many colors, and it reflects that all six principles are important, so there is no clear priority. All of them were marked by the different attendees, and it makes clear, as you said, Noushin, and also Dennis, Amal, and Nastia, that it is very important to take into account all the different principles and to start further multi-stakeholder discussion on that. So that's the result of the poll.

Moderator:
Thank you very much for that. I think we can close the poll, and I will just take one minute to wrap up. The key takeaways really are that the ethical principles all come together as one and complement one another, and that they need to be further developed beyond today's discussion. And of course, as Amal said, it really depends on how you frame them, and that is something we need to further develop. So when it comes to transparency, safety, human control, privacy, defensive cybersecurity, and being open for dialogue, these are, I think, equally important principles, as even our online audience has agreed. It remains for me to state the call for action: we need further international multi-stakeholder discussion on these ethical principles that we have developed and designed. They're not exactly rocket science, but it's about collating all of them into one document that is coherent and makes sense for everyone. And because we are a player in the cybersecurity field, we are of course particularly interested in developing such ethical principles for AI in cybersecurity. So I just want to take this time to thank all of our audience today and the people who asked questions; I hope it has furthered this discourse. And thanks to all of our speakers: Nastia, Amal, Dennis, and of course Noushin. I'm Jeannie from Kaspersky, signing off here. Thank you very much, and I hope you have a successful rest of your time at IGF. Thank you.

Speech statistics

Speaker                        Speech speed            Speech length   Speech time
Amal El Fallah Seghrouchini    128 words per minute    1741 words      817 secs
Anastasiya Kozakova            170 words per minute    3082 words      1087 secs
Audience                       167 words per minute    130 words       47 secs
Dennis Kenji Kipker            157 words per minute    1961 words      747 secs
Jochen Michels                 123 words per minute    82 words        40 secs
Martin Boteman                 158 words per minute    389 words       148 secs
Moderator                      162 words per minute    3553 words      1313 secs
Noushin Shabab                 125 words per minute    1371 words      658 secs