Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81
Table of contents
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Knowledge Graph of Debate
Session report
Full session report
Moderator – Daria Tsafrir
During the discussions, three main topics were examined in depth. The first focused on government concerns about the protection and safety of critical infrastructures and supply chains. It was acknowledged that governments have a major role in ensuring the security of these infrastructures and supply chains, which are vital to the functioning of industries and economies. However, no specific supporting facts or evidence were provided to substantiate these concerns.
The second topic revolved around the risks of over-regulation and the dynamic nature of AI. Participants expressed the need to strike a balance between regulating AI to prevent potential negative consequences and allowing for its innovative and transformative potential. The dynamic nature of AI poses a challenge in terms of regulation, as it constantly evolves and adapts. Again, no supporting facts were provided to further illustrate these risks, but it was acknowledged as a valid concern.
The third topic that was discussed focused on cybersecurity challenges. It was highlighted that addressing these challenges requires collaboration within international forums and the possibility of establishing binding treaties. The need for such cooperation arises from the global nature of cyber threats and the shared responsibility in mitigating them. However, no supporting evidence or specific examples of cybersecurity challenges were referred to.
Throughout the discussions, all speakers maintained a neutral sentiment, meaning they did not express strong support or opposition to any particular viewpoint. This could indicate that the discussions were conducted in an objective manner, with an emphasis on highlighting different perspectives and concerns rather than taking a definitive stance.
Based on the analysis, it is evident that the discussions centered around key areas of government concerns, the risks associated with over-regulation of AI, and the need for international cooperation in addressing cybersecurity challenges. However, the absence of specific supporting facts or evidence detracts from the overall depth and credibility of the arguments presented.
Moderator 1
During his presentation, Abraham introduced himself and verified that he was audible. He gave an overview of his background, noting the range of roles he has held in the industry and the diverse skills and knowledge acquired through them.
He outlined his educational qualifications, degrees, and certifications, explaining how they provide a strong theoretical foundation complemented by practical, hands-on experience. He also pointed to past projects and their positive outcomes, including challenges he encountered and overcame, demonstrating problem-solving ability and resilience.
Abraham further mentioned his experience working with multicultural teams, his strong interpersonal relationships with clients and stakeholders, and his commitment to continuous professional development through conferences, workshops, seminars, and professional networks. Overall, he presented himself as a highly experienced and qualified professional with the communication, collaboration, and adaptability skills needed in a fast-paced, ever-evolving industry.
Gallia Daor
The Organisation for Economic Co-operation and Development (OECD) has played a significant role in guiding the development and deployment of artificial intelligence (AI). In 2019, the OECD became the first intergovernmental organization to adopt principles for trustworthy AI. These principles, which focus on the aspects of robustness, security, and safety, have since been adopted by 46 countries. They also serve as the basis for the G20 AI principles, highlighting their global relevance and influence.
The OECD’s emphasis on robustness, security, and safety in AI is crucial in ensuring the responsible development and use of AI technologies. To address the potential risks associated with AI systems, the OECD proposes a systematic risk management approach that spans the entire lifecycle of AI systems on a continuous basis. By adopting this approach, companies and organizations can effectively identify and mitigate risks at each phase of an AI system’s development and deployment.
To further support the responsible development and deployment of AI, the OECD has also published a framework for the classification of AI systems. This framework aids in establishing clear and consistent guidelines for categorising AI technologies, enabling stakeholders to better understand and evaluate the potential risks and benefits associated with different AI systems.
The OECD recognises that digital security, including cybersecurity and the protection against vulnerabilities, is a significant concern in the era of AI. To address this, the OECD has developed a comprehensive framework for digital security that encompasses various aspects such as risk management, national digital security strategies, market-level actions, and technical aspects, including vulnerability treatment. Moreover, the OECD hosts an annual event called the Global Forum on Digital Security, providing an opportunity for global stakeholders to discuss and address key issues related to digital security.
Interestingly, AI itself serves a dual role in digital security. While AI systems have the potential to become vulnerabilities, particularly through data poisoning and the malicious use of generative AI, they can also be utilised as tools for enhancing digital security. This highlights the need for robust security measures and responsible use of AI technologies to prevent malicious attacks while harnessing the potential benefits AI can provide in bolstering digital security efforts.
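To make the data-poisoning concern concrete, the toy sketch below uses entirely synthetic data and a deliberately simple nearest-centroid classifier (it is not any OECD tooling) to show how a single injected, mislabeled training point can shift a model's decisions:

```python
# Illustrative sketch only: synthetic data and a deliberately simple
# nearest-centroid classifier. It shows how one injected, mislabeled
# training point (data poisoning) degrades accuracy on clean test data.

def centroid(points):
    return sum(points) / len(points)

def train(samples):
    """samples: list of (feature, label) pairs with labels 0 or 1."""
    c0 = centroid([x for x, y in samples if y == 0])
    c1 = centroid([x for x, y in samples if y == 1])
    return c0, c1

def predict(model, x):
    c0, c1 = model
    return 0 if abs(x - c0) <= abs(x - c1) else 1

def accuracy(model, samples):
    return sum(predict(model, x) == y for x, y in samples) / len(samples)

# Clean training data: class 0 clusters near 0.0, class 1 near 10.0.
clean = [(0.0, 0), (1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]
holdout = [(0.5, 0), (1.5, 0), (8.5, 1), (9.5, 1)]

print(accuracy(train(clean), holdout))      # 1.0

# Poisoning: the attacker injects a single mislabeled outlier, which
# drags the class-0 centroid far to the right and flips predictions.
poisoned = clean + [(100.0, 0)]
print(accuracy(train(poisoned), holdout))   # 0.5
```

The point of the sketch is that the poisoned model is produced by exactly the same training code; only the data changed, which is why data provenance matters for digital security.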
In addition to addressing risks and emphasising security, the OECD recognises the importance of international cooperation, regulation, and standardisation in the AI domain. The mapping of different standards, frameworks, and regulations can help stakeholders better understand their commonalities and develop practical guidance for the responsible development and deployment of AI technologies.
Intergovernmental organisations, such as the OECD, play a vital role in convening stakeholders and facilitating conversations on respective issues. By bringing together governments, industry experts, and other relevant actors, intergovernmental organisations enable collaboration and foster partnerships for addressing the challenges and opportunities presented by AI technologies.
Finally, the development of metrics and measurements is crucial for effectively addressing and evaluating the impact of AI technologies. The OECD is actively involved in the development of such metrics, with one notable example being the AI Incidents Monitor. This initiative aims to capture and analyse real-time data and incidents caused by AI systems, allowing for a better understanding of the challenges and risks associated with AI technologies.
In conclusion, the OECD has made significant contributions to the development and governance of AI technologies. Through the establishment of principles for trustworthy AI, the emphasis on risk management, the focus on digital security, the recognition of AI’s dual role in security, and the efforts towards international cooperation and metric development, the OECD is actively working towards ensuring the responsible and beneficial use of AI technologies on a global scale.
Asaf Wiener
The Israel Internet Association, represented by Asaf Wiener, manages .il, Israel's country code top-level domain (ccTLD). As the manager of this important domain, the association plays a crucial role in overseeing internet activities in Israel.
Furthermore, the Israel Internet Association is the Israeli chapter of the Internet Society, demonstrating their commitment to promoting various aspects of the digital landscape. Specifically, they focus on digital inclusion, education, and cybersecurity within the country. These areas are of critical importance in today’s interconnected world, and the association strives to bridge the digital divide, ensure access to quality education, and enhance cybersecurity measures for Israeli citizens.
Dr. Asaf Wiener’s organization also works towards addressing digital gaps and advancing public initiatives. This highlights their dedication to narrowing the disparities in access and opportunities that exist in the digital realm. By engaging in various public initiatives, they aim to create a more equitable digital landscape for all.
Additionally, Dr. Asaf Wiener demonstrates a strong inclination towards public engagement and participation. He actively invites anyone interested in learning more about their activities to approach him for further details, indicating a desire to foster collaboration and partnerships in pursuit of their mission.
In conclusion, the Israel Internet Association, led by Asaf Wiener, fulfills the crucial role of manager of the .il ccTLD and serves as the Israeli chapter of the Internet Society. Their focus on digital inclusion, education, and cybersecurity, and their commitment to addressing digital gaps and engaging the public, highlight their dedication to advancing the digital landscape in Israel.
Abraham Zarouk
Abraham Zarouk is the Senior Vice President of Technology at the Israel National Cyber Directorate (INCD). In this role, he oversees the day-to-day operations of the Technology division, focusing on project implementation, IT operations, and support for national defense activities. Zarouk also plays a key role in preparing the INCD for the future by promoting innovation and establishing national labs for research and development.
The INCD places a strong emphasis on addressing weaknesses in artificial intelligence (AI). They examine vulnerabilities in AI algorithms, infrastructure, and data sets, and have established a dedicated national lab to enhance AI resilience. Through collaborations with industry leaders like Google, the INCD is actively promoting the use of AI-powered technologies and driving innovation in the field of cybersecurity.
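The lab's actual self-assessment platform is not public, but one simple robustness metric such a platform might compute, the fraction of inputs whose prediction survives small random perturbations, can be sketched as follows (the function, parameters, and toy model are illustrative assumptions, not INCD code):

```python
import random

# Hypothetical sketch of one self-assessment metric: the fraction of
# inputs whose prediction is unchanged under small random perturbations.
# The function name, parameters, and toy model are assumptions.

def stability_score(model, inputs, eps=0.1, trials=20, seed=0):
    """model: any callable mapping a float input to a label."""
    rng = random.Random(seed)
    stable = 0
    for x in inputs:
        base = model(x)
        if all(model(x + rng.uniform(-eps, eps)) == base
               for _ in range(trials)):
            stable += 1
    return stable / len(inputs)

# Toy model: a threshold classifier with its decision boundary at 5.0.
toy = lambda x: 0 if x < 5.0 else 1

print(stability_score(toy, [1.0, 2.0, 9.0]))   # 1.0: far from the boundary
print(stability_score(toy, [4.99, 5.01]))      # near the boundary, score drops
```

Real platforms would test far more (adversarial perturbations, data-set shifts, library-specific attacks), but the shape of the measurement, a score per model that can feed a risk model, is the same idea.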
In addition to their proactive approach, the INCD also acknowledges the potential threats posed by AI-based attackers. As the use of AI tools among attackers increases, the INCD recognizes the need to stay vigilant and develop strategies to counter these sophisticated attacks.
Overall, Abraham Zarouk’s role as the Senior Vice President of Technology at the INCD is crucial in ensuring smooth operations and driving the organization’s preparedness for future challenges. The INCD’s focus on addressing AI weaknesses, collaboration with industry partners, and recognition of potential AI-based threats highlights their commitment to cybersecurity excellence.
Daniel Loevenich
Germany is taking proactive measures to manage the risks associated with artificial intelligence (AI) within complex technical systems. The country is specifically focusing on the AI components or modules within these systems. This approach highlights Germany’s commitment to addressing the potential dangers and challenges that AI can present.
To further mitigate these risks, Germany is working on extending its existing cybersecurity conformity assessment infrastructure. This move aims to establish a robust framework to evaluate and ensure the conformity of AI technologies. The country is also striving to unify AI evaluation and conformity assessment according to the standards set by the EU’s AI Act. This step demonstrates Germany’s dedication to aligning its evaluation processes with international norms and regulations.
The implementation of the AI Act is deemed crucial for managing AI risks in Germany. This legislation, which the country is actively working towards, will play a vital role in addressing technical system risks across the entire supply chain of AI applications. By incorporating this act, Germany seeks to establish a comprehensive and effective framework for managing AI-related risks.
Furthermore, Germany is actively promoting the adoption of AI technologies, particularly among small and medium-sized enterprises (SMEs). The country recognizes the potential benefits that these technologies can bring and encourages businesses to embrace them. This approach highlights Germany’s openness to innovation and its efforts to support the growth of AI within its industries.
There is also support for international standardization in guiding the use of AI technologies. This standpoint suggests that by establishing global standards, individuals can have more control over how AI technologies are utilized. This commitment to international cooperation reinforces Germany’s desire to foster responsible and ethical AI practices.
It is important to acknowledge that AI technologies are heavily reliant on data, and their responsible usage ultimately rests on individuals. Germany recognizes the responsibility that comes with the use of AI systems and the need for individuals to exercise caution and ethics when handling data-driven technologies.
Another noteworthy observation is the call for the market to be the determining factor in deciding the use of AI-based systems. Germany suggests that market forces and customer preferences should dictate the direction of AI technology, promoting a more customer-centric approach to AI adoption.
Nevertheless, standardizing AI usage at a value-based level can be challenging due to the differences in societal values. The discrepancy in value-based governmental positions creates a complex landscape for consensus-building and establishing universal standards for AI application. Germany recognizes this challenge and the need for careful consideration of normative and ethical issues surrounding the use of AI technologies.
In conclusion, Germany is actively implementing AI risk management within complex technical systems, with a particular focus on AI components. The country is working towards unifying evaluation processes and conforming to international standards through the AI Act. Germany also promotes the adoption of AI technologies among SMEs and supports international collaboration in establishing standards for responsible AI usage. However, the challenge of aligning value-based norms and standards remains an ongoing concern for AI implementation.
Hiroshi Honjo
Hiroshi Honjo is the Chief Information Security Officer for NTT Data, a Japan-based IT company with a global workforce of 230,000 employees. NTT Data is actively involved in numerous AI and generative AI projects for their clients. Honjo believes that AI governance guidelines are crucial for the company, covering important aspects like privacy, ethics, and technology. These guidelines promote responsible and ethical practices in AI development and usage.
In the realm of generative AI, Honjo highlights the significance of addressing cybersecurity intricacies, particularly in light of recent attacks on large language models. This underscores the importance of tackling cybersecurity issues within the context of generative AI.
One complex issue in handling data by generative AIs is determining the applicable law or regulation for cross-border data transfers. Similar to challenges faced by private companies managing multinational projects, NTT Data must navigate various regulations and ensure compliance with jurisdiction-specific requirements.
Honjo advocates for international harmonization of AI regulations, emphasizing that guidelines in G7 countries are insufficient. He supports the establishment of international standards that govern the development, use, and deployment of AI, aimed at promoting fairness and consistency in AI regulation.
Additionally, Honjo expresses his concern regarding uneven data protection regulations like the General Data Protection Regulation (GDPR). He acknowledges that differing data protection regulations across countries impose significant costs on businesses. To mitigate these challenges and ensure a level playing field for businesses operating in multiple jurisdictions, Honjo advocates for consistent and harmonized data protection measures.
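The compliance burden Honjo describes can be illustrated with a deliberately simplified sketch; the jurisdictions and adequacy table below are invented, and real transfer rules are far more involved:

```python
# Deliberately simplified illustration of fragmented transfer rules.
# The jurisdictions and adequacy table are invented, not legal advice.

ADEQUACY = {
    "EU": {"EU", "JP"},   # e.g. an adequacy decision covering Japan
    "JP": {"JP", "EU"},
    "US": {"US"},
}

def transfer_allowed(src, dst, has_contract_clauses=False):
    """Permit a transfer if the destination is considered adequate by the
    source regime, or if extra safeguards (e.g. contractual clauses) apply."""
    return dst in ADEQUACY.get(src, set()) or has_contract_clauses

print(transfer_allowed("EU", "JP"))    # True
print(transfer_allowed("EU", "US"))    # False: needs extra safeguards
print(transfer_allowed("EU", "US", has_contract_clauses=True))  # True
```

Even in this toy form, every source-destination pair needs its own check and its own safeguards, which is exactly the per-jurisdiction cost that harmonization would remove.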
In summary, Hiroshi Honjo, as the Chief Information Security Officer for NTT Data, emphasizes the necessity of AI governance guidelines, the need to address cybersecurity intricacies in generative AI, the complexity of cross-border data transfers, and the importance of international harmonization of AI regulations. His commitment to consistent data protection regulations reveals his dedication to reducing costs and promoting fairness within the industry.
Bushra Al-Blushi
Bushra Al-Blushi is an influential figure in the field of cybersecurity and currently serves as the Head of Research and Innovation at the Dubai Electronic Security Center. She has made significant contributions to the industry through her leadership positions.
One of Al-Blushi's notable achievements is the establishment of Dubai Cyber Innovation Park, which aims to promote innovation and collaboration in the field of cybersecurity. Her involvement in founding this park demonstrates her commitment to advancing the industry and creating opportunities for technological development.
Al-Blushi's expertise is also recognized internationally, as she is an official UAE member of the World Economic Forum Global Future Council on Cyber Security. This highlights her contributions to global discussions and initiatives surrounding cybersecurity.
Furthermore, Al-Blushi's extensive involvement in advisory boards, both nationally and internationally, reflects her broad knowledge and the trust placed in her expertise. These advisory roles enable her to shape policies and strategies in the field, further solidifying her thought leadership and influence.
In terms of AI risks, Al-Blushi advocates for a gradual and incremental approach to cybersecurity rules and regulations. She emphasizes the importance of identifying and mitigating potential risks posed by AI through appropriate controls and regulations.
Al-Blushi also highlights the significance of considering how AI models are deployed and how their deployment context affects the security controls they require. She emphasizes the need to address the unique risks associated with AI in their development and implementation, ensuring that adequate security measures are in place.
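As an illustration of this deployment-driven view, a regulator's triage might look something like the sketch below; the factors, weights, and tier labels are invented for illustration and do not reflect any actual DESC rule set:

```python
# Illustration only: factors, weights, and tier labels are invented
# and do not reflect any actual regulation.

FACTORS = {
    "human_safety_impact": 3,      # e.g. a connected vehicle
    "critical_infrastructure": 2,
    "personal_data": 1,
}

def risk_tier(context):
    """context: the set of factor names that apply to a deployment."""
    score = sum(FACTORS[f] for f in context)
    if score >= 4:
        return "high: mandatory assessment plus sector-specific controls"
    if score >= 2:
        return "medium: baseline AI controls plus audit logging"
    return "low: general secure-development baseline"

iot_bulb = set()  # no safety, criticality, or personal-data factors
vehicle = {"human_safety_impact", "critical_infrastructure"}

print(risk_tier(iot_bulb))   # low tier
print(risk_tier(vehicle))    # high tier
```

The same model can thus land in different tiers depending purely on where and how it is deployed, which is the core of the risk-based argument.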
Regarding policy and regulatory approaches, Al-Blushi supports a risk-based approach that strikes a balance between control and security issues. Her center developed AI security ethics and guidelines for Dubai in 2018, which remain applicable to generative AI today.
Al-Blushi emphasizes the need for global harmonization of AI regulations and standards. Currently, different countries have fragmented regulations, making compliance challenging for providers and consumers. Harmonization would simplify compliance and instill confidence in internationally recognized AI tools.
To achieve this, Al-Blushi suggests international collaboration and the establishment of an international certification or conformity assessment scheme for AI. This would ensure that AI systems meet minimum security requirements, facilitate compliance for providers, and enable effective enforcement of industry standards by regulatory bodies.
In conclusion, Bushra Al-Blushi's leadership and expertise in cybersecurity are evident through her various roles and initiatives. Her emphasis on gradual, incremental cybersecurity rules and regulations for AI reflects a balanced approach that prioritizes both innovation and security. Her advocacy for global harmonization of AI regulations and the establishment of international certification schemes further underscores her commitment to promoting the secure and responsible use of AI technologies.
Session transcript
Moderator – Daria Tsafrir:
Thank you very much. Thank you. I think we’re ready to begin. Okay. Can we have our speakers on Zoom on the screen? Perfect. Can everyone turn their cameras on, please? Yeah, we see you now. Okay, here’s Daniel. Okay, so good morning, everyone, and welcome to our session on cybersecurity regulation in the age of AI. I’m Daria Tsafrir, currently a legal advisor at the Israel National Cyber Directorate, leading legal aspects of AI, cloud computing and international law. Unfortunately, due to the current situation in Israel, my colleagues and I were unable to attend the session on site. So our colleague, Dr. Wiener, who is already there, offered his help in moderating on site. So Asaf, let’s start and then get back to me.
Asaf Wiener:
Great. So my name is Dr. Asaf Wiener. I’m from the Israel Internet Association, which is the ccTLD manager of .il, Israel’s national TLD. We are also the Israeli chapter of the Internet Society, promoting digital inclusion, education and cybersecurity for citizens in Israel, among other things working on digital gaps and other initiatives for the public. I’m not originally part of this panel, so I won’t take too much time to present myself. But I invite everyone who has any questions or wants more details about our activities at Internet Society IL to approach me after the session, and I’ll be happy to introduce myself and our work. And now let’s go back to the original participants of this panel.
Moderator – Daria Tsafrir:
Thank you. So let me ask you, let’s start by introducing yourselves. Let’s start with Dr. Al-Bushi.
Bushra Al-Blushi:
Hello. Good morning, everyone. It gives me great honor and pleasure to share the stage with the great panelists and with everyone here this morning. It’s 5 a.m. in Dubai now. So, my name is Bushra Al-Blushi. I’m the Head of Research and Innovation in the Dubai Electronic Security Center. I’m also the Director General’s Senior Consultant in the center. So, basically, it’s the center that sets the rules, regulations, and standards, and also monitors the cybersecurity posture here in the city of Dubai. I’m also the founder of Dubai Cyber Innovation Park, which is an innovation arm of the Dubai Electronic Security Center. I’m the official UAE member in the World Economic Forum Global Future Council on Cyber Security. I’m also a member of many advisory boards nationally and internationally. Thank you.
Moderator – Daria Tsafrir:
Mr. Zarouk? Okay. So, Mr. Honjo, are you there?
Hiroshi Honjo:
Yes. My name is Hiroshi Honjo. I think I’m the only one based in Tokyo, Japan, but I just came back from Germany, so I’ve still got jet lag. I’m the Chief Information Security Officer for a Japan-based IT company called NTT Data, with 230,000 employees globally. Japan is only a small part of the employees; we have business in more than 52 countries besides Japan. So, as a private company, we are running many AI and generative AI projects for our clients, so it’s a very hot topic. It’s a pleasure to talk with you. Thank you.
Moderator – Daria Tsafrir:
Ms. Galia Daur?
Gallia Daor:
Good morning, everyone. My name is Gallia Daor. I’m a Policy Advisor in the OECD’s Digital Economy Policy Division in the Directorate for Science, Technology and Innovation. Our division covers the breadth of digital issues, including artificial intelligence and digital security, but also measurement aspects, privacy, data governance, and many other issues. But for today, we’ll be focusing on AI and digital security. So, I’ll stop here and I look forward to the discussion.
Moderator – Daria Tsafrir:
Thank you. Mr. Loevenich?
Daniel Loevenich:
Good morning, everyone. I’m Daniel Loevenich. I’m the AI and Data Standards Officer at the German Federal Office for Information Security. And I’m very much concerned with AI cybersecurity standards. Let me just stress that I appreciate sharing the stage with you, and congratulations on a great event so far. Thank you very much.
Moderator 1:
Yes, Abraham, I think we can hear you now. So if you could present yourself.
Abraham Zarouk:
Okay, hello everyone. My name is Abraham Zarouk. I’m the SVP of Technology at the INCD, the Israel National Cyber Directorate. I manage the technology division, so I am responsible for day-to-day operations such as project implementation, IT operations, and providing support for national defense activities. I am also responsible for preparing the INCD for the future by creating R&D activities, promoting innovation, establishing national labs, and building national-level solutions. I have eight kids and they always ask a lot of questions, so I already know how ChatGPT feels. Thank you.
Moderator – Daria Tsafrir:
One will be about the current state of affairs, and the second will deal with whether there is more to be done at the domestic and international levels. So let’s get into it. Now, we are all familiar with the cybersecurity regulation toolkit: breach notification, mandatory requirements for critical infrastructure, risk assessments, info sharing, et cetera. And the question is whether this current toolkit is sufficient to deal with threats to AI systems or to the data used by them. Our goal in this session is to get some insights into what governments can do better and where they shouldn’t be involved at all. Please note that when we talk about regulation, we mean it broadly: not only formal regulations but also government guidelines, incentives, and other such measures. So, for everyone’s benefit, and so that we can be on the same page, let me turn to Mr. Zarouk and ask you: can you please map out for us the different cybersecurity risks and vulnerabilities related to AI? Mr. Zarouk?
Abraham Zarouk:
Again, you hear me now?
Moderator – Daria Tsafrir:
Yes, now I can hear you.
Abraham Zarouk:
Thank you. The INCD focuses on three main domains when addressing AI. The first domain is protecting AI. AI-based models are increasingly being deployed in production in many critical systems across many sectors, but those systems are designed without security in mind and are vulnerable to attacks. Since the average AI engineer is not a security expert, and cybersecurity experts are not domain experts in AI, we need to find a way to establish and improve AI resiliency. The INCD approaches this issue from several angles. One is examining weaknesses in AI algorithms, infrastructure, data sets, systems, and more. This is done as an ongoing task. The INCD promotes R&D projects for testing AI models. Unlike attack surface management (ASM) in the IT world, in the AI world a tailored approach is needed for each algorithm; the INCD focuses on common libraries, models, and dedicated attacks. Another angle is building a robust risk model for AI. We attempt to define metrics and models to measure risk in AI algorithms, that is, to measure and test the robustness of AI as we do in other IT domains. A third angle is the national lab for AI resilience. The INCD has established a national lab which develops online and offline platforms for self-assessment of machine learning models, based on the risk model we developed. The national AI lab is a collaboration between the academic world, the government, and the technology giants. The INCD collaborated with the cyber center at Ben-Gurion University, which is a leader in research, and with Google, which brings cloud knowledge in cyber protection and AI. A second significant domain is using AI for defense. Today most tools and products use some form of AI, some more and some less. If you don’t have an “AI inside” logo on your product and you don’t say “AI” three times a minute, no one will buy it.
We understand the power of AI and what it can offer, and as an ongoing effort we make sure our infrastructure and products support the latest AI-powered technology. The INCD, much like many other national agencies, is promoting innovation and the use of AI-powered technology, since we don’t want to be left behind when it comes to the technology. Our role as a regulator is mainly not to interfere, but to see where we can assist the market in order to promote the implementation and use of advanced technology. We use a variety of tools and capabilities to support our day-to-day operations. This includes tools to help researchers in their cyber investigations, and various automations to assist in incident analysis and response, as part of our collaboration with Google in the Cyber Shield project. We run a smart chatbot for our national cyber call center, 119; it’s a reversed 9-1-1, intended to provide better service to citizens, collect relevant contextual information, provide more focused responses, and support additional languages. A new tool under development aims to help investigate network traffic captures in an easier, faster, and more human way. AI helps us scale and takes care of routine tasks, so in a time of war, AI allows us to direct manpower to critical tasks. We use AI to assist in mediation between the human and the machine. The last domain, but not the least, and maybe the most complex subject, which is currently in design, is defense against AI-enhanced, AI-based attackers. We see an increase in the use of various AI tools among attackers, and we understand that in the future we will see machines carrying out sophisticated attacks. We are currently in the process of designing a way to approach this threat scenario, which will probably be built from several components working together. In the future, we will see attacks and defense fully managed by AI, and the smarter, stronger, and faster player will win. Thank you.
Moderator – Daria Tsafrir:
Thank you, Mr. Zarouk. Dr. Al-Blushi, I’m going to turn to you now. Based on your vast experience in your past and current work promoting innovation and shaping policy at both the domestic and global levels, what do you make of AI risks? How do you frame them from a cybersecurity regulation perspective?
Bushra Al-Blushi:
So I think in a city like Dubai, we are always at the forefront of technological transformation. Our role as a cybersecurity regulator is to enable those critical national infrastructures to use the new technologies, but to use them with the right cybersecurity controls, and it doesn’t have to be perfect from the first day. So it’s gradual, incremental cybersecurity rules and regulations that we work on together with the business developers, just to make sure that the business objectives are being met and security is also being considered. I will divide what I’m going to speak about into three main points. The first one is the security of the AI models themselves versus the security of the consumers of those AI models. When it comes to the security of AI models and the developers of those models, the rules, the controls, the standards, and the policies are totally different from when I’m speaking about the consumers of those AI models. For me, when I’m talking about the security of the AI model itself, AI at the end of the day is like any other software that we were using in the past, but what makes it different is the risk that it might generate, the way it has been deployed, and how it is being implemented. So for example, an AI model that is deployed in an IoT bulb shouldn’t have the same security controls as an AI model that is deployed in a connected vehicle, where any risk or any issue in that AI model might impact human lives. At the end of the day, it’s where that AI model is being deployed, how it is being used, and why it is being used that makes it different from any other software development tools that we were used to developing in the past. This is how the AI model itself became different from normal software development. Then the second point is the security of the AI consumers. So those people, those government entities, the consumers of the AI themselves.
I think in our scenario, in our case, we are more worried about the consumers than the producers, because we have major players, as we can all see, specifically when it comes to generative AI, that are attracting lots of attention and lots of customers. So when it comes to the AI consumers themselves, I think we need to consider many elements: how that AI will be used, where it will be used, will it be used in a critical national infrastructure, and what about the privacy of the data that is being used there? And then also why I'm using that AI model. So I can categorize it, as the previous speaker was saying, into three main areas. How are we using AI today? We might use it to protect, as cybersecurity professionals, in the new defensive methodologies that we are using; or it can be used by malicious actors to harm; or, the third category, it can be used by normal users or by government entities, and in that case we will be worried about the privacy of the data being processed in the AI model. So when it comes to the policies and regulations, I talked about AI security itself and the consumers, and the last point is the policies, standards, and regulations that we need to put around AI models. I think there have been lots of efforts globally and internationally, with the OECD AI Principles, the NIST AI security standard, and then the set of policies that were issued recently in June by the EU. I think we are making progress towards that, having, let's say, standards or specific policies around the security of AI. But as I said, at the end of the day, it's like the software models that were being developed in the past.
So if we think about how we should deal with AI from a policy and regulatory point of view, I think we need to develop, first of all, the basic best practices and principles, like any normal software development life cycle: secure by design, supply chain security. Those basic principles should always be there. Then develop one layer on top, and that layer can be specific to the AI itself: how AI should be developed, maintained, and trusted. So another layer which is specific to AI. And the third layer that can be added, as I said, depends at the end of the day on where I'm going to use it. It's a sector-specific layer. We can add banking-layer controls, transportation-layer controls, medicine-layer controls. This is the third layer, where we need to work with the business owners or the business sectors themselves in order to make sure that this layer also contains enough controls to enable them to use AI in a safe manner. I strongly believe that a risk-based approach is the best approach we should all consider, because having too many controls will limit the usage of AI, and having controls that are too loose will take us into other security issues. In our case, for example, we developed AI security ethics and guidelines back in 2018 that are still applicable to generative AI. We are also developing an AI sandboxing mechanism for government entities to test and try AI solutions that they would like to implement at the city level. And we also have clear guidelines about data privacy. As most AI models are now hosted in the cloud, we have a clear model for how information can be dealt with in the cloud, and that includes AI models hosted in a cloud environment. So I don't think we should reinvent the wheel. We should build on the basis of the things that have been there for a long time now.
Moderator – Daria Tsafrir:
Thank you, Dr. Al-Blushi, you've raised some very important points. I'll turn now to Mr. Honjo. Mr. Honjo, you're representing the private sector. From your organization's point of view, how are you currently dealing with AI risks and cybersecurity?
Hiroshi Honjo:
Yes. So pretty much close to what Dr. Al-Blushi said. As a private company, we state AI governance guidelines within the company, and those include privacy, ethics, technology, everything. Basically, what we do with generative AI as a company, we do whatever the client asks. Many clients ask for, let's say, application development, for instance. So we do automatic code generation using generative AI. That obviously includes a lot of problems, including IP, intellectual property issues, if you learn the code from whatever source, maybe including commercial code or non-open-source code. So privacy protection, well, intellectual property protection, is a very important thing for the company as well. And the frameworks, including the OECD or NIST AI frameworks, help with defining the risks for whatever the AI project is. That went pretty well for defining the risks within AI projects. The thing is, although we state the risks within projects, it all comes down to the purpose of the project: whether it's critical infrastructure, whether it's banking transactions, or whether it's more like what's on the display here, a transcript. So it really depends on the risks there. Not all projects are the same. As for privacy issues, a lot of the large language models on the market are learning data from somewhere, and you have to learn from big data. It's not small data, it's huge data, and the question is where does that data reside, and who owns the data? It's basically more like the cross-border data transfer issue: what's the data source, what's the use of the data; it's more like international transfer. So the question is which laws or regulations will be applied to that data. That's a bit like the cloud issues, the same as the cloud issues, so there are no easy resolutions for that.
So basically, we have to deal with all the data around generative AI. A lot of the privacy protections, and anything about cybersecurity, whatever happens in cybersecurity, also apply to generative AI. So when you talk about AI and security, or AI guidelines, or whatever you state within a private company, it really depends on, and includes, the data and privacy: when the data gets compromised, the data source is compromised, the result of the data is compromised, or any breach happens within the large language model, which has been attacked a couple of times. Those are really lessons learned; cybersecurity also applies to, not all, but part of the generative AI projects. As a private company, it's not a single-company, single-country matter; we need to deal with multinational, multi-country projects that have to handle all the data and privacy issues, and we also need to protect the models or the data wherever they reside. So it's pretty much risk-based management. It's all about money. But due to the multinational projects, there are no easy resolutions. With the guidelines and some of the lessons, the things we apply to cybersecurity carry over to generative AI and resolve some of the issues residing in generative AI projects. But as I said, we have to deal with a lot of different countries. That's our challenge right now. Not the technology itself; it's more the cross-border, multinational, different regulations. That's the real challenge for a private company. I think I'll stop here.
Moderator – Daria Tsafrir:
Thank you. That was very interesting. Ms. Daor, I will turn to you now. The OECD was the first, if I'm not mistaken, to publish clear principles for dealing with AI risks. Could you share with us the OECD's policy from today's point of view, with an emphasis on the robustness principle? And maybe a word on where we are headed.
Gallia Daor:
Sure. Thank you. So indeed, in 2019, the OECD was the first intergovernmental organization to adopt principles for artificial intelligence. These principles seek to describe what trustworthy AI is. They have five values-based principles that apply to all AI actors, and five recommendations for policymakers specifically. And within these principles, like you said, we have a principle that focuses on robustness, security, and safety, which provides that AI systems should be robust, secure, and safe throughout their lifecycle, which I think is a particularly meaningful aspect. The principles also note that a systematic risk management approach to each phase of the AI system lifecycle, on a continuous basis, is needed. So I think it gives the beginning of an indication of how we can apply a risk management approach in the context of AI. These principles have now been adopted by 46 countries and also serve as the basis for the G20 AI principles. And since their adoption in 2019, we've worked on providing tools and guidance for organizations and countries to implement them. We took a set of three different types of actions. One focuses on the evidence base: we developed an online interactive platform called the OECD.AI Policy Observatory, which has a database of national AI policies and strategies from over 70 countries, and also data, metrics, and trends on AI: AI investment, AI jobs and skills, AI research publications, and a lot of other information. We also work on gathering expertise. We have a very comprehensive network of AI experts, now with over 400 experts from a variety of countries and disciplines, that helps us take this work forward. And we also develop tools for implementation. So we have a catalogue of tools for trustworthy AI. Sorry, I should say, we don't develop the tools, but we compile them.
So we have this catalogue to which different organizations and countries can submit the tools that they have. We process that, and anybody can access it and see what is out there that can be used. And in that context sits our increasing focus on risk management and risk assessment in AI. Last year, we published a framework for the classification of AI systems. As others have noted, the risk is very context-based: for a system in the abstract, we don't know what risk it may pose; it depends on how we use it, who uses it, with what data. So this classification framework is really there to help us identify the specific risks in a specific context. We will also soon publish a mapping of different frameworks for risk assessment of AI, what they have in common, and the top-level guideposts that we see for risk assessment and management in AI. So that's the main focus on AI here. But I do want to say a word about the OECD's work on digital security, which is our term for cybersecurity in the economic and social context. We have an OECD framework for digital security that looks at four different aspects. It has the foundational level, which is the principles for digital security risk management: general principles and operational principles for how to do risk management in the digital security context. It also has a strategic level: how you take these principles as a country and use them to develop your national digital security strategy.
We have a market level, looking at how we can address misaligned incentives in the market, including information gaps, to make sure that both products and services are secure. In particular, as others have mentioned, AI is now increasingly used in the context of critical infrastructure and critical activities, so we have a recommendation on the digital security of critical activities. And the last level is a technical level, where we focus on vulnerability treatment, including protections for vulnerability researchers and good practices for vulnerability disclosure. And I think this leads, and maybe I'll stop here, to what others have said about the intersection between AI and digital security, which is really the heart of today's conversation. Like the first intervention by Mr. Zarouk said, we see that we need to focus both on the digital security of AI systems, so what do we need to do to make sure that AI systems are secure, in particular looking at vulnerabilities in the area of the data that is used, of data poisoning, and how that can affect the outcomes of an AI system. But we also need to think about how AI systems may themselves be used to attack. Generative AI is maybe somewhat of a game-changer in this aspect too; we know, for example, that generative AI can be used to produce very credible content that can then be used at scale in phishing attacks. And also, and there is work here that we have not yet done, how AI systems can be used to enhance digital security. I'll say just one word on that: at the OECD we have the Global Forum on Digital Security for Prosperity, which is an annual event where we bring together stakeholders from a very large range of countries to talk about the hot topics in digital security.
And the event that we did earlier this year jointly with Japan focused exactly on sort of the link between digital security and technologies and with AI obviously being one of the key focus. And that was exactly one of the themes of our discussion there. So I’ll stop here, but thank you.
Moderator – Daria Tsafrir:
Thank you, Gallia. I can share with you that Israel has adopted the OECD principles into its guideline papers on AI. At the moment, the guidelines are not legally binding, and the current approach is for sectoral regulators to examine the need for specific regulation in their fields. But I imagine we will soon be looking into the AI Act as well. So now I'll turn to Mr. Loevenich. Could you share with us Germany's policy regarding cybersecurity and AI? How, in your opinion, will the AI Act affect Germany's policy and regulation? How will you implement it into your legal system?
Daniel Loevenich:
Yeah, it's a very difficult question. Challenging. Since the AI Act, as you know, is brand new. But indeed, we in Germany are very much concerned with the European perspective on AI. And just let me stress the fact that, especially at the EU level, the Union and the standardization organizations, like CEN-CENELEC with JTC 21, as you know, do a great job on that. They very much focus on the ten issues addressed in the AI Act standardization request. And we in Germany are very much looking forward to implementing procedures and infrastructures based on our conformity assessment and especially certification infrastructures, to implement the technical basis for conformity assessment against these standards. But first of all, let me stress the fact that if we say AI risks are special risks to cybersecurity, then we always have in mind the technical system risks, like, for instance, a vehicle. And especially for embedded AI in such a technical system, we address all these risks based on our experience with the engineering and analysis of these technical systems. Or, in the case of a distributed IT system with a whole supply chain in the background, we have special AI components or modules, for instance cloud-based services, that play a key role for the whole supply chain. So we address the risks in terms of the whole supply chain of that application. And it's very important to be aware that when we in Germany consider AI risks, we have to concentrate on these AI modules within those complex systems. And we do that by mapping these application- or sector-based risks, which may of course be regulated by standards, down to technical requirements for the AI modules that are built in. And of course, we have a lot of stakeholders who are responsible and competent to address these risks.
And they are responsible for implementing the special AI countermeasures, technical countermeasures, within their modules during the whole life cycle, as we heard from the speakers already. And this is where we concentrate, especially in Germany, but also in the EU. The overall issue is to build a uniform AI evaluation and conformity assessment framework, independently of who is responsible for implementing the countermeasures against the cybersecurity risks and ensuring they work effectively. And this is a European approach. It is the number one key political issue in the German AI standardization roadmap. So if you ask me what we do next: yes, on the basis of the existing cybersecurity conformity assessment infrastructure, like attestation, second-party or third-party evaluation, certification, and so on, we try to address these special AI risks as an extension to the existing frameworks, implementing the EU AI standardization requests. Does that answer your question, basically?
Moderator – Daria Tsafrir:
Thank you. Thank you so much. And you actually brought me directly to the second round of our session, which is what's missing and what we can do better. As some of you mentioned already, one of our major concerns as a government is the protection and safety of critical infrastructures and, as a result, supply chains. And recently, we are also looking into SMEs. So I have two questions, if you could address them briefly. One is: what should governments be doing in the regulatory space to improve the cybersecurity of their systems? And when we talk about regulation, I think we need to address two subjects. One, we need to consider the risks of over-regulation. And we also need to ask: is AI perhaps too dynamic for regulation? The second question is: how much of the challenge should be addressed within international forums, including maybe binding treaties? So if you could address these questions, and maybe share an idea or a piece of advice for the future, if you have one, I'll be glad to hear it. I think we'll keep the same order, so we'll start with Dr. Al-Blushi and go on from there.
Bushra Al-Blushi:
Yeah. I think I will take it from the international perspective. As we can see in the current landscape, many AI acts are being developed and issued by different countries, and it's totally fragmented. It's very difficult for both providers and consumers to adapt at the end of the day. Assume that I'm providing those services or AI models in 100 countries and I'm facing 100 acts: which one should I comply with? Shouldn't we harmonize, or shouldn't we come up with at least minimum requirements for conformity assessment or for compliance, which would make it much easier for the producers to comply? And at the end of the day, it would also give consumers the confidence that this AI tool is internationally recognized by multiple countries. So that fragmentation, as I said, makes it really difficult for both consumers and providers to comply. International collaboration and the harmonization of AI standardization and compliance requirements would address those challenges. Actually, this was one of the papers that we published last year with the World Economic Forum, calling for a harmonized international certification scheme for different things. AI was not part of it, but at least it addressed the idea of how harmonization should be done and what the minimum requirements are. I'm not saying that it's the full certification that a country should rely on, but at least a minimum-requirements certification, or minimum-requirements conformity assessment, would make it easier for providers to comply and would also make our role as regulators much easier than having different standards, different requirements, and, let's say, different acts in different countries. In a nutshell, I think harmonization of international requirements is very important in order to move forward with the different AI acts that we have today.
Moderator – Daria Tsafrir:
Thank you. Mr. Honjo.
Hiroshi Honjo:
Yeah, Dr. Al-Blushi, you said almost everything I wanted to say, but basically, as a private company, we need international harmonization of all the regulations. In his keynote speech at the IGF, our Japanese Prime Minister, Kishida-san, said there will be AI regulations and guidelines in the G7 countries. That's OK, but it's not enough. There are more countries. So we need at least minimum requirements, minimum harmonization, to run a business across multiple countries. So I'm kind of looking forward to that. But what I don't want to happen is what happened with data protection, the GDPR. Some countries have very strong regulations; other countries have very soft law. And for a private company, that costs a lot. So I hope everything gets harmonized for AI. I'll stop here.
Moderator – Daria Tsafrir:
Thank you. Ms. Daor?
Gallia Daor:
Thank you. Yeah, so I think we've heard a lot about the fragmentation issue, and obviously that's a serious issue. I think it's difficult to talk in the abstract about whether we should or shouldn't have regulation, because these things are happening, so it's also worth talking about what we do with this. From the perspective of an international organization, I think we can talk about perhaps three roles of intergovernmental organizations and what they can do to help countries and organizations in this situation. One is mapping the different standards, frameworks, and regulations out there, trying to identify commonalities, perhaps minimum standards, and developing some sort of practical guidance from that. But I think another important role is the ability of intergovernmental organizations, and we see that here today, to convene the different stakeholders from the different countries and the different stakeholder groups, to flag their issues and have that conversation. And perhaps a third aspect is to advance the metrics and measurement of some of these issues that are very challenging. So in the context of our work on AI, we are developing, and will launch next month, an AI incidents monitor that looks at real-time, live data to see what actual incidents AI systems cause in the world. And I think that's maybe one step towards advancing that issue. Thank you.
Moderator – Daria Tsafrir:
Thank you. Mr. Loevenich?
Daniel Loevenich:
Yeah, we in Germany want to open markets to new technologies. We want people to be creative with AI technologies. We want SMEs to be on their way to using these technologies and even developing new ideas with them. So we really don't want to prescribe things; we just want to recommend that people and organizations do certain things. And obviously the first and overall instrument for this is international standardization, so that people can decide, based on different issues and their own risks and requirements, to use technologies in certain ways, and not to use them, or to misuse them, in other ways. Please allow some remarks on those standardization issues, especially at the ISO level. My experience is that there are a lot of people involved. Many of them are AI experts, but I can distinguish three schools of thought: the technical; the sectoral, meaning application-specific, in contrast to the technical, application-agnostic view; and the normative and ethical things on top. It's nothing new. These are three different aspects of AI technology, since these systems are data-driven. We have data in these systems, used as machine-understandable data, not just readable data, but understandable data. So people are very much responsible in using these technologies for specific purposes. Now then, if you have appropriate standards, and speaking of harmonization, you can do this at the technical level, like ISO does, like CEN-CENELEC does, like others do. It's very easy. If you come to application-specific requirements, you can standardize that. In Europe, we have ETSI, for instance, or ITU for the normative side, for the health care sector. Very effective. You can do that. And you can do it even at the application- and sector-specific levels. You can do regulation if you want, but let the market do it. Let the market decide on the use of AI-based systems.
And let the market and the customers decide: I want to use this technology in that way, which is regulated by blah, blah, blah. The third school of thought, or level, is very much focused on value-based things. There is society and all these kinds of organizations, and digital sovereignty and other aspects that play a key role in that. In the EU, for instance, you have 27 nations, if I'm right, with probably 27 different value-based governmental positions on that. So it's very, very difficult. Our time is coming to an end, so I'm going to stop here. But this is the difficult part. Yeah, it was very interesting.
Moderator – Daria Tsafrir:
Yes, thank you. I did steal back our five minutes, I have to say. But well, anyway, time flies when you’re having fun. And our time is unfortunately up. So I would like to thank you all for participating. And I know some of you had to wake up very, very early in the morning. So I really appreciate your effort. It was very interesting and very enlightening. And I hope to see you soon, maybe on the follow-up session.
Speakers
Abraham Zarouk
Speech speed
103 words per minute
Speech length
819 words
Speech time
475 secs
Arguments
Abraham Zarouk is the SVP technology of the Israel National Cyber Directorate
Supporting facts:
- Abraham Zarouk is the SVP technology of the INCD
Topics: INCD, Israel National Cyber Directorate
Abraham is responsible for day-to-day operation such as project implementation, IT operation, and providing support for national defense activities
Supporting facts:
- Abraham Zarouk is in charge of the Technology division within the INCD
Topics: project implementation, IT operation, national defense
Abraham is preparing the INCD for the future via R&D activities, promoting innovation, establishing national labs, and building national-level solutions
Supporting facts:
- Abraham Zarouk is in charge of preparing INCD for the future
- He oversees R&D activities, promotes innovation, establishes national labs, and builds national-level solutions
Topics: R&D activities, Innovation, National Laboratory
INCD focuses on three main areas in addressing AI
Supporting facts:
- AI is being deployed in many critical systems without security in mind
- INCD looks at weaknesses in AI algorithms, infrastructure, data sets
- INCD has established a national lab for AI resilience
Topics: AI security, AI resiliency, AI-enhanced attacks
Report
Abraham Zarouk is the Senior Vice President of Technology at the Israel National Cyber Directorate (INCD). In this role, he oversees the day-to-day operations of the Technology division, focusing on project implementation, IT operations, and support for national defense activities.
Zarouk also plays a key role in preparing the INCD for the future by promoting innovation and establishing national labs for research and development. The INCD places a strong emphasis on addressing weaknesses in artificial intelligence (AI). They examine vulnerabilities in AI algorithms, infrastructure, and data sets, and have established a dedicated national lab to enhance AI resilience.
Through collaborations with industry leaders like Google, the INCD is actively promoting the use of AI-powered technologies and driving innovation in the field of cybersecurity. In addition to their proactive approach, the INCD also acknowledges the potential threats posed by AI-based attackers.
As the use of AI tools among attackers increases, the INCD recognizes the need to stay vigilant and develop strategies to counter these sophisticated attacks. Overall, Abraham Zarouk’s role as the Senior Vice President of Technology at the INCD is crucial in ensuring smooth operations and driving the organization’s preparedness for future challenges.
The INCD’s focus on addressing AI weaknesses, collaboration with industry partners, and recognition of potential AI-based threats highlights their commitment to cybersecurity excellence.
Asaf Wiener
Speech speed
133 words per minute
Speech length
133 words
Speech time
60 secs
Arguments
Asaf Wiener is a representative of the Israel Internet Association
Supporting facts:
- The Israel Internet Association is the ccTLD manager of .il, the Israeli national TLD
Topics: Israel Internet Association, CCTLD
The Israel Internet Association is also the Israeli chapter of Internet Society
Topics: Internet Society, Digital Inclusion, Education
His organization works on promoting digital inclusion, education and cybersecurity for citizens in Israel
Topics: Digital Inclusion, Education, Cybersecurity
Dr. Asaf Wiener’s organization also works on digital gaps and other public initiatives
Topics: Digital Inclusion, Internet Society, Public Initiatives
Report
The Israel Internet Association, represented by Asaf Wiener, serves as the country-code top-level domain (ccTLD) manager for .il, Israel's national TLD. As the manager of this important domain, the association plays a crucial role in overseeing internet activities in Israel.
Furthermore, the Israel Internet Association is the Israeli chapter of the Internet Society, demonstrating their commitment to promoting various aspects of the digital landscape. Specifically, they focus on digital inclusion, education, and cybersecurity within the country. These areas are of critical importance in today’s interconnected world, and the association strives to bridge the digital divide, ensure access to quality education, and enhance cybersecurity measures for Israeli citizens.
Dr. Asaf Wiener’s organization also works towards addressing digital gaps and advancing public initiatives. This highlights their dedication to narrowing the disparities in access and opportunities that exist in the digital realm. By engaging in various public initiatives, they aim to create a more equitable digital landscape for all.
Additionally, Dr. Asaf Wiener demonstrates a strong inclination towards public engagement and participation. He actively invites anyone interested in learning more about their activities to approach him for further details, indicating a desire to foster collaboration and partnerships in pursuit of their mission.
In conclusion, the Israel Internet Association, led by Asaf Wiener, fulfills the crucial role of CCTLD manager for the IL, representing the Israeli chapter of the Internet Society. Their focus on digital inclusion, education, and cybersecurity, and their commitment to addressing digital gaps and engaging the public, highlight their dedication to advancing the digital landscape in Israel.
Bushra Al-Blushi
Speech speed
165 words per minute
Speech length
1650 words
Speech time
601 secs
Arguments
Bushra Al-Blushi is the Head of Research and Innovation at Dubai Electronic Security Center
Supporting facts:
- Bushra Al-Blushi is also the Director General Senior Consultant in the center
Topics: Research, Innovation, Cyber Security
AI risks should be navigated through gradual, incremental cybersecurity rules and regulations
Supporting facts:
- In a city like Dubai, they are always at the forefront of technological transformation
- Their role as cybersecurity regulator is to enable the critical national infrastructures to use new technologies, with right controls around cybersecurity
Topics: Cybersecurity, AI risks, Regulation
Difference between security of AI models and consumers of AI models
Supporting facts:
- AI at the end of the day is like any other software; what makes it different is the risk it might generate and the way it has been deployed
- The security controls of an AI model greatly depend on where and how it is deployed
Topics: AI Security, AI Consumers
AI security includes base principle layer, AI-specific layer, and sector-specific layer
Supporting facts:
- Develop the basic best practices and principles, like any normal software development life cycle, secure by design, supply chain security
- A second layer specific to the AI itself, and a third layer that can be sector specific
Topics: AI Security, Policies, Standards, Regulations
Harmonization of global AI regulations and standards is needed
Supporting facts:
- Current situation of AI regulations and standards is fragmented with different countries having different rules
- Difficult for providers and consumers to comply with wide variety of regulations
- Harmonization would make compliance easier for providers and give consumers confidence in internationally-recognized AI tools
Topics: AI regulation, Standardization, International collaboration
Report
Bushra Al-Blushi is an influential figure in the field of cybersecurity and currently serves as the Head of Research and Innovation at the Dubai Electronic Security Center. She has made significant contributions to the industry through her leadership positions. One of Al-Blushi’s notable achievements is the establishment of the Dubai Cyber Innovation Park, which aims to promote innovation and collaboration in the field of cybersecurity.
Her involvement in founding this park demonstrates her commitment to advancing the industry and creating opportunities for technological development. Al-Blushi’s expertise is also recognized internationally, as she is an official UAE member of the World Economic Forum Global Future Council on Cyber Security.
This highlights her contributions to global discussions and initiatives surrounding cybersecurity. Furthermore, Al-Blushi’s extensive involvement in advisory boards, both nationally and internationally, reflects her broad knowledge and the trust placed in her expertise. These advisory roles enable her to shape policies and strategies in the field, further solidifying her thought leadership and influence.
In terms of AI risks, Al-Blushi advocates for a gradual and incremental approach to cybersecurity rules and regulations. She emphasizes the importance of identifying and mitigating potential risks posed by AI through appropriate controls and regulations. Al-Blushi also highlights the significance of considering how AI models are deployed and how deployment affects security controls.
She emphasizes the need to address the unique risks associated with AI during development and implementation, ensuring that adequate security measures are in place. Regarding policy and regulatory approaches, Al-Blushi supports a risk-based approach that strikes a balance between control and security issues.
She contributed to Dubai’s 2018 AI security ethics and guidelines, which remain applicable to generative AI today. Al-Blushi emphasizes the need for global harmonization of AI regulations and standards. Currently, different countries have fragmented regulations, making compliance challenging for providers and consumers.
Harmonization would simplify compliance and instill confidence in internationally recognized AI tools. To achieve this, Al-Blushi suggests international collaboration and the establishment of an international certification or conformity assessment scheme for AI. This would ensure that AI systems meet minimum security requirements, facilitate compliance for providers, and enable effective enforcement of industry standards by regulatory bodies.
In conclusion, Bushra Al-Blushi’s leadership and expertise in cybersecurity are evident through her various roles and initiatives. Her emphasis on gradual, incremental cybersecurity rules and regulations for AI reflects a balanced approach that prioritizes both innovation and security. Al-Blushi’s advocacy for global harmonization of AI regulations and the establishment of international certification schemes further underscores her commitment to promoting the secure and responsible use of AI technologies.
Daniel Loevenich
Speech speed
98 words per minute
Speech length
1083 words
Speech time
664 secs
Arguments
Germany is actively implementing AI risks management within complex technical systems
Supporting facts:
- Germany is focusing on AI components or modules within complex technical systems to address AI risks.
- They are actively implementing countermeasures during the whole lifecycle of AI systems.
Topics: AI regulations, AI risks, AI Act, Standardization
Germany is working towards uniform AI evaluation and conformity assessment framework
Supporting facts:
- To address AI risks, Germany is extending existing cybersecurity conformity assessment infrastructure.
- Germany is working towards unifying AI evaluation and conformity assessment in line with EU AI Act standardization requirements.
Topics: AI regulations, AI Act, Conformity assessment
Germany is open to new technologies and wants to promote creativity with AI technologies among SMEs.
Supporting facts:
- Germany is promoting AI technologies and encouraging SMEs to use these technologies.
Topics: Germany, AI technology, SMEs
AI technologies are data driven and their usage responsibility rests on people’s shoulders.
Topics: AI technology, Responsibility, Data-driven
Standardizing AI usage at a value-based level is difficult due to differences in value-based governmental positions.
Supporting facts:
- Daniel stressed the challenge of normative and ethical issues concerning AI usage arising from different societal values.
Topics: AI technology, Standardization, Government
Report
Germany is taking proactive measures to manage the risks associated with artificial intelligence (AI) within complex technical systems. The country is specifically focusing on the AI components or modules within these systems. This approach highlights Germany’s commitment to addressing the potential dangers and challenges that AI can present.
To further mitigate these risks, Germany is working on extending its existing cybersecurity conformity assessment infrastructure. This move aims to establish a robust framework to evaluate and ensure the conformity of AI technologies. The country is also striving to unify AI evaluation and conformity assessment according to the standards set by the EU’s AI Act.
This step demonstrates Germany’s dedication to aligning its evaluation processes with international norms and regulations. The implementation of the AI Act is deemed crucial for managing AI risks in Germany. This legislation, which the country is actively working towards, will play a vital role in addressing technical system risks across the entire supply chain of AI applications.
By incorporating this act, Germany seeks to establish a comprehensive and effective framework for managing AI-related risks. Furthermore, Germany is actively promoting the adoption of AI technologies, particularly among small and medium-sized enterprises (SMEs). The country recognizes the potential benefits that these technologies can bring and encourages businesses to embrace them.
This approach highlights Germany’s openness to innovation and its efforts to support the growth of AI within its industries. There is also support for international standardization in guiding the use of AI technologies. This standpoint suggests that by establishing global standards, individuals can have more control over how AI technologies are utilized.
This commitment to international cooperation reinforces Germany’s desire to foster responsible and ethical AI practices. It is important to acknowledge that AI technologies are heavily reliant on data, and their responsible usage ultimately rests on individuals. Germany recognizes the responsibility that comes with the use of AI systems and the need for individuals to exercise caution and ethics when handling data-driven technologies.
Another noteworthy observation is the call for the market to be the determining factor in deciding the use of AI-based systems. Germany suggests that market forces and customer preferences should dictate the direction of AI technology, promoting a more customer-centric approach to AI adoption.
Nevertheless, standardizing AI usage at a value-based level can be challenging due to the differences in societal values. The discrepancy in value-based governmental positions creates a complex landscape for consensus-building and establishing universal standards for AI application. Germany recognizes this challenge and the need for careful consideration of normative and ethical issues surrounding the use of AI technologies.
In conclusion, Germany is actively implementing AI risk management within complex technical systems, with a particular focus on AI components. The country is working towards unifying evaluation processes and conforming to international standards through the AI Act. Germany also promotes the adoption of AI technologies among SMEs and supports international collaboration in establishing standards for responsible AI usage.
However, the challenge of aligning value-based norms and standards remains an ongoing concern for AI implementation.
Gallia Daor
Speech speed
163 words per minute
Speech length
1509 words
Speech time
554 secs
Arguments
The OECD has set principles for trustworthy AI, prioritizing robustness, security, and safety
Supporting facts:
- In 2019, the OECD was the first intergovernmental organization to adopt principles for artificial intelligence.
- These principles have now been adopted by 46 countries and also serve as the basis for the G20 AI principles.
- One of the principles focuses on robustness, security, and safety.
Topics: Artificial Intelligence, Safety, Risk management, Security
OECD emphasizes risk management in the lifecycle of AI systems
Supporting facts:
- The principles suggest a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis.
- OECD published a framework for the classification of AI systems.
- OECD has a comprehensive network of AI experts that help to forward the work.
Topics: Artificial Intelligence, Risk management
Digital security including cybersecurity and protection against vulnerabilities is a significant focus for OECD.
Supporting facts:
- OECD has a framework for digital security looking at risk management, national digital security strategies, market-level actions, and technical aspects including vulnerability treatment.
- OECD has an annual event, the global forum on digital security, to discuss topical issues in digital security.
Topics: Digital security, Cybersecurity, Vulnerabilities
Mapping the different standards, frameworks, and regulations can help better understand their commonalities and develop practical guidance
Topics: international regulation, standardisation, technology governance
Role of intergovernmental organizations significant in convening stakeholders, thus enabling conversation on their respective issues
Topics: intergovernmental organizations, stakeholder engagement
Development of metrics and measurement of challenging issues is crucial, as evidenced by the AI Incidents Monitor
Supporting facts:
- They are developing an AI Incidents Monitor to look at real-time data on incidents caused by AI systems
Topics: metrics development, AI incident monitor
Report
The Organisation for Economic Co-operation and Development (OECD) has played a significant role in guiding the development and deployment of artificial intelligence (AI). In 2019, the OECD became the first intergovernmental organization to adopt principles for trustworthy AI. These principles, which focus on the aspects of robustness, security, and safety, have since been adopted by 46 countries.
They also serve as the basis for the G20 AI principles, highlighting their global relevance and influence. The OECD’s emphasis on robustness, security, and safety in AI is crucial in ensuring the responsible development and use of AI technologies. To address the potential risks associated with AI systems, the OECD proposes a systematic risk management approach that spans the entire lifecycle of AI systems on a continuous basis.
By adopting this approach, companies and organizations can effectively identify and mitigate risks at each phase of an AI system’s development and deployment. To further support the responsible development and deployment of AI, the OECD has also published a framework for the classification of AI systems.
This framework aids in establishing clear and consistent guidelines for categorising AI technologies, enabling stakeholders to better understand and evaluate the potential risks and benefits associated with different AI systems. The OECD recognises that digital security, including cybersecurity and the protection against vulnerabilities, is a significant concern in the era of AI.
To address this, the OECD has developed a comprehensive framework for digital security that encompasses various aspects such as risk management, national digital security strategies, market-level actions, and technical aspects, including vulnerability treatment. Moreover, the OECD hosts an annual event called the Global Forum on Digital Security, providing an opportunity for global stakeholders to discuss and address key issues related to digital security.
Interestingly, AI itself plays a dual role in digital security. While AI systems can themselves introduce vulnerabilities, particularly through data poisoning and the malicious use of generative AI, they can also be utilised as tools for enhancing digital security.
This highlights the need for robust security measures and responsible use of AI technologies to prevent malicious attacks while harnessing the potential benefits AI can provide in bolstering digital security efforts. In addition to addressing risks and emphasising security, the OECD recognises the importance of international cooperation, regulation, and standardisation in the AI domain.
The mapping of different standards, frameworks, and regulations can help stakeholders better understand their commonalities and develop practical guidance for the responsible development and deployment of AI technologies. Intergovernmental organisations, such as the OECD, play a vital role in convening stakeholders and facilitating conversations on respective issues.
By bringing together governments, industry experts, and other relevant actors, intergovernmental organisations enable collaboration and foster partnerships for addressing the challenges and opportunities presented by AI technologies. Finally, the development of metrics and measurements is crucial for effectively addressing and evaluating the impact of AI technologies.
The OECD is actively involved in the development of such metrics, with one notable example being the AI Incidents Monitor. This initiative aims to capture and analyse real-time data and incidents caused by AI systems, allowing for a better understanding of the challenges and risks associated with AI technologies.
In conclusion, the OECD has made significant contributions to the development and governance of AI technologies. Through the establishment of principles for trustworthy AI, the emphasis on risk management, the focus on digital security, the recognition of AI’s dual role in security, and the efforts towards international cooperation and metric development, the OECD is actively working towards ensuring the responsible and beneficial use of AI technologies on a global scale.
Hiroshi Honjo
Speech speed
98 words per minute
Speech length
942 words
Speech time
578 secs
Arguments
Hiroshi Honjo is the Chief Information Security Officer for NTT Data.
Supporting facts:
- NTT Data is a Japanese-based IT company
- NTT Data has 230,000 employees globally
- Employees based in Japan make up only a small share of the global workforce
Topics: CISO, NTT Data
AI governance guidelines are critical within the company encompassing various aspects like privacy, ethics and technology.
Supporting facts:
- Generative AI engages in activities like code generation, which raises intellectual property issues
Topics: AI governance, Privacy, Ethics, Technology
Cybersecurity intricacies apply to generative AI as well, and are therefore just as crucial to tackle.
Supporting facts:
- There have been attacks on large language models
Topics: Cybersecurity, Generative AI
Determining what law or regulation is applied to cross-border data transfer is complex.
Supporting facts:
- Legal issues similar to those arising with cloud computing are pertinent to data handled by generative AIs
Topics: Data privacy, Cross-border transfer, International law
Hiroshi Honjo advocates for international harmonization of AI regulations
Supporting facts:
- Japanese Prime Minister Kishida mentioned AI regulations in his IGF keynote speech
- Honjo pointed out that guidelines in G7 countries were good but not sufficient
Topics: AI, Regulations, International Harmonization
Report
Hiroshi Honjo is the Chief Information Security Officer for NTT Data, a Japanese-based IT company with a global workforce of 230,000 employees. NTT Data is actively involved in numerous AI and generative AI projects for their clients. Honjo believes that AI governance guidelines are crucial for the company, covering important aspects like privacy, ethics, and technology.
These guidelines promote responsible and ethical practices in AI development and usage. In the realm of generative AI, Honjo highlights the significance of addressing cybersecurity intricacies, particularly in light of recent attacks on large language models. This underscores the importance of tackling cybersecurity issues within the context of generative AI.
One complex issue in handling data by generative AIs is determining the applicable law or regulation for cross-border data transfers. Similar to challenges faced by private companies managing multinational projects, NTT Data must navigate various regulations and ensure compliance with jurisdiction-specific requirements.
Honjo advocates for international harmonization of AI regulations, arguing that while existing guidelines in G7 countries are good, they are not sufficient. He supports the establishment of international standards that govern the development, use, and deployment of AI, aimed at promoting fairness and consistency in AI regulation.
Additionally, Honjo expresses his concern regarding uneven data protection regulations like the General Data Protection Regulation (GDPR). He acknowledges that differing data protection regulations across countries impose significant costs on businesses. To mitigate these challenges and ensure a level playing field for businesses operating in multiple jurisdictions, Honjo advocates for consistent and harmonized data protection measures.
In summary, Hiroshi Honjo, as the Chief Information Security Officer for NTT Data, emphasizes the necessity of AI governance guidelines, the need to address cybersecurity intricacies in generative AI, the complexity of cross-border data transfers, and the importance of international harmonization of AI regulations.
His commitment to consistent data protection regulations reveals his dedication to reducing costs and promoting fairness within the industry.
Moderator – Daria Tsafrir
Speech speed
143 words per minute
Speech length
1024 words
Speech time
429 secs
Arguments
Government’s major concerns include the protection and safety of critical infrastructures and supply chains
Topics: Government regulation, Critical Infrastructure, Supply Chains
Considering risks of over-regulation and dynamic nature of AI in terms of regulation
Topics: AI regulation, Risk of Over-regulation, Dynamic nature of AI
Need to address cybersecurity challenges within international forums and possibility of binding treaties
Topics: Cybersecurity, International forums, Binding treaties
Report
During the discussions, three main topics were examined in depth. The first topic focused on the concerns of the government regarding the protection and safety of critical infrastructures and supply chains. It was acknowledged that governments have a major role in ensuring the security of crucial infrastructures and supply chains, which are vital for the functioning of industries and economies.
However, no specific supporting facts or evidence were provided to substantiate these concerns. The second topic revolved around the risks of over-regulation and the dynamic nature of AI. Participants expressed the need to strike a balance between regulating AI to prevent potential negative consequences and allowing for its innovative and transformative potential.
The dynamic nature of AI poses a challenge in terms of regulation, as it constantly evolves and adapts. Again, no supporting facts were provided to further illustrate these risks, but it was acknowledged as a valid concern. The third topic that was discussed focused on cybersecurity challenges.
It was highlighted that addressing these challenges requires collaboration within international forums and the possibility of establishing binding treaties. The need for such cooperation arises from the global nature of cyber threats and the shared responsibility in mitigating them. However, no supporting evidence or specific examples of cybersecurity challenges were referred to.
Throughout the discussions, all speakers maintained a neutral sentiment, meaning they did not express strong support or opposition to any particular viewpoint. This could indicate that the discussions were conducted in an objective manner, with an emphasis on highlighting different perspectives and concerns rather than taking a definitive stance.
Based on the analysis, it is evident that the discussions centered around key areas of government concerns, the risks associated with over-regulation of AI, and the need for international cooperation in addressing cybersecurity challenges. However, the absence of specific supporting facts or evidence detracts from the overall depth and credibility of the arguments presented.
Moderator 1
Speech speed
164 words per minute
Speech length
17 words
Speech time
6 secs
Report
During his brief intervention, Abraham introduced himself and verified that he was audible. At only 17 words, the contribution contained no substantive arguments or positions to summarise.