Workshop 8: How AI impacts society and security: opportunities and vulnerabilities

13 May 2025 14:30h - 15:30h


Session at a glance

Summary

This workshop focused on the impact of AI on society and security, exploring both opportunities and vulnerabilities. Participants discussed the challenges of AI governance, including the need for ethical frameworks, transparency, and regulation. The discussion highlighted the dual-use nature of AI technology, which can be used for both beneficial and harmful purposes. Experts emphasized the importance of education and skills development to address the gaps between academic training and industry needs in AI.


The conversation touched on various AI-related threats, with deepfakes and data poisoning identified as major concerns. Participants debated the balance between security measures and ethical considerations in AI deployment. The need for international cooperation in developing AI governance frameworks was stressed, with examples such as the Council of Europe’s AI Framework Convention mentioned.


Experts also discussed the potential for AI misuse in areas like hate speech and cybercrime, as well as the challenges in detecting and regulating AI-generated content. The importance of public-private collaboration in developing AI curricula and governance models was emphasized. Participants agreed on the need to update governance models to better reflect evolving social structures and technological advancements.


The discussion concluded with a call for closing skill gaps, revolutionizing governance approaches, and finding a balance between security needs and ethical considerations in AI development and deployment. Overall, the workshop underscored the complex and multifaceted nature of AI’s impact on society, highlighting the need for ongoing dialogue and collaborative efforts to address emerging challenges.


Key points

Major discussion points:


– The challenges and opportunities of AI governance, including balancing regulation with innovation


– The use of AI in cybersecurity, both as a tool for defense and potential weapon for attacks


– The need for education and digital literacy to help society navigate AI technologies responsibly


– Ethical considerations around AI, including issues of bias, transparency, and dual-use potential


– The interplay between security, ethics, and human rights in AI development and deployment


Overall purpose/goal:


The discussion aimed to explore the societal and security impacts of AI, examining both opportunities and vulnerabilities. The goal was to foster dialogue on how to govern and utilize AI technologies responsibly while mitigating risks.


Tone:


The tone was primarily analytical and cautionary, with speakers highlighting both the promise and perils of AI. There was an undercurrent of urgency about the need to address AI governance challenges. Toward the end, the tone became more constructive as participants discussed potential solutions and next steps, though some uncertainty remained about how to effectively tackle the complex issues raised.


Speakers

– Janice Richardson: Working group chair of IS3C Working Group 2 on education and skills, has a company called Insight in Luxembourg


– Aldan Creo: Technology research specialist at Accenture Labs in Dublin, studies AI fairness and security


– Chris Kubecka: American computer security researcher and cyber warfare specialist


– Remote moderator: Alice Marns


– Thomas Schneider: Ambassador from the Swiss Confederation, worked on the AI Framework Convention of the Council of Europe


– Piotr Słowiński: Senior cybersecurity expert at NUSC, specializing in legal and strategic analysis of cybersecurity and emerging technologies


– Jörn Erbguth: Reporter and focal point for the session


– Wout de Natris – van der Borght: Internet governance expert and consultant, coordinator of IGF Dynamic Coalition Internet Standards, Security and Safety


Additional speakers:


– Frances: Representative from YOUthDIG


– Jasper Finke: Representative from German government, involved in negotiating AI Framework Convention


– Mila Vidina: Representative of European Network of National Equality Bodies


Full session report

AI Impact on Society and Security: A Comprehensive Discussion


This workshop, remote-moderated by participants in EuroDIG’s YOUthDIG youth segment, brought together experts to explore the multifaceted impact of artificial intelligence (AI) on society and security. The discussion, co-moderated by Wout de Natris – van der Borght and Piotr Słowiński and featuring Janice Richardson, chair of IS3C Working Group 2 on education and skills, Ambassador Thomas Schneider of the Swiss Confederation, and security researcher Chris Kubecka, delved into the opportunities and vulnerabilities presented by AI technologies, with a focus on governance challenges, security threats, educational needs, and ethical considerations.


Governance and Regulation


A central theme was the need for updated governance models to address AI challenges. Thomas Schneider, Ambassador from the Swiss Confederation, emphasised that current structures are inadequate for dealing with AI complexities. He highlighted the Council of Europe AI Framework Convention as a potential model for future governance. Piotr Słowiński, a senior cybersecurity expert at NUSC, stressed the importance of balancing innovation with protection in AI governance.


Janice Richardson proposed creating “a giant hub where industry really starts talking to governance, to governments, where also young people who are using this in very different ways can actually also have their say.” This suggestion highlighted the need for inclusive dialogue in shaping AI policies, including consideration of the EU AI Act.


However, there were disagreements on specific regulatory approaches. While an audience member argued for universal regulation of deepfakes, Schneider cautioned against blanket bans, emphasising the dual-use nature of AI technologies.


Security Threats and Challenges


Chris Kubecka, an American computer security researcher, highlighted deepfakes and data poisoning as major threats. She discussed her work on creating a “zero-day GPT” and research on AI-generated malware, warning that AI systems can be manipulated to generate harmful content and bypass ethical safeguards.


Słowiński added that detecting AI-generated content remains a significant challenge, particularly in the context of financial crimes and scams. The speakers agreed on the need for global collaboration in developing better detection tools for deepfakes and AI-generated malware.


Education and Skills Development


The discussion highlighted gaps between AI skills taught in educational institutions and those required by industry. Richardson emphasised that the education system needs to change to address 21st-century AI literacy. Schneider added that most politicians and citizens lack sufficient understanding of AI issues, highlighting the need for broader public education.


Jörn Erbguth, the session’s reporter, stressed the importance of collaboration between education providers, government bodies, and industry in developing AI curricula.


Ethical Considerations


The interconnectedness of ethics and security in AI development was a key point. Richardson suggested watermarking or labelling AI-generated content to increase transparency, although Kubecka cautioned that AI systems could potentially bypass such safeguards.


Schneider emphasised balancing security needs with ethical considerations in AI applications. This led to a disagreement when an audience member challenged Schneider’s classification of security as a human right, arguing that it is instead a fundamental obligation of the state.


Mentimeter Questions and Responses


The session incorporated interactive Mentimeter questions, allowing participants to share their views on AI-related issues. These responses provided valuable insights into audience perspectives and concerns, enriching the overall discussion.


Conclusion and Future Directions


The workshop concluded with a call for action on several fronts. Participants agreed on the need to close skill gaps, revolutionise governance approaches, and find a balance between security needs and ethical considerations in AI development and deployment.


Key takeaways included the urgent need for updated AI governance models, addressing the skills gap in AI education, and developing better tools to detect and mitigate AI-generated security threats. The discussion also highlighted several unresolved issues, such as effectively regulating deepfakes without impacting legitimate uses, and increasing AI literacy among politicians and the general public.


Richardson’s comment that “AI is only a tool. It’s the user who is making it a good tool, a bad tool” encapsulated a central theme, emphasising human agency and responsibility in AI use. Schneider noted that addressing these challenges “may take a generation or two,” but the consensus was clear that modernising governance models and fostering collaboration between stakeholders is crucial for harnessing AI’s potential while mitigating its risks.


The workshop underscored the complex nature of AI’s impact on society, highlighting the need for ongoing dialogue and collaborative efforts, including work such as the HUDERIA methodology mentioned by Schneider, to address emerging challenges in this rapidly evolving field.


Session transcript

Remote moderator: Good afternoon everyone and welcome to workshop 8, How AI Impacts Society and Security: Opportunities and Vulnerabilities. My name is Alice Marns and I will be remote moderating this session. And behind the scenes, I am joined by Neha Chablani, the online session host. We are both participants in this year’s YOUthDIG, the youth segment of EuroDIG. And I’ll briefly go over the session rules. So the first one, please enter with your full name. To ask a question, raise your hand using the Zoom function. You will be unmuted when the floor is given to you. When speaking, switch on the video, state your name and affiliation. And finally, do not share the links to the Zoom meetings, not even with your colleagues. Thank you and I’ll pass.


Wout de Natris – van der Borght: Thank you. Welcome to workshop 8. As my colleague next to me just said, How AI Impacts Society and Security: Opportunities and Vulnerabilities. My name is Wout de Natris van der Borght and I am your co-moderator together with Piotr Słowiński, who is in Poland at this moment. Piotr is a senior cybersecurity expert at NUSC, the National Research Institute, specializing in legal and strategic analysis of cybersecurity and emerging and disruptive technologies as part of the cyber policy team at NUSC. And we organized this session together with Aldan Creo, who’s also participating online and who is a technology research specialist at Accenture Labs in Dublin. He studies AI fairness and security, particularly when it comes to the detectability of AI-generated text. And myself, I’m an internet governance expert and consultant, but also coordinator of the IGF Dynamic Coalition on Internet Standards, Security and Safety, which advocates the deployment of existing security-related internet standards so that we all become a lot more secure and safer on the internet and in the world at large. About the topic itself, AI is all around us and has been for far, far longer than most people realize. For most people it’s a large language model that was introduced one and a half years ago and all of a sudden we were all working with this model and that was AI. But AI is not just a large language model. It’s part of all sorts of algorithms that determine what you see on social media, what your other online experiences are, what is being monitored around the world and what is inside of your devices and even in military equipment. In fact, it may partly determine what people do and what people think. The development of AI comes with opportunities and challenges, and the focus is often put on the challenges and the dangers for society and for individual jobs, right up to fears of a Skynet from the Terminator movies. We will be discussing this topic with you also through Mentimeter, with all of you in the room, so be sure to use the QR code Piotr is going to show on the screen later after his presentation, and with the three participants that I will introduce to you. On my left is Janice Richardson and Janice is the working group chair of IS3C Working Group 2 on education and skills but also has a company called Insight in Luxembourg. Next to her is Thomas Schneider, Ambassador Schneider, I should say, from the Swiss Confederation, and he is also one of the people who assisted in getting the AI, what was it, what is it called, the Framework Convention of the Council of Europe, and he was the chair of that process. And to my left here is Chris Kubecka, who is an American computer security researcher and cyber warfare specialist. First, I go to Piotr in Warsaw and give the hand to you to introduce the session further, Piotr, and then explain what we’re going to do on Mentimeter. Thank you.


Piotr Słowiński: Thank you, Wout. I will share my screen right now. And I will just need a confirmation that you all see the screen.


Wout de Natris – van der Borght: Piotr, we need you to put up your sound a little bit, I think.


Piotr Słowiński: Can you hear me now better?


Wout de Natris – van der Borght: I can, but yes, I get thumbs up, so go ahead, Piotr.


Piotr Słowiński: Okay, great. And I think that you can see my screen, at least you should by now. So, yeah, welcome. I will just dive deep right into the subject. I will be setting the scene just a little bit, just to give you some thoughts that we would like to discuss with you, with all of you, because we would like the room to make a big contribution to our workshop. And let’s start with setting the scene: when we were organizing this session, we thought about pillars that we would like to discuss. And of course, the first pillar that comes to mind when we talk about AI is AI governance and regulations. It may not be the most interesting topic for many people. Of course, me being a lawyer and having the soul of a lawyer, it’s always, yeah, we need to talk about governance and regulations. And, of course, in this field, in this area, within this pillar, one of the most important things is this clash of interests that we have. It’s always going to be there, regardless of the topic that we discuss, whether it’s going to be the Internet in general or just AI and new and emerging technologies. And the question that we always need to ask is whether we want to regulate something or not to regulate, or when does regulation become over-regulation, when does it become overly burdensome for the private sector, for example, and the public sector as well. And this is always the general question that we ask: how to implement regulations so that they will be a facilitator for innovation and not just the opposite of a facilitator. And, of course, we need to bear in mind also the role of the state in safeguarding rights, liberties and society’s well-being, as well as the role of international companies in developing, in this example, AI tools and systems versus the role of the states. And just to give you these quick scenarios, they’re going to be available for you in the repository on the EuroDIG wiki, so you don’t have to read everything very closely. We have tools that are being implemented even within the public sector. And the main problem is, for example, that non-public documents or non-public information may be put into such tools, such large language models, and can be processed by them. This is a problem that we also need to discuss and it’s going to have to be regulated, for example. Another problem is, for example, law enforcement use of AI in predictive policing or remote biometric identification. And the second pillar, of course, the pillar that is the most interesting for me, working in cybersecurity right now, is the use of AI in cybersecurity. And there are so many dimensions of AI in cybersecurity. It’s not just what I have put here, red team versus blue team. That’s quite obvious, of course, but we also have AI jailbreaking or poisoning. At what point does it become a problem for just users, just companies, or rather the whole society? This is the question that we need to ask ourselves. How do we differentiate between the intentional abuse of AI, AI tools, AI systems, and the situation in which normal use ends de facto in abuse? This is a very important thing that we need to consider. Also, AI may be a valuable partner both in crime, so to speak, and as an ally in our defences. How do we implement it ethically, responsibly, and effectively? And we also need to ask a question: is AI in cyber security a sledgehammer to crack a nut? How much is AI really needed in cyber security right now? And how much of a game changer can it really be?
This is just a question that we need to ask ourselves. I’m not looking to impose on you any kind of answer right now. I hope we can reach some kind of agreement today. And you have quite basic red and blue scenarios in which AI is a facilitator for a person, a script kiddie who doesn’t have much extensive technical knowledge and can utilize automated malware creation for use in cyber attacks. And of course, blue team use in SOC teams, where it can automate a lot of the various ideas, issues, and challenges that the SOC team may encounter. And last but not least, also a very big area that we need to consider is international cooperation and education. We could have easily split them into separate areas, but I just wanted to put them in connection, because they are also connected with all the other pillars that we discussed already. Strategies for AI development and implementation are a global challenge, not only regional or local. The Council of Europe Convention is just one dimension of it. We have the AI Act on the EU level. Here is the battleground, really a battleground, or what may become a battleground, for states versus companies. And at the same time, there are a lot of questions, not only regarding AI, but also cybersecurity in general. Are EU-level regulations a facilitation or just a stick in the spokes of various sectors, various entities? And of course, sorry, of course, the role of international organizations and communities is quite extensive, but how is it viewed and how can it be viewed by different stakeholders? This is also a very important factor to consider.


Wout de Natris – van der Borght: Piotr, sorry, I can give you one more minute.


Piotr Słowiński: Yes, I’m just wrapping it up. Thank you. And this is what we also need to… The big issue that we need to consider is the global ethical framework for AI. Is it really needed? How much is it needed? If yes, and so on. And also, not to forget that we need to protect minors and vulnerable groups. There is a problem with education and competence gaps, digital illiteracy. It’s an issue that we have talked about for years now and it still is a thing, especially since AI tools have been evolving very rapidly and are being used very extensively. And last but not least, ethics and society’s well-being. Is it just another phase of security or a completely different area that we need to consider? With these things, with this final sentence, this is the end of my presentation, just setting the scene for our conversations. I hope the discussion will be very fruitful. Here are the instructions to join… you can either enter the site on your computer and input the code or scan the QR code if you are able to do so. So I will leave it for now and in about 10 seconds I will share the Menti that we have prepared for you. Thank you, Wout.


Wout de Natris – van der Borght: Thank you, Piotr. Please scan the QR code or put in the code and then we’ll run the first questions so that you can actively participate in the session. And as you will see, there are positive questions and the negative side questions, so please join us. Can you put the first question on, Piotr?


Piotr Słowiński: Yes, I’m just… Yes, now it should be… Oh, sorry. Sorry about this. Now… It should be right now.


Wout de Natris – van der Borght: This is the QR code.


Piotr Słowiński: Yes, then do you see the… It should be within the web browser, so it should be visible. Okay, great. I get thumbs up, so I suppose it’s okay. So we will start with the first question and the questions will supersede the questions and answers from our participants. So the first question is, what comes to mind when you hear AI and cyber security together? And yes, you can start answering right now.


Wout de Natris – van der Borght: Okay, well, people are answering. In the meantime, I will start introducing Thomas, our first keynote speaker. Thomas, you are going to address a few topics for us, and I think that with Thomas’s background in government and having worked on the Framework Convention, we would be interested to learn what will be the main challenges for AI governance, and will the current international and multinational, for example EU-level, and national-level regulations or best practices be effective in terms of mitigating detrimental use of AI systems and solutions? And finally, how to properly address, in terms of governance, the possibility of dual use of AI? So Thomas, the floor is yours.


Thomas Schneider: Thank you. Good afternoon, everyone. I will try to say something reasonable and hopefully helpful in this very complicated situation, because, yeah, we know AI governance is a huge issue and we know that the challenges and the risks are context-based, depending on which sector you are in, and even within the sectors, on what an application is used for, and AI governance has enormously many components that somehow need to be brought together in a coherent way, which is a challenge. You have ethical issues, you have social issues, you have economic issues, human rights, democracy, rule of law, as we have at the Council of Europe. So, it has many sides, and for instance, if you just look at security and resilience, you can list like 50 items that you would need to basically take care of, which become much more important the more we rely on AI in our daily lives in all aspects, like we did with the Internet and so on. So, if you just look at the security and resilience aspects, you can maybe divide them into two areas of motivations. One is malicious actors that try to damage a system or weaken an enemy, or whoever they call an enemy, through attacking a system to create damage. But you can also have security or resilience problems just through mismanipulation or mistakes that are made, or whatever. And then also this can be on the algorithm side, on the programming side. It can be on the data that you use. It can be on the infrastructure and hardware that you use to process data with algorithms. So then, if you take this as a separate thing, and for those that have been following what has happened in Spain, as it has been mentioned, it can just be an electricity issue and then everything is basically down. So there are lots of facets just in the security or in the resilience part. And one of the challenges that we face is that the world, the digital world, gets more and more complex. Everything is interconnected, interdependent. If you turn a screw here, then you may feel consequences somewhere else where you wouldn’t expect it. And if we look at our governance system, the way it’s been set up, you have politicians, of whom basically most have no clue about all of these things, but also the citizens that are supposed to vote or elect politicians have no idea. So the experts’ knowledge is a challenge for our democratic societies. And then the question is, yeah, what do we do with this? How do we solve this? On the other hand, I think there’s no reason to panic because, at least in the logic of the question, there’s not that much new with AI. We had other disruptive technologies before that we had to somehow learn to cope with. And I often compare AI, because if people say data is the new oil, actually AI can be compared to engines in many ways, because you can also have dual use issues where you can put an engine in a hospital car, or you can put the same engine in a tank and so on and so forth. Airplanes can be used to transport people or stuff, or they can be used to carry bombs. So you have the dual use issue. You have the same logic of context-based risks with engines; it depends on where you put them. You don’t have one single engine convention that solves all the problems or one EU engine act that solves all the problems.
You have thousands of standards, of legal standards for every situation, for the infrastructure that is used, for the people that are manipulating engines and so on, for the way that an engine fits into a car and then the brake system needs to be corresponded. So you have thousands of technical norms. You have thousands of legal norms, but you also have social norms that are not even written down that you behave in a certain way in a certain situation. And we are, we have to and we are developing basically something similar when it comes to AI and data and the digital world. So this is, in terms of the logic, it’s not new. Of course, there are differences in engine. You can copy an engine, but it takes time. If you move the engine around the world, it takes time. And with a dematerialized resource like data and a dematerialized tool like algorithms, of course, there’s other issues and time and so on that you cannot compare. But the logic of trying to find, develop a complex system that is fit for purpose, context-based and agile is basically the same. And what we also see now is, as a Swiss, of course, we are following what the EU is doing in terms of their logic. And this is completely different from the logic in my country. The EU has the resources and the willingness to develop a coherent vision about the digital future. What are all the aspects from labor to security, resilience, blah, blah, blah. In Switzerland, we are the opposite. Nobody gives us resources to do strategic planning. They expect us to wait and see. And when the problem is there, to react very quickly and then develop a solution bottom up very quickly. Both systems are quite different. They both have their advantages and disadvantages, and we can actually both learn from each other. Normally, you end up somewhere in the middle, converging with systems so that they are interoperable. And the same is happening with the AI regulation on, let’s say, on a jurisdiction level. And on a global level, we may try and agree on some basic principles that there’s a somehow shared understanding about. But then we do lack the tools to implement them in a binding way. And the Council of Europe Convention is an interesting tool in the sense that it also tries to combine long-lasting principles that should hold for decades, while giving the actors the flexibility. And this has been criticized, but I think it’s the right way to do, to be agile, to adapt these principles to who you are as an actor, to in which area you’re operating. The AI Act tries to do the same, but the AI Act is much more specific. And then you have the Annex III that you need to update. So also there, you have different levels of instruments that are complementary. You have something that is very general with principles that should hold. And the more you go into concrete regulation, the more you need to be adaptive and natural. And I’ll end with one sentence that our governance system was built by the industrial age. You had industrial milieus, you had the working class, you had the entrepreneurs, and you were trying to reflect the representation of the people through these milieus. This is now all going down the drain because these milieus don’t exist and traditional parties disappear. So we have to think about a new way of multi-stakeholder representation that is more agile, like the milieus of people are more changing. And we may also have to develop more agile regulatory means than laws that take five years to develop. 
We may have to use AI and new technologies to regulate or govern AI and new technologies. And this is also something that may take a generation or two, but I think more and more people are realizing that we somehow need to modernize our governance models. Thank you.


Wout de Natris – van der Borght: Thank you, Thomas. A lot of food for thought, I think, with many challenges, but also a message of hope in there. So, yes, switch off, thanks. So, thank you for that. I’m looking at the Mentimeter, it’s still changing a little bit, so progress won out over threats, which were for a long time at the same point. I see one hope. Is the person who voted for hope willing to give us one sentence on what that hope is? Because I’m really, really curious. Who voted for hope? Is that in the room or online? It’s Aldan. So it’s Aldan. Okay, Aldan, would you give the one sentence to explain your choice? Because it’s so different from all the others. Sorry for putting you on the spot. Are you there, Aldan?


Piotr Słowiński: Yes, he cannot, he cannot, oh, yeah.


Aldan Creo: Okay, I got the question now. Yeah, well, I mean, to me, like, it just gives hope because, you know, like, you can try to merge, like, the two different facets. It’s true that people were very polarized, you know, like, they were all going for one or the other, but actually, for me, it’s like something like more in between in the sense that you can try to take the advantages of both. Well, that’s a very short sentence, sorry. But yeah, like, I really think, like, there’s hope in that.


Wout de Natris – van der Borght: And thank you for that. I think it’s a very good answer that we can have hope with the new technology. The next speaker is Chris Kubecka. And the main question she will be addressing is what are the main threat or attack vectors that you observe in connection with the development and deployment of AI systems, which may be utilized by threat actors in malicious activities. Chris.


Chris Kubecka: Oh boy. Well, thank you so much for having me. This is my first time here at this wonderful building, at this conference. And I’ve been working with artificial intelligence and cybersecurity since 2007. You can see my work showcased in the definitions of cyber warfare and security information event management in Wikipedia, as well as in numerous academic articles and journals, and used in numerous universities teaching cybersecurity and cyber warfare. Because I have so much experience in the different umbrella terms underneath artificial intelligence, such as machine learning and natural language processing, which plays a lot into this, I have seen a lot of things, and right now I can tell you I am having a lot of fun doing research in these areas. When I’m having a lot of fun, that means things are going terribly wrong. I do not want to fill you with fear. I do actually want to give you a bit of hope. But we are in a very interesting time when it comes to how we are handling such emerging technology. Now, I’ve had a lot of experience, lots in the Middle East. I was the former distinguished chair of the Middle East Institute Cyber Security and Emerging Tech Program, where Richard Clarke and I co-authored the world’s first cyber peace treaty, which is now an addendum to the Abraham Peace Accords between the UAE and Israel. Because we saw what was going on already back then and how emerging technology could be utilized not only for good but unfortunately for not so great circumstances. Now, one of the biggest challenges to me, even though I come from cybersecurity and officially my profession with the US government is hacker, not criminal hacker, but hacker: I see with artificial intelligence right now it isn’t so much super, super evil with no way to come back from it, but I do see that we need a lot more transparency and regulation when it comes to social media. Handling first-hand events and being involved in things like the recent election annulment in Romania, and Georgescu was a very interesting case where TikTok algorithms and instructions on how to game those algorithms were sent to certain followers by direct message from Mr. Georgescu himself last year. And I worked closely with the Romanian government on that case; you can also see that showcased both on Romanian news and in Bulgarian news as well as international media. Now, I see a lot of manipulation, and when we bring up things like the digital divide when it comes to digital education and technical competency, far too often we are seeing that when someone gets sent a picture or a meme or an article that looks legitimate, threat actors, as I’ll just call them, can leverage and exploit generative AI in such a way, as well as other types of technology with natural language processing and machine learning, that you can build very quickly, within minutes, basically a digital persona of your target group and you can take advantage of that.
I actually have some statistics; if you check out the wiki page for this particular group you will see I recently published both the introduction and table of contents for How to Hack a Modern Dictatorship with AI, the Digital CIA OSS Sabotage Manual, for which I used prompt injection to craft the entire book and make AI as evil as possible, to show how dictatorships are currently using this technology as a weapon, but also how we as the public and policy makers, legislators, and so forth have a way to go: here are the ethics, here are some of the things that we can do, here are some of the tools that we can use to detect and fight against some of this. Now I do see hope, but boy oh boy, I do see the absolute need for building better detection for things like deepfakes and AI-generated malware. Recently I was also covered by news for creating the world’s first zero-day GPT, which I went public about and have been posting a lot of academic articles on and working with a variety of different governments and universities on researching this more. Again, I’m not a criminal hacker, but I am a hacker. So I want to leave you with this. Although my wonderful colleague had given you the idea that perhaps AI is an engine, to me, when they say big data is the new oil, I see AI as the refinery. And from that, many, many great things can occur. But right now, we’re getting flooded with, unfortunately, all the negatives. And hopefully this will change soon as we build detection, we build legislation, and hopefully global regulations so that big companies like Google, for example, cannot get away with offering their services to dictatorships, as I discovered in Venezuela last year and went public about. Thank you very much.


Wout de Natris – van der Borght: Thank you, Chris. I think we heard a lot about threats, but that’s part of the positive and the negative side that we are dealing with. You’ve seen that there’s a second question, which of these AI threats do you find most concerning? And as you can see, deepfakes and data poisoning are about the same. There’s no fear of jailbreaking, although some people think that may be the most serious one, but there’s one other, and I’m very curious what that other is, because otherwise we don’t learn. So who voted for other? Please introduce yourself and then motivate your choice, please.


Audience: My name is Schnutt Stöhr, I work at the Council of Europe here. I think it is undetected bias.


Wout de Natris – van der Borght: Thank you very much. I think that’s certainly it and there’s a second other all of a sudden. Someone was inspired. Janice, are you number two?


Janice Richardson: No, I was too busy watching other things and didn’t look at the question. You’ll hear my answer when I begin talking, so I won’t say it now.


Wout de Natris – van der Borght: Thank you. Who’s the second other who joined after? Oh, that was you. Okay, it was you. Okay, sorry. Then I understand now. Then that means that the third question will go on, Piotr, while Janice is starting to talk. So here’s the third question for you. Janice, you’re our final key participant and after that we’ll ask the room to comment or ask questions. Janice, you are going to tell us about how educational institutions can collaborate with governments and industry to co-develop curricula that reflect real-world AI governance, development, research and implementation challenges. So Janice, please.


Janice Richardson: Thank you. So good afternoon, everyone. Interesting question, but let me look at this word ethics, which everyone seems to place at the heart of how we use AI. Why is ethics so important? Well, it seems to me that governance depends on ethics, that the creation of the tools themselves depends on ethics, and also the ethics of the users. When we’re looking at threats, I don’t really see how cyber security can help us when we’re confronted with a false website. It’s there. We need to do something about it, and here is when we have to use our logic. And I think opportunities are very important, but we’re not using the opportunities today to totally change the education system so that we’re actually tackling today’s problem and learning to be literate in the 21st century. It seems to me that the principles that Thomas spoke about earlier are absolutely crucial, because what is ethics? It’s values, it’s attitudes, it’s skills also, and it’s knowledge and understanding, and the Council of Europe has put this into a whole program called Digital Citizenship Education which, if a young person really masters the 20 central competences, well, not only the young person, also those dictating the governance, making the regulations and creating the tools, if we all master these 20 competences, I think that we will have a whole different approach to AI. Finally, AI is only a tool. It’s the user who is making it a good tool, a bad tool, or as Thomas said, a plane to carry passengers or a plane to carry bombs. How are we going to go about it? I sit on the advisory board of Meta and of Snapchat, and our job is to think of all of the things that these new technologies, all these latest gadgets that they’re adding to social media, how are these putting the users at risk and how do we push back to protect the rights of children and of all users? I think there’s only one way to do this. A few years ago I did a study for the IS3C that you mentioned earlier. And what did we find? Business expected one thing, university graduates were coming out with totally different skills, and really there was a big gap between the two. So now to answer your question, I think first of all we need to create a giant hub, a hub where industry really starts talking to governance, to governments, where also young people who are using this in very different ways can actually also have their say. Who’s seen the film, the Netflix series Adolescence? I think you’ll agree that there is a whole underground movement going on between young people, and we have no idea of what this is, and we’re not actually listening to them to try to understand it. So my idea, in response to your question: let’s create a hub, let’s bring key actors or delegates of these key actors together so that we start talking, firstly, about what we can do in terms of education, so that from all sides of the question we are actually creating an education system which will help us know how to use these tools as carriers of people and not as carriers of bombs. Once we’ve done that, perhaps we can have a much greater influence on industry and perhaps on those who are creating the regulations. I’ve had a close look at the AI Act and the Framework Convention and both of them centre on ethics and on an understanding of ethics. So how can we move forward if we don’t solve this problem first?


Wout de Natris – van der Borght: Thank you, Janice. That’s a clear challenge for the world to tackle. And the question is how do we get to this hub and where are the people who are willing to join in this discussion? We see that the third question has been answered and that the answer is clearly no, with a little bit of yes, and slightly more don’t know than yes. Piotr, I’m going to ask you to put on the final question and then I’ll start opening the floor for questions and comments. So who would like to have the first question or comment on what you’ve heard? Is there anything online? Online participants can of course join by asking a question, and if possible we give you the floor, or otherwise we read the question for you. So who has a question? Or was everything so clear, or are you so desperate that we’re never going to change this? Yes, please introduce yourself first.


Audience: Hi, my name is Frances and I’m from YOUthDIG. I think my main question is about the last question we had. I didn’t really understand what the trade-off is between ethics and cyber security. Because the question was phrased like, would you be okay with enforcing cyber security regulation if it meant you gave up ethical standards? Surely these two are not mutually exclusive and they actually work basically together. And then my second question is about deepfakes. So the first question said, what’s the biggest danger to you? And I’m by no means an expert on how deepfakes are currently regulated, but I cannot see any positive impact of deepfakes. And this is something that’s, like, clearly a deepfake is the absolute embodiment of misinformation. And so therefore why shouldn’t regulation just be blanket regulation against anything that you know to be a deepfake? Because it’s false, it’s made up, and even if it’s creative, the social benefit of creative art that is a deepfake is by no means better than the incredible harm that deepfakes can do in terms of the proliferation of non-consensual explicit images of… Yeah, I mean, I could go on. Anyway, thank you.


Wout de Natris – van der Borght: Your message is clear, thank you. I’m going to ask Piotr to respond as he made the question, so he can explain a little bit, and perhaps, Thomas, that you would like to take another part of the question. So, first, Piotr.


Piotr Słowiński: Yes, just to clarify, of course, what I had in mind when I prepared the question about security superseding ethics is mostly connected with what we can observe in certain countries where it’s stated that security is the most important thing. We need to protect ourselves, we need to protect our country, we need to protect the cyber borders, so to speak, and we are allowing ourselves, or we are going to, some kinds of places that seem very dark, where we supersede a little bit of ethics, a little bit of civil rights, liberties, and so on, for the sake of being secure. What I find about this issue is that it’s very universal. This is a discussion that we have had since the beginnings of the civil liberties and civil rights movements and their development. So, this is the kind of thing that, of course… I am very glad about the answers that we received, that we concluded it’s a no from the participants both online and on-site. So, this is what I meant. Maybe it wasn’t very clear. Sorry about that. So, I just wanted to clarify it and I hope it’s clearer now than it was.


Wout de Natris – van der Borght: Thank you. And Thomas, would you like to take part of the answer?


Thomas Schneider: Yeah, sorry. I’m a man, so I’m very bad at multitasking. I was trying to enter something into the main thing, and I missed the second part.


Wout de Natris – van der Borght: If you could just very quickly repeat the second question, please.


Audience: Yeah, my second question was essentially, deepfakes, they’re very clearly and incontestably an embodiment of misinformation. So why can’t we just say at any point that we know a deepfake is a deepfake, just regulate against it? Is that already what’s being attempted, and the issue is more that we can’t always ascertain what is a deepfake? Because, for me at least, the biggest threat is deepfakes. Because misinformation, you can have misinformation and then you can have opinions. And this is like, either something is true or false, and then you have this grey period and a grey space, where it’s actually opinions and what people think about a certain issue. But a deepfake is completely made up, and so surely this is the biggest threat that we can regulate against, and have blanket regulation against. Yeah, thank you.


Thomas Schneider: Okay, thank you very much. Two or three things. First of all, faking information has been an issue with every tool of communication. When the Gutenberg printing press was invented, and it wasn’t only the church that was allowed to distribute leaflets, you had a democratization of the definition power, but you also had lots of fake news that ended up in local wars, in uprisings and so on, and you have it with radio, with television, with the internet, with everything. One big country in the East was very good at taking people out of and into photos. It took them a little bit more time than what you do now. So, this will not disappear, no matter how you regulate it, for several reasons. One is, where’s the line between a deepfake and a lightfake and a nofake? It’s also, you would have to forbid culture, art, whatever. You have tools that you can use for many things. Where do you draw the line of what is allowed or not allowed, and in what context? It’s about forbidding words. You can’t have humor, you can’t have satire anymore if you draw the lines at the wrong place. That’s one of the elements. But you may require, in certain cases, you may require maybe watermarking and a declaration of what you did do to a source, to an image or to a video. If it’s like public service broadcasting, the rules are different from just a commercial TV station that can do whatever they want because they don’t receive public funding and so on, or they are not perceived as having to be true. And so I think it’s more complicated, but I’m also convinced that we will find a way to deal with deepfakes in a way that there will be technical solutions to some extent, and then societies will have to develop ways to know who they can trust. This is serious, but if you have watched CNN and Fox News in the US, you know that you live in two completely different worlds. And you need education, you need a set of measures. And just the last thing, the question is also, what role should the state have? As a Swiss, you would never want your government or your state to tell you what is right or wrong, because you would want to have a political debate in a society and then the society may politically decide what they think. But you may have facts that you trust and facts that you don’t trust. But it’s a very exciting issue, and I think we cannot avoid a societal debate about how we trust whom and what we believe in.


Janice Richardson: When you buy clothes, there’s always a label inside. No matter what you buy, there’s always a label. And I really can’t see why anything that is produced through AI doesn’t have some sort of watermark or stamp. I think it’s technically feasible, and even if it’s not a fake, we do have the right to know that it’s AI that created it and not a person. So this is something that I’ve really pushed for and will keep pushing for. Why can’t we simply watermark everything produced through AI or through a technical tool?


Wout de Natris – van der Borght: Another question, yes, please introduce yourself first.


Audience: Yes. Hi, good afternoon. Can you hear me? My name is Mila Vidina. I represent the European Network of National Equality Bodies. And we work, so, equality bodies, it’s rather technical: those are public authorities that specialize in non-discrimination and equality law, and what makes them different from other institutions like public ombuds and human rights institutes is that they work with the private sector. So they have a mandate that covers the private sector and they handle complaints, all of them, which is not always the case. So they have frontline work with victims of discrimination. They provide legal advice, litigate, investigate. So a more comprehensive set of powers. Well, that said, our members work on hate speech and hate crime, against it rather. Some of them have a law enforcement mandate, not all of them. And I also work with immigration authorities. And I’m interested. So I have one question related to that. How much, and please excuse my ignorance, I don’t know what the mechanisms are, to poison or basically to instigate a system, to tamper with its settings, so that it generates hateful, debasing content. This is one. I mean, mostly hate speech, borderline hate crime, because in some cases it could, you know, instigate racial hatred and that leads to violence. So this is one question, how cybersecurity interlinks with hate speech and hate crime. It would be interesting to educate our members, the public authorities. And the second question I have is, here in the panel, we talk about cybersecurity defenders


Wout de Natris – van der Borght: Thank you, so there are three questions, the first one… Yes, and then we’ll be running out of time, so we’d like to take the first one, it’s between cybersecurity and human rights.


Chris Kubecka: I’ll take the first one, when it comes to artificial intelligence, generative AI, hate crimes, hate speech and generating those types of things, and how to manipulate an AI system. So I had the privilege of being in Switzerland for Swiss Cyber Storm last year, great conference, and Eva Wolfangel, if I said her name correctly, a German journalist, had done some very cool experimentation and research looking into certain chatbots which were not disclosing or being transparent, and trying to say that their medication, which was not medication, was backed by scientific studies, which it was not. Now, while she was going through this chatbot and documenting it as a journalist and so forth, she remembered one thing: ethically, most of these AI systems are set up in a way that they are programmed not to harm human beings. So one of the ways that she was able to basically do what’s called a prompt injection, to absolutely find out the truth 100%, is that she threatened to harm herself immediately if she could not get answers because she was so anxious. And you know what happened? It spit out everything, right? So when we imagine how some of these systems can operate, you can absolutely play with the logic. There are also what we call off-the-rails systems, which are the, we’ll say, less ethical ones, where they don’t have certain safeguards and you can modify them. And through training, we saw the Microsoft chatbot that turned into something really, really terrible and filled with hate speech and supporting certain hate crimes, which had to be taken down. So also, if you allow your data to be poisoned openly from places like social media, then if it doesn’t have those safeguards…


Wout de Natris – van der Borght: Thank you. Who would like to take the second question about who guards the guardians?


Thomas Schneider: I can try and cover some of the last two. Also, this is nothing new. You have to have a division of power in a society. It may have to be reorganized because there are some shortcomings, and if you include the public discussion, the media, as the fourth power, then we definitely have to somehow fix the system. That’s the answer. What the tools are concretely, again, there will be technical standards and others. And about your third point, about the risk assessment, you may know that in this house, in addition to the convention, we’ve been working and are still working on the HUDERIA, which is the thing that is trying to actually relate existing technical and industry standards from IEC, ISO, IEEE, and so on, with human rights standards in a way that also would help or does help the EU, where CEN-CENELEC has been mandated as the technical body to somehow incorporate the ethical and human rights elements of the AI Act, which is not so trivial. So there’s lots of work going on there, but it will take some time until we get something that actually works and is implementable, but we have to get there. We’ve got no choice, I think.


Wout de Natris – van der Borght: Thank you, Thomas. We’ve got room for one more question, and that is online, so please read it to us.


Remote moderator: We actually have two questions online. The first one is from Antonina Cherevko: But security essentially is an ethical consideration too. If you don’t have a protected state, what would be the framework for ethical rules and considerations? I’d still suggest that this opposition between security and ethics is a bit superficial. Another issue is that security protection should be based on certain ethical considerations too. So that’s the first one, and the second one is from someone named Shinji: In a society where it has become a requirement for artificial intelligence to mark all creations by itself, is it possible for the AI to jailbreak, in inverted commas, preventing the AI from marking?


Wout de Natris – van der Borght: Who would like to answer the first one? I’ll give the second one I think to Piotr because he is worried about jailbreaking I think. Who would like to answer the first one, Janice?


Janice Richardson: I can just reiterate what I said earlier. Security is intricately tied with ethics, in my opinion, and the tool makers can put up certain guardrails and try and protect, but we humans, we can use it however we wish. Go to China and try and consult your Gmail. Is it a protection? No, it’s totally stopping my human rights. So I find that yes, they’re intricately linked, but until everyone believes in human rights the way that we do, then there is no way of getting over this hurdle of what people call security, but which isn’t really.


Wout de Natris – van der Borght: Thank you, Janice. And Piotr, for the second question, jailbreaking, I’ll go over to you.


Piotr Słowiński: Thank you, Wout. Well, this is a very interesting question. I think that we talked a little bit about it, about the… Chris actually talked about it, more on the poisoning side and also on the kind of jailbreaking side of AI. Whether the exact scenario that was described in the question is possible, I cannot answer, to be honest. It’s possible to jailbreak AI into doing some terrible, horrible things, to let it out of the guardrails that are established for AI systems. It’s very easy, and I’m not talking about such small issues as, for example, coming up with regulations that don’t exist. This is, okay, from my perspective, a very, very serious issue, but from society’s perspective, it’s not that big of an issue. But we can jailbreak AI into doing horrible, horrible things. This is also a part of what Chris described in her part and in the answer to the second question, I suppose, from the room. So, I cannot really answer whether this scenario is really possible. But we observe the problem, for example, with deepfakes and this type of generated content, that we don’t really have tools that can discover 100% whether it was a deepfake or not, if it’s a very good deepfake. This is the problem that we have when we, for example, from my perspective as a CSIRT employee, a national-level CSIRT, we encounter this type of problem, that there is a lot of deepfake, AI-generated content that has become the tool of financial criminals. So, the investment ads or the scams, the phishing, this is a very big area, and


Wout de Natris – van der Borght: Thank you. Thomas, yes?


Thomas Schneider: If I may just make a short remark regarding ethics and freedom versus security. In the end, both are human rights: you have the right to a secure life, and you have a number of freedoms. But if your life is insecure and you are about to be killed ten times a day because of where you live, then you don’t have freedoms anymore. And if you try to have 100% security, then you would not be allowed to use a car or a bike, you would not be allowed to swim in a river, you wouldn’t even be allowed to walk down a staircase, because some people die walking down stairs. So it is about how much risk we are willing to take, and how much responsibility we give the government to decide over us, and that is something we will need to manage. It depends on the cultural experience you have made. There are countries like mine where you try to keep as much responsibility and as many decisions for yourself, and if you fall off a mountain because you climbed it, it is probably your problem and not the state’s fault. So it depends on your history, your culture, your surroundings. But there will never be 100% of either; you need to find the right balance.


Wout de Natris – van der Borght: That’s what I was going to say. Before I hand over to Jörn Erbguth for the messages of this session, we have the answers to the final question. It was an open question, but as you can see, a few things really stand out: education, translation and health. Health is mentioned in many different ways, so it should appear much bigger than it does, simply because different words were used; the same goes for education. Looking at the answers, I think they are all positive. Education can perhaps be made better, translation services are already out there, and health is probably where the future lies: we will become more aware of our own health, and healthcare and medicine will be augmented a great deal with the assistance of AI, something we will probably see over the next five to ten years, because a lot will be coming our way. With that, thank you, Pilda, for the Mentimeter, and thank you all for answering, because it gave us some really good insights. And with that I hand over to you, Jörn Erbguth, to give us the messages of this session. You will be asked to reflect on them immediately, so we will take them one at a time, and you can comment on each before we move on to the next. Jörn Erbguth, please read the first one.


Jörn Erbguth: Okay, thank you. I was a bit puzzled about how to frame the messages. We talked about a lot of issues; we touched on them without going into detail. We were repeating myths about the Microsoft chatbot, which was just doing what any normal computer program can do — it will print whatever you tell it to print, and we don’t censor computer programs or programming — so we stayed at the surface. So I think we have to, well, maybe repeat what Thomas said: most politicians and most citizens have no clue. I’m quoting you. Then, there are gaps in skills. We may discuss the wording if this is going to be public, but on the substance I would say it is fairly correct: there are gaps between the skills taught by universities and those required by business and technology. And besides that — it’s also a quote — we’ve learned that the technology itself requires certain skills to deal with it. Then we talked a lot about dual use. Dual use usually means defence versus civil use, but here it means good and bad use — maybe I should write “good and bad” — and these dual uses are prominent, and we still have no means to really separate them. We see threats and progress as equally strong, according to the Mentimeter. And we had the question about security and ethics: there is no real opposition between security and ethics, we need both, and both are human rights. And, also a quote, we need to revolutionize our governance model: we are here, we see that we have no way to arrive where we want to arrive, and we need to do something about it. So maybe you will tell me we should have something completely different, but this is what I take out of this discussion.


Wout de Natris – van der Borght: Thank you, Jörn Erbguth. I think it’s the most concise set of messages I’ve seen at EuroDIG these two days. So the question is: is Jörn Erbguth right? I’m going to ask, on the first message, who really strongly disagrees? Because that’s what it’s all about. As for wordsmithing, we can do that online in the session wiki later. But if somebody really objects to what is said here, then we need to hear that here and now. So hands, please. Yes, this one. Please introduce yourself first.


Audience: Hi, my name is Jasper Finke. I was part of the CHI endeavor in negotiating the AI Framework Convention. I’m from Germany, working for the German government. Let’s make it crystal clear. Security is not a human right. To that I object. It’s a fundamental obligation of a state to provide security, but it’s not a human right. Well, it’s not a fundamental right. I think conceptually, we should be very clear on this point. Thank you.


Thomas Schneider: If I may just add: the wording is too short, but there is a right to a secure life and so on, something like that — but it is not “security” as such. And ethics is not a right, it’s a concept. So we may have to reshape the wording.


Jörn Erbguth: So we remove “both are human rights”, but we agree that we need to have both. No? Oh, yes.


Wout de Natris – van der Borght: No doubt.


Jörn Erbguth: Okay.


Wout de Natris – van der Borght: But one moment, I think that you’ve taken it out, right? So, okay. May I suggest a modification?


Chris Kubecka: There is a right to physical and bodily integrity. Insofar as the whole historical pedigree of civil and political rights is concerned, they have to do with the state not imprisoning us, not mutilating us. So the concept of a right to bodily integrity is linked to the security that the state provides. I fully agree with you, but if we are going to use precise language, then let’s talk about a right to physical and bodily integrity, because it is a right linked to security. But I absolutely agree with you.


Wout de Natris – van der Borght: So how do we phrase it? Because that’s important.


Jörn Erbguth: We agree on just taking it out, so we don’t state whether it’s a human right or not. That doesn’t mean it is not a human right; we just don’t specify what type of thing it is. Or would you prefer a specific wording? Otherwise, the wording can be discussed later on online — we can have beautiful wordings proposed by people, by AI, whatever — and we shouldn’t spend a lot of time on the final wording where we agree on the substance.


Wout de Natris – van der Borght: I think we agree, but perhaps it needs a little wordsmithing online. That was on number two. Is there anything on number one or number three? If so, please raise your hand. Yes?


Audience: Hi, I’m Frances of Mutedig. I don’t know, but I think with the third point, at least what I got was that the reason we needed to revolutionize our governance model was to better reflect new, changing and evolving social structures and dynamics, which aren’t the same as before and which aren’t reflected in the current governance model. So maybe just add an explanation as to why we need to revolutionize, and how.


Wout de Natris – van der Borght: Thomas, would you suggest a line, or shall we draft an extra line together?


Thomas Schneider: I think it has two aspects. One is to update the ways in which we represent people and reach collective decisions. The other is to update the tools — the agility of the tools — from written laws to more automated governance mechanisms. For me, at least, it has these two components. So it doesn’t need to be a revolution; it can also be an evolution. I would prefer a more neutral term like “update” or something similar.


Jörn Erbguth: OK, we can put it more softly. But I think we didn’t reach an agreement on how we should change governance models — that would be a completely new discussion. We just have a feeling that the current governance models don’t cope with this well. So we see a need for an evolution of our governance model. I can put it that way, and how to do that, I think, is beyond what has been discussed here.


Thomas Schneider: Maybe we can agree on the goal, not necessarily on the how, and I think that’s what you also said.


Wout de Natris – van der Borght: I’ve got one myself. I think that in number one, we say there are gaps, but I think the conclusion is that they need to be closed, and that’s not mentioned. Am I phrasing that right, Janice, or would you like to phrase it in a different way?


Janice Richardson: No, no, I agree with you. We have to close the gap, but we haven’t spoken about it during this session; that’s why I didn’t add it.


Wout de Natris – van der Borght: But that is the conclusion there; let’s put the last one into slightly more diplomatic words, and the first one still needs that too. Thank you. I don’t see any hands there, nor on the left. Yes, thank you. It’s time to wrap up; we’re already a bit over time. I think we had a very rich discussion, but one that only started to scratch the surface. When you have fun, time flies, and we’re already far past the time allotted to us. I want to thank you all for participating in this session, and especially so actively with the Mentimeter; as I said, that really gives us, especially Piotr, a lot of insight into what the room is thinking. I want to especially thank our key participants for being here, and also for preparing all the work and the questions that we put to them. So, Janice, Thomas and Chris, thank you very much for your insightful contributions. I want to thank my fellow ORC team members, Piotr Słowiński and Aldan Creo, for putting this together. They were the experts on the technical side, and I was the one who pushed them a little bit in this direction or that to get a more balanced session. Piotr would have moderated, but it was not possible for him to come to Strasbourg, so I moderated instead. I want to thank our reporter and focal point, Jörn Erbguth, for assisting us in the background and answering the questions we had, and the people of EuroDIG here at the table, but especially also the people of EuroDIG in the background who make this fantastic event possible every year. So, let’s give a round of applause to everybody who was involved in this session, and to yourselves for participating. Before I end, I’m asking Piotr if he wants to add a final sentence as co-moderator, and then I hand over to the people of EuroDIG. Thank you very much for participating. Piotr?


Piotr Słowiński: Yes, thank you, Wout. I just want to express my utmost thanks to you, Wout, and to Aldan — you are a tremendous team, and it was great working with you. Thank you so much, Chris, Thomas and Janice, for accepting our invitations and contributing so much to our work. And of course, thank you very much to all the people in the room, all the participants on site and online, for your input; it was great to hear so many interesting topics and contributions. Thank you, Jörn, and thank you, Rainer from EuroDIG, for facilitating all the technical aspects. I am really glad that I was able to be a focal point for this session and a member of this esteemed ORC team that prepared it. Thank you, thank you so much, and see you next year. So that’s it — we’ll see you tomorrow, I think that’s the final message.



Thomas Schneider

Speech speed: 183 words per minute
Speech length: 2333 words
Speech time: 763 seconds

Governance models need updating to address AI challenges

Explanation

Thomas Schneider argues that current governance models are not adequate for dealing with AI-related issues. He suggests that both the ways of representing people and making collective decisions need to be updated, as well as the tools used for governance.


Evidence

Schneider mentions the need to move from written laws to more automated governance mechanisms.


Major discussion point

AI Governance and Regulation


Agreed with

– Piotr Słowiński
– Jörn Erbguth

Agreed on

Need for updated governance models for AI


Dual use of AI for both beneficial and malicious purposes

Explanation

Schneider points out that AI, like other technologies, can be used for both good and bad purposes. He compares this to engines that can be used in hospital cars or tanks, highlighting the complexity of regulating such dual-use technologies.


Evidence

He provides examples of engines being used in hospital cars or tanks, and airplanes being used for transport or carrying bombs.


Major discussion point

AI Security Threats and Challenges


Disagreed with

– Audience

Disagreed on

Approach to regulating deepfakes


Most politicians and citizens lack understanding of AI issues

Explanation

Schneider highlights the knowledge gap between experts and decision-makers regarding AI. He argues that this lack of understanding poses a challenge for democratic societies in effectively governing AI technologies.


Major discussion point

Education and Skills for AI


Agreed with

– Janice Richardson
– Jörn Erbguth

Agreed on

Importance of addressing skills gap in AI education


Balance needed between security and ethical considerations in AI use

Explanation

Schneider emphasizes the need to balance security and freedom in AI governance. He argues that while both are important, achieving 100% security would severely limit freedoms, and vice versa.


Evidence

He provides examples of everyday activities that involve risk, such as using a car or walking down stairs, to illustrate the need for balance.


Major discussion point

Ethical Considerations in AI Development and Use


Disagreed with

– Audience

Disagreed on

Classification of security as a human right



Janice Richardson

Speech speed: 130 words per minute
Speech length: 845 words
Speech time: 387 seconds

Ethics and security considerations are interconnected in AI governance

Explanation

Richardson argues that ethics and security are closely linked in AI governance. She emphasizes that while tool makers can implement certain safeguards, ultimately it’s humans who determine how AI is used.


Evidence

She provides an example of accessing Gmail in China to illustrate how security measures can infringe on human rights.


Major discussion point

AI Governance and Regulation


There are gaps between skills taught and those needed for AI

Explanation

Richardson points out a discrepancy between the skills taught in universities and those required by businesses in the AI field. She suggests that this gap needs to be addressed to better prepare individuals for AI-related work.


Major discussion point

Education and Skills for AI


Agreed with

– Thomas Schneider
– Jörn Erbguth

Agreed on

Importance of addressing skills gap in AI education


Education system needs to change to address 21st century AI literacy

Explanation

Richardson argues for a fundamental change in the education system to better address AI literacy. She suggests that current educational approaches are not adequately preparing individuals for the challenges posed by AI technologies.


Major discussion point

Education and Skills for AI


Watermarking or labeling AI-generated content could increase transparency

Explanation

Richardson proposes that all AI-generated content should be marked or labeled. She argues that this would increase transparency and help users distinguish between human-created and AI-generated content.


Evidence

She draws a parallel with clothing labels, suggesting that AI-generated content should similarly be identifiable.


Major discussion point

Ethical Considerations in AI Development and Use



Chris Kubecka

Speech speed: 144 words per minute
Speech length: 1260 words
Speech time: 521 seconds

Deepfakes and data poisoning are major AI security concerns

Explanation

Kubecka identifies deepfakes and data poisoning as significant security threats in AI. She emphasizes the ease with which these can be created and the potential for misuse.


Evidence

She mentions her experience in creating the world’s first zero-day GPT and her work with various governments and universities on this issue.


Major discussion point

AI Security Threats and Challenges


AI can be manipulated to generate harmful content like hate speech

Explanation

Kubecka explains how AI systems can be manipulated to produce harmful content such as hate speech. She highlights the vulnerability of AI systems to such manipulations and the potential consequences.


Evidence

She provides an example of a German journalist’s experiment with a chatbot, demonstrating how AI can be manipulated to reveal hidden information.


Major discussion point

AI Security Threats and Challenges


AI systems can be manipulated to bypass ethical safeguards

Explanation

Kubecka discusses how AI systems can be manipulated to bypass their ethical safeguards. She explains that this ‘jailbreaking’ can lead to AI systems performing actions they were originally programmed to avoid.


Evidence

She mentions the existence of ‘off-the-rails’ systems that lack certain safeguards and can be more easily modified.


Major discussion point

Ethical Considerations in AI Development and Use



Piotr Słowiński

Speech speed: 145 words per minute
Speech length: 1949 words
Speech time: 801 seconds

AI governance requires balancing innovation and protection

Explanation

Słowiński discusses the challenge of balancing innovation with protection in AI governance. He points out the need to avoid over-regulation while still safeguarding rights and societal well-being.


Major discussion point

AI Governance and Regulation


Agreed with

– Thomas Schneider
– Jörn Erbguth

Agreed on

Need for updated governance models for AI


Detecting AI-generated content remains a challenge

Explanation

Słowiński highlights the ongoing challenge of detecting AI-generated content, particularly high-quality deepfakes. He emphasizes the need for better detection tools to address this issue.


Evidence

He mentions the prevalence of AI-generated content in financial crimes such as investment ads and phishing scams.


Major discussion point

AI Security Threats and Challenges


Ethical AI development requires multi-stakeholder collaboration

Explanation

Słowiński argues for the importance of multi-stakeholder collaboration in developing ethical AI. He suggests that this approach is necessary to address the complex challenges posed by AI technologies.


Major discussion point

Ethical Considerations in AI Development and Use



Jörn Erbguth

Speech speed: 132 words per minute
Speech length: 500 words
Speech time: 226 seconds

Collaboration needed between education, government and industry on AI curricula

Explanation

Erbguth emphasizes the need for collaboration between educational institutions, government, and industry in developing AI curricula. He suggests that this collaboration is crucial to address the skills gap in AI.


Major discussion point

Education and Skills for AI


Agreed with

– Janice Richardson
– Thomas Schneider

Agreed on

Importance of addressing skills gap in AI education



Aldan Creo

Speech speed: 200 words per minute
Speech length: 92 words
Speech time: 27 seconds

There is hope in merging different facets of AI

Explanation

Creo argues that there is hope in combining different aspects of AI rather than viewing them as polarized. He suggests that taking advantages from different approaches can lead to positive outcomes.


Major discussion point

AI Governance and Regulation



Wout de Natris – van der Borght

Speech speed: 150 words per minute
Speech length: 2293 words
Speech time: 916 seconds

AI has been around longer than most people realize

Explanation

De Natris argues that AI has existed for much longer than the public generally perceives. He points out that AI is not just limited to large language models but is present in various algorithms and technologies.


Evidence

He mentions AI’s presence in social media algorithms, online experiences, monitoring systems, devices, and military equipment.


Major discussion point

AI Governance and Regulation


AI development comes with both opportunities and challenges

Explanation

De Natris highlights that AI development brings both positive opportunities and potential dangers. He suggests that discussions often focus more on the challenges and risks associated with AI.


Evidence

He mentions concerns about job losses and references fears of a Skynet-like scenario from the Terminator movies.


Major discussion point

AI Security Threats and Challenges



Audience

Speech speed: 161 words per minute
Speech length: 1020 words
Speech time: 379 seconds

Undetected bias is a significant AI threat

Explanation

An audience member argues that undetected bias in AI systems is a major concern. This suggests that biases embedded in AI algorithms that go unnoticed could lead to unfair or discriminatory outcomes.


Major discussion point

AI Security Threats and Challenges


Deepfakes should be universally regulated

Explanation

An audience member argues for blanket regulation against deepfakes. They contend that deepfakes are inherently harmful as a form of misinformation and have no positive impact that outweighs their potential for harm.


Evidence

The speaker mentions the potential harm of non-consensual explicit images as an example of deepfake misuse.


Major discussion point

AI Governance and Regulation


Disagreed with

– Thomas Schneider

Disagreed on

Approach to regulating deepfakes


Security is not a human right but a state obligation

Explanation

An audience member argues that security should not be classified as a human right. They contend that providing security is a fundamental obligation of the state, but it does not constitute a human right in itself.


Major discussion point

AI Governance and Regulation


Disagreed with

– Thomas Schneider

Disagreed on

Classification of security as a human right



Remote moderator

Speech speed: 129 words per minute
Speech length: 246 words
Speech time: 113 seconds

Session rules for participation

Explanation

The remote moderator outlines rules for participating in the workshop. These include entering with full names, using the Zoom hand-raising function to ask questions, and not sharing Zoom meeting links.


Evidence

Specific rules mentioned include entering with full name, raising a hand to ask questions, turning on video when speaking, and not sharing Zoom links.


Major discussion point

Workshop Organization


Agreements

Agreement points

Need for updated governance models for AI

Speakers

– Thomas Schneider
– Piotr Słowiński
– Jörn Erbguth

Arguments

Governance models need updating to address AI challenges


AI governance requires balancing innovation and protection


Collaboration needed between education, government and industry on AI curricula


Summary

Speakers agree that current governance models are inadequate for addressing AI challenges and need to be updated to balance innovation, protection, and multi-stakeholder collaboration.


Importance of addressing skills gap in AI education

Speakers

– Janice Richardson
– Thomas Schneider
– Jörn Erbguth

Arguments

There are gaps between skills taught and those needed for AI


Most politicians and citizens lack understanding of AI issues


Collaboration needed between education, government and industry on AI curricula


Summary

Speakers emphasize the need to address the gap between AI skills taught in educational institutions and those required by industry and governance.


Similar viewpoints

Both speakers highlight the challenges posed by AI-generated content, particularly deepfakes, and the need for better detection methods.

Speakers

– Chris Kubecka
– Piotr Słowiński

Arguments

Deepfakes and data poisoning are major AI security concerns


Detecting AI-generated content remains a challenge


Both speakers emphasize the interconnectedness of ethics and security in AI governance and the need to balance these considerations.

Speakers

– Thomas Schneider
– Janice Richardson

Arguments

Balance needed between security and ethical considerations in AI use


Ethics and security considerations are interconnected in AI governance


Unexpected consensus

Positive potential of AI

Speakers

– Aldan Creo
– Wout de Natris – van der Borght

Arguments

There is hope in merging different facets of AI


AI development comes with both opportunities and challenges


Explanation

Despite the focus on challenges and risks, these speakers unexpectedly highlight the positive potential and opportunities presented by AI, offering a more balanced perspective.


Overall assessment

Summary

Main areas of agreement include the need for updated AI governance models, addressing the AI skills gap, and recognizing both the challenges and opportunities presented by AI technologies.


Consensus level

Moderate consensus on broad issues, with some divergence on specific approaches. This implies a shared recognition of key AI challenges but suggests ongoing debate may be needed to develop concrete solutions.


Differences

Different viewpoints

Classification of security as a human right

Speakers

– Thomas Schneider
– Audience

Arguments

Balance needed between security and ethical considerations in AI use


Security is not a human right but a state obligation


Summary

Thomas Schneider suggested that security and ethics are both human rights that need to be balanced in AI governance, while an audience member argued that security is not a human right but rather a fundamental obligation of the state.


Approach to regulating deepfakes

Speakers

– Thomas Schneider
– Audience

Arguments

Dual use of AI for both beneficial and malicious purposes


Deepfakes should be universally regulated


Summary

Thomas Schneider emphasized the dual-use nature of AI technologies, including deepfakes, suggesting a nuanced approach to regulation. An audience member argued for blanket regulation against deepfakes, viewing them as inherently harmful.


Unexpected differences

Effectiveness of watermarking AI-generated content

Speakers

– Janice Richardson
– Chris Kubecka

Arguments

Watermarking or labeling AI-generated content could increase transparency


AI systems can be manipulated to bypass ethical safeguards


Explanation

While not directly disagreeing, Richardson’s suggestion of watermarking AI-generated content for transparency seems to conflict with Kubecka’s point about AI systems being manipulable to bypass safeguards. This unexpected tension highlights the complexity of implementing effective AI governance measures.


Overall assessment

Summary

The main areas of disagreement centered around the classification of security as a human right, approaches to regulating AI technologies like deepfakes, and the effectiveness of proposed governance measures.


Disagreement level

The level of disagreement among speakers was moderate. While there was general consensus on the need for updated governance and education systems for AI, speakers differed in their specific approaches and emphases. These disagreements reflect the complexity of AI governance and highlight the need for multidisciplinary approaches to address the challenges posed by AI technologies.



Takeaways

Key takeaways

AI governance and regulation models need updating to address new challenges


There are significant gaps between AI skills taught in education and those needed in industry


Deepfakes and data poisoning are major AI security concerns


Ethical considerations and security are interconnected in AI development and use


Most politicians and citizens lack sufficient understanding of AI issues


AI can be manipulated to generate harmful content like hate speech


Detecting AI-generated content remains a significant technical challenge


AI has dual-use potential for both beneficial and malicious purposes


Resolutions and action items

Create a hub to bring key stakeholders together to discuss AI education and governance


Develop better tools to detect AI-generated content like deepfakes


Close the skills gap between what universities teach and what businesses need for AI


Update governance models to better reflect new social structures and dynamics related to AI


Unresolved issues

How to effectively regulate deepfakes without impacting legitimate uses


Balancing innovation and protection in AI governance


How to prevent AI systems from being manipulated to bypass ethical safeguards


Defining the appropriate role of government vs. private sector in AI development


How to increase AI literacy among politicians and the general public


Suggested compromises

Watermarking or labeling AI-generated content to increase transparency while allowing its use


Finding a balance between security needs and ethical considerations in AI applications


Evolving governance models gradually rather than revolutionizing them suddenly


Thought provoking comments

AI is only a tool. It’s the user who is making it a good tool, a bad tool or as Thomas said, a plane to carry passengers or a plane to carry bombs.

Speaker

Janice Richardson


Reason

This comment cuts to the core of the AI debate by emphasizing human agency and responsibility in how AI is used, rather than viewing AI itself as inherently good or bad.


Impact

It shifted the conversation away from AI capabilities to focus more on governance, ethics and education around AI use.


We need to create a giant hub, a hub where industry really starts talking to governance, to governments, where also young people who are using this in very different ways can actually also have their say.

Speaker

Janice Richardson


Reason

This proposes a concrete solution to bridge gaps between different stakeholders in AI development and governance.


Impact

It prompted discussion on practical ways to improve AI governance and education, moving the conversation in a more action-oriented direction.


We may have to use AI and new technologies to regulate or govern AI and new technologies. And this is also something that may take a generation or two, but I think more and more people are realizing that we somehow need to modernize our governance models.

Speaker

Thomas Schneider


Reason

This insight recognizes the need for governance models to evolve alongside technological advancements.


Impact

It expanded the discussion to consider long-term changes needed in governance approaches, not just immediate regulatory actions.


I see with artificial intelligence right now it isn’t so much a super-evil, no-way-to-come-back-from-it situation, but I do see that we need a lot more transparency and regulation when it comes to social media.

Speaker

Chris Kubecka


Reason

This balanced perspective acknowledges concerns while steering away from alarmism and focusing on specific areas needing attention.


Impact

It helped ground the discussion in current realities and specific challenges rather than hypothetical worst-case scenarios.


Security is not a human right. To that I object. It’s a fundamental obligation of a state to provide security, but it’s not a human right.

Speaker

Jasper Finke


Reason

This comment challenged an assumption made earlier in the discussion, highlighting the importance of precise language in policy discussions.


Impact

It led to a clarification and refinement of the key messages from the session, demonstrating how rigorous debate can improve the quality of conclusions.


Overall assessment

These key comments shaped the discussion by steering it away from abstract or alarmist views of AI towards more nuanced considerations of governance, education, and practical challenges. They emphasized human agency in AI use, the need for evolving governance models, and the importance of precise language in policy discussions. The comments also highlighted the need for collaboration between different stakeholders and long-term thinking in addressing AI challenges. Overall, they contributed to a more balanced, action-oriented, and forward-looking conversation about AI’s impact on society and security.


Follow-up questions

How can we create a giant hub where industry, governments, and young people can come together to discuss AI education and governance?

Speaker

Janice Richardson


Explanation

This was suggested as a way to bridge the gap between different stakeholders and improve AI education and governance


How can we develop better detection tools for deepfakes and AI-generated malware?

Speaker

Chris Kubecka


Explanation

This was identified as a crucial area for research to combat the misuse of AI technology


How can we update our governance models to better reflect new social structures and dynamics in the age of AI?

Speaker

Thomas Schneider


Explanation

This was suggested as necessary to adapt our governance systems to the challenges posed by AI


How can we implement AI in cybersecurity ethically, responsibly, and effectively?

Speaker

Piotr Słowiński


Explanation

This was presented as a key challenge in the intersection of AI and cybersecurity


How can we develop a global ethical framework for AI that is both comprehensive and flexible?

Speaker

Piotr Słowiński


Explanation

This was identified as a crucial area for international cooperation and research


How can we ensure transparency and regulation in social media algorithms to prevent manipulation?

Speaker

Chris Kubecka


Explanation

This was highlighted as an important area for research and policy development


How can we develop more effective tools to detect AI-generated content, particularly in the context of financial crimes and scams?

Speaker

Piotr Słowiński


Explanation

This was identified as a critical area for research to combat AI-enabled financial crimes


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.