Open Forum #17 AI Regulation Insights From Parliaments

25 Jun 2025 14:45h - 15:45h


Session at a glance

Summary

This discussion focused on AI regulation from a parliamentary perspective, featuring representatives from the European Union, Egypt, and Uruguay discussing their respective approaches to governing artificial intelligence. The panel was moderated by Arda Gerkens and organized by the Inter-Parliamentary Union, which has been actively tracking AI policy developments across 37 parliaments and recently adopted a resolution on AI’s impact on democracy and human rights.


Axel Voss from the European Parliament explained that the EU AI Act, completed in 2024, primarily focuses on high-risk AI systems including biometric identification, critical infrastructure, education, employment, law enforcement, and democratic processes. He emphasized the need for unified interpretation across EU member states and warned that democratic legislators are often too slow to keep pace with technological developments. Amira Saber from Egypt’s Parliament described introducing the first AI governance bill in her country, emphasizing the importance of data classification, ethical considerations, and balancing innovation with regulation. She highlighted the weaponization of AI in regional conflicts and stressed the need for extensive capacity building among parliamentarians.


Rodrigo Goni Romero from Uruguay outlined his country’s cautious approach, preferring to establish a general legal framework while observing developments in larger jurisdictions before implementing detailed regulations. Multiple participants emphasized the challenge of balancing innovation incentives with necessary protections, particularly regarding deepfakes, misinformation, and the exploitation of vulnerable populations. The discussion revealed common themes across different regions: the need for parliamentary capacity building, multi-stakeholder approaches, youth engagement, and flexible regulatory frameworks that can adapt to rapidly evolving technology while maintaining strong ethical foundations and human rights protections.


Key points

## Major Discussion Points:


– **Current State of AI Regulation Across Different Regions**: The panel discussed varying approaches to AI regulation, with the EU having completed the AI Act focusing on high-risk systems, Egypt developing national AI strategy and draft legislation with emphasis on data classification and ethical considerations, and Uruguay taking a slower, consensus-based approach with general legal frameworks to avoid deterring investment.


– **Balancing Innovation with Regulation**: A central theme throughout the discussion was the challenge of creating regulatory frameworks that protect citizens from AI risks while still encouraging private sector investment and technological advancement. Panelists emphasized the need to incentivize AI development in crucial sectors like healthcare, education, and agriculture.


– **Implementation Challenges and Regulatory Bodies**: Participants raised critical questions about which entities should enforce AI regulations, noting that traditional telecommunications regulatory bodies are insufficient. The discussion highlighted the need for specialized AI governance bodies with proper authority and the challenge of creating enforceable regulations that hold up in courts.


– **Global Coordination vs. Local Adaptation**: The conversation addressed the tension between AI being a global technology requiring international coordination and the need for country-specific regulations that reflect local contexts, cultural norms, and institutional capabilities. Participants noted the lack of international AI law similar to cybersecurity frameworks.


– **Capacity Building and Education**: All panelists emphasized the critical importance of educating parliamentarians, policymakers, and citizens about AI technologies. They stressed that effective regulation requires deep understanding of the technology being regulated, and highlighted the need for continuous learning as AI rapidly evolves.


## Overall Purpose:


The discussion aimed to examine the role of parliaments in AI regulation and governance, sharing experiences and best practices across different countries and regions. The forum sought to address practical challenges parliamentarians face when developing AI legislation and to explore how democratic institutions can effectively govern rapidly evolving AI technologies while protecting human rights and promoting innovation.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, with participants openly sharing challenges and learning from each other’s experiences. The tone was serious and urgent, reflecting the gravity of AI’s impact on society, but remained optimistic about finding solutions through international cooperation and knowledge sharing. There was a notable shift toward more technical and practical concerns as the discussion progressed, with audience questions bringing focus to specific implementation challenges, enforcement mechanisms, and real-world harms that need immediate attention.


Speakers

**Speakers from the provided list:**


– **Arda Gerkens**: Moderator of the open forum on AI regulation and parliaments


– **Axel Voss**: German lawyer and politician from the Christian Democratic Union of Germany (CDU), Member of the European Parliament since 2009, Coordinator of the People’s Party Group in the Committee of Legal Affairs (2017), Shadow rapporteur on the AI Act, focuses on digital and legal topics


– **Amira Saber**: Egyptian Member of Parliament, Secretary General of the Foreign Relations Committee, National winner of the 2025 Study UK Alumni Awards (social action category), Alumni of the University of Sussex, Policy leader advocating for climate action, AI governance and youth empowerment, Part of ABNIC (African Parliamentary Network on Internet Governance)


– **Rodrigo Goni Romero**: Politician from Uruguay, Member of Partido Nacional, Represents the department of Salto, President of the Committee of the Future of the Parliament, Engaged with AI and democracy


– **Sarah Lister**: Co-director of governance, peace building and rule of law hub at UNDP, Provided closing remarks


– **Hossam Elgamal**: Private sector representative from Africa, MAG (Multistakeholder Advisory Group) member for four years


– **Yasmin Al Douri**: Co-founder and consultant at the Responsible Technology Hub (first European-led, youth-led non-profit focusing on bringing youth voice to responsible technology policy)


– **Meredith Veit**: Public work and human rights researcher with the Business and Human Rights Resource Center


– **Mounir Sorour**: From Bahrain


**Additional speakers:**


– **Participant** (Ali from Bahrain Shura Council): Mentioned that Bahrain drafted and approved the first AI regulatory law


– **Participant** (unnamed): Made comments about simplifying AI laws and keeping frameworks flexible


– **Participant** (unnamed): Discussed the exploitation of children through the internet and AI; published an article with the Ahram Center for Political and Strategic Studies about the recruitment of children into extremism through AI and gaming


– **Andy Richardson**: IPU staff member mentioned as contact for AI tracking activities (referenced but did not speak)


Full session report

# Parliamentary Perspectives on AI Regulation: A Comprehensive Discussion Summary


## Introduction and Context


This discussion, moderated by Arda Gerkens, was held as part of the Internet Governance Forum (IGF) and brought together parliamentarians and stakeholders from across the globe to examine the critical role of parliaments in artificial intelligence regulation and governance. The forum was organised by the Inter-Parliamentary Union (IPU), which has been actively developing AI governance initiatives including an October 2024 resolution on AI’s impact on democracy, human rights, and the rule of law.


The IPU announced several concrete initiatives during the discussion: a monthly tracker starting February 2025 covering AI policy developments across 37 parliaments, and an upcoming November 28-30 event organised with Malaysia’s Parliament, UNDP, and the Commonwealth Parliamentary Association. The discussion featured representatives from the European Union, Egypt, Uruguay, and Bahrain, alongside civil society organisations, private sector representatives, and youth advocates, creating a multi-stakeholder dialogue on AI governance challenges.


## Current State of AI Regulation Across Different Regions


### European Union Approach


Axel Voss, representing the European Parliament as a CDU member and shadow rapporteur on the AI Act, provided insights into the EU’s comprehensive approach to AI regulation. The EU AI Act was completed in 2024, representing one of the world’s most ambitious attempts at regulating artificial intelligence through a risk-based approach.


The Act focuses on high-risk AI systems including biometric identification, critical infrastructure, education and vocational training, employment, essential private and public services, law enforcement, migration and border control management, and administration of justice and democratic processes. Voss emphasised concerns about implementation challenges, particularly the need for unified interpretation across EU member states to avoid the confusion experienced with GDPR.


Voss noted ongoing discussions about potentially postponing certain requirements, indicating implementation challenges even for completed legislation. He argued that “the democratic legislator is too slow for the technology developments” and advocated for framework-based approaches rather than detailed technical regulations, calling for legislators to “reduce our normal behaviour” and provide solutions within three months.


### Egyptian National Strategy


Amira Saber, representing Egypt’s Parliament as Secretary General of the Foreign Relations Committee, described her country’s efforts in developing AI governance frameworks. Egypt has developed a national AI strategy, and Saber introduced the first parliamentary draft bill on AI governance in the country, emphasising data classification and ethical considerations as foundational elements.


Saber highlighted the weaponisation of AI in regional conflicts, particularly referencing increased civilian casualties in Gaza due to AI-enhanced military systems. She stressed that “there is no one safe until everyone is safe,” connecting AI governance to broader questions of global security and collective responsibility, drawing parallels to lessons learned from the COVID-19 pandemic.


Her approach emphasises data classification as a prerequisite for effective AI regulation, treating national data as a valuable asset requiring protection and accountability measures. She described Egypt’s establishment of a Supreme Council on Artificial Intelligence and emphasised the need for these bodies to have authority to hold governmental entities accountable.


### Uruguay’s Cautious Consensus Approach


Rodrigo Goni Romero, representing Uruguay’s Parliament as President of the Committee of the Future, outlined his country’s deliberately cautious approach to AI regulation. Uruguay has chosen to establish general legal frameworks whilst observing developments in larger jurisdictions, with Goni Romero explicitly stating that Uruguay prefers to “go slow” and learn from others’ experiences.


This approach prioritises consensus-building and stakeholder engagement over rapid regulatory development, reflecting practical considerations about limited regulatory resources and the need to avoid deterring investment whilst ensuring appropriate oversight. Uruguay also emphasises preparation for AI’s impact on employment through capacity building and training programmes.


### Bahrain’s Regulatory Innovation


A participant from Bahrain’s Shura Council noted that Bahrain has drafted and approved what they described as “the first AI regulator law,” using the EU Act as a benchmark but making it simpler and more streamlined. This contribution highlighted how smaller nations can sometimes move more quickly than larger jurisdictions in developing AI governance frameworks.


## Key Challenges and Debates


### Balancing Innovation with Regulation


Throughout the discussion, the tension between promoting innovation and ensuring adequate protection emerged as a critical challenge. Amira Saber articulated this clearly, emphasising the need to avoid hindering investment whilst ensuring ethical AI use, particularly in sectors such as healthcare, education, and agriculture.


The challenge is particularly acute for developing countries that depend on foreign investment and technology transfer for AI development. Different speakers proposed various approaches to achieving this balance, from Egypt’s emphasis on clear ethical guidelines and data classification to Uruguay’s consensus-based approach and the EU’s comprehensive risk-based framework.


### Implementation and Institutional Capacity


Hossam Elgamal, representing the private sector perspective from Africa, raised critical questions about which entities should enforce AI regulations, noting that traditional telecommunications regulatory bodies are insufficient for comprehensive digital society regulation. He observed that “AI is global, it is not local,” highlighting the mismatch between global technology and national regulatory frameworks.


This institutional gap represents a fundamental challenge, as many countries lack the specialised regulatory bodies needed for effective AI oversight. The discussion revealed that creating effective regulation requires not only appropriate laws but also the institutional capacity to implement and enforce them.


### Democratic Processes and Technological Speed


A significant tension emerged between Axel Voss’s advocacy for faster legislative processes and Rodrigo Goni Romero’s preference for slower, consensus-based approaches. Voss argued that democratic legislators must accelerate their processes to keep pace with technology, whilst Goni Romero defended deliberative approaches that build stakeholder consensus.


This disagreement reflects fundamental differences in regulatory philosophy and the particular challenges faced by different types of countries in the global AI governance landscape.


## Human Rights and Social Impact


### Immediate Harms and Vulnerable Populations


Amira Saber provided vivid examples of AI’s immediate harms, particularly describing deepfake threats to women in conservative communities where AI-generated pornographic content could result in death threats. This powerful illustration demonstrated how AI risks intersect with existing social inequalities and cultural contexts in potentially fatal ways.


The discussion also addressed child exploitation through AI-powered platforms, with one participant describing how gaming platforms are being used for recruitment into extremism and radicalisation. These examples highlighted the urgent need for AI governance frameworks that address current harms rather than focusing solely on potential future benefits.


Meredith Veit, representing the Business and Human Rights Resource Centre, emphasised the importance of addressing actual harms happening now rather than being distracted by industry narratives about potential benefits, highlighting the need to maintain focus on protecting vulnerable populations.


### Generational Perspectives and Assumptions


One of the most significant moments occurred when Yasmin Al Douri, representing youth perspectives through the Responsible Technology Hub, directly challenged assumptions about young people’s digital literacy. When Axel Voss expressed “sorrow” that youth grow up in a world where they cannot rely on what they hear or read, Al Douri responded that “young people are way better at actually seeing what is deepfake and what is not.”


This exchange highlighted broader issues about generational assumptions in policymaking and the importance of including youth voices in AI governance discussions. Al Douri observed that young people are “always deemed as not knowing specific things when we’re actually really good at specific things,” revealing how generational biases can influence policy development.


## Data Governance and Accountability


### Data Classification as Foundation


Amira Saber consistently emphasised data classification as crucial for effective AI regulation, arguing that countries must establish clear frameworks for categorising data according to sensitivity levels before implementing comprehensive AI governance systems. This approach recognises that AI systems are fundamentally dependent on data and that effective AI governance requires robust data governance frameworks.


Hossam Elgamal reinforced this perspective by noting that many countries lack proper data exchange policies needed before implementing AI regulation, highlighting how AI governance builds upon existing digital governance frameworks.


### Accountability Frameworks


The discussion revealed significant challenges in establishing clear accountability frameworks for AI systems. Saber’s question about who should be held accountable when sensitive hospital data is leaked in AI systems illustrates the complexity of assigning responsibility in AI-enabled systems involving multiple actors across development and deployment phases.


## Capacity Building and Education


### Parliamentary Education Needs


All speakers emphasised the critical importance of educating parliamentarians about AI technologies. Amira Saber noted that effective regulation requires deep understanding of the technology being regulated, describing taking courses herself to better understand AI systems.


Axel Voss acknowledged the knowledge gap among current regulators, noting that many lack sufficient technical knowledge to effectively govern AI systems. This educational challenge is compounded by the rapid pace of AI development, requiring continuous learning rather than one-time training programmes.


### Broader Stakeholder Education


The discussion highlighted the need for broader public education about AI systems and their implications. This extends to civil society organisations, private sector actors, and other stakeholders who play important roles in AI governance ecosystems, as the multi-stakeholder nature of AI governance requires that all participants have sufficient understanding to contribute meaningfully.


## International Coordination and Future Directions


### Global Cooperation Challenges


Hossam Elgamal’s observation that “AI is global, it is not local” highlighted the fundamental challenge of governing global technology through national regulatory frameworks. His comparison with cybersecurity, where international law remains underdeveloped despite decades of effort, suggests that AI governance faces similar structural challenges.


Amira Saber proposed creating an “AI policy radar” similar to climate policy tracking systems to help parliamentarians understand global regulatory developments and learn from international experiences. This represents a practical approach to enhancing international coordination without requiring formal treaty arrangements.


### IPU’s Ongoing Role


Sarah Lister, UNDP co-director, emphasised in her closing remarks that AI governance is fundamentally a governance issue, not just a technological one. The IPU’s commitment to tracking parliamentary AI activities across 37 countries provides a foundation for continued knowledge sharing, with Andy Richardson identified as the contact for parliaments wanting to be added to the tracking list.


## Conclusions


The discussion revealed both the complexity of AI governance challenges and the potential for constructive international cooperation. While significant disagreements remain about regulatory timing and approaches, there was broad recognition of the need for capacity building, multi-stakeholder engagement, and frameworks that balance innovation with protection.


Key unresolved challenges include creating appropriate institutional capacity for AI governance, developing effective international coordination mechanisms, and ensuring that regulatory frameworks can address immediate harms while promoting beneficial AI development. The emphasis on youth engagement and multi-stakeholder approaches suggests that effective AI governance will require inclusive processes that incorporate diverse perspectives and expertise.


The IPU’s ongoing initiatives, including the monthly parliamentary tracker and upcoming collaborative events, provide concrete mechanisms for continued cooperation and knowledge sharing as countries navigate these complex governance challenges.


Session transcript

Arda Gerkens: here at the open forum where we will speak about the AI regulation and get some insight from parliaments. You’re not hearing me? Closer. Of course, you have to put your headsets on. This is what the workshop is about, yeah. I’ll give you some seconds to do so. And it’s non-translation, so it’s all English, channel one. Okay, thank you so much. I’m very happy to say that we have a beautiful panel today. Mr. Axel Voss, who is a member of the European Parliament, Miss Amira Saber, who is a member of the House of Representatives of Egypt, and Mr. Goni from Uruguay. Before we start, I would like to point out some of the Inter-Parliamentary Union activities that have already been done on AI, but also will be happening in the future. First of all, in October 2024, the IPU adopted a resolution on the impact of AI on democracy, human rights, and the rule of law. So any parliamentarians in here, or people who are working for any parliament, it’s a very interesting document to look at, the resolution that has been adopted in October 2024 on the impact of AI on democracy, human rights, and the rule of law. And since the beginning of 2025, February, the IPU began to publish a monthly tracker that monitors which parliaments are taking action on AI policy. So that’s legislation, committee inquiries… all that tracking is done by the IPU and it currently already covers 37 parliaments. So we want to know if any parliaments here are missing from that list. If you have activities on AI and you want to be on the tracking list, you can reach out to Mr. Andy Richardson, who is sitting here at the corner in front of me, and he will add you to that list. And this year, from the 28th to the 30th of November, there will be a co-organized event with the Parliament of Malaysia, UNDP and the Commonwealth Parliamentary Association, and that event is on the role of Parliament in shaping the future of responsible AI. This will be published so you can see more about this activity, and of course we hope that many parliamentarians will attend. And today from the UNDP is also here Ms. Sarah Lister. She is a co-director of the governance, peace building and rule of law hub at UNDP, and she will make the closing remarks for today. So she won’t be speaking until the end; it’s not because I don’t want her to speak, it’s just because she’s going to listen and then conclude at the end. Okay, that’s for the normal remarks in the beginning. I would like to ask Mr. Axel Voss, who is a German lawyer and he’s also a politician from the CDU, the Christian Democratic Union of Germany. He’s actually been serving as a member of the European Parliament since 2009, and you have been a coordinator of the People’s Party Group in the Committee of Legal Affairs in 2017, and you have been focusing your work on digital and legal topics, one of them being the shadow rapporteur on the AI Act. What is the current state of play with regard to AI regulation in Germany, in the European Union?


Axel Voss: So, thanks a lot, and also thanks for the invitation and having me here. The situation is as follows. We’ve finished the whole AI Act, yeah, let’s say at the end of last Monday, so in 2024, and now the transition time will pass, so we have an additional part coming into force at the beginning of August. There’s a kind of discussion going on that this should be postponed and so on. There’s a request also from the American or US side, and so this is one thing, but the other thing is also that our companies probably need more time in adapting these, and so there’s a discussion going on just of focusing on the high-risk AI systems, so that might be postponed. It’s not decided yet, but the discussion is there. And the situation is, of course, we are concentrating in the AI Act especially on the high-risk systems. This is not forbidden. This is allowed, but we are asking for more requirements for the deployer and also for the system itself. And of course, then it’s quite interesting to see or to notice what is an AI high-risk system, and here we have different options in place: so at first biometric identification, we consider this as a kind of an AI high-risk system; critical infrastructure; also education and vocational training; so also the question of employment, workers management and also self-employment; the access and enjoyment of essential private services and benefits; law enforcement also is some of these elements; migration, asylum and border control management is considered by a majority in our house as an AI high-risk system; and also administration of justice and democratic processes, also this one is considered as an AI high-risk system. And this is also something: the whole AI Act is delivering some general additional remarks on AI, but focusing more than less on AI high-risk systems, and therefore we are also trying to simplify the lives of our businesses in installing so-called sandboxes. I still have the feeling, if I may add this, about what we shouldn’t do, and this is not yet 100% to my satisfaction: we should have only one interpretation of these provisions of the AI Act. We haven’t done this in the data protection regulation, but here in the AI Act it’s extremely important that we have in the European single market only one interpretation of everything, and otherwise we will be confused and the companies will be confused, and this is quite important.


Arda Gerkens: Well, I couldn’t agree with you more. And also handling the GDPR, sometimes you need to interpret it, so it’s very, very… It’s important that we have clear definitions. I’m looking forward to your work on that. Ms. Amira Saber, you are an Egyptian member of the Parliament and you are serving as the Secretary General of the Foreign Relations Committee. Quite impressive. And you are also a national winner of the 2025 Study UK Alumni Awards, in the social action category, and an alumna of the University of Sussex. So it’s quite impressive, right? You are a policy leader, you advocate for climate action, also a very important topic, AI governance and youth empowerment, while you’re leading on foreign relations and social development. We’ve just heard from Mr. Axel Voss what’s happening in the European Union, but Europe is just one place, and maybe you can elaborate a little bit more on what’s happening in your country on AI and regulation.


Amira Saber: Yeah, thank you so much. And it’s a pleasure to be talking on this panel amid esteemed colleagues. Actually in Egypt, we already have a strategy for AI which we have relaunched this year. So there is the boundaries of the national strategy. And I introduced to the Parliament the first draft bill on AI governance, which was very luckily endorsed by 60 other Egyptian MPs. The question of to regulate or not to regulate is a very big debate all across the world. And coming to my background on social democracy, I wanted to tackle the ethical part of which. How we can guarantee that the data is well classified accordingly, how to hold the provider, the entity, the government, everyone accountable according to the sensitivity of the data. Because if it comes to AI, which is basically functioning on the data and the data providers, the national data becomes a huge asset by itself. So if, for example, the data of a hospital in a far place in Egypt is leaked. Who should I hold accountable based on this? So what I tried to do based on the EU Act, by the way, because I introduced this bill like now and a year and a quarter ago, it was March 2024. And what was there in the space back then was the EU Act as the main regulatory bill, which is in the space. And we can like frame on and discuss especially that there was a huge debate between the two big schools of the US and the EU on the regulation. Because when you regulate, there is a constraint somehow on technologies advancement on investing in like crazy innovative ideas, because this might hold you accountable. And accordingly, you may pay fines, this will be a financial burden. So how to balance the innovation and incentivize the private sector to invest? Because we need the AI investments in healthcare in my country to a big deal. We need it in education, we need it in agriculture. And since our esteemed attendees today, many of which are parliamentarians, I’m not just talking about the legislative part of which I’m talking about the scrutinizing part as well. Because here comes the importance of capacitating parliamentarians. If you capacitate and educate parliamentarians on how they can use their tools to ask the different ministries, how they use the AI technologies in the different sectors to advance the work, to benefit as much as we can, the people in their country, this matters a lot. So the basic thing, which I am concerned with also is raising the awareness of the parliamentarians in my country and across the region. Actually, I’m also part of ABNIC, it’s the African Parliamentary Network on Internet Governance, an excellent African network which tries to raise the all the time, the knowledge and the exchange of experiences when it comes to data governance. So again, educate and capacitate, this is crucial. And then we discuss how it worked, because what works for Egypt definitely couldn’t work for other countries and vice versa, but there are experiences which we could always learn from and develop. So actually right away as well, there is another bill which is under progress. It could be governmental, because in Egypt we have the bills either introduced from the parliamentarians or from the government. So actually to my knowledge, there is a very big coordination, a very fruitful one between the Ministry of ICT and the Ministry of Judiciary, the Ministry which is actually concerned with judiciary, to coordinate between each other, how can they release a draft bill or how can they introduce a bill on AI governance. 
So the question of regulation for me, my priorities of which is how we can make it ethical, how we can focus on the sectors that matters, must and top, and how can we also incentivize the private sector, because my bill has certain incentives to get the private sector interest to invest in certain sectors actually on top of which are the ones which I prioritized during my talk now. So it’s a continuous learning process, but very honestly, after we have seen in the Middle East recently and the political consequences of the usage of AI when it is weaponized, because it has been weaponized recently in the war on Gaza, it has weaponized recently in the different wars in the region, this brings attention much more to how important the regulators, the policy makers, everyone who is a decision maker should be very much aware and capacitated on how. this develops touching the lives of thousands of people. There have been lots of reports circulating that when the IDF uses AI on the war in Gaza, this actually has raised the casualties of civilians being dead. So again, the question is political, the question is social, the question is on every aspect of how AI governance today is one of the most things which affects every aspect of our lives, no matter where we are.


Arda Gerkens: Well, this is a very warm way of saying the impact that AI has on our lives, and especially in your region. And it’s very good to hear that you’re also there giving advice to other countries in your region. So maybe if there’s any parliamentarian here who would like to speak to Amira afterwards and get some advice, please make use of her. And in the meanwhile, I would like to, I hope you can hear me well, because my, okay, it’s my headphones that are not working well. Mr. Rodrigo Goni Romero, you are a politician from Uruguay. From Uruguay. And you belong to the Partido Nacional and you represent the department of Salto, right? Yes. Yes, and you have been engaged with AI and democracy. And also you have highlighted the importance of getting the parliaments into the debate on AI. You think that’s very important. Can you tell me what’s happening with the regulation on AI in your country?


Rodrigo Goni Romero: Yes. As Uruguay is a very small country in South America between biggest, big country, Argentina and Brazil, we are prioritized to be open to investment and focusing promoting, investing in AI. So in Uruguay and in neither country in Latin America have approved a law, an AI law as European Act, but in many countries there are many hundreds and hundreds of drafts. In Uruguay no, in Uruguay we prefer have passed a legal framework approved by all parties which self mandate us to develop regulation with the participation of all sectors and basis of consensus. Like I try to avoid to to do a bad signal for the investment so just approve a very very general legal framework that well we we started running to all stakeholders, academia, companies, technological companies and to develop the process very slow. I prefer go slow. I prefer observe that that they happen in Europe and US and maybe a bigger country like Brazil for


Arda Gerkens: example. Thank you very much. It’s interesting to hear that both of you also highlight the importance of having investment in AI, and the danger that you infringe on that investment when you have regulation. Maybe, Axel, you can tell a little bit about what progress has been achieved, and what do you see, apart from this, as challenges and risks with the AI regulation?


Axel Voss: So we have to be aware that the digital future means that everything is more transparent, is circular and also digital, and this brings us all as legislators under extreme pressure, because it’s a challenge now to adapt the online world to the offline world somehow, or vice versa. And this, of course, what we are connecting with AI, this is probably the fundament of everything else that is coming next. So that’s why it is important to have a kind of a frame: how far you can go and what might not be part of AI systems, so what is forbidden, what is high risk and what is low risk, and so on. And so also there is a kind of a fear of shifting power from humans to machines, so that’s why also we need to face these, and there is only a thin line between good purposes and bad purposes. So that’s why we have the abilities with AI in widening our human abilities on the one hand side, and organizational or societal possibilities: the meaning of AI for health and climate change, energy, traffic, administration, security, education, future-orientated research. This is all what we have in mind, what we should go for. And that’s why I would say there are a lot of challenges of having advantages also in using AI. On the other hand, of course, we are also facing a lot of risks because of this thin line. There is a risk for democracies, once again, this fear of loss of control. Then we are facing these arguments of surveillance machines, conspiracy theories. There might not be an exchange of views and arguments any longer. What is then anti-democratic? The manipulation of public opinion. This is also something: fake news, disinformation for destabilizing countries. So the hurdles for attacking free democracies and freedom are lower. So, especially for the young people or the youth generation, I would say it’s hard to differentiate what is real and what is not real. So they can’t trust any longer what they are reading, seeing or hearing. So that’s why this is a kind of a risk situation. You need to bring this somehow into a balancing situation, so that you are focusing on these advantages, what you can gain out of it, and trying to reduce risks also. And this is what we are trying to do.


Arda Gerkens: Thank you, Axel. You just mentioned that for youth it’s hard to know what’s real, but maybe I’m not youth anymore. Sometimes I also feel it’s hard to know what’s real and what’s not. Amira, can you tell, do you also recognize that? What risks and challenges do you see?


Amira Saber: Absolutely. Actually, one of the biggest challenges is deepfake. Because deepfake is not just about how it affects the political space when it comes to elections and electoral campaigns and the manipulation of some political systems. It comes to everyone’s life, especially in some closed communities or communities which are still having their own strict cultural frameworks. Imagine a girl who is living in a village with certain cultural norms and she has leaked photographs of her on pornography. This might threaten her life directly. She could be killed. And actually there are incidents in many countries of girls and women who have suffered or actually have risked their lives to deepfake. So this is affecting everyone’s life. You can ruin someone, you can ruin his life, his career, her life, her career, based on deepfake images, deepfake videos. And now, even during the war, the war in Iran, which is just a few days ago, we were doomed. Everyone was doomed with a massive amount of media, videos, photos, news, with a very fine line on how to verify this is true or not. So in today’s world, what we thought that it would more of empower us towards knowledge and towards edging this knowledge is actually questioning the amount of feeds which we have. Is it true or not? And this again brings a big, important question of how do we verify? And this also applies to education. How do a professor verifies that a certain research is AI generated or it’s done by the personal work of the student? So what is happening now is that I see also developments. on many levels when it comes to the models verification and when it comes to the content verification, is it AI generated or not? And this is why I say all the time, educate, educate, educate, capacitate, capacitate, capacitate. For politicians, for policymakers, and for the people, for everyone. Because it’s a multi-stakeholder thing. Because it’s a multi-stakeholder thing with all the parties involved in the development on this process. Everyone’s life is touched and is altered. And you can’t keep just away from it. However, whatever trials you are exerting in that track, you can’t. It’s embedded in your life. And the good thing now is that the ethical questions are being graced on every international table. I usually hear about different strategies that are now trying to regulate and to have a bold framing, which consequently have some legal responsibilities. If you are dealing with a classified data or you’re dealing with data that is of high risk, this should have immediately a kind of legal liability. Otherwise, it’s a case because it’s already a case which we really need to, it’s a case which is touching everyone’s life, which we really need to regulate. In my country and in my region, the beautiful thing is that there are thousands, I can say millions of young people who are very enthusiastic to get to learn all about AI. I myself, before developing the draftable, I went through different courses, crash ones. I had interviews, definitely because you can’t regulate something which you don’t know. Really good. At least. Take an example of what she did. Very good. You should have a deep dive. Actually, now I can’t pass a day without using at least three apps of AI, at least in my daily work. And it helps a lot. This is the beautiful face of it. But every coin has another side. And this other side, which we as regulators and decision makers should look at thoroughly. 
We should maximize the benefits of this technology, and we should also look at the risks. And in countries like mine and other countries, in even a position which is having stressful economic situations, the question of AI is the question of electricity, is the question of infrastructure, is a question of capacity building. So the divide is there. I don’t usually like to speak this language, but there is a divide which we all should cross, because this motto, if anyone remembers which of the SDGs, leaving no one behind, we just, you know, we just didn’t remember well as humans the lesson of COVID, when we stated that there is no one safe until everyone is safe. We just forgot about that. This applies to everything else. It applies to AI. There is no one safe until everyone is safe, and we have a responsibility to make it a safe space as possible, because definitely it’s going to be manipulated all the time for different purposes, for different reasons, politically and otherwise. But the real responsibility here is how to make it safe as much as possible. And the good news also is that countries in my region, like Saudi Arabia or UAE or Qatar, they are having a huge potential, like in my country, of young people who are eager to learn, who are eager to get even ahead of the curve of too many competitive setups and environments, which is really appreciated. And I always say that the private sector and the UN agencies have a huge duty to capacitate as much as possible and to get investors to highlight these important tracks, which touches everyone’s life.


Arda Gerkens: Well, thank you very much. I see there’s a question. If you hold on one second, sir, because I would like to pose a question to Mr. Rodrigo and then afterwards I will give the floor to you. So, please go ahead. I was talking about the progress and the challenges and the risks.


Rodrigo Goni Romero: Yes, we are focusing on developing a national strategy of AI to try to involve all the society in the risk, in the challenge, and very focused on capacity. I am the president of the Committee of the Future of the Parliament and I try to awareness to the people the risk of the future of the job. So, we put focus in prepare, capacity. I have to recognize in many perspectives that artificial intelligence have many risks to the job, many changes. So, we have to prepare. There is no magic way to face this risk. Just capacity, capacity, capacity. But many people don’t know about the risk. So, I think it’s our duty of the Parliament to awareness, to ask to prepare and to facilitate the program. Not just to the children, not just to the school, also to the worker. So, we are focusing this.


Arda Gerkens: Very good, thank you very much. The floor, please could you state your name and then your question.


Hossam Elgamal: Yes, my name is Hossam El Gamal. I’m private sector from Africa and I’m a MAG member. I have been MAG member for four years. Well, coming back, AI has been swiftly coming, and in fact it increased a lot the digital gap. We are facing power gap, we are facing computational power gap, and ability to buy the processors to do AI. We are facing data access gap. And finally we are facing scientist capability within the AI that would then build capacity for others as well. Many countries are working on building capacity, which is good, putting some strategy is good. But let me just ask a few questions for you to think about and answer. One thing is, what is the regulatory body that will implement? We don’t in all countries till now, all what we have is telecommunication regulatory body, which is no longer capable of handling digital society regulation. And going to AI and putting regulation for AI, who is going to implement the regulation? So we need along with building AI regulation to start thinking about the regulator and how we are going to do it. So to incentivize people to work in AI, but at the same time to put limitation to misuse of AI. Now second thing is, AI is global, it is not local, and same as cyber security. Till now we are facing huge challenge in having international law of cyber security. And will be the same for AI, each country will try to start having its own regulation. But how we are going to implement it globally? Because generally data that will be used, whether fake or right. will be a global one. So again, international regulation, how is it going to handle this? And finally, a lot of countries in the South especially, in addition of lacking the power, they did not implement yet data exchange policy. So we need first to pass this point in order to be able to go to the next one. Thank you very much.


Arda Gerkens: Thank you very much. We will take one more question please, yes.


Participant: Thank you so much. My name is Ali from Bahrain Shura Council and I believe that Bahrain have drafted the first and got approved the first AI regulator, sorry, law for regulating the use of AI. We managed to do a framework that it combined a balance, which is that was the challenge between investors, getting investors and believe pushing the innovation and how to regulate the bad side of the use of the AI. But my question, I see in the neighbor’s country like Dubai and Saudi Arabia, like Dubai maybe they have a ministry of AI there currently. I think they are implementing also putting a member, an AI member in the parliament, but I don’t see them that they are regulating the AI. Are we doing a step forward, are we doing it ahead or is it we have to slow down on regulating the thing? That’s the question. Thank you.


Arda Gerkens: And the last question and then we’ll answer them all together because they are very much alike.


Yasmin Al Douri: I’ll keep it short. Good afternoon. My name is Yasmin Aldouri. I’m a co-founder and consultant at the Responsible Technology Hub. We’re actually the first European-led, youth-led non-profit that focuses on bringing the youth voice to responsible technology in the area of policy as well. my question is focused actually on Mr. Axel Voss. So your municipality is actually my hometown, so I’m really happy to see you, but also I had to think a little bit when you said that young people do not really understand or cannot really distinguish between news that might be disinformation or misinformation or can’t distinguish between deepfakes and I would definitely disagree specifically on my work with young people. I would even state that young people are way better at actually seeing what is deepfake and what is not and this shows a little bit the issue that we’re generally facing as a young generation. We are always deemed as not knowing specific things when we’re actually really good at specific things. So my question to you specifically for Europe is how can we bring the reality of youth to parliamentarians and how can we make sure that the regulation we’re actually doing today is future-proof for generations that are coming? Thank you.


Arda Gerkens: Well, very good questions, all of them. I actually would like to give Amira the floor first on, you know, the question of what regulatory body is capable enough of implementing this, and also, because you’re from the region, on the very good points made by Bahrain. And also, it’s a global problem, right? So how do we make sure that legislation in one country has the same effect in the other, and maybe we don’t want to have the same effect, so how do you go about with this as a parliamentarian? First when it comes to


Amira Saber: the very important question from Mr. Hossam that was a challenge for me but in Egypt actually have recently a body which is the Supreme Council on Artificial Intelligence. By law they didn’t have the authority to actually may I say hold the governmental entities accountable this is what I tried to do in my draft bill to to give them the authority to hold every ministry accountable. And in Egypt, it’s very much intermingled because now, as I said, the Ministry of Justice and the Ministry of ICT, they are collaborating together towards another draft bill. And actually when it comes to every other ministry, was it education, health, whatever, they have also a mandate on advancing their services with AI technologies. So who could be the body? It should be a body which is actually just, you know, having the framework, the strategy, and it should be regulating it amongst all the other players, which in my case, I see in Egypt, the Supreme Council on Artificial Intelligence is a good entity to do that. So it depends a lot on the context, it depends a lot on the institutions and on the stage and the development of these institutions according to each and every country. But in that case, that’s how I see it in Egypt. For the comment and the question of Honorable Ali, first, congratulations. It’s so good music to my ear to hear that another Arab country has walked miles in that way and road because it’s, I see, a matter of sovereignty. It’s, I see, a matter which is touching everyone’s life. So any parliamentarian trial in that track is something which I very much appreciate. And for this, allow me, Arda, to just intensify a recommendation, which I said yesterday at one of the panels that we need to have in the space a kind of a policy radar for AI. That would be extremely knowledgeable for anyone who is accessing that on the level of policymaking to know what is happening in every region, what is happening in every country, and what do they have in place and what do they have in progress. So if we have such kind of radar, there is a climate policy radar. I wish there would be an AI policy radar. Any of the entities could sponsor that and it would be of great benefit for parliamentarians and decision makers so that I know and everyone knows what is happening in other countries. So, should we slow down or this is the question to regulate or not, but let’s at least classify the data. This is what I am very much concerned with because what is not classified couldn’t be easily regulated. So we have like broader things to think about when it comes to classification and to have a legal liability based on which and the other thing is not to slow down at all when it comes to incentivizing the ministries and asking them and doing this chronitizing job of making every ministry up to using AI for the good of the people in their mandate. So it’s like depending on the context, but again, at least the data classification is one crucial thing.


Arda Gerkens: You make a very good point on the data classification, and I think that also answers the question. I think it is indeed worrying that there’s a lot of legislation still needed on this data exchange, and that’s something that needs to be worked upon simultaneously. I would like to ask Mr. Axel Voss the question, the excellent question, how do we bring the reality of youth into the legislation? So how do we connect it? And maybe you can take another question as well, because there have been some online questions, remote questions asked. Can AI help in mitigating the social impact of child abuse and gender-related inequalities? It’s kind of a heavy question, but I thought as you are very much into the AI Act, maybe you can stipulate something on it. So thank you.


Axel Voss: If I may start with my friend from the Rhineland, I do not, ah, over there. So what I mentioned shouldn’t be a kind of an insult. It was more, I feel sorrow that you are growing up in a world where you are not, can’t rely on something, what is saying or what you’re hearing or what you’re reading. So this is, so I grew up in a different century. So that’s why I would say it was more easy for me to find trust in. some of these elements. But the question of course is the now active regulators probably do not have the knowledge in a way in really understanding what’s going on. And probably sometimes I have the feeling the politicians might be a bigger problem than the technology because we are hindering sometimes some elements and especially if you’re dealing with digital laws then all of a sudden you have a totally different context instead of online laws. So that’s why it’s difficult. I hope that every parliament has someone dealing with these who is understanding a bit more than the average politician. Also if we are coming to this point and saying oh yes digital is very important and we need to develop something then probably nobody knows what to do. So that’s why we need to come forward with it and also to the others. The problem is so I speak for the European Union the democratic legislator is too slow for the technology developments. And so we are always behind. That’s why I’m I would, we can face this problem a bit more if we are saying we need to reduce the normal behavior of a democratic legislator. So meaning there should be a kind of a solution in place after three months. And otherwise we are losing track of this problem and if we are once ready with a law then it might be a kind of a different problem already occurring. So, we need to be faster and we need probably not to be detailed all the time. It’s more a kind of a frame what we should have with ethical aspects or a kind of value-orientated frame and then you might have the time, as everyone knows, if I’m in the frame, everything is okay. If I’m outside the frame, you will face trouble. So this might be a kind of a better way forward. So I’m asking sometimes myself, shouldn’t we instead of focusing on all these risks and avoiding these, shouldn’t we just ask ourselves what we are expecting from AI? This is a more positive approach probably in saying what we are expecting and we do not want to see other things. So that’s why it might help in a kind of a better way. So data exchange policy, what was mentioned, or the second question? Can it help in mitigating the social impact of gender-related abuse and gender-related inequalities? Yes. I would say it can, but of course you need to have a kind of a plan in mind how to do this. I would say it’s not coming as a kind of a stereotype behavior, so no, you have to concentrate on it and also it can help everyone in mitigating these problems, I would say. But of course you need to be very careful in what are the conditions at the end in formulating these.


Arda Gerkens: So, yeah, basically it’s what I hear you say, that as a parliament you should avoid being too detailed. Yes. But make sure that you have a framework here. Also a philosopher comes to mind, Mr. Wittgenstein, who once said, if you don’t know what you’re talking about, you shouldn’t speak on it. I hear both of you say, maybe as a politician, parliamentarian, this time you should put a little extra effort in understanding AI, because it’s so important. Do you think that the parliaments now, it’s a question also from remotes, that they are kind of shaping their own AI policies, or are they just looking at what somebody else is proposing? You already said that you took the AI Act as an example. I depended on too many sources, of which, because it was in the space then as a legislation, I definitely had a deep dive on what the EU Act had. And how about you, Axel, the European Union, they started fresh, or did they look at other examples? No. There are not many examples in there. You were the first. Yeah. They were the first, yeah, but you’re the first. I followed them. You followed them. OK, so. I went to one. Right.


Axel Voss: It’s very detailed, because we are sometimes too complicated and too bureaucratic orientated, but in general, yes, it’s. So for the people who are not hearing this, Bahrain just stated that they have used, they benchmarked their law against the one of the European Commission, and they made it much more simpler. I think that’s what they tried. As a framework. Right. OK. The problem for kind of a rule of law system is at the end, and this was also mentioned with the first question, how to implement and how to enforce all these. Also, yes, the capacity might not be there. So what we are trying now to do is to build up a so-called AI office and also give guidelines and what the provisions should mean and how we might come forward. But at the end, also, you need someone in controlling. and are enforcing all these, and here we are coming to a problem, you need to have something what is also valid in front of a court. So if this is not possible, then it might be very tricky.


Arda Gerkens: You wanted to add something, Ms. Saber?


Amira Saber: Yes. One of the main tools which parliamentarians use is actually the assessment of legislative impact. To a great extent, it is still missing when it comes to assessing the legislative impact of the AI bills which have already been released. So we are also challenged by the impact of these legislations. For example, with the EU AI Act, can I say with full confidence, 100%, that it didn’t affect the amount of private sector investment in the fields of high tech and AI development, and that these companies went to the US, for example? It’s not assessable yet. These kinds of big questions take time to assess. That’s why the question of regulation, when it comes to AI, is very challenging. You need to put the regulation in place now, you need to assess the impact, and you need to amend and edit all the time, according to the advancement of the technologies and according to what is in place. That’s why I always propose frameworks and legal interventions which regulate the basic things, which are more towards data classification, rights, ethics, the broader ones. Because when you dive deeply into the details, you get completely doomed. And this is not what the regulator should do.


Arda Gerkens: I see a lot of people nodding at what you’re saying about amending the legislation all the time. So there’s a question from the floor. And if there’s another question, you have to be at the microphone. But first from the lady. Yeah.


Meredith Veit: Hi, thank you. My name is Meredith. I’m a public work and human rights researcher with the Business and Human Rights Resource Center. And it’s really great that we’re having this panel about AI regulation in the program now, thank you for it, and that we’re better understanding what the impacts are, because there are governments, even here, touting some very problematic narratives about the dangers of clipping the wings of the potential and everything that AI can do for the benefit of society, while ignoring the mountains of evidence that we have about the actual harms that are taking place now, that need to be mitigated and dealt with now, and the justice that needs to happen now. So my question, from your seats and your positionality, is: what is needed, and what hope do you have, in terms of pushing back against this wave and these very problematic narratives, as I mentioned, in order to hold the line and keep pushing forward on all of the momentum that has been building around AI regulation for quite some time, and even keeping the EU AI Act strong, making sure that the AI office is strong in its enforcement, and having a really well-tuned regulatory approach that can help continue to set more standards moving forward?


Arda Gerkens: Thank you so much. Thank you. And another question from behind.


Participant: Your honorable, yeah, hi. Actually, it’s not a question, just a comment on amending the law. I believe that what we can do is simplify the law and leave the details to another body that is more flexible, because changing anything in the law is going to take a long time, and we cannot cope that way with the fast development of AI. So this is the only comment.


Arda Gerkens: That’s very good advice: keep the framework in the law and make sure that you have lower-level regulation so you can stay flexible. I’m going to take the last question here. Yes, please.


Participant: Thank you for giving me the floor. I’m very happy that I could reach you, especially on this topic: the exploitation of children through the internet and through artificial intelligence. I have just published a new article with the Ahram Center for Political and Strategic Studies about this issue, about how young people and children are recruited into extremism, radicalization and terrorism through artificial intelligence. And it was surprising, because when I started to study and research this topic, I found that they are recruiting children through games, through gaming. They recruit children through electronic games on the internet. It is available. So I concluded this study with some recommendations. One of the recommendations: look after your children. Don’t leave them alone with the screens. Don’t do it. They isolate them first, then they start to promote their own ideas and radicalization approaches, and then we find some young people doing horrible things. So one of the recommendations is that awareness is very important. And the legislation, of course. Thank you. Bye bye.


Arda Gerkens: Thank you very much. And this is really the last question I’m going to take. I’m going to ask for very brief answers, because Sarah will then wrap up.


Mounir Sorour: I’m Mounir Sorour from Bahrain. I would like to thank you and thank all of you. Making regulation is very easy, but at the same time difficult, especially for AI, because with AI we are working in an open space, and every time we do something, we are moving and expanding. As Ms. Iman said, we are all the time adapting ourselves. Can we just agree on something to serve as a main framework? We are looking for balance; we are not looking to fix a regulation now, because we can’t fix a regulation, especially with AI. Can we, according to the Egyptian experience, just mention what the main framework should be, at least to minimize the risks of AI?


Arda Gerkens: Thanks. Thank you. A very short answer from Amira first. You’re good? Axel, maybe you want to comment?


Axel Voss: Yes. So thanks for the recommendations, it’s good to hear. To Meredith: yes, keeping the AI Act strong, this is what we are trying to do, but we are in a kind of a trap. We are seeing that AI, AI systems and generative AI are creating a lot of wealth, and we are lagging behind everywhere except for two big regions. This is why we need to support this, but on the other hand we also try to keep very strong limits and a framework.


Arda Gerkens: Thank you. Thank you very much. Thank you all. I would like to give the floor to Miss Sarah Lister, who will give us her closing remarks.


Sarah Lister: Thank you very much. As we conclude this open forum on AI regulation, I’d like to start by thanking, first of all, all the panelists, the participants and the moderator, in person and online, for all your insights, the questions and the commitment to inclusive, rights-based governance of artificial intelligence. As a personal reflection, I am delighted that there is such a well-attended session focusing on the role of parliaments. In my experience, too often in digitalization discussions, national governance authorities and processes have been forgotten and have been brought too late into the processes. So today’s discussion has highlighted the practical role of parliaments in ensuring that AI systems are aligned with human rights and normative principles, and in ensuring that no one is left behind, as the colleague from Egypt said. We have heard that parliaments are on the front lines of some of the most pressing policy issues of our time. How do we protect citizens and all people while enabling innovation? That has been a core theme running through our discussion this afternoon; it was raised by the colleagues from Egypt and Uruguay and from the floor. And then, how do we ensure oversight in a rapidly evolving technological landscape? What type of regulatory entities do you need? How do you create governance frameworks that are grounded in human rights? We have heard that to ensure the benefits of AI reach everyone, we must invest in the development, governance and support of responsible and ethical AI, as well as in countries’ capacities to build safe, secure and trustworthy AI systems. Parliaments are key in governing the use of global technologies and in ensuring that these truly serve their publics’ interests and support the achievement of the SDGs. We have heard that no single actor can shape the governance of AI alone. Effective regulation demands a multi-stakeholder approach, bringing together parliaments, executive branches, civil society, the private sector and technical communities. And Uruguay talked about the consensus-based approach that takes place there. We heard from the floor, and then in response, about the importance of bringing youth voices into the discussions, and were asked the question: how can we effectively ensure a multi-stakeholder approach to this issue, including young people? The report of the Secretary-General’s High-Level Advisory Body on AI and the Global Digital Compact, which was signed on the margins of the General Assembly last year, notes that AI governance requires addressing existing gaps in representation, coordination and implementation. Parliaments are key actors in ensuring that. At UNDP, we see AI not only as a technological issue but as a governance one. We support countries, with our partners, in navigating both the governance of digitalization and digitalization for governance. And we support countries in developing their potential to transform their public services, build a more open and inclusive public sphere, and enhance democratic processes and institutions. We co-host with the IPU an expert group on parliaments and digital policy, to try to bring together some of the elements that people have asked for in terms of sharing experiences, and other international organizations join that expert group to help ensure that we pull together. I see that my time has passed, so I would just say once again: thank you very much for being a part of this timely and important conversation.
A special thanks to the Inter-Parliamentary Union for their partnership in making this event possible and to the IGF for hosting us. Thank you very much.



Axel Voss

Speech speed: 120 words per minute
Speech length: 1669 words
Speech time: 829 seconds

EU AI Act completed in 2024 with focus on high-risk systems and potential postponement discussions

Explanation

The EU finished the AI Act by the end of 2024, with implementation beginning in August. There are ongoing discussions about potentially postponing certain aspects, particularly focusing on high-risk AI systems, due to requests from the US and companies needing more time to adapt.


Evidence

Mentions specific timeline of completion by end of 2024, implementation starting in August, requests from American/US side for postponement, and companies needing adaptation time


Major discussion point

Current State of AI Regulation Across Different Countries


Topics

Legal and regulatory


Democratic legislators are too slow for technology developments, need faster, framework-based approach rather than detailed laws

Explanation

The democratic legislative process cannot keep pace with rapid technological developments, causing lawmakers to always be behind. A solution would be to reduce normal legislative timelines to three months and focus on ethical, value-oriented frameworks rather than detailed regulations.


Evidence

States that democratic legislator is too slow for technology developments and suggests three-month solution timeframe, mentions need for ethical aspects and value-orientated frame


Major discussion point

Balancing Innovation and Regulation


Topics

Legal and regulatory


Agreed with

– Amira Saber
– Participant
– Mounir Sorour

Agreed on

Need for flexible regulatory frameworks rather than detailed fixed laws


Disagreed with

– Amira Saber

Disagreed on

Detailed vs. framework-based regulation


Regulation should focus on positive expectations from AI rather than just avoiding risks

Explanation

Instead of concentrating solely on risks and how to avoid them, legislators should take a more positive approach by defining what they expect from AI. This would create a clearer framework where anything within expectations is acceptable, and anything outside faces consequences.


Evidence

Suggests asking ‘what we are expecting from AI’ as a more positive approach rather than focusing on avoiding risks


Major discussion point

Balancing Innovation and Regulation


Topics

Legal and regulatory


Agreed with

– Amira Saber
– Rodrigo Goni Romero
– Participant

Agreed on

Balancing innovation incentives with regulatory protection


Difficulty distinguishing real from fake content affects democratic processes and public trust

Explanation

AI makes it increasingly difficult for people, especially young generations, to differentiate between real and fake content. This creates risks for democracy through manipulation of public opinion, fake news, disinformation, and destabilization of countries.


Evidence

Mentions fake news, disinformation for destabilizing countries, manipulation of public opinions, and that young people can’t trust what they’re reading, seeing or hearing


Major discussion point

Risks and Challenges of AI


Topics

Human rights | Sociocultural


Disagreed with

– Yasmin Al Douri

Disagreed on

Young people’s ability to distinguish real from fake content


Need for better understanding among regulators who may not have sufficient technical knowledge

Explanation

Politicians and regulators often lack the necessary knowledge to understand AI technology properly. This creates a situation where politicians might be a bigger problem than the technology itself, as they may hinder beneficial developments due to insufficient understanding.


Evidence

States that ‘politicians might be a bigger problem than the technology because we are hindering sometimes some elements’ and mentions that active regulators ‘do not have the knowledge in really understanding what’s going on’


Major discussion point

Capacity Building and Education


Topics

Development


Agreed with

– Amira Saber
– Rodrigo Goni Romero
– Arda Gerkens

Agreed on

Capacity building and education are essential for all stakeholders


Requirement for enforceable regulations that are valid in court systems

Explanation

Effective AI regulation requires not just implementation mechanisms but also enforcement capabilities that can withstand legal scrutiny. The challenge lies in creating regulations that are legally sound and can be properly enforced in court systems.


Evidence

Mentions building up ‘AI office’ and giving guidelines, but emphasizes need for ‘something what is also valid in front of a court’


Major discussion point

Implementation and Enforcement Challenges


Topics

Legal and regulatory


AI can help mitigate child abuse and gender inequalities but requires careful planning

Explanation

AI has the potential to address social issues like child abuse and gender-related inequalities, but this won’t happen automatically. It requires deliberate planning and careful consideration of the conditions and formulation of such systems.


Evidence

States ‘it can, but of course you need to have a kind of a plan in mind how to do this’ and mentions need to be ‘very careful in what are the conditions at the end in formulating these’


Major discussion point

Human Rights and Social Impact


Topics

Human rights | Children rights



Amira Saber

Speech speed: 156 words per minute
Speech length: 2487 words
Speech time: 952 seconds

Egypt has national AI strategy and first parliamentary draft bill on AI governance with focus on ethical data classification

Explanation

Egypt has relaunched its national AI strategy and introduced the first parliamentary draft bill on AI governance, endorsed by 60 MPs. The focus is on ethical aspects, data classification, and ensuring accountability for data providers and entities based on data sensitivity.


Evidence

Mentions relaunching national strategy, first draft bill endorsed by 60 Egyptian MPs, and specific focus on data classification and accountability based on data sensitivity


Major discussion point

Current State of AI Regulation Across Different Countries


Topics

Legal and regulatory | Data governance


Need to avoid hindering investment while ensuring ethical AI use, especially in healthcare, education, and agriculture

Explanation

There’s a critical balance needed between regulation and innovation incentives. Countries need AI investments in key sectors like healthcare, education, and agriculture, so regulation must include incentives for private sector investment while maintaining ethical standards.


Evidence

Mentions specific sectors: ‘We need the AI investments in healthcare in my country to a big deal. We need it in education, we need it in agriculture’ and discusses including ‘certain incentives to get the private sector interest to invest’


Major discussion point

Balancing Innovation and Regulation


Topics

Economic | Development


Agreed with

– Axel Voss
– Rodrigo Goni Romero
– Participant

Agreed on

Balancing innovation incentives with regulatory protection


Deepfakes pose serious threats to individuals’ lives and careers, especially affecting women in conservative communities

Explanation

Deepfake technology creates severe risks beyond political manipulation, particularly threatening women in conservative cultural contexts. Fake pornographic images can lead to life-threatening situations, career destruction, and there have been actual incidents of women being killed due to deepfake content.


Evidence

Provides specific example: ‘Imagine a girl who is living in a village with certain cultural norms and she has leaked photographs of her on pornography. This might threaten her life directly. She could be killed.’ Also mentions ‘there are incidents in many countries of girls and women who have suffered or actually have risked their lives to deepfake’


Major discussion point

Risks and Challenges of AI


Topics

Human rights | Gender rights online | Sociocultural


AI weaponization in conflicts like Gaza has increased civilian casualties, highlighting political and social implications

Explanation

AI has been weaponized in recent Middle Eastern conflicts, particularly in Gaza, leading to increased civilian casualties. This demonstrates how AI governance affects thousands of lives and has serious political and social consequences beyond technical considerations.


Evidence

States ‘There have been lots of reports circulating that when the IDF uses AI on the war in Gaza, this actually has raised the casualties of civilians being dead’ and mentions AI being ‘weaponized recently in the war on Gaza’


Major discussion point

Risks and Challenges of AI


Topics

Cybersecurity | Cyberconflict and warfare | Human rights


Parliamentarians need education on AI to effectively scrutinize government AI use across sectors

Explanation

Capacity building for parliamentarians is crucial not just for legislation but for their scrutinizing role. Educated parliamentarians can better question ministries about their AI use and ensure technology benefits citizens across different sectors.


Evidence

Emphasizes ‘capacitating parliamentarians’ and mentions using ‘their tools to ask the different ministries, how they use the AI technologies in the different sectors to advance the work, to benefit as much as we can, the people in their country’


Major discussion point

Capacity Building and Education


Topics

Development | Capacity development


Agreed with

– Axel Voss
– Rodrigo Goni Romero
– Arda Gerkens

Agreed on

Capacity building and education are essential for all stakeholders


Continuous learning and capacity building essential for politicians, policymakers, and citizens

Explanation

AI governance requires ongoing education for all stakeholders – politicians, policymakers, and the general public. This is a multi-stakeholder issue where everyone’s life is affected, making widespread capacity building essential.


Evidence

Repeatedly emphasizes ‘educate, educate, educate, capacitate, capacitate, capacitate’ and mentions taking courses herself: ‘I myself, before developing the draftable, I went through different courses, crash ones’


Major discussion point

Capacity Building and Education


Topics

Development | Capacity development


Agreed with

– Axel Voss
– Rodrigo Goni Romero
– Arda Gerkens

Agreed on

Capacity building and education are essential for all stakeholders


Need for appropriate regulatory bodies with authority to hold governmental entities accountable

Explanation

Effective AI regulation requires regulatory bodies with proper authority to hold government ministries accountable. In Egypt, the Supreme Council on Artificial Intelligence exists but needs enhanced authority to regulate across all governmental entities.


Evidence

Mentions Egypt’s ‘Supreme Council on Artificial Intelligence’ and explains ‘by law they didn’t have the authority to actually may I say hold the governmental entities accountable this is what I tried to do in my draft bill to to give them the authority’


Major discussion point

Implementation and Enforcement Challenges


Topics

Legal and regulatory


Need for AI policy radar similar to climate policy radar to track global regulatory developments

Explanation

There should be a comprehensive AI policy radar that tracks regulatory developments across all countries and regions. This would provide valuable knowledge for policymakers to understand global AI governance trends and learn from other jurisdictions.


Evidence

Specifically mentions ‘there is a climate policy radar. I wish there would be an AI policy radar’ and explains it would help parliamentarians know ‘what is happening in every region, what is happening in every country’


Major discussion point

Global Coordination and International Cooperation


Topics

Legal and regulatory


Importance of sharing experiences between countries while recognizing different contexts

Explanation

While what works for one country may not work for another, there are valuable experiences that can be shared and adapted. Countries should learn from each other’s approaches while developing solutions appropriate to their specific contexts.


Evidence

States ‘what works for Egypt definitely couldn’t work for other countries and vice versa, but there are experiences which we could always learn from and develop’


Major discussion point

Global Coordination and International Cooperation


Topics

Development


Legislative impact assessment of AI bills still missing in many jurisdictions

Explanation

There’s a significant gap in assessing the actual impact of AI legislation that has been implemented. It’s unclear whether regulations like the EU AI Act have affected private sector investment or caused companies to relocate, making policy evaluation challenging.


Evidence

Questions whether the EU Act ‘didn’t affect the amount of private sector investments in the fields of high tech and AI development and that these companies went to the US, for example? It’s not accessible yet’


Major discussion point

Parliamentary Role and Oversight


Topics

Legal and regulatory


Agreed with

– Axel Voss
– Participant
– Mounir Sorour

Agreed on

Need for flexible regulatory frameworks rather than detailed fixed laws


Disagreed with

– Axel Voss

Disagreed on

Detailed vs. framework-based regulation


Parliaments should focus on scrutinizing ministerial AI use rather than just legislation

Explanation

Beyond creating laws, parliaments should actively scrutinize how different government ministries use AI technologies. This oversight function ensures that AI implementation serves public interests and advances citizen welfare across sectors.


Evidence

Emphasizes ‘the scrutinizing job of making every ministry up to using AI for the good of the people in their mandate’


Major discussion point

Parliamentary Role and Oversight


Topics

Legal and regulatory


Data classification crucial as foundation for effective AI regulation

Explanation

Proper data classification is fundamental to AI regulation because unclassified data cannot be easily regulated. This involves creating legal liability frameworks based on data sensitivity levels and ensuring accountability for data handling.


Evidence

States ‘what is not classified couldn’t be easily regulated’ and emphasizes ‘at least the data classification is one crucial thing’


Major discussion point

Data Governance and Classification


Topics

Legal and regulatory | Data governance


Agreed with

– Hossam Elgamal

Agreed on

Data classification and governance as fundamental to AI regulation


National data becomes valuable asset requiring protection and accountability measures

Explanation

When AI systems function on data, national data becomes a significant asset that requires protection. Clear accountability frameworks are needed to determine responsibility when sensitive data, such as hospital records, is compromised or leaked.


Evidence

Provides specific example: ‘if, for example, the data of a hospital in a far place in Egypt is leaked. Who should I hold accountable based on this?’ and explains ‘the national data becomes a huge asset by itself’


Major discussion point

Data Governance and Classification


Topics

Legal and regulatory | Data governance | Privacy and data protection


Agreed with

– Hossam Elgamal

Agreed on

Data classification and governance as fundamental to AI regulation


Need for legal liability based on data sensitivity classification

Explanation

AI governance should establish clear legal responsibilities that correspond to the sensitivity level of data being processed. High-risk data handling should automatically trigger specific legal liability frameworks to ensure accountability.


Evidence

States ‘If you are dealing with a classified data or you’re dealing with data that is of high risk, this should have immediately a kind of legal liability’


Major discussion point

Data Governance and Classification


Topics

Legal and regulatory | Privacy and data protection


Agreed with

– Hossam Elgamal

Agreed on

Data classification and governance as fundamental to AI regulation



Rodrigo Goni Romero

Speech speed: 84 words per minute
Speech length: 311 words
Speech time: 220 seconds

Uruguay prefers slow, consensus-based approach with general legal framework rather than detailed regulation

Explanation

As a small country between larger neighbors, Uruguay prioritizes being open to investment and avoids sending negative signals to investors. They have passed a general legal framework with multi-party support that mandates developing regulation through stakeholder participation and consensus.


Evidence

Mentions Uruguay is ‘a very small country in South America between biggest, big country, Argentina and Brazil’ and explains they ‘prefer have passed a legal framework approved by all parties which self mandate us to develop regulation with the participation of all sectors and basis of consensus’


Major discussion point

Current State of AI Regulation Across Different Countries


Topics

Legal and regulatory


Disagreed with

– Axel Voss

Disagreed on

Speed and approach to AI regulation


Small countries like Uruguay prioritize being open to investment while developing regulation through stakeholder consensus

Explanation

Uruguay’s approach focuses on promoting AI investment while carefully developing regulation through inclusive processes. They prefer to observe what happens in larger jurisdictions like Europe, the US, and Brazil before making detailed regulatory decisions.


Evidence

States they are ‘prioritized to be open to investment and focusing promoting, investing in AI’ and prefer to ‘observe that that they happen in Europe and US and maybe a bigger country like Brazil’


Major discussion point

Balancing Innovation and Regulation


Topics

Economic | Legal and regulatory


Agreed with

– Axel Voss
– Amira Saber
– Participant

Agreed on

Balancing innovation incentives with regulatory protection


Future job displacement requires extensive capacity building and preparation programs

Explanation

AI poses significant risks to employment that require proactive preparation through capacity building programs. As president of the Committee of the Future, the speaker emphasizes the need to prepare not just children and students, but also current workers for AI-related job changes.


Evidence

Mentions being ‘president of the Committee of the Future of the Parliament’ and states ‘I have to recognize in many perspectives that artificial intelligence have many risks to the job, many changes’ and emphasizes programs ‘Not just to the children, not just to the school, also to the worker’


Major discussion point

Risks and Challenges of AI


Topics

Economic | Future of work | Development


Continuous learning and capacity building essential for politicians, policymakers, and citizens

Explanation

Addressing AI risks requires extensive capacity building across society. Many people are unaware of AI risks, making it parliament’s duty to raise awareness and facilitate preparation programs for all segments of society.


Evidence

Emphasizes ‘capacity, capacity, capacity’ and states ‘many people don’t know about the risk. So, I think it’s our duty of the Parliament to awareness, to ask to prepare and to facilitate the program’


Major discussion point

Capacity Building and Education


Topics

Development | Capacity development


Agreed with

– Axel Voss
– Amira Saber
– Arda Gerkens

Agreed on

Capacity building and education are essential for all stakeholders



Participant

Speech speed: 127 words per minute
Speech length: 401 words
Speech time: 188 seconds

Bahrain has drafted and approved first AI regulatory law balancing innovation and regulation

Explanation

Bahrain has successfully created and approved what they claim is the first AI regulatory law. Their framework achieves a balance between attracting investors, promoting innovation, and regulating the negative aspects of AI use.


Evidence

States ‘Bahrain have drafted the first and got approved the first AI regulator, sorry, law for regulating the use of AI’ and mentions achieving ‘a balance…between investors, getting investors and believe pushing the innovation and how to regulate the bad side of the use of the AI’


Major discussion point

Current State of AI Regulation Across Different Countries


Topics

Legal and regulatory


Agreed with

– Axel Voss
– Amira Saber
– Rodrigo Goni Romero

Agreed on

Balancing innovation incentives with regulatory protection


Importance of having flexible lower-level regulations rather than frequently amending laws

Explanation

Rather than constantly amending laws to keep up with AI developments, it’s better to simplify laws and delegate detailed regulation to more flexible bodies. Changing laws takes too long to cope with the fast pace of AI development.


Evidence

States ‘changing anything in the law it’s gonna take a long time which we cannot cope as with the developing the fast developing of the AI’


Major discussion point

Implementation and Enforcement Challenges


Topics

Legal and regulatory


Agreed with

– Axel Voss
– Amira Saber
– Mounir Sorour

Agreed on

Need for flexible regulatory frameworks rather than detailed fixed laws


Children vulnerable to recruitment through AI-powered gaming platforms for extremism and radicalization

Explanation

Research shows that children are being recruited for extremism and terrorism through AI-enhanced online gaming platforms. The process involves isolating children, then promoting radical ideas, leading to dangerous outcomes.


Evidence

Published article on ‘how young people and children are recruited in extremism and radicalization and terrorism through artificial intelligence’ and explains ‘they are recruiting children through the games, through the gamings…They isolate them firstly, then they start to promote their own ideas and radicalization approaches’


Major discussion point

Youth Engagement and Future-Proofing


Topics

Cybersecurity | Violent extremism | Children rights


Importance of parental awareness and supervision of children’s online activities

Explanation

Parents must actively supervise their children’s screen time and online activities to prevent exploitation. Leaving children alone with screens makes them vulnerable to isolation and radical recruitment through gaming platforms.


Evidence

Recommends ‘look after your children. Don’t leave them alone with the screens. Don’t do it’ and explains the isolation process that leads to radicalization


Major discussion point

Youth Engagement and Future-Proofing


Topics

Children rights | Cybersecurity


Need for simplified legal frameworks with flexible implementation mechanisms

Explanation

AI regulation should focus on creating main frameworks that minimize risks while allowing for flexible adaptation. Rather than trying to fix detailed regulations that become quickly outdated, countries should establish broad principles that can be adjusted as technology evolves.


Evidence

Asks ‘Can we just say yes to something to be like a main framework?’ and mentions ‘we don’t look now to make a regulation, we can’t fix a regulation, especially with AI’


Major discussion point

Parliamentary Role and Oversight


Topics

Legal and regulatory



Hossam Elgamal

Speech speed: 129 words per minute
Speech length: 342 words
Speech time: 158 seconds

Lack of adequate regulatory bodies beyond telecommunications authorities in many countries

Explanation

Most countries only have telecommunications regulatory bodies, which are insufficient for governing digital society and AI regulation. New regulatory frameworks require appropriate institutions capable of implementing and enforcing AI-specific regulations.


Evidence

States ‘We don’t in all countries till now, all what we have is telecommunication regulatory body, which is no longer capable of handling digital society regulation’


Major discussion point

Implementation and Enforcement Challenges


Topics

Legal and regulatory


AI regulation needs international coordination as AI systems operate globally

Explanation

AI operates on a global scale similar to cybersecurity, making international coordination essential. Individual country regulations will be insufficient since data used in AI systems, whether authentic or fake, operates across borders.


Evidence

Compares to cybersecurity challenges: ‘Till now we are facing huge challenge in having international law of cyber security. And will be the same for AI’ and notes ‘data that will be used, whether fake or right. will be a global one’


Major discussion point

Global Coordination and International Cooperation


Topics

Legal and regulatory | Cybersecurity


Many countries lack proper data exchange policies needed before implementing AI regulation

Explanation

Countries in the Global South face multiple challenges including lack of computational power, data access gaps, and insufficient scientific capability. Many haven’t implemented basic data exchange policies, which are prerequisites for effective AI regulation.


Evidence

Lists specific gaps: ‘power gap, we are facing computational power gap, and ability to buy the processors to do AI. We are facing data access gap. And finally we are facing scientist capability’ and notes ‘a lot of countries in the South especially…did not implement yet data exchange policy’


Major discussion point

Data Governance and Classification


Topics

Development | Data governance | Digital access


Agreed with

– Amira Saber

Agreed on

Data classification and governance as fundamental to AI regulation


Digital divide creates gaps in computational power, data access, and scientific capability

Explanation

AI development has exacerbated existing digital divides, creating gaps in computational power, access to processors, data access, and scientific expertise. These gaps particularly affect countries in the Global South and hinder their ability to participate in AI governance.


Evidence

Specifically mentions ‘power gap, we are facing computational power gap, and ability to buy the processors to do AI. We are facing data access gap. And finally we are facing scientist capability within the AI’


Major discussion point

Human Rights and Social Impact


Topics

Development | Digital access



Yasmin Al Douri

Speech speed: 161 words per minute
Speech length: 210 words
Speech time: 77 seconds

Young people are often better at identifying deepfakes and misinformation than assumed

Explanation

Contrary to assumptions that young people cannot distinguish between real and fake content, they are actually better at identifying deepfakes and misinformation. This represents a broader issue where young people’s capabilities are underestimated by policymakers.


Evidence

States ‘I would definitely disagree specifically on my work with young people. I would even state that young people are way better at actually seeing what is deepfake and what is not’ and mentions this ‘shows a little bit the issue that we’re generally facing as a young generation’


Major discussion point

Youth Engagement and Future-Proofing


Topics

Sociocultural | Human rights


Disagreed with

– Axel Voss

Disagreed on

Young people’s ability to distinguish real from fake content


Need to bring youth reality to parliamentarians and ensure future-proof regulation

Explanation

There’s a disconnect between youth capabilities and parliamentarian perceptions that needs to be addressed. Regulation should be designed to be future-proof for coming generations, requiring better integration of youth perspectives in policy-making processes.


Evidence

Asks ‘how can we bring the reality of youth to parliamentarians and how can we make sure that the regulation we’re actually doing today is future-proof for generations that are coming?’


Major discussion point

Youth Engagement and Future-Proofing


Topics

Legal and regulatory | Human rights



Meredith Veit

Speech speed: 150 words per minute
Speech length: 215 words
Speech time: 85 seconds

Need to address actual harms happening now rather than just potential future benefits

Explanation

There are problematic narratives from governments that focus on AI’s potential benefits while ignoring substantial evidence of current harms. Immediate action is needed to address existing problems and provide justice for those already affected by AI systems.


Evidence

Mentions ‘mountains of evidence that we have about the actual harms that are taking place now that need to be mitigated and dealt with now and justice that needs to happen now’


Major discussion point

Human Rights and Social Impact


Topics

Human rights


Importance of maintaining strong regulatory standards despite pressure from industry narratives

Explanation

There’s a need to resist problematic narratives about the dangers of regulating AI and maintain momentum for strong regulatory approaches. This includes keeping the EU AI Act strong and ensuring robust enforcement through institutions like the AI office.


Evidence

Asks about ‘pushing back against this wave and these very problematic narratives’ and mentions ‘keeping the EU AI Act strong and making sure that the AI office is strong in its enforcement’


Major discussion point

Human Rights and Social Impact


Topics

Human rights | Legal and regulatory



Sarah Lister

Speech speed: 141 words per minute
Speech length: 584 words
Speech time: 248 seconds

Multi-stakeholder approach essential involving parliaments, civil society, private sector, and technical communities

Explanation

Effective AI governance cannot be achieved by any single actor alone. It requires collaboration between parliaments, executive branches, civil society, private sector, and technical communities, with parliaments playing a key role in ensuring AI serves public interests.


Evidence

States ‘no single actor can shape the governance of AI alone. Effective regulation demands a multi-stakeholder approach, bringing together parliaments, executive branches, civil society, private sector and technical communities’


Major discussion point

Global Coordination and International Cooperation


Topics

Legal and regulatory



Mounir Sorour

Speech speed: 136 words per minute
Speech length: 128 words
Speech time: 56 seconds

AI regulation should focus on flexible frameworks rather than fixed regulations due to the open and expanding nature of AI

Explanation

Making AI regulation is both easy and difficult because AI operates in an open space that is constantly moving and expanding. Rather than creating fixed regulations that cannot adapt, there should be main frameworks that can minimize AI risks while allowing for necessary adaptations as the technology evolves.


Evidence

States that ‘AI, we are working on open space. And every time we do something, I mean, because we are moving and we’re expanding’ and asks about creating ‘a main framework at least to minimize the risk of the AI’


Major discussion point

Implementation and Enforcement Challenges


Topics

Legal and regulatory


Agreed with

– Axel Voss
– Amira Saber
– Participant

Agreed on

Need for flexible regulatory frameworks rather than detailed fixed laws


Need for balanced approach that doesn’t focus solely on regulation but seeks equilibrium

Explanation

The approach to AI governance should prioritize finding balance rather than just creating regulations. This involves looking for ways to achieve equilibrium between different interests and needs rather than simply imposing restrictive measures.


Evidence

States ‘we are looking for balancing, we don’t look now to make a regulation’ and emphasizes that ‘we can’t fix a regulation, especially with AI’


Major discussion point

Balancing Innovation and Regulation


Topics

Legal and regulatory



Arda Gerkens

Speech speed: 155 words per minute
Speech length: 1705 words
Speech time: 659 seconds

IPU has established comprehensive AI tracking and policy initiatives for parliaments globally

Explanation

The Inter-Parliamentary Union has adopted a resolution on AI’s impact on democracy, human rights, and rule of law in October 2024, and launched a monthly tracker monitoring AI policy actions across 37 parliaments. These initiatives aim to coordinate parliamentary responses to AI governance challenges.


Evidence

Mentions specific IPU resolution from October 2024 ‘on the impact of AI on democracy, human rights, and the rule of law’ and monthly tracker that ‘currently already covers 37 parliaments’


Major discussion point

Global Coordination and International Cooperation


Topics

Legal and regulatory


Parliamentarians should invest extra effort in understanding AI due to its critical importance

Explanation

Given AI’s significant impact on society, parliamentarians and politicians need to put additional effort into understanding the technology before attempting to regulate it. This echoes the philosophical principle that one shouldn’t speak about topics they don’t understand.


Evidence

References philosopher Wittgenstein’s principle ‘if you don’t know what you’re talking about, you shouldn’t speak on it’ and suggests ‘as a politician, parliamentarian, this time you should put a little extra effort in understanding AI, because it’s so important’


Major discussion point

Capacity Building and Education


Topics

Development | Capacity development


Agreed with

– Axel Voss
– Amira Saber
– Rodrigo Goni Romero

Agreed on

Capacity building and education are essential for all stakeholders


AI makes it difficult for people of all ages to distinguish between real and fake content

Explanation

The challenge of identifying authentic versus AI-generated content affects not just young people but people of all generations. This represents a broader societal challenge that extends beyond generational boundaries.


Evidence

Personal reflection: ‘maybe I’m not youth anymore. Sometimes I also feel it’s hard to know what’s real and what’s not’


Major discussion point

Risks and Challenges of AI


Topics

Sociocultural | Human rights


Agreements

Agreement points

Capacity building and education are essential for all stakeholders

Speakers

– Axel Voss
– Amira Saber
– Rodrigo Goni Romero
– Arda Gerkens

Arguments

Need for better understanding among regulators who may not have sufficient technical knowledge


Continuous learning and capacity building essential for politicians, policymakers, and citizens


Parliamentarians need education on AI to effectively scrutinize government AI use across sectors


Continuous learning and capacity building essential for politicians, policymakers, and citizens


Parliamentarians should invest extra effort in understanding AI due to its critical importance


Summary

All speakers emphasized that understanding AI technology is crucial for effective governance, with particular emphasis on educating parliamentarians, policymakers, and citizens to make informed decisions about AI regulation and oversight.


Topics

Development | Capacity development


Need for flexible regulatory frameworks rather than detailed fixed laws

Speakers

– Axel Voss
– Amira Saber
– Participant
– Mounir Sorour

Arguments

Democratic legislators are too slow for technology developments, need faster, framework-based approach rather than detailed laws


Legislative impact assessment of AI bills still missing in many jurisdictions


Importance of having flexible lower-level regulations rather than frequently amending laws


AI regulation should focus on flexible frameworks rather than fixed regulations due to the open and expanding nature of AI


Summary

There is strong consensus that AI regulation should focus on creating flexible frameworks and principles rather than detailed, rigid laws that cannot adapt to rapidly evolving technology.


Topics

Legal and regulatory


Balancing innovation incentives with regulatory protection

Speakers

– Axel Voss
– Amira Saber
– Rodrigo Goni Romero
– Participant

Arguments

Regulation should focus on positive expectations from AI rather than just avoiding risks


Need to avoid hindering investment while ensuring ethical AI use, especially in healthcare, education, and agriculture


Small countries like Uruguay prioritize being open to investment while developing regulation through stakeholder consensus


Bahrain has drafted and approved first AI regulatory law balancing innovation and regulation


Summary

All speakers agreed on the critical need to balance regulatory protection with maintaining incentives for innovation and investment, particularly for smaller countries and developing economies.


Topics

Economic | Legal and regulatory


Data classification and governance as fundamental to AI regulation

Speakers

– Amira Saber
– Hossam Elgamal

Arguments

Data classification crucial as foundation for effective AI regulation


National data becomes valuable asset requiring protection and accountability measures


Need for legal liability based on data sensitivity classification


Many countries lack proper data exchange policies needed before implementing AI regulation


Summary

Both speakers emphasized that proper data classification and governance frameworks are prerequisites for effective AI regulation, with clear accountability measures based on data sensitivity levels.


Topics

Legal and regulatory | Data governance | Privacy and data protection


Similar viewpoints

Both speakers from developing countries emphasized the importance of learning from other jurisdictions while adapting solutions to their specific national contexts, favoring consensus-based approaches over rigid regulatory frameworks.

Speakers

– Amira Saber
– Rodrigo Goni Romero

Arguments

Importance of sharing experiences between countries while recognizing different contexts


Uruguay prefers slow, consensus-based approach with general legal framework rather than detailed regulation


Topics

Development | Legal and regulatory


Both speakers highlighted the serious threats posed by AI-generated fake content, though Voss focused on democratic implications while Saber emphasized personal safety risks, particularly for women.

Speakers

– Axel Voss
– Amira Saber

Arguments

Difficulty distinguishing real from fake content affects democratic processes and public trust


Deepfakes pose serious threats to individuals’ lives and careers, especially affecting women in conservative communities


Topics

Human rights | Sociocultural | Gender rights online


Both emphasized the active oversight role of parliaments beyond just creating laws, including monitoring how AI is used by government entities and protecting vulnerable populations from AI-enabled threats.

Speakers

– Amira Saber
– Participant

Arguments

Parliaments should focus on scrutinizing ministerial AI use rather than just legislation


Children vulnerable to recruitment through AI-powered gaming platforms for extremism and radicalization


Topics

Legal and regulatory | Cybersecurity | Children rights


Unexpected consensus

Youth capabilities in identifying AI-generated content

Speakers

– Axel Voss
– Yasmin Al Douri
– Arda Gerkens

Arguments

Difficulty distinguishing real from fake content affects democratic processes and public trust


Young people are often better at identifying deepfakes and misinformation than assumed


AI makes it difficult for people of all ages to distinguish between real and fake content


Explanation

While there was initial disagreement about youth capabilities, the discussion revealed an unexpected consensus that the challenge of distinguishing real from fake content affects all age groups, not just young people, suggesting a more nuanced understanding of digital literacy across generations.


Topics

Sociocultural | Human rights | Youth Engagement and Future-Proofing


Need for international coordination despite national sovereignty concerns

Speakers

– Hossam Elgamal
– Amira Saber
– Sarah Lister

Arguments

AI regulation needs international coordination as AI systems operate globally


Need for AI policy radar similar to climate policy radar to track global regulatory developments


Multi-stakeholder approach essential involving parliaments, civil society, private sector, and technical communities


Explanation

Despite speakers representing different national interests and regulatory approaches, there was unexpected consensus on the need for global coordination and information sharing, recognizing that AI governance transcends national boundaries.


Topics

Legal and regulatory | Global Coordination and International Cooperation


Overall assessment

Summary

The speakers demonstrated remarkable consensus on key foundational issues: the critical importance of capacity building and education for all stakeholders, the need for flexible regulatory frameworks rather than rigid detailed laws, the necessity of balancing innovation with protection, and the fundamental role of data governance. There was also strong agreement on the multi-stakeholder nature of AI governance and the need for international coordination.


Consensus level

High level of consensus on fundamental principles with constructive disagreement on implementation details. This suggests a mature understanding of AI governance challenges and creates a strong foundation for collaborative policy development. The consensus spans across different regions (Europe, Middle East, Latin America) and different types of stakeholders (parliamentarians, civil society, private sector), indicating broad-based agreement on core AI governance principles that could facilitate international cooperation and knowledge sharing.


Differences

Different viewpoints

Young people’s ability to distinguish real from fake content

Speakers

– Axel Voss
– Yasmin Al Douri

Arguments

Difficulty distinguishing real from fake content affects democratic processes and public trust


Young people are often better at identifying deepfakes and misinformation than assumed


Summary

Axel Voss argued that young people find it hard to differentiate what is real and fake, creating democratic risks. Yasmin Al Douri directly disagreed, stating that young people are actually better at identifying deepfakes and misinformation than assumed, and that this represents a broader issue of underestimating youth capabilities.


Topics

Human rights | Sociocultural


Speed and approach to AI regulation

Speakers

– Axel Voss
– Rodrigo Goni Romero

Arguments

Democratic legislators are too slow for technology developments, need faster, framework-based approach rather than detailed laws


Uruguay prefers slow, consensus-based approach with general legal framework rather than detailed regulation


Summary

Axel Voss advocated for faster legislative processes (three months) to keep up with technology, while Rodrigo explicitly stated Uruguay prefers to ‘go slow’ and observe what happens in larger jurisdictions before acting.


Topics

Legal and regulatory


Detailed vs. framework-based regulation

Speakers

– Axel Voss
– Amira Saber

Arguments

Democratic legislators are too slow for technology developments, need faster, framework-based approach rather than detailed laws


Legislative impact assessment of AI bills still missing in many jurisdictions


Summary

While both agree on avoiding overly detailed regulation, Axel Voss focuses on speed and framework approaches, while Amira Saber emphasizes the need for impact assessment and the challenge of constantly amending detailed regulations as technology evolves.


Topics

Legal and regulatory


Unexpected differences

Generational assumptions about digital literacy

Speakers

– Axel Voss
– Yasmin Al Douri

Arguments

Difficulty distinguishing real from fake content affects democratic processes and public trust


Young people are often better at identifying deepfakes and misinformation than assumed


Explanation

This disagreement was unexpected because it revealed a fundamental disconnect between policymaker assumptions and youth advocate perspectives. Yasmin Al Douri’s direct challenge to Axel Voss’s statement highlighted how generational assumptions can influence policy-making, which is particularly significant given that AI regulation will primarily affect younger generations.


Topics

Human rights | Sociocultural


Regulatory timing philosophy

Speakers

– Axel Voss
– Rodrigo Goni Romero

Arguments

Democratic legislators are too slow for technology developments, need faster, framework-based approach rather than detailed laws


Uruguay prefers slow, consensus-based approach with general legal framework rather than detailed regulation


Explanation

This disagreement was unexpected because both speakers represent democratic systems but had completely opposite philosophies about regulatory timing. Axel Voss argued for urgency due to technological pace, while Rodrigo advocated for deliberate slowness to observe and learn from others, revealing how country size and position can fundamentally shape regulatory philosophy.


Topics

Legal and regulatory


Overall assessment

Summary

The main areas of disagreement centered on regulatory approach and timing, youth capabilities in digital environments, and the balance between innovation and regulation. While speakers generally agreed on fundamental goals like protecting citizens while enabling innovation, they differed significantly on methods and assumptions.


Disagreement level

Moderate disagreement with significant implications. The disagreements reveal fundamental differences in regulatory philosophy, generational assumptions, and approaches to balancing innovation with protection. These differences could lead to fragmented global AI governance approaches, with some jurisdictions moving quickly with framework-based regulation while others take slower, consensus-based approaches. The generational disconnect highlighted by the youth advocate suggests that current AI regulation may not adequately reflect the realities and capabilities of those most affected by these technologies.


Takeaways

Key takeaways

AI regulation requires balancing innovation incentives with risk mitigation, particularly for investment-dependent developing countries


Data classification and governance frameworks must be established before detailed AI regulation can be effective


Parliamentarians need extensive capacity building and technical education to effectively regulate and oversee AI implementation


Democratic legislative processes are too slow for rapidly evolving AI technology, requiring framework-based rather than detailed regulatory approaches


Multi-stakeholder approaches involving parliaments, civil society, private sector, and technical communities are essential for effective AI governance


International coordination is crucial since AI operates globally, but regulatory approaches must be adapted to local contexts


Youth voices and technical expertise should be better integrated into parliamentary AI policy-making processes


Immediate harms from AI (deepfakes, weaponization, misinformation) require urgent attention alongside long-term governance frameworks


Resolutions and action items

IPU to continue monthly tracking of parliamentary AI activities across 37 parliaments and expand coverage


Parliamentarians encouraged to contact Andy Richardson to be added to IPU’s AI policy tracking list


Upcoming IPU event scheduled for November 28-30 with Parliament of Malaysia, UNDP, and Commonwealth Parliamentary Association on responsible AI


Recommendation to create an AI policy radar system similar to climate policy radar to track global regulatory developments


Need to establish appropriate regulatory bodies with authority to hold governmental entities accountable for AI use


Focus on developing flexible lower-level regulations rather than frequently amending primary legislation


Unresolved issues

Which specific regulatory bodies should implement AI regulation in countries lacking adequate digital governance authorities


How to achieve effective international coordination of AI regulation while respecting national sovereignty and different development contexts


Whether the EU AI Act’s detailed approach will negatively impact private sector investment and innovation


How to effectively measure and assess the legislative impact of AI regulations on innovation and investment


Timing and implementation details for potentially postponing high-risk AI system requirements in the EU


How to bridge the digital divide in computational power, data access, and scientific capability between developed and developing countries


Specific mechanisms for ensuring youth voices are meaningfully integrated into parliamentary AI policy processes


Suggested compromises

Adopt general legal frameworks with broad ethical principles rather than detailed technical regulations to maintain flexibility


Focus regulation on data classification and high-risk AI applications while allowing innovation in lower-risk areas


Implement consensus-based, multi-stakeholder approaches that involve all sectors in developing AI governance


Prioritize capacity building and education for parliamentarians while developing regulatory frameworks simultaneously


Create regulatory sandboxes to allow business experimentation within controlled environments


Establish framework laws with delegated authority to specialized bodies for detailed implementation rules


Balance immediate harm mitigation with long-term innovation goals through risk-based regulatory approaches


Thought provoking comments

However, whatever trials you are exerting in that track, you can’t. It’s embedded in your life. And the good thing now is that the ethical questions are being graced on every international table… There is no one safe until everyone is safe, and we have a responsibility to make it a safe space as possible, because definitely it’s going to be manipulated all the time for different purposes, for different reasons, politically and otherwise.

Speaker

Amira Saber


Reason

This comment is deeply insightful because it reframes AI regulation from a technical/legal issue to a fundamental human security issue. By invoking the COVID-19 lesson of ‘no one safe until everyone is safe,’ she connects AI governance to global solidarity and collective responsibility, moving beyond national regulatory approaches.


Impact

This comment elevated the discussion from technical regulatory details to philosophical and ethical foundations. It influenced subsequent speakers to consider the global interconnectedness of AI risks and led to questions about international coordination and the need for unified approaches across borders.


The democratic legislator is too slow for the technology developments. And so we are always behind… we need to reduce the normal behavior of a democratic legislator. So meaning there should be a kind of a solution in place after three months… It’s more a kind of a frame what we should have with ethical aspects or a kind of value-orientated frame.

Speaker

Axel Voss


Reason

This is a provocative challenge to traditional democratic processes, suggesting that democracy itself may be structurally inadequate for governing rapidly evolving technologies. It raises fundamental questions about the tension between democratic deliberation and technological urgency.


Impact

This comment sparked a crucial debate about regulatory approaches throughout the remainder of the discussion. Multiple participants referenced the framework vs. detailed regulation approach, and it led to practical suggestions about keeping laws simple while delegating flexibility to regulatory bodies.


Imagine a girl who is living in a village with certain cultural norms and she has leaked photographs of her on pornography. This might threaten her life directly. She could be killed. And actually there are incidents in many countries of girls and women who have suffered or actually have risked their lives to deepfake.

Speaker

Amira Saber


Reason

This comment is powerfully insightful because it grounds abstract AI risks in concrete, life-threatening realities, particularly for vulnerable populations. It demonstrates how AI harms intersect with existing social inequalities and cultural contexts in ways that can be fatal.


Impact

This visceral example shifted the discussion from theoretical policy considerations to urgent human rights concerns. It influenced later questions about child protection and gender-based violence, and reinforced the need for immediate regulatory action rather than prolonged deliberation.


I would even state that young people are way better at actually seeing what is deepfake and what is not and this shows a little bit the issue that we’re generally facing as a young generation. We are always deemed as not knowing specific things when we’re actually really good at specific things… how can we bring the reality of youth to parliamentarians and how can we make sure that the regulation we’re actually doing today is future-proof for generations that are coming?

Speaker

Yasmin Al Douri


Reason

This comment is thought-provoking because it directly challenges assumptions made by senior policymakers and flips the narrative about generational digital literacy. It highlights a critical gap in policymaking where those most affected by regulations have the least voice in creating them.


Impact

This intervention created a notable shift in tone, forcing Axel Voss to clarify his earlier statement and acknowledge the problem of excluding youth voices. It introduced the crucial question of intergenerational equity in AI governance and influenced the moderator to emphasize the importance of including youth perspectives.


AI is global, it is not local, and same as cyber security. Till now we are facing huge challenge in having international law of cyber security. And will be the same for AI, each country will try to start having its own regulation. But how we are going to implement it globally?

Speaker

Hossam Elgamal


Reason

This comment is insightful because it identifies a fundamental structural problem: the mismatch between global technology and national regulatory frameworks. It draws a parallel with cybersecurity to show this is a recurring challenge in digital governance.


Impact

This question exposed a critical weakness in the current regulatory approach and led to discussions about the need for international coordination. It influenced Amira Saber’s response about creating an ‘AI policy radar’ and highlighted the limitations of purely national approaches to AI governance.


What is the regulatory body that will implement? We don’t in all countries till now, all what we have is telecommunication regulatory body, which is no longer capable of handling digital society regulation. And going to AI and putting regulation for AI, who is going to implement the regulation?

Speaker

Hossam Elgamal


Reason

This comment cuts to the heart of implementation challenges, pointing out that existing regulatory infrastructure is inadequate for AI governance. It highlights the gap between creating laws and having the institutional capacity to enforce them.


Impact

This practical concern grounded the discussion in implementation realities and led to concrete discussions about creating new regulatory bodies, such as Amira Saber’s mention of Egypt’s Supreme Council on Artificial Intelligence and Axel Voss’s reference to the EU’s AI office.


Overall assessment

These key comments fundamentally shaped the discussion by introducing critical tensions and complexities that moved the conversation beyond surface-level policy discussions. Amira Saber’s comments consistently elevated the discourse to address human rights, global solidarity, and life-threatening consequences, while Axel Voss’s observation about democratic processes being too slow created a central tension that influenced much of the subsequent debate. The youth representative’s challenge to generational assumptions and the practical questions about implementation and global coordination forced participants to confront the limitations of current approaches. Together, these interventions transformed what could have been a technical policy discussion into a nuanced exploration of democracy, human rights, global governance, and intergenerational equity in the age of AI. The comments created a cascading effect where each insight built upon others, ultimately revealing AI regulation as a complex challenge requiring fundamental rethinking of governance structures, democratic processes, and international cooperation.


Follow-up questions

How can we have only one interpretation of AI Act provisions across the European single market to avoid confusion for companies?

Speaker

Axel Voss


Explanation

This is crucial for business clarity and consistent implementation across EU member states, unlike the varied interpretations seen with GDPR


Who should be held accountable when sensitive data (like hospital data) is leaked in AI systems?

Speaker

Amira Saber


Explanation

This addresses the fundamental question of liability and responsibility in AI data breaches, especially for sensitive national data


How can we balance AI innovation incentives with regulatory constraints to avoid discouraging private sector investment?

Speaker

Amira Saber


Explanation

This is essential for countries needing AI investments in healthcare, education, and agriculture while maintaining ethical standards


What regulatory body will implement AI regulations, given that current telecommunication regulatory bodies are inadequate for digital society regulation?

Speaker

Hossam Elgamal


Explanation

This addresses the institutional gap in AI governance and the need for appropriate regulatory infrastructure


How can international AI regulation be implemented globally when AI is inherently global but countries are developing separate national regulations?

Speaker

Hossam Elgamal


Explanation

This highlights the challenge of coordinating AI governance across borders, similar to ongoing challenges with international cybersecurity law


Should countries slow down AI regulation or move forward, especially when neighboring countries have different approaches?

Speaker

Ali from Bahrain Shura Council


Explanation

This addresses the strategic timing of AI regulation and whether early adoption provides advantages or disadvantages


How can we bring the reality of youth perspectives to parliamentarians and ensure AI regulation is future-proof for coming generations?

Speaker

Yasmin Al Douri


Explanation

This challenges assumptions about youth capabilities and emphasizes the need for intergenerational input in AI policymaking


Can AI help in mitigating the social impact of child abuse and gender-related inequalities?

Speaker

Remote participant


Explanation

This explores the potential positive applications of AI in addressing serious social issues


How can we assess the legislative impact of AI bills, particularly whether EU AI Act affected private sector investments?

Speaker

Amira Saber


Explanation

This addresses the need for evidence-based policy evaluation to understand the real-world effects of AI regulation


What is needed to push back against problematic narratives that ignore current AI harms while focusing only on potential benefits?

Speaker

Meredith Veit


Explanation

This addresses the need to balance AI regulation discussions with acknowledgment of existing harms requiring immediate attention


What should be the main framework to minimize AI risks while maintaining flexibility for rapid technological changes?

Speaker

Mounir Sourour


Explanation

This seeks practical guidance on creating adaptable regulatory frameworks that can evolve with technology


How can we create an AI policy radar similar to the climate policy radar to track global AI policy developments?

Speaker

Amira Saber


Explanation

This would provide parliamentarians and policymakers with comprehensive knowledge of AI policy developments worldwide


How can we verify AI-generated content and distinguish between real and fake information, including deepfakes?

Speaker

Amira Saber


Explanation

This addresses the critical challenge of content verification in an era where AI can create convincing fake content


How can democratic legislators become faster in responding to technology developments instead of always being behind?

Speaker

Axel Voss


Explanation

This addresses the fundamental challenge of democratic processes being too slow for rapid technological advancement


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.