[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems
Session at a Glance
Summary
This panel discussion focused on balancing innovation and regulation in the development of large-scale AI systems. The panelists, representing privacy advocacy, technology companies, and consulting, explored the challenges of and approaches to AI governance.
Ivana Bartoletti emphasized the importance of leveraging existing privacy and data protection laws to regulate AI, cautioning against rushing into new AI-specific legislation. She stressed that privacy by design is crucial in AI development, particularly in protecting individual rights and freedoms.
Basma Ammari from Meta highlighted their open-source approach to large language models, emphasizing the importance of fairness, transparency, and safety in AI development. She advocated for principles-based and risk-based regulation rather than stringent new laws that might stifle innovation.
Fuad Siddiqui of EY discussed the concept of an “intelligence grid” comprising connectivity, computing, and control layers. He provided examples of AI applications in agriculture and energy sectors, demonstrating how AI can drive productivity and sustainability.
The discussion touched on the debate between creating comprehensive AI acts versus updating existing laws. Panelists generally favored a more flexible, principles-based approach to regulation. They also addressed concerns about algorithm transparency, data privacy, and the need for diverse, representative data in AI development.
The role of parliamentarians in shaping AI governance was a key theme, with panelists urging lawmakers to define the risks they want to mitigate and the values they wish to protect in the AI era. The discussion concluded with calls for collaboration between the private sector and governments in addressing AI’s impact on the job market and the need for education and upskilling initiatives.
Keypoints
Major discussion points:
– The role of privacy and data protection laws in regulating AI
– Whether to create new AI-specific regulations or adapt existing laws
– The importance of risk-based and principles-based approaches to AI governance
– The need for transparency and fairness in AI systems and algorithms
– The role of the private sector in responsible AI development and addressing societal impacts
The overall purpose of the discussion was to explore how to balance innovation and regulation for large-scale AI systems, with a focus on the perspectives of private sector companies and the role of parliamentarians in crafting appropriate governance frameworks.
The tone of the discussion was largely collaborative and solution-oriented. Panelists emphasized the need for cooperation between government and industry to address AI challenges. There was a sense of urgency about the need to act, balanced with caution about over-regulating. The tone became more pointed when addressing parliamentarians directly about their responsibilities, but remained constructive overall.
Speakers
– Latifa Al Abulkarim: Moderator
– Ivana Bartoletti: Chief Privacy and AI Governance Officer at Wipro, visiting cybersecurity and privacy executive fellow at the Pamplin College of Business at Virginia Tech, co-founder of the Women Leading in AI Network
– Basma Ammari: Director of Public Policy for the MENA region at Meta
– Fuad Siddiqui: EY’s Global Innovations and Emerging Tech Leader
Additional speakers:
– Maha Abdel Nasser: Parliamentarian from Egypt with engineering background and 30+ years in ICT industry
– Silvia Dinica: Romanian senator with PhD in applied mathematics
– Ailyn Febles: Cuban parliamentarian, president of a civil society organization for technology professionals, university professor
– Mubarak Janahi: Member of the Bahraini Council
Full session report
AI Governance: Balancing Innovation and Regulation
This panel discussion explored the complex landscape of artificial intelligence (AI) governance, focusing on how to balance innovation with responsible development and regulation of large-scale AI systems. The panelists, representing diverse sectors including privacy advocacy, tech companies, and consulting, offered varied perspectives on the challenges and approaches to AI governance.
Key Themes and Arguments
1. Leveraging Existing Laws vs. Creating New AI-Specific Regulations
A central point of debate was whether to create comprehensive new AI laws or adapt existing regulatory frameworks. Ivana Bartoletti, Chief Privacy and AI Governance Officer at Wipro, strongly advocated for leveraging existing laws, particularly in the realms of privacy, consumer protection, and anti-discrimination. She argued, “Privacy regulation, consumer regulation, discrimination-related regulation, liability, all these things already apply to AI.”
In contrast, Basma Ammari, Director of Public Policy for the MENA region at Meta, emphasized a risk-based and principles-based approach to regulation. This divergence highlights the complexity of crafting effective AI governance strategies.
2. Privacy and Data Protection in AI Development
There was significant discussion on the critical importance of privacy and data protection in AI governance. Bartoletti stressed that privacy by design is crucial in AI development, particularly in protecting individual rights and freedoms. She emphatically stated, “Whoever tells you that there is a dichotomy between privacy and AI, please do not believe them.” This comment directly challenges the notion that privacy must be sacrificed for AI advancement.
Ammari echoed this sentiment, highlighting Meta’s commitment to developing AI systems with privacy, safety, fairness, and transparency in mind. She mentioned watermarking as a technique for ensuring transparency in AI-generated content.
3. Open-Source Approaches and Transparency
Ammari highlighted Meta’s open-source approach to large language models, explaining, “Meta has adopted an open source methodology with its large language model. What that means is that these large language models are made available for practically everyone to use, to build on.” This strategy, she argued, not only improves access to AI but also enhances the fairness and transparency of AI models through collaborative development.
4. The Role and Challenges of Parliamentarians in AI Governance
The discussion emphasized the crucial role of parliamentarians in shaping AI governance, while also highlighting the challenges they face. Bartoletti urged lawmakers to define the specific risks they want to mitigate and the values they wish to protect in the AI era. This call to action highlighted the need for parliamentarians to gain a deep understanding of AI technologies to govern them effectively.
Several audience members raised important questions about how parliaments can leverage AI in their own work, monitor the implementation of AI-related laws, and establish research centers focused on digital innovation and AI. These questions underscored the significant challenges parliamentarians face in understanding and legislating on AI matters.
5. AI Infrastructure and Applications
Fuad Siddiqui, EY’s Global Innovations and Emerging Tech Leader, introduced the concept of an “intelligence grid” comprising connectivity, computing, and control layers. He explained, “As you have built electricity networks, you have an electricity grid, you would be building an intelligence grid.” This analogy helped frame AI development in terms of large-scale infrastructure, leading to considerations of national strategies and public-private partnerships.
Siddiqui provided specific examples of AI applications in agriculture and energy sectors. In agriculture, he mentioned AI’s role in optimizing crop yields, reducing water usage, and improving pest control. For the energy sector, he discussed AI’s potential in optimizing energy distribution and facilitating the transition to renewable energy sources.
6. Addressing Societal Impacts of AI
The discussion touched on the broader societal impacts of AI, particularly its effect on the job market. Panelists agreed on the need for collaboration between the private sector and governments to address workforce impacts through education and upskilling initiatives. Ammari provided an example of Meta’s partnership with Tuwaiq Academy in Saudi Arabia for AI education.
7. Digitizing National Archives
Ammari highlighted the importance of digitizing national archives (with appropriate privacy protections) to improve AI training data. This suggestion emphasized the role of governments in providing high-quality, diverse data sets for AI development.
Unresolved Issues and Future Directions
Despite the productive discussion, several key issues remained unresolved:
1. How to effectively regulate AI algorithms without stifling innovation
2. Addressing cross-border issues in AI governance
3. Finding the appropriate balance between framework/principles-based approaches versus strict AI laws
4. Ensuring AI systems remain fair and unbiased over time
The panel suggested several action items and areas for further exploration:
1. Parliamentarians should focus on understanding AI to govern it effectively
2. Governments should consider digitizing national archives to improve AI training data
3. More research is needed on how to monitor and validate AI systems long-term
4. Exploring public-private collaborations to balance innovation and regulation
5. Creating a future foresight council for technology assessment, as suggested by Siddiqui
Conclusion
The discussion revealed divergent views on AI governance approaches, particularly regarding regulation strategies. However, there was general agreement on the importance of privacy, data protection, and responsible development in AI.
The dialogue highlighted the complexity of AI governance and emphasized the necessity of ongoing collaboration between government, industry, and civil society to develop effective AI policies that balance innovation with responsible development and use. The challenges faced by parliamentarians in understanding and legislating on AI underscore the need for continued education and engagement on these issues.
Session Transcript
Latifa Al Abulkarim: Assalamu alaikum and good morning ladies and gentlemen on the second day of the parliamentary track, and a very warm welcome again to Riyadh. Though the weather is cooler than usual, I’m sure that this session’s conversation will warm us up with its valuable insights and information, especially as we are having it with the parliamentarians, maybe I would say core stakeholders, together with the private sector. So today we are going to discuss researching at the frontier, and learn more about how to balance innovation and regulation in practice while developing large-scale AI systems. Please join me in welcoming our esteemed panelist Ivana Bartoletti, Chief Privacy and AI Governance Officer at Wipro. I will give a quick bio about Ivana. She’s a privacy and data protection professional and a visiting cybersecurity and privacy executive fellow at the Pamplin College of Business at Virginia Tech. She helps global organizations with their privacy by design programs and the privacy and ethical challenges relating to AI and big data. She’s also the co-founder of the Women Leading in AI Network, a lobby group of women from different backgrounds that aims to mobilize the tech industry and politics to set clear governance of AI. Next we have Basma Ammari, the Director of Public Policy for the MENA region at Meta. She leads a team that focuses on tech regulations and policies, promotes platform integrity, and supports the innovation ecosystem. By background, Basma is an international development and public policy professional with 20 years of experience, having worked at the World Bank in Washington, D.C. and in Africa and MENA, as well as at social impact organizations and governments in these regions. Basma has also worked across several sectors and contexts, from education to health and community development, and across several countries, including in conflict and post-conflict zones in West and East Africa and MENA. Prior to Meta, Basma served at the Prime Minister’s Office of the UAE as an Advisor in Strategy and Innovation. She has a Bachelor in Economics and Finance and holds an MBA degree. Last but not least, we have Fuad Siddiqui, EY’s Global Innovations and Emerging Tech Leader. As the EY Global Consulting Innovation and Emerging Tech Leader, Fuad helps clients unlock new value through techno-economic foresight, challenges established thinking, and advocates for inclusive and sustainable growth models. He brings more than 20 years of experience spanning international markets and advising clients on diversification strategies and how to win by capitalizing on the next technological evolution. Thanks so much all for joining us, and let me explain that the main goal of this session is to open this channel between the parliamentarians and the private sector: to hear from our esteemed panellists how these companies are designing AI systems or LLMs to enhance productivity without compromising any ethical standards, and what exactly their views are when it comes to AI regulation. Are they in favor of soft self-regulation? And what exactly do we mean by sandboxes, and how is it related to the main regulations that we are doing at the parliaments? Are you among those companies that are always coming and saying to the parliaments, please regulate the market? Or do you have your own strategy or gradual thinking about how new technologies, and digital technologies in general, should be regulated? And what are the safety and social impacts of LLMs?
And how to mitigate the different types of risks? As we know, risk is not only categorized as low and high; there are also geopolitical risks and further risks. So I will start with you, Ivana: please explain to us what privacy by design looks like in practice, and how companies can embed it within the AI development life cycle.
Ivana Bartoletti: Thank you so much. And it’s absolutely great to be here. I just wanted to start by saying that I think this is a really, really important session, because all around the world, politicians like yourselves are grappling with what AI is and whether it needs ad hoc regulation or not. As somebody who has grown up within the privacy field, let me start by saying that privacy plays a huge role when it comes to artificial intelligence. And I want you to understand that a lot of countries around the world at the moment have been creating privacy and data protection regulation, Saudi Arabia, for example. This is very important because one of the risks related to AI is really about the rights and freedoms of data subjects, of individuals, and the fact that individuals’ data needs to be protected and secured when it comes to artificial intelligence. My first encouragement to parliamentarians is to not jump into this idea that we have to regulate AI. This is because it’s really important that in your countries, you look at how AI is governed and regulated right now. Privacy regulation, consumer regulation, discrimination-related regulation, liability, all these things already apply to AI. To parliamentarians, I wanted to say, don’t think that AI exists in isolation. It does not. Already a lot of the existing legislation that we have across different countries applies to artificial intelligence. AI is not an excuse to say, well, we don’t care about existing regulation, we’re going to create new ones. So first of all, make sure that we do not jump into regulation like this. Privacy is important because a lot of the harms that we discussed, for example, in the opening session yesterday, will affect individuals. And this is important because when we talk, for example, about harms that come from AI, so for example, if you use algorithms to make decisions, or if you train large language systems by taking data that comes from all around the web, what you’re talking about is people, okay? And therefore, privacy legislation is important because it will protect a lot of individuals and it will force organizations to, as much as possible, build privacy, security, and legal protection by design into what they do. Now, governance comes on many different levels. Governance comes from companies. So we, as organizations, we have to do all we can to be responsible. So you, as parliamentarians, you’re in command. You have to say, companies, you have to be responsible for what you’re doing. Show us, show us the best practice, right? Then there is regulation and governance that come from states and governments, and then there is the international sort of governance, for example, that we are building here at the Internet Governance Forum. On companies, privacy by design means that you say to organizations, and this is why your privacy laws are important, privacy is not an afterthought. So if you’re using AI to recruit individuals, for example, to say, I’m going to hire this person, I’m going to promote this person, I’m going to give this person housing, I am going to decide whether this person goes to jail or not.
Whatever you’re using AI for, you have to make sure that you know what data you’re using, you know the data is accurate, you know that you’re not discriminating against certain individuals because you haven’t done enough due diligence on the data and all the other possible sources of bias, and you’re transparent. And you have to say to companies, well actually, and I come from AI governance for a company, there is no excuse, you have to be transparent in a meaningful way, and demand this of companies. So for example, if you think about the European AI Act, which a lot of people criticise, it doesn’t really add much. It’s the legislation in Europe around AI, and it only says, before you market a product, you have to demonstrate that you have done your due diligence, including privacy by design and security by design. It’s not that it comes up with other requirements; it just says, before you hit the market, that’s what you need to do. So, just to conclude, privacy by design is important. There are a lot of challenges in privacy in AI, obviously, because, just to be clear, discriminative AI, which is sort of the machine learning we have known so far, is different from generative AI, from LLMs. So we’re talking about different things, and doing privacy in one area is very different from doing it in LLMs, where, for example, it’s difficult to say what privacy by design in LLMs really means. So there are a lot of things that we need to unpack in this, but please leverage your privacy legislation and data protection legislation to ensure that the data of your citizens and the people living in your countries is safeguarded in AI.
Latifa Al Abulkarim: Thank you so much. I have a lot of questions, but I’m trying not to ask them for now, to keep them for later. So, you are recommending that we need to focus on privacy laws, personal data protection laws, and security guidelines, for example, and think about existing laws and whether they may need amendments somehow. So this is something that we also need to think about, along with the oversight of the parliamentarians over private sector work. We want to make sure what the private companies are doing: are they following a certain governance structure or framework? Do they need to improve that framework somehow? So this is your recommendation. However, what’s happening now? We have the AI Act. And, for example, I would say the Chinese approach: they started, yeah, exactly, with different gradual laws, and now they are trying to merge them into one AI act. So I don’t know if this is kind of the end of the journey, for several laws to become one combined law at the end. Or do we still need the Canadian approach? I think they are following your approach of making amendments to some of the existing laws. That’s very interesting. Maybe we’ll come to Basma now. Ivana has mentioned why, and we want to know what from your side: how should companies developing AI systems address arising risks and mitigate any misuse of the technology? And do you use any human-centric design when it comes to, for example, Llama, Meta’s model, as one of those elements? Thank you. Good morning. Sabah al khair.
Basma Ammari: That’s a very good question. I think it’s a natural continuation of what my friend Ivana was just speaking about. One thing to note before I start speaking is that AI has been there for a very long time. AI is what has been underpinning the tools that Meta uses. So what you see and what you’ve been seeing on your Instagram or Facebook accounts since the early days is powered by AI. The content that you see, the recommendation of content, is also underpinned by AI. So that’s sort of the first point. But as we move into generative AI, Meta has adopted an open source methodology with its large language model. What that means is that these large language models are made available for practically everyone to use and to build on. And why we do that, why Meta decided to do that, is not completely altruistic. It’s really to, one, improve access to AI, but also to make these models better. And what I mean by better is that the more people, the more experts, the more developers are building on these large language models, the more you are helping us strip biases, societal biases, for example, from being adopted by the AI. By getting more people around the world, and more diverse people from around the world, feeding into these large language models, you are supporting them in becoming fairer, more transparent, and more representative. So when we think about risks, what we’re really asking are the very difficult questions around ethics, the ethics of AI, and the responsible development of AI. We focus this on four core areas. One is privacy, which Ivana covered extensively, but privacy is a very important one. AI models are built on data sets. We need to ensure that these data sets respect privacy and privacy measures. The second one is a focus on safety. These large language models do not become available at the minute of their development or invention. They go through iterations, several of them, and guardrails are built into these large language models to ensure that they are safe for use and that they do not contain any dangerous information. And Meta has an agreement with the National Safety Institute, with the US government, so nothing comes out before it goes through these safety checks. The other one is fairness. Fairness is very, very important because, again, AI is built on data sets. If the data sets are only coming from the global north, it means that this part of the world, and me as an Arab, my culture and my history, is not being reflected in the AI. Even if we’re asking the AI social questions or political questions, it can be one-sided. So how do Meta and other companies ensure that these models are as fair and representative as possible? We do that, one, by making them open source, but two, by engaging a large group of experts within the company and outside of the company in testing the AI, frequently and before releasing it, to ensure that in all the languages, sorry, and in all the countries where it becomes available, it is representative. And the last one is transparency, which is also very important. How do we ensure that these models and whatever they’re producing are transparent? We have techniques such as watermarking, which will help protect against deepfakes and brand impersonations and so on. I can keep on going, but the summary of all of this is that, one, it’s in the design that we monitor for risks.
It’s not necessarily through going after stringent and inflexible new regulations. Regulations already exist, so how do we ensure that these regulations cover AI in one way or another without stifling innovation? That’s one. And then two is following a principles-based approach and a use-case approach. So let’s regulate in a risk-based manner rather than regulate the AI itself, because the AI is out of the box. And if we decide to go for full-fledged regulation, we might find ourselves falling behind as nations in promoting innovation. Thank you.
Latifa Al Abulkarim: Thank you so much, Basma. So you are promoting here risk-based regulation, use-case-based regulation, and principles. So, yeah. If I want to compare principles and risks in terms of regulation, which one do you think is the best approach? We have had this discussion for a long time with different regulators. Shall we, for example, follow the AI Act approach as risk-based regulation, or principles-based regulation?
Basma Ammari: I would say principles-based regulation, and through partnerships also between private sector industry and the public sector. But I also think that they go hand in hand: if you’re designing with the right principles, you’re mitigating against the risk. It’s just that I think principles are much more flexible, whereas risk-based regulation is going to be very detailed. Well, this is one of the areas of discussion, I would say, that is always on the table when it comes to AI regulation.
Latifa Al Abulkarim: Very interesting. So, Fuad, your bio inspired a lot of questions related to the relationship between EY and the clients that you are working with. As the private sector is a core driver of entrepreneurship and innovation, how has the development and implementation of AI addressed social needs in various parts of the world? If you have examples from different sectors and different domains, that would be very helpful.
Fuad Siddiqui: Thank you. Good morning. Yeah, I’m delighted to be here. And it’s always great to be back in Saudi, seeing all the greatest innovations happening and pushing the limits of AI systems innovation here as well. So that’s fantastic. And to my esteemed colleagues and parliamentarians, I would say that I see you as the future technologists. You are almost not politicians anymore. I think as we go into the next decades, you all will be the future technology leaders driving the future of your countries. Just one thing before I give some examples of what sectors are driving certain innovations: one piece of level setting that’s really important is to understand that when we talk about AI, AI by itself is not just a technology. It’s a combination and intersection of a number of technologies that have to work together hand in hand to deliver the business outcomes. So, just as you have built electricity networks and have an electricity grid, you would be building an intelligence grid. That intelligence grid, in my opinion, comprises three Cs. One is what I call the basic connectivity layer: how do you move data around? You have built 5G networks, and you’re getting into 6G networks, and you have space technologies, satellites and all. So movement of the data in a secure way, in a high-performance way, in a low-latency way will be critical. Then you have a computing layer, the computing infrastructure where you would be housing your data and federating data. Then you have a control layer around the software systems and AI systems that are embedded within that. That three-layered structure of connectivity, computing and control is what I call an intelligence grid. And for nations, in order for them to protect their sovereignty and data, and to innovate and drive new investments in your setup, I think that infrastructure management will be very important. Now, what I’m seeing is that whether it’s manufacturing, agriculture, or healthcare, a flavor or a permutation of the three C model is being implemented. Let me give you a couple of examples. We’ve been working with a large pharmaceutical biotechnology company, Bayer. You may have heard of the name. They have a unit called the Bayer Crop Science unit. They have a long history of developing a lot of insights and feeding agro-economics advisors, who then in turn go to the farmers to help them understand how they should act, what type of crop understanding they need to have to drive better yields, et cetera. Now, what has happened traditionally is that developing the synthesis of insights on very specific needs for a specific crop type took a lot of time, but they built a library of knowledge. Now with GenAI coming in, and working with a partner, Microsoft, on the cloud layer, we are trying to democratize this whole piece. We are developing agentic systems around this in order to make sure that the agro-economic role becomes much more ubiquitous and much faster, so that the knowledge that was residing there, in terms of what this crop needs to drive better precision growth, what kind of nutrient levels and what kind of water levels I need to have, can now be disseminated in a much more ubiquitous and democratic fashion. So we are very proud of driving that, because beyond the business perspective, we’re now seeing a whole effect across the value chain in that context.
So just one more example, I think, I would like to give. We talked a little bit about the privacy and consumer side of things, but when you look at nations’ GDP growth and where some of the workforce is employed, even in Saudi and the UAE and other places, the energy sector is very important as well. So there’s a client that I’m working with who has now instituted a program around the digitalization of their oil wells. And that’s really important because, if you understand the oil and gas sector, it’s a very geo-diverse sector. You have oil wells in remote locations, and any fluctuation in the conditions of the well will have a dramatic impact on the production capacity. So what this process is now doing, with AI systems and this intelligence grid model, is that you’ve digitalized the well, sensorized the well, put AI systems on the well, and then you basically use AI algorithms in a cloud setting to monitor and control those wells. What it has done is two things: it has not only improved the production capacity and reduced the disruption that happens from any instability in the environment, it has also reduced the need to go out and do remote site visits, which has reduced the emissions and sustainability impact. So you see that it’s a chain of cross technologies implemented in such a way that it can drive critical infrastructure to drive better productivity, safety, and efficiency. That’s the whole notion of resilient economies on the back of this intelligence grid, if you will.
Latifa Al Abulkarim: Thank you. Thanks so much. A lot of interesting work here, and I’m sure that you are working on convincing your clients of these benefits from different perspectives, either economic or social. I will open the floor now. I don’t know how many, yeah, just quickly, I don’t know if Celine is here. No? Okay. We have then, I will open the floor for questions. We’ll start with Sahar and then we’ll go back here. We’ll finish from this side. Please. Where’s the mic? Yeah. We have a good number of questions. So, how many minutes are left? Can we just have 15? Okay, good.
Audience: Thank you very much for this insightful session. My name is Maha Abdel Nasser. I’m a parliamentarian from Egypt, and at the same time I have an engineering background and more than 30 years in the ICT industry. So I have two hats, the parliamentarian hat and the expert hat. And actually, we now have this debate in Egypt about the legislation, about having an AI act or just a framework. And I’ve been talking to the minister himself. He wants an act, and they want a framework. And in the industry, they want it to be just a framework or regulation because, of course, making any changes to an act takes a very long time and the technology is moving extremely fast. So we still haven’t settled this, but I think what you’re saying is extremely right: we need to work on the legislation around data, because if we could make good legislation for classifying data and the free flow of data and all these things, this would help the AI. We wouldn’t need to do anything else, maybe just the ethics for the AI. The main question for me is privacy. I think, and I’ve been in a roundtable with people from Meta, that we will have to sacrifice our privacy for the sake of AI in the future and to leverage all the benefits from AI. So will this be the case or not?
Latifa Al Abulkarim: Thank you so much. So who’s next? Please, because I will come back to you here.
Audience: Thank you, Dr. Latifa, and thank you to all the speakers for very informative talks. Two quick points. First, about the act: I think all parliaments in the world are now working on this somehow, and there are debates and there will still be some debates. The question would be, do you think, from your experience and from what you see, that it is enough to have, as you said, some general regulations and not an act, especially for AI? As you mentioned, Mr. Fuad, it will intervene in all aspects of our lives, I mean, by the day now, by the minute. I also like what Ms. Ivana said about enforcing the companies, maybe, to say, okay, show me what you have, show me your regulations, so at least I can follow up with you before you launch your products. The final point would be, and I think this is the most important thing in AI, we are almost all programmers, I’m from an IT, computer science perspective: it is the algorithm. The algorithm, we know, big companies and all companies, they have the brain. The most important thing is not only the technology, it is the algorithm. So how can we enforce or maintain or make sure this algorithm will support, I don’t know, will support privacy, will not support discrimination against race, religion, ethnicity, whatever? Again, all the talk is not really about the algorithms. And I think companies will not say, okay, I’ll show you my gold. My gold is the algorithm. Thank you very much.
Latifa Al Abulkarim: Thanks so much. I will go to this side. I will have more questions, then I will come back here. Can you please be concise and keep it to one minute, not more. Where’s the mic? Oh, please, yeah.
Audience: Okay, thank you very much. Radical change in technology. As I see it at the IGF, the Internet Governance Forum, and in what we’ve been talking about from yesterday to today, always we’re talking about AI. AI is the game, AI is the name. And of course, as you mentioned about the oil and all that, in 1981 I did my master’s degree on controlling the moisture in the natural gas coming from the well. We used the microprocessor at that time and control systems. So AI has been there. Now, the thing is, we’re talking here about parliaments and how parliaments, or Majalis Shura, can benefit from this technology. As you know, governments, or the executive part of the state, you find that they are advanced in adopting technologies, while in parliaments you find that they are still legislating and writing laws. But how do you see the laws that you pass? How are they executed in the government? For rules and regulations, if you want to monitor or see the performance of these laws on the ground, then you need to use some sort of AI to advance in taking what is called the what-if decision, because you have to see if your laws are doing the right things or not. I think we need some sort of a roadmap and also a proposal of a model that parliaments can adopt to know how to deal with the government activities. Thank you very much.
Latifa Al Abulkarim: Thank you.
Audience: Thank you very much for organizing this discussion. My name is Silvia Dinica. I’m a Romanian senator, but I also have a PhD in applied mathematics. One of you said earlier that these models have been around for quite some time, but to be honest, with my experience in the parliament, I would say that parliamentarians have very difficult homework ahead dealing with AI models, also because the impact of these models is quite huge across a lot of layers of day-by-day life, and they have to deal with it. They have to put out a framework that is fair, is inclusive, and doesn’t leave anyone behind. And it’s not quite like anything they’ve seen before, because most of the know-how is outside of the parliament, and we need to bring it inside and put it in the hands of the legislator. So my question for you is, how do you see the involvement of the private sector, taking into account the effects on the job market, on education, all the effects of artificial intelligence? How do you see the involvement of the private sector in such a way that we are all doing well as a society? Thank you.
Latifa Al Abulkarim: We have one here. And this is the last question. Celine. I wish that we had more time, and we’ll come back to you. I didn’t forget you.
Audience: Good morning, I’m Ailyn Febles, a Cuban parliamentarian. I’m also president of a civil society organization that brings together professionals in the technology sector, and I’m a university professor. So the areas I’m interested in converge a little in our case. In Cuba, we don’t have a law on artificial intelligence. We prefer to work first on a strategy for the development of artificial intelligence and then regulate artificial intelligence. We have a law for the protection of personal data, but I add to the previous parliamentarian’s question: what experience do we have in making this happen? Because what is most difficult for us is not to legislate. We legislate, especially on technological issues, but to make it happen. Secondly, is there any experience in the use of AI to legislate? I think the support that artificial intelligence can give to parliamentarians, to legislators, when it comes to decision-making on the laws that we approve on other issues would be very interesting. But, well, we can start in particular with that law on artificial intelligence. What lessons have been learned, and is there any lesson or any experience on this topic? Of course, the barriers always affect us, because artificial intelligence works with data, and the data has to be captured. Those of us who do not have access to everything, or who face barriers in access, cannot provide the data on which artificial intelligence feeds to be able to offer its answers. But there is also a gap in processing capacity. And here private companies can contribute a lot to those of us who have that gap in processing capacity, so that we can have our own data, national data, data that we have collected, normalized, and standardized, but that we do not have the processing capacity to use for the good of our citizens. So, it’s like three questions in one, but along those three lines, basically.

Peace be upon you. I am Mubarak Janahi from the Bahraini Council. I have a question: is there a direction to establish parliamentary research centers dedicated to the development and encouragement of digital innovation and artificial intelligence in cooperation with the private sector? Thank you so much. Thank you.
Latifa Al Abulkarim: Okay, so I’m trying to cluster those questions somehow. We have the question regarding framework versus act, I think it’s almost the same question from Maha and Salih, along with regulating the algorithm itself, or how we can know more about the algorithm. There’s another question from Gwana related to the same thing, that an AI act is a good start in terms of regulation, but how can we move from having drafts into enforcement, legal enforcement? These are, I would say, almost the same type of questions. So who would like to start first? Ivana, maybe?
Ivana Bartoletti: Yeah. So thank you. Excellent questions. I wanted to just start with a provocation. I mean, you are the parliamentarians, you make the rules. But you are really faster than the parliamentarians. No, I’m not. Let me finish. But you make the rules, and it’s important, because AI is great, we’ve seen it, all the things that we talked about. But there are also risks. The risks to privacy, security, disinformation, all of that. Now, I always say there’s a good AI and we have seen a bad AI. You need to make sure that in your countries, you do all you can to stop the bad AI. Because otherwise people will say, well, actually, I’m not going to trust this. I’m not going to use it. I’m not going to do that. Okay, first point. And this is in response to the Romanian parliamentarian, or senator. Of course, it’s difficult, because a lot of the know-how is not in the parliament, but hold on a second. Hold on a second. AI is not just technology. It involves data, the way that we see the world, and that’s your job. That’s your job. The way you want to be in 10, 20, 30 years, that’s your job. Okay? And I’m saying this because it’s really important for the future that the decision about where AI is going belongs to those who govern these countries. Now, what does the private sector do? We can work with you. You can consult. You can ask. We can simplify things and give you the technical know-how. But ultimately, what I’m trying to say here is that I think it’s fair to say that a lot of private sector organisations are saying to governments, you know, the ball is in your court on this. That’s important. But I wanted to say one thing. On privacy, for example, whoever tells you that there is a dichotomy between privacy and AI, please do not believe them. Do not. You can ask companies to, well, enforce privacy. Okay? And on a lot of things, we need more research. You can address where the research needs to go. How we can interrogate algorithms without actually accessing them is research. We can invest, and you can decide where you want a lot of the research to go. And it’s important that we invest in research on issues such as: how do we keep monitoring algorithms? How do we validate them 10 years down the line? How do we make sure that we control them? How do we leverage AI itself to do a lot of this work? So where you want to go and where you want the research to go is important. Now, the European AI Act, to me, and I’m a European and I’m someone who’s been involved, is a good step. It’s not perfect. It’s not perfect by any means. But it is based on the risk. And how do you define the risks? You define them. In the European AI Act, the risks are defined as safety, based on the product legislation that we have in Europe, and AI that may infringe upon the rights that we share as Europeans. Okay, that’s Europe. You define what the risks are. And whether you enact new laws, or whether you say, I will update existing laws, copyright, for example, privacy, consumer, whether you update what you have or enact a new law, it’s the mindset that you need to change. The mindset is: these are the risks I see. This is what I want to protect. What is it that you, in your countries, want to protect in the age of AI? So it’s the other way around. And I encourage MPs to think the other way around.
Latifa Al Abulkarim: Thank you. Thanks so much. Well, there is somehow a cross-border element. This is where maybe they are quite worried: when we are importing some technologies, and with the technology that we are using, where is the line? Maybe I would consider those risks that are national risks, but I also have to consider those risks that I didn’t choose, but that are there. So I know it’s a very interesting discussion. Basma, your points: there are two topics, mainly about the same thing, the legislation and the different matters and directions, and innovation. I will leave the innovation side to you, Fuad, about when we need use cases for the parliamentarians, AI use cases and innovation centers to help them.
Basma Ammari: Yeah, I mean, I’m not going to touch upon, sorry, I’m not going to touch upon the same issues that Ivana covered. But I heard, I think, two questions. One was about the algorithm and what we do with the algorithm to ensure that it’s not adopting our existing biases in society. And there are plenty of them. One of the godfathers of AI, who is a professor at NYU but is also an advisor at Meta, Yann LeCun, one of the things he advocates for, and he encourages governments around the world to do, is to digitize your national archives. Strip them of private information, so no names, no ages, all of that. But even that information, even if it stays, goes through privacy checks before it’s used for any AI to begin with, at least speaking for Meta. So one thing is digitizing national archives, which guarantees that local languages are out there, so local language, local culture, music, history, and so on. Making that available in a digital form turns it into information that the AI can feed on. And in practice, this then makes the AI more representative, as I said earlier. So that’s one thing.
Latifa Al Abulkarim: And I’m trying to think of the question from Romania, about how we can ensure that the models, and something similar for the algorithms, are fair and inclusive, and the private sector’s role in terms of the market and labor, right? Yeah, the market and labor. In terms of the workforce.
Basma Ammari: I mean, every time there’s a tech revolution, historically, we do see, you know, a loss of jobs, but then the creation of new jobs. And will no jobs be lost? No, some jobs are being lost. That’s the reality. And this is a technological revolution; we are in the middle of it. So we have a responsibility, as industry and as governments, to come together to really upskill and integrate and innovate around our stagnant education systems. One example here, actually, from Saudi Arabia: Meta opened up an academy in partnership with Tuwaiq Academy to upskill the upcoming generation in tools for AI and tools for the metaverse. We graduated the first cohort last year, and we’re graduating about 1,000 students in the AI curriculum this year. So this is, I think, our collective responsibility. And yes, industry has a big, big role to play here.
Latifa Al Abulkarim: Thank you so much, Basma. One minute, please, Fuad: you have a new task from the parliamentarians to EY, to collaborate together and try to find new use cases to help parliamentarians use AI, for example to summarize legislation and identify the gaps, and so on.
Fuad Siddiqui: So one thing, just to sum up in a minute. I fully understand the complexity of your jobs and what is at stake here. And I don’t think the government alone has to drive it; the private sector is equally responsible. I have a lot of friends in the industry, and one private sector leader in the US told me that when we develop a solution for a particular thing, we almost treat it as a hammer, so everything we see is a nail, right? We’re trying to drive a hammer at that problem. Now, the reason I gave the example of the intelligence grid is that you as parliamentarians have to think about who your trusted ecosystem will be and use them to drive an understanding of how that cross-pollination of knowledge will happen. I’ll give you one concrete example. We’re working with a government at the moment, and what they have done very well is they’ve developed something called the concept of a future technology observatory. And what they’ve asked us, or some of the consulting firms, to do is to help them understand what’s coming down the pipe, but also develop a model: if something happens around agentic systems or autonomous AI systems, what will that do to the different government entities and others? So we’re developing something called a future tech index to understand the inception of that technology along a few dimensions, around security, around regulation, around ecosystem impacts, and so on and so forth. And then we use that as a basis to test it with the recipient entity, to see what the maturity level is and how we can work together. What that does is give you a concrete roadmap; then you are in a better position to drive the discussion with the private sector or the specific companies that are giving you that technology. So the bottom line is, it’s almost creating a future foresight council, which drives the mandate not just to show me how this particular new thing is going to work, but how it works in association with the others. I’ll give you an example. If you take a medicine and you have a side reaction, you don’t know how it’s interacting with something else, right? This is the same issue. If somebody proves to you that their system works, it’s not enough until you see how the ecosystem works, right? And that’s really the important thing. So I’ll stop there.
Latifa Al Abulkarim: Thank you so much. I remember a quote from a friend. He said, we know how to build it, but we don’t know how to use it. So thank you so much, everyone, for joining us in this very intense, I would say, discussion, and we are looking forward to more collaboration between the private sector and the different parliaments that are here. Thank you so much. And please, I would like to welcome the new panellists and moderator to the stage. Thank you.
Ivana Bartoletti
Speech speed
131 words per minute
Speech length
1479 words
Speech time
676 seconds
Existing laws already apply to AI, new specific AI laws may not be needed
Explanation
Ivana argues that many existing laws and regulations already apply to AI, such as privacy, consumer protection, and discrimination laws. She suggests that countries should first examine how AI is currently governed before creating new AI-specific legislation.
Evidence
Ivana mentions privacy regulation, consumer regulation, discrimination-related regulation, and liability laws as examples of existing legislation that applies to AI.
Major Discussion Point
AI Regulation Approaches
Agreed with
Basma Ammari
Agreed on
Existing laws and regulations are relevant to AI governance
Differed with
Basma Ammari
Differed on
Approach to AI regulation
Privacy and data protection laws are important for governing AI
Explanation
Ivana emphasizes the importance of privacy and data protection laws in governing AI. She argues that these laws protect individuals’ rights and force organizations to build privacy, security, and legal protections into their AI systems.
Evidence
She mentions that privacy legislation is crucial because many AI-related harms affect individuals, and privacy laws can protect citizens’ data in AI applications.
Major Discussion Point
AI Regulation Approaches
Agreed with
Basma Ammari
Agreed on
Privacy and data protection are crucial in AI governance
Parliamentarians need to define risks and protections for AI in their countries
Explanation
Ivana encourages parliamentarians to think about what they want to protect in the age of AI in their countries. She argues that it’s the responsibility of lawmakers to define the risks and determine what protections are needed.
Evidence
She uses the example of the European AI Act, which defines risks based on product safety legislation and potential infringement of shared European rights.
Major Discussion Point
Role of Parliamentarians in AI Governance
Monitoring long-term impacts of AI systems requires ongoing research
Explanation
Ivana emphasizes the need for ongoing research to monitor and validate AI systems over time. She argues that investment in research is crucial to understand how to control and monitor AI systems in the long term.
Evidence
She suggests areas for research including how to keep monitoring algorithms, how to validate them years down the line, and how to ensure control over AI systems.
Major Discussion Point
Challenges in AI Governance
Basma Ammari
Speech speed
131 words per minute
Speech length
1147 words
Speech time
522 seconds
Open source AI models help improve fairness and transparency
Explanation
Basma argues that making AI models open source allows more people to use and build on them, which helps improve access to AI. She states that this approach can help strip biases from AI models and make them fairer and more transparent.
Evidence
Basma mentions Meta’s approach of making their large language models available for everyone to use and build upon.
Major Discussion Point
AI Development and Implementation
AI systems should be designed with privacy, safety, fairness and transparency in mind
Explanation
Basma emphasizes that AI systems should be designed with key principles in mind, including privacy, safety, fairness, and transparency. She argues that these principles should be built into AI models from the start.
Evidence
She mentions Meta’s focus on privacy in data sets, safety checks before releasing AI models, efforts to ensure fairness and representativeness, and techniques like watermarking for transparency.
Major Discussion Point
AI Development and Implementation
Agreed with
Ivana Bartoletti
Agreed on
Privacy and data protection are crucial in AI governance
Risk-based and principles-based regulation is preferable to strict AI-specific laws
Explanation
Basma advocates for a risk-based and principles-based approach to AI regulation, rather than strict AI-specific laws. She argues that this approach is more flexible and allows for innovation while still addressing potential risks.
Evidence
She suggests regulating in a risk-based manner rather than regulating AI itself, to avoid falling behind in promoting innovation.
Major Discussion Point
AI Regulation Approaches
Agreed with
Ivana Bartoletti
Agreed on
Existing laws and regulations are relevant to AI governance
Differed with
Ivana Bartoletti
Differed on
Approach to AI regulation
Private sector and government partnerships are needed to address AI’s workforce impacts
Explanation
Basma acknowledges that AI will lead to job losses but also create new jobs. She argues that industry and governments have a shared responsibility to upskill workers and innovate around education systems to address these changes.
Evidence
She provides an example of Meta’s partnership with Tuwaiq Academy in Saudi Arabia to upskill the upcoming generation in AI and metaverse tools.
Major Discussion Point
AI Development and Implementation
Governments should digitize national archives to improve AI training data
Explanation
Basma suggests that governments should digitize their national archives to provide better training data for AI systems. She argues that this would help make AI more representative of local languages, cultures, and histories.
Evidence
She mentions advice from Yann LeCun, a professor at NYU and advisor at Meta, who advocates for this approach.
Major Discussion Point
Role of Parliamentarians in AI Governance
Fuad Siddiqui
Speech speed
169 words per minute
Speech length
1312 words
Speech time
465 seconds
AI is being applied to improve productivity in sectors like agriculture and energy
Explanation
Fuad discusses how AI is being implemented in various sectors to improve productivity and efficiency. He emphasizes that AI is not just a single technology but a combination of technologies working together.
Evidence
He provides examples of AI applications in pharmaceutical biotechnology (Bayer Crop Science) and the digitalization of oil wells in the energy sector.
Major Discussion Point
AI Development and Implementation
Audience
Speech speed
144 words per minute
Speech length
1367 words
Speech time
566 seconds
Parliamentarians need to understand AI to effectively govern it
Explanation
Audience members express concern about the complexity of AI and the need for parliamentarians to have sufficient understanding to govern it effectively. They highlight the challenge of legislating on technological issues when much of the expertise lies outside of parliament.
Major Discussion Point
Role of Parliamentarians in AI Governance
There’s a need for AI research centers to support parliamentarians
Explanation
An audience member suggests the establishment of parliamentary research centers dedicated to digital innovation and AI. These centers would work in cooperation with the private sector to support parliamentarians in understanding and governing AI.
Major Discussion Point
Role of Parliamentarians in AI Governance
Balancing innovation and regulation is difficult with rapidly changing AI technology
Explanation
Audience members discuss the challenge of creating appropriate regulations for AI given how quickly the technology is evolving. They debate whether a framework or a more formal act is more appropriate for governing AI.
Major Discussion Point
Challenges in AI Governance
Ensuring AI algorithms are fair and unbiased is a key challenge
Explanation
An audience member raises concerns about the fairness and potential biases in AI algorithms. They question how to ensure that algorithms do not perpetuate or exacerbate existing societal biases.
Major Discussion Point
Challenges in AI Governance
Latifa Al Abulkarim
Speech speed
132 words per minute
Speech length
1669 words
Speech time
757 seconds
Cross-border AI applications create governance complexities
Explanation
Latifa points out that the cross-border nature of AI technologies creates additional complexities for governance. She notes that countries may need to consider risks associated with imported technologies that they did not choose but that are nonetheless present in their markets.
Major Discussion Point
Challenges in AI Governance
Agreements
Agreement Points
Existing laws and regulations are relevant to AI governance
Ivana Bartoletti
Basma Ammari
Existing laws already apply to AI, new specific AI laws may not be needed
Risk-based and principles-based regulation is preferable to strict AI-specific laws
Both speakers argue that existing laws and regulations can be applied to AI governance, and that creating entirely new AI-specific laws may not be necessary or beneficial.
Privacy and data protection are crucial in AI governance
Ivana Bartoletti
Basma Ammari
Privacy and data protection laws are important for governing AI
AI systems should be designed with privacy, safety, fairness and transparency in mind
Both speakers emphasize the importance of privacy and data protection in AI governance, arguing that these principles should be fundamental in AI development and regulation.
Similar Viewpoints
All three speakers advocate for a balanced approach to AI governance that considers both the potential benefits and risks of AI implementation, suggesting that lawmakers should focus on understanding and addressing specific use cases and risks rather than creating blanket regulations.
Ivana Bartoletti
Basma Ammari
Fuad Siddiqui
Parliamentarians need to define risks and protections for AI in their countries
Risk-based and principles-based regulation is preferable to strict AI-specific laws
AI is being applied to improve productivity in sectors like agriculture and energy
Unexpected Consensus
Need for collaboration between private sector and government in AI governance
Ivana Bartoletti
Basma Ammari
Fuad Siddiqui
Audience
Parliamentarians need to define risks and protections for AI in their countries
Private sector and government partnerships are needed to address AI’s workforce impacts
AI is being applied to improve productivity in sectors like agriculture and energy
There’s a need for AI research centers to support parliamentarians
Despite representing different constituencies (privacy practice, a major tech company, consulting, and the parliamentarians in the audience), all speakers and audience members agreed on the need for collaboration between the private sector and government in AI governance. This consensus is unexpected given the often conflicting interests of these groups.
Overall Assessment
Summary
The main areas of agreement include the relevance of existing laws to AI governance, the importance of privacy and data protection, the need for a balanced and risk-based approach to regulation, and the necessity of collaboration between the private sector and government.
Consensus level
There is a moderate to high level of consensus among the speakers on fundamental principles of AI governance. This consensus suggests a potential for productive collaboration in developing AI governance frameworks that balance innovation with responsible development and use. However, there are still areas of debate, particularly around the specifics of how to implement these principles in practice.
Differences
Different Viewpoints
Approach to AI regulation
Ivana Bartoletti
Basma Ammari
Existing laws already apply to AI, new specific AI laws may not be needed
Risk-based and principles-based regulation is preferable to strict AI-specific laws
Both speakers advocate caution in creating new AI-specific laws, but Ivana emphasizes leveraging existing legislation, whereas Basma promotes a risk-based and principles-based approach to regulation.
Unexpected Differences
Role of private sector in AI governance
Ivana Bartoletti
Basma Ammari
Parliamentarians need to define risks and protections for AI in their countries
Open source AI models help improve fairness and transparency
While both speakers acknowledge the importance of governance, there’s an unexpected difference in their emphasis on who should lead this effort. Ivana strongly emphasizes the role of parliamentarians, while Basma highlights the benefits of private sector initiatives like open-source AI models.
Overall Assessment
Summary
The main areas of disagreement revolve around the approach to AI regulation, the balance between leveraging existing laws and creating new frameworks, and the roles of government and private sector in AI governance.
Difference level
The level of disagreement is moderate. While there are differences in approach and emphasis, all speakers agree on the need for responsible AI development and governance. These differences reflect the complexity of AI governance and highlight the need for collaboration between government, industry, and civil society to develop effective AI policies.
Partial Agreements
Both speakers agree on the importance of privacy and data protection in AI governance, but they differ in their approach. Ivana emphasizes leveraging existing privacy laws, while Basma focuses on incorporating these principles into the design of AI systems from the start.
Ivana Bartoletti
Basma Ammari
Privacy and data protection laws are important for governing AI
AI systems should be designed with privacy, safety, fairness and transparency in mind
Takeaways
Key Takeaways
Existing laws and regulations already apply to AI in many areas, so entirely new AI-specific laws may not be necessary
Privacy and data protection laws are particularly important for governing AI systems
A risk-based and principles-based approach to AI regulation is preferable to strict, inflexible AI-specific laws
AI development should incorporate privacy, safety, fairness and transparency by design
Parliamentarians need to define the specific risks and protections they want for AI in their countries
Public-private partnerships and collaboration are important for effective AI governance and addressing workforce impacts
Resolutions and Action Items
Parliamentarians should focus on understanding AI to govern it effectively
Governments should consider digitizing national archives (with privacy protections) to improve AI training data
More research is needed on how to monitor and validate AI systems long-term
Unresolved Issues
How to effectively regulate AI algorithms without stifling innovation
How to address cross-border issues in AI governance
The appropriate balance between framework/principles-based approaches versus strict AI laws
How to ensure AI systems remain fair and unbiased over time
Suggested Compromises
Using existing laws and regulations where possible, while updating them to address AI-specific concerns
Adopting a risk-based approach that regulates high-risk AI applications more strictly
Balancing innovation and regulation through ‘sandbox’ approaches and public-private collaboration
Thought Provoking Comments
Privacy regulation, consumer regulation, discrimination-related regulation, liability, all these things already apply to AI. To parliamentarians, I wanted to say, don’t think that AI exists in isolation. It does not. A lot of the existing legislation that we have across different countries already applies to artificial intelligence.
speaker
Ivana Bartoletti
reason
This comment challenges the assumption that entirely new regulations are needed for AI, pointing out that many existing laws already apply. It encourages a more nuanced approach to AI governance.
impact
This shifted the discussion from focusing solely on new AI-specific regulations to considering how existing laws can be applied or adapted. It prompted further discussion on privacy laws and data protection in relation to AI.
Meta has adopted an open-source methodology with its large language models. What that means is that these large language models are made available for practically everyone to use and to build on. And why we do that, why Meta decided to do that, is not completely altruistic. It’s really to, one, improve access to AI, but also to make these models better.
speaker
Basma Ammari
reason
This comment provides insight into the strategy of a major tech company regarding AI development, highlighting the benefits of open-source approaches in improving AI models.
impact
It introduced the concept of collaborative AI development and its potential benefits, leading to further discussion on fairness, transparency, and representation in AI models.
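As an illustrative aside (not from the session itself), the following minimal sketch shows what this open release means in practice, assuming the Hugging Face transformers library and one of Meta’s openly released Llama checkpoints (the model name is one example; access typically requires accepting Meta’s license):

```python
# Minimal sketch: downloading and running an openly released model locally.
# Assumes `transformers` is installed and license access has been granted
# for the example checkpoint on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B-Instruct"  # example open-weight Meta release

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate a short completion locally; no proprietary API is involved.
inputs = tokenizer("Open models let anyone", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights run locally, researchers and regulators can inspect, fine-tune, and audit such a model directly rather than through a closed API, which is the mechanism behind the access, fairness, and transparency benefits Basma describes.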
Just as you have built electricity networks and an electricity grid, you would be building an intelligence grid. That intelligence grid, in my opinion, comprises three Cs. One is what I call the basic connectivity layer. How do you move data around? So, you have built 5G networks, and you’re getting into 6G networks, and you have space technologies, satellites and all. So, movement of the data in a secure way, in a high-performance way, in a low-latency way will be critical. Then you have a computing layer where you would have computing infrastructure, where you would be housing your data and federating data. Then you have a control layer around software systems and AI systems that are embedded within that.
speaker
Fuad Siddiqui
reason
This comment provides a comprehensive framework for understanding the infrastructure needed for AI, comparing it to existing utilities like electricity grids. It helps contextualize AI within a broader technological ecosystem.
impact
This analogy helped frame the discussion in terms of large-scale infrastructure development, leading to considerations of national strategies and the role of both public and private sectors in building AI capabilities.
Whoever tells you that there is a dichotomy between privacy and AI, please do not believe them. Do not. You can ask companies to enforce privacy.
speaker
Ivana Bartoletti
reason
This comment directly challenges a common narrative that privacy must be sacrificed for AI advancement, asserting that both can coexist.
impact
It reframed the discussion around privacy and AI, encouraging participants to think about how to enforce privacy within AI development rather than seeing them as mutually exclusive.
Overall Assessment
These key comments shaped the discussion by moving it from a focus on creating entirely new AI-specific regulations to a more nuanced approach considering existing laws, open collaboration, infrastructure development, and the compatibility of privacy with AI advancement. The discussion evolved to consider AI governance as a complex, multifaceted issue involving various stakeholders and requiring a balance between innovation and regulation. The comments encouraged parliamentarians to think more broadly about their role in shaping AI development and its societal impacts, while also highlighting the importance of collaboration between the public and private sectors.
Follow-up Questions
How to choose between a comprehensive AI Act and more flexible frameworks or regulations?
speaker
Maha Abdel Nasser and Salih
explanation
This is important to determine the most effective regulatory approach for AI that can keep pace with rapid technological changes while providing adequate oversight.
How can we ensure AI algorithms support privacy and avoid discrimination?
speaker
Salih
explanation
This is crucial for developing ethical AI systems that protect individual rights and promote fairness.
How can parliaments leverage AI to monitor the implementation and impact of laws?
speaker
Unnamed audience member
explanation
This could enhance the effectiveness of legislative oversight and policy evaluation.
How can the private sector be involved in AI development in a way that benefits society as a whole, considering impacts on job markets and education?
speaker
Silvia Dinica
explanation
This is important for ensuring AI development aligns with broader societal interests and mitigates potential negative impacts.
What experiences or lessons learned are there in implementing AI legislation?
speaker
Ailyn Febles
explanation
This could provide valuable insights for countries developing their own AI regulatory frameworks.
How can AI be used to support legislators in decision-making and law-making processes?
speaker
Ailyn Febles
explanation
This could potentially improve the efficiency and effectiveness of legislative processes.
Is there a direction to establish parliamentary research centers focused on digital innovation and AI in cooperation with the private sector?
speaker
Mubarak Janahi
explanation
This could help bridge the knowledge gap between lawmakers and rapidly evolving AI technologies.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online