WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder

16 Dec 2024 10:15h - 11:30h

Session at a Glance

Summary

This workshop focused on how data governance initiatives can promote ethics by design in AI and other data-oriented technologies. Speakers from various organizations discussed challenges and strategies for embedding ethical considerations into technological development.

Key themes included the need for multi-stakeholder collaboration, the challenge of defining and implementing ethics across different contexts, and the importance of moving from principles to actionable implementation. Speakers highlighted initiatives like UNESCO’s recommendation on AI ethics, which provides a global standard, and tools like ethical impact assessments to evaluate AI systems.

Challenges discussed included varying levels of AI readiness across countries, differing interpretations of ethical principles, and ensuring meaningful inclusion of civil society voices beyond tokenistic representation. The importance of education and capacity building around AI ethics was emphasized.

Speakers noted the value of open source AI for collaborative development and risk mitigation. Initiatives to bring together diverse stakeholders, including coalitions and expert networks, were described as ways to advance ethical AI governance globally.

Overall, participants agreed on the need to operationalize ethical principles through concrete actions and implementation strategies. Moving from high-level agreement on ethics to practical application across diverse contexts was seen as a key next step for advancing responsible AI development and deployment worldwide.

Key Points

Major discussion points:

– The challenges of defining and implementing ethics in AI across different contexts and cultures

– The importance of multi-stakeholder collaboration in shaping ethical AI governance

– Various initiatives and frameworks being developed by organizations to promote ethical AI, including UNESCO’s recommendation on AI ethics

– The need to move from high-level principles to concrete implementation of ethical AI practices

– Involving civil society and underrepresented voices in AI governance discussions

The overall purpose of the discussion was to explore how data governance initiatives and multi-stakeholder collaboration can promote ethics by design in AI and other digital technologies. The panelists aimed to address challenges in embedding ethics in AI systems and debate strategies for meaningful inclusion of diverse stakeholders in shaping ethical norms and standards.

The tone of the discussion was largely constructive and solution-oriented. Panelists acknowledged the complexities and challenges involved, but focused on sharing concrete initiatives and proposing ways to make progress. There was a sense of urgency about moving from principles to implementation as the discussion progressed. The tone became slightly more critical when discussing the tokenistic inclusion of civil society voices, but remained overall collaborative and forward-looking.

Speakers

– José Renato Laranjeira de Pereira: Researcher at the University of Bonn, Co-founder of LAPIN (Laboratory of Public Policy and Internet)

– Thiago Moraes: Joint PhD candidate in law at University of Brasilia and University of Brussels, Specialist in data protection and AI governance at Brazilian Data Protection Authority (ANPD), Co-founder and counselor of LAPIN

– Ahmad Bhinder: Policy Innovation Director at the Digital Cooperation Organization (DCO)

– Amina P.: Privacy Policy Manager at META

– Tejaswita Kharel: Project Officer at the Center for Communication Governance at National Law University Delhi

– Rosanna Fanni: Program Specialist in the ethics of AI unit at UNESCO

Additional speakers:

– Alexandra Krastins: Senior lawyer at VLK Advogados, Former project manager at Brazilian National Data Protection Authority, Co-founder and counselor of LAPIN

Full session report

Summary of the AI Ethics and Governance Discussion

This workshop explored strategies for embedding ethical considerations into AI and digital technologies. Speakers from various organizations discussed challenges and approaches to promoting ethics by design, emphasizing the need for multi-stakeholder collaboration and the importance of moving from principles to actionable implementation.

Ahmad Bhinder – Digital Cooperation Organization (DCO)

Ahmad Bhinder highlighted two main regulatory approaches to AI governance: a prescriptive, risk-based approach led by the EU and China, and a more flexible, principles-based approach favored by the US and Singapore. He noted varying levels of AI readiness across countries, complicating the development of global governance frameworks. Bhinder also mentioned the DCO’s Digital Space Accelerator program, which aims to bring together multiple stakeholders to address AI governance challenges.

Amina P. – META

Amina P. presented open source AI as a tool to enhance privacy and safety, challenging common perceptions by arguing that opening AI models to the wider community allows experts to identify, inspect, and mitigate risks collaboratively. She emphasized META’s partnerships with academia and civil society, including the Partnership on AI and the Coalition for Content Provenance and Authenticity. Amina also highlighted the need for better education on AI and privacy among stakeholders.

Rosanna Fanni – UNESCO

Rosanna Fanni emphasized UNESCO’s recommendation on AI ethics as a global standard agreed upon by 194 member states. She introduced UNESCO’s readiness assessment methodology for evaluating governance frameworks at the macro level and an ethical impact assessment tool for specific algorithms at the micro level. Fanni mentioned UNESCO’s plans to launch a global network of civil society organizations focused on AI ethics and governance, and their ongoing implementation of AI ethics recommendations through readiness assessments in over 60 countries. She also noted the upcoming AI Action Summit hosted by France in February and referenced the Global Digital Compact in her concluding remarks.

Tejaswita Kharel – Center for Communication Governance

Tejaswita Kharel highlighted the need for a context-specific understanding of ethical principles, emphasizing that ethics is a subjective concept varying across individuals and cultures. She raised concerns about ensuring meaningful inclusion of civil society voices in AI governance discussions, pointing out the challenges of moving beyond tokenistic representation to incorporate diverse perspectives effectively.

Challenges in Implementation

Speakers identified several key challenges in implementing ethics by design in AI systems:

1. Varying levels of AI readiness across countries

2. Difficulty in operationalizing ethical principles

3. Subjectivity and differing interpretations of ethics

4. Misconceptions about AI and privacy among stakeholders

5. Ensuring meaningful inclusion of civil society and Global South voices in AI governance processes

Tools and Frameworks for Ethical AI

Various tools and frameworks were presented to promote ethical AI development:

1. DCO’s AI governance assessment tool

2. META’s open source AI and responsible use guidelines

3. UNESCO’s readiness assessment methodology and ethical impact assessment framework

Moving Forward: Resolutions and Unresolved Issues

The discussion led to several action items:

– UNESCO’s launch of a global network of civil society organizations focused on AI ethics and governance

– Continued implementation of UNESCO’s recommendation on ethics of AI through readiness assessments

– META’s planned launch of a voluntary survey for businesses to map AI use across their operations in summer 2024

Key unresolved issues include:

1. Effectively operationalizing ethical principles in AI development and deployment

2. Addressing varying levels of AI readiness across different countries and regions

3. Reconciling differing interpretations and applications of ethical principles across contexts

Conclusion

The discussion highlighted the complexity of implementing ethical AI across different regulatory approaches, cultural contexts, and levels of governance. While there was broad agreement on the importance of ethical AI and multi-stakeholder collaboration, the specific implementation strategies and tools varied among different organizations and stakeholders. Moving from high-level agreement on ethics to practical application across diverse contexts emerged as a key next step for advancing responsible AI development and deployment worldwide.

Session Transcript

José Renato Laranjeira de Pereira: here in our time zone, but also good morning, good evening for those watching us online in other time zones. My name is Jose Renato. I am a researcher at the University of Bonn in Germany, but originally from Brazil, also co-founder of LAPIN, the Laboratory of Public Policy and Internet, a non-profit organization based in Brasilia, Brazil. And well, we’re going to start now our workshop number 45 on AI ethics by design. And well, our main goal here is to delve into how data governance initiatives can serve as a cornerstone for promoting ethics by design in data-oriented technologies, in particular artificial intelligence. So more specifically, this panel aims to, one, offer an overall understanding of ethics by design and the importance of embedding ethical considerations at the inception of technological development, two, address the challenges of embedding ethics in AI and other digital systems, and finally, debate multi-stakeholder collaboration and its relevance in shaping ethical norms and standards, particularly in the context of the recent UN resolution A/78/L.49, which underscores the importance of internationally interoperable safeguards for AI systems. We have policy questions which will guide the panelists to reflect upon these issues. The first one is how can policymakers effectively promote the concept of ethics by design, ensuring integration of ethical principles into the design process of AI and digital systems in a way that meaningfully includes multiple stakeholders, especially communities affected by the systems? The second policy question is what are the primary challenges to embedding ethics in AI and other systems, and how can policymakers, industry, and civil society collectively address them to ensure digital technologies’ responsible development and deployment? 
And finally, what strategies and mechanisms can be implemented to foster this multi-stakeholder collaboration in an ethical way, considering the diverse interests among these communities? Moderating this session is Thiago Moraes, who is a joint PhD candidate in law at the University of Brasilia and at the University of Brussels. Hope I pronounced that correctly, but my Dutch is not so good. Definitely. And he also works as a specialist in data protection and AI governance at the Brazilian Data Protection Authority, ANPD. Thiago is also co-founder and now counselor of the Laboratory of Public Policy and Internet, LAPIN. The online moderator, who is also with us in person here, is Alexandra Krastins, a senior lawyer at VLK Advogados. She provides consultancy in privacy and AI governance, is a former project manager at the Brazilian National Data Protection Authority, and is also co-founder and counselor of the Laboratory of Public Policy and Internet, LAPIN. Well, I hope you all enjoy the session. Looking forward to the great discussions that I’m sure we’re going to have. And I pass the floor to Thiago.

Thiago Moraes: Thank you, José. Well, we are really excited to be here today, because this is not only a relevant discussion, but also an opportunity for us to better understand what’s being done in a more hands-on approach when we discuss this topic of ethics in AI. And that’s why the by-design part of it is so important. And we brought brilliant speakers today, and I will briefly introduce each one of them as they open their introductory remarks. To start, I would like to invite Mr. Ahmad Bhinder to speak. Ahmad Bhinder is the Policy Innovation Director at the Digital Cooperation Organization, leading digital policy initiatives to foster collaboration amongst its 16 member states. With over 20 years of experience in public policy and regulation, he has shaped innovative policies driving connectivity and digital economic growth. Ahmad is dedicated to advancing the DCO’s mission of promoting digital prosperity for all. So, Ahmad, many thanks for coming here. Yesterday, we had the opportunity to see a bit of the DCO’s framework. It’s very interesting. And now we’ll have another interesting moment to learn how it can relate to the questions that we are raising here in this session. So, please, the floor is yours.

Ahmad Bhinder: Thank you very much, Thiago. Thank you very much, everybody, for inviting me, on behalf of the Digital Cooperation Organization, to this session. So, just a very brief introduction to what DCO is. We are an intergovernmental organization, and we are headquartered in Riyadh, with countries from the Middle East and Africa; we have European countries as our member states, and we also have South Asian countries. So, we started in 2020. Within the last four years, we have grown from five member states to 16 member states now, and we are governed by the council that has representatives or the ministers for digital economy and ICT from our member states. Our sole agenda is to promote digital prosperity and the growth of the digital economy, a responsible growth of the digital economy. So, this makes us a one-of-a-kind organization, a global intergovernmental organization that is not looking at sectors but looking broadly at the digital economy. Again, we have our offices here, so we welcome you from Brazil, the whole group of you here to Riyadh. I hope you’re enjoying. Okay, so coming to global AI governance, there are different initiatives that DCO is doing. I will take you through one of those when I explain the framework, but broadly, AI development and governance is not a harmonized phenomenon across the globe. So, we see two types of approaches. One of the approaches, which is led by the EU or China or some of those countries, is what we call a more prescriptive, rules-based, risk-based approach. And we see the EU AI law, or AI Act, that has come into place, which categorizes AI into risk categories, and then very prescriptive rules are set for those categories with the higher risk. And then we see the US and Singapore and a lot of other countries, which have taken a, I mean, so-called pro-innovation approach, where the focus is to let AI take its space of development and set the rules, which are broadly based on principles. 
So initially, we called it a principles-based approach, but actually all the approaches are based on principles. So even the prescriptive regulatory approaches are also based on certain principles. Some call them ethical AI principles, some call them responsible AI governance principles, et cetera. Also, we have seen across nations different means of approaching AI governance or AI regulation. For example, there are domain-specific approaches. So we have laws, for example, for the health sector, for education, and a lot of other sector-specific laws, and those laws are being shaped and developed in advance to take into consideration the new challenges and opportunities posed by AI in them. Then we have framework approaches, where broader AI frameworks are being shaped in countries, which would either reform some of those laws or impact the current laws. So there’s a broader AI framework. And the third one, as I said, is the specialized AI acts. So the EU AI Act, for example; Australia is working on an AI act; China has an AI law. So I just wanted to give you…

José Renato Laranjeira de Pereira: … where she has served as a member of the jury for data protection officer certification. Amina is CIPP-C certified and has conducted numerous training sessions for professionals on personal data.

Amina P.: …or an additional layer of complexity. And so Ahmad mentioned earlier the risk-based approach, and you mentioned the principles-based approach as well. And these are exactly what we advocate for when it comes to regulating AI. We also advocate for technology-neutral legal frameworks that build on existing legal frameworks without creating conflicts between them, and then, most importantly, collaboration between different stakeholders. And the way we approach this collaborative work at META, when it comes to privacy or ethical standards in general, is that we rely mainly today on open source AI. So some people will ask a very simple question: normally, open sourcing AI would bring more complexities, because we are opening the doors to malicious actions and malicious actors, et cetera. So how come we are enhancing privacy with, or through, open-source AI? Actually, our vision of, and work in relation to, open-source AI is that experts are involved. First of all, we are opening the models. When we talk about open-source AI, it means that we are opening the models to the AI community that can benefit from these AI tools or models, and everyone can use them. Now, the impact of this is that when we open these models, experts can also help us identify, inspect, and mitigate some of the risks. And it becomes a collaborative way of working on these risks and mitigating them within the AI community altogether. Of course, all this work is also preceded by a privacy review before the launch of products. Pre-deployment risk assessments are done by Meta. Fine-tuning, safety fine-tuning, and red teaming are also done ahead of any launch of any product. But in addition to all of these, of course, we have the Privacy Center. We can talk about the privacy-related tools that we have. 
But if we want to be specific on collaborative work: once the model is launched, and at Connect 2024, which is a developer conference organized by Meta annually, we announced the launch of Llama 3.2, one of our open large language models, an open model that can be used by the AI community, of course. And so, just to describe this, one of the tools that we use in open source AI is the Purple Llama project. Before putting in place standards and sharing them with users, there is this project, Purple Llama, which enhances privacy and safety: it is an open tool that developers and experts can use to test and mitigate the existing risks, because it combines both blue teaming and red teaming, and both are necessary in our opinion. It puts in place standards, which we call the Responsible Use Guide, accessible to everyone, of course. So this is when it comes to open source AI. To conclude on open source AI: for us, it’s through open source AI that we can enhance privacy. Another project that is worth mentioning is the Open Loop project that we have at Meta, which is a collaborative, feedback-based way of working. So we gather policymakers with companies and share feedback when it comes to prototypes of AI regulations and ethical standards, starting from an issue that has been identified in a specific country. So there are prototypes that are being put in place, gathering policymakers and tech companies, and starting from there, these prototypes or testing rules are tried under real-world conditions; from there, we can learn the lessons and then issue policy recommendations. 
These are the four steps of Open Loop. And actually, last year or the year before, we organized an Open Loop sprint. With Open Loop we accompanied, for instance, the EU AI Act in Europe, testing some of the provisions ahead of their official publication, but the Open Loop sprints are a very small version of the Open Loop projects, and we organized one at the Dubai Assembly last year, or the year before. In the MENA region, the way we do it, as a privacy policy manager, for instance: I organize expert group roundtables ahead of the launch of any product, whether related to AI or not. We gather our experts, we have a group of experts, we share the specificities of the product, and we get their feedback to improve our products, whether that is legal, or in relation to safety, privacy, human rights, et cetera; we take this feedback into consideration. We organize roundtables with policymakers. Recently, we had one in Turkey around AI and the existing data protection rules, whether they are enough to protect within the AI framework or not, what is necessary to do, and a discussion on data subject rights as well. We also contribute to public submissions in the region, in some of the countries, not all of them, depending on the importance or the nature of the regulation. In Saudi Arabia, of course, recently, Saudi Arabia has been very active on that front, completing the legal framework around data protection and putting in place AI ethical standards as well. So they have been very active on this and we shared our public comments, and we do believe that it’s always a discussion with policymakers. Yeah, looking forward to your questions. Sorry if I took more than seven minutes.

Thiago Moraes: Yeah, it’s okay. The only challenge we have is that we have to go with this first round and then try to have some discussions, but it’s interesting to see the many different activities that META is involved in to try to bring a more collaborative approach. Open source AI is definitely a hot topic, and there are even some sessions here at the IGF that are also discussing it. So it’s nice to know that there are initiatives like that at META as well. Well, without further ado, I think I should move on to our next speaker. Our next speaker, online, is Tejaswita Kharel. I don’t know if I pronounced it right, but she’s a project officer at the Center for Communication Governance at National Law University Delhi. Her work relates to various aspects of information technology law and policy, including data protection, privacy, and emerging technologies such as AI and blockchain. Her work on the ethical governance and regulation of technology is guided by human rights-based perspectives, democratic values, and constitutional principles. So, Tejaswita, thanks a lot for participating with us. And, yeah, well, we’re looking forward to knowing more about your work regarding these topics.

Tejaswita Kharel: Thank you. Can you guys hear me? Just want to confirm.

Thiago Moraes: Yes.

Tejaswita Kharel: All right. So, I’m Tejaswita. I’m a project officer at the Center for Communication Governance at NLU Delhi. We do a lot of research work on the governance of emerging technology, and whether that governance is ethical is, I think, a large part of what our work is. So, in terms of what I want to talk about today: I know we have three policy questions. Out of these three, what I want to concentrate on is number two, which is on the primary challenges to embedding ethics. I think when we talk about embedding ethics into AI or into any other system, it is very important to consider what ethics even means, in the sense that ethics is a very subjective concept. What ethics might mean to me might be very different from what it means to somebody else. And that is something we can already see in a lot of existing AI ethical principles or guiding documents, where in one you can see that they might consider transparency to be a principle, which will be a recurring principle across documents, but privacy may not necessarily be one, which means that there will be a varying level of what these ethical principles might be implementing. So what this means for us is that when you’re implementing ethics, there’s a good chance that not everybody is applying it in the same way, or even the principles might be different. In terms of what I mean when I say that people may not implement it in the same way, I will talk about fairness in AI. When we look at fairness in AI, fairness as a concept is different when you look at it in, let’s say, the United States versus what you would consider to be fairness as an ethical principle in India, right? In India, there will be various factors such as caste and religion, which will be very, very high-value factors when you’re determining fairness. Meanwhile, in the US, these factors may look like race. 
So I specifically mean this in terms of AI bias, when you’re looking at discrimination bias in AI. So with that in mind, the first challenge when we’re looking at embedding ethics is that ethics is different for everyone. And even the principles, even though they may be similar, there will be a lot of varying factors or differences in how these ethical principles are understood. So with that in mind, we need to solve this issue, and how do we deal with that? That brings me to point number three, which is on what strategies and mechanisms can be implemented, right? So one way that we solve this problem is by ensuring that there’s collaboration between multiple stakeholders, in the sense that we very often, as civil society and policymakers, have certain ideas of what ethics means, but do the developers and designers of these systems understand what this even means? Whether or not they have the ability to implement ethics by design into these systems is a very big question. The main way that we can solve this issue is by first identifying what the ethical principles are, and what they mean for each differing context. I am of the belief that we cannot define ethics as a larger concept. We must understand that depending on the system, depending on the regional and societal context, there will always be differences in terms of what ethics by design is going to look like, and there must always be differences, because there cannot be a one-size-fits-all standard application of ethics by design when not everybody agrees on what ethics means. So first we determine what ethics even means, what these principles can be, whether, for example, we want to ensure that privacy is a part of the ethical principles. And then we get into the question of what the factors are that will be included within these ethical principles. 
Like I said, if it’s fairness, are we looking at fairness in the sense of non-discrimination, or inclusivity? What the factors are that fall within this is very important to have a shared level of understanding on. And then we get into understanding how developers and designers can actually implement this in their systems, whether it’s by ensuring that their data is clean before they start working, to ensure that there’s no bias that comes into the data inherently. So I think that the main way we ensure ethics by design is by ensuring that there’s good collaboration between stakeholders. This collaboration can perhaps be in the form of a coalition. For example, in India what we have right now is a coalition on the responsible evolution of AI, where there are a lot of stakeholders, some of them developers, some of them big tech, and there’s also civil society participation, and all of us talk about, number one, what the difficulties are in terms of AI and its responsible evolution. And then we also discuss how we solve this. So the only way that we can do this is by actually creating a mechanism where there’s collaboration between all of these different stakeholders, where we discuss and identify how we design it. So this is my point, predominantly, in terms of how you can implement ethics by design. Thank you.

Thiago Moraes: Thanks a lot, Tejaswita. And quite interesting to know about these coalitions that are trying to engage different stakeholders to tackle issues such as fairness in AI. I think this is part of the puzzle that we have to solve here. When we’re discussing what we really mean, right, by ethics by design, and where we will get from here, it’s definitely a challenge that we have to consider these many different perspectives. And one of the challenges is how to make these collaborations actually work and come to results. So thanks for giving a glimpse of that. Hopefully we can have some time to go back a bit, but we’ll move now to our last but not least speaker, who is Rosanna Fanni from UNESCO. Rosanna is a program specialist in the ethics of AI unit at UNESCO, part of the bioethics and ethics of science and technology team, focusing on technology governance and ethics. She supports the global implementation of UNESCO’s recommendation on the ethics of AI and assists countries in shaping ethical AI policies. Previously, she coordinated AI policy projects at CEPS and contributed to research at the Brookings Institution, the European Parliament Research Service and Nuclear. Rosanna holds expertise in international AI governance and policy analysis. So thanks a lot for being here with us, Rosanna, and we are looking forward to knowing more about your work on the topic.

Rosanna Fanni: Thank you. Thank you very much, and also thanks to all my fellow panelists. I think a lot of things have already been mentioned that I would otherwise be repeating, so I hope I will not do that, but instead offer, first, maybe a putting-together of the remarks that we’ve heard today, and also some perspectives for the discussion. And I will outline that based on, of course, the work that we do at UNESCO to implement the ethics of AI around the world. And first, thanks also for organizing the session, because I think it’s really, really important, when we think about new technologies and especially artificial intelligence, to look at the ethics, because the ethics is what makes us human, what makes us come together, what makes us sit together in the room and discuss and interact and exchange different perspectives. So for us at UNESCO, ethics is not something philosophical, and it’s also not something that is built in as an afterthought, so to say, when we look at AI; it means really, from the first moment, to respect human rights and human dignity and fundamental freedoms, and to put people and societies at the center of the technology. So we really believe that it should not be about controlling the technology, but rather steering its development in a way that serves our goals for humankind, because we believe that the technology conversation, especially the conversation about AI, is in the end a societal one, not a technological one. And this means that we must scale our governance and our understanding of the technologies in a way that matches the growth of the industry and the growth of the technology itself as it develops into our societies in every aspect. I mean, I don’t have to, I think, mention the examples of where we see AI already happening today, and also the risks that arise with it. And there was one point in the discussion when it came to 
AI regulation, it was a bit, let’s say, let’s think back a few years when we didn’t yet have the AI Act in place, when we didn’t yet have the discussion about the US framework and also not other standards. There was still this moment, if you remember, that a lot of governments were like, oh, but we see the technology is developing so fast, we can’t really do anything about it, we don’t really know how to steer it and we need to leave the market, solve the problems on its own. But that was the moment when UNESCO started to implement its work on the recommendation on the ethics of AI. So UNESCO actually has been working on ethical and governance consideration of science and technology for several decades. Previously, we have promoted rigorous analysis and multidisciplinary and inclusive debate regarding the development and implication of emerging technologies with our scientific committees that we have. And this started off actually as a debate about ethics and the human genome editing. And since then, we have at UNESCO constantly reflected on the ethical challenges of emerging technologies. And this work eventually accumulated in the observation of member states, seeing that there is actually a lot of ethical risks when it comes to the development and application of artificial intelligence. And this is what has led us to work on the recommendation on the ethics of AI. The recommendation on the ethics of AI, if you think again now today, is actually quite, I think, a fascinating instrument because it is approved by all 194 member states and it has a lot of ethical principles, values, and policy action areas that everybody agreed to. 
So maybe already reacting to my previous speaker and fellow panelists: there actually is a global standard. I can very quickly list the values we have: the respect, promotion and protection of fundamental rights and human rights; environment and ecosystem flourishing, which is really important when you look at ethics, to also look at the environment; ensuring diversity and inclusiveness; and peaceful, just and interconnected societies. And then we have ten principles for how these values are translated into practice, for example fairness and non-discrimination, safety, the right to privacy of course, human oversight, transparency, and responsibility. You can read them all up online; I will not outline them here for the sake of time. And this recommendation, which was adopted in 2021, is now being implemented in over 60 member states around the world, and counting. What does implementing the recommendation mean? It means implementing it through a very specific tool, the readiness assessment methodology. I only have the French version, but here it is. The readiness assessment methodology is actually ethics by design for member states' AI governance frameworks. So what does that mean? It means that when member states work on AI governance strategies, or before they start working on them, we offer them this tool. It's basically a long questionnaire that gives member states a 360-degree vision of what their AI ecosystem looks like at home. It has five dimensions: societal and cultural, regulatory, infrastructural and others. Through this tool we really ensure that member states know where they stand and how they can improve their governance structures, so that ethics is truly at the center of what they do when they work on AI policy and governance.
We also have another tool, and I want to quickly spend a minute or so explaining it as well. It's the ethical impact assessment. The readiness assessment operates on a macro level and looks at the whole governance framework. The ethical impact assessment looks at one specific algorithm and at the extent to which that algorithm complies with the recommendation and the principles outlined in it. That's really important when we look at AI systems used in the public sector. For example, when we see AI systems used for welfare allocation or for deciding where children go to school, or when we look at AI used in the healthcare context, it is crucially important that these AI systems are designed in an ethical manner. The ethical impact assessment does exactly that. It analyzes the systems against the recommendation, and this is done across the entire life cycle. It looks at the governance, for example the multi-stakeholder governance: how has it been designed, which entities have been involved. Then it looks at the negative impacts and the positive impacts. That's also something I think is really important to emphasize when you look at ethics by design: it's not just about mitigating the risks, but also about looking at the opportunities that exist in the use of AI systems. There is also always the contextualization of weighing the negatives against the positives, which the ethical impact assessment covers as well. Very briefly, because I think I'm almost over time, I will also mention that we work with the private sector. We work with the private sector as well, because we think that when it comes to AI governance, nobody can do it alone, and the private sector is a key entity in ensuring that AI systems are designed and implemented in an ethical manner.
So we have teamed up with the Thomson Reuters Foundation to launch a voluntary survey for business leaders and companies to map how AI is being used across their operations, products and services. This is not yet live; we have launched the initiative, and the questionnaire will be available in summer next year. The idea is for businesses to conduct a mapping of their AI governance models and also assess, for example, where AI is already having an impact, for instance on diversity and inclusion, human oversight, or environmental impact, which is also featured there. By offering this tool to the private sector, we want to support the sector in making their governance mechanisms more ethical, so that they can disclose this to their investors and shareholders, but also to the public, and really ensure that ethics is at the center of their operations. And last but not least, another aspect that we have heard a lot about today is multi-stakeholderism. We at UNESCO see that civil society is always a critical part of the discussion about the ethics and governance of AI, but most often civil society is not properly sitting at the table, I think, when it comes to these discussions. We at UNESCO want to change that. Over the last year we have been mapping all the different civil society organizations working on the ethics and governance of AI, and we are bringing them all together next year, first at the AI Action Summit in Paris, and then at the Global Forum on the Ethics of AI, UNESCO's flagship conference on ethics and AI governance. We will be bringing this global network of civil society organizations together for the first time at both these events.
And we invite all civil society organizations that would like to join us as well to ensure that we bring these voices to affect the major AI governance processes that are ongoing right now. And with that, I will close. I really look forward to discussion. I have many more points to say, but yeah, thanks a lot and over to you, back to you, Tiago, or to our next moderator.

MODERATOR: Thank you, Rosanna, for your speech. We're moving to the second part of our panel, but first I would like to engage our audience online and on site. Does anyone have any questions, comments, or observations of any kind? Please come to the standing mic. Okay, so I'm going to put some questions to our speakers. You can answer as you like. How are you involving stakeholders from civil society and academia in the initiatives you have mentioned?

Ahmad Bhinder: I spoke last, so maybe I'll pass the floor after. Well, I'll be quick. As Rosanna said, we as an intergovernmental organization are all about collaboration and discussion. First of all, we have member states, with whom we hold and conduct discussions and workshops. We also have a growing network of observers, and for all the initiatives that we propose, we try to seek inputs from them to improve and shape the dialogue. We then want to position ourselves as a collective voice and advocate for best practices on their behalf. So yeah, this is from an intergovernmental organization perspective. Would you want to take it?

Amina P.: Okay. We have an initiative at META. We partner with a non-profit community that has been created, called the Partnership on AI. It's a partnership with academics, civil society, industry and media, creating solutions so that AI advances positive outcomes for people and society. Out of this initiative, specific recommendations are provided under what we call the synthetic media framework: recommendations on how to develop, create and share content generated or modified by AI in a responsible way. So this is one of the initiatives META has collaborated on; we collaborate with academia, but also with CSOs. We have other projects, such as the Coalition for Content Provenance and Authenticity, with the publication of what we call content credentials about how and when digital content was created or modified. This is called C2PA, and it is another kind of coalition that we have, not necessarily with academia or CSOs, or limited to these actors. Another partnership is the AI Alliance that was established with IBM, which gathers creators, developers, and adopters to build, enable, and advocate for open source AI. Tejaswita, do you want to join us?

Tejaswita Kharel: Yes. As somebody who represents more of civil society and academia, I can give more input on how I think we get involved in these conversations. Like I said before, in a lot of these coalitions or other groups there is a lot of representation, predominantly by industry, but I do think academia and civil society organizations are very often invited to give opinions, to be listened to, and to explain more about what our beliefs are. But I believe that very often when this is done, it ends up being a bit of a minority perspective, and it feels like you're not necessarily always taken very seriously. It's a little bit like advocacy, where you know that you're speaking about things that may not necessarily be what other people want to do. So even though academia and civil society representation exists, I don't think it's being done in a way that is actually useful, because it's almost a tokenization of representation. I will be asked to attend an event representing civil society or academia, and I will do it, but I feel like at the end of the conversation I am there solely to mark a tick box: okay, we have had representation, we've heard from them, but ultimately what we want to do is what we believe should be done. So it's more of a criticism from my end on this part. That being said, I unfortunately have another clashing event, so I will not be able to stay any further. I really apologize. It's been great; I really loved listening to everyone. I'm really grateful for this opportunity and to have been part of this panel with all of these other excellent panelists and the moderators. Thank you very much. I will be leaving now. Thank you.

MODERATOR: Thank you very much for your participation.

Ahmad Bhinder: I just want to add one quick thing which had slipped my mind. We have a mechanism called the Digital Space Accelerator program, where we hold global roundtables on different digital economy issues. For this AI tool that we are developing, we gather expert stakeholders on the sidelines of big events, like we did yesterday, and we seek their inputs while we are shaping and designing our product, this tool. So we went to Singapore, for example, we went to Riyadh and a couple of other places and gathered the experts. And this is a mechanism not just for AI; it's a holistic program for the DCO. Please have a look at it on our website, and feel free to contribute or join as well. So this Digital Space Accelerator program is how we involve all the stakeholders in our initiatives. Thank you.

Rosanna Fanni: Yes, and I will also add a couple of points, maybe directly picking up on the panelist who has unfortunately now left us. It's very much true that we also observe this tick-box exercise, especially when it comes to civil society involvement in global governance processes on AI. This is exactly why we are setting up the global network of civil society organizations. To give a bit more context: we will launch it in the context of the AI Action Summit hosted by France, which is happening in February next year. As many of you know, it is a government-led summit, the first one having taken place in the UK as the Safety Summit, followed by a second one hosted by the Republic of Korea. For us, it's really crucial that we do not do this again as a tick-box exercise, but that we bring civil society into the discussions and leverage their voices during the ministerial discussions as well. This is something the organizers have actually already announced: if you go to the AI Action Summit website, you will see that civil society will be a high priority. Our idea is to connect the dots and make this network permanent, so to say, and to then offer it as a consultative body for future AI Action Summits or for other major governance processes on AI. This is really at the heart of our endeavor, and we thank the Patrick McGovern Foundation, which funds this initiative, for their support in this project. The other part I really wanted to mention is the work with academics. This is also a crucial part of our work, and people from academia in particular support us in implementing the recommendation through the readiness assessment methodology that I mentioned beforehand, the 360-degree scanning tool for governments. And we bring together these experts.
So imagine we are conducting the readiness assessments in over 60 countries. That means we have 60 experts engaged, one in each country, and every expert brings something unique from their country to the discussions. We assemble these experts in a network that we call AI Ethics Experts Without Borders. This network is really there to unite the knowledge on the ethical governance of AI that we find in governments, on country level and maybe even on regional or local level, and to bring it together at UNESCO. What is really special about it is that experts can then exchange: hey, what was the experience with, let's say, AI used in healthcare or AI used in another sector, or maybe there was an issue with the supervision of AI. The idea is to bring this expertise together and leverage the knowledge of local experts as well. And what I also want to emphasize, which links to the civil society discussion, is that very often the same issue that happens with civil society happens with countries from the global South: it is more of a tick-box exercise. Oh, we have someone from Africa here, but actually the grand majority of the countries that do AI governance are mainly developed economies. For us, this is very much linked to our work: we bring in these voices from the global South not as a tick-box exercise, but to really leverage their voices. That's also why, out of these 60 countries we are working with, 22 are from Africa, and even more are small island developing states. For us it's really important to bring in these actors that are normally underrepresented, and we hope to continue this work in forums such as the IGF, but also in many other contexts as well.

MODERATOR: Thank you. We're going to ask you to bring us your final remarks, but as part of those final remarks, could you share some last insights on one question: what is the feedback you have received from stakeholders participating in those collaborative approaches? What were the challenges they shared in doing this collaborative work, and what were the key takeaways? And thank you very much for your participation.

Ahmad Bhinder: Well, okay. I think my concluding thoughts are actually connected to this question. We have engaged with stakeholders across our DCO member states: the governments as well as civil society. What we have noticed across our membership, and that's a representative sample because we are very diverse compared to global examples as well, is that there are varying levels of AI readiness across the member states. While some countries are struggling with basic infrastructure, others are really at the forefront of shaping AI governance. There are diverse definitions and diverse approaches to governance. The uniting factor, as Rosanna said, is the principles, which have been very widely adopted because they are not controversial; but how to action those principles has been quite diverse. Countries are approaching this very differently, but the principles are common. So there's huge potential for engagement, for harmonization, for synchronization of policies, because AI and all the emerging technologies, and their regulations, are not restricted to the countries themselves. These are global actors; borders do not constrain technologies. So I think it's really important now, when we talk about multi-stakeholderism or multilateralism, to actually action it: to have those voices heard, to have these global forums and global discussions, and then for the global rule-setting bodies to be more active and push the right set of rules for the nations to adopt. And I think the dialogue that we are having here, and across these forums, is very important. Thank you.

Amina P.: Yeah, I would highlight one thing. I cannot, of course, provide detailed feedback, because when we work with experts, they provide feedback depending on the product we ask them to review. But as a very general overview of the comments we receive: sometimes we feel there is a very varying level of understanding of what AI is and of the risks being put on the table. Are we talking about existential risks in general, or are we trying to take a more specific, scoped approach, identifying a particular risk and trying to target and mitigate it properly, in a very specific way? And sometimes we also face misconceptions from the experts, because if we are talking with experts who have a human rights-based approach, then maybe in terms of privacy, or when it comes to AI specificities, there are some misconceptions. So the educational work is absolutely indispensable, and hence some of the tools that we put in place. For instance, the system cards for AI: if a user does not have the knowledge, if a user does not understand the AI model, how it works and why it behaves this way, it's very difficult to earn that user's trust. This is why the system cards we put in place explain, let's say, the ranking system in our ads: how our ads are ranked, and how users are ranked when it comes to ads. There is also the privacy center and some other educational tools as well. It's very important to do this education work.

Rosanna Fanni: Yes, I will make it really short: implementation, implementation, implementation. We hear from member states that they want to operationalize the principles. They want to do something with AI, they want to use AI, but at the same time they don't want to get it wrong. They don't want to use it in an unethical manner; they want the benefits for everyone, for their citizens, for their businesses. And I think implementation of the recommendation, but also implementation of the other tools we have heard about today from other stakeholders, my fellow panelists, and of the Global Digital Compact, is key. The focus now really needs to shift from the principles, from the agreement and consensus that we have found. Yes, we need ethics. Yes, we need ethics by design. Yes, we need global governance for AI. But how do we do it, and how do we move from the principles to action? There's still a lot of work to be done, and a necessity to build capacities in governments and public administration, but also in the private sector and in civil society, to really be actionable, be operational, and at the same time use AI for the benefit of citizens while being aware of the risks and mitigating the ethical challenges that we have.

Thiago Moraes: Thanks a lot. It was amazing having this discussion, and I think Rosanna just captured the main question now: now that we have a consensus, where do we go from here, and how do we do it? We're looking forward to the initiatives being developed by different organizations, the AI Action Summit that's coming, and the many others that have been shared here at the Internet Governance Forum. So thanks everyone: thanks to our speakers for being here, to the audience, and for the whole discussion. And yeah, looking forward to what's coming.

Ahmad Bhinder

Speech speed

140 words per minute

Speech length

1105 words

Speech time

472 seconds

Risk-based and principles-based regulatory approaches

Explanation

Ahmad Bhinder discusses two main approaches to AI governance: prescriptive rules-based approaches (like the EU AI Act) and principles-based approaches focused on innovation. He notes that all approaches are ultimately based on certain principles, whether called ethical AI principles or responsible AI governance principles.

Evidence

Examples of EU AI Act and approaches in the US and Singapore

Major Discussion Point

Approaches to AI Governance and Ethics

Differed with

Amina P.

Tejaswita Kharel

Rosanna Fanni

Differed on

Approaches to AI Governance and Ethics

Varying levels of AI readiness across countries

Explanation

Ahmad Bhinder observes that across DCO member states, there are varying levels of AI readiness. While some countries struggle with basic infrastructure, others are at the forefront of shaping AI governance. This diversity presents challenges in harmonizing approaches to AI governance.

Evidence

Observations from DCO member states

Major Discussion Point

Challenges in Implementing Ethics by Design

Agreed with

Amina P.

Tejaswita Kharel

Rosanna Fanni

Agreed on

Challenges in implementing ethics by design

Digital Cooperation Organization’s collaborative initiatives

Explanation

Ahmad Bhinder discusses the DCO’s collaborative approach to AI governance. The organization engages with member states, governments, and civil society to gather inputs and shape dialogue on AI governance issues.

Evidence

DCO’s digital space accelerator program and global roundtables

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Amina P.

Rosanna Fanni

Tejaswita Kharel

Agreed on

Need for multi-stakeholder collaboration in AI governance

DCO’s AI governance assessment tool

Explanation

Ahmad Bhinder mentions the development of an AI governance assessment tool by the DCO. This tool is being shaped through inputs from expert stakeholders gathered at global roundtables and events.

Evidence

Stakeholder consultations in Singapore and other locations

Major Discussion Point

Tools and Frameworks for Ethical AI

Amina P.

Speech speed

118 words per minute

Speech length

1488 words

Speech time

755 seconds

Open source AI as a tool to enhance privacy and safety

Explanation

Amina P. argues that open source AI can enhance privacy and safety by allowing experts to identify, inspect, and mitigate risks. She emphasizes that this collaborative approach within the AI community helps address potential issues before product launch.

Evidence

Meta’s open source AI initiatives, including the Llama 3.2 model and the Purple Llama project

Major Discussion Point

Approaches to AI Governance and Ethics

Differed with

Ahmad Bhinder

Tejaswita Kharel

Rosanna Fanni

Differed on

Approaches to AI Governance and Ethics

Misconceptions about AI and privacy among stakeholders

Explanation

Amina P. highlights that there are varying levels of understanding about AI and its risks among stakeholders. She notes that misconceptions can arise, particularly when experts from different backgrounds (e.g., human rights) engage with AI specificities.

Evidence

Feedback from expert consultations and the need for educational tools like system cards

Major Discussion Point

Challenges in Implementing Ethics by Design

Agreed with

Ahmad Bhinder

Tejaswita Kharel

Rosanna Fanni

Agreed on

Challenges in implementing ethics by design

META’s partnerships with academia and civil society

Explanation

Amina P. describes META’s collaborations with academia and civil society organizations to address AI ethics and governance. These partnerships aim to create solutions for responsible AI development and use.

Evidence

Partnership on AI initiative and Coalition for Content Provenance and Authenticity (C2PA)

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Ahmad Bhinder

Rosanna Fanni

Tejaswita Kharel

Agreed on

Need for multi-stakeholder collaboration in AI governance

Tejaswita Kharel

Speech speed

175 words per minute

Speech length

1277 words

Speech time

436 seconds

Need for context-specific understanding of ethical principles

Explanation

Tejaswita Kharel emphasizes that ethics is subjective and can mean different things in various contexts. She argues that ethical principles for AI must be understood and applied differently based on regional and societal contexts, as there cannot be a one-size-fits-all approach.

Evidence

Example of fairness in AI differing between the United States and India due to factors like caste and religion

Major Discussion Point

Approaches to AI Governance and Ethics

Differed with

Ahmad Bhinder

Amina P.

Rosanna Fanni

Differed on

Approaches to AI Governance and Ethics

Subjectivity and differing interpretations of ethics

Explanation

Tejaswita Kharel points out that ethics is a subjective concept, leading to varying interpretations and implementations of ethical principles in AI. This subjectivity creates challenges in consistently applying ethics by design across different contexts and stakeholders.

Evidence

Variations in ethical principles across different AI guidelines and documents

Major Discussion Point

Challenges in Implementing Ethics by Design

Agreed with

Ahmad Bhinder

Amina P.

Rosanna Fanni

Agreed on

Challenges in implementing ethics by design

Need for meaningful inclusion of civil society voices

Explanation

Tejaswita Kharel criticizes the current state of civil society involvement in AI governance discussions, describing it as often tokenistic. She argues for more meaningful inclusion of civil society perspectives beyond just ticking a box for representation.

Evidence

Personal experiences in participating in stakeholder consultations

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Ahmad Bhinder

Amina P.

Rosanna Fanni

Agreed on

Need for multi-stakeholder collaboration in AI governance

Rosanna Fanni

Speech speed

159 words per minute

Speech length

2581 words

Speech time

972 seconds

UNESCO recommendation on ethics of AI as global standard

Explanation

Rosanna Fanni presents UNESCO’s recommendation on the ethics of AI as a global standard approved by 194 member states. This recommendation provides a set of ethical principles, values, and policy action areas that have gained widespread agreement.

Evidence

UNESCO’s recommendation on the ethics of AI and its implementation in over 60 member states

Major Discussion Point

Approaches to AI Governance and Ethics

Differed with

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Differed on

Approaches to AI Governance and Ethics

Difficulty in operationalizing ethical principles

Explanation

Rosanna Fanni highlights the challenge of moving from agreed-upon ethical principles to practical implementation. She emphasizes the need to shift focus from establishing principles to taking concrete actions in AI governance and ethics.

Evidence

Feedback from member states expressing the desire to operationalize principles and use AI responsibly

Major Discussion Point

Challenges in Implementing Ethics by Design

Agreed with

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Agreed on

Challenges in implementing ethics by design

UNESCO’s readiness assessment methodology

Explanation

Rosanna Fanni describes UNESCO’s readiness assessment methodology as a tool for ethics by design in AI governance frameworks. This tool provides member states with a comprehensive view of their AI ecosystem across multiple dimensions.

Evidence

Implementation of the readiness assessment in over 60 member states

Major Discussion Point

Tools and Frameworks for Ethical AI

Ethical impact assessments for AI systems

Explanation

Rosanna Fanni introduces UNESCO’s ethical impact assessment tool, which evaluates specific AI algorithms against the principles outlined in the UNESCO recommendation. This tool is particularly important for AI systems used in the public sector.

Evidence

Examples of AI systems in welfare allocation, education, and healthcare

Major Discussion Point

Tools and Frameworks for Ethical AI

UNESCO’s global network of civil society organizations

Explanation

Rosanna Fanni discusses UNESCO’s initiative to create a global network of civil society organizations focused on AI ethics and governance. This network aims to amplify civil society voices in major AI governance processes and discussions.

Evidence

Planned launch of the network at the AI Action Summit in February

Major Discussion Point

Multi-stakeholder Collaboration

Agreed with

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Agreed on

Need for multi-stakeholder collaboration in AI governance

Agreements

Agreement Points

Need for multi-stakeholder collaboration in AI governance

Ahmad Bhinder

Amina P.

Rosanna Fanni

Tejaswita Kharel

Digital Cooperation Organization’s collaborative initiatives

META’s partnerships with academia and civil society

UNESCO’s global network of civil society organizations

Need for meaningful inclusion of civil society voices

All speakers emphasized the importance of involving various stakeholders, including governments, industry, academia, and civil society, in shaping AI governance and ethics frameworks.

Challenges in implementing ethics by design

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Rosanna Fanni

Varying levels of AI readiness across countries

Misconceptions about AI and privacy among stakeholders

Subjectivity and differing interpretations of ethics

Difficulty in operationalizing ethical principles

Speakers agreed that implementing ethics by design in AI systems faces various challenges, including differing levels of readiness, misconceptions, and the difficulty of translating ethical principles into practical actions.

Similar Viewpoints

Both speakers highlighted the importance of considering different approaches to AI governance and ethics, emphasizing the need for context-specific understanding and application of ethical principles.

Ahmad Bhinder

Tejaswita Kharel

Risk-based and principles-based regulatory approaches

Need for context-specific understanding of ethical principles

Both speakers presented tools and methodologies aimed at enhancing the ethical development and governance of AI systems, emphasizing transparency and comprehensive assessment.

Amina P.

Rosanna Fanni

Open source AI as a tool to enhance privacy and safety

UNESCO’s readiness assessment methodology

Unexpected Consensus

Importance of education and capacity building in AI ethics

Amina P.

Rosanna Fanni

Misconceptions about AI and privacy among stakeholders

Difficulty in operationalizing ethical principles

While not explicitly stated as a main argument, both speakers emphasized the need for education and capacity building to address misconceptions and enable the practical implementation of ethical principles in AI governance.

Overall Assessment

Summary

The main areas of agreement include the need for multi-stakeholder collaboration, recognition of challenges in implementing ethics by design, and the importance of context-specific approaches to AI governance and ethics.

Consensus level

There is a moderate to high level of consensus among the speakers on the fundamental aspects of AI ethics and governance. This consensus suggests a growing recognition of the complexities involved in ethical AI development and the need for collaborative, context-sensitive approaches. However, the specific implementation strategies and tools vary among different organizations and stakeholders, indicating that while there is agreement on the importance of ethical AI, the path to achieving it remains diverse and evolving.

Differences

Different Viewpoints

Approaches to AI Governance and Ethics

Ahmad Bhinder

Amina P.

Tejaswita Kharel

Rosanna Fanni

Risk-based and principles-based regulatory approaches

Open source AI as a tool to enhance privacy and safety

Need for context-specific understanding of ethical principles

UNESCO recommendation on ethics of AI as global standard

Speakers presented different approaches to AI governance and ethics, ranging from risk-based and principles-based regulatory approaches to open source AI and context-specific ethical principles. While Ahmad Bhinder discussed various regulatory approaches, Amina P. focused on open source AI, Tejaswita Kharel emphasized context-specific ethics, and Rosanna Fanni presented UNESCO’s global standard.

Unexpected Differences

No unexpected differences were identified among the speakers.

Overall Assessment

Summary

The main areas of disagreement centered around the specific approaches to implementing ethical AI governance and the challenges in operationalizing ethical principles across different contexts.

Difference level

The level of disagreement among the speakers was moderate. While they shared common goals of ethical AI governance, they presented different perspectives and approaches. This diversity of viewpoints highlights the complexity of the topic and the need for continued multi-stakeholder dialogue to develop comprehensive and effective ethical AI frameworks.

Partial Agreements

All speakers agreed on the need for ethical AI governance, but differed in their approaches to addressing the challenges. Ahmad Bhinder highlighted varying levels of AI readiness, Tejaswita Kharel emphasized context-specific ethics, and Rosanna Fanni focused on the difficulty of operationalizing ethical principles. They all recognized the complexity of implementing ethics by design but proposed different solutions.

Ahmad Bhinder

Tejaswita Kharel

Rosanna Fanni

Varying levels of AI readiness across countries

Need for context-specific understanding of ethical principles

Difficulty in operationalizing ethical principles

Takeaways

Key Takeaways

There are varying approaches to AI governance and ethics globally, including risk-based and principles-based regulatory approaches

Open source AI and multi-stakeholder collaboration are seen as important tools for enhancing privacy, safety and ethical AI development

UNESCO’s recommendation on ethics of AI provides a global standard agreed upon by 194 member states

Implementing ethics by design in AI faces challenges due to varying levels of AI readiness across countries and differing interpretations of ethical principles

There is a need for context-specific understanding and application of ethical principles in AI

Moving from ethical principles to practical implementation and action remains a key challenge

Resolutions and Action Items

UNESCO to launch a global network of civil society organizations focused on AI ethics and governance

UNESCO to continue implementing its recommendation on ethics of AI through readiness assessments in over 60 countries

META to launch a voluntary survey for businesses to map AI use across their operations in summer 2024

Continued development of tools like DCO’s AI governance assessment tool and UNESCO’s ethical impact assessment framework

Unresolved Issues

How to effectively operationalize ethical principles in AI development and deployment

How to ensure meaningful inclusion of civil society and Global South voices in AI governance processes

How to address varying levels of AI readiness across different countries and regions

How to reconcile differing interpretations and applications of ethical principles across contexts

Suggested Compromises

Balancing prescriptive regulatory approaches with more flexible principles-based approaches to AI governance

Using open source AI as a way to enhance both innovation and ethical safeguards

Combining global ethical standards (like UNESCO’s recommendation) with context-specific implementations

Thought Provoking Comments

We see two types of approaches. One, led by the EU, China, and some other countries, is what we call a more prescriptive, rules-based, risk-based approach. And we see the EU AI law, or AI Act, that has come into place, which categorizes AI into risk categories, and then very prescriptive rules are set for the higher-risk categories. And then we see the US, Singapore, and a lot of other countries, which have taken a so-called pro-innovation approach, where the focus is to let AI take its space of development and to set rules that are broadly based on principles.

speaker

Ahmad Bhinder

reason

This comment provides a clear overview of the two main regulatory approaches to AI governance globally, highlighting the key differences between prescriptive and principles-based approaches.

impact

It set the stage for discussing different regulatory frameworks and their implications, prompting further exploration of how ethics can be embedded in these different approaches.

Actually, the way we approach this, and our vision and work in relation to open-source AI, is that experts are involved. First of all, we are opening the models. When we talk about open-source AI, it means that we are opening the models to the AI community that can benefit from these AI tools and models, and everyone can use them. Now, the impact of this is that when we open these models, experts can also help us identify, inspect, and mitigate some of the risks.

speaker

Amina P.

reason

This comment challenges the common perception that open-sourcing AI models could lead to more risks, instead presenting it as a collaborative approach to identifying and mitigating risks.

impact

It shifted the discussion towards the potential benefits of open collaboration in AI development and ethics, prompting consideration of how transparency can contribute to ethical AI.

I think when we talk about embedding ethics into AI or into any other system, it is very important to consider what ethics even means, in the sense that ethics is a very subjective concept. What ethics might mean to me might be very different from what it means to somebody else.

speaker

Tejaswita Kharel

reason

This comment highlights the fundamental challenge of defining and implementing ethics in AI, pointing out the subjective nature of ethical principles.

impact

It deepened the conversation by prompting reflection on the complexities of implementing ethical AI across different cultural and societal contexts.

The readiness assessment is something that operates on a macro level and really looks at the whole governance framework. The ethical impact assessment looks at one specific algorithm and examines to what extent that algorithm complies with the recommendation and the principles outlined in it.

speaker

Rosanna Fanni

reason

This comment introduces concrete tools for assessing ethical AI implementation at both macro and micro levels, providing practical approaches to the challenge.

impact

It moved the discussion from theoretical considerations to practical implementation strategies, offering tangible ways to embed ethics in AI development and governance.

Overall Assessment

These key comments shaped the discussion by highlighting the complexity of implementing ethical AI across different regulatory approaches, cultural contexts, and levels of governance. They moved the conversation from abstract principles to concrete challenges and potential solutions, emphasizing the need for collaboration, transparency, and practical assessment tools. The discussion evolved from identifying the problem to exploring multifaceted approaches for embedding ethics in AI development and governance.

Follow-up Questions

How can we move from ethical principles to actionable implementation of AI governance?

speaker

Rosanna Fanni

explanation

There is a need to operationalize ethical principles and implement AI governance in practice, beyond just agreeing on high-level concepts.

How can we ensure meaningful inclusion of civil society voices in AI governance discussions, beyond tokenistic representation?

speaker

Tejaswita Kharel

explanation

Civil society participation often feels like a ‘tick box exercise’ without real influence, so more effective ways of inclusion are needed.

How can we address varying levels of AI readiness across different countries while developing global AI governance frameworks?

speaker

Ahmad Bhinder

explanation

There are diverse approaches and capabilities related to AI across countries, which creates challenges for harmonizing global governance.

How can we improve public understanding of AI systems and their implications?

speaker

Amina P.

explanation

There are often misconceptions about AI among experts and the public, highlighting a need for better education and explanation of AI systems.

How can open source AI be leveraged to enhance privacy and security?

speaker

Amina P.

explanation

Open sourcing AI models allows for collaborative risk mitigation, but the implications and best practices need further exploration.

How can we ensure ethical principles are applied consistently across different cultural and societal contexts?

speaker

Tejaswita Kharel

explanation

Ethical principles like fairness can have different interpretations in different contexts, creating challenges for global standards.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.