Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance

15 Dec 2024 12:45h - 13:45h

Session at a Glance

Summary

This discussion focused on the development of an ethical AI governance framework and tool by the Digital Cooperation Organization (DCO) and Access Partnership. The session began with an introduction framing AI as a societal challenge, emphasizing the need for ethical, responsible, and human-centric AI development. Ahmad Bhinder from the DCO explained the organization's approach to ethical AI governance from a human rights perspective, highlighting its diverse membership and goals.


Chris Martin from Access Partnership presented DCO’s view on ethical AI governance, outlining six key principles: accountability and oversight, transparency and explainability, fairness and non-discrimination, privacy, sustainability and environmental impact, and human-centeredness. The discussion then introduced a prototype tool designed to assess AI systems’ compliance with these ethical principles and human rights considerations.


Matthew Sharp detailed the tool’s functionality, explaining how it provides risk assessments and actionable recommendations for both AI developers and deployers. The tool aims to be comprehensive, practical, and interactive, focusing on human rights impacts across various industries.


The session included a practical exercise where participants were divided into groups to analyze AI risk scenarios using the framework. Groups identified potential ethical risks, scored their severity and likelihood, and proposed mitigation strategies. This activity demonstrated the tool’s application in real-world scenarios.


The discussion concluded with remarks from Alaa Abdulaal of DCO, emphasizing the importance of a multi-stakeholder approach in addressing ethical AI challenges and the organization’s commitment to providing actionable solutions for countries and developers. The session highlighted the ongoing efforts to create practical tools for ensuring ethical AI development and deployment on a global scale.


Key points

Major discussion points:


– Introduction of the Digital Cooperation Organization (DCO) and its work on ethical AI governance


– Presentation of DCO’s human rights-centered approach to AI ethics and governance


– Overview of an AI ethics evaluation tool being developed by DCO and Access Partnership


– Interactive exercise for participants to apply the tool’s framework to AI risk scenarios


Overall purpose:


The goal of this discussion was to introduce DCO’s work on ethical AI governance, present their new AI ethics evaluation tool, and gather feedback from participants on the tool’s framework through an interactive exercise.


Tone:


The tone was primarily informative and collaborative. The speakers provided detailed information about DCO’s approach and the new tool in a professional manner. The tone shifted to become more interactive and engaging during the group exercise portion, as participants were encouraged to apply the concepts and provide input. Overall, the discussion maintained a constructive and forward-looking atmosphere focused on addressing ethical challenges in AI development and deployment.


Speakers

– Chris Martin: Head of policy innovation at Access Partnership


– Ahmad Bhinder: Representative of the Digital Cooperation Organization


– Matthew Sharp: Senior manager at Access Partnership


– Thiago Moraes: Facilitator of the interactive exercise


– Alaa Abdulaal: Chief of Digital Economy Foresight at the DCO


Additional speakers:


– Kevin: Colleague mentioned as handing out worksheets


Full session report

Summary of Discussion on Ethical AI Governance


Introduction


This discussion, led by representatives from the Digital Cooperation Organization (DCO) and Access Partnership, focused on the development of an ethical AI governance framework and assessment tool. The session emphasized the critical importance of addressing AI as a societal challenge, highlighting the need for ethical, responsible, and human-centric AI development.


Key Speakers and Their Roles


1. Chris Martin: Head of policy innovation at Access Partnership


2. Ahmad Bhinder: Representative of the Digital Cooperation Organization


3. Matthew Sharp: Senior manager at Access Partnership


4. Thiago Moraes: Facilitator of the interactive exercise


5. Alaa Abdulaal: Chief of Digital Economy Foresight at the DCO


Discussion Overview


1. Importance of Ethical AI Governance


Chris Martin opened the discussion by framing AI as a societal challenge rather than merely a technical one. He emphasized the monumental stakes involved in AI development and deployment, stressing the need to “get this right” by ensuring AI is ethical, responsible, and human-centric. Martin highlighted the uneven global diffusion of AI technologies, noting the concentration in Asia Pacific, North America, and Europe, while identifying growth opportunities in the Middle East and North Africa. He also mentioned concerns about the increasing energy consumption related to AI development.


2. DCO’s Approach to AI Governance


Ahmad Bhinder introduced the DCO, explaining its membership of 16 member states and its network of over 40 private-sector observers. He elaborated on DCO's human rights-centered approach to AI governance, noting that the organization had identified which human rights are most impacted by AI and reviewed how AI policies, regulations, and governance intersect with these rights across their diverse membership and globally.


3. Key Principles for Ethical AI


Chris Martin presented DCO’s view on ethical AI governance, outlining six key principles:


a) Accountability and oversight in AI decision-making


b) Transparency and explainability of AI systems


c) Fairness and non-discrimination in AI outcomes


d) Privacy protection and data safeguards


e) Sustainability and environmental impact considerations


f) Human-centered design focused on social benefit


Martin provided specific examples and explanations for each principle, emphasizing their importance in ethical AI development and deployment.
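
These principles sit at the top of the framework, with more specific risks underneath them, as Matthew Sharp describes later in the session. Purely as an illustration of that structure, and not the DCO's actual schema, the hierarchy could be encoded as a simple mapping; the category keys and sub-risk labels below are paraphrased from examples given in the session, and the grouping is an assumption:

# Hypothetical encoding of the six principles and example sub-risks
# mentioned in the session; names and groupings are illustrative only.
PRINCIPLES = {
    "accountability_and_oversight": [
        "insufficient human involvement",
        "inadequate incident handling",
    ],
    "transparency_and_explainability": [
        "opaque decision-making",
        "complexity undermining user understanding",
    ],
    "fairness_and_non_discrimination": [
        "biased or unfair outcomes",
        "demographic performance disparities",
    ],
    "privacy": [
        "unauthorized collection of sensitive data",
        "inadequate data safeguards",
    ],
    "sustainability_and_environment": [
        "excessive energy consumption",
    ],
    "human_centeredness": [
        "misalignment with societal needs",
    ],
}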


4. DCO’s AI Ethics Evaluation Tool


Matthew Sharp provided a detailed overview of the AI ethics evaluation tool being developed by DCO and Access Partnership. Key features of the tool include:


– Separate risk questionnaires for AI developers and deployers


– Assessment of severity and likelihood of human rights risks


– Interactive visualizations to help prioritize actions


– Practical, actionable recommendations based on risk assessment


Sharp explained the tool’s workflow, highlighting differences between developer and deployer questionnaires. He emphasized that the tool is designed to be comprehensive, practical, and interactive, focusing on human rights impacts across various industries. Sharp also noted how this tool differs from existing frameworks, particularly in its focus on human rights and inclusion of both developers and deployers in the assessment process.
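
The session described the scoring pipeline only at a high level: per-question severity and likelihood ratings, an average score per risk category, a radar chart for prioritization, and recommendations keyed to the answers. A minimal Python sketch of that logic follows; the answer format, the 1-to-5 scales, the multiplicative combination, and the threshold are assumptions for illustration, not the tool's actual implementation:

from statistics import mean

def category_scores(answers):
    """Average combined risk score per category.

    Each answer is assumed to carry a risk category plus severity and
    likelihood ratings on 1-5 scales, combined multiplicatively.
    """
    by_category = {}
    for a in answers:
        by_category.setdefault(a["category"], []).append(
            a["severity"] * a["likelihood"]
        )
    return {cat: mean(vals) for cat, vals in by_category.items()}

def prioritize(scores, threshold=9.0):
    """Risk categories at or above the threshold, highest first."""
    high = {cat: s for cat, s in scores.items() if s >= threshold}
    return sorted(high, key=high.get, reverse=True)

# Hypothetical lookup from risk category to targeted mitigation advice.
RECOMMENDATIONS = {
    "privacy": "Strengthen data management and access controls.",
    "fairness_and_non_discrimination": "Validate and test for demographic disparities.",
}

def recommend(scores, threshold=9.0):
    """Targeted recommendations for the high-priority categories."""
    return {cat: RECOMMENDATIONS.get(cat, "Review controls for this risk area.")
            for cat in prioritize(scores, threshold)}

In the real tool, the per-category averages would also feed the interactive radar chart, and the questionnaire, and therefore the answers, would differ for developers and deployers.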


5. Interactive Exercise


Thiago Moraes led an interactive exercise where participants were divided into groups to analyze AI risk scenarios using the framework. The exercise involved:


– Identifying potential ethical risks in AI systems for medical diagnosis and job application screening


– Scoring risks based on severity and likelihood


– Developing actionable recommendations to mitigate identified risks


Participants engaged with real-world scenarios, applying the framework to reason through the ethical considerations involved in AI deployment.
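
The worksheet arithmetic itself was simple. The session did not state the formula, but the scores the groups reported (4, 6, and 9) are consistent with multiplying severity by likelihood on 1-to-3 scales, which the sketch below assumes; the individual severity and likelihood values are back-calculated guesses:

# Worksheet-style scoring for the medical-diagnosis scenario, using the
# risks and combined scores one group reported. Severity and likelihood
# are assumed 1-3 scales, combined multiplicatively.
risks = [
    {"name": "explainability gaps in rare-disease diagnosis", "severity": 2, "likelihood": 3},
    {"name": "gender-based discrimination from skewed statistics", "severity": 2, "likelihood": 2},
    {"name": "leak of highly sensitive health data", "severity": 3, "likelihood": 3},
]

for r in risks:
    r["impact"] = r["severity"] * r["likelihood"]  # combined impact score

# Rank from most to least critical, as the worksheet asked.
for r in sorted(risks, key=lambda r: r["impact"], reverse=True):
    print(f"{r['name']}: impact {r['impact']}")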


6. DCO’s Future Plans and Closing Remarks


Alaa Abdulaal concluded the session by emphasizing DCO’s commitment to a multi-stakeholder approach in addressing ethical AI challenges. Key points included:


– DCO’s belief in collaborative digital transformation


– The organization’s aim to provide actionable solutions for ethical AI deployment


– Plans to share the final AI ethics tool publicly


– DCO’s mission to enable digital prosperity for all through cooperation


– The importance of ethical use of AI in various sectors


Conclusion


The discussion presented a comprehensive overview of the DCO’s efforts to develop an ethical AI governance framework and assessment tool. By emphasizing a human rights-centered approach and providing practical tools for risk assessment, the DCO aims to address the complex challenges posed by AI development and deployment on a global scale.


Additional Notes


– Matthew Sharp used a QR code poll to ask participants two quick questions before the breakout activity.


– A feedback form was shared at the end of the session for further input from participants.


Session Transcript

Chris Martin: Hiya, how are you doing? Check, check. Is that better? Cool. Again, hello. Welcome. My name is Chris Martin. I’m head of policy innovation at Access Partnership. We’re a global tech policy and regulatory consulting firm. So pleased to be here with all of you and with our partners at the Digital Cooperation Organization. Perhaps we can get started with, I think, a little bit of an acknowledgement that artificial intelligence is no longer really a technical challenge. It’s a societal one. Every decision that AI systems make, what they power, are going to impact and shape our lives, how we work, and how we interact. And the stakes are monumental. They demand that we get this right, and at the same time, key questions remain. Most especially, how do we ensure that AI is both a powerful tool, but also ethical, responsible, and human-centric? Today we stand at a pivotal moment. Policymakers, technologists, and civil society are coming together to navigate the complex intersection of innovation and ethics, and together we need to develop frameworks that both anticipate the risks inherent in these systems, but also seize the transformative potential of AI for global good. Now, this session isn’t just about policies. It’s about principles in action, defining who we are, what we as a global community value, and how we protect those values, especially in the face of rapid change. I invite you to take this opportunity to explore these possibilities with us, to ask some hard questions, and build pathways to ensure that AI serves humanity and not the other way around. With that, please let me introduce my colleague, Mr. Ahmad Bhinder from the Digital Cooperation Organization.


Ahmad Bhinder: Hello. Good afternoon, everybody. I see a lot of faces from all around the world, and it is really, really fortunate for us to be able to gather you all here together and showcase some of our work, actually tell you who we are as the Digital Cooperation Organization, and discuss some of the work that we are doing and seek your inputs. We really meant this to be a very interactive discussion, a roundtable session, so let’s see how we can convert this into a roundtable discussion going forward. So my name is Ahmad Bhinder, and I represent the Digital Cooperation Organization. We are an intergovernmental organization, represented by the Ministers of Digital Economy and ICT of the 16 member states who we represent. And the member states come from, as you would see, the Middle East, Europe, Africa, and South Asia, and we are very rapidly expanding. We have a whole network of private sector partners that we call observers, as you would see in other intergovernmental organizations, and we have over 40 observers who are already with us now. We are quite young as an organization; we came into conception at the end of 2020, so we’re in our fourth year. So that’s a bit about our organization, what DCO is and how it works. The work we started this year was to look at ethical AI, the ethical governance of AI. And we thought that while a lot of work is being done on ethical and responsible AI governance, we wanted to look at it from a human rights perspective. So we identified which human rights are most impacted by artificial intelligence, and then we reviewed, across our membership and across the globe, how AI policies, regulation, and governance intersect with those human rights, and what needs to be done to ensure that we have a human rights-protective, ethical AI governance approach. There are a couple of reports that we are going to publish on that, and we are developing a policy tool, which will be the crux of our discussion today. We have developed a framework on the human rights risks associated with AI and the ethical principles that need to be taken care of, and then the tool is going to provide our member states and beyond… yeah, can you hear me all right? Okay, sorry. So we’ll provide AI system developers or deployers with a tool where they can assess their systems’ compliance, per se, or their closeness to the human rights or ethical principles. And then the tool is going to recommend improvements to the systems. So again, I don’t want to kill it in my opening remarks. We have our colleagues from Access Partnership, who we are developing this tool together with. So I will give it back to Chris to take us through this, and I look forward to all your inputs into the discussions today. Thank you so much.


Chris Martin: Thanks, Ahmad. Well, everyone, I’ll walk through, I think, a little bit of this presentation here on what DCO’s view is on ethical AI governance. And then my colleague, Matt Sharp, will walk us through the tool itself. And as Ahmad previewed, we’ll then break out to do a little bit of scenarios and get a chance to play with the tool yourselves. I think the first question is, why is this important for DCO? Well, it’s a big deal, I think, everywhere. And DCO, working with members and other stakeholders, wants to really be an active member at the forefront of this debate. So these are two of the objectives to kind of start. And then the tool can be seen as one way to instill alignment and interoperability between regulatory frameworks. I think we all recognize there’s a real wide divergence right now in AI readiness and regulatory approach. And once you start to see that, actually proposing impactful, actionable initiatives is critical; DCO feels that’s important. And lastly, facilitating interactive dialogues like the one we’re here today to have. So, a bit deeper on what a human rights approach looks like in AI governance for DCO. Well, it starts with four things. First, looking to prioritize protection and promotion of human rights; name of the session, that’s why we’re here. Second, to design and uphold human dignity, privacy, equity, and freedom from discrimination. Third, to create systems that are transparent, accountable, and inclusive, and ones that don’t exacerbate inequalities. And lastly, to ensure advancement and contribute to the common good while mitigating all the potential harms I think we’re starting to see evolve with AI. So the toolkit that we’re developing will take a human rights-centered approach across four different areas. Again, looking at inclusive design and ensuring that there’s participation from diverse communities, especially marginalized ones. It will look to integrate human rights principles like dignity, equality, non-discrimination, and privacy at each stage of the AI lifecycle. It will seek to recommend the use of human rights impact assessments as a way to get in front of AI deployments and ensure that you mitigate those potential problems early. And then lastly, promote transparency, looking at disclosure of how AI makes its decisions. Taking a little bit of a step back, and I think illustrating the moment we’re in: AI diffusion is pretty uneven across the world. This looks at the market for AI concentrated across Asia Pacific and North America and Europe to a greater degree, but still a lot of opportunity for growth in the Middle East and North Africa, where currently a lot of DCO member states reside. So this is an important moment to get involved at an early stage. On the governance side, DCO sees really seven different areas where global best practice can be leveraged to advance AI governance. The first looks at institutional mechanisms. Typically these involve how nation states govern artificial intelligence within their jurisdictions. Do they develop an AI regulator? Do they do it sector by sector? These are questions that are live at the moment across every country. How are they going to plan for that at the government level? Is there an AI strategy or an AI policy that helps dictate the different stages? And then beyond AI specifically, where are they in policy readiness? 
Cybersecurity frameworks, privacy frameworks, intellectual property, the whole range of different areas that impact AI and are important to consider. And then shifting beyond just the government-specific pieces, how do you build an innovation ecosystem? On the government side, can you foster investment and entrepreneurship in AI? But also, how do you build a culture around that? And how do you do that in a way that also brings in that diversity of participants and voices? That’s really critical to getting it right. The seventh area is, or sixth area, I’m sorry, is future-proofing the population. And by this we mean getting a population ready for AI. There are going to be displacements in the workforce, there are going to be educational requirements, and countries have to address those as they build these into their societies. And then lastly, international cooperation is so fundamental; I think that’s why we’re all here at IGF today. And there are a lot of different processes that are underway to allow international collaboration to happen, and being a part of that is important. I think some of the findings across DCO member states are interesting in the sense that it’s, I think, a unique pairing of different types of nation-states. And we see that it has a lot of varying levels of AI governance across it. That’s not to be unexpected when you have both regionally diverse and economically diverse countries within a single group. And that’s, I think, reflective of the case that we face globally. That feeds into the diverse definitions and approaches to AI, and it also feeds into the potential for further engagement and international cooperation, both within the DCO’s membership itself, but also in events and engagements like the one we’re doing. So there’s a view that we are building around the generic ethical considerations of AI, but part of our conversation today is to help us think about this: are we getting it right? And there are, right now, very limited recommendations and practical implications to address human rights in DCO member states. And so this tool and this exercise is part of creating that for DCO and potentially beyond. I’m going to walk through these ethical principles very quickly, and then I’m going to pass it to my colleague, Matt, to pick up the tool itself. But the ethical principles that govern this tool are six-fold. The first, dealing with accountability and oversight. We want to ensure there’s clear responsibility for AI decision-making, addressing those gaps in things like verification, audit trails, incident response. We’ll want to look at transparency and explainability, as already discussed. Things that promote clarity in how you make these decisions is important, and you don’t want the complexity to undermine a user’s understanding. We’ve got fairness and non-discrimination as our third principle. Protecting against bias, unfair treatment and outcomes, and mitigating demographic disparities in how these systems perform. Fourth will be privacy. We all care about our privacy, and we’re all concerned about this, as our uses of different technologies now feed the AI ecosystem. We want to make sure there are those personal safeguards in place, and a respect for privacy rights. Fifth, around sustainability and environmental impact. 
I was on the panel right before this one in this room, and they talked about how AI is going to require the equivalent of another Japan in terms of energy use and consumption, and that’s going to put a strain on resources, so we’ve got to address that. And the development of AI has to comport with environmental goals. And then lastly, it’s got to be human-centered. It’s got to be looking at social benefit and ensuring that it’s meeting societal needs while respecting individuality, and aligning these capabilities with those needs. So with that, I’m going to pass it to Matt. He can walk you through the tool itself in a little further detail, and then we’ll pick up the exercise.


Matthew Sharp: Hi everyone. I’m Matt Sharp, a senior manager at Access Partnership. Yeah, so the six principles are based on extensive research of frameworks around the world, which we tried to distill into these six areas to focus on. And this is a brief description of the tool that we’ve developed, which is still in its prototype phase. But the idea is that this will be available online and publicly accessible for everyone. The tool provides a detailed risk questionnaire, which is different for both developers and deployers. And there are questions to ascertain both the severity and likelihood of risks related to human rights. And based on the way that the questions are answered, there will be an interactive radar graph, which basically helps the user prioritize their actions. An average score will be calculated for each risk area, and this will lead to actionable recommendations being given based on the specific way that the questions were answered. OK, if you go to the next slide. Yeah, and so the tool is designed to be comprehensive, practical, and interactive. It takes a human rights-first approach, which maps AI systems to universal human rights, and it’s designed to be very practical, so it accommodates various AI systems across diverse industries, and organizations will get comprehensive risk insights and practical guidance on how to mitigate risks related to human rights. As for how our tool compares, there are a few other frameworks and tools related to ours. A lot of these are developed by national governments and tend to focus on their own national contexts, for example the UAE and the New Zealand frameworks. A lot of them focus on verification of actions rather than risk assessments. And a few of the existing tools focus only on AI developers and not AI deployers as well. And generally ours is the one that’s most focused on human rights. So we think this tool offers a unique contribution to advance ethical AI. So, I mean, I already talked about this, but basically the way that the tool works, the workflow: users will register on the website, they’ll provide some information about their AI systems and their industries, and they’ll complete the questionnaire covering six risk categories. The questions will be different for developers and deployers. And then, based on how the risks are assessed, they will see this risk radar chart identifying priority areas for action, and they’ll receive advice on targeted mitigation strategies. OK, so this is our framework that underlies the tool. Basically, the diagram shows the principles at the top, and underneath those are more specific risks related to those principles. For example, in the case of privacy, there’s a focus on data protection and unauthorized collection and processing of sensitive information. Related to accountability and oversight, the risks there are insufficient involvement of humans and inadequate incident handling, for example. Then there are detailed recommendations below this, though there’s no one-to-one mapping between the principles and the recommendation areas. When a risk category is high risk, it’ll quite often be the case that there’ll be specific recommendations related to each of these risk areas. These cover data management, validation, and testing of AI systems. The integration of stakeholder feedback is, of course, very important as well. Then just to say that there are two distinct stakeholder groups in the AI lifecycle, the developers and deployers. 
Of course, they will receive slightly different questionnaires. Developers are, of course, focusing on the design and the construction of AI systems. They need to predict AI risks in advance. They need to think about technical architecture. Deployers, of course, are focused on the implementation of these AI systems, and for them the focus is on operational vulnerabilities and actual impacts on users and stakeholders. Yeah, I mean, this slide is perhaps a bit detailed, but just to say that for each of the recommendation categories, because of their different positions in the AI lifecycle, slightly different advice is given to developers and deployers, but the six recommendation categories are consistently used for both. So yeah, if you wouldn’t mind just using the QR code to answer a couple of quick questions. So, yeah, once you’ve answered those two questions, we have a breakout activity which is designed to help you understand the logic of the AI ethics tool that we’ve developed. So, Kevin, I think, will be handing out worksheets that you can fill in. There’ll be different AI risk scenarios, and the idea here is to review the framework that we presented for the AI ethics evaluator tool, and then identify two ethical risks related to the scenario that you’ve been given, and then do a risk scoring exercise where you score both the severity and likelihood of the risks you’ve identified. So, you can pick two of the principles that are relevant for your particular scenario. You score the severity and likelihood; their definitions are on the worksheets. You calculate a combined score, an impact score, for each risk, and then you’re able to rank them from most to least critical, and then you develop actionable recommendations, trying to come up with two recommendations for the two risks for the developer. And this whole exercise should take 15 minutes.


Ahmad Bhinder: Well, sorry to put you through this. We intended to make this an interactive discussion, and we really wanted, selfishly, to get your inputs on brainstorming some of these scenarios. So I do apologize in advance to the organizers for mixing up the chairs, but I think we should convert how we are sitting into, I think, three breakout groups and have the discussion. Please move your chairs around, and let’s go through this exercise so we can have a more interactive discussion. We are well within the time for this session; we have half an hour to go. So for 15 minutes, let’s go through this exercise, and then we would love to hear your thoughts on this. Thank you.


Chris Martin: And guys, I know this seems daunting. It is not. I promise, I did it myself last week. It’s actually kind of fun. And it gives you a real sense of how to actually start putting yourself in the mindset of assessing AI risk. So we were thinking maybe this side of the room could be one group, and then maybe split this side of the room in two: those of you in the back, one group, and then those of you up front here, another. We’ve got these sheets that my colleague Kevin is going to start passing out. I think we’ll hand out one set on this side and then one set there and one set here. And I’m happy to go around and check in with you guys as we take this forward and see how we can actually pull this together. Yeah. Thank you. [Breakout group discussions] A high likelihood, high risk. The discriminatory impact on vulnerable groups, same thing. And I think, kind of working backwards then, the recommendations we had were: you’re going to need valid testing of these thresholds to understand what is going to be correct for your platform. So validation and testing is going to be one remediation measure, and then a continuous evolution to improve that. For the harmful or the inadequate human verification, we saw that you have to have a human in the loop. And then for the last one, we really… Actually, I think that’s all we have. Thank you very much. Sorry for this. We have 30 minutes, and I’ll give my mic to Mr. Thiago. Please, yeah.


Thiago Moraes: So yeah, well, our case is the use of AI in diagnosis systems for critical rare diseases, right?


Chris Martin: So, the first risk was related to explainability, since we’re talking about rare diseases and inaccurate answers can cause issues here. There were also discriminatory issues; more specifically, we talked about gender-based discrimination if the population statistics are not well used, and privacy risks, like data leaks of very highly sensitive data. For scoring, we gave the first one, explainability, a six in the end, and the one for discrimination a four. For privacy, which we think is the most sensitive here, we gave a nine, because from a leak many other issues may happen. And following the DCO recommendations, we suggest for explainability enhancing comprehensive documentation, so documentation and reporting; for the discriminatory impact, validating and testing; and for privacy, data management.


Ahmad Bhinder: Thank you so much. And again, let’s have 30 seconds here. Are you going to?


Chris Martin: Okay, so our scenario is we have a multinational corporation that is deploying an AI system for screening job applications, and to do that they are using historical data to rank the candidates based on predicted performance. So for us, we thought it’s a risk on discrimination, because we’re looking at it from the perspective that historically, the people who worked in the engineering field were men, and mostly white men, and now you’re using that historical data to make an assessment on people who may be applying who look like me. So already we said fairness and non-discrimination, that’s a risk, especially discriminatory impact on vulnerable groups. And the performance, the scoring, was quite high: likelihood three, severity three, everything quite high. And then here’s one. Thank you. Thank you so much.


Ahmad Bhinder: Before we are kicked out, I would pass the mic to Alaa Abdulaal. She is our Chief of Digital Economy Foresight at the DCO, for some closing remarks. And sorry to rush through the whole thing.


Alaa Abdulaal: So hello, everyone. I was honored to join the session, and I have seen a lot of amazing conversations. At DCO, we are really, as our name says, the Digital Cooperation Organization. We believe in a multi-stakeholder approach, and we believe that this is the only approach that will help in the acceleration of digital transformation. And the topic of the ethical use of AI is an important topic, because AI is now one of the main emerging technologies offering a lot of advancement and efficiency in the digital transformation of government and different sectors. This is why it was very important for us, as the Digital Cooperation Organization, to provide an actionable solution to help countries and even developers have the right tool to make sure that whatever systems are being deployed have the right risk assessment from a human rights perspective, and to have that tool available for everyone. And this is why we wanted to have this session: to get the feedback, to really understand if what we are developing is on the right track. And thank you so much for being here and allocating the time and effort to join this discussion and provide your valuable inputs. And we are looking forward to sharing with you the final deliverable and the ethical tool, hopefully soon. And hopefully together we are building a future to enable digital prosperity for all. Thank you very much for your time and for being here.


Chris Martin: Thanks, everybody. We also just put this up. If you want to provide feedback, we certainly welcome it on this session. Take a picture. It shouldn’t take long. And thanks, all. We really appreciate your participation.



Chris Martin

Speech speed

123 words per minute

Speech length

2136 words

Speech time

1034 seconds

AI is now a societal challenge, not just a technical one

Explanation

Chris Martin emphasizes that AI has evolved beyond being merely a technical issue and now impacts society as a whole. He stresses the importance of addressing the societal implications of AI systems.


Evidence

Every decision that AI systems make, what they power, are going to impact and shape our lives, how we work, and how we interact.


Major Discussion Point

Major Discussion Point 1: The importance of ethical AI governance


Agreed with

Ahmad Bhinder


Matthew Sharp


Agreed on

Importance of ethical AI governance


Need to develop frameworks that anticipate risks and seize AI’s potential for good

Explanation

Martin argues for the development of comprehensive frameworks to address potential risks associated with AI while also harnessing its positive potential. He emphasizes the need for a balanced approach in AI governance.


Evidence

Together we need to develop frameworks that both anticipate the risks inherent in these systems, but also seize the transformative potential of AI for global good.


Major Discussion Point

Major Discussion Point 1: The importance of ethical AI governance


Agreed with

Ahmad Bhinder


Matthew Sharp


Agreed on

Importance of ethical AI governance


AI diffusion is uneven globally, creating an opportunity to get involved early

Explanation

Martin points out that AI adoption is not uniform across the world, with some regions lagging behind. He suggests this presents an opportunity for early involvement in shaping AI governance in these areas.


Evidence

This looks at the market for AI concentrated across Asia Pacific and North America and Europe to a greater degree, but still a lot of opportunity for growth in the Middle East and North Africa, where currently a lot of DCO member states reside.


Major Discussion Point

Major Discussion Point 1: The importance of ethical AI governance


Accountability and oversight in AI decision-making

Explanation

Martin emphasizes the importance of clear responsibility and oversight in AI decision-making processes. He highlights the need for mechanisms to ensure accountability in AI systems.


Evidence

We want to ensure there’s clear responsibility for AI decision-making, addressing those gaps in things like verification, audit trails, incident response.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Transparency and explainability of AI systems

Explanation

Martin stresses the need for AI systems to be transparent and explainable. He argues that the complexity of AI should not undermine users’ understanding of how decisions are made.


Evidence

Things that promote clarity in how you make these decisions is important, and you don’t want the complexity to undermine a user’s understanding.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Fairness and non-discrimination in AI outcomes

Explanation

Martin highlights the importance of ensuring fairness and preventing discrimination in AI outcomes. He emphasizes the need to protect against bias and unfair treatment in AI systems.


Evidence

Protecting against bias, unfair treatment and outcomes, and mitigating demographic disparities in how these systems perform.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Privacy protection and data safeguards

Explanation

Martin emphasizes the importance of privacy protection in AI systems. He argues for the implementation of personal safeguards and respect for privacy rights in the AI ecosystem.


Evidence

We want to make sure there are those personal safeguards in place, and a respect for privacy rights.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Sustainability and environmental impact considerations

Explanation

Martin highlights the need to consider the environmental impact of AI systems. He points out the significant energy consumption associated with AI and the need to align AI development with environmental goals.


Evidence

They talked about how AI is going to require the equivalent of another Japan in terms of energy use and consumption, and that’s going to put a strain on resources, so we’ve got to address that.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Human-centered design focused on social benefit

Explanation

Martin emphasizes the importance of human-centered AI design that prioritizes social benefits. He argues that AI should meet societal needs while respecting individual rights and aligning with human values.


Evidence

It’s got to be looking at social benefit and ensuring that it’s meeting societal needs while respecting individuality, and aligning these capabilities with those needs.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI



Ahmad Bhinder

Speech speed

136 words per minute

Speech length

695 words

Speech time

305 seconds

DCO is taking a human rights-centered approach to AI governance

Explanation

Ahmad Bhinder explains that the Digital Cooperation Organization (DCO) is focusing on ethical AI governance from a human rights perspective. They are developing tools and frameworks to ensure AI systems respect and protect human rights.


Evidence

We wanted to look at it from a human rights perspective. So we identified which human rights are most impacted by artificial intelligence, and then we reviewed, across our membership and across the globe, how AI policies, regulation, and governance intersect with those human rights.


Major Discussion Point

Major Discussion Point 1: The importance of ethical AI governance


Agreed with

Chris Martin


Matthew Sharp


Agreed on

Importance of ethical AI governance



Matthew Sharp

Speech speed

109 words per minute

Speech length

890 words

Speech time

485 seconds

Tool provides risk questionnaires for both AI developers and deployers

Explanation

Matthew Sharp describes the DCO’s AI ethics evaluation tool, which includes separate questionnaires for AI developers and deployers. This approach recognizes the different roles and responsibilities in the AI lifecycle.


Evidence

The tool provides a detailed risk questionnaire, which is different for both developers and deployers.


Major Discussion Point

Major Discussion Point 3: DCO’s AI ethics evaluation tool


Agreed with

Ahmad Bhinder


Thiago Moraes


Agreed on

Need for practical tools to assess AI risks


Assesses severity and likelihood of human rights risks

Explanation

Sharp explains that the tool evaluates both the severity and likelihood of human rights risks associated with AI systems. This comprehensive assessment helps users understand the potential impact of their AI applications.


Evidence

And there are questions to ascertain both the severity and likelihood of risks related to human rights.


Major Discussion Point

Major Discussion Point 3: DCO’s AI ethics evaluation tool


Agreed with

Ahmad Bhinder


Thiago Moraes


Agreed on

Need for practical tools to assess AI risks


Generates interactive visualizations to help prioritize actions

Explanation

The tool creates interactive visualizations, such as radar graphs, to help users prioritize their actions. This feature aids in identifying the most critical areas for improvement in AI systems.


Evidence

There will be an interactive radar graph, which basically helps the user prioritize their actions.


Major Discussion Point

Major Discussion Point 3: DCO’s AI ethics evaluation tool


Agreed with

Ahmad Bhinder


Thiago Moraes


Agreed on

Need for practical tools to assess AI risks


Offers practical, actionable recommendations based on risk assessment

Explanation

Sharp highlights that the tool provides specific, actionable recommendations based on the risk assessment results. These recommendations are tailored to the user’s responses and help guide improvements in AI systems.


Evidence

And this will lead to actionable recommendations being given based on the specific way that the questions were answered.


Major Discussion Point

Major Discussion Point 3: DCO’s AI ethics evaluation tool


Agreed with

Ahmad Bhinder


Thiago Moraes


Agreed on

Need for practical tools to assess AI risks



Thiago Moraes

Speech speed

109 words per minute

Speech length

18 words

Speech time

9 seconds

Participants engaged in scenario-based risk assessment exercise

Explanation

Thiago Moraes describes a practical exercise where participants applied the AI ethics tool to specific scenarios. This hands-on approach allowed attendees to understand the tool’s functionality and the process of ethical risk assessment in AI.


Evidence

Our case is the use of AI in diagnosis systems for critical rare diseases.


Major Discussion Point

Major Discussion Point 4: Practical application of the AI ethics tool


Identified ethical risks in AI systems for medical diagnosis and job application screening

Explanation

Participants identified various ethical risks in AI systems used for medical diagnosis and job application screening. This exercise highlighted the diverse range of potential ethical issues in different AI applications.


Evidence

The first risk was related to explainability, since we’re talking about rare diseases and inaccurate answers can cause issues here. There were also discriminatory issues; more specifically, we talked about gender-based discrimination if the population statistics are not well used, and privacy risks, like data leaks of very highly sensitive data.


Major Discussion Point

Major Discussion Point 4: Practical application of the AI ethics tool


Scored risks based on severity and likelihood

Explanation

Participants scored the identified risks based on their severity and likelihood. This quantitative approach helps prioritize which ethical issues need the most urgent attention.


Evidence

For scoring, we gave the first one, explainability, a six in the end, and the one for discrimination a four. For privacy, which we think is the most sensitive here, we gave a nine, because from a leak many other issues may happen.


Major Discussion Point

Major Discussion Point 4: Practical application of the AI ethics tool


Developed actionable recommendations to mitigate identified risks

Explanation

Participants developed actionable recommendations to address the identified risks. This step demonstrates how the tool can guide users towards practical solutions for ethical AI implementation.


Evidence

Following the DCO recommendations, we suggest for explainability enhancing comprehensive documentation, so documentation and reporting; for the discriminatory impact, validating and testing; and for privacy, data management.


Major Discussion Point

Major Discussion Point 4: Practical application of the AI ethics tool



Alaa Abdulaal

Speech speed

139 words per minute

Speech length

250 words

Speech time

107 seconds

DCO believes in a multi-stakeholder approach to digital transformation

Explanation

Alaa Abdulaal emphasizes DCO’s commitment to a multi-stakeholder approach in addressing digital transformation challenges. This approach involves collaboration between various sectors and stakeholders to ensure comprehensive solutions.


Evidence

At DCO, we are really, as our name says, the Digital Cooperation Organization. We believe in a multi-stakeholder approach.


Major Discussion Point

Major Discussion Point 5: DCO’s approach and future plans


Aims to provide actionable solutions for ethical AI deployment

Explanation

Abdulaal highlights DCO’s goal of developing practical, actionable solutions to support ethical AI deployment. This includes tools and frameworks that can be used by countries and developers to assess and mitigate risks in AI systems.


Evidence

This is why it was very important for us, as the Digital Cooperation Organization, to provide an actionable solution to help countries and even developers have the right tool to make sure that whatever systems are being deployed have the right risk assessment from a human rights perspective.


Major Discussion Point

Major Discussion Point 5: DCO’s approach and future plans


Plans to share the final AI ethics tool publicly

Explanation

Abdulaal announces DCO’s intention to make their AI ethics evaluation tool publicly available. This commitment to open access aims to promote widespread adoption of ethical AI practices.


Evidence

And we are looking forward to sharing with you the final deliverable and the ethical tool, hopefully soon.


Major Discussion Point

Major Discussion Point 5: DCO’s approach and future plans


Seeks to enable digital prosperity for all through cooperation

Explanation

Abdulaal emphasizes DCO’s overarching goal of promoting digital prosperity for all through international cooperation. This vision underscores the organization’s commitment to inclusive and ethical digital development.


Evidence

And hopefully together we are building a future to enable digital prosperity for all.


Major Discussion Point

Major Discussion Point 5: DCO’s approach and future plans


Agreements

Agreement Points

Importance of ethical AI governance

speakers

Chris Martin


Ahmad Bhinder


Matthew Sharp


arguments

AI is now a societal challenge, not just a technical one


Need to develop frameworks that anticipate risks and seize AI’s potential for good


DCO is taking a human rights-centered approach to AI governance


summary

All speakers emphasized the critical need for ethical AI governance, focusing on societal impacts and human rights considerations.


Need for practical tools to assess AI risks

speakers

Ahmad Bhinder


Matthew Sharp


Thiago Moraes


arguments

Tool provides risk questionnaires for both AI developers and deployers


Assesses severity and likelihood of human rights risks


Generates interactive visualizations to help prioritize actions


Offers practical, actionable recommendations based on risk assessment


summary

The speakers agreed on the importance of developing and using practical tools to assess and mitigate AI-related risks, particularly in relation to human rights.


Similar Viewpoints

Both speakers emphasized the importance of key ethical principles in AI governance, including accountability, transparency, and fairness.

speakers

Chris Martin


Matthew Sharp


arguments

Accountability and oversight in AI decision-making


Transparency and explainability of AI systems


Fairness and non-discrimination in AI outcomes


Unexpected Consensus

Environmental impact of AI

speakers

Chris Martin


arguments

Sustainability and environmental impact considerations


explanation

While most discussions focused on societal and ethical impacts, Chris Martin unexpectedly highlighted the significant environmental concerns related to AI energy consumption, which wasn’t echoed by other speakers but is an important consideration.


Overall Assessment

Summary

The speakers demonstrated strong agreement on the importance of ethical AI governance, the need for practical assessment tools, and the focus on human rights in AI development and deployment.


Consensus level

High level of consensus among speakers, particularly on the need for human-centric, ethical AI governance. This agreement implies a shared vision for the future of AI regulation and development, which could facilitate more coordinated and effective approaches to addressing AI-related challenges.


Differences


Overall Assessment

summary

There were no significant areas of disagreement identified among the speakers.


difference_level

The level of disagreement was minimal to non-existent. The speakers presented a unified approach to ethical AI governance, focusing on human rights, practical tools for risk assessment, and multi-stakeholder collaboration. This alignment suggests a cohesive strategy within the DCO for addressing ethical challenges in AI development and deployment.



Takeaways

Key Takeaways

AI governance is now a critical societal challenge requiring ethical frameworks and human rights protections


The Digital Cooperation Organization (DCO) is developing an AI ethics evaluation tool focused on human rights


The tool assesses risks for both AI developers and deployers across six key ethical principles


Practical application of ethical AI principles requires careful risk assessment and mitigation strategies


A multi-stakeholder, cooperative approach is essential for responsible AI development and deployment


Resolutions and Action Items

DCO to finalize and publicly release their AI ethics evaluation tool


Participants to provide feedback on the session and tool prototype via the provided QR code


Unresolved Issues

Specific implementation details of the AI ethics tool across different contexts and industries


How to address the uneven global diffusion of AI technologies and governance frameworks


Balancing innovation with ethical considerations in AI development


Suggested Compromises

None identified


Thought Provoking Comments

Every decision that AI systems make, what they power, are going to impact and shape our lives, how we work, and how we interact. And the stakes are monumental. They demand that we get this right, and at the same time, key questions remain. Most especially, how do we ensure that AI is both a powerful tool, but also ethical, responsible, and human-centric?

speaker

Chris Martin


reason

This comment sets the stage for the entire discussion by emphasizing the far-reaching impact of AI and the critical importance of ethical governance.


impact

It framed the subsequent conversation around the ethical implications of AI and the need for responsible development and deployment.


We wanted to look at it from a human rights perspective. So we identified which human rights are most impacted by artificial intelligence, and then we reviewed, across our membership and across the globe, how AI policies, regulation, and governance intersect with those human rights, and what needs to be done to ensure that we have a human rights-protective, ethical AI governance approach.

speaker

Ahmad Bhinder


reason

This comment introduces a unique approach to AI governance by centering it on human rights, which is not commonly seen in other frameworks.


impact

It shifted the focus of the discussion towards considering AI’s impact on specific human rights, leading to a more nuanced conversation about ethical AI governance.


AI diffusion is pretty uneven across the world. This looks at the market for AI concentrated across Asia Pacific and North America and Europe to a greater degree, but still a lot of opportunity for growth in the Middle East and North Africa, where currently a lot of DCO member states reside.

speaker

Chris Martin


reason

This observation highlights the global disparities in AI development and adoption, bringing attention to the need for inclusive approaches.


impact

It broadened the scope of the discussion to consider the global context and the importance of supporting AI development in regions that are currently underrepresented.


As for how our tool compares, there are a few other frameworks and tools related to ours. A lot of these are developed by national governments and tend to focus on their own national contexts, for example the UAE and the New Zealand frameworks. A lot of them focus on verification of actions rather than risk assessments. And a few of the existing tools focus only on AI developers and not AI deployers as well. And generally ours is the one that’s most focused on human rights.

speaker

Matthew Sharp


reason

This comment provides a comparative perspective on existing AI governance tools, highlighting the unique features of the DCO’s approach.


impact

It helped participants understand the distinctive aspects of the DCO’s tool, particularly its focus on human rights and inclusion of both developers and deployers.


Overall Assessment

These key comments shaped the discussion by establishing the critical importance of ethical AI governance, introducing a human rights-centered approach, highlighting global disparities in AI development, and differentiating the DCO’s tool from existing frameworks. They collectively steered the conversation towards a more comprehensive, globally-aware, and human-centric consideration of AI ethics and governance.


Follow-up Questions

How can we ensure AI systems are transparent and their decision-making processes are explainable?

speaker

Chris Martin


explanation

Transparency in AI decision-making is crucial for building trust and ensuring accountability.


What are the best practices for conducting human rights impact assessments for AI systems?

speaker

Chris Martin


explanation

Human rights impact assessments are important for mitigating potential problems early in AI deployment.


How can countries address workforce displacement and educational requirements resulting from AI adoption?

speaker

Chris Martin


explanation

Preparing populations for AI-driven changes in the job market is crucial for future-proofing societies.


What are effective strategies for fostering investment and entrepreneurship in AI while ensuring diversity and inclusivity?

speaker

Chris Martin


explanation

Building a diverse and inclusive AI innovation ecosystem is critical for ethical AI development.


How can we address the increasing energy consumption requirements of AI systems to align with environmental goals?

speaker

Chris Martin


explanation

The growing energy demands of AI pose significant environmental challenges that need to be addressed.


What are the most effective ways to integrate stakeholder feedback in AI system development and deployment?

speaker

Matthew Sharp


explanation

Incorporating diverse perspectives is crucial for developing ethical and human-centered AI systems.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.