Networking Session #60 Risk & impact assessment of AI on human rights & democracy

16 Dec 2024 14:00h - 15:00h

Session at a Glance

Summary

This panel discussion focused on assessing AI risks and impacts, with an emphasis on safeguarding human rights and democracy in the digital age. The speakers represented various organizations involved in AI governance, including government agencies, standards bodies, research institutions, and advocacy groups.


David Leslie introduced the Human Rights, Democracy and Rule of Law Impact Assessment (HUDERIA) methodology recently adopted by the Council of Europe. This approach aims to provide a structured framework for evaluating AI systems’ impacts on human rights and democratic values. Several speakers highlighted the importance of flexible, context-aware approaches to AI risk management that can be tailored to specific use cases.


Representatives from standards organizations like ISO/IEC and IEEE discussed their work on developing AI standards and certification processes to promote responsible AI development. Government officials from Japan and the US shared insights on their national AI governance initiatives and how these align with international frameworks. The importance of stakeholder engagement, skills development, and ecosystem building was emphasized by multiple speakers.


Industry perspectives were provided by LG AI Research, which outlined its approach to implementing AI ethics principles throughout the AI lifecycle. The role of NGOs in advocating for strong AI governance and bringing public voices into policy discussions was highlighted by the Center for AI and Digital Policy.


Overall, the discussion underscored the need for collaborative, multi-stakeholder efforts to develop effective AI governance frameworks that protect human rights and democratic values while fostering innovation. The speakers agreed on the importance of proactive approaches to identifying and mitigating AI risks as the technology continues to advance rapidly.


Keypoints

Major discussion points:


– The development and adoption of AI governance frameworks and risk assessment methodologies, like the Council of Europe’s HUDERIA


– The role of standards organizations and governments in creating AI governance guidelines and policies


– The importance of stakeholder engagement, skills development, and ecosystem building in AI governance


– Approaches to operationalizing human rights considerations in AI development and deployment


– The contributions of NGOs and civil society in advocating for responsible AI and human rights protections


Overall purpose:


The goal of this discussion was to explore various international and organizational approaches to AI governance, risk assessment, and human rights protection in the context of AI development and use. Speakers shared insights from government, industry, standards bodies, and NGOs on frameworks and best practices for responsible AI.


Tone:


The tone was largely collaborative and optimistic, with speakers building on each other’s points and emphasizing the importance of working together across sectors and borders. There was a sense of urgency about the need to develop robust governance frameworks, but also confidence in the progress being made. The tone remained consistent throughout, focusing on constructive approaches and shared goals.


Speakers

– David Leslie: Director of Ethics and Responsible Research Innovation at the Alan Turing Institute, Professor of Ethics, Technology, and Society at Queen Mary University of London


– Wael William Diab: Chair of ISO/IEC JTC 1/SC 42 (AI standardization)


– Tetsushi Hirano: Deputy Director of the Digital Policy Office at the Japanese Ministry of Internal Affairs and Communications


– Matt O’Shaughnessy: Senior Advisor at the U.S. Department of State’s Bureau of Democracy, Human Rights, and Labor


– Clara Neppel: Senior Director at IEEE


– Myoung Shin Kim: Principal Policy Officer at LG AI Research, IEEE Certified AI Professional


– Heramb Podar: Center for AI and Digital Policy (CAIDP), Executive Director of ENCODE India


Additional speakers:


– Smera Jayadeva: Researcher at the Alan Turing Institute


Full session report

AI Governance and Human Rights: A Multi-Stakeholder Approach


This panel discussion brought together experts from various sectors to explore approaches to AI governance, risk assessment, and human rights protection in the context of AI development and deployment. The speakers represented government agencies, standards bodies, research institutions, and advocacy groups, providing a comprehensive overview of current efforts and challenges in responsible AI development.


Key Frameworks and Methodologies


The discussion began with David Leslie introducing the Human Rights, Democracy and Rule of Law Impact Assessment (HUDERIA) methodology recently adopted by the Council of Europe. This framework aims to provide a structured approach for evaluating AI systems’ impacts on human rights and democratic values. Leslie described HUDERIA as “a unique anticipatory approach to the governance of the design, development, and deployment of AI systems” anchored in four fundamental elements. He also noted the Japanese government’s support for the Council of Europe’s work on an AI convention.
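
The four elements Leslie describes later in the transcript (a context-based risk analysis, a stakeholder engagement process, a risk and impact assessment, and mitigation planning, revisited iteratively) can be pictured as a staged, repeatable pipeline. The sketch below is purely illustrative and is not part of the Council of Europe's materials; every class, function, and field name is a hypothetical stand-in.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemContext:
    """Socio-technical context in which an AI system is embedded (illustrative)."""
    purpose: str
    deployment_setting: str
    affected_groups: list = field(default_factory=list)

def context_based_risk_analysis(ctx):
    """Element 1 (COBRA): initial information gathering plus triage of how
    involved the governance process needs to be, given the risks."""
    # Toy triage rule for illustration only.
    return "full_assessment" if ctx.affected_groups else "light_touch"

def stakeholder_engagement(ctx):
    """Element 2: engage impacted communities to contextualise and
    corroborate potential harms."""
    return [f"consultation notes: {group}" for group in ctx.affected_groups]

def risk_and_impact_assessment(ctx, engagement_notes):
    """Element 3: fuller assessment of impacts on human rights, democracy,
    and the rule of law, integrating stakeholder consultation."""
    return {"context": ctx.purpose, "impacts": engagement_notes}

def mitigation_planning(assessment):
    """Element 4: mitigation and remedial measures, including access to remedy."""
    return [f"mitigation plan for: {impact}" for impact in assessment["impacts"]]

def huderia_cycle(ctx):
    """One pass through the four elements; HUDERIA stresses revisiting them
    as the system and its social, legal, and political context change."""
    if context_based_risk_analysis(ctx) == "light_touch":
        return []
    notes = stakeholder_engagement(ctx)
    return mitigation_planning(risk_and_impact_assessment(ctx, notes))
```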


Other speakers presented complementary frameworks and standards:


1. Wael William Diab discussed ISO/IEC standards for AI systems, emphasising the importance of third-party certification and audits to ensure responsible adoption.


2. Tetsushi Hirano outlined the Japanese AI Guidelines for Business, which differentiate risk-analysis recommendations from the perspective of AI actors (developers, deployers, and users).


3. Matt O’Shaughnessy highlighted the NIST AI Risk Management Framework, emphasising its flexible and context-aware application (see the illustrative sketch after this list). He also discussed the White House Office of Management and Budget Memorandum, which provides guidance on AI use in the federal government.


4. Clara Neppel presented IEEE standards for ethically aligned AI design, focusing on building ecosystems to implement these standards. She also mentioned IEEE’s work on environmental impact assessment of AI.


5. Myoung Shin Kim shared LG AI Research’s approach to AI ethics and risk governance, which includes internal processes and education. Kim discussed the company’s EXAONE generative AI model and detailed its AI ethics implementation process.


6. Heramb Podar presented CAIDP’s advocacy work, including their Universal Guidelines on AI and efforts to promote ratification of AI treaties.
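
To make the idea of flexible, context-aware application concrete, the sketch below shows one way a sector- or rights-specific “profile” might extend a generic risk management framework, in the spirit of the NIST AI RMF profiles O’Shaughnessy describes in the transcript. The four function names follow the RMF’s Govern/Map/Measure/Manage structure, but the individual actions and the merge logic are invented for illustration.

```python
# Generic actions, organised by the NIST AI RMF's four functions.
GENERIC_ACTIONS = {
    "govern": ["assign accountability for AI outcomes",
               "document commission/deployment decisions"],
    "map": ["describe the context of use", "identify affected communities"],
    "measure": ["test for harmful bias", "evaluate privacy risks"],
    "manage": ["mitigate prioritised risks", "monitor deployed systems"],
}

# A hypothetical human-rights profile narrowing the framework to one salient
# context, e.g. an AI system that helps determine benefits eligibility.
HUMAN_RIGHTS_PROFILE = {
    "map": ["consult rights holders among affected communities"],
    "measure": ["assess disparate impacts on protected groups"],
    "manage": ["establish a mechanism for remedy"],
}

def tailor(generic, profile):
    """Merge profile-specific actions into the generic framework."""
    return {fn: actions + profile.get(fn, []) for fn, actions in generic.items()}

if __name__ == "__main__":
    for function, actions in tailor(GENERIC_ACTIONS, HUMAN_RIGHTS_PROFILE).items():
        print(function.upper(), "->", "; ".join(actions))
```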


Human Rights Considerations


A significant portion of the discussion focused on incorporating human rights considerations into AI development and governance. Key points included:


1. The importance of stakeholder engagement in AI impact assessments, with multiple speakers emphasising the need to involve affected communities.


2. Data quality standards for AI systems, as highlighted by Wael William Diab.


3. The need for detailed analysis of rights holders in impact assessments, as mentioned by Tetsushi Hirano.


4. Human rights impact assessments for government AI use, discussed by Matt O’Shaughnessy.


5. Incorporating human rights principles in AI standards, as emphasised by Clara Neppel.


6. Educating data workers on human rights, a focus for LG AI Research according to Myoung Shin Kim.


7. The role of NGOs in advocating for human rights in AI governance, highlighted by Heramb Podar.


International Cooperation and Implementation


The speakers agreed on the importance of international cooperation and interoperability between different AI governance frameworks. This was evident in discussions about:


1. The Council of Europe’s work on an AI convention, mentioned by David Leslie.


2. Efforts to ensure interoperability between AI frameworks, highlighted by Tetsushi Hirano.


3. How U.S. domestic AI policies inform international work, discussed by Matt O’Shaughnessy.


4. IEEE’s global network of AI ethics assessors, presented by Clara Neppel.


5. LG AI Research’s collaboration with UNESCO, shared by Myoung Shin Kim.


6. CAIDP’s advocacy for ratification of AI treaties, mentioned by Heramb Podar.


Practical Implementation Challenges


The discussion also addressed the practical challenges of implementing AI ethics principles:


1. Matt O’Shaughnessy emphasised the need for context-aware application of risk management frameworks.


2. Clara Neppel discussed the importance of building ecosystems to implement ethical AI standards.


3. Myoung Shin Kim outlined LG’s AI ethics impact assessment process and mentioned their upcoming annual report on AI ethics implementation.


4. Heramb Podar highlighted the need for clear prohibitions on high-risk AI use cases.


5. Several speakers noted the challenge of balancing innovation with responsible AI development.


Education and Public Engagement


Myoung Shin Kim from LG AI Research emphasized the importance of education in AI ethics implementation, discussing initiatives to educate data workers on human rights and to improve citizens’ AI literacy. While other speakers touched on stakeholder engagement, Kim’s presentation provided the most detailed discussion of education efforts.


Conclusion


The discussion underscored the need for collaborative, multi-stakeholder efforts to develop effective AI governance frameworks that protect human rights and democratic values while fostering innovation. The speakers presented a range of approaches and methodologies for responsible AI development, highlighting both progress and ongoing challenges in the field. As David Leslie noted in his closing remarks, the conversation demonstrated the complexity of the issues and the importance of continued dialogue and cooperation among diverse stakeholders in shaping the future of AI governance.


Session Transcript

David Leslie: Can everyone hear me? Smera, can you hear me? Hello? Hello? Yes? Yeah? Okay. Perfect.

Smera Jayadeva: If everyone’s ready, we can get started. I believe everyone’s joined us online. Perfect. Good evening, and thank you so much for joining us here today. We know it’s the last session, but I can promise you we have one more networking session worth your time, on assessing AI risks and impacts and safeguarding human rights and democracy in the digital age. We will be moderated by Professor David Leslie, who is the Director of Ethics and Responsible Research Innovation at the Alan Turing Institute and Professor of Ethics, Technology, and Society at Queen Mary University of London. He will be introducing the rest of us, but to everyone joining here today and online: my name is Smera Jayadeva. I’m a researcher at the Alan Turing Institute, and I’m very proud to say I helped develop and publish the human rights impact assessment framework that we’ve done with the Council of Europe. So now I’ll turn it to David to introduce the panel.

David Leslie: Great. Smera, can you hear me? Am I… Just give me an acknowledgement and I’ll keep going. Good? Okay. So thank you so much, Smera. I am very thrilled to be with you. Just to say, our team at the Turing has been really involved with this process dating back to 2020, when the Ad Hoc Committee on Artificial Intelligence was taking the initial steps towards the feasibility study that would come to inform what is now the Framework Convention, the treaty that aligns human rights, democracy, and the rule of law with AI. I’ll also say that the adoption of the Huderia methodology, which has just happened this past month, is really a historic moment in a time of change, when so much of the activity in the international AI governance ecosystem is yet to be decided. So this is really a path-breaking outcome, I would say. And I was just thinking about it: over the years, at the Council of Europe plenaries, we have really talked through governance measures. It was early 2021, I want to say, when we first took up a question about foundation models and frontier AI. You can just imagine the rich conversation about governance challenges that has been going on at the Council of Europe’s venue in Strasbourg for a number of years now. I’ll also quickly say that the Huderia itself, developed through the activities of the Committee on Artificial Intelligence and all the member states and observer states, really is a unique anticipatory approach to the governance of the design, development, and deployment of AI systems, one that anchors itself in four fundamental elements. First, we’ve got a context-based risk analysis, which provides a structured approach to initially collecting the information needed to understand the risks of AI systems, in particular the risks they pose to human rights, democracy, and the rule of law. It focuses on what we call the socio-technical context, the social environments in which the technology is embedded. It also allows for an initial determination of whether the system is the right approach at all, and it provides a mechanism for triaging more or less involved governance processes in light of the risks of the systems.
There is also a stakeholder engagement process, which proposes an approach to enable engagement, as appropriate, with relevant stakeholders, so impacted communities, in order to amplify the voices of those who are affected and to gain information regarding how they might view the impacts, and in particular to contextualize and corroborate potential harms. Then there’s the third module, if you will, the risk and impact assessment, which is a more full-blown process to assess the risks and impacts related to human rights, democracy, and the rule of law, in ways that integrate stakeholder consultation but also really ask the “how” questions and try to think through downstream effects much more fully. And then finally, there’s a mitigation planning element, which provides steps for mitigation and remedial measures that allow for access to remedy and iterative review. As a whole, the Huderia also stresses the need for iterative revisitation of all of these processes and elements, insofar as the innovation environment, the way that systems are designed, developed, and deployed, is very dynamic and changing, and the broader social, legal, economic, and political contexts are always changing too. Those changes mean that we need to be flexible and continually revisit how we’re looking at the governance process for any given system. So with that, let me now introduce our first panel speaker, Mr. Wael William Diab, who is chair of ISO/IEC JTC 1/SC 42, a wonderful group of standards development committees doing great work on AI standards. He’ll address the role of AI standardization in safeguarding human rights and democracy, as well as cover some existing and upcoming standards on these issues. So I’ll turn it over to you, Will. Go ahead.


Wael William Diab: Thank you, David, and thank you for the warm introduction. I’d like to thank you also for the invitation to present on this panel. My name is Will, and as David mentioned, I chair the joint committee of ISO and IEC on artificial intelligence. I’m going to give you a brief flavor of what we do. Just to quickly acknowledge, it’s not just me that does this: we have a pretty large management team, and we’ll make all of these slides available, but in the interest of time I’m going to jump straight into what it is that we do. We take a look at the full ecosystem when it comes to AI. We start by looking at non-technical trends and requirements, whether it’s application domains, regulatory policy, or, what’s perhaps most relevant here, emerging societal requirements. Through that, we assimilate the context of use of the technologies we cover, and then we provide what we call horizontal and foundational projects on artificial intelligence. I’ll talk a little more about examples, but I want to point out that the story doesn’t stop there. We have lots of sister committees in IEC and ISO that focus on the application domains themselves and leverage our standards, and we work with open source communities and others. So we are part of the ISO and IEC families. Our scope is to be the focal point for the IT standardization of AI, and we help other sister committees in terms of looking at the application side. We’ve been growing quite a bit: we’ve published over 30 standards and have about 50 active projects. We have 68 participating countries, we develop our standards on a one-country, one-vote principle, and there are about 800 unique experts in our system. I would also note that we work extensively with others; we have about 80 liaison relationships, both internal and external, and we run a biannual workshop. The way we’re structured is we currently have 10 major subgroups, five of which are joint with other committees. So, the first thing that’s important for understanding AI and being able to work with different stakeholders that have different needs is to have some foundational standards, and this area covers everything from common terminology and concepts, which, by the way, is a freely available standard, to work on using AI. A lot of the work in this area has also been around enabling certification, third-party certification, and third-party audit of AI systems; we believe it’s important to enable this to ensure broad, responsible adoption of AI. Another big area for us is data. Data, as many people know, is the cornerstone of a responsible and quality AI system. This work originally started by looking at big data; we completed all those projects and then expanded the scope to look at anything related to data and AI. We’re in the process of publishing a six-part multi-series on data quality for analytics in the AI space: the first three parts have been published, and the next three should be published in the coming year. Some of the more recent work is around synthetic data and data profiles for AI. Trustworthiness, which is very relevant to the topic at hand, as well as enabling responsible systems, is probably our largest area of work.
The slide is a bit of an eye chart to read, and the reason is that we start from the fact that AI systems are IT systems themselves, yet with some differences from a traditional IT system, for example in terms of the learning. What this allows us to do is build on the large portfolio of standards that IEC and ISO have developed, and then extend that for the areas that are specific to AI. One example of the work here is our AI risk management framework, which was built on the ISO 31000 series as an AI-specific implementation. Other things that you might see bolded on this chart are concepts you might hear every day, making something controllable, explainable, transparent, and what we do is take those concepts and translate them into technical requirements. A colleague of mine put together a slide to indicate where societal and ethical issues lie in terms of direct impact versus things that are further away, and I thought it was a great slide because everything in yellow maps onto what we’re doing today. We address societal issues in two ways. The first is through dedicated projects directly in this area, again using use cases to translate some of these non-technical requirements down to technical requirements, prescriptions, and guidance on how to address them; the second is by integrating these considerations across our entire portfolio. For instance, when we look at use cases, we ask what some of the ethical and societal issues are. We don’t do this alone; we work with a number of international organizations. In terms of use cases and applications, it’s important for us to be able to provide horizontal standards, and as I mentioned, we’ve collected over 185 use cases and are constantly updating this document. We also take a look at the work from the point of view of an application developer, whether at the technical development side or at the deployment side, and we have standards in this area. We’ve also started to look at the environmental sustainability aspects, the beneficial aspects of AI, and human-machine teaming. Computational methods are at the heart of AI systems, and we have a large portfolio of work here; our more recent work has focused on more efficient training and modeling mechanisms. Governance implications of AI: this looks at AI from the point of view of a decision maker, whether a board or an organization, and answers some of the questions that might come up. Testing of AI-based systems: this is another joint effort for us, and we have a multi-part series focused on testing, verification, and validation. In addition to the existing work, we’re looking at new ideas around things like red teaming. Health informatics is a joint effort with ISO TC215, and this is really taking us into the healthcare space, trying to assist them in building out their roadmap. In addition to the foundational project that we’ve got there, we are also looking at extending the terminology and concepts for the sector, which may serve as a model for other sectors as well, and at enabling certification for the healthcare space. In terms of functional safety, this is the work around enabling functional safety, which is essential for sectors that consider safety important. This is being done jointly with IEC SC65A.
Natural language processing covers everything to do with language, going beyond just text, and this is becoming increasingly important in new deployments. Last but not least, we have started a new joint working group with ISO CASCO, the group that does certification and conformity assessment, to look at conformity assessment schemes. Sustainability is a big area for us, both in terms of the sustainability of AI itself and how AI can be applied to sustainability. I’m going to skip to just this slide. One of the important things is to enable this idea of third-party certification and audit in order to ensure broad, responsible adoption. This picture shows how a lot of our standards come together. ISO/IEC 42001, which, if you’re familiar with ISO/IEC 27001 for cybersecurity or ISO 9001, is built around the same concepts, allows us to do this. Just quickly wrapping up, to allow time for my co-speakers: we’re looking at the entire ecosystem, we’re growing very rapidly, we work with a lot of other organizations, and it’s easy to join. We also run a biannual workshop that typically looks at four tracks: applications (one of our recent workshops looked at transportation), beneficial AI, emerging standards, and emerging technology and requirements. With that, I hand it back over to the moderator.


David Leslie: Thank you very much. Thanks so much, Will. That was a brilliant presentation. It just shows how much work is going on at the concrete level, how the devil’s in the details, and how much we need to keep working. I would say that the Huderia we’ve just adopted is the methodology; as we move on in the next year or so, we’ll be working on what we call the model, which really gets into the trenches and explores some of those areas you just presented, thinking also about the importance of alignment and ensuring that standards align with the way we’re approaching this at the international governance level. So, our next speaker is Tetsushi Hirano, Deputy Director of the Digital Policy Office at the Japanese Ministry of Internal Affairs and Communications. Hirano-sensei will offer us his perspective on AI and its impacts on human rights and governance, both in Japan and internationally. Tetsushi, the floor is yours.


Tetsushi Hirano: Thank you, David. I’m very pleased to participate in this important session following the successful adoption of the Huderia methodology, and I sincerely hope that this pioneering work will promote this new type of approach and facilitate the accession of interested countries to the AI Convention. Speaking of Japan, Japan has been developing its own AI risk management framework since 2016, and this year we released the AI Guidelines for Business, which took into account the results of the Hiroshima AI Process for advanced AI systems as well. I see some similarities and differences between the Japanese guidelines and Huderia. First, the similarities: both are based on common human-centered values, and both pay attention to the different contexts of AI life cycles. While Huderia provides a model of risk analysis across the application, design, development, and deployment contexts, the Japanese guidelines differentiate these aspects from the perspective of AI actors. Namely, the guidelines provide a detailed list of what developers, deployers, and users are recommended to do with respect to risk analysis. This is one of the features of our guidelines compared to other frameworks. But despite this formal difference, Huderia and the Japanese guidelines go in the same direction in their analysis, so we are hoping to contribute to the further development of the Huderia technical documents planned for 2025. Next, the differences, and this is also a strong point of Huderia as far as I can see: Huderia offers a detailed analysis of rights holders and the effects on them. Some Japanese experts evaluate COBRA, the context-based risk analysis, very highly, especially as it can be seen as a threshold mechanism, and Huderia also provides a step-by-step analysis of stakeholder involvement. I have to admit that the stakeholder involvement process presented there is demanding if some of the steps are to be implemented precisely, but it can serve as a kind of benchmark for continuous development. The Japanese government is considering a future framework for domestic AI regulations, and I’m sure that Huderia will be one of the key documents to look at, especially when developing public procurement rules, for example, where the protection of citizens’ rights is at the core of the issue. I would also like to mention interoperability, a document on which is also planned for 2025. As we all know, there are many AI risk management frameworks under development, for example the reporting framework based on the Hiroshima Process code of conduct, or the risk management framework of the EU AI Act itself, to name but a few. The interoperability document may highlight the commonalities of these frameworks, as well as their respective strengths, which can facilitate mutual learning between them. In particular, there are documents that only address advanced AI systems, and we will have to think about what kind of impact, for example, synthetic content created by generative AI can have on democracy, also in future meetings of the AI Convention. Finally, I would like to address the future role of the Conference of the Parties to the AI Convention. As a pioneering work in this field, Huderia is expected to become a benchmark. However, it is also important to share knowledge and best practices with concrete examples, as this type of risk and impact assessment is not yet well known. This, together with the interoperability document, will help interested countries to join this convention.


David Leslie: Thank you. Thank you so much, Tetsushi. And I’ll just say that the support of the Japanese government across this process has been absolutely essential to the innovative nature and the success of the instrument, so a real deep thank you there. Speaking of which, I now have the pleasure of introducing Matt O’Shaughnessy, who is Senior Advisor at the U.S. Department of State’s Bureau of Democracy, Human Rights, and Labor. I’ll just say that the past few years have really marked major strides, one might even say quantum leaps, in the approaches that the U.S. has developed in AI risk management and governance, with key initiatives from NIST’s AI Risk Management Framework through the recent White House Office of Management and Budget Memorandum on Advancing Governance, Innovation, and Risk Management of Artificial Intelligence. There’s been a lot of really excellent work coming out of the public sector in the U.S. So, Matt, I wanted to ask if you could talk a little more about these national initiatives and speak a bit about how they reflect and contribute to emerging global frameworks and shared principles for AI development and use.


Matt O’Shaughnessy: Thank you so much, David. And it’s great to be here, even just virtually. So, you asked about the NIST AI Risk Management Framework and the White House Office of Management and Budget memorandum on government use of AI. Maybe I’ll say a few words giving an overview of each of those, and then talk about how they interact with and inform our international approach to AI. Both of these documents take a similar approach: they’re both flexible, and they’re both very context-aware, directed specifically at how particular AI systems are designed and used in particular contexts. And they both aim to promote innovation, of course, while also setting out concrete steps that can help effectively manage risks. Let me start with the NIST AI Risk Management Framework. This is our general risk management framework that sets out steps applicable to all organizations, whether private entities or government agencies, who are developing or using AI. The AI Risk Management Framework describes different actions that organizations can take to manage risks across all of their AI activities. A lot of those are relevant to respect for human rights. For instance, it describes technical and organizational steps that can help manage harmful bias and discrimination and mitigate risks to privacy. But it also describes a lot of more general actions: things like how to establish processes for documenting the outcomes of AI systems, processes for deciding whether an AI system should be commissioned or deployed in the first place, or policies and procedures that improve accountability or increase knowledge about the risks and impacts an application of AI has. A lot of these governance-oriented actions address many of the concepts set out by the Council of Europe, and they help lay the groundwork for organizations to better consider the risks to human rights that their AI activities pose, and to address and mitigate them. As I mentioned before, the Risk Management Framework is really designed to be applied in a flexible and context-aware manner. That’s really important: it helps ensure that the risk management steps are well-tailored and proportionate to the specific context of use, and also that they’re effective and target the most salient risks posed by a particular system in the particular context of its use. David, you mentioned the Huderia taking a socio-technical approach, considering the social context that an AI system is developed and deployed in; that’s really core to the NIST Risk Management Framework too, and I think really important to making sure that AI risk management more generally is effective and targets the most important risks. The Risk Management Framework sets out a lot of these general steps that organizations can take to manage various risks, but as I said before, it’s most effective when it’s deployed in a very context-aware manner. To do that more effectively, NIST supported the development of what it calls, quote, profiles, which describe how the framework can be used in specific sectors, for specific AI technologies, or for specific types of end-use organizations, whether a government agency or a specific private sector entity. One example of that is a risk management profile for AI and human rights that the Department of State has developed, which describes specific potential human rights impacts of AI systems.
And that can help developers of AI systems better anticipate the specific human rights impacts their AI systems could have, and help them tailor the actions described in the Risk Management Framework to the specific end-use. This is also where tools like the Council of Europe’s Huderia, the Human Rights, Democracy and Rule of Law Impact Assessment tool, can contribute and be most effective. A lot of the key risk management steps that the Huderia sets out are similar to those in the NIST AI Risk Management Framework, but the Huderia provides more detail on actions that are particularly relevant to human rights and democracy: things like engaging stakeholders to make sure that organizations are aware of the human rights impacts their systems could have, or establishing mechanisms for remedy. So, as Tetsushi mentioned, the detailed resources that will be negotiated and developed next year will be particularly helpful in offering this insight for organizations who are applying risk management tools that already exist, but are looking for more detailed references or resources to help them specifically look at human rights impacts in contexts where those are particularly salient. Okay, so that’s our framework, which applies to all organizations, and again is a very flexible, context-oriented tool. You also asked about our White House Office of Management and Budget memorandum on governance, innovation, and risk management for agency use of AI. This is the set of binding rules for covered government agencies that use AI, and it similarly sets out key risk management actions that government agencies who are developing or using AI systems must follow in their AI activities. This memo was released in March of 2024; you can look it up online (it’s M-24-10). It was issued in fulfillment of the AI in Government Act of 2020, and even though it was developed by this administration, it builds on work that was started in the previous administration, such as a December 2020 executive order called Promoting the Use of Trustworthy AI in the Federal Government. So it sets out a lot of bipartisan priorities. This memo, again, reflects our broader approach in the United States to AI governance. It’s meant to be tailored to advance innovation and make sure that we’re using AI in ways that benefit citizens and the public at large, but also to make sure that we set the example in managing and addressing the risks of AI. This guidance aligns with a lot of the provisions set out in the Council of Europe’s AI Convention, and I’ll just give you a quick overview of some of its key aspects. It establishes AI governance structures in federal agencies, like chief AI officers or governance boards, that promote accountability, documentation, and transparency. It sets out some key risk management practices, especially for AI systems that are determined to be what we call safety-impacting or rights-impacting. Those include steps for things like risk evaluation, assessments of the quality of an AI data set used for training or testing, ongoing testing and monitoring, training and oversight for human operators, assessments and mitigations of harmful bias, and engagement with affected communities for rights-impacting AI systems. So, again, some key risk management steps that are mandated for government AI systems.
And we see those as really instrumental for managing impacts on human rights: things like AI systems used in law enforcement contexts or related to critical government services, such as determining whether someone is eligible for benefits, which we would label as rights-impacting and to which we would apply the key risk management steps set out in this memorandum. So those are our two key domestic policies that set out AI risk management practices. In terms of the international implications, both of them were informed by international best practices, looking to work done by other countries and international organizations. The NIST AI Risk Management Framework had extensive international multi-stakeholder consultations; it’s at version 1.0 right now and is intended to be updated over the years, so there’ll be a continuing conversation between these domestic efforts and the best practices being developed internationally. And in turn, both of these domestic products inform our international work. Both the Council of Europe’s Huderia and recent OECD projects have drawn from the AI Risk Management Framework, and it has informed the work of standards developing organizations like ISO and IEC. Other governments are continuing to work with NIST to develop crosswalks of their own domestic guidelines with the RMF, which helps ease compliance and aid interoperability. So both of these things lay the groundwork for all of our international work on safe, secure, and trustworthy AI, whether it’s in the Council of Europe’s AI Convention, our UN General Assembly resolution on AI, or our Freedom Online Coalition joint statement on responsible government practices for AI. We’re looking forward, over the next couple of years, to continuing this engagement as the conversation on AI risk management continues to develop. I’ll turn it back over to you, David. Thanks again.


David Leslie: Thanks, Matt. And also just to say, Matt’s presence in Strasbourg has been a huge boon as we’ve tried to develop the Huderia over the months and years, so thank you for that continuing commitment to the process. I think it’s been really important to have everybody speak and share insights in the room at the Council of Europe. I’d like to now introduce Clara Neppel, who is a Senior Director at IEEE and at the very forefront of driving initiatives that address the ethical and societal implications of emerging technologies. IEEE is one of the world’s largest technical organizations and has been instrumental in developing frameworks and standards for responsible use for a number of years now, and it’s always had a strong focus on risk management. IEEE’s work on risk management provides practical tools and methodologies to ensure that the AI systems being developed are robust, fair, and aligned with societal values. So Clara will share with us insights into this work and into how it’s contributing to the broader AI governance ecosystem. And I think you’re there, Clara, in person. So go ahead.


Clara Neppel: Yes, yes. Thank you. Thank you, David. Thank you also for the kind introduction. Yes, we were also very active in the Council of Europe, as well as in the OECD and other international organizations. And maybe one of the critical aspects here is that IEEE is not only a standard-setting organization but also, as you mentioned, an association of technologists, which permits us to be quite early in identifying risks. Maybe this is also the reason why we were among the first to start working on what we call ethically aligned design, in 2016, which permitted us to come up with concrete instruments like standards and certifications quite early. What I would like to share with you now are some practical lessons learned, which I think are important for implementing human rights in technical systems, in AI systems. The first lesson learned is that we need time and we need the stakeholders. Even if we think that some concepts like transparency or fairness are already quite well defined, you might be surprised. I’m also co-chair of the OECD expert group on AI and privacy, and both ecosystems have a very clear understanding of what transparency means or what fairness means, but their understandings are very different. For the privacy professionals, for instance, transparency is about the transparency of data collection, while on the AI expert side it’s really about how the decisions of the systems are made understandable. So this is just one example. One of our most used standards right now, IEEE 7000, took this time: it took five years to develop, and the standard was published in 2021. Since then, there have been a lot of lessons learned that we would like to share, because it has been deployed worldwide. The second lesson that I would like to share is that we need skills. The skills we need are related not only to technology but also to ethics, and we invested in this right from the beginning. We have not only systems certification but also personal certification of assessors, and we can say now that we have more than 200 assessors worldwide that are certified by IEEE. We have a training program which reaches from Dubai, as I just heard today, to South Korea, and obviously across Europe. So we have this worldwide network of assessors that also have a certified understanding of what human rights and ethics require. And third, and I think this is the most important: once we have these standards instruments and the skills and the people that can implement them, we can build very strong ecosystems. Without that, you are still working in isolation. You need these ecosystems. I can give you the example of Austria, because our European office is based in Vienna. We now have, starting from the city of Vienna, everything from public services to data hubs in Tirol, for instance, that are built on this basis, which means that the data governance is already according to ethical principles, and all the applications that run on such a data hub are then required to fulfill the same requirements. This permits these ecosystems, which, in the end, are the foundation of what we want to achieve with human rights. As far as the Huderia methodology is concerned, the IEEE 7000 standard took a human-rights-first approach.
And this was also acknowledged by the Joint Research Centre of the European Commission, which made an analysis of existing standards on human rights and acknowledged that IEEE standards are very close to what is being required with respect to human rights. It is about stakeholder engagement, if you want: it is a recipe for how to engage stakeholders and how to understand the values of the stakeholders. And I would like maybe to bring in an aspect here which is very often not seen. Very often we focus on transparency, on fairness, and so on, but there are human rights that are not in the existing frameworks, like dignity. In IEEE 7000, all these aspects, all these values, are analyzed, because it is a risk-based approach. There is then a clear methodology for how to mitigate those risks by translating them into concrete system requirements or organizational measures. So that is about the design phase, and it is complemented by a certification method, which looks at existing systems and assesses them along the different aspects of transparency, accountability, and so on. Last but not least, I would like to mention that we are now also in the process of scaling the certification system. We are working with VDE from Germany and Positive AI from France to develop an AI trust label, which would include the seven aspects of human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. Just on the last one, environmental well-being: we just started a working group on the environmental impact of AI to clearly define the metrics to be used for environmental impact, including inference costs, and not only energy but also, for instance, data usage. We are doing this together with the OECD. So I think that’s a first overview of what we’re doing. Thank you.


David Leslie: Thanks, Clara. And I mean, it’s really important to note here as well that making these approaches usable for people is such a priority. One of the things that lies ahead of us is really making the range of human rights accessible to people and being able to translate them out, so that people can actually pick up the various approaches to risk management and, if you will, operationalize a concrete approach to understanding and assessing the impacts on those rights. I’ll now introduce Mr. Myoung Shin Kim, who is Principal Policy Officer at LG AI Research and an IEEE Certified AI Professional. LG AI Research really focuses on innovation in AI that is responsible and that is developed and deployed safely and ethically, and an important dimension of that is risk governance, addressing bias mitigation, and ensuring transparency and accountability. So, Mr. Kim, I’m wondering if you could share LG AI Research’s perspective specifically on AI risk governance. How does your organization approach managing these risks? And what do you believe an ideal framework for AI risk governance should look like?


Myoung Shin Kim: Right. Thank you very much for inviting me to this meaningful discussion. Today, I will share how LG AI Research is translating our AI ethics principles into tangible action, focusing on AI risk governance. First, a little about LG AI Research. Established four years ago, our mission is to provide advanced AI technologies and capabilities to LG affiliates, such as LG Electronics and LG Chemical. One of our landmark achievements is the development of EXAONE, a generative AI model capable of understanding and creating content in both Korean and English. EXAONE has achieved performance on par with global benchmarks, demonstrating its competitive edge in the international AI landscape. Just last week, we released EXAONE 3.5 as an open-source language model, contributing to the development of the AI research ecosystem. Beyond AI technology, LG AI Research places a strong emphasis on adhering to AI ethics throughout the entire lifecycle of an AI system. Alongside the development of EXAONE, LG officially announced its AI ethics principles, with five core values: humanity, fairness, safety, accountability, and transparency. But, you know, more important than principles is putting them into practice. So we employ three strategic pillars to ensure adherence to our AI ethics principles, namely governance, research, and engagement. Let me explain each in detail. First of all, we conduct an AI ethical impact assessment for every project to identify and address potential risks across the AI lifecycle. It consists of three steps: analyzing project characteristics, setting problem-solving practices, and verifying research and documentation. When risks or problems are identified, we establish specific solutions, assign responsibilities to designated personnel, and set deadlines for resolving the issues. The entire AI ethical impact assessment process and its outcome are attached to the final report when the project closes in our project management system. A unique aspect of our approach is the involvement of a cross-functional task force, which brings together researchers in charge of technology, business, and AI ethics, each contributing their specialized knowledge and diverse perspectives. From a human rights perspective, we pay special attention to some key questions during the AI ethical impact assessment: for example, what groups are included among the stakeholders affected, and whether there is any possibility of intentional or unintentional misuse of the AI system by users. Additionally, we educate data workers about the Universal Declaration of Human Rights and the Sustainable Development Goals, providing guidelines to respect, protect, and promote human rights during data production and data-processing work. As you know, generative AI models sometimes produce inaccurate information, known as hallucinations, due to misinformation in the training data. To address this issue, we have developed AI models that generate answers based on factual information and evidence. Additionally, we are continually researching unlearning techniques to selectively delete personal information that was unintentionally used during the training process. Considering that AI is ultimately created by humans, I think it is also important to assess the level of human rights sensitivity among our researchers. For this reason, every spring LG AI Research conducts an AI ethics awareness survey to assess and improve adherence to our AI ethics principles. I am personally pleased to see that the gap between awareness and practice has narrowed this spring compared to last year.
Additionally, we hold an AI ethics seminar bi-weekly to boost interest and participation in AI ethics. For AI ethics to take root in our society, I believe citizens’ AI literacy must improve. Moreover, if high-quality AI education is not evenly provided, existing economic and social disparities may widen. To address this issue, we provide a customized AI education program to over 40,000 youth, college students, and workers annually. Our curriculum includes AI ethics, to help citizens grow into more mature users and also critical watchdogs in the AI market. And our efforts are expanding beyond Korea to the global level. We are collaborating with UNESCO to develop online educational content for AI ethics targeting researchers, developers, and policy makers; the final MOOC will be available worldwide by early 2026. Lastly, every January we publish a report compiling all the outcomes and lessons learned from implementing our AI ethics principles. These reports illustrate how we are implementing not only our own AI ethics principles but also UNESCO’s Recommendation on the Ethics of Artificial Intelligence and South Korea’s national AI ethics guidelines. We hope this can serve as a reference for others’ AI ethics implementation approaches. The next report is scheduled to be published at the end of January, next month, and will be available on our homepage, so if you are interested, please take a look. Thank you for your attention.


David Leslie: Thanks so much for all that great information, Dr. Kim. Now, in the interest of time, I’m going to go right to introducing Heramb Podar, who is at the Center for AI and Digital Policy, CAIDP, and is also Executive Director of ENCODE India. CAIDP has been a vocal advocate for the development and implementation of strong governance frameworks that prioritize transparency, accountability, and fairness in the production and use of AI systems. It’s an organization that’s also deeply engaged in policy analysis and stakeholder collaboration to safeguard human rights and democratic principles in the face of rapid technological transformation. So, Heramb, given your work with CAIDP, could you share some thoughts on how NGOs can contribute to creating good governance guardrails for AI? In particular, what do you see as the critical steps for ensuring that AI systems are designed and deployed in ways that uphold societal values and human rights? And you are there in the room, if I’m not mistaken.


Heramb Podar: Yes, I am. I hope you can hear me. Thank you for the opportunity to speak. CAIDP has been, indeed, a very vocal advocate. All of the work we do is grounded in policies to uphold human rights, democracy, and the rule of law. Ultimately, for NGOs, it’s all about advocacy through engagement with due process, in terms of the public voice opportunities that come up, and bringing in as much of the public voice as possible. Just a few minutes ago, my co-speaker was speaking about how all rights are not always covered; sometimes there are contexts which are overlooked, unfortunately. So CSOs and NGOs can really be that bridge between the on-the-ground risks and how the public is feeling, on one side, and the policies being developed, whether at the Council of Europe or in the NIST frameworks and so on, on the other. Highlighting specific actions CAIDP has taken: we have been very vocal in advocating for the ratification of the Council of Europe AI Treaty. We think it prevents global fragmentation and aligns everyone’s national policies to global standards, and we have recently released statements to the South African presidency of the G20 and to the U.S. Senate calling for ratification of the treaty. And we bring in voices, as I was talking about earlier. One of the key members of our global academic network is Encode Justice, a youth organization focused on AI risks, making sure that AI works for everyone and that AI is safe, so that future generations do not inherit any kind of malicious AI that might impact human rights. Quickly jumping to specific actions in terms of design and development, which was a very interesting question: at CAIDP, we have something called the Universal Guidelines on AI. We just recently celebrated the sixth anniversary of the UGAI principles, as we like to call them, and what we would like to see most is clear red lines in whatever policies governments put out, prohibiting use cases that are not based on scientific validity or that might adversely impact certain groups or human rights. We see some early examples of high-risk use cases, for example in the EU AI Act, things like biometric surveillance or social scoring and so on. What would be exciting to see is ex-ante impact assessments, and proper transparency and explainability across the AI life cycle, from design to decommissioning. Then there are whistleblower protections: we’re seeing an increasing race to turn out more powerful AI systems, and we find it very necessary for there to be certain guardrails and whistleblower protections so that people can speak their minds. And in specific use cases like autonomous weapon systems, there should be termination obligations, which is another of the cornerstones of our UGAI principles, so having human oversight. We also release something called the Artificial Intelligence and Democratic Values Report on an annual basis, which is the world’s most comprehensive coverage of national AI policies, and we rank countries according to our metrics. Something we saw there, very interestingly, also with the UNESCO Recommendation on the Ethics of AI, is that countries are really slow in implementing it, and this also brings to light the global digital divide.
A lot of the global south countries in particular are playing catch-up. Countries are not getting around to submitting their readiness assessments to UNESCO, which is our key indicator for implementation. So, again, coming back to the original question, NGOs have a role to play in making sure that countries, companies, and other sectors not only make these commitments but also follow through with action: not just words, which can be interpreted differently, but some sort of grounded principles or grounded metrics. Yeah, and I’ll end it here.


David Leslie: Thank you so much, Heramb. It’s really great to hear that this is, and needs to be, a multilateral effort, and that NGOs need to play a central role as we develop the governance instruments. So, I’ll just say that it’s been amazing to hear about all of this innovative work being done in standards development organizations and at the state level. The work of the Council of Europe, I think, has been out ahead on many things, and hearing about all this innovative work really just reminds me that we talk a lot about “move fast and break things,” right? But on our end of things, it’s about moving fast and saving things: we need to be out in front of some of the ways these technologies are developing. So, to close, I want to turn back to Smera and ask if you have any closing observations.

Smera Jayadeva: Yes, all I would say is that it’s so fantastic to hear from everyone who’s joined us here today. There were so many excellent points about stakeholder engagement, the role of civil society being a part of it, being ahead of the curve in identifying some of those risks, and skills development as well, which was mentioned. All of this builds a really good and strong ecosystem, and when you use tools like the Huderia methodology in this space to identify risks and introduce impact mitigation measures, then, as you said, David, we move fast and save things. So, on that note, I’ll hand back to you.

David Leslie: Okay, wonderful. So, just again, one more thank you to all of our speakers. We are striving to finish on time, and thank you so, so much for all the important comments and information that were shared today. I wish you well from the southeast of England, and I hope those of you who are physically there in Riyadh have a nice time at the rest of IGF. Take care. Thank you.



David Leslie

Speech speed

138 words per minute

Speech length

2145 words

Speech time

928 seconds

Huderia methodology for AI risk assessment

Explanation

The Huderia methodology is a unique anticipatory approach to AI governance. It focuses on four fundamental elements: context-based risk analysis, stakeholder engagement, risk and impact assessment, and mitigation planning.


Evidence

Adopted by the Council of Europe, includes modules for risk analysis, stakeholder engagement, impact assessment, and mitigation planning


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

Wael William Diab


Tetsushi Hirano


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Stakeholder engagement in AI impact assessment

Explanation

The Huderia methodology emphasizes the importance of stakeholder engagement in AI impact assessment. It proposes an approach to enable engagement with relevant stakeholders, including impacted communities.


Evidence

Aims to amplify voices of affected communities and gain information on how they view potential impacts


Major Discussion Point

Human Rights Considerations in AI Development


Agreed with

Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Heramb Podar


Agreed on

Stakeholder engagement in AI impact assessment



Wael William Diab

Speech speed

138 words per minute

Speech length

1441 words

Speech time

624 seconds

ISO/IEC standards for AI systems

Explanation

ISO/IEC JTC 1/SC 42 is developing standards for the full AI ecosystem. These standards cover various aspects, including non-technical trends and requirements, through horizontal and foundational projects on artificial intelligence.


Evidence

Over 30 published standards, about 50 active projects, 68 participating countries, and 800 unique experts involved


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Tetsushi Hirano


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Data quality standards for AI systems

Explanation

ISO/IEC is developing standards for data quality in AI systems, including a six-part series on data quality for analytics in the AI space.


Evidence

First three parts of the data quality series have been published, with the next three scheduled for publication in the coming year


Major Discussion Point

Human Rights Considerations in AI Development



Tetsushi Hirano

Speech speed

131 words per minute

Speech length

576 words

Speech time

262 seconds

Japanese AI Guidelines for Business

Explanation

Japan has developed AI Guidelines for Business, taking into account the results of the Hiroshima AI Process for advanced AI systems. The guidelines differentiate responsibilities from the perspective of different AI actors, providing detailed recommendations for developers, deployers, and users.


Evidence

Guidelines provide a detailed list of recommendations for developers, deployers, and users


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Wael William Diab


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Differed with

Matt O’Shaughnessy


Differed on

Approach to AI risk assessment frameworks


Detailed analysis of rights holders in Huderia

Explanation

The Huderia methodology offers a detailed analysis of rights holders and effects on them. It provides a step-by-step analysis of stakeholder involvement, which is seen as a benchmark for continuous development.


Evidence

Japanese experts evaluate COBRA (the context-based risk analysis component of Huderia) highly, especially as a threshold mechanism


Major Discussion Point

Human Rights Considerations in AI Development


Interoperability between AI frameworks

Explanation

There is a need for interoperability between different AI risk management frameworks. An interoperability document planned for 2025 may highlight commonalities of these frameworks and their respective strengths.


Evidence

Mentions various frameworks like the Hiroshima process code of conduct and EU AI Act


Major Discussion Point

International Cooperation on AI Governance



Matt O’Shaughnessy

Speech speed

163 words per minute

Speech length

1461 words

Speech time

536 seconds

NIST AI Risk Management Framework

Explanation

The NIST AI Risk Management Framework is a general risk management framework applicable to all organizations developing or using AI. It describes actions organizations can take to manage risks of their AI activities, including those relevant to human rights.


Evidence

Framework describes technical steps to manage harmful bias, discrimination, mitigate privacy risks, and improve accountability


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Wael William Diab


Tetsushi Hirano


Clara Neppel


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Differed with

Tetsushi Hirano


Differed on

Approach to AI risk assessment frameworks


Human rights impact assessments for government AI use

Explanation

The White House Office of Management and Budget memorandum sets out binding rules for government agencies using AI. It mandates key risk management actions, particularly for AI systems determined to be safety-impacting or rights-impacting.


Evidence

Includes steps for risk evaluation, data quality assessment, ongoing testing and monitoring, and engagement with affected communities


Major Discussion Point

Human Rights Considerations in AI Development


Agreed with

David Leslie


Clara Neppel


Myoung Shin Kim


Heramb Podar


Agreed on

Stakeholder engagement in AI impact assessment


U.S. domestic AI policies informing international work

Explanation

U.S. domestic AI policies, such as the NIST AI Risk Management Framework, inform international work on AI governance. These domestic products have influenced international initiatives and standards.


Evidence

Council of Europe’s Huderia and OECD projects have drawn from the AI Risk Management Framework


Major Discussion Point

International Cooperation on AI Governance


Context-aware application of risk management frameworks

Explanation

The NIST AI Risk Management Framework is designed to be applied in a flexible and context-aware manner. This approach ensures that risk management steps are well-tailored and proportionate to the specific context of use.


Evidence

Framework supported by ‘profiles’ that describe how it can be used in specific sectors, for specific AI technologies, or for specific types of end-use organizations


Major Discussion Point

Practical Implementation of AI Ethics



Clara Neppel

Speech speed

133 words per minute

Speech length

932 words

Speech time

419 seconds

IEEE standards for ethically aligned AI design

Explanation

IEEE has been developing standards for responsible use of AI with a strong focus on risk management. Their work provides practical tools and methodologies to ensure AI systems are robust, fair, and aligned with societal values.


Evidence

IEEE 7000 standard took five years to develop and has been widely deployed


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Wael William Diab


Tetsushi Hirano


Matt O’Shaughnessy


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Incorporating human rights principles in AI standards

Explanation

IEEE standards take a human-rights-first approach to AI development and are acknowledged to be very close to what is required with respect to human rights.


Evidence

Acknowledgment by the Joint Research Centre of the European Commission


Major Discussion Point

Human Rights Considerations in AI Development


Agreed with

David Leslie


Matt O’Shaughnessy


Myoung Shin Kim


Heramb Podar


Agreed on

Stakeholder engagement in AI impact assessment


IEEE’s global network of AI ethics assessors

Explanation

IEEE has developed a global network of certified AI ethics assessors. This network helps in implementing and assessing adherence to AI ethics principles worldwide.


Evidence

More than 200 certified assessors worldwide, training programs from Dubai to South Korea


Major Discussion Point

International Cooperation on AI Governance


Building ecosystems to implement ethical AI standards

Explanation

IEEE emphasizes the importance of building strong ecosystems to implement ethical AI standards. These ecosystems involve various stakeholders and ensure that AI systems adhere to ethical principles from data governance to application development.


Evidence

Example of ecosystem in Austria, from city of Vienna public services to data hubs in Tirol


Major Discussion Point

Practical Implementation of AI Ethics



Myoung Shin Kim

Speech speed

111 words per minute

Speech length

774 words

Speech time

416 seconds

LG AI Research’s approach to AI ethics and risk governance

Explanation

LG AI Research has developed an approach to AI ethics and risk governance based on five core values: humanity, fairness, safety, accountability, and transparency. They employ three strategic pillars: governance, research, and engagement.


Evidence

Development of XR1, a generative AI model, and implementation of AI ethics principles


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Wael William Diab


Tetsushi Hirano


Matt O’Shaughnessy


Clara Neppel


Agreed on

Importance of AI risk management frameworks


Educating data workers on human rights

Explanation

LG AI Research educates data workers about the Universal Declaration of Human Rights and the Sustainable Development Goals. They provide guidelines to respect, protect, and promote human rights during the data production and data-streaming processes.


Major Discussion Point

Human Rights Considerations in AI Development


LG AI Research’s collaboration with UNESCO

Explanation

LG AI Research is collaborating with UNESCO to develop online educational content for AI ethics. This initiative targets researchers, developers, and policymakers globally.


Evidence

Final MOOC planned to be held worldwide by early 2026


Major Discussion Point

International Cooperation on AI Governance


LG’s AI ethics impact assessment process

Explanation

LG AI Research conducts an AI ethics impact assessment for every project to identify and address potential risks across the AI lifecycle. This process involves a cross-functional task force bringing together researchers from technology, business, and AI ethics.


Evidence

Three-step process: analyzing project characteristics, setting problem-solving practice, and verifying research and documentation


Major Discussion Point

Practical Implementation of AI Ethics


Agreed with

David Leslie


Matt O’Shaughnessy


Clara Neppel


Heramb Podar


Agreed on

Stakeholder engagement in AI impact assessment



Heramb Podar

Speech speed

154 words per minute

Speech length

753 words

Speech time

292 seconds

NGO advocacy for human rights in AI governance

Explanation

NGOs like CAIDP play a crucial role in advocating for human rights in AI governance. They act as a bridge between on-ground risks, public sentiment, and policy development.


Evidence

CAIDP’s advocacy for the ratification of the Council of Europe AI Treaty


Major Discussion Point

Human Rights Considerations in AI Development


Agreed with

David Leslie


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Agreed on

Stakeholder engagement in AI impact assessment


CAIDP’s advocacy for ratification of AI treaties

Explanation

CAIDP advocates for the ratification of international AI treaties to prevent global fragmentation and align national policies with global standards. They have released statements urging various countries and organizations to ratify the Council of Europe AI Treaty.


Evidence

Statements released to the South African presidency for the G20 and to the U.S. Senate


Major Discussion Point

International Cooperation on AI Governance


Need for clear prohibitions on high-risk AI use cases

Explanation

CAIDP advocates for clear red lines in AI policies, prohibiting use cases that are not based on scientific validity or that might adversely impact certain groups or human rights. They call for ex-ante impact assessments and proper transparency across the AI lifecycle.


Evidence

Examples of high-risk use cases in the EU AI Act, such as biometric surveillance or social scoring


Major Discussion Point

Practical Implementation of AI Ethics


Agreements

Agreement Points

Importance of AI risk management frameworks

speakers

David Leslie


Wael William Diab


Tetsushi Hirano


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


arguments

Huderia methodology for AI risk assessment


ISO/IEC standards for AI systems


Japanese AI Guidelines for Business


NIST AI Risk Management Framework


IEEE standards for ethically aligned AI design


LG AI Research’s approach to AI ethics and risk governance


summary

All speakers emphasized the importance of developing and implementing comprehensive AI risk management frameworks to ensure responsible AI development and deployment.


Stakeholder engagement in AI impact assessment

speakers

David Leslie


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Heramb Podar


arguments

Stakeholder engagement in AI impact assessment


Human rights impact assessments for government AI use


Incorporating human rights principles in AI standards


LG’s AI ethics impact assessment process


NGO advocacy for human rights in AI governance


summary

Multiple speakers highlighted the importance of involving stakeholders, including affected communities, in AI impact assessments to ensure comprehensive consideration of potential risks and impacts.


Similar Viewpoints

Both speakers emphasized the importance of applying AI risk management frameworks in a context-specific manner, taking into account the unique ecosystems and environments in which AI systems are deployed.

speakers

Matt O’Shaughnessy


Clara Neppel


arguments

Context-aware application of risk management frameworks


Building ecosystems to implement ethical AI standards


These speakers highlighted the importance of aligning national and international AI governance efforts to ensure consistency and prevent fragmentation in global AI governance.

speakers

Tetsushi Hirano


Matt O’Shaughnessy


Heramb Podar


arguments

Interoperability between AI frameworks


U.S. domestic AI policies informing international work


CAIDP’s advocacy for ratification of AI treaties


Unexpected Consensus

Education and skill development for AI ethics

speakers

Clara Neppel


Myoung Shin Kim


arguments

IEEE’s global network of AI ethics assessors


Educating data workers on human rights


explanation

Both speakers from different sectors (standards organization and private company) emphasized the importance of education and skill development in AI ethics, which was an unexpected area of focus given the primarily policy-oriented discussion.


Overall Assessment

Summary

The speakers showed strong agreement on the need for comprehensive AI risk management frameworks, stakeholder engagement in impact assessments, and the importance of aligning national and international AI governance efforts.


Consensus level

High level of consensus among speakers, indicating a shared understanding of key challenges and approaches in AI governance. This consensus suggests potential for collaborative efforts in developing and implementing AI governance frameworks across different sectors and jurisdictions.


Differences

Different Viewpoints

Approach to AI risk assessment frameworks

speakers

Tetsushi Hirano


Matt O’Shaughnessy


arguments

Japanese AI Guidelines for Business


NIST AI Risk Management Framework


summary

While both speakers discuss AI risk assessment frameworks, they present different approaches. Hirano focuses on the Japanese AI Guidelines for Business, which differentiates aspects from the perspective of AI actors, while O’Shaughnessy emphasizes the NIST framework’s flexible and context-aware application.



Overall Assessment

summary

The main areas of disagreement revolve around the specific approaches and frameworks for AI risk assessment and governance, with different organizations and countries presenting their own methodologies.


difference_level

The level of disagreement among the speakers is relatively low. Most speakers present complementary rather than conflicting views, focusing on their respective organizations’ or countries’ approaches to AI governance. This suggests a general alignment in recognizing the importance of AI ethics and risk management, but with variations in implementation strategies. The implications are that while there is a shared goal of responsible AI development, there may be challenges in creating a unified global approach due to these differing methodologies.


Partial Agreements

Both speakers agree on the importance of implementing ethical AI standards, but they differ in their approaches. Neppel emphasizes building ecosystems and a global network of assessors, while Kim focuses on internal processes and education within LG AI Research.

speakers

Clara Neppel


Myoung Shin Kim


arguments

IEEE standards for ethically aligned AI design


LG AI Research’s approach to AI ethics and risk governance



Takeaways

Key Takeaways

Multiple AI governance frameworks and standards are being developed by different organizations globally, including Huderia, ISO/IEC, NIST, IEEE, and country-specific guidelines.


Human rights considerations are becoming increasingly important in AI development and governance, with a focus on stakeholder engagement, impact assessments, and data quality.


International cooperation and interoperability between different AI governance frameworks is crucial for effective global AI governance.


Practical implementation of AI ethics requires context-aware application of risk management frameworks, ecosystem building, and clear prohibitions on high-risk AI use cases.


NGOs and civil society organizations play a vital role in advocating for human rights in AI governance and bridging the gap between policy development and on-the-ground risks.


Resolutions and Action Items

Continue development of the Huderia technical documentation planned for 2025


Develop interoperability document for AI risk management frameworks by 2025


LG AI Research to publish annual report on AI ethics implementation in January


UNESCO and LG AI Research to develop online educational content for AI ethics by early 2026


Unresolved Issues

How to effectively address the global digital divide in AI governance implementation


Balancing innovation with responsible AI development and use


Addressing potential impacts of synthetic content created by generative AI on democracy


Ensuring consistent implementation of AI ethics recommendations across different countries


Suggested Compromises

Flexible and context-aware application of AI risk management frameworks to balance innovation and risk mitigation


Collaboration between public and private sectors in developing AI governance approaches


Incorporating diverse stakeholder perspectives in AI impact assessments to address varied concerns


Thought Provoking Comments

The Huderia itself that has been developed through the activities of the Committee on Artificial Intelligence and all the member states and observer states, it really is a unique anticipatory approach to the governance of the design, development, and deployment of AI systems that anchors itself in basically four fundamental elements.

speaker

David Leslie


reason

This comment introduces the core structure of the Huderia methodology, highlighting its comprehensive and forward-looking approach to AI governance.


impact

It set the stage for the entire discussion by outlining the key elements of Huderia, providing a framework for subsequent speakers to relate their work and perspectives to.


One of the important things is to allow this idea of a third-party certification and audit in order to ensure broad responsible adoption.

speaker

Wael William Diab


reason

This insight emphasizes the critical role of independent verification in ensuring responsible AI adoption, introducing a key governance mechanism.


impact

It shifted the conversation towards the importance of standardization and certification in AI governance, prompting discussion on practical implementation of ethical principles.


As a pioneering work in this field, Huderia is expected to become a benchmark. However, it is also important to share knowledge and best practices with concrete examples, as this type of risk and impact assessment is not yet well known.

speaker

Tetsushi Hirano


reason

This comment highlights both the potential of Huderia and the need for practical implementation guidance, addressing a crucial gap in current AI governance efforts.


impact

It prompted consideration of how to make abstract governance principles more concrete and actionable, influencing subsequent discussions on implementation and best practices.


We need the time and we need the stakeholders. Even if we think that some of the concepts like transparency or fairness are already quite defined, you might be surprised.

speaker

Clara Neppel


reason

This insight underscores the complexity of defining and implementing ethical AI concepts, emphasizing the need for diverse stakeholder engagement and iterative development.


impact

It deepened the conversation by highlighting the challenges in operationalizing ethical principles, leading to discussions on the importance of multi-stakeholder collaboration and ongoing refinement of governance approaches.


For AI ethics to take root in our society, I believe citizens’ AI literacy must improve. Additionally, if high-quality AI education is not evenly provided, existing economic and social disparities may widen.

speaker

Myoung Shin Kim


reason

This comment introduces the crucial aspect of public education and literacy in AI ethics, linking it to broader societal issues of equality and fairness.


impact

It broadened the scope of the discussion to include the role of public education in AI governance, prompting consideration of how to engage and empower the general public in AI ethics discussions.


Overall Assessment

These key comments shaped the discussion by progressively expanding the scope of AI governance considerations. Starting from the structural framework of Huderia, the conversation evolved to cover practical implementation challenges, the need for standardization and certification, the importance of stakeholder engagement, and the role of public education. This progression highlighted the multifaceted nature of AI governance, emphasizing the need for comprehensive, collaborative, and adaptable approaches that consider both technical and societal aspects of AI development and deployment.


Follow-up Questions

How can the Huderia methodology be further developed and refined?

speaker

David Leslie


explanation

David mentioned that as they move forward in the next year, they will be working on what they call ‘the model’, which will explore some areas in more detail. This suggests a need for further development of the Huderia methodology.


How can interoperability between different AI risk management frameworks be improved?

speaker

Tetsushi Hirano


explanation

Tetsushi mentioned the need for an interoperability document that highlights commonalities between different frameworks and their respective strengths. This is important for facilitating mutual learning and potentially easing compliance across different standards.


How can knowledge and best practices of AI risk and impact assessment be shared more effectively?

speaker

Tetsushi Hirano


explanation

Tetsushi emphasized the importance of sharing knowledge and best practices with concrete examples, as this type of risk and impact assessment is not yet well known. This is crucial for helping interested parties join the AI Convention.


How can we better address the impacts of synthetic content created by generative AI on democracy?

speaker

Tetsushi Hirano


explanation

Tetsushi highlighted the need to consider the impact of synthetic content created by generative AI on democracy in future meetings of the AI Convention. This is an emerging area of concern that requires further research and discussion.


How can we improve the implementation of AI ethics recommendations globally, particularly in Global South countries?

speaker

Heramb Podar


explanation

Heramb noted that many countries, especially in the Global South, are slow in implementing AI ethics recommendations. This highlights a need for research into effective implementation strategies and addressing the global digital divide in AI governance.


How can we develop more effective metrics for assessing countries’ implementation of AI ethics and governance frameworks?

speaker

Heramb Podar


explanation

Heramb mentioned the need for grounded principles or metrics to assess countries’ follow-through on AI ethics commitments. This suggests a need for research into developing more robust assessment methodologies.


How can we improve AI literacy among citizens to ensure they can be mature users and critical watchdogs in the AI market?

speaker

Myoung Shin Kim


explanation

Myoung Shin emphasized the importance of improving citizens’ AI literacy to help AI ethics take root in society. This suggests a need for research into effective AI education strategies for the general public.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.