Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Full session report
Owen Larter
Microsoft has made significant efforts to prioritise the responsible use of AI technology. They have dedicated six years to building their responsible AI programme, which involves a team of over 350 experts in various fields including engineering, research, legal, and policy. Their responsible AI standard is based on the principles outlined by the Organisation for Economic Co-operation and Development (OECD), emphasising the importance of ethical AI practices.
In addition to their internal initiatives, Microsoft recognises the need for active participation from private companies and the industry in AI governance discussions. To foster collaboration and best practices, they have founded the Frontier Model Forum, which brings together leading AI labs. This forum focuses on developing technical guidelines and standards for frontier models. Microsoft also supports global efforts, such as those taking place at the United Nations (UN) and the OECD, to ensure that AI technology is governed responsibly.
Another crucial aspect highlighted by Microsoft is the importance of regulations to effectively manage the use and development of AI. They actively share their insights and experiences to help shape regulations that address the unique challenges posed by AI technology. Furthermore, Microsoft aims to build capacity for governments and industry regulators, enabling them to navigate the complex landscape of AI and ensure the adoption of responsible practices.
Microsoft emphasises the need for safeguards at both the model and application levels of AI development. The responsible development of AI models includes considering ethical considerations and ensuring that the model meets the necessary requirements. However, Microsoft acknowledges that even if the model is developed responsibly, there is still a risk if the application level lacks proper safeguards. Therefore, they stress the importance of incorporating safeguards throughout the entire AI development process.
Microsoft also supports global governance for AI and advocates for a representative process in developing standards. They believe that a global governance regime should aim for a framework that includes standard setting, consensus on risk assessment and mitigation, and infrastructure building. Microsoft cites the International Civil Aviation Organization and the Intergovernmental Panel on Climate Change as potential models for governance, highlighting the importance of collaborative and inclusive approaches to effectively govern AI.
In conclusion, Microsoft takes the responsible use of AI technology seriously, as evident through their comprehensive responsible AI programme, active participation in AI governance discussions, support for global initiatives, and commitment to shaping regulations. They emphasise the need for safeguards at both the model and application levels of AI development and advocate for global governance that is representative, consensus-driven, and infrastructure-focused. Through their efforts, Microsoft aims to ensure that AI technology is harnessed responsibly and ethically, promoting positive societal impact.
Clara Neppel
During the discussion, the speakers highlighted the ethical challenges associated with technology development. They emphasised the need for responsibility and the embedding of values and business models into technology. IEEE, with its constituency of 400,000 members, was recognised as playing a significant role in addressing these challenges.
Transparency, value-based design, and bias in AI were identified as crucial areas of concern. IEEE has been actively engaging with regulatory bodies to develop socio-technical standards in these areas. They have already established a standard that defines transparency and another standard focused on value-based design. These efforts aim to ensure that AI systems are accountable, fair, and free from bias.
The importance of standards in complementing regulation and bringing interoperability to regulatory requirements was emphasised. IEEE has been involved in discussions with experts to address the technical challenges related to AI. An example was provided in the form of the UK’s Children’s Code, which was complemented by an IEEE standard on age-appropriate design. This highlights how standards can play a crucial role in ensuring compliance and interoperability within regulatory frameworks.
Capacity building for AI certification was also discussed as an essential component. IEEE has trained over 100 individuals for AI certification and is also working on training certification bodies to carry out assessments. This capacity building process ensures that individuals and organisations possess the necessary skills and knowledge to navigate the complex landscape of AI and contribute to its responsible development and deployment.
The panel also explored the role of the private sector in protecting democracy and the rule of law. One speaker argued that it is not the responsibility of the private sector to safeguard these fundamental principles. However, another speaker highlighted the need for legal certainty, which can only be provided through regulations, in upholding the rule of law. This debate prompts further reflection on the appropriate roles and responsibilities of different societal actors in maintaining democratic values and institutions.
The negative impact of uncertainty on the private sector was acknowledged. The uncertain environment poses challenges for businesses and impedes economic growth and job creation. This concern underscores the need for stability and predictability to support a thriving and sustainable private sector.
Lastly, the importance of feedback loops and common standards in AI was emphasised. This includes ensuring that feedback is taken into account to improve and retrain AI systems. Drawing from lessons learned in the aviation industry, the development of benchmarking and common standards was seen as vital for enabling efficient and effective collaboration across different AI systems and applications.
In conclusion, the speakers underscored the importance of addressing ethical challenges in technology development, specifically in the context of AI. IEEE’s involvement in shaping socio-technical standards, capacity building for AI certification, and the need for transparency, value-based design, and addressing bias were highlighted. The role of standards in complementing regulation and promoting interoperability was emphasised, along with the necessity of legal certainty to uphold democracy and the rule of law. The challenges posed by uncertainty in the private sector and the significance of feedback loops and common standards in AI were also acknowledged. These insights contribute to the ongoing discourse surrounding the responsible development and deployment of technology.
Maria Paz Canales
The discussion on artificial intelligence (AI) governance highlighted several key points. Firstly, there is a need for a clearer understanding and clarification of AI governance. This involves the participation of various actors, including civil society organizations and communities that are directly affected by AI. Civil society organizations require a better understanding of AI risk areas, as well as effective strategies for AI implementation.
Another important point is the evaluation of AI’s impact on rights, with a specific focus on inclusiveness. It is essential to assess the effect of AI on civil, political, economic, social, and cultural rights. Unfortunately, the perspectives of communities impacted by AI are often excluded in the assessment of AI risks. Therefore, it is necessary to include the viewpoints and experiences of these communities to comprehensively evaluate the impact of AI. This inclusive approach to AI impact assessment can lead to the development of responsible and trustworthy technology.
The discussion also emphasized the importance of education and capacity building in understanding AI’s impact. The public cannot fully comprehend the consequences of AI without being adequately educated about the subject in a concrete and understandable way. Therefore, it is crucial to provide not only technical language but also information on how AI impacts daily life and basic rights. By enhancing education and capacity building, individuals can better grasp the implications and intricacies of AI technology.
Furthermore, it was highlighted that there should be some level of complementarity between voluntary standards and legal frameworks in AI governance. This includes ensuring responsibility at different levels, from the design stage to the implementation and functioning of AI systems. The relationship between voluntary standards and legal frameworks must be carefully balanced to create an effective governance structure for AI.
In addition, the discussion underscored the importance of accounting for shared responsibility within the legal framework. It is crucial to establish effective communication between the different operators involved in the production and use of AI. This communication should adhere to competition rules and intellectual property regulations, avoiding any violations. By accounting for shared responsibility, the legal framework can ensure ethical and responsible AI governance.
Lastly, the discussion emphasized the need for a bottom-up approach in AI governance. This approach involves the active participation of societies and different stakeholders at both the local and global levels. Geopolitically, there is a need to hear more about experiences and perspectives from various stakeholders in global-level governance discussions. By adopting a bottom-up approach, AI governance can become more democratic, inclusive, and representative of the diverse needs and interests of stakeholders.
In conclusion, the discussion on AI governance highlighted the importance of a clearer understanding and clarification of AI governance, inclusiveness in impact assessment, education and capacity building, complementarity between voluntary standards and legal frameworks, the consideration of shared responsibility, and a bottom-up approach in AI governance. By addressing these aspects, it is possible to develop responsible and trustworthy AI technology to benefit society as a whole.
Thomas Schneider
The topic of AI regulation is currently being discussed in the context of its application. The argument put forth is that instead of regulating AI as a tool, it should be regulated in consideration of how it is used. This means that regulations should be tailored to address the specific applications and potential risks of AI, rather than imposing blanket regulations on all AI technologies. It is believed that this approach will allow for a more nuanced and effective regulation of AI.
Voluntary commitment in AI regulation is seen as an effective approach, provided that the right incentives are in place. Instead of enforcing compulsory regulations, which can often be complicated and unworkable, voluntary agreements can be more successful. By providing incentives for AI developers and users to adhere to certain standards and guidelines, a more cooperative and collaborative approach can be fostered, ensuring responsible and ethical use of AI technology.
The Council of Europe is currently developing the first binding convention on AI and human rights, which is seen as a significant step forward. This intergovernmental agreement aims to commit states to uphold AI principles based on the norms of human rights, democracy, and the rule of law. The convention is intended not only to ensure the protection of fundamental rights in the context of AI, but also to create interoperable legal systems across countries. This represents a significant development in the global governance of AI and the protection of human rights.
The need for agreement on fundamental values is also highlighted in the discussions on AI regulation. It is essential to have a consensus on how to respect human dignity and to ensure that technological advancements are made while upholding human rights. This ensures that AI development and deployment align with society’s values and principles.
Addressing legal uncertainties and tackling new challenges is another important aspect of AI regulation. As AI technologies continue to evolve, it is necessary to identify legal uncertainties and clarify them to ensure a clear and coherent regulatory framework. Breaking down the new elements and challenges associated with AI is crucial to ensuring that regulations are effective and comprehensive.
In solving the problems related to AI regulation, it is emphasized that using the best tools and methods is essential. Different problems may require different approaches: some methods are faster but less sustainable, while others are more sustainable but take longer to implement. By utilizing a mix of tools and methods, it is possible to effectively address identified issues.
Stakeholder cooperation is also considered to be of utmost importance in the realm of AI regulation. All stakeholders, including governments, businesses, researchers, and civil society, need to continue to engage and cooperate with each other, leveraging their respective roles. This collaboration ensures that diverse perspectives and expertise are taken into account when formulating regulations, thereby increasing the chances of effective and balanced AI regulation.
However, there is also opposition to the creation of burdensome bureaucracy in the process of regulating AI. Efforts should be made to clarify issues and address challenges without adding unnecessary administrative complexity. It is crucial to strike a balance between ensuring responsible and ethical use of AI technology and avoiding excessive burdens on AI developers and users.
In conclusion, the discussions on AI regulation center on the need to regulate AI in consideration of its application, rather than treating it as a tool. Voluntary commitments can be effective, provided the right incentives are in place, while the binding convention being developed by the Council of Europe marks a significant step towards enforceable protections. Agreement on fundamental values, the resolution of legal uncertainties, and stakeholder cooperation are crucial aspects of AI regulation, as is striking a balance between effective regulation and burdensome bureaucracy.
Suzanne Akkabaoui
Egypt has developed a comprehensive national AI strategy aimed at fostering the growth of an AI industry. The strategy is based on four pillars: AI for government, AI for development, capacity building, and international relations. It focuses on leveraging AI technologies to drive innovation, improve governance, and address societal challenges.
Under the AI for government pillar, Egypt aims to enhance the effectiveness of public administration by adopting AI technologies. This includes streamlining administrative processes, improving decision-making, and delivering efficient government services.
The AI for development pillar highlights Egypt’s commitment to utilizing AI as a catalyst for economic growth and social development. The strategy focuses on promoting AI-driven innovation, entrepreneurship, and tackling critical issues such as poverty, hunger, and inequality.
Capacity building is prioritized in Egypt’s AI strategy to develop a skilled workforce. The country invests in AI education, training programs, research, and collaboration between academia, industry, and government.
International cooperation is emphasized to exchange knowledge, share best practices, and establish standardized approaches to AI governance. Egypt actively participates in global discussions on AI policies and practices as a member of the OECD AI network.
To ensure responsible AI deployment, Egypt has issued a charter that provides guidelines for promoting citizen well-being and aligning with national goals. These guidelines address aspects such as robustness, security, safety, and social impact assessments.
Egypt also recognizes the importance of understanding cultural differences and bridging gaps for technological advancements. The country aims to address cultural gaps and promote inclusivity, ensuring that the benefits of AI are accessible to all segments of society.
Overall, Egypt’s AI strategy demonstrates a commitment to creating an AI industry and leveraging AI for governance, development, capacity building, and international cooperation. The strategy aims to foster responsible and inclusive progress through AI technologies.
Moderator
In the analysis, several speakers discuss various aspects of AI and its impact on different sectors and societies. One key point raised is the need for more machines, especially AI, in Japan to sustain its ageing economy. With Japan facing social problems related to an ageing population, the introduction of more people and machines is deemed necessary. However, it is interesting to note that while Japan recognises the importance of AI, they believe that its opportunities and possibilities should be prioritised over legislation. They view AI as a solution rather than a problem and want to see more of what AI can do for their society before introducing any regulations.
Furthermore, the G7 delegates are focused on creating a report that specifically examines the risks, challenges, and opportunities of new technology, particularly generative AI. They have sought support from the OECD in summarising this report. This highlights the importance of international cooperation in addressing the impact of AI.
Egypt also plays a significant role in the AI discussion. The country has a national AI strategy that seeks to create an AI industry while emphasising that AI should enhance human labour rather than replace it. Egypt has published an AI charter for responsible AI and is a member of the OECD AI network. This showcases the importance of aligning national strategies with global initiatives and fostering regional and international collaborations.
Microsoft is another notable player in the field of AI. The company is committed to developing AI technology responsibly. It has implemented a responsible AI standard based on OECD principles, which is shared externally for critique and improvement. Microsoft actively engages in global governance conversations, particularly through the Frontier Model Forum, where they accelerate work on technical best practices. Their contributions highlight the importance of private sector involvement in governance discussions.
UNESCO has made significant contributions to the AI discourse by developing a recommendation on the ethics of AI. The recommendation was developed through a two-year multi-stakeholder process and adopted by 193 countries. It provides a clear indication of ethical values and principles that should guide AI development and usage. Furthermore, UNESCO is actively working on capacity building to equip governments and organizations with the necessary skills to implement AI systems ethically.
In terms of addressing concerns and ensuring inclusivity, it is highlighted that AI bias can be addressed even before AI regulations are in place. Existing human rights frameworks and data protection laws can start to address challenges related to bias, discrimination, and privacy. For example, UNESCO has been providing knowledge on these issues to judicial operators to equip them with the necessary understanding of AI and the rule of law. Additionally, discussions around AI governance emphasise the need for clarity in frameworks and the inclusion of individuals who are directly impacted by AI technologies.
The analysis also suggests that responsible development, governance, regulation, and capacity building should be multi-stakeholder and cooperative processes. All sectors need to be involved in the conversation and implementation of AI initiatives to ensure effective and inclusive outcomes.
An interesting observation is the need for a balance between voluntary standards and legal frameworks. Complementarity is needed between these two approaches, especially in the design, implementation, and use of AI systems. Furthermore, the analysis highlights the importance of a bottom-up approach in global governance, taking into account diverse stakeholders and global geopolitical contexts. By incorporating global experiences from different stakeholders, risks can be identified, and relevant elements can be considered in local contexts.
Overall, the analysis provides insights into various perspectives on AI and the importance of responsible development, global collaboration, and inclusive policies in shaping the future of AI.
Seth Center
AI regulation needs to keep up with the rapid pace of technological advancements, as there is a perceived inadequacy of government response in this area. The tension between speed and regulation is particularly evident in the case of AI. The argument put forth is that regulations should be able to adapt and respond quickly to the ever-changing landscape of AI technology.
On the other hand, it is important to have risk frameworks in place to address the potential risks associated with AI. The United States has taken steps in this direction by introducing the AI Bill of Rights and Risk Management Framework. These foundational documents have been formulated with the contributions of 240 organizations and cover the entire lifecycle of AI. The multi-stakeholder approach ensures that various perspectives are considered in managing the risks posed by AI.
Technological advancements, such as the development of foundation models, have ushered in a new era of AI. Leading companies in this field are primarily located in the United States. This highlights the influence and impact of US-based companies on the development and deployment of AI technologies worldwide. The emergence of foundation models has disrupted the technology landscape, showcasing the potential and capabilities of AI systems.
To address the challenges associated with rapid AI evolution, the implementation of voluntary commitments has been proposed. The White House has devised a framework called ‘Voluntary Commitments’ to enhance responsible management of AI systems. This framework includes elements such as red teaming, information sharing, basic cybersecurity measures, public transparency, and disclosure. Its objective is to build trust and security amidst the fast-paced evolution of AI technologies.
In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology. The perceived inadequacy of government response highlights the need for agile and adaptive regulations. Additionally, risk frameworks, such as the AI Bill of Rights and Risk Management Framework, are important in managing the potential risks associated with AI. The emergence of technologies like foundation models has brought about a new era of AI, with leading companies based in the US driving innovation in this field. The implementation of voluntary commitments, as proposed by the White House, aims to foster trust and security in the ever-evolving landscape of AI technologies.
Audience
The discussions at the event highlighted the challenges that arise in capacity building due to time and financial commitments. Ansgar Koene from EY raised the problem of the time commitments involved in the capacity-building process, which may bring additional financial burdens for various parties. Small and medium enterprises (SMEs) and civil society organizations might not be able to afford the cost of someone who isn’t directly contributing to their main products or services. Academics may also struggle to get academic credit for engaging in this kind of process.
To address these financial and time commitment issues in capacity building, participants stressed the importance of finding solutions. Ansgar Koene specifically asked for suggestions to tackle this problem, underscoring the need to explore feasible strategies to alleviate the burden that time and financial commitments place on different stakeholders.
There were also concerns raised about the implementation of responsible AI, particularly regarding the choice between system level guardrails and model level guardrails. The discussion highlighted worries about tech vendors providing unsafe models if responsibility for responsible AI is pushed to the system level. This sparked a debate about the best approach to implement responsible AI and the potential trade-offs associated with system level versus model level guardrails.
Moreover, the event touched upon the Hiroshima process and the expectation of a principles-based approach to AI. The previous G20 process, which focused on creating Data Free Flow with Trust, served as a reference point for the discussion. There was a question about the need for a principles-based approach for AI, suggesting the desire to establish ethical guidelines and principles to guide the development and deployment of AI technologies.
In conclusion, the discussions shed light on the challenges posed by time and financial commitments in capacity building and the need for solutions to mitigate these issues. Concerns about system level versus model level guardrails in responsible AI implementation emphasized the importance of balancing safety and innovation in AI. The desire for a principle-based approach to AI and the establishment of ethical guidelines were also highlighted.
Prateek Sibal
UNESCO has developed a comprehensive recommendation on the ethics of AI, achieved through a rigorous multi-stakeholder process. Over two years, 24 global experts collaborated on the initial draft, which then underwent around 200 hours of intergovernmental negotiations. In 2021, all 193 member countries adopted the recommendation, emphasizing the global consensus on addressing ethical concerns surrounding AI.
The recommendation includes values of human rights, inclusivity, and sustainability, serving as a guide for developers and users. It emphasizes transparency and explainability, ensuring AI systems are clear and understandable.
UNESCO is implementing the recommendation through various tools, forums, and initiatives. This includes a readiness assessment methodology, a Global Forum on the Ethics of AI, and an ethical impact assessment tool for governments and companies procuring AI systems.
The role of AI in society is acknowledged, with an example of a robot assisting teachers potentially influencing learning norms. Ethical viewpoints are crucial to align AI with societal expectations.
Prateek Sibal advocates for inclusive multi-stakeholder conversations around AI, emphasizing the importance of awareness, accessibility, and sensitivity towards different sectors. He suggests financial compensation to facilitate civil society engagement.
In conclusion, UNESCO’s recommendation on AI ethics provides valuable guidelines for responsible AI development. Their commitment to implementation and inclusive dialogue strengthens the global effort to navigate ethical challenges presented by AI.
Galia
The speakers in the discussion highlight the critical role of global governance, stakeholder engagement, and value alignment in working towards SDG 16, which focuses on Peace, Justice, and Strong Institutions. They address the challenges faced in implementing these efforts and stress the importance of establishing credible value alignment.
Galia, one of the speakers, emphasizes the mapping exercises conducted with the OECD regarding risk assessment. This suggests that the speakers are actively involved in assessing potential risks and vulnerabilities in the context of global governance. The mention of these mapping exercises indicates that concrete steps are being taken to identify and address potential obstacles.
Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically highlights the significance of ensuring alignment with values at a global level. This implies that involving various stakeholders and aligning their interests and values is crucial to achieving successful global governance. This collaborative approach allows for a wider range of perspectives and enables a more inclusive decision-making process.
The sentiment expressed by the speakers is positive, indicating their optimism and belief in overcoming the challenges associated with implementation. Their focus on credible value alignment suggests that they recognize the importance of ensuring that the principles and values underpinning global governance are widely accepted and respected. By emphasizing stakeholder engagement and value alignment, the speakers underscore the need for a holistic approach that goes beyond mere top-down control and incorporates diverse perspectives.
In summary, the discussion emphasizes the vital aspects of global governance, stakeholder engagement, and value alignment in achieving SDG 16. The speakers’ identification of challenges and their emphasis on credible value alignment demonstrate a proactive and thoughtful approach. Their mention of the OECD mapping exercises also indicates a commitment to assessing risks and vulnerabilities. Overall, the analysis underscores the significance of collaboration and the pursuit of shared values in global governance.
Nobuhisa Nishigata
Japan has emerged as a frontrunner in the discussions surrounding artificial intelligence (AI) during the G7 meeting. Japan proposed the inclusion of AI as a topic for discussion at the G7 meeting, and this proposal was met with enthusiasm by the other member countries. Consequently, the Japanese government has requested the OECD to continue the work on AI further. This indicates the recognition and value placed by the G7 nations on AI and its potential impact on various aspects of society.
While Japan is proactive in advocating for AI discussions, it adopts a cautious approach towards the introduction of legislation for AI. Japan believes that it is premature to implement legislation specifically tailored for AI at this stage. Nevertheless, Japan acknowledges and respects the efforts made by the European Union in this regard. This perspective highlights Japan’s pragmatic approach towards ensuring that any legislation around AI is well-informed and takes into account the potential benefits and challenges presented by this emerging technology.
Underlining its commitment to fostering cooperation and setting standards in AI, Japan has established the ‘Hiroshima AI process’. This initiative aims to develop a code of conduct and encourage project-based collaboration in the field of AI. Japan’s push for G7 discussion of AI dates back to 2016, and the approach has since shifted from voluntary commitment to government-initiated inclusive dialogue among the G7 nations. Japan is pleased with the progress made in the Hiroshima process and the inclusive dialogue it has facilitated, and the work that began in 2016 has continued to move forward successfully despite unexpected events along the way.
Japan recognises the immense potential of AI technology to serve as a catalyst for economic growth and improve everyday life. It believes that AI has the ability to support various aspects of the economy and enhance daily activities. This positive outlook reinforces Japan’s commitment to harnessing the benefits of AI and ensuring its responsible and sustainable integration into society.
In conclusion, Japan has taken a leading role in driving discussions on AI within the G7, with its proposal being well-received by other member countries. While cautious about introducing legislation for AI, Japan appreciates the efforts made by the EU in this regard. The establishment of the ‘Hiroshima AI process’ showcases Japan’s commitment to setting standards and fostering cooperation in AI. Overall, Japan is optimistic about the potential of AI to generate positive outcomes for the economy and society as a whole.
Session transcript
Moderator:
[transcript unavailable]
Galia:
[transcript unavailable]
Moderator:
[transcript unavailable]
Nobuhisa Nishigata:
…et cetera. And then now we had the Hiroshima AI process and the second round of our chair in the G7 this year. So then we had many things happening, like, for example, 2019, the G7, the French chair, they introduced GPAI, the Global Partnership on AI. And the same year, Japan hosted the G20 meeting. And the G20 agreed on the G20 AI principles, but they are essentially the same, the OECD’s principles copied in the same text. Then we had some developments afterwards, and then it comes to this year, to 2023. So next slide, please. Next one, yes. So it’s more kind of history now. It’s seven years ago, the photo from Takamatsu. And then we had some discussion. Next slide, please. It’s going to show up some detail, yes. So that was the first time, when the minister, Ms. Takaichi, made the proposals and there were discussions among the G7 at the time to talk about AI, like what the risks, what the opportunities, and then what next. And then Japan wanted to have some common understanding, common principles to cope with these new technologies at that time. So the bottom line, maybe touching upon the relationship between innovation and regulation, those kinds of things, just think about where Japan is right now. We are facing several big social problems, such as aging. And we need more people, I mean, to sustain our economy. And then we need more machines. So Japan is, I think, in the kind of aged position where we need more machines to help us sustain the economy, the business, et cetera, or even daily life. So we are very much friendly to AI, but of course, we recognize some uncertainty in this technology. So then we started a discussion at the G7, trying to see how they felt about AI at the time. Then fortunately, our proposal on the AI discussion was very well received by the members of the G7. So the Japanese government decided to ask the OECD to continue the work further. Then that came out as the OECD principles in 2019. So that’s kind of the beginning, the whole history of what we have now. And then the next slide, please. So that’s just the introduction. Since Galia didn’t have the deck, I can do it for her. That’s the OECD principles. It’s very simple, 10 principles. The first five are more like the value-based principles, and the other five are more like recommendations to the policymakers of governments. And that’s just the 10 things. So the next one, please. And this year, as the chair of the G7, we hosted the first digital and tech ministers meeting in Takasaki in Japan. And it’s not only an AI ministers meeting, so we had like eight themes, and one of them is of course about AI. The third theme is responsible AI and global AI governance. And actually the ministers discussed more about the interoperability of AI governance. Looking at, you know, the European countries working hard to pass the AI Act. Of course, we know it. But on the other hand, Japan has not introduced legislation over AI technologies yet. We would like to see more of, you know, the opportunities and possibilities of this technology. So from Japan’s perspective, it’s too early to introduce legislation over AI. But on the other hand, we respect what the EU is doing.
So then we tried to start the discussion about international interoperability at the governance level, so that, you know, we don’t put more burden on the business side. I mean, of course, multinational companies should be able to work everywhere in the globe. So that was the thinking about the proposal. Then the next slide, please. Yes, thank you. So before the ministerial meeting, we were thinking more about the interoperability. But sometimes these kinds of things happen in the G7, like escalation to the leaders. So what happened is the leaders agreed to create or establish what we call the Hiroshima AI process. And the discussion is more focused on generative AI and the foundation models, the new technology. And now again, we are asking the OECD for some support to summarize the report for the stocktaking and the risks and the challenges and the opportunities of the new technology, particularly for generative AI. Then, of course, the goal is the development of a code of conduct for organizations, or project-based cooperation to support the development of new responsible AI tools and best practices. You can see the link here. This is the ministerial declaration in September. And the G7 delegates are working hard to compile the report, which is mandated to be reported to the leaders by the end of this year. And do we have more? Oh, no. So that’s about it. So in the end, coming back to the point from the moderator, Japan wants to see more of what the new technology can do, particularly for our society. I mean, not only Japan, but also the whole world. So thank you very much.
Moderator:
Thank you. That was a really good overview of what influences, perhaps, the space that a national government or an organization has to deal with when we’re thinking about how we set international principles and guidelines. How do we make them, how do we bring them home? And then how do we bring our own issues that we have in our own societies and economies back into these spaces to shape some of those responses? So that was a very nice full circle there. To continue on this track of how national governments deal with the international policy space, and how they bring their own opinions into it, we’re going to move from Japan to Egypt, from the room to online. And we’re going to hear from Ms. Suzanne Akkabaoui. Suzanne, I hope you are well connected and you can hear us. We can see you and we can hear you. The floor is yours. Perfect.
Suzanne Akkabaoui:
Thank you so much. Thank you for the opportunity to take part in this very interesting discussion, and with such an esteemed panel of guests. I’m Suzanne Akkabaoui. I am an advisor to the Minister of ICT on data governance. So this says a bit about how we are moving towards creating an institutional, legal, and technical infrastructure in line with the technological advancements that are happening, and clearly in relation to AI as well. So just to give you a bit of background, Egypt has a national AI strategy that aims to create an AI industry in Egypt. It also wishes to exploit AI technology to serve Egypt’s development goals. The AI strategy was built on four pillars: AI for government, AI for development, capacity building, and international relations. It also has four enablers: governance, data, ecosystem, and infrastructure. The strategy was drafted according to a model that promotes effective partnership between the government and the private sector, in a way to create a dynamic work environment, to support building digital Egypt, and to achieve digital transformation led by AI applications. One of the main principles of the strategy is that AI should enhance human labor and not replace it. Unlike our friends in Japan, we are a very young society, and the majority of the population is between 16 and 45 years of age. So we face challenges with respect to the acceptance of AI and in showing that it has positive aspects other than taking away jobs from this young population. So one of our main principles for the strategy is that AI should enhance human labor and not replace it. And this requires that we conduct a thorough impact assessment for each AI product, focusing on whether or not AI is the best solution to the problem and what the expected social and economic impacts of each new AI system are. The strategy also emphasizes the importance of fostering regional and international cooperation. As mentioned earlier, we are members of the OECD AI network; of the 70 countries that were mentioned earlier, we are one. Recently, we have published a charter for responsible AI. The charter is divided into two parts: general guidelines and implementation guidelines. The general guidelines give a layer of detail about how to implement the principles that were in the strategy. So in the general guidelines, we have a primary goal of using AI in government, and the purpose behind it is to promote the well-being of citizens and combat poverty, hunger, inequality, et cetera, which is in line with the human-centeredness principle. The general guidelines also provide that any end user has the fundamental right to know that they are using and interacting with an AI system. Again, a reaffirmation that no individual should be harmed by the introduction of an AI system, especially with respect to job creation. There is a list of general guidelines present in the document, all in line with the OECD principles.
For the implementation guidelines, which are the second part of the charter, they provide that AI should be robust, secure, and safe throughout the entire life cycle, that any AI project should be preceded by a pilot and a proof of concept, and that additional measures should be in place for sensitive and mission-critical AI applications. So in short, this is where we stand. We are in line with the existing principles, guidelines, and frameworks decided in the international arena. And we have a clear understanding of our cultural differences, trying to find ways to bridge the cultural and sociological gaps that come with AI and with technological advance.
Moderator:
Thank you so much, Suzanne, for sharing all of that and really emphasizing how AI approaches, policies, and frameworks need to be responsive to national context, cultures, local expectations while aligning with global values and also making sure that policies are interoperable with some of the other countries and regional and global initiatives so that we can manage towards truly global governance goals that we have. So as we’ve heard from our speakers from the national government side, I’m going to turn to some of our non-governmental stakeholders, and I’m going to go full circle and go back to international initiatives at the end, giving a bit of a breathing time to our newest speaker who joined us. Welcome, Thomas. So I’m going to turn now to Mr. Owen Larter from Microsoft and ask you, now that you’ve heard from the international space a little bit and how national governments cope with this challenge, how does that happen in a private company? How do you implement this? How do you come up with some of your own? And how do you dialogue with these initiatives?
Owen Larter:
Fantastic. Hello, is this on? Can people hear me? Excellent, thank you. Good morning, everyone. My name is Owen Larter. I work on responsible AI public policy issues at Microsoft. It’s a pleasure to be here and a pleasure to be able to join such an esteemed panel. So we are enthusiastic at Microsoft about the potential of AI. We’re excited to see the way in which customers are already using our Microsoft AI co-pilots to be more productive in their day-to-day life. And I think more broadly, we see a huge amount of promise in this technology to help us better understand and manage complex systems, and in doing so, respond to major challenges, whether it’s in relation to healthcare or the climate or improving education for everyone. But there are clearly risks. We feel as a private sector company developing this technology that we have a real responsibility to lean in and make sure that the technology is developed in a way that is safe and trustworthy. So I wanted to talk about three buckets of responsibilities that I view Microsoft as having to contribute to. The first one is to make sure that we’re developing this technology in a way that is responsible. And so we’ve been building out our responsible AI program for six years now. We’ve got over 350 people working right across the company on these issues from a real diversity of backgrounds, which we feel is very important. So we have people who are deep in the engineering space, research, legal, policy, people with sociological backgrounds, all coming together to work out how we identify AI risks and then put together a program internally that can help address them. So we’ve got, at the core of our program, our responsible AI standard. This is an internal standard. It is based on the OECD principles. It’s based around making sure that people are upholding our AI principles at Microsoft. We’ve got tens of requirements across our 14 goals, and it really is a rule book. So anyone at Microsoft that is developing or deploying an AI system has to abide by this responsible AI standard. We’ve also shared this externally now, so anyone can go out and find it online if you type in Microsoft’s responsible AI standard. We think this is really important, A, to show that we’re doing the legwork here, and it’s not just nice words, but B, so that others can critique it, and build on it, and improve it as well. And then we’re building out the rest of our responsible AI infrastructure at Microsoft as well. So we have a sensitive uses team, so that anytime we’re engaging in developing a higher-risk system, we bring greater scrutiny and apply additional safeguards. We have an AI red team that is a centralized team within the company and goes product to product before we release it, making sure that we’re evaluating it thoroughly and that we’re able to identify, measure, and mitigate any potential risks. That’s that first bucket around responsible development. I also think we as a company, and we as an industry, quite frankly, have a real responsibility to lean into governance discussions like this. So we have recently founded the Frontier Model Forum with a number of other leading AI labs. We are trying to accelerate work around technical best practice on frontier models in particular. So these are the really highly capable models that offer a lot of promise but also pose some very significant risks as well.
We want to develop that best practice, we want to implement it ourselves as companies, but we also want to share it externally to inform conversations on governance. And we’re really pleased to be able to engage internationally in global governance conversations: very supportive of the work that is going on at the UN, you know, the UN doing a very good job, I think, of catalyzing a globally representative conversation; very supportive of UNESCO’s recommendation on the ethics of AI; and of course all the technical work that is being done by the OECD in the background, very supportive of that as well. And I think the final responsibility we have is to lean in and help shape the development of regulation as well. So the self-regulatory steps that we’ve taken we feel are really important, but they are just the start. We do feel that this new technology will need new rules, and so we want to lean in. We want to share information about where the technology is going; it’s moving at a very, very fast pace, so how can we help others understand exactly the trajectory of the technology. We want to share what’s working in terms of how we’re identifying and mitigating risks internally, and also what’s not working, quite frankly. And then finally, we want to help build capacity. I think this is going to be a really key issue to underpin the development of governance frameworks and regulation in the coming years. How do you make sure that governments have the capacity to develop viable, effective regulation? And then also, critically, how do you ensure that regulators have the capacity to understand how AI is going to impact their sector, whether it’s healthcare, whether it’s financial services, and be able to address the risks that it may pose? So I’ll stop there for now and pass it back to the chair.
Moderator:
Thank you, Owen. The picture that paints for me was very, very structured, so thank you for that. It always makes the moderator’s job easy when there’s a clear one, two, three, four points in a speech. What really strikes me from what you say is that through these steps of responsible development, governance, regulation, capacity building, and multi-stakeholder cooperation between governments, private sector, civil society, the technical community, academics, and research, it’s not one or the other. It’s not that one sector needs to do this and the other sector needs to do that, but we all need to be at the table at all of those levels to really make this work and then actually have the buy-in to be able to implement all of this when the rubber meets the road. You referenced how Microsoft dialogues with and supports some of the UN initiatives, so I think that was a good segue to turn to Prateek and ask how UNESCO thinks about all this. You’ve done a lot of work in coming up with the ethical guidelines on AI. You missed the beginning of the session: we had a little poll here for the audience to see how familiar they are with some of the AI policy frameworks out there, and UNESCO is a close second after the OECD, so there’s a good understanding, I think, of what is in the guidelines. But I think it would be great to hear a little bit about what you do and how it works when you actually try to implement this, what lessons you’ve learned, and what the challenges are in actually bringing those global principles to the national level, building capacity as Owen mentioned. How is that working?
Prateek Sibal:
And how much time do I have? Five minutes. Okay. Right. Thanks, and apologies for being late. There was a scheduling conflict and I was hosting another session. So just very briefly on the UNESCO recommendation on the ethics of AI: it was developed through a multi-stakeholder process over a period of two years. For the recommendation itself, we had a group of 24 experts selected from around the world who prepared the first draft. This draft was widely consulted with different stakeholder groups in different regions, in different languages, and then the document went through an intergovernmental process of about 200 hours of negotiations. And then we had this as the first global normative instrument on artificial intelligence, which was adopted in 2021 by 193 countries. As far as the structure of this recommendation goes, maybe it’s worth spending a little bit of time on why we are talking about ethics. So when we are talking about technologies, there are different kinds of views of how we see technology. One is a very deterministic view of technology: that we will have technology, it will guide our life and it will do things. Then we have a very instrumentalist view of technology, which is like, oh, it’s just a tool and it’s up to us how we use it and what we do with it. And then there’s a third view of technology, which is that technology is a mediating force in society. So not only is technology influencing our actions, but also we are influencing how technology is shaping the world. So let me give you an example. At a very micro level, we know speed breakers, right? They force us to slow down. It’s a technology which is embedded with a script. At a macro level, if in a classroom you have a teacher and then you put a robot there to assist in the teaching, won’t our ideas of what teaching and learning look like, and what the role of a teacher is in our world, also start shifting? So there’s a shift which will happen in terms of norms. Now when we talk about ethics, it’s not just about saying, okay, these are the principles. We need to go into why. Because companies, developers, all the people who are developing and using AI are embedding technology with certain scripts. And these scripts need to be informed by the ethical values and principles that we want. And this is what the recommendation does. It talks about values of human rights. It talks about leaving no one behind. It talks about sustainability. And then it goes into articulating the principles around transparency, explainability. And once we talk about these principles, it gives a clear indication to the developers and users: okay, this is how the technology should interface with us. Now it goes on further to talk about the policy areas and what specifically needs to be done, for instance, in the domain of education, in the domain of communication and information. We have so much misinformation, disinformation going around. So it’s a very beautiful document, I would say. And I would invite you to look at it. Now, how are we going about addressing the implementation part, the second part of your question, Timea? Because that’s where the change would hopefully happen. The recommendation itself calls for the development of certain tools: a readiness assessment methodology, which has been developed by UNESCO to look at where countries stand vis-a-vis their state of AI development, vis-a-vis the policy areas and so on in the recommendation.
And this is ongoing in about 50 countries around the world. And next year, in 2024, we’ll have the second Global Forum on the Ethics of AI in February, which will be a platform to learn from what’s coming up in different parts of the world. Another tool is an ethical impact assessment. And there are so many of these tools, and it is wonderful to have so many diverse perspectives. This is really to guide companies and governments who are procuring AI systems on what the ethical aspects are that you need to look at at each stage of the AI lifecycle, and what we need to be concerned about. Going forward, on capacity building, which was also mentioned by Owen: we don’t need to wait for these kinds of regulations to be put in place to start working on capacity building. As an example, we are working with the judiciary. And the judiciary can actually, even in a lot of countries where you don’t have AI regulation, and in most countries, actually, we don’t have any kind of AI regulation, rely on existing human rights frameworks or other laws, like data protection laws, to start addressing the challenges around bias, discrimination, or privacy, and so on. So at UNESCO, we have been working with the judiciary for over 10 years. And we have reached over 35,000 judicial operators over these 10 years in 160 countries on issues related to freedom of expression, access to information, and safety of journalists. And in 2020, we started working on AI and the rule of law. And we now have a massive open online course, which was used by about 5,000 judicial operators. And when I say judicial operators, I mean judges, lawyers, prosecutors, people working in legal administrations, on what the opportunities are of using AI in the judicial system. So use cases around case management; we caution them about predictive justice. But also, what are the broader human rights and legal implications of this technology? And how can they address those challenges? Because you will also start seeing binding judgments, and we are already seeing these coming around. And we’re working specifically with regional human rights courts in Europe, in Latin America, and in Africa, so that when we have those judgments coming out, they percolate down to the national level. And finally, I will mention the work that we’re doing around capacities for civil servants. We keep on loading civil servants and governments with a lot of new work in complex, volatile, uncertain environments, and ask them to work on regulations and implement them without really equipping them with the necessary skills. We saw the case in the Netherlands, and even the Robodebt scandal in Australia, where AI systems were used and thousands of people were deprived of public benefits, which had very serious implications. So these duty bearers, we need to work with them on strengthening their capacity. So we’ve developed an AI and digital transformation competency framework for civil servants. And in fact, this morning we were launching a dynamic coalition on digital capacity building, which will focus on capacities for civil servants. So I’ll stop here, and happy to go on later.
Moderator:
Thank you so much, Prateek. There was a lot there to unpack, but what was particularly striking in your comments is that we don't need to wait for AI regulation to start addressing some of those biases that we sometimes see coming out of AI systems. And I think that is something to note here. We finally have a full panel; as you can see, we've expanded beyond the table. I'm sorry that most of you are sitting so uncomfortably, but yeah, we really packed everything in here. Welcome, Dr. Center, to the conversation. We've been jumping between national and international frameworks: how do we set some of these guidelines and norms, and how do we implement them? We started this morning with a poll on how aware the audience here is of some of the initiatives in place at the national and international levels. We haven't asked them about the White House framework, but we did ask them about the NIST Risk Management Framework. And I have to say, that was the one the audience was least aware of: a 3.9 awareness on a scale of 10. So with that introduction, and not to say that there is anything wrong with the framework, it might need a little bit of a refresher for the audience on what the U.S. is doing. I think the audience is keen to hear from you on how the U.S. is positioning itself around AI governance and AI frameworks at the national and international levels, and what are some of the approaches that you're taking.
Seth Center:
Thank you, and I'm sorry I was late. I think your taxi must have been faster than mine coming from our previous event. So, speed and regulation are in natural tension. I think we know that in an AI context; we know that in any technological governance framework. It's most conspicuous right now because of the intensification of political and cultural attention on AI, and the seeming inadequacy of governments to meet the moment, or at least the perception that they're not moving fast enough. And so I think that dynamic is exactly the right one to frame this session around. In the United States, we have two foundational documents, both from the last two years, that preceded the ChatGPT moment. One is the AI Bill of Rights, which provides a sociotechnical framework for dealing with automated systems that is sector agnostic; in other words, how do we think about a risk framework for any kind of automated system? And the other is the Risk Management Framework, which scored a 3.9. What was the highest score? A 5.9, for the OECD. Well, if Audrey were here, we would tell her we are coming after her and going for a 5.9 next year. The Risk Management Framework shares some commonalities with the OECD framework insofar as it's multi-stakeholder: 240 organizations contributed over 18 months. It was a rigorous effort to solicit views from industry and civil society to create a framework for everyone, all the way from users to developers, across the entire AI life cycle, to manage risk. Those preceded the moment in which all of us are here filling these rooms to talk about AI. What happened, and I think this happened at a national level and a global level, is that many of us forgot all of the hard and valuable work that had been done before foundation models emerged. All of us lurched into a new political moment, a new cultural moment, precipitated by the belief that we're in a new technological era, or at least a new era of AI. In the United States, obviously, a lot of the leading companies developing large language models and frontier models are located there. We felt it was incumbent on the United States to move quickly. As many of you know, it is unlikely that we will rapidly move to legislation; I think that's the case in many countries. We realized that the moment required action, and action that defined the problem in a new way and then set obligations around the developers of frontier models. Over the course of the spring and summer, the White House talked to these developers of frontier models, tried to understand and define the nature of the unique risks posed by these models, as distinct from, or at least in addition to, the more basic, substantial but understood risks posed by AI that we've been talking about for several years, and then tried to create a technically informed framework for dealing with them. That emerged as something called the voluntary commitments, which companies in the United States have signed on to in two waves. Essentially, it asks companies to undertake a series of obligations to responsibly manage their AI systems in a secure, trustworthy way. What does it entail? I think what we would now consider fairly understandable basic steps at the level of principles, but when we get into the question of implementation, it gets quite complicated very quickly.
The commitments essentially require things like red teaming and information sharing among the frontier model developers, so that as they discover emergent risks, they can share them with each other and each company or developer will understand them. It includes basic principles of cybersecurity and cyber hygiene, with the logic that the model weights, which essentially provide the power of these finished models, are sufficiently important to protect that companies need to treat them essentially as the crown jewels of their IP. It includes disclosure: public transparency and disclosure. In other words, if you think about a basic idea like a nutrition card for a food product, you would want to disclose the basic information about how a model has been trained and how powerful it is, so everyone understands the power of the model itself. The idea is that this combination of internal technical work and external transparency will generate the kind of trust and security that we need as these models continue to rapidly evolve, prior to, or as a bridge to, a legal or regulatory framework in which we can deal with them in a more substantial way. This is a bridge; it's a first step. If we were to take a poll, I think one thing people have focused on is the voluntary aspect, as opposed to the technical criteria underneath it. If you actually look at the technical criteria, it's quite a serious effort by the engineers, computer scientists and designers to come to terms with what they're building, and I would suggest it probably represents the best technical framework for thinking about the era we're moving into, even if I think there's going to be extraordinary diversity in the kinds of legal approaches we take in the coming era.
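To make the "nutrition card" idea concrete, here is a minimal illustrative sketch of the kind of disclosure record such a card might carry. The field names are assumptions for illustration only; the voluntary commitments do not prescribe a specific schema, and real disclosures vary by developer.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical 'nutrition card' for a foundation model.

    These fields are illustrative assumptions, not the schema of the
    voluntary commitments or of any developer's actual model card.
    """
    name: str
    developer: str
    training_data_summary: str     # high-level description, not raw data
    training_compute_flops: float  # rough scale of training compute
    capabilities: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    red_team_findings: list[str] = field(default_factory=list)

# Example disclosure for a fictional model.
card = ModelCard(
    name="example-model-1",
    developer="Example Lab",
    training_data_summary="Public web text plus licensed corpora",
    training_compute_flops=1e24,
    capabilities=["text generation", "summarisation"],
    known_limitations=["may state false facts with confidence"],
    red_team_findings=["some jailbreak prompts bypass refusals"],
)
print(card)
```

The point of such a record is the one made above: external transparency about how a model was trained and how capable it is, paired with internal technical work like red teaming and weight security, is what generates trust while a legal framework catches up.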
Moderator:
Thank you so much for that, and thank you for joining us despite your very busy schedule. Maybe we should ask the question again at the end and see if the numbers on understanding of some of the frameworks improve. I think what's very useful for us to know is that commitments and work on this don't just come out of the blue; they are really built on long-standing conversations around the topic, beyond the boom moment when AI became so user-friendly that all of us are now using it on our phones every day. They are based on considerations and conversations that have been going on for a long while, and they bring together not just policymakers or the policy teams in companies, but the engineers and those who do the technical work and set the technical standards as well. So with that, I think that's a good segue to turn to Clara and ask: how does this work in a global standard-setting body like IEEE, and how do you think about some of the AI challenges in your work?
Clara Neppel:
Thank you for having me here, it's a pleasure. As for the polls, the question is of course what our role is here. And I would like to echo what we just heard: we actually need both a bottom-up approach and a top-down approach. IEEE started thinking about the ethical challenges of technology at probably the same time as Japan, back in 2016. It came about because we have a constituency of 400,000 members; we are the largest technical association, so we are not only a standard-setting body but an association of technologists. There was a realization that there is a responsibility in creating this technology, which, I have to say, is really not neutral: it embodies the values and business models of those who create it. And this is how the IEEE initiative on Ethically Aligned Design came into existence. It was about identifying issues and how we as technologists can deal with them, and how we can deal with them in discussions with regulatory bodies. What has happened since then is that we have developed a set of socio-technical standards. We already develop a lot of technical standards, including the Wi-Fi standard that you are using right now. But when it comes to socio-technical standards, it is a completely different discussion, because, as we heard here, we then need a multi-stakeholder approach. So we now have people in our standard-setting bodies who were not accustomed to this, and we had to develop a common terminology, of course, when it comes to value-based design. Everybody agrees that transparency is important, but what does it actually mean? What do we want to achieve, and how do we actually achieve it at the technical level? As for the set of standards: one is on value-based design. It is really about identifying the values and expectations of the stakeholders of an AI system for a given context, how you prioritize them, because you cannot have everything built into the system, and how you translate them into concrete system requirements. And I think it's also about practice and experience. Since then, we have had several projects, including public-private partnership projects with UNICEF, but also with industry, which prove that it is a very valuable standard: giving this methodology to developers and system designers actually influences the outcome of the systems. You come up with a different system, one that takes these values into account. As I already said, when it comes to transparency, we have a standard which defines transparency. There are different levels of transparency, so it gives a common terminology when we speak about these terms. The same goes for bias. We discuss bias and agree that we have to deal with it, but again, we don't always know what bias means in a certain context. In a medical application, for instance, we may actually want a certain bias: we want different treatment of women and men, because they have different symptoms. So we need that kind of context sensitivity even for things we all agree on. So this was the bottom-up approach.
And the question is, how does it come together with the top-down approach, with all these different frameworks that we heard about, which the poll very clearly showed people know more or less about. So we engaged from the beginning with the OECD, with the EU in the high-level expert group, and with the Council of Europe. When it comes to the question that industry of course has, about the interoperability of regulatory requirements, I think standards will play a very important role, because they give a very practical approach to moving from principles to practice. And we are part of the discussion: we are part of the network of experts, giving, let's say, the technical background on what the challenges are, what is possible to implement, what is realistic, and also reflecting that in our standards. I also wanted to give an example of how this complementarity between regulation and standards can work. When it comes to children, there is a code in the UK, the Children's Code, and this is something where technologists cannot decide for themselves what the right way to do it is; that needs to be decided in a democratic process. Everybody agrees we need to protect children. But again, what does it mean at the technical level? We have a standard called age-appropriate design, which complements that regulation and gives very clear guidance on how to do things like age verification. So the bottom-up approach and the top-down approach need to come together. Another example, at the EU level, is mapping the AI Act requirements to standards. There is a report from the Joint Research Centre of the European Union that made this mapping for IEEE standards and found that they really fill the gap when it comes to ethics, because a lot of standards still, of course, focus on a more technical level, which is also important, but you need both. I will end with capacity building, which is very important for our certification process, which complements the standards. We have started building up an ecosystem of assessors; we have already trained more than 100 people. What is also important is that we need certification of products and services, but also certification of people. So we need these assessors, who are already in our registries, and we also need certification bodies; we have trained some people from certification bodies to be able to make these assessments as well. These are the things that we will continue doing. Thank you.
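As an aside, the value-based design process described above (identify stakeholder values for a given context, prioritise them because not everything can be built in, then translate them into concrete system requirements) can be sketched in a few lines of code. This is a minimal illustration of the bookkeeping involved, not the actual schema or method of IEEE 7000 or any other IEEE standard; the entries are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ValueRequirement:
    value: str        # e.g. "transparency" or "fairness"
    stakeholder: str  # whose expectation this reflects
    priority: int     # 1 = highest; not everything can be built in
    requirement: str  # a concrete, testable system requirement

# Hypothetical entries for a medical decision-support context.
requirements = [
    ValueRequirement("fairness", "patient", 1,
                     "Report outcome metrics separately for women and men"),
    ValueRequirement("transparency", "clinician", 1,
                     "Explain each recommendation in plain language"),
    ValueRequirement("privacy", "regulator", 2,
                     "Log data access without storing raw personal data"),
]

# Prioritised view: which requirements the system must satisfy first.
for r in sorted(requirements, key=lambda r: r.priority):
    print(f"[P{r.priority}] {r.value} ({r.stakeholder}): {r.requirement}")
```

Note how the medical entry mirrors the bias point above: deliberately different treatment of women and men can itself be a fairness requirement in that context.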
Moderator:
Thank you so much, Clara, and thank you for sharing all of that from your work and making the connection between the bottom-up and top-down approaches. I guess capacity building sits in the middle of, or around, all of this, to make sure it all fits and that we all have the necessary skills and capacities to deal with it. I'm going to turn to our last two speakers for today. So I would like to encourage you to please go online and type your questions or comments into the chat box so that we can weave them into the discussion, as I will give a last round to the panelists to react to one another and to your questions. I'd love to see some of them pop up here. The chat has been a bit silent, but do go online and share your questions so that we make sure we weave in some of the perspectives from the room as well. And with that, I'm going to turn to Maria Paz. You are here on the panel representing civil society, and you work a lot on some of the issues we've mentioned, in particular human rights. How does that come into the discussion from your perspective? What are some of the challenges and opportunities that we have here?
Maria Paz Canales:
Thank you. Thank you very much for the invitation to be here. I think the benefit of being almost at the end of the round of speakers is that I can build on top of what has already been said and add the component that is specific to the perspective of a civil society organization working in the space of digital governance. I am very pleased by many of the concepts I have heard from the different actors around this table, starting with the concept of a sociotechnical approach to artificial intelligence governance, and I would like to believe it applies to technology governance at large, not only artificial intelligence. Another thing that has resonated in all the interventions I have heard so far is the need for clarification of the frameworks and how they translate into concrete ways to engage with how the design, but also the deployment and adoption, of artificial intelligence technologies is happening in many countries. So I think that, on top of some of the elements Clara was mentioning, for example about what we mean by transparency and what we mean by bias, we also need a clearer understanding, in terms of frameworks, of what kind of risk we are addressing: what we really consider a risk. In the organization I work for, we try to infuse this conversation with a human rights approach. We consider that the risk we are addressing, when we talk about the impact of artificial intelligence, is the huge and wide spectrum of impact this technology is having on the daily life of every person around the world who is touched by it. It touches the exercise of civil and political rights in a very concrete manner, but increasingly also the exercise of economic, social and cultural rights. So when we see that in many places in the world governments in particular are embracing the use of artificial intelligence technologies for developing policies in different areas, we wonder if we are talking about the right risks when we are measuring. For example, in the very thorough work that Galia mentioned the OECD has conducted in building its database, are we really concentrating enough on also hearing from the right people about how those risks, and possibilities of risk, are being measured? In the example Clara gave related to the technical community, the bottom-up approach implied hearing the technical experts, and Mr. Center pointed out something similar about how the voluntary commitments were built, trying to be really strong on the technical aspect of the recommendations and guidance. And you, Clara, mentioned that it is necessary to hear the technical experts in order to make the assessment converge with what is technically feasible and understandable for those who are called to implement it. In the same way, we should be thinking about how risk is connected to the different communities impacted by the deployment of the technology. And usually they are not in the room. Usually they are not consulted.
And even so, I acknowledge the good effort that has been made in many consultative processes, for example in building the UNESCO ethics recommendation, and also in the NIST Risk Management Framework process, in which I had the privilege to participate, representing some considerations from civil society groups. But usually the people who are part of those conversations are not the ones at the receiving end of the technology's impact on society. So I think we should continue to make a conscious effort to have those people in the room, to bring them into the conversation. This is also related to the capacity building issue that has been raised by several of you. We cannot expect people to be able to talk about how artificial intelligence impacts their daily life if we don't bring the topic to them in a way that is really concrete and understandable for them. We should not expect them to speak the language of technical standards. We should not expect them to speak the language of regulatory issues. But they will talk about how they have been discriminated against in access to housing, employment, health or education, or how the use of this technology is being weaponized for political manipulation or for state control and surveillance of opposition forces in a specific country. So I am very happy with what I am hearing around this table, but I think there is still some additional work to do on the clarity of the frameworks, on what we are addressing and what we understand in terms of risk. And in the end, when we use the concepts of responsible AI or trustworthy AI, for me those are not, by themselves, very useful for unpacking what should be done to address AI governance issues. They will be the result of unpacking the governance issues well. As a result of good governance, in my perspective with a human rights approach, we will have responsible technologies and we will have trustworthy artificial intelligence. So I will leave it there for the next round of reactions. I am very happy to be here and to bring that perspective.
Moderator:
Thank you, Maria Paz. A first round of applause. We are not at the end just yet, because we have to hear from Thomas. And I think you set him up for a very difficult question, because in addition to working on these issues in your home country, Switzerland, you are also chairing the process at the Council of Europe that is coming up with the first binding convention on AI and human rights. So I am not going to ask you all the questions that Maria Paz raised, because it would be very difficult to answer them in the five minutes that you have. But how is the process considering some of the human rights impacts, and what do you expect from it?
Thomas Schneider:
Yes, thank you. It is actually good that we live in a hybrid world, so I was able to follow the discussion and the poll from the taxi through new technologies like Zoom meetings. So, yes, hello everybody, happy to be here. First of all, thanks to the ICC for putting this together, because I don't think it's a problem that we have several ways, several instruments, to try and cope with a new technology. I think this is the only way to go, because there is not one single instrument that will solve all the problems or enable all the opportunities of a new technology. So we need a mix of instruments. And we should not forget that we already have technical norms, legal norms and cultural norms that guide our societies and that have guided us through previous technologies that were also more or less disruptive. So we do not have to reinvent the wheel every time something new comes up. We may have to adjust it, or maybe add a little bit to the wagon, but not everything is completely new. It's good to see what is new and what is not, or what is maybe a new version of something we already know. Actually, if you look at AI, you can compare its disruptiveness in many ways: AI is replacing cognitive human work with machines. AI is a driver, an engine that drives other machines. You can compare it in a number of ways to combustion engines and the disruptive effect they had 150 or 200 years ago, and actually until now. Because there, too, you don't have one law regulating all engines worldwide at once. You have thousands of technical, legal and cultural norms, most of which focus not on the engine itself but on the machine that the engine drives, on the people guiding the machine, or on the infrastructure the machines are using. And there are different levels of harmonization between these rules. If you take machines that move people in the airline business, you have a quite harmonized set of rules across the world. If you take cars, looking at my German friends here, they think they can still live without speed limits, and apparently they don't live that badly without them. In my country, it's slightly different. The U.S. has even lower speed limits than we do. And some people drive on the left side of the street, but the British or the Japanese can drive on the right side on Swiss roads; they just have to pay more attention. So there are different levels of interoperability or harmonization according to the specific application of an engine in a particular machine that is used for a particular purpose. There is also a difference between moving goods and moving people; there may be different requirements, and so on and so forth. And we are about to do the same with AI, in the sense that we try to regulate not the tool itself but the tool in the context in which it is applied. The tool may evolve very quickly, but maybe the context does not change so quickly, because in the end it's people, and people tend not to evolve as quickly as the tools, which should actually make it easier for us to try and understand the processes we are in.
What the Council of Europe is doing is trying to fill in one piece, to add to this set of norms. We have technical norms, which may be one element for reacting quickly to developments and solving issues on a technical level, to the extent feasible, to create a certain harmonization and a certain level of security and predictability of systems. Cultural norms are something else; I don't have the time to go beyond German motorways, but we may also have different ways of dealing with risk in general in different societies. And then you have the legal space, where it is great to have industry players taking on responsibilities to regulate themselves, and good to have guiding soft law from UNESCO and the OECD. The Council of Europe has also, since 2018, developed a number of sectoral instruments, like an ethical charter on the use of AI in the judicial system, and a recommendation on the human rights impacts of AI systems in the field of media, and elsewhere. So this is all fine, but it may not be enough. I will be very curious to see how these voluntary commitments are followed in the US, because voluntary means you can, but you don't really have to. Well, if the incentives are right, a voluntary system may actually work better than a so-called compulsory system that is too complicated or not workable. So I am not necessarily saying this is not a good approach; again, I think it's good that we have different approaches. And while the European Union has decided to develop an instrument that is basically a market regulatory instrument, the Council of Europe, for those not familiar with the European system, is not the European Union. It is something comparable to a UN for Europe: 46 member states that have agreed on a set of norms on human rights, democracy and the rule of law. There are about 250 conventions and thousands of soft law instruments. One of the latest things is that we are trying to agree on a number of very high-level principles, probably not that different from other high-level principles we have already seen, in a convention. The new thing is that this is the first intergovernmental agreement in which states commit themselves to live up to these principles, and these principles are based on the norms of human rights, democracy and the rule of law. What is special in this case, though there have been others before, is that this is not a convention just for European countries, for the 46 member states of the Council of Europe. It is an open process in which we had, from the beginning, a number of non-European countries that are leading in AI, like the US, Israel, Japan and Canada. We have also had Mexico on board since the beginning.
We have a number of other countries joining, in particular from Latin America. So the Council of Europe has the opportunity to offer a tool through which states from all over the world that respect the same values of human rights, democracy and the rule of law, but may have different institutional arrangements for doing so, can join and become parties to a convention where we agree on a number of principles that should guide us. And it's not just human rights; it includes things that are slightly more complicated because they are not as clearly institutionalized, like democracy. Agreeing on a number of principles is one thing, because otherwise it would remain just another paper. We also work together with the OECD, with a number of standardization institutions, and with the EU, and we are watching what the US colleagues are doing with the NIST framework, because in the end you need something that operationalizes the paper. What we are calling this is a human rights, democracy and rule of law impact assessment. It is not a particular tool but a methodology that should help us create interoperable systems in our countries, just as the convention is a legal tool that should help us create interoperable legal systems within our countries. Thank you.
Moderator:
Thank you so much, Thomas, and I very much like your analogy with the road systems. We need some common principles, so that we all learn how to drive and know the general rules, but then we need the flexibility to adapt to context, and I think that applies nicely to some of these systems. We have, I'm going to be generous, 10 minutes left, actually seven, and we have a lot of speakers, but the audience has been gracious with us because they don't seem to have many questions. So what I'm going to ask of you is to take one minute each, and I've set the timer, for your last comments, reactions to the other statements, or to share one lesson or recommendation from your side. I'm going to go from that side to that side, and you'll see how many minutes we are running over as we get to the end of the queue, so please be respectful of your fellow panelists so that we have time to get to everyone. Prateek. I would rather give my one minute to someone who has a question, because it's too heavy on the panel. Yes, please.
Audience:
Hi, yeah, Ansgar Kuhn from EY. I had problems submitting the question online. My question was around the capacity building side, and actually less about capacity building in the sense of getting the skills to do it, but rather about enabling various parties to engage with the process, given the time commitments involved. That means cost: if you're an SME or a civil society organization, you might not be able to carry the cost of someone who isn't contributing directly to whatever product you're creating; and in academia, you may struggle to get academic credit for having engaged in this kind of process. So does anybody have any suggestions in that space?
Prateek Sibal:
So I will quickly answer within my 30 seconds left and give back the floor. From our perspective, whenever we are having multi-stakeholder conversations, we are very sensitive to the fact that not everyone comes with the same level of knowledge about the technology, and we have put out guidance from UNESCO on how to do multi-stakeholder governance of AI in a manner that is more inclusive. That includes, first, building awareness. We ourselves have launched quite a number of knowledge products to facilitate that process. For instance, in a fun way, we have a comic book on AI, because we hear very often that in a multi-stakeholder conversation not everyone is as familiar with the topic. Some people may feel a bit intimidated when the technical experts, the government and everyone coming from the international organizations are talking in their own jargon, which is sometimes very hard to decipher. So we are sensitive to this concern. We have also supported people to participate in different fora, including by financially compensating them for their time, because civil society ends up doing a lot of free work in these consultative processes without any kind of financial support. And I know colleagues we are meeting here who are working weekends and nights in civil society, which is not fair at all.
Moderator:
Thank you for the question. Thank you, Prateek. We have four minutes left, but I'm going to take one more question.
Audience:
I think maybe it's easier if we all ask our questions and then any panel member can pick up on them. In four minutes, yes, sure. Go ahead. So my name is Liming Zhu. I'm from CSIRO, Australia's national science agency. We're working on the science of responsible AI. I have a question on system level guardrails versus model level guardrails. We all know that risks are context specific, but a lot of people worry that if we push the responsibility to the system level, to the users, then the tech vendors can provide unsafe models, so the response would be legal means to deal with that. On the other hand, for model level guardrails, because general AI is hard to understand, it's hard to embed specific rules inside a black-box model, so we need system level guardrails. I'm just wondering whether there are any comments. Thank you. Thank you. Gentlemen, 30 seconds for your question, please. Hi, I'm Steve Park from Roblox. I understand that the G20 process previously created something like the data free flow with trust. I'm wondering about the Hiroshima process: is there an expectation of that sort of principle-based approach for AI as well? Thank you. Thank you so much. I guess the online system was not really good at taking questions, but I'm so glad to have this engagement. 30 seconds each, panelists, to try and answer. All right, the race is on.
Owen Later:
Very good question about models versus applications. You need safeguards at both levels. You need to make sure that you're developing the model in a responsible way, but then when you're integrating it into an application, you also need to make sure there are requirements at that level as well. Otherwise, the mitigations you've put in at the model level can simply be removed or circumvented. So we think there's been great progress this week on the code for developers; we think ultimately you need to extend that to deployers as well. And I guess I have 15 seconds to offer some thoughts on a way forward in terms of global governance. I think there's been a ton of progress made over the last 12 months, and that's been reflected in the conversation here. As we move forward, we should think about where we ultimately want to get to, what we want a global governance regime to do, and what we can learn from existing regimes. To offer a few thoughts: I think we want a framework for standard setting globally. Organizations like the International Civil Aviation Organization, where you have a representative process for developing standards that are then implemented at a domestic level, are really, really helpful. I think we want to have conversations to advance a consensus on risk; the Intergovernmental Panel on Climate Change might be a good model to follow. And then ultimately we want to keep building the infrastructure: the technical infrastructure, so that we can advance work on evaluations, where we still have really major gaps to address, but also continuing to have these types of conversations. The point you were making, about making sure we have a representative way of having these conversations on global governance and pulling in perspectives from across the world, is going to be really important.
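The point about layered safeguards can be illustrated with a minimal sketch: an application-level policy check wrapping a model call, so that even a responsibly built model gets a second, deployment-specific guardrail, and so that removing the wrapper visibly removes protection. The function names and policy checks below are hypothetical, not any vendor's actual API.

```python
# Illustrative only: model-level mitigations live inside model_generate();
# application-level safeguards live in the wrapper around it.

BLOCKED_TOPICS = {"weapons synthesis", "self-harm instructions"}

def model_generate(prompt: str) -> str:
    # Stand-in for a foundation model that already carries its own
    # model-level mitigations (safety fine-tuning, refusals, etc.).
    return f"[model output for: {prompt}]"

def application_guardrail(prompt: str) -> str:
    # Deployment-specific policy enforced *before* the model is called...
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Request declined by application policy."
    output = model_generate(prompt)
    # ...and *after*, screening the output before it reaches the user.
    if "unsafe" in output.lower():
        return "Output withheld by application policy."
    return output

print(application_guardrail("Summarise today's panel on AI governance"))
```

If only model_generate() carried safeguards, anyone integrating the model could circumvent them at the application layer, which is exactly why requirements are needed at both levels.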
Moderator:
Thank you, Owen. Very efficient and speedy in response as the private sector generally is.
Nobuhisa Nishigata:
Thank you. Answering the question: back in 2016 it was more that Japan started and initiated the discussion, but this time, for the Hiroshima process, I would say it is the G7's collective effort. And the main point is that the focus is on generative AI and foundation models. Going back to the slide, it is more about the voluntary commitments, so you can see some shifting of the discussion even within the G7. So what I expect is that what started with the government of Japan is now an inclusive dialogue across the G7, and we see it in other fora as well. Japan would say that we are very happy to see what has happened. There are some things we didn't expect back in 2016, but still, I would say that things are going well, and we have to continue this kind of dialogue with the whole world. Thank you.
Moderator:
Thank you, Nobu-san. Clara.
Clara Neppel:
So very quickly, I think we really need to strike a balance between voluntary and legal requirements. I think it is not the responsibility of the private sector to provide the guardrails for democracy and the rule of law, for instance. So if we want to have certainty on this, we need legal certainty as well. We need regulation on these issues as much as possible, and I think this legal certainty is also expected by the private sector; what is really bad right now is the uncertainty they are facing. And just responding to one of the answers, on how to engage: what I think is going to be essential is to enable feedback loops. This is going to be one of the most important things, especially when working with generative AI: enabling these feedback loops and making sure the feedback is actually taken into account, for example by retraining systems. Learning from the aviation industry, benchmarking is, I think, also important, and common standards, of course. Thank you.
Maria Paz Canales:
So I'm going to take up your points as well. One of the examples where it is evident that we need some level of complementarity between voluntary standards and legal frameworks is linked precisely to one of the questions: responsibility at different levels of safeguard, in the design stage but also in the implementation and functioning of the system. For example, one topic that should be considered as part of regulation is how we distribute and create obligations related to transparency of information, which Mr. Center also mentioned in relation to the voluntary commitments: how we ensure that, between the different operators in the chain of production and use of artificial intelligence, there is enough communication, without going against competition rules or intellectual property rules, so that there is a shared responsibility and the legal framework accounts for those different responsibilities. And 15 seconds on the other point: I talked a little during my intervention about the need for a bottom-up approach in terms of societies and the different stakeholders in society, but this also applies geopolitically, at the global level. In the conversations unfolding at the global level on governance, we need to hear much more about experiences from different stakeholders around the world, so that the process of identifying risks and the relevant elements in context is much more sensitive to different considerations.
Moderator:
Thank you so much, Maria Paz. I don't want to silence anyone, but we are making our gracious host very, very anxious, so three minutes, please.
Galia:
Three minutes? One minute each, three minutes together. Just to say, I think this has been a really, really rich conversation. It has also made me optimistic; I think it is possible, and I think we do have all the elements combined. I really liked what Thomas said about how all these things can be complementary. I do think there are still challenges at the implementation level, in how we make these things work. I think mapping exercises like what we did with the OECD on the risk assessment front can, I hope, be helpful in this regard. I also think we can reflect on what we mean when we say global governance: how global can we make it while maintaining credible value alignment, which I think is very important. We also have the element of stakeholder engagement, which is really important. And I think this kind of forum is really critical in advancing this kind of conversation, so thank you. Thank you so much.
Thomas Schneider:
Thank you. I will also try to talk at double speed, like the YouTube videos you can watch at twice the speed. Well, I think we need to work on several levels. One is that we need to reiterate and see whether we still agree on fundamental values: how we want to respect human dignity, how we want to be innovative while respecting rights, and whether we are all on the same page on this. Then we somehow need to break it down and see what the new elements are, what the new challenges are, and what legal uncertainties we need to clarify, and then how we best clarify them without creating a burdensome bureaucracy. What can we do with technical standards? What is the best tool for solving the problem? So we need to know what the problems are, and then we need to know what the best tools are, and again, it will probably be a mix of tools. Some will be faster; others will be more sustainable. I think we're all working on it, and we need to just continue and cooperate with all stakeholders in their respective roles.
Moderator:
Thank you so much, Thomas. Dr. Center, you have the last word. I feel like that's an awfully positive message to end on, so I will mercifully cede my 45 seconds back. Thank you. That's very gracious. Thank you so much. Apologies to the next session and to the host for running five minutes over, but I really do want to thank the panel for making the time and coming here for this really rich discussion, and the audience for sticking with us. I wish we had three more hours, and we still wouldn't have stopped talking, I'm sure. But there is a main session on artificial intelligence, I'm told, so see you there and see you around. Thank you so much again. Thank you.
Speakers
Audience
Speech speed
179 words per minute
Speech length
410 words
Speech time
137 secs
Arguments
Engagement in capacity building could be a challenging process due to time and financial commitments
Supporting facts:
- Ansgar Kuhn from EY raised the problem of time commitments related to the capacity building process which may bring additional financial burdens for various parties
- Small to Medium Enterprises (SMEs) and civil society organizations might not be able to afford the cost of someone who isn’t directly contributing to their main product or services
- Academics may struggle to get academic credit for having engaged in this kind of process
Topics: Capacity Building, Finance, Engagement, Time Management
Worries about the system level guardrails vs the model level guardrails in implementing responsible AI
Supporting facts:
- Risks in AI are context specific, there’s worry about tech vendors providing unsafe models if responsibility is pushed to system level
Topics: Responsible AI, System level guardrails, Model level guardrails
Question about the Hiroshima process and the expectation of a principle approach for AI
Supporting facts:
- Previously G20 process created data free flow with trust
Topics: Hiroshima process, AI Principles
Report
The discussions at the event highlighted the challenges that arise in capacity building due to time and financial commitments. Ansgar Kuhn from EY raised the problem of time commitments related to the capacity building process, which may bring additional financial burdens for various parties.
Small to Medium Enterprises (SMEs) and civil society organizations might not be able to afford the cost of someone who isn’t directly contributing to their main products or services. Academics may also struggle to get academic credit for engaging in this kind of process.
To address these financial and time commitment issues in capacity building, participants stressed the importance of finding solutions. Ansgar Kuhn specifically asked for suggestions to tackle this problem, underscoring the need to explore feasible strategies to alleviate the burden that time and financial commitments place on different stakeholders.
There were also concerns raised about the implementation of responsible AI, particularly regarding the choice between system level guardrails and model level guardrails. The discussion highlighted worries about tech vendors providing unsafe models if responsibility for responsible AI is pushed to the system level.
This sparked a debate about the best approach to implement responsible AI and the potential trade-offs associated with system level versus model level guardrails. Moreover, the event touched upon the Hiroshima process and the expectation of a principle-based approach to AI.
The previous G20 process, which focused on creating data free flow with trust, served as a reference point for the discussion. There was a question about the need for a principle approach for AI, suggesting the desire to establish ethical guidelines and principles to guide the development and deployment of AI technologies.
In conclusion, the discussions shed light on the challenges posed by time and financial commitments in capacity building and the need for solutions to mitigate these issues. Concerns about system level versus model level guardrails in responsible AI implementation emphasized the importance of balancing safety and innovation in AI.
The desire for a principle-based approach to AI and the establishment of ethical guidelines were also highlighted.
Clara Neppel
Speech speed
164 words per minute
Speech length
1373 words
Speech time
503 secs
Arguments
Creating technology is a responsibility and values and business models are embedded into it.
Supporting facts:
- IEEE started thinking about the ethical challenges of technology in 2016.
- IEEE has a constituency of 400,000 members and is the largest technical association.
Topics: Technology Development, Ethics
Conversation around value-based design, transparency, and bias in AI is crucial.
Supporting facts:
- IEEE holds discussions with regulatory bodies to develop socio-technical standards.
- IEEE has a standard that defines transparency and a standard focused on value-based design.
Topics: AI Ethics, Transparency in AI, Bias in AI
Standards can play an important role in complementing regulation and bringing interoperability in regulatory requirements.
Supporting facts:
- IEEE is part of the network of experts discussing about regulation and technical challenges related to AI.
- Example of the UK’s Children’s Code complemented by an IEEE standard on age-appropriate design.
Topics: AI Standards, Regulatory Requirements
Capacity building is important for the AI certification process.
Supporting facts:
- IEEE has trained over 100 people for AI certification.
- IEEE is also training certification bodies to be able to make assessments.
Topics: Capacity Building, AI Certification
Not the responsibility of the private sector to protect democracy and rule of law
Supporting facts:
- She notes that legal certainty which can only be provided through regulations is necessary.
Topics: Private Sector, Democracy, Rule of Law
Need for legal certainty or regulations to uphold rule of law
Topics: Rule of Law, Legal Certainty, Regulations
Uncertainty facing the private sector is problematic
Topics: Private Sector, Uncertainty
Importance of feedback loops in AI
Supporting facts:
- This includes ensuring that feedback is taken into account in retraining systems.
Topics: AI, Feedback Loops
The need for benchmarking and common standards in AI
Supporting facts:
- Drawing from lessons in the aviation industry.
Topics: AI, Benchmarking, Common Standards
Report
During the discussion, the speakers highlighted the ethical challenges associated with technology development. They emphasised the need for responsibility and the embedding of values and business models into technology. IEEE, with its constituency of 400,000 members, was recognised as playing a significant role in addressing these challenges.
Transparency, value-based design, and bias in AI were identified as crucial areas of concern. IEEE has been actively engaging with regulatory bodies to develop socio-technical standards in these areas. They have already established a standard that defines transparency and another standard focused on value-based design.
These efforts aim to ensure that AI systems are accountable, fair, and free from bias. The importance of standards in complementing regulation and bringing interoperability in regulatory requirements was emphasised. IEEE has been involved in discussions with experts to address the technical challenges related to AI.
An example was provided in the form of the UK’s Children’s Code, which is complemented by an IEEE standard on age-appropriate design. This highlights how standards can play a crucial role in ensuring compliance and interoperability within regulatory frameworks. Capacity building for AI certification was also discussed as an essential component.
IEEE has trained over 100 individuals for AI certification and is also working on training certification bodies to carry out assessments. This capacity building process ensures that individuals and organisations possess the necessary skills and knowledge to navigate the complex landscape of AI and contribute to its responsible development and deployment.
The panel also explored the role of the private sector in protecting democracy and the rule of law. One speaker argued that it is not the responsibility of the private sector to safeguard these fundamental principles. However, another speaker highlighted the need for legal certainty, which can only be provided through regulations, in upholding the rule of law.
This debate prompts further reflection on the appropriate roles and responsibilities of different societal actors in maintaining democratic values and institutions. The negative impact of uncertainty on the private sector was acknowledged. The uncertain environment poses challenges for businesses and impedes economic growth and job creation.
This concern underscores the need for stability and predictability to support a thriving and sustainable private sector. Lastly, the importance of feedback loops and common standards in AI was emphasised. This includes ensuring that feedback is taken into account to improve and retrain AI systems.
Drawing from lessons learned in the aviation industry, the development of benchmarking and common standards was seen as vital for enabling efficient and effective collaboration across different AI systems and applications. In conclusion, the speakers underscored the importance of addressing ethical challenges in technology development, specifically in the context of AI.
IEEE’s involvement in shaping socio-technical standards, capacity building for AI certification, and the need for transparency, value-based design, and addressing bias were highlighted. The role of standards in complementing regulation and promoting interoperability was emphasised, along with the necessity of legal certainty to uphold democracy and the rule of law.
The challenges posed by uncertainty in the private sector and the significance of feedback loops and common standards in AI were also acknowledged. These insights contribute to the ongoing discourse surrounding the responsible development and deployment of technology.
Galia
Speech speed
76 words per minute
Speech length
566 words
Speech time
444 secs
Arguments
It is possible to overcome the challenges at the implementation level of global governance and credible value alignment
Supporting facts:
- Galia mentions their mapping exercises with the OECD on the risk assessment front
Topics: Global Governance, Value Alignment, Stakeholder Engagement
Report
The speakers in the discussion highlight the critical role of global governance, stakeholder engagement, and value alignment in working towards SDG 16, which focuses on Peace, Justice, and Strong Institutions. They address the challenges faced in implementing these efforts and stress the importance of establishing credible value alignment.
Galia, one of the speakers, emphasizes the mapping exercises conducted with the OECD regarding risk assessment. This suggests that the speakers are actively involved in assessing potential risks and vulnerabilities in the context of global governance. The mention of these mapping exercises indicates that concrete steps are being taken to identify and address potential obstacles.
Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically highlights the significance of ensuring alignment with values at a global level. This implies that involving various stakeholders and aligning their interests and values is crucial to achieving successful global governance.
This collaborative approach allows for a wider range of perspectives and enables a more inclusive decision-making process. The sentiment expressed by the speakers is positive, indicating their optimism and belief in overcoming the challenges associated with implementation. Their focus on credible value alignment suggests that they recognize the importance of ensuring that the principles and values underpinning global governance are widely accepted and respected.
By emphasizing stakeholder engagement and value alignment, the speakers underscore the need for a holistic approach that goes beyond mere top-down control and incorporates diverse perspectives. In summary, the discussion emphasizes the vital aspects of global governance, stakeholder engagement, and value alignment in achieving SDG 16.
The speakers’ identification of challenges and their emphasis on credible value alignment demonstrate a proactive and thoughtful approach. Their mention of the OECD mapping exercises also indicates a commitment to assessing risks and vulnerabilities. Overall, the analysis underscores the significance of collaboration and the pursuit of shared values in global governance.
Maria Paz Canales
Speech speed
166 words per minute
Speech length
1291 words
Speech time
467 secs
Arguments
Adequate understanding and clarification of AI governance is vital
Supporting facts:
- Civil society organizations need a clearer understanding of AI risk areas and implementation strategies.
- The conversation on AI governance should involve various actors including the technical community and the communities affected by AI.
Topics: Artificial Intelligence, Technology Governance
Capacity building is necessary for understanding AI impact
Supporting facts:
- The public can’t speak on AI impact without being educated about the topic in a concrete and understandable way.
- Understanding AI requires not just technical language, but how AI impacts daily life and basic rights.
Topics: Artificial Intelligence, Capacity Building
We need some level of complementarity between voluntary standards and legal frameworks
Supporting facts:
- This is particularly linked with responsibility at different levels of safeguard in the design stage but also in the implementation and functioning of the system
Topics: AI Regulation, Legal Frameworks, Voluntary Standards
The legal framework should account for shared responsibility
Supporting facts:
- This would ensure that between the different operators in the chain of production and use of AI, there is enough communication that does not violate competition rules or intellectual property rules
Topics: AI Regulation, Legal Frameworks, Shared Responsibility
Report
The discussion on artificial intelligence (AI) governance highlighted several key points. Firstly, there is a need for a clearer understanding and clarification of AI governance. This involves the participation of various actors, including civil society organizations and communities that are directly affected by AI.
Civil society organizations require a better understanding of AI risk areas, as well as effective strategies for AI implementation. Another important point is the evaluation of AI’s impact on rights, with a specific focus on inclusiveness. It is essential to assess the effect of AI on civil, political, economic, social, and cultural rights.
Unfortunately, the perspectives of communities impacted by AI are often excluded in the assessment of AI risks. Therefore, it is necessary to include the viewpoints and experiences of these communities to comprehensively evaluate the impact of AI. This inclusive approach to AI impact assessment can lead to the development of responsible and trustworthy technology.
The discussion also emphasized the importance of education and capacity building in understanding AI’s impact. The public cannot fully comprehend the consequences of AI without being adequately educated about the subject in a concrete and understandable way. Therefore, it is crucial to provide not only technical language but also information on how AI impacts daily life and basic rights.
By enhancing education and capacity building, individuals can better grasp the implications and intricacies of AI technology. Furthermore, it was highlighted that there should be some level of complementarity between voluntary standards and legal frameworks in AI governance. This includes ensuring responsibility at different levels, from the design stage to the implementation and functioning of AI systems.
The relationship between voluntary standards and legal frameworks must be carefully balanced to create an effective governance structure for AI. In addition, the discussion underscored the importance of accounting for shared responsibility within the legal framework. It is crucial to establish effective communication between the different operators involved in the production and use of AI.
This communication should comply with competition rules and intellectual property regulations. By accounting for shared responsibility, the legal framework can ensure ethical and responsible AI governance. Lastly, the discussion emphasized the need for a bottom-up approach in AI governance.
This approach involves the active participation of societies and different stakeholders at both the local and global levels. Geopolitically, there is a need to hear more about experiences and perspectives from various stakeholders in global-level governance discussions. By adopting a bottom-up approach, AI governance can become more democratic, inclusive, and representative of the diverse needs and interests of stakeholders.
In conclusion, the discussion on AI governance highlighted the importance of a clearer understanding and clarification of AI governance, inclusiveness in impact assessment, education and capacity building, complementarity between voluntary standards and legal frameworks, the consideration of shared responsibility, and a bottom-up approach in AI governance.
By addressing these aspects, it is possible to develop responsible and trustworthy AI technology to benefit society as a whole.
Moderator
Speech speed
122 words per minute
Speech length
2365 words
Speech time
1165 secs
Arguments
Japan needs more machines, especially AI, to help sustain its ageing economy
Supporting facts:
- Japan is facing several social problems related to ageing and therefore needs more people and machines to sustain its economy
Topics: AI, Aging economy
Japan wants to see more of what AI can do, particularly for their society, before introducing legislation on AI
Supporting facts:
- Japan is not yet ready to introduce legislation on AI technologies
- Japan respects what the EU is doing regarding AI legislation but feels it’s too early to take similar action
Topics: AI, Legislation
The G7 delegates are working to create a report on the risks, challenges, and opportunities of new technology, particularly generative AI
Supporting facts:
- The leaders agreed to establish the Hiroshima AI process, focusing on generative AI
- They’re asking the OECD for support in summarizing a report on these topics
Topics: AI, G7, Generative AI
AI approaches, policies, and frameworks need to be responsive to national contexts, cultures, and local expectations while aligning with global values
Supporting facts:
- Egypt has a national AI strategy that aims to create an AI industry in Egypt.
- One of the main principles of the strategy is that AI should enhance human labor and not replace it.
- Egypt published an AI charter for responsible AI
Topics: Artificial Intelligence, Policies
Policies should be interoperable with other countries and regional and global initiatives
Supporting facts:
- Egypt is a member of the OECD AI network.
- The strategy also emphasizes the importance of fostering regional and international cooperations.
Topics: Artificial Intelligence, Policies
AI technology carries immense promise but also risks
Supporting facts:
- Microsoft AI co-pilots are being used by customers to increase productivity
- AI can help understand and manage complex systems and respond to challenges like healthcare or climate or improving education
Topics: AI technology, Microsoft, Potential risks
Technology must be developed responsibly
Supporting facts:
- Microsoft has over 350 people from diverse backgrounds working on identifying and mitigating AI risks
- Microsoft has implemented a responsible AI standard based on OECD principles
- The standard is shared externally for critique and improvement
Topics: Responsible AI, Microsoft, Internal standard
Private sector and industry should actively contribute to governance discussions
Supporting facts:
- Microsoft founded the Frontier Model Forum to accelerate work around technical best practice on frontier models
- The company engages internationally in global governance conversations
Topics: Microsoft, AI governance, Industry contribution
Processes of responsible development, governance, regulation, and capacity building must be multistakeholder and cooperative
Supporting facts:
- Microsoft dialogues and supports UN initiatives
- All sectors need to be at the table at all levels for effective implementation
Topics: Responsible AI, Governance, Regulation, Capacity building
UNESCO has developed a recommendation on the ethics of AI
Supporting facts:
- UNESCO’s recommendation was developed through a multi-stakeholder process over two years
- A group of 24 world experts drafted the document which was adopted by 193 countries
- The document provides a clear indication to developers and users of ethical values and principles that should guide AI development and usage
Topics: UNESCO, AI Ethics, AI Governance
There is a need for more clarity in the framework addressing AI and its impacts
Supporting facts:
- AI impacts the daily lives of people globally, affecting the exercise of civil and political rights as well as economic, social, and cultural rights
Topics: AI governance, Human rights
Consultations should include individuals who are directly impacted by AI technologies
Supporting facts:
- Usually those affected by the technology are not part of the conversations during consultative processes
- There is a need to consciously make efforts to include them
Topics: AI governance, Risk assessment
Concepts like responsible and trustworthy AI should be the result of well-designed governance.
Supporting facts:
- There is difficulty in unpacking responsible and trustworthy AI
Topics: AI governance, Risk assessment
The moderator appreciates Thomas Schneider’s analogy of a road system
Supporting facts:
- The analogy draws comparisons between the adaptation of rules based on context and the need for similar flexibility in AI regulation.
- The final discussion allows panelists one minute each for closing remarks.
Topics: AI regulation, Common principles in AI, Flexibility
Prateek Sibal emphasizes the importance of inclusive, understanding, and supportive multi-stakeholder conversations about AI technology.
Supporting facts:
- Prateek and his team have launched knowledge products including a comic book on AI to make it easier to understand.
- They have also provided financial support to people to participate in different fora.
Topics: AI technology, Inclusion, Knowledge products, Multi-stakeholder conversations
Owen stresses the need for safeguards at both levels – model development and integration into applications
Supporting facts:
- Mitigations can be circumvented if only applied at the model level
- Progress on development of safeguards at code level
Topics: AI safeguarding, Model development, Application integration
Owen suggests extending this progress from developers to deployers
Topics: AI deployment, Developer and Deployer roles
Owen presents ideas for forward direction in global governance
Supporting facts:
- Interest in a framework for standard setting
- Use of representative processes for developing standards
Topics: Global governance, AI regulation, Risk consensus
The Hiroshima Process is a collective effort of the G7 nations, focusing on foundation models.
Supporting facts:
- The discussion was initiated by Japan in 2016.
- The focus has shifted towards a collective dialogue among G7 nations.
Topics: Hiroshima Process, G7, Collective Effort, Foundation Model
Complementarity is needed between voluntary standards and legal frameworks, especially in the design, implementation, and use of artificial intelligence systems
Supporting facts:
- Legal frameworks should consider how responsibilities are distributed and create obligations related to transparency
- There needs to be enough communication between different operators in the production and use of AI
Topics: Artificial Intelligence, Legal Frameworks, Voluntary standards
Report
In the analysis, several speakers discuss various aspects of AI and its impact on different sectors and societies. One key point raised is the need for more machines, especially AI, in Japan to sustain its ageing economy. With Japan facing social problems related to an ageing population, greater reliance on both people and machines is deemed necessary.
However, it is interesting to note that while Japan recognises the importance of AI, they believe that its opportunities and possibilities should be prioritised over legislation. They view AI as a solution rather than a problem and want to see more of what AI can do for their society before introducing any regulations.
Furthermore, the G7 delegates are focused on creating a report that specifically examines the risks, challenges, and opportunities of new technology, particularly generative AI. They have sought support from the OECD in summarising this report. This highlights the importance of international cooperation in addressing the impact of AI.
Egypt also plays a significant role in the AI discussion. The country has a national AI strategy that seeks to create an AI industry while emphasising that AI should enhance human labour rather than replace it. Egypt has published an AI charter for responsible AI and is a member of the OECD AI network.
This showcases the importance of aligning national strategies with global initiatives and fostering regional and international collaborations. Microsoft is another notable player in the field of AI. The company is committed to developing AI technology responsibly. It has implemented a responsible AI standard based on OECD principles, which is shared externally for critique and improvement.
Microsoft actively engages in global governance conversations, particularly through the Frontier Model Forum, where they accelerate work on technical best practices. Their contributions highlight the importance of private sector involvement in governance discussions. UNESCO has made significant contributions to the AI discourse by developing a recommendation on the ethics of AI.
The recommendation was developed through a two-year multi-stakeholder process and adopted by 193 countries. It provides a clear indication of ethical values and principles that should guide AI development and usage. Furthermore, UNESCO is actively working on capacity building to equip governments and organizations with the necessary skills to implement AI systems ethically.
In terms of addressing concerns and ensuring inclusivity, it is highlighted that AI bias can be addressed even before AI regulations are in place. Existing human rights frameworks and data protection laws can start to address challenges related to bias, discrimination, and privacy.
For example, UNESCO has been providing knowledge on these issues to judicial operators to equip them with the necessary understanding of AI and the rule of law. Additionally, discussions around AI governance emphasise the need for clarity in frameworks and the inclusion of individuals who are directly impacted by AI technologies.
The analysis also suggests that responsible development, governance, regulation, and capacity building should be multi-stakeholder and cooperative processes. All sectors need to be involved in the conversation and implementation of AI initiatives to ensure effective and inclusive outcomes. An interesting observation is the need for a balance between voluntary standards and legal frameworks.
Complementarity is needed between these two approaches, especially in the design, implementation, and use of AI systems. Furthermore, the analysis highlights the importance of a bottom-up approach in global governance, taking into account diverse stakeholders and global geopolitical contexts. By incorporating global experiences from different stakeholders, risks can be identified, and relevant elements can be considered in local contexts.
Overall, the analysis provides insights into various perspectives on AI and the importance of responsible development, global collaboration, and inclusive policies in shaping the future of AI.
Nobuhisa Nishigata
Speech speed
147 words per minute
Speech length
1447 words
Speech time
591 secs
Arguments
Japan introduces discussions on AI at the G7 meeting
Supporting facts:
- Japan proposed the AI discussion and it was well received by the members of the G7.
- The Japanese government asked the OECD to continue the work further.
Topics: AI, G7, OECD
Japan established the ‘Hiroshima AI process’.
Supporting facts:
- The goal is development of code of conduct and project-based cooperation.
- The report is expected to be completed by the end of the year.
Topics: AI, Hiroshima AI process, G7
The Hiroshima process is a collective effort of the G7
Supporting facts:
- Started back in 2016
- Shift from voluntary commitment to government-initiated inclusive dialogue
Topics: G7, Hiroshima process
Report
Japan has emerged as a frontrunner in the G7 discussions surrounding artificial intelligence (AI). It proposed the inclusion of AI as a topic at the G7 meeting, and this proposal was met with enthusiasm by the other member countries.
Consequently, the Japanese government has requested the OECD to continue the work on AI further. This indicates the recognition and value placed by the G7 nations on AI and its potential impact on various aspects of society. While Japan is proactive in advocating for AI discussions, it adopts a cautious approach towards the introduction of legislation for AI.
Japan believes that it is premature to implement legislation specifically tailored for AI at this stage. Nevertheless, Japan acknowledges and respects the efforts made by the European Union in this regard. This perspective highlights Japan’s pragmatic approach towards ensuring that any legislation around AI is well-informed and takes into account the potential benefits and challenges presented by this emerging technology.
Underlining its commitment to fostering cooperation and setting standards in AI, Japan has established the ‘Hiroshima AI process’. This initiative aims to develop a code of conduct and encourage project-based collaboration in the field of AI. The process, which began in 2016, has seen a shift from voluntary commitment to government-initiated inclusive dialogue among the G7 nations.
Japan is pleased with the progress made in the Hiroshima process and the inclusive dialogue it has facilitated. It is worth noting that despite unexpected events in 2016, the process has continued to move forward successfully. Japan recognises the immense potential of AI technology to serve as a catalyst for economic growth and improve everyday life.
It believes that AI has the ability to support various aspects of the economy and enhance daily activities. This positive outlook reinforces Japan’s commitment to harnessing the benefits of AI and ensuring its responsible and sustainable integration into society. In conclusion, Japan has taken a leading role in driving discussions on AI within the G7, with its proposal being well-received by other member countries.
While cautious about introducing legislation for AI, Japan appreciates the efforts made by the EU in this regard. The establishment of the ‘Hiroshima AI process’ showcases Japan’s commitment to setting standards and fostering cooperation in AI. Overall, Japan is optimistic about the potential of AI to generate positive outcomes for the economy and society as a whole.
Owen Later
Speech speed
234 words per minute
Speech length
1374 words
Speech time
352 secs
Arguments
Microsoft takes responsible use of AI technology seriously
Supporting facts:
- Microsoft has been building out its responsible AI program for six years
- They have a team of 350+ including experts in engineering, research, legal and policy
- They have a responsible AI standard based on OECD principles
Topics: Responsible AI, Microsoft, Technology
Private companies and industry need to participate actively in AI governance discussions
Supporting facts:
- Microsoft has founded the Frontier Model Forum with other leading AI labs
- This forum focuses on developing technical best practices for frontier models
- They are also supportive of global efforts like the ones at UN and OECD
Topics: AI Governance, Microsoft, Industry responsibility
Regulations are necessary to manage the use and development of AI
Supporting facts:
- Microsoft believes that new rules are needed for this new technology
- They are actively involved in sharing insights and experiences to help shape regulations
- They also aim to help build capacity for governments and industry regulators
Topics: AI Regulation, Microsoft, Policy making
Need safeguards at both model and application levels of AI development
Supporting facts:
- The model should be developed in a responsible way
- Requirements are also needed when integrating the model into an application
- Mitigations can be removed or circumvented if the application level lacks safeguards
Topics: AI development, AI application, AI model, Technical safeguards
Prateek Sibal
Speech speed
164 words per minute
Speech length
1500 words
Speech time
547 secs
Arguments
UNESCO has developed a recommendation on the ethics of AI through a multi-stakeholder process.
Supporting facts:
- The recommendation was developed over a period of two years.
- A group of 24 experts from around the world prepared the first draft.
- The document went through about 200 hours of intergovernmental negotiations.
- The recommendation was adopted in 2021 by 193 countries.
Topics: Ethics, AI, Multi-stakeholder approach, Policy Framework
The recommendation provides necessary guidelines for developers and users on ethical aspects of AI technology.
Supporting facts:
- The recommendation includes values of human rights, leaving no one behind, and sustainability.
- Emphasizes principles such as transparency and explainability for developers and users.
Topics: AI, Ethics, Policy Guidelines, User Interface
UNESCO is working on implementing the recommendation through various tools, forums, and capacity building initiatives.
Supporting facts:
- UNESCO has developed a readiness assessment methodology to gauge a country’s AI development state.
- UNESCO conducts a Global Forum on the Ethics of AI.
- The organization also provides an ethical impact assessment tool for governments and companies procuring AI systems.
Topics: AI, Implementation, Policy Framework, Capacity Building
Prateek Sibal acknowledges the challenges faced by different parties, particularly SMEs, civil societies and academia in engaging with artificial intelligence due to constraints like knowledge level, financial costs and time commitments.
Supporting facts:
- UNESCO has been working on creating multi-stakeholder governance of AI
- UNESCO has launched several knowledge products to facilitate the process, including a comic book on AI to facilitate understanding
- UNESCO has supported people financially and compensated them for their time in consultation processes
Topics: Artificial Intelligence, Capacity Building, Multi-stakeholder Conversations
Report
UNESCO has developed a comprehensive recommendation on the ethics of AI, achieved through a rigorous multi-stakeholder process. Over two years, 24 global experts collaborated on the initial draft, which then underwent around 200 hours of intergovernmental negotiations. In 2021, all 193 member countries adopted the recommendation, emphasizing the global consensus on addressing ethical concerns surrounding AI.
The recommendation includes values of human rights, inclusivity, and sustainability, serving as a guide for developers and users. It emphasizes transparency and explainability, ensuring AI systems are clear and understandable. UNESCO is implementing the recommendation through various tools, forums, and initiatives.
This includes a readiness assessment methodology, a Global Forum on the Ethics of AI, and an ethical impact assessment tool for governments and companies procuring AI systems. The role of AI in society is also acknowledged, with the example of a robot assisting teachers, which could influence learning norms.
Ethical viewpoints are crucial to align AI with societal expectations. Prateek Sibal advocates for inclusive multi-stakeholder conversations around AI, emphasizing the importance of awareness, accessibility, and sensitivity towards different sectors. He suggests financial compensation to facilitate civil society engagement. In conclusion, UNESCO’s recommendation on AI ethics provides valuable guidelines for responsible AI development.
Their commitment to implementation and inclusive dialogue strengthens the global effort to navigate ethical challenges presented by AI.
Seth Center
Speech speed
146 words per minute
Speech length
975 words
Speech time
401 secs
Arguments
AI regulation needs to keep up with the speed of technological advancements.
Supporting facts:
- Speed and regulation are in natural tension, which is particularly evident in the case of AI.
- There is a perceived inadequacy in the government response to keep pace with fast-progressing AI.
Topics: AI governance, AI regulation, Technological governance
It is important to have risk frameworks in place for AI.
Supporting facts:
- The US has two foundational documents: AI Bill of Rights and Risk Management Framework.
- 240 organizations have contributed to the formulation of Risk Management Framework.
- This framework is multi-stakeholder and covers the entire AI lifecycle.
Topics: AI governance, Risk Management, AI Bill of Rights
Technologies like foundation models have initiated a new era of AI.
Supporting facts:
- The technological era changed drastically with the emergence of foundation models.
- The leading companies developing large language models are located in the US.
Topics: Foundation Models, AI governance, AI Regulation
Report
AI regulation needs to keep up with the rapid pace of technological advancements, as there is a perceived inadequacy of government response in this area. The tension between speed and regulation is particularly evident in the case of AI. The argument put forth is that regulations should be able to adapt and respond quickly to the ever-changing landscape of AI technology.
On the other hand, it is important to have risk frameworks in place to address the potential risks associated with AI. The United States has taken steps in this direction by introducing the AI Bill of Rights and Risk Management Framework.
The Risk Management Framework was formulated with contributions from 240 organizations and covers the entire AI lifecycle. This multi-stakeholder approach ensures that various perspectives are considered in managing the risks posed by AI. Technological advancements, such as the development of foundation models, have ushered in a new era of AI.
Leading companies in this field are primarily located in the United States. This highlights the influence and impact of US-based companies on the development and deployment of AI technologies worldwide. The emergence of foundation models has disrupted the technology landscape, showcasing the potential and capabilities of AI systems.
To address the challenges associated with rapid AI evolution, the implementation of voluntary commitments has been proposed. The White House has devised a framework called ‘Voluntary Commitments’ to enhance responsible management of AI systems. This framework includes elements such as red teaming, information sharing, basic cybersecurity measures, public transparency, and disclosure.
Its objective is to build trust and security amidst the fast-paced evolution of AI technologies. In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology. The perceived inadequacy of government response highlights the need for agile and adaptive regulations.
Additionally, risk frameworks, such as the AI Bill of Rights and Risk Management Framework, are important in managing the potential risks associated with AI. The emergence of technologies like foundation models has brought about a new era of AI, with leading companies based in the US driving innovation in this field.
The implementation of voluntary commitments, as proposed by the White House, aims to foster trust and security in the ever-evolving landscape of AI technologies.
Suzanne Akkabaoui
Speech speed
124 words per minute
Speech length
775 words
Speech time
374 secs
Arguments
Egypt has a national AI strategy that aims to create an AI industry.
Supporting facts:
- Strategy built on four pillars: AI for government, AI for development, capacity building and international relations.
- Promotion of effective partnership between the government and the private sector.
Topics: AI industry, National AI strategy
AI should enhance human labor and not replace it.
Supporting facts:
- A thorough impact assessment is required for each AI product.
- Focus on whether AI is the best solution and the expected social and economic impacts.
Topics: AI and employment, Social impact of AI
Issued a charter for responsible AI divided into general guidelines and implementation guidelines.
Supporting facts:
- The general guidelines outlined use of AI for promoting well-being of citizens and combating poverty, hunger, inequality.
- Implementation guidelines addressed AI robustness, security, safety throughout entire lifecycle.
Topics: AI policy, AI governance
Understanding cultural differences and bridging cultural and sociological gaps is crucial in technology advancement.
Supporting facts:
- Cultural gaps emerge alongside technological advances.
Topics: Culture in AI, Social responsibility in AI
Report
Egypt has developed a comprehensive national AI strategy aimed at fostering the growth of an AI industry. The strategy is based on four pillars: AI for government, AI for development, capacity building, and international relations. It focuses on leveraging AI technologies to drive innovation, improve governance, and address societal challenges.
Under the AI for government pillar, Egypt aims to enhance the effectiveness of public administration by adopting AI technologies. This includes streamlining administrative processes, improving decision-making, and delivering efficient government services. The AI for development pillar highlights Egypt’s commitment to utilizing AI as a catalyst for economic growth and social development.
The strategy focuses on promoting AI-driven innovation, entrepreneurship, and tackling critical issues such as poverty, hunger, and inequality. Capacity building is prioritized in Egypt’s AI strategy to develop a skilled workforce. The country invests in AI education, training programs, research, and collaboration between academia, industry, and government.
International cooperation is emphasized to exchange knowledge, share best practices, and establish standardized approaches to AI governance. Egypt actively participates in global discussions on AI policies and practices as a member of the OECD AI network. To ensure responsible AI deployment, Egypt has issued a charter that provides guidelines for promoting citizen well-being and aligning with national goals.
These guidelines address aspects such as robustness, security, safety, and social impact assessments. Egypt also recognizes the importance of understanding cultural differences and bridging gaps for technological advancements. The country aims to address cultural gaps and promote inclusivity, ensuring that the benefits of AI are accessible to all segments of society.
Overall, Egypt’s AI strategy demonstrates a commitment to creating an AI industry and leveraging AI for governance, development, capacity building, and international cooperation. The strategy aims to foster responsible and inclusive progress through AI technologies.
Thomas Schneider
Speech speed
173 words per minute
Speech length
1648 words
Speech time
570 secs
Arguments
AI should not be regulated as a tool, but in the context of its application
Supporting facts:
- AI is compared to the combustion engine in its disruptiveness: AI replaces cognitive human work with machines, just as the combustion engine displaced physical labor
- Existing cultural, technical and legal norms that guided previous disruptive technologies can be adapted for AI
Topics: AI regulation, Contextual use, Human rights
Council of Europe is developing the first binding convention on AI and human rights
Supporting facts:
- The agreement is the first intergovernmental agreement that intends to commit states to live up to AI principles based on the norms of human rights, democracy and rule of law
- The convention represents a legal tool to create interoperable legal systems within countries
Topics: Council of Europe, AI Regulation, Human Rights
Need for agreement on fundamental values
Supporting facts:
- Reiterate and verify agreement on how to respect human dignity and how to remain innovative while respecting rights
Topics: Global Governance, Value Alignment, Human Dignity
Requirement to tackle new challenges and clarify legal uncertainties
Supporting facts:
- Breaking down new elements and challenges, identifying legal uncertainties to be clarified
Topics: Legal Uncertainties, Global Challenges
Use of best tools and methods to solve problems
Supporting facts:
- A mix of tools should be used to solve identified problems; some methods will be faster, while others will be more sustainable
Topics: Problem Solving, Governance
Importance of stakeholder cooperation
Supporting facts:
- Need to continue and cooperate with all stakeholders in their respective roles
Topics: Stakeholder Engagement, Cooperation
Report
The topic of AI regulation is currently being discussed in the context of its application. The argument put forth is that instead of regulating AI as a tool, it should be regulated in consideration of how it is used. This means that regulations should be tailored to address the specific applications and potential risks of AI, rather than imposing blanket regulations on all AI technologies.
It is believed that this approach will allow for more nuanced and effective regulation of AI. Voluntary commitments in AI regulation are seen as an effective approach, provided that the right incentives are in place: instead of enforcing compulsory regulations, which can often be complicated and unworkable, voluntary agreements can be more successful.
By providing incentives for AI developers and users to adhere to certain standards and guidelines, it is believed that a more cooperative and collaborative approach can be fostered, ensuring responsible and ethical use of AI technology. The Council of Europe is currently developing the first binding convention on AI and human rights, which is seen as a significant step forward.
This intergovernmental agreement aims to commit states to uphold AI principles based on the norms of human rights, democracy, and the rule of law. The convention is not only intended to ensure the protection of fundamental rights in the context of AI, but also to create interoperable legal systems within countries.
This represents a significant development in the global governance of AI and the protection of human rights. The need for agreement on fundamental values is also highlighted in the discussions on AI regulation. It is essential to have a consensus on how to respect human dignity and ensure that technological advancements are made while upholding and respecting human rights.
This ensures that AI development and deployment align with society’s values and principles. Addressing legal uncertainties and tackling new challenges is another important aspect of AI regulation. As AI technologies continue to evolve, it is necessary to identify legal uncertainties and clarify them to ensure a clear and coherent regulatory framework.
Breaking down new elements and challenges associated with AI is crucial to ensuring that regulations are effective and comprehensive. In solving the problems related to AI regulation, it is emphasized that using the best tools and methods is essential. Different problems may require different approaches, with some methods being faster but less sustainable, while others may be more sustainable but take longer to implement.
By utilizing a mix of tools and methods, it is possible to effectively address identified issues. Stakeholder cooperation is also considered to be of utmost importance in the realm of AI regulation. All stakeholders, including governments, businesses, researchers, and civil society, need to continue to engage and cooperate with each other, leveraging their respective roles.
This collaboration ensures that diverse perspectives and expertise are taken into account when formulating regulations, thereby increasing the chances of effective and balanced AI regulation. However, there is also opposition to the creation of burdensome bureaucracy in the process of regulating AI.
Efforts should be made to clarify issues and address challenges without adding unnecessary administrative complexity. It is crucial to strike a balance between ensuring responsible and ethical use of AI technology and avoiding excessive burdens on AI developers and users.
In conclusion, the discussions on AI regulation are centered on the need to regulate AI in the context of its application, rather than treating it as a tool. Voluntary commitments are seen as effective approaches provided the right incentives are in place, while the binding convention being developed by the Council of Europe offers a complementary legal instrument.
Agreement on fundamental values, addressing legal uncertainties, and stakeholder cooperation are crucial aspects of AI regulation. It is important to strike a balance between effective regulation and avoiding burdensome bureaucracy.