Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196
Event report
Speakers and Moderators
Speakers:
- Prateek Sibal, Intergovernmental Organization
- Owen Larter, Private Sector, Western European and Others Group (WEOG)
- Thomas Schneider, Government, Western European and Others Group (WEOG)
- Clara Neppel, Technical Community, Western European and Others Group (WEOG)
- Maria Paz Canales, Civil Society, Latin American and Caribbean Group (GRULAC)
- Nobuhisa Nishigata, Government, Asia-Pacific Group
- Suzanne Akkabaoui, Government, African Group
- Karine Perset, Intergovernmental Organization (OECD)
Moderators:
- Timea Suto, Private Sector, Eastern European Group
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Owen Larter
Microsoft has made significant efforts to prioritise the responsible use of AI technology. They have dedicated six years to building their responsible AI programme, which involves a team of over 350 experts in various fields including engineering, research, legal, and policy. Their responsible AI standard is based on the principles outlined by the Organisation for Economic Co-operation and Development (OECD), emphasising the importance of ethical AI practices.
In addition to their internal initiatives, Microsoft recognises the need for active participation from private companies and the industry in AI governance discussions. To foster collaboration and best practices, they have founded the Frontier Model Forum, which brings together leading AI labs. This forum focuses on developing technical guidelines and standards for frontier models. Microsoft also supports global efforts, such as those taking place at the United Nations (UN) and the OECD, to ensure that AI technology is governed responsibly.
Another crucial aspect highlighted by Microsoft is the importance of regulations to effectively manage the use and development of AI. They actively share their insights and experiences to help shape regulations that address the unique challenges posed by AI technology. Furthermore, Microsoft aims to build capacity for governments and industry regulators, enabling them to navigate the complex landscape of AI and ensure the adoption of responsible practices.
Microsoft emphasises the need for safeguards at both the model and application levels of AI development. The responsible development of AI models includes considering ethical considerations and ensuring that the model meets the necessary requirements. However, Microsoft acknowledges that even if the model is developed responsibly, there is still a risk if the application level lacks proper safeguards. Therefore, they stress the importance of incorporating safeguards throughout the entire AI development process.
Microsoft also supports global governance for AI and advocates for a representative process in developing standards. They believe that a global governance regime should aim for a framework that includes standard setting, consensus on risk assessment and mitigation, and infrastructure building. Microsoft cites the International Civil Aviation Organization and the Intergovernmental Panel on Climate Change as potential models for governance, highlighting the importance of collaborative and inclusive approaches to effectively govern AI.
In conclusion, Microsoft takes the responsible use of AI technology seriously, as evident through their comprehensive responsible AI programme, active participation in AI governance discussions, support for global initiatives, and commitment to shaping regulations. They emphasise the need for safeguards at both the model and application levels of AI development and advocate for global governance that is representative, consensus-driven, and infrastructure-focused. Through their efforts, Microsoft aims to ensure that AI technology is harnessed responsibly and ethically, promoting positive societal impact.
Clara Neppel
During the discussion, the speakers highlighted the ethical challenges associated with technology development. They emphasised the need for responsibility and the embedding of values and business models into technology. IEEE, with its constituency of 400,000 members, was recognised as playing a significant role in addressing these challenges.
Transparency, value-based design, and bias in AI were identified as crucial areas of concern. IEEE has been actively engaging with regulatory bodies to develop socio-technical standards in these areas. They have already established a standard that defines transparency and another standard focused on value-based design. These efforts aim to ensure that AI systems are accountable, fair, and free from bias.
The importance of standards in complementing regulation and bringing interoperability in regulatory requirements was emphasised. IEEE has been involved in discussions with experts to address the technical challenges related to AI. An example was provided in the form of the UK's Children's Act, which was complemented by an IEEE standard on age-appropriate design. This highlights how standards can play a crucial role in ensuring compliance and interoperability within regulatory frameworks.
Capacity building for AI certification was also discussed as an essential component. IEEE has trained over 100 individuals for AI certification and is also working on training certification bodies to carry out assessments. This capacity building process ensures that individuals and organisations possess the necessary skills and knowledge to navigate the complex landscape of AI and contribute to its responsible development and deployment.
The panel also explored the role of the private sector in protecting democracy and the rule of law. One speaker argued that it is not the responsibility of the private sector to safeguard these fundamental principles. However, another speaker highlighted the need for legal certainty, which can only be provided through regulations, in upholding the rule of law. This debate prompts further reflection on the appropriate roles and responsibilities of different societal actors in maintaining democratic values and institutions.
The negative impact of uncertainty on the private sector was acknowledged. The uncertain environment poses challenges for businesses and impedes economic growth and job creation. This concern underscores the need for stability and predictability to support a thriving and sustainable private sector.
Lastly, the importance of feedback loops and common standards in AI was emphasised. This includes ensuring that feedback is taken into account to improve and retrain AI systems. Drawing from lessons learned in the aviation industry, the development of benchmarking and common standards was seen as vital for enabling efficient and effective collaboration across different AI systems and applications.
In conclusion, the speakers underscored the importance of addressing ethical challenges in technology development, specifically in the context of AI. IEEE's involvement in shaping socio-technical standards, capacity building for AI certification, and the need for transparency, value-based design, and addressing bias were highlighted. The role of standards in complementing regulation and promoting interoperability was emphasised, along with the necessity of legal certainty to uphold democracy and the rule of law. The challenges posed by uncertainty in the private sector and the significance of feedback loops and common standards in AI were also acknowledged. These insights contribute to the ongoing discourse surrounding the responsible development and deployment of technology.
Maria Paz Canales
The discussion on artificial intelligence (AI) governance highlighted several key points. Firstly, there is a need for a clearer understanding and clarification of AI governance. This involves the participation of various actors, including civil society organizations and communities that are directly affected by AI. Civil society organizations require a better understanding of AI risk areas, as well as effective strategies for AI implementation.
Another important point is the evaluation of AI's impact on rights, with a specific focus on inclusiveness. It is essential to assess the effect of AI on civil, political, economic, social, and cultural rights. Unfortunately, the perspectives of communities impacted by AI are often excluded in the assessment of AI risks. Therefore, it is necessary to include the viewpoints and experiences of these communities to comprehensively evaluate the impact of AI. This inclusive approach to AI impact assessment can lead to the development of responsible and trustworthy technology.
The discussion also emphasized the importance of education and capacity building in understanding AI's impact. The public cannot fully comprehend the consequences of AI without being adequately educated about the subject in a concrete and understandable way. Therefore, it is crucial to provide not only technical language but also information on how AI impacts daily life and basic rights. By enhancing education and capacity building, individuals can better grasp the implications and intricacies of AI technology.
Furthermore, it was highlighted that there should be some level of complementarity between voluntary standards and legal frameworks in AI governance. This includes ensuring responsibility at different levels, from the design stage to the implementation and functioning of AI systems. The relationship between voluntary standards and legal frameworks must be carefully balanced to create an effective governance structure for AI.
In addition, the discussion underscored the importance of accounting for shared responsibility within the legal framework. It is crucial to establish effective communication between the different operators involved in the production and use of AI. This communication should adhere to competition rules and intellectual property regulations, avoiding any violations. By accounting for shared responsibility, the legal framework can ensure ethical and responsible AI governance.
Lastly, the discussion emphasized the need for a bottom-up approach in AI governance. This approach involves the active participation of societies and different stakeholders at both the local and global levels. Geopolitically, there is a need to hear more about experiences and perspectives from various stakeholders in global-level governance discussions. By adopting a bottom-up approach, AI governance can become more democratic, inclusive, and representative of the diverse needs and interests of stakeholders.
In conclusion, the discussion on AI governance highlighted the importance of a clearer understanding and clarification of AI governance, inclusiveness in impact assessment, education and capacity building, complementarity between voluntary standards and legal frameworks, the consideration of shared responsibility, and a bottom-up approach in AI governance. By addressing these aspects, it is possible to develop responsible and trustworthy AI technology to benefit society as a whole.
Thomas Schneider
The topic of AI regulation is currently being discussed in the context of its application. The argument put forth is that AI should be regulated not as a tool but according to how it is used: regulations should be tailored to specific applications and their potential risks rather than imposed as blanket rules on all AI technologies. This approach is believed to allow for a more nuanced and effective regulation of AI.

Voluntary commitment in AI regulation is seen as an effective approach, provided the right incentives are in place. Rather than enforcing compulsory regulations, which can often be complicated and unworkable, voluntary agreements backed by incentives for AI developers and users to adhere to certain standards and guidelines can foster a more cooperative and collaborative approach, helping to ensure the responsible and ethical use of AI technology.

The Council of Europe is currently developing the first binding convention on AI and human rights, which is seen as a significant step forward. This intergovernmental agreement aims to commit states to uphold AI principles based on the norms of human rights, democracy, and the rule of law. The convention is intended not only to protect fundamental rights in the context of AI but also to create interoperable legal systems across countries, representing a significant development in the global governance of AI.

The need for agreement on fundamental values was also highlighted. It is essential to reach consensus on how to respect human dignity and to ensure that technological advancements uphold human rights, so that AI development and deployment align with society's values and principles.

Addressing legal uncertainties and tackling new challenges is another important aspect of AI regulation. As AI technologies continue to evolve, legal uncertainties must be identified and clarified to ensure a clear and coherent regulatory framework, and the genuinely new elements and challenges of AI must be broken down so that regulations remain effective and comprehensive.

In solving problems related to AI regulation, it is essential to use the best tools and methods available. Different problems may require different approaches: some methods are faster but less sustainable, while others are more sustainable but take longer to implement. A mix of tools and methods makes it possible to address identified issues effectively.

Stakeholder cooperation is also of utmost importance. Governments, businesses, researchers, and civil society all need to continue to engage and cooperate with one another, leveraging their respective roles. This collaboration ensures that diverse perspectives and expertise inform regulation, increasing the chances of effective and balanced outcomes. At the same time, there is opposition to creating burdensome bureaucracy: efforts should clarify issues and address challenges without adding unnecessary administrative complexity, striking a balance between ensuring responsible use of AI and avoiding excessive burdens on developers and users.

In conclusion, the discussions on AI regulation centre on regulating AI according to its application rather than treating it as a single tool. Voluntary commitments can be effective where the right incentives exist, while binding instruments such as the convention being developed by the Council of Europe anchor the protection of fundamental rights. Agreement on fundamental values, the resolution of legal uncertainties, and stakeholder cooperation are crucial aspects of AI regulation, and effective regulation must be balanced against the risk of burdensome bureaucracy.
Suzanne Akkabaoui
Egypt has developed a comprehensive national AI strategy aimed at fostering the growth of an AI industry. The strategy is based on four pillars: AI for government, AI for development, capacity building, and international relations. It focuses on leveraging AI technologies to drive innovation, improve governance, and address societal challenges.
Under the AI for government pillar, Egypt aims to enhance the effectiveness of public administration by adopting AI technologies. This includes streamlining administrative processes, improving decision-making, and delivering efficient government services.
The AI for development pillar highlights Egypt's commitment to utilizing AI as a catalyst for economic growth and social development. The strategy focuses on promoting AI-driven innovation, entrepreneurship, and tackling critical issues such as poverty, hunger, and inequality.
Capacity building is prioritized in Egypt's AI strategy to develop a skilled workforce. The country invests in AI education, training programs, research, and collaboration between academia, industry, and government.
International cooperation is emphasized to exchange knowledge, share best practices, and establish standardized approaches to AI governance. Egypt actively participates in global discussions on AI policies and practices as a member of the OECD AI network.
To ensure responsible AI deployment, Egypt has issued a charter that provides guidelines for promoting citizen well-being and aligning with national goals. These guidelines address aspects such as robustness, security, safety, and social impact assessments.
Egypt also recognizes the importance of understanding cultural differences and bridging gaps for technological advancements. The country aims to address cultural gaps and promote inclusivity, ensuring that the benefits of AI are accessible to all segments of society.
Overall, Egypt's AI strategy demonstrates a commitment to creating an AI industry and leveraging AI for governance, development, capacity building, and international cooperation. The strategy aims to foster responsible and inclusive progress through AI technologies.
Moderator
In the analysis, several speakers discuss various aspects of AI and its impact on different sectors and societies. One key point raised is the need for more machines, especially AI, in Japan to sustain its ageing economy. With Japan facing social problems related to an ageing population, the introduction of more people and machines is deemed necessary. However, it is interesting to note that while Japan recognises the importance of AI, they believe that its opportunities and possibilities should be prioritised over legislation. They view AI as a solution rather than a problem and want to see more of what AI can do for their society before introducing any regulations.
Furthermore, the G7 delegates are focused on creating a report that specifically examines the risks, challenges, and opportunities of new technology, particularly generative AI. They have sought support from the OECD in summarising this report. This highlights the importance of international cooperation in addressing the impact of AI.
Egypt also plays a significant role in the AI discussion. The country has a national AI strategy that seeks to create an AI industry while emphasising that AI should enhance human labour rather than replace it. Egypt has published an AI charter for responsible AI and is a member of the OECD AI network. This showcases the importance of aligning national strategies with global initiatives and fostering regional and international collaborations.
Microsoft is another notable player in the field of AI. The company is committed to developing AI technology responsibly. It has implemented a responsible AI standard based on OECD principles, which is shared externally for critique and improvement. Microsoft actively engages in global governance conversations, particularly through the Frontier Model Forum, where they accelerate work on technical best practices. Their contributions highlight the importance of private sector involvement in governance discussions.
UNESCO has made significant contributions to the AI discourse by developing a recommendation on the ethics of AI. The recommendation was developed through a two-year multi-stakeholder process and adopted by 193 countries. It provides a clear indication of ethical values and principles that should guide AI development and usage. Furthermore, UNESCO is actively working on capacity building to equip governments and organizations with the necessary skills to implement AI systems ethically.
In terms of addressing concerns and ensuring inclusivity, it is highlighted that AI bias can be addressed even before AI regulations are in place. Existing human rights frameworks and data protection laws can start to address challenges related to bias, discrimination, and privacy. For example, UNESCO has been providing knowledge on these issues to judicial operators to equip them with the necessary understanding of AI and the rule of law. Additionally, discussions around AI governance emphasise the need for clarity in frameworks and the inclusion of individuals who are directly impacted by AI technologies.
The analysis also suggests that responsible development, governance, regulation, and capacity building should be multi-stakeholder and cooperative processes. All sectors need to be involved in the conversation and implementation of AI initiatives to ensure effective and inclusive outcomes.
An interesting observation is the need for a balance between voluntary standards and legal frameworks. Complementarity is needed between these two approaches, especially in the design, implementation, and use of AI systems. Furthermore, the analysis highlights the importance of a bottom-up approach in global governance, taking into account diverse stakeholders and global geopolitical contexts. By incorporating global experiences from different stakeholders, risks can be identified, and relevant elements can be considered in local contexts.
Overall, the analysis provides insights into various perspectives on AI and the importance of responsible development, global collaboration, and inclusive policies in shaping the future of AI.
Seth Center
AI regulation needs to keep up with the rapid pace of technological advancements, as there is a perceived inadequacy of government response in this area. The tension between speed and regulation is particularly evident in the case of AI. The argument put forth is that regulations should be able to adapt and respond quickly to the ever-changing landscape of AI technology.
On the other hand, it is important to have risk frameworks in place to address the potential risks associated with AI. The United States has taken steps in this direction by introducing the AI Bill of Rights and Risk Management Framework. These foundational documents have been formulated with the contributions of 240 organizations and cover the entire lifecycle of AI. The multi-stakeholder approach ensures that various perspectives are considered in managing the risks posed by AI.
Technological advancements, such as the development of foundation models, have ushered in a new era of AI. Leading companies in this field are primarily located in the United States. This highlights the influence and impact of US-based companies on the development and deployment of AI technologies worldwide. The emergence of foundation models has disrupted the technology landscape, showcasing the potential and capabilities of AI systems.
To address the challenges associated with rapid AI evolution, the implementation of voluntary commitments has been proposed. The White House has devised a framework called 'Voluntary Commitments' to enhance responsible management of AI systems. This framework includes elements such as red teaming, information sharing, basic cybersecurity measures, public transparency, and disclosure. Its objective is to build trust and security amidst the fast-paced evolution of AI technologies.
In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology. The perceived inadequacy of government response highlights the need for agile and adaptive regulations. Additionally, risk frameworks, such as the AI Bill of Rights and Risk Management Framework, are important in managing the potential risks associated with AI. The emergence of technologies like foundation models has brought about a new era of AI, with leading companies based in the US driving innovation in this field. The implementation of voluntary commitments, as proposed by the White House, aims to foster trust and security in the ever-evolving landscape of AI technologies.
Audience
The discussions at the event highlighted the challenges that arise in capacity building due to time and financial commitments. Ansgar Koene from EY raised the problem of time commitments related to the capacity building process, which may bring additional financial burdens for various parties. Small and medium-sized enterprises (SMEs) and civil society organisations might not be able to afford the cost of someone who is not directly contributing to their main products or services, and academics may struggle to receive academic credit for engaging in this kind of process.

To address these financial and time commitment issues in capacity building, participants stressed the importance of finding solutions. Koene specifically asked for suggestions to tackle this problem, underscoring the need for feasible strategies to alleviate the burden that time and financial commitments place on different stakeholders.
There were also concerns raised about the implementation of responsible AI, particularly regarding the choice between system level guardrails and model level guardrails. The discussion highlighted worries about tech vendors providing unsafe models if responsibility for responsible AI is pushed to the system level. This sparked a debate about the best approach to implement responsible AI and the potential trade-offs associated with system level versus model level guardrails.
Moreover, the event touched upon the Hiroshima process and the expectation of a principle-based approach to AI. The previous G20 process, which focused on creating data free flow with trust, served as a reference point for the discussion. There was a question about the need for a principle approach for AI, suggesting the desire to establish ethical guidelines and principles to guide the development and deployment of AI technologies.
In conclusion, the discussions shed light on the challenges posed by time and financial commitments in capacity building and the need for solutions to mitigate these issues. Concerns about system level versus model level guardrails in responsible AI implementation emphasized the importance of balancing safety and innovation in AI. The desire for a principle-based approach to AI and the establishment of ethical guidelines were also highlighted.
Prateek Sibal
UNESCO has developed a comprehensive recommendation on the ethics of AI, achieved through a rigorous multi-stakeholder process. Over two years, 24 global experts collaborated on the initial draft, which then underwent around 200 hours of intergovernmental negotiations. In 2021, all 193 member countries adopted the recommendation, emphasizing the global consensus on addressing ethical concerns surrounding AI.
The recommendation includes values of human rights, inclusivity, and sustainability, serving as a guide for developers and users. It emphasizes transparency and explainability, ensuring AI systems are clear and understandable.
UNESCO is implementing the recommendation through various tools, forums, and initiatives. This includes a readiness assessment methodology, a Global Forum on the Ethics of AI, and an ethical impact assessment tool for governments and companies procuring AI systems.
The role of AI in society is acknowledged, with an example of a robot assisting teachers potentially influencing learning norms. Ethical viewpoints are crucial to align AI with societal expectations.
Prateek Sibal advocates for inclusive multi-stakeholder conversations around AI, emphasizing the importance of awareness, accessibility, and sensitivity towards different sectors. He suggests financial compensation to facilitate civil society engagement.
In conclusion, UNESCO's recommendation on AI ethics provides valuable guidelines for responsible AI development. Their commitment to implementation and inclusive dialogue strengthens the global effort to navigate ethical challenges presented by AI.
Galia
The speakers in the discussion highlight the critical role of global governance, stakeholder engagement, and value alignment in working towards SDG 16, which focuses on Peace, Justice, and Strong Institutions. They address the challenges faced in implementing these efforts and stress the importance of establishing credible value alignment.
Galia, one of the speakers, emphasizes the mapping exercises conducted with the OECD regarding risk assessment. This suggests that the speakers are actively involved in assessing potential risks and vulnerabilities in the context of global governance. The mention of these mapping exercises indicates that concrete steps are being taken to identify and address potential obstacles.
Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically highlights the significance of ensuring alignment with values at a global level. This implies that involving various stakeholders and aligning their interests and values is crucial to achieving successful global governance. This collaborative approach allows for a wider range of perspectives and enables a more inclusive decision-making process.
The sentiment expressed by the speakers is positive, indicating their optimism and belief in overcoming the challenges associated with implementation. Their focus on credible value alignment suggests that they recognize the importance of ensuring that the principles and values underpinning global governance are widely accepted and respected. By emphasizing stakeholder engagement and value alignment, the speakers underscore the need for a holistic approach that goes beyond mere top-down control and incorporates diverse perspectives.
In summary, the discussion emphasizes the vital aspects of global governance, stakeholder engagement, and value alignment in achieving SDG 16. The speakers' identification of challenges and their emphasis on credible value alignment demonstrate a proactive and thoughtful approach. Their mention of the OECD mapping exercises also indicates a commitment to assessing risks and vulnerabilities. Overall, the analysis underscores the significance of collaboration and the pursuit of shared values in global governance.
Nobuhisa Nishigata
Japan has emerged as a frontrunner in the discussions surrounding artificial intelligence (AI) during the G7 meeting. Japan proposed the inclusion of AI as a topic for discussion at the G7 meeting, and this proposal was met with enthusiasm by the other member countries. Consequently, the Japanese government has requested the OECD to continue the work on AI further. This indicates the recognition and value placed by the G7 nations on AI and its potential impact on various aspects of society.
While Japan is proactive in advocating for AI discussions, it adopts a cautious approach towards the introduction of legislation for AI. Japan believes that it is premature to implement legislation specifically tailored for AI at this stage. Nevertheless, Japan acknowledges and respects the efforts made by the European Union in this regard. This perspective highlights Japan's pragmatic approach towards ensuring that any legislation around AI is well-informed and takes into account the potential benefits and challenges presented by this emerging technology.
Underlining its commitment to fostering cooperation and setting standards in AI, Japan has established the 'Hiroshima AI process'. This initiative aims to develop a code of conduct and encourage project-based collaboration in the field of AI. Japan first brought AI to the G7 agenda in 2016, and the discussion has since shifted from voluntary commitment to government-initiated inclusive dialogue among the G7 nations. Japan is pleased with the progress made in the Hiroshima process and the inclusive dialogue it has facilitated, which has continued to move forward despite unexpected events along the way.
Japan recognises the immense potential of AI technology to serve as a catalyst for economic growth and improve everyday life. It believes that AI has the ability to support various aspects of the economy and enhance daily activities. This positive outlook reinforces Japan's commitment to harnessing the benefits of AI and ensuring its responsible and sustainable integration into society.
In conclusion, Japan has taken a leading role in driving discussions on AI within the G7, with its proposal being well-received by other member countries. While cautious about introducing legislation for AI, Japan appreciates the efforts made by the EU in this regard. The establishment of the 'Hiroshima AI process' showcases Japan's commitment to setting standards and fostering cooperation in AI. Overall, Japan is optimistic about the potential of AI to generate positive outcomes for the economy and society as a whole.
Speakers
A
Audience
Speech speed
179 words per minute
Speech length
410 words
Speech time
137 secs
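The speech-speed figures reported throughout this document appear to follow directly from the reported speech length and speech time (words ÷ seconds × 60). As an illustration only — the exact rounding applied by the DiploAI system is an assumption — the derivation can be sketched as:

```python
def words_per_minute(word_count: int, duration_secs: int) -> float:
    """Derive speech speed from total words spoken and speaking time in seconds."""
    return word_count / duration_secs * 60

# Figures as reported in this session:
# (speaker, words, seconds, reported words-per-minute)
reported = [
    ("Audience", 410, 137, 179),
    ("Clara Neppel", 1373, 503, 164),
    ("Nobuhisa Nishigata", 1447, 591, 147),
]

for name, words, secs, wpm in reported:
    computed = words_per_minute(words, secs)
    # Each reported figure matches the computation to within rounding
    print(f"{name}: {computed:.1f} wpm (reported {wpm})")
```

Each reported value agrees with this formula to within a word per minute, which suggests the published seconds are themselves rounded.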
Arguments
Engagement in capacity building could be a challenging process due to time and financial commitments
Supporting facts:
- Ansgar Kuhn from EY raised the problem of time commitments related to the capacity building process which may bring additional financial burdens for various parties
- Small to Medium Enterprises (SMEs) and civil society organizations might not be able to afford the cost of someone who isn't directly contributing to their main product or services
- Academics may struggle to get academic credit for having engaged in this kind of process
Topics: Capacity Building, Finance, Engagement, Time Management
Worries about the system level guardrails vs the model level guardrails in implementing responsible AI
Supporting facts:
- Risks in AI are context-specific; there's worry about tech vendors providing unsafe models if responsibility is pushed to the system level
Topics: Responsible AI, System level guardrails, Model level guardrails
Question about the Hiroshima process and the expectation of a principle-based approach for AI
Supporting facts:
- The previous G20 process created the concept of Data Free Flow with Trust
Topics: Hiroshima process, AI Principles
Report
The discussions at the event highlighted the challenges that arise in capacity building due to time and financial commitments. Ansgar Kuhn from EY raised the problem of time commitments related to the capacity building process, which may bring additional financial burdens for various parties.
Small to Medium Enterprises (SMEs) and civil society organizations might not be able to afford the cost of someone who isn't directly contributing to their main products or services. Academics may also struggle to get academic credit for engaging in this kind of process.
To address these financial and time commitment issues in capacity building, participants stressed the importance of finding solutions. Ansgar Kuhn specifically asked for suggestions to tackle this problem, underscoring the need to explore feasible strategies to alleviate the burden that time and financial commitments place on different stakeholders.
There were also concerns raised about the implementation of responsible AI, particularly regarding the choice between system level guardrails and model level guardrails. The discussion highlighted worries about tech vendors providing unsafe models if responsibility for responsible AI is pushed to the system level.
This sparked a debate about the best approach to implement responsible AI and the potential trade-offs associated with system level versus model level guardrails. Moreover, the event touched upon the Hiroshima process and the expectation of a principle-based approach to AI.
The previous G20 process, which focused on creating data free flow with trust, served as a reference point for the discussion. There was a question about the need for a principle-based approach for AI, suggesting the desire to establish ethical guidelines and principles to guide the development and deployment of AI technologies.
In conclusion, the discussions shed light on the challenges posed by time and financial commitments in capacity building and the need for solutions to mitigate these issues. Concerns about system level versus model level guardrails in responsible AI implementation emphasized the importance of balancing safety and innovation in AI.
The desire for a principle-based approach to AI and the establishment of ethical guidelines were also highlighted.
CN
Clara Neppel
Speech speed
164 words per minute
Speech length
1373 words
Speech time
503 secs
Arguments
Creating technology is a responsibility and values and business models are embedded into it.
Supporting facts:
- IEEE started thinking about the ethical challenges of technology in 2016.
- IEEE has a constituency of 400,000 members and is the largest technical association.
Topics: Technology Development, Ethics
Conversation around value-based design, transparency, and bias in AI is crucial.
Supporting facts:
- IEEE holds discussions with regulatory bodies to develop socio-technical standards.
- IEEE has a standard that defines transparency and a standard focused on value-based design.
Topics: AI Ethics, Transparency in AI, Bias in AI
Standards can play an important role in complementing regulation and bringing interoperability in regulatory requirements.
Supporting facts:
- IEEE is part of the network of experts discussing about regulation and technical challenges related to AI.
- Example of the UK's Children's Act complemented by an IEEE standard on age-appropriate design.
Topics: AI Standards, Regulatory Requirements
Capacity building is important for the AI certification process.
Supporting facts:
- IEEE has trained over 100 people for AI certification.
- IEEE is also training certification bodies to be able to make assessments.
Topics: Capacity Building, AI Certification
Not the responsibility of the private sector to protect democracy and rule of law
Supporting facts:
- She notes that legal certainty which can only be provided through regulations is necessary.
Topics: Private Sector, Democracy, Rule of Law
Need for legal certainty or regulations to uphold rule of law
Topics: Rule of Law, Legal Certainty, Regulations
Uncertainty facing the private sector is problematic
Topics: Private Sector, Uncertainty
Importance of feedback loops in AI
Supporting facts:
- This includes ensuring that feedback is taken into account in retraining systems.
Topics: AI, Feedback Loops
The need for benchmarking and common standards in AI
Supporting facts:
- Drawing from lessons in the aviation industry.
Topics: AI, Benchmarking, Common Standards
Report
During the discussion, the speakers highlighted the ethical challenges associated with technology development. They emphasised the need for responsibility and the embedding of values and business models into technology. IEEE, with its constituency of 400,000 members, was recognised as playing a significant role in addressing these challenges.
Transparency, value-based design, and bias in AI were identified as crucial areas of concern. IEEE has been actively engaging with regulatory bodies to develop socio-technical standards in these areas. They have already established a standard that defines transparency and another standard focused on value-based design.
These efforts aim to ensure that AI systems are accountable, fair, and free from bias. The importance of standards in complementing regulation and bringing interoperability in regulatory requirements was emphasised. IEEE has been involved in discussions with experts to address the technical challenges related to AI.
An example was provided in the form of the UK's Children's Act, which was complemented by an IEEE standard on age-appropriate design. This highlights how standards can play a crucial role in ensuring compliance and interoperability within regulatory frameworks. Capacity building for AI certification was also discussed as an essential component.
IEEE has trained over 100 individuals for AI certification and is also working on training certification bodies to carry out assessments. This capacity building process ensures that individuals and organisations possess the necessary skills and knowledge to navigate the complex landscape of AI and contribute to its responsible development and deployment.
The panel also explored the role of the private sector in protecting democracy and the rule of law. One speaker argued that it is not the responsibility of the private sector to safeguard these fundamental principles. However, another speaker highlighted the need for legal certainty, which can only be provided through regulations, in upholding the rule of law.
This debate prompts further reflection on the appropriate roles and responsibilities of different societal actors in maintaining democratic values and institutions. The negative impact of uncertainty on the private sector was acknowledged. The uncertain environment poses challenges for businesses and impedes economic growth and job creation.
This concern underscores the need for stability and predictability to support a thriving and sustainable private sector. Lastly, the importance of feedback loops and common standards in AI was emphasised. This includes ensuring that feedback is taken into account to improve and retrain AI systems.
Drawing from lessons learned in the aviation industry, the development of benchmarking and common standards was seen as vital for enabling efficient and effective collaboration across different AI systems and applications. In conclusion, the speakers underscored the importance of addressing ethical challenges in technology development, specifically in the context of AI.
IEEE's involvement in shaping socio-technical standards, capacity building for AI certification, and the need for transparency, value-based design, and addressing bias were highlighted. The role of standards in complementing regulation and promoting interoperability was emphasised, along with the necessity of legal certainty to uphold democracy and the rule of law.
The challenges posed by uncertainty in the private sector and the significance of feedback loops and common standards in AI were also acknowledged. These insights contribute to the ongoing discourse surrounding the responsible development and deployment of technology.
G
Galia
Speech speed
76 words per minute
Speech length
566 words
Speech time
444 secs
Arguments
It is possible to overcome the challenges at the implementation level of global governance and credible value alignment
Supporting facts:
- Galia mentions their mapping exercises with the OECD on the risk assessment front
Topics: Global Governance, Value Alignment, Stakeholder Engagement
Report
The speakers in the discussion highlight the critical role of global governance, stakeholder engagement, and value alignment in working towards SDG 16, which focuses on Peace, Justice, and Strong Institutions. They address the challenges faced in implementing these efforts and stress the importance of establishing credible value alignment.
Galia, one of the speakers, emphasizes the mapping exercises conducted with the OECD regarding risk assessment. This suggests that the speakers are actively involved in assessing potential risks and vulnerabilities in the context of global governance. The mention of these mapping exercises indicates that concrete steps are being taken to identify and address potential obstacles.
Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically highlights the significance of ensuring alignment with values at a global level. This implies that involving various stakeholders and aligning their interests and values is crucial to achieving successful global governance.
This collaborative approach allows for a wider range of perspectives and enables a more inclusive decision-making process. The sentiment expressed by the speakers is positive, indicating their optimism and belief in overcoming the challenges associated with implementation. Their focus on credible value alignment suggests that they recognize the importance of ensuring that the principles and values underpinning global governance are widely accepted and respected.
By emphasizing stakeholder engagement and value alignment, the speakers underscore the need for a holistic approach that goes beyond mere top-down control and incorporates diverse perspectives. In summary, the discussion emphasizes the vital aspects of global governance, stakeholder engagement, and value alignment in achieving SDG 16.
The speakers' identification of challenges and their emphasis on credible value alignment demonstrate a proactive and thoughtful approach. Their mention of the OECD mapping exercises also indicates a commitment to assessing risks and vulnerabilities. Overall, the analysis underscores the significance of collaboration and the pursuit of shared values in global governance.
MP
Maria Paz Canales
Speech speed
166 words per minute
Speech length
1291 words
Speech time
467 secs
Arguments
Adequate understanding and clarification of AI governance is vital
Supporting facts:
- Civil society organizations need a clearer understanding of AI risk areas and implementation strategies.
- The conversation on AI governance should involve various actors including the technical community and the communities affected by AI.
Topics: Artificial Intelligence, Technology Governance
Capacity building is necessary for understanding AI impact
Supporting facts:
- The public can't speak on AI impact without being educated about the topic in a concrete and understandable way.
- Understanding AI requires not just technical language, but how AI impacts daily life and basic rights.
Topics: Artificial Intelligence, Capacity Building
We need some level of complementarity between voluntary standards and legal frameworks
Supporting facts:
- This is particularly linked with responsibility at different levels of safeguard in the design stage but also in the implementation and functioning of the system
Topics: AI Regulation, Legal Frameworks, Voluntary Standards
The legal framework should account for shared responsibility
Supporting facts:
- This would ensure that between the different operators in the chain of production and use of AI, there is enough communication that does not violate competition rules or intellectual property rules
Topics: AI Regulation, Legal Frameworks, Shared Responsibility
Report
The discussion on artificial intelligence (AI) governance highlighted several key points. Firstly, there is a need for a clearer understanding and clarification of AI governance. This involves the participation of various actors, including civil society organizations and communities that are directly affected by AI.
Civil society organizations require a better understanding of AI risk areas, as well as effective strategies for AI implementation. Another important point is the evaluation of AI's impact on rights, with a specific focus on inclusiveness. It is essential to assess the effect of AI on civil, political, economic, social, and cultural rights.
Unfortunately, the perspectives of communities impacted by AI are often excluded in the assessment of AI risks. Therefore, it is necessary to include the viewpoints and experiences of these communities to comprehensively evaluate the impact of AI. This inclusive approach to AI impact assessment can lead to the development of responsible and trustworthy technology.
The discussion also emphasized the importance of education and capacity building in understanding AI's impact. The public cannot fully comprehend the consequences of AI without being adequately educated about the subject in a concrete and understandable way. Therefore, it is crucial to provide not only technical language but also information on how AI impacts daily life and basic rights.
By enhancing education and capacity building, individuals can better grasp the implications and intricacies of AI technology. Furthermore, it was highlighted that there should be some level of complementarity between voluntary standards and legal frameworks in AI governance. This includes ensuring responsibility at different levels, from the design stage to the implementation and functioning of AI systems.
The relationship between voluntary standards and legal frameworks must be carefully balanced to create an effective governance structure for AI. In addition, the discussion underscored the importance of accounting for shared responsibility within the legal framework. It is crucial to establish effective communication between the different operators involved in the production and use of AI.
This communication should adhere to competition rules and intellectual property regulations, avoiding any violations. By accounting for shared responsibility, the legal framework can ensure ethical and responsible AI governance. Lastly, the discussion emphasized the need for a bottom-up approach in AI governance.
This approach involves the active participation of societies and different stakeholders at both the local and global levels. Geopolitically, there is a need to hear more about experiences and perspectives from various stakeholders in global-level governance discussions. By adopting a bottom-up approach, AI governance can become more democratic, inclusive, and representative of the diverse needs and interests of stakeholders.
In conclusion, the discussion on AI governance highlighted the importance of a clearer understanding and clarification of AI governance, inclusiveness in impact assessment, education and capacity building, complementarity between voluntary standards and legal frameworks, the consideration of shared responsibility, and a bottom-up approach in AI governance.
By addressing these aspects, it is possible to develop responsible and trustworthy AI technology to benefit society as a whole.
M
Moderator
Speech speed
122 words per minute
Speech length
2365 words
Speech time
1165 secs
Arguments
Japan is in a position where it needs more machines, especially AI, to help sustain its aging economy
Supporting facts:
- Japan is facing several social problems related to aging, hence it needs more people and machines to sustain the economy
Topics: AI, Aging economy
Japan wants to see more of what AI can do, particularly for their society, before introducing legislation on AI
Supporting facts:
- Japan is not ready yet to introduce legislation over the AI technologies
- Japan respects what the EU is doing regarding AI legislation but feels it's too early to take similar action
Topics: AI, Legislation
The G7 delegates are working to create a report on the risks, challenges, and opportunities of new technology, particularly generative AI
Supporting facts:
- The leaders agreed to establish the Hiroshima AI process, focusing on generative AI
- They're asking OECD for support in summarizing a report on these topics
Topics: AI, G7, Generative AI
AI approaches, policies, and frameworks need to be responsive to national context, cultures, local expectations while aligning with global values
Supporting facts:
- Egypt has a national AI strategy that aims to create an AI industry in Egypt.
- One of the main principles of the strategy is that AI should enhance human labor and not replace it.
- Egypt published an AI charter for responsible AI
Topics: Artificial Intelligence, Policies
Policies should be interoperable with other countries and regional and global initiatives
Supporting facts:
- Egypt is a member of the OECD AI network.
- The strategy also emphasizes the importance of fostering regional and international cooperations.
Topics: Artificial Intelligence, Policies
AI technology carries immense promise but also risks
Supporting facts:
- Microsoft AI co-pilots are being used by customers to increase productivity
- AI can help understand and manage complex systems and respond to challenges like healthcare or climate or improving education
Topics: AI technology, Microsoft, Potential risks
Technology must be developed responsibly
Supporting facts:
- Microsoft has over 350 people from diverse backgrounds working on identifying and mitigating AI risks
- Microsoft has implemented a responsible AI standard based on OECD principles
- The standard is shared externally for critique and improvement
Topics: Responsible AI, Microsoft, Internal standard
Private sector and industry should actively contribute to governance discussions
Supporting facts:
- Microsoft founded the Frontier Model Forum to accelerate work around technical best practice on frontier models
- The company engages internationally in global governance conversations
Topics: Microsoft, AI governance, Industry contribution
Processes of responsible development, governance, regulation, capacity building must be multistakeholder and cooperative
Supporting facts:
- Microsoft dialogues and supports UN initiatives
- All sectors need to be at the table at all levels for effective implementation
Topics: Responsible AI, Governance, Regulation, Capacity building
UNESCO has developed a recommendation on the ethics of AI
Supporting facts:
- UNESCO's recommendation was developed through a multi-stakeholder process over two years
- A group of 24 world experts drafted the document which was adopted by 193 countries
- The document provides a clear indication to developers and users of ethical values and principles that should guide AI development and usage
Topics: UNESCO, AI Ethics, AI Governance
There is a need for more clarity in the framework addressing AI and its impacts
Supporting facts:
- AI impacts daily life of people globally. AI affects the exercise of civil and political rights but also economic, social, and cultural rights
Topics: AI governance, Human rights
Consultations should include individuals who are directly impacted by AI technologies
Supporting facts:
- Usually those affected by the technology are not part of the conversations during consultative processes
- There is a need to consciously make efforts to include them
Topics: AI governance, Risk assessment
Concepts like responsible and trustworthy AI should be the results of well-done governance.
Supporting facts:
- There is difficulty in unpacking responsible and trustworthy AI
Topics: AI governance, Risk assessment
The moderator appreciates Thomas Schneider's analogy of a road system
Supporting facts:
- The analogy draws comparisons between the adaptation of rules based on context and the need for similar flexibility in AI regulation.
- The final discussion allows panelists one minute each for closing remarks.
Topics: AI regulation, Common principles in AI, Flexibility
Prateek Sibal emphasizes the importance of inclusive, understanding, and supportive multi-stakeholder conversations about AI technology.
Supporting facts:
- Prateek and his team have launched knowledge products including a comic book on AI to make it easier to understand.
- They have also provided financial support to people to participate in different fora.
Topics: AI technology, Inclusion, Knowledge products, Multi-stakeholder conversations
Owen stresses the need for safeguards at both levels - model development and integration into applications
Supporting facts:
- Mitigations can be circumvented if only applied at the model level
- Progress on development of safeguards at code level
Topics: AI safeguarding, Model development, Application integration
Owen suggests extension of progress from developers to deployers
Topics: AI deployment, Developer and Deployer roles
Owen presents ideas for forward direction in global governance
Supporting facts:
- Interest in a framework for standard setting
- Use of representative processes for developing standards
Topics: Global governance, AI regulation, Risk consensus
The Hiroshima Process is a collective effort from the G7 nations, focusing on foundation models.
Supporting facts:
- The discussion was initiated by Japan in 2016.
- The focus has shifted towards a collective dialogue among G7 nations.
Topics: Hiroshima Process, G7, Collective Effort, Foundation Model
Complementarity is needed between voluntary standards and legal frameworks, especially in the design, implementation, and use of artificial intelligence systems
Supporting facts:
- Legal frameworks should consider how responsibilities are distributed and create obligations related to transparency
- There needs to be enough communication between different operators in the production and use of AI
Topics: Artificial Intelligence, Legal Frameworks, Voluntary standards
Report
In the analysis, several speakers discuss various aspects of AI and its impact on different sectors and societies. One key point raised is the need for more machines, especially AI, in Japan to sustain its ageing economy. With Japan facing social problems related to an ageing population, the introduction of more people and machines is deemed necessary.
However, it is interesting to note that while Japan recognises the importance of AI, they believe that its opportunities and possibilities should be prioritised over legislation. They view AI as a solution rather than a problem and want to see more of what AI can do for their society before introducing any regulations.
Furthermore, the G7 delegates are focused on creating a report that specifically examines the risks, challenges, and opportunities of new technology, particularly generative AI. They have sought support from the OECD in summarising this report. This highlights the importance of international cooperation in addressing the impact of AI.
Egypt also plays a significant role in the AI discussion. The country has a national AI strategy that seeks to create an AI industry while emphasising that AI should enhance human labour rather than replace it. Egypt has published an AI charter for responsible AI and is a member of the OECD AI network.
This showcases the importance of aligning national strategies with global initiatives and fostering regional and international collaborations. Microsoft is another notable player in the field of AI. The company is committed to developing AI technology responsibly. It has implemented a responsible AI standard based on OECD principles, which is shared externally for critique and improvement.
Microsoft actively engages in global governance conversations, particularly through the Frontier Model Forum, where they accelerate work on technical best practices. Their contributions highlight the importance of private sector involvement in governance discussions. UNESCO has made significant contributions to the AI discourse by developing a recommendation on the ethics of AI.
The recommendation was developed through a two-year multi-stakeholder process and adopted by 193 countries. It provides a clear indication of ethical values and principles that should guide AI development and usage. Furthermore, UNESCO is actively working on capacity building to equip governments and organizations with the necessary skills to implement AI systems ethically.
In terms of addressing concerns and ensuring inclusivity, it is highlighted that AI bias can be addressed even before AI regulations are in place. Existing human rights frameworks and data protection laws can start to address challenges related to bias, discrimination, and privacy.
For example, UNESCO has been providing knowledge on these issues to judicial operators to equip them with the necessary understanding of AI and the rule of law. Additionally, discussions around AI governance emphasise the need for clarity in frameworks and the inclusion of individuals who are directly impacted by AI technologies.
The analysis also suggests that responsible development, governance, regulation, and capacity building should be multi-stakeholder and cooperative processes. All sectors need to be involved in the conversation and implementation of AI initiatives to ensure effective and inclusive outcomes. An interesting observation is the need for a balance between voluntary standards and legal frameworks.
Complementarity is needed between these two approaches, especially in the design, implementation, and use of AI systems. Furthermore, the analysis highlights the importance of a bottom-up approach in global governance, taking into account diverse stakeholders and global geopolitical contexts. By incorporating global experiences from different stakeholders, risks can be identified, and relevant elements can be considered in local contexts.
Overall, the analysis provides insights into various perspectives on AI and the importance of responsible development, global collaboration, and inclusive policies in shaping the future of AI.
NN
Nobuhisa Nishigata
Speech speed
147 words per minute
Speech length
1447 words
Speech time
591 secs
Arguments
Japan introduces discussions on AI at the G7 meeting
Supporting facts:
- Japan proposed the AI discussion and it was well received by the members of the G7.
- The Japanese government asked the OECD to continue the work further.
Topics: AI, G7, OECD
Japan established the 'Hiroshima AI process'.
Supporting facts:
- The goal is development of code of conduct and project-based cooperation.
- The report is expected to complete by the end of the year.
Topics: AI, Hiroshima AI process, G7
The Hiroshima process is a collective effort of the G7
Supporting facts:
- Started back in 2016
- Shift from voluntary commitment to government initiated inclusive dialogue
Topics: G7, Hiroshima process
Report
Japan has emerged as a frontrunner in the discussions surrounding artificial intelligence (AI) during the G7 meeting. Japan proposed the inclusion of AI as a topic for discussion at the G7 meeting, and this proposal was met with enthusiasm by the other member countries.
Consequently, the Japanese government has requested the OECD to continue the work on AI further. This indicates the recognition and value placed by the G7 nations on AI and its potential impact on various aspects of society. While Japan is proactive in advocating for AI discussions, it adopts a cautious approach towards the introduction of legislation for AI.
Japan believes that it is premature to implement legislation specifically tailored for AI at this stage. Nevertheless, Japan acknowledges and respects the efforts made by the European Union in this regard. This perspective highlights Japan's pragmatic approach towards ensuring that any legislation around AI is well-informed and takes into account the potential benefits and challenges presented by this emerging technology.
Underlining its commitment to fostering cooperation and setting standards in AI, Japan has established the 'Hiroshima AI process'. This initiative aims to develop a code of conduct and encourage project-based collaboration in the field of AI. The process, which began in 2016, has seen a shift from voluntary commitment to government-initiated inclusive dialogue among the G7 nations.
Japan is pleased with the progress made in the Hiroshima process and the inclusive dialogue it has facilitated. It is worth noting that despite unexpected events in 2016, the process has continued to move forward successfully. Japan recognises the immense potential of AI technology to serve as a catalyst for economic growth and improve everyday life.
It believes that AI has the ability to support various aspects of the economy and enhance daily activities. This positive outlook reinforces Japan's commitment to harnessing the benefits of AI and ensuring its responsible and sustainable integration into society. In conclusion, Japan has taken a leading role in driving discussions on AI within the G7, with its proposal being well-received by other member countries.
While cautious about introducing legislation for AI, Japan appreciates the efforts made by the EU in this regard. The establishment of the 'Hiroshima AI process' showcases Japan's commitment to setting standards and fostering cooperation in AI. Overall, Japan is optimistic about the potential of AI to generate positive outcomes for the economy and society as a whole.
OL
Owen Larter
Speech speed
234 words per minute
Speech length
1374 words
Speech time
352 secs
Arguments
Microsoft takes responsible use of AI technology seriously
Supporting facts:
- Microsoft has been building out its responsible AI program for six years
- They have a team of 350+ including experts in engineering, research, legal and policy
- They have a responsible AI standard based on OECD principles
Topics: Responsible AI, Microsoft, Technology
Private companies and industry need to participate actively in AI governance discussions
Supporting facts:
- Microsoft has founded the Frontier Model Forum with other leading AI labs
- This forum focuses on developing technical best practices for frontier models
- They are also supportive of global efforts like the ones at UN and OECD
Topics: AI Governance, Microsoft, Industry responsibility
Regulations are necessary to manage the use and development of AI
Supporting facts:
- Microsoft believes that new rules are needed for this new technology
- They are actively involved in sharing insights and experiences to help shape regulations
- They also aim to help build capacity for governments and industry regulators
Topics: AI Regulation, Microsoft, Policy making
Need safeguards at both model and application levels of AI development
Supporting facts:
- The model should be developed in a responsible way
- Requirements are also needed when integrating the model into an application
- Mitigations can be removed or circumvented if the application level lacks safeguards
Topics: AI development, AI application, AI model, Technical safeguards
Report
Microsoft has made significant efforts to prioritise the responsible use of AI technology. They have dedicated six years to building their responsible AI programme, which involves a team of over 350 experts in various fields including engineering, research, legal, and policy.
Their responsible AI standard is based on the principles outlined by the Organisation for Economic Co-operation and Development (OECD), emphasising the importance of ethical AI practices. In addition to their internal initiatives, Microsoft recognises the need for active participation from private companies and the industry in AI governance discussions.
To foster collaboration and best practices, they have founded the Frontier Model Forum, which brings together leading AI labs. This forum focuses on developing technical guidelines and standards for frontier models. Microsoft also supports global efforts, such as those taking place at the United Nations (UN) and the OECD, to ensure that AI technology is governed responsibly.
Another crucial aspect highlighted by Microsoft is the importance of regulations to effectively manage the use and development of AI. They actively share their insights and experiences to help shape regulations that address the unique challenges posed by AI technology.
Furthermore, Microsoft aims to build capacity for governments and industry regulators, enabling them to navigate the complex landscape of AI and ensure the adoption of responsible practices. Microsoft emphasises the need for safeguards at both the model and application levels of AI development.
The responsible development of AI models includes taking ethical considerations into account and ensuring that the model meets the necessary requirements. However, Microsoft acknowledges that even when a model is developed responsibly, risk remains if the application level lacks proper safeguards.
Therefore, they stress the importance of incorporating safeguards throughout the entire AI development process. Microsoft also supports global governance for AI and advocates for a representative process in developing standards. They believe that a global governance regime should aim for a framework that includes standard setting, consensus on risk assessment and mitigation, and infrastructure building.
Microsoft cites the International Civil Aviation Organization and the Intergovernmental Panel on Climate Change as potential models for governance, highlighting the importance of collaborative and inclusive approaches to effectively govern AI. In conclusion, Microsoft takes the responsible use of AI technology seriously, as evident through their comprehensive responsible AI programme, active participation in AI governance discussions, support for global initiatives, and commitment to shaping regulations.
They emphasise the need for safeguards at both the model and application levels of AI development and advocate for global governance that is representative, consensus-driven, and infrastructure-focused. Through their efforts, Microsoft aims to ensure that AI technology is harnessed responsibly and ethically, promoting positive societal impact.
PS
Prateek Sibal
Speech speed
164 words per minute
Speech length
1500 words
Speech time
547 secs
Arguments
UNESCO has developed a recommendation on the ethics of AI through a multi-stakeholder process.
Supporting facts:
- The recommendation was developed over a period of two years.
- A group of 24 experts from around the world prepared the first draft.
- The document went through about 200 hours of intergovernmental negotiations.
- The recommendation was adopted in 2021 by 193 countries.
Topics: Ethics, AI, Multi-stakeholder approach, Policy Framework
The recommendation provides necessary guidelines for developers and users on ethical aspects of AI technology.
Supporting facts:
- The recommendation includes values of human rights, leaving no one behind, and sustainability.
- Emphasises principles such as transparency and explainability for developers and users.
Topics: AI, Ethics, Policy Guidelines, User Interface
UNESCO is working on implementing the recommendation through various tools, forums, and capacity building initiatives.
Supporting facts:
- UNESCO has developed a readiness assessment methodology to gauge a country's AI development state.
- UNESCO conducts a Global Forum on the Ethics of AI.
- The organization also provides an ethical impact assessment tool for governments and companies procuring AI systems.
Topics: AI, Implementation, Policy Framework, Capacity Building
Prateek Sibal acknowledges the challenges faced by different parties, particularly SMEs, civil society and academia, in engaging with artificial intelligence due to constraints such as knowledge levels, financial costs and time commitments.
Supporting facts:
- UNESCO has been working on creating multi-stakeholder governance of AI
- UNESCO has launched several knowledge products to facilitate the process, including a comic book on AI to facilitate understanding
- UNESCO has supported people financially and compensated them for their time in consultation processes
Topics: Artificial Intelligence, Capacity Building, Multi-stakeholder Conversations
Report
UNESCO has developed a comprehensive recommendation on the ethics of AI through a rigorous multi-stakeholder process. Over two years, 24 global experts collaborated on the initial draft, which then underwent around 200 hours of intergovernmental negotiations. In 2021, all 193 member countries adopted the recommendation, reflecting a global consensus on addressing the ethical concerns surrounding AI.
The recommendation enshrines the values of human rights, inclusivity, and sustainability, serving as a guide for developers and users. It emphasises transparency and explainability, ensuring that AI systems are clear and understandable. UNESCO is implementing the recommendation through various tools, forums, and initiatives.
These include a readiness assessment methodology, a Global Forum on the Ethics of AI, and an ethical impact assessment tool for governments and companies procuring AI systems. The broader role of AI in society is also acknowledged: a robot assisting teachers, for example, could influence learning norms.
Ethical viewpoints are therefore crucial to align AI with societal expectations. Prateek Sibal advocates for inclusive multi-stakeholder conversations around AI, emphasising awareness, accessibility, and sensitivity towards different sectors. He suggests financial compensation to facilitate civil society engagement. In conclusion, UNESCO's recommendation on AI ethics provides valuable guidelines for responsible AI development.
Their commitment to implementation and inclusive dialogue strengthens the global effort to navigate ethical challenges presented by AI.
SC
Seth Center
Speech speed
146 words per minute
Speech length
975 words
Speech time
401 secs
Arguments
AI regulation needs to keep up with the speed of technological advancements.
Supporting facts:
- Speed and regulation are in natural tension, which is particularly evident in the case of AI.
- Perceived inadequacy of government responses in keeping pace with fast-progressing AI.
Topics: AI governance, AI regulation, Technological governance
It is important to have risk frameworks in place for AI.
Supporting facts:
- The US has two foundational documents: AI Bill of Rights and Risk Management Framework.
- 240 organizations have contributed to the formulation of Risk Management Framework.
- This framework is multi-stakeholder and covers the entire AI lifecycle.
Topics: AI governance, Risk Management, AI Bill of Rights
Technologies like foundation models have initiated a new era of AI.
Supporting facts:
- The technological landscape changed drastically with the emergence of foundation models.
- Leading companies that were developing large language models are located in the US.
Topics: Foundation Models, AI governance, AI Regulation
Report
AI regulation needs to keep up with the rapid pace of technological advancements, as there is a perceived inadequacy of government response in this area. The tension between speed and regulation is particularly evident in the case of AI. The argument put forth is that regulations should be able to adapt and respond quickly to the ever-changing landscape of AI technology.
On the other hand, it is important to have risk frameworks in place to address the potential risks associated with AI. The United States has taken steps in this direction by introducing the AI Bill of Rights and Risk Management Framework.
The Risk Management Framework in particular was formulated with contributions from 240 organisations and covers the entire AI lifecycle. This multi-stakeholder approach ensures that various perspectives are considered in managing the risks posed by AI. Technological advancements, such as the development of foundation models, have ushered in a new era of AI.
Leading companies in this field are primarily located in the United States. This highlights the influence and impact of US-based companies on the development and deployment of AI technologies worldwide. The emergence of foundation models has disrupted the technology landscape, showcasing the potential and capabilities of AI systems.
To address the challenges associated with rapid AI evolution, the implementation of voluntary commitments has been proposed. The White House has devised a framework called 'Voluntary Commitments' to enhance responsible management of AI systems. This framework includes elements such as red teaming, information sharing, basic cybersecurity measures, public transparency, and disclosure.
Its objective is to build trust and security amidst the fast-paced evolution of AI technologies. In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology. The perceived inadequacy of government response highlights the need for agile and adaptive regulations.
Additionally, risk frameworks, such as the AI Bill of Rights and Risk Management Framework, are important in managing the potential risks associated with AI. The emergence of technologies like foundation models has brought about a new era of AI, with leading companies based in the US driving innovation in this field.
The implementation of voluntary commitments, as proposed by the White House, aims to foster trust and security in the ever-evolving landscape of AI technologies.
SA
Suzanne Akkabaoui
Speech speed
124 words per minute
Speech length
775 words
Speech time
374 secs
Arguments
Egypt has a national AI strategy that aims to create an AI industry.
Supporting facts:
- Strategy built on four pillars: AI for government, AI for development, capacity building and international relations.
- Promotion of effective partnership between the government and the private sector.
Topics: AI industry, National AI strategy
AI should enhance human labor and not replace it.
Supporting facts:
- A thorough impact assessment is required for each AI product.
- Focus on whether AI is the best solution and the expected social and economic impacts.
Topics: AI and employment, Social impact of AI
Issued a charter for responsible AI divided into general guidelines and implementation guidelines.
Supporting facts:
- The general guidelines outline the use of AI to promote citizens' well-being and to combat poverty, hunger, and inequality.
- Implementation guidelines addressed AI robustness, security, safety throughout entire lifecycle.
Topics: AI policy, AI governance
Understanding cultural differences and bridging cultural and sociological gaps are crucial to technological advancement.
Supporting facts:
- Cultural gaps accompany technological advances.
Topics: Culture in AI, Social responsibility in AI
Report
Egypt has developed a comprehensive national AI strategy aimed at fostering the growth of an AI industry. The strategy is based on four pillars: AI for government, AI for development, capacity building, and international relations. It focuses on leveraging AI technologies to drive innovation, improve governance, and address societal challenges.
Under the AI for government pillar, Egypt aims to enhance the effectiveness of public administration by adopting AI technologies. This includes streamlining administrative processes, improving decision-making, and delivering efficient government services. The AI for development pillar highlights Egypt's commitment to utilizing AI as a catalyst for economic growth and social development.
The strategy focuses on promoting AI-driven innovation and entrepreneurship, and on tackling critical issues such as poverty, hunger, and inequality. Capacity building is prioritised in Egypt's AI strategy to develop a skilled workforce. The country invests in AI education, training programmes, research, and collaboration between academia, industry, and government.
International cooperation is emphasised as a means to exchange knowledge, share best practices, and establish standardised approaches to AI governance. Egypt actively participates in global discussions on AI policies and practices as a member of the OECD AI network. To ensure responsible AI deployment, Egypt has issued a charter that provides guidelines for promoting citizen well-being and aligning with national goals.
These guidelines address aspects such as robustness, security, safety, and social impact assessments. Egypt also recognizes the importance of understanding cultural differences and bridging gaps for technological advancements. The country aims to address cultural gaps and promote inclusivity, ensuring that the benefits of AI are accessible to all segments of society.
Overall, Egypt's AI strategy demonstrates a commitment to creating an AI industry and leveraging AI for governance, development, capacity building, and international cooperation. The strategy aims to foster responsible and inclusive progress through AI technologies.
TS
Thomas Schneider
Speech speed
173 words per minute
Speech length
1648 words
Speech time
570 secs
Arguments
AI should not be regulated as a tool, but in context of its application
Supporting facts:
- AI is compared to combustion engines in its disruptiveness: AI replaces cognitive human work with machines, much as the combustion engine displaced physical labour.
- Existing cultural, technical and legal norms that guided previous disruptive technologies can be adapted for AI
Topics: AI regulation, Contextual use, Human rights
Council of Europe is developing the first binding convention on AI and human rights
Supporting facts:
- The agreement is the first intergovernmental agreement that intends to commit states to live up to AI principles based on the norms of human rights, democracy and rule of law
- The convention represents a legal tool to create interoperable legal systems within countries
Topics: Council of Europe, AI Regulation, Human Rights
Need for agreement on fundamental values
Supporting facts:
- Reiterate and verify agreement on how to respect human dignity and how to innovate while respecting rights.
Topics: Global Governance, Value Alignment, Human Dignity
Requirement to tackle new challenges and clarify legal uncertainties
Supporting facts:
- Breaking down new elements and challenges, identifying legal uncertainties to be clarified
Topics: Legal Uncertainties, Global Challenges
Use of best tools and methods to solve problems
Supporting facts:
- Using a mix of tools to solve identified problems, some methods will be faster, others will be more sustainable
Topics: Problem Solving, Governance
Importance of stakeholder cooperation
Supporting facts:
- Need to continue and cooperate with all stakeholders in their respective roles
Topics: Stakeholder Engagement, Cooperation
Report
The topic of AI regulation is currently being discussed in the context of its application. The argument put forth is that instead of regulating AI as a tool, it should be regulated in consideration of how it is used. This means that regulations should be tailored to address the specific applications and potential risks of AI, rather than imposing blanket regulations on all AI technologies.
It is believed that this approach allows for more nuanced and effective regulation of AI. Voluntary commitments in AI regulation are seen as effective, provided that the right incentives are in place. Rather than enforcing compulsory regulations, which can often prove complicated and unworkable, voluntary agreements can be more successful.
By providing incentives for AI developers and users to adhere to certain standards and guidelines, it is believed that a more cooperative and collaborative approach can be fostered, ensuring responsible and ethical use of AI technology. The Council of Europe is currently developing the first binding convention on AI and human rights, which is seen as a significant step forward.
This intergovernmental agreement aims to commit states to uphold AI principles based on the norms of human rights, democracy, and the rule of law. The convention is not only intended to ensure the protection of fundamental rights in the context of AI, but also to create interoperable legal systems within countries.
This represents a significant development in the global governance of AI and the protection of human rights. The need for agreement on fundamental values is also highlighted in the discussions on AI regulation. It is essential to have a consensus on how to respect human dignity and ensure that technological advancements are made while upholding and respecting human rights.
This ensures that AI development and deployment align with society's values and principles. Addressing legal uncertainties and tackling new challenges is another important aspect of AI regulation. As AI technologies continue to evolve, it is necessary to identify legal uncertainties and clarify them to ensure a clear and coherent regulatory framework.
Breaking down new elements and challenges associated with AI is crucial to ensuring that regulations are effective and comprehensive. In solving the problems related to AI regulation, it is emphasized that using the best tools and methods is essential. Different problems may require different approaches, with some methods being faster but less sustainable, while others may be more sustainable but take longer to implement.
By utilizing a mix of tools and methods, it is possible to effectively address identified issues. Stakeholder cooperation is also considered to be of utmost importance in the realm of AI regulation. All stakeholders, including governments, businesses, researchers, and civil society, need to continue to engage and cooperate with each other, leveraging their respective roles.
This collaboration ensures that diverse perspectives and expertise are taken into account when formulating regulations, thereby increasing the chances of effective and balanced AI regulation. However, there is also opposition to the creation of burdensome bureaucracy in the process of regulating AI.
Efforts should be made to clarify issues and address challenges without adding unnecessary administrative complexity. It is crucial to strike a balance between ensuring responsible and ethical use of AI technology and avoiding excessive burdens on AI developers and users.
In conclusion, the discussions on AI regulation centre on the need to regulate AI in the context of its application, rather than treating it as a tool in isolation. Voluntary commitments can be effective when the right incentives are in place, while the binding convention under development at the Council of Europe offers a complementary legal instrument.
Agreement on fundamental values, addressing legal uncertainties, and stakeholder cooperation are crucial aspects of AI regulation. It is important to strike a balance between effective regulation and avoiding burdensome bureaucracy.