Searching for Standards: The Global Competition to Govern AI | IGF 2023
Event report
Speakers and Moderators
Speakers:
- Kyoko Yoshinaga, Civil Society, Asia-Pacific Group
- Tomiwa Ilori, Civil Society, African Group
- Simon Chesterman, Government, Asia-Pacific Group
- Carlos Affonso Souza, Civil Society, Latin American and Caribbean Group (GRULAC)
- Gabriela Ramos, Intergovernmental Organization
- Courtney Radsch, Civil Society, Western European and Others Group (WEOG)
Moderators:
- Michael Karanicolas, Civil Society, Western European and Others Group (WEOG)
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Session report
Full session report
Michael Karanicolas
During a session on AI governance organized by the UCLA Institute for Technology, Law and Policy (a collaboration between UCLA's School of Law and School of Engineering), together with the Yale Information Society Project and the Georgetown Institute for Technology Law and Policy, Michael Karanicolas hosted a discussion on the development of new regulatory trends around the world. The focus was on major regulatory blocs such as China, the US, and the EU, and their influence on AI development globally.
The session aimed to explore the tension between the rule-making within these major regulatory blocs and the impacts of AI outside of this privileged minority. It recognized their dominant position and sought to understand their global influence in shaping AI governance. The discussion highlighted the need to recognize the power dynamics at play and to ensure that regulatory decisions made within these blocs do not ignore the wider issues and potential negative ramifications for AI development on a global scale.
Michael Karanicolas encouraged interactive participation from the audience, inviting comments and engagement from all present. He stressed the importance of active participation over passive listening, fostering an environment that encouraged inclusive and thoughtful discussions.
The speakers also delved into the globalised nature of AI and the challenges national governments face in regulating it. Because AI consists of data resources, software programs, networks, and computing devices, it operates within globalised markets. The internet has enabled the rapid distribution of applications and data resources, making it difficult for national governments to control and regulate the development of AI effectively. The session emphasised that national governments alone cannot solve the challenges of regulating AI, calling for partnerships and collaborative efforts to address the global nature of AI governance.
Another topic of discussion revolved around the enforcement of intellectual property (IP) rights and privacy rights in the online world. It was noted that the enforcement of IP rights online is significantly stronger compared to the enforcement of privacy rights. This discrepancy is seen as a result of the early prioritisation of addressing harms related to IP infringement, while privacy rights were not given the same level of attention in regulatory efforts. The session highlighted the need to be deliberate and careful in selecting how harms are understood and prioritised in current regulatory efforts to ensure a balance between different aspects of AI governance.
Engagement, mutual learning, and sharing of best practices were seen as crucial in the field of AI regulation. The session emphasised the benefits of these collaborative approaches, which enable regulators to stay updated on the latest developments and challenges in AI governance. It also emphasised the importance of factoring local contexts into regulatory processes. A one-size-fits-all approach, where countries simply adopt an EU or American model without considering their unique circumstances, was deemed problematic. It was concluded that for effective AI regulation, it is essential to develop regulatory structures that fit the purpose and are sensitive to the local context.
In conclusion, the session on AI governance hosted by Michael Karanicolas shed light on the influence of major regulatory blocs on AI development globally. It emphasised the need for inclusive and participatory approaches in AI governance and highlighted the challenges national governments face in regulating AI. The session also underscored the need for a balanced approach that prioritises different aspects of AI governance, including intellectual property rights and privacy rights. The importance of engagement, mutual learning, and the consideration of local contexts in regulatory processes was also highlighted.
Tomiwa Ilori
AI governance in Africa is still in its infancy: at least 466 policy and governance items make direct reference to AI in the African region, yet there is currently no major treaty, law, or standard specifically addressing AI governance in Africa. Despite this, some African countries have already taken steps to develop their own national AI policies. For instance, Mauritius, Kenya, and Egypt have established their own AI policies, indicating the growing interest in AI governance among African nations.
Interest in AI governance is not limited to governments alone. Various stakeholders in Africa, including multilateral organizations, publicly funded research institutions, academia, and the private sector, are increasingly recognizing the importance of AI governance. This indicates a collective recognition of the need to regulate and guide the development and use of artificial intelligence within the region. In fact, the Kenyan government has expressed its intention to pass a law aimed at regulating AI systems, further demonstrating the commitment towards responsible AI governance in Africa.
However, the region often relies on importing standards rather than actively participating in their design and development. This leaves African nations more susceptible to becoming pawns or testing grounds for potentially inadequate AI governance attempts, underscoring the need for African nations to actively engage in shaping AI standards rather than merely adapting to standards set by external entities.
On a positive note, smaller nations in Africa have the potential to make a significant impact by strategically collaborating with like-minded initiatives. International politics often stifle the boldness of smaller nations, but when it comes to AI governance, smaller nations can leverage partnerships and collaborations to amplify their voices and push for responsible AI practices. By working together with others who share similar goals and intended results, the journey towards achieving effective AI governance in Africa could be expedited.
In conclusion, AI governance in Africa is still in its early stages, but the interest and efforts to establish responsible AI policies and regulations are steadily growing. While there is currently no major treaty or law specifically addressing AI governance in Africa, countries like Mauritius, Kenya, and Egypt have already taken steps to develop their own national AI policies. Moreover, various stakeholders, including governments, multilateral organizations, academia, and the private sector, are recognizing the significance of AI governance in Africa. Despite the challenges that smaller nations in Africa may face, strategic collaborations and partnerships can empower them to actively shape the future of AI governance in the region.
Carlos Affonso Souza
In Latin America, several countries, including Argentina, Brazil, Colombia, Peru, and Mexico, are actively engaging in discussions and actions related to the governance and regulation of Artificial Intelligence (AI). This reflects a growing recognition of the need to address the ethical implications and potential risks associated with AI technology. The process of implementing AI regulation typically involves three stages: the establishment of broad ethical principles, the development of national strategies, and the enactment of hard laws.
However, different countries in Latin America are at varying stages of this regulatory process, which is influenced by their unique priorities, approaches, and long-term visions. Each country has its specific perspective on how AI will drive economic, political, and cultural changes within society. Accordingly, they are implementing national strategies and specific regulations through diverse mechanisms.
One of the challenges in regulating AI in the majority world lies in the nature of the technology itself. AI can often be invisible and intangible, making it difficult to grasp and regulate effectively. This intangibility also amplifies the influence of regulatory models imported from countries that have already legislated, underscoring the need for countries in the majority world to develop their own regulations and governance frameworks for AI.
Moreover, these countries primarily serve as users of AI applications rather than developers, making it even more crucial to establish regulations that address not only the creation but also the use of AI applications. This highlights the importance of ensuring that AI technologies are used ethically and responsibly, considering the potential impact on individuals and society.
Drawing from the experience of internet regulation, which has dealt with issues such as copyright, freedom of expression, and personal data protection, can provide valuable insights when considering AI regulation. The development of personal data protection laws and decisions on platform liability are also likely to significantly influence the shape of AI regulation.
Understanding the different types of AI and the nature of the damages they can cause is essential for effective regulation. It is argued that AI should not be viewed as either fully autonomous or merely dumb, but rather as a tool that can both cause harm and generate profit. Algorithmic decisions are not made autonomously or unknowingly; rather, they reflect biases in design or fulfil their intended functions.
Countries’ motivations for regulating AI vary. Some view it as a status symbol of being future-oriented, while others believe it is important to learn from regulation efforts abroad and develop innovative solutions tailored to their own contexts. There is a tendency to adopt European solutions for AI regulation, even if they may not function optimally. This adoption is driven by the desire to demonstrate that efforts are being made towards regulating AI.
In conclusion, Latin American countries are actively engaging in discussions and actions to regulate AI, recognizing the need to address its ethical implications and potential risks. The implementation of AI regulation involves multiple stages, and countries are at different phases of this process. Challenges arise due to the intangible nature of AI, which requires countries to create their own regulations. The use of AI applications, as well as the type and nature of damages caused by AI, are important considerations for regulation. The experience of internet regulation can provide useful insights for AI regulation. The motivations for regulating AI vary among countries, and there is a tendency to adopt European solutions. Despite the shortcomings of these solutions, countries still adopt them to show progress in AI regulation.
Irakli Khodeli
The UNESCO recommendation on AI ethics has become a critical guide for global AI governance. It was adopted two years ago by 193 member states, demonstrating its widespread acceptance and importance. The principles put forward by UNESCO are firmly rooted in fundamental values such as human rights, human dignity, diversity, environmental sustainability, and peaceful societies. These principles aim to provide a solid ethical foundation for the development and deployment of AI technologies.
To ensure the practical application of these principles, UNESCO has operationalized them into 11 different policy areas. This highlights the organization’s commitment to bridging the gap between theoretical principles and practical implementation. By providing specific policy action areas, UNESCO offers concrete guidance for governments and other stakeholders to incorporate AI ethics into their decision-making processes.
One of the key arguments put forth by UNESCO is that AI governance should be grounded in gender equality and environmental sustainability. The organization believes that these two aspects are often overlooked in global discussions on AI ethics and governance. By highlighting the need to disassociate gender discussions from general discrimination discussions and emphasising environmental sustainability, UNESCO aims to bring attention to these crucial issues.
Furthermore, UNESCO emphasises the significant risks posed by AI, ranging from the benign to the catastrophic, and from unintended to deliberate harms. The organization argues that these risks are closely intertwined with the pillars of the United Nations, such as sustainable development, human rights, gender equality, and peace. Therefore, global governance of AI is deemed critical to avoid jeopardizing other multilateral priorities.
While global governance is essential, UNESCO also recognises the significant role of national governments in AI governance. Successful regulation and implementation of AI policies ultimately occur at the national level. It is the responsibility of national governments to establish the necessary institutions and laws to govern AI technologies effectively. This highlights the importance of collaboration between national governments and international organisations like UNESCO.
In terms of regulation, it is evident that successful regulation of any technology, including AI, requires a multi-layered approach. Regulatory frameworks must exist at different levels – global, regional, national, and even sub-national – to ensure comprehensive and effective governance. The ongoing conversation at the United Nations revolves around determining the appropriate regulatory mechanisms for AI. Regional organisations such as the European Union, African Union, and ASEAN already play significant roles in AI regulation. Meanwhile, countries themselves are indispensable in enforcing regulatory mechanisms at the national level.
To achieve coordination and compatibility between different layers of regulation, various stakeholders, including the UN, European Union, African Union, OECD, and ASEAN, are mentioned as necessary participants. The creation of a global governance mechanism is advocated to ensure interoperability and coordination among different levels of regulation, ultimately facilitating effective AI governance on a global scale.
Additionally, bioethics is highlighted as a concrete example of how a multi-level governance model can function successfully. UNESCO’s Universal Declaration on Bioethics and Human Rights, along with the Council of Europe’s Oviedo Convention, serve as global and regional governance examples, respectively. These principles are then translated into binding regulations at the country level, further supporting the notion that a multi-level approach can be effective in governing complex issues like AI ethics.
In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding AI ethics in fundamental values, providing specific policy contexts, and emphasising the importance of gender equality and environmental sustainability, UNESCO aims to ensure that AI technologies are developed and deployed responsibly. This requires collaboration between international organisations, national governments, and other stakeholders to establish regulatory frameworks at different levels. Ultimately, a global governance mechanism is advocated to coordinate and ensure compatibility between these levels of regulation.
Kyoko Yoshinaga
Japan takes a soft law approach to AI governance, using non-binding international frameworks and principles for AI R&D. These soft laws guide Japanese companies in developing their own AI policies, ensuring flexibility and adaptation. Additionally, Japan amends sector-specific hard laws to enhance transparency and fairness in the AI industry. Companies like Sony and Fujitsu have already developed AI policies, focusing on responsible AI as part of corporate social responsibility and ESG practices. Publicly accessible AI policies are encouraged to promote transparency and accountability. Japan also draws on existing frameworks, such as the Information Security Governance Policy Framework, to establish robust AI governance. Each government should tailor AI regulations to its own context, considering factors like corporate culture and technology level. Because AI risks vary widely in nature, blanket hard laws targeting them may prove counterproductive, while personal data protection laws are essential for addressing privacy concerns with AI.
Simon Chesterman
Simon Chesterman’s intervention raised several key points regarding AI regulation and governance. First, jurisdictions are wary of both over-regulating and under-regulating AI. Over-regulation, especially in smaller jurisdictions like Singapore, might cause tech companies to take their innovation elsewhere; under-regulation may expose citizens to unforeseen risks. This underscores the need to find the right balance in AI regulation.
Second, he argued that a new set of rules is not necessary to regulate AI: existing laws are capable of effectively governing most AI use cases most of the time. The real challenge lies in applying these existing rules to new and emerging use cases of AI.
Third, Singapore’s approach to AI governance focuses on human-centricity and transparency. Rather than creating new laws, Singapore has made adjustments to existing ones to accommodate AI, such as amending the Road Traffic Act to allow for the use of autonomous vehicles. This approach reflects Singapore’s commitment to ensuring human-centricity and transparency in AI governance.
Additionally, it was noted that the requirement that AI not be biased is already covered by anti-discrimination laws: discrimination should be illegal whether done by a person, a company, or a machine.
Chesterman also emphasised the role of self-regulation. Singapore has released a tool called AI Verify, which assists organisations in assessing whether their AI systems meet the standards they espouse and whether further improvements are needed. This self-regulation approach is viewed positively, highlighting the responsibility of companies in ensuring ethical and compliant AI practices.
Furthermore, smaller jurisdictions face particular challenges in AI regulation, including deciding when and how to regulate and addressing the concentration of power in private hands. These issues reflect the delicate balance that smaller jurisdictions must navigate to regulate AI effectively.
The influence of Western technology companies on AI regulation is another notable observation. The principles now guiding AI regulation can be traced back to these companies, and public awareness of and concern about the risks of AI were triggered by events like the Cambridge Analytica scandal.
Regulatory sandboxes, particularly in the fintech sector, are highlighted as a useful technique for fostering innovation. The Monetary Authority of Singapore has utilized regulatory sandboxes to reduce risks and enable testing of new use cases for AI in the fintech sector.
In terms of balancing regulation and innovation, a careful approach is needed. The Personal Data Protection Act in Singapore aims to strike a balance between users’ rights and the needs of businesses, underscoring the importance of avoiding excessive regulation that may drive innovation elsewhere.
Furthermore, responsibility for the output generated by AI systems was emphasised: accountability must be taken for the outcomes and impact of AI systems.
In conclusion, the discussion highlighted various aspects of AI regulation and governance: the need to strike a balance between over-regulation and under-regulation, the effectiveness of existing laws in governing AI, and the importance of human-centricity and transparency in AI governance. It also noted that smaller jurisdictions face particular challenges, and that the influence of Western technology companies is evident. Regulatory sandboxes are seen as a useful tool, and responsibility for the output of AI systems is emphasised. Overall, the analysis provides valuable insight into the complex landscape of AI regulation and governance.
Audience
During the discussion on regulating artificial intelligence (AI), several key challenges and considerations were brought forward. One of the main challenges highlighted was the need to strike a balance in regulating generative AI, which has caused disruptive effects. This task proves to be challenging due to the complex nature of generative AI and its potential impact on multiple sectors. It was noted that the national AI policy of Pakistan, for example, is still in the draft stage and is open for input from various stakeholders.
Another crucial consideration is the measurement of risks associated with AI usage. The speaker from the Australian National Science Agency emphasized the importance of assessing the risks and trade-offs involved in AI applications. There was a call for an international research alliance to explore how to effectively measure these risks. This approach aims to guide policymakers and regulators in making informed decisions about the use of AI.
The discussion also explored the need for context-based trade-offs in AI usage. One example was face recognition for blind people: blind individuals want the same ability to recognise faces as sighted individuals, yet legislation aimed at the risks of face recognition prevents this feature from being offered to them. This highlights the need to carefully consider the trade-offs and context-specific implications of AI applications.
The global nature of AI was another topic of concern. It was pointed out that AI applications and data can easily be distributed globally through the internet, making it difficult for national governments alone to regulate AI effectively. This observation indicates the necessity of international collaboration and partnerships in regulating AI in order to mitigate any potential risks and ensure responsible use.
The impact of jurisdiction size on regulation was also discussed. The example of Singapore’s small jurisdiction size potentially driving businesses away due to regulations was mentioned. However, it was suggested that Singapore’s successful publicly-owned companies could serve as testing grounds for regulation implementation. This would allow for experimentation and learning about what works and what consequences may arise.
Data governance and standard-setting bodies were also acknowledged as influential in AI regulation. Trade associations and private sector standard-setting bodies were highlighted for their significant role. However, it was noted that these structures can sometimes work at cross-purposes and compete, potentially creating conflicts. This calls for a careful consideration of the interaction between different bodies involved in norm-setting processes.
The issue of data granularity in the global South was raised, highlighting a potential risk for AI. It was noted that the global South might not have the same fine granularity of data available as the global North, which may lead to risks in the application of AI. This disparity emphasizes the need to address power dynamics between the global North and South to ensure a fair and equitable AI practice.
Several arguments were made regarding the role of the private sector in AI regulation and standard-setting. The host called for private sector participation in the discussion, recognising the importance of its involvement. However, concerns were expressed about potential discrimination in AI systems that learn from massive datasets. The shift from hand-crafted algorithms in the past to learning from massive data today raises concerns about potential biases and discrimination against groups that do not produce much data for AI to learn from.
The speakers also emphasized the importance of multi-stakeholder engagement in regulation and standard-setting. Meaningful multi-stakeholder processes were deemed necessary for crafting effective standards and regulations for AI. This approach promotes inclusivity and ensures that various perspectives and interests are considered.
Current models of AI regulation were criticized for being inadequate, with companies sorting themselves into risk levels without comprehensive assessment. Such models were seen as box-ticking exercises rather than effective regulation measures. This critique underscores the need for improved risk assessment approaches that take into account the nuanced and evolving nature of AI technologies.
A rights-based approach focused on property rights was argued to be crucial in AI regulation. New technologies, such as AI, have created new forms of property, raising discussions around ownership and control of data. Strict definitions of digital property rights were cautioned against, as they might stifle innovation. Striking a balance between protecting property rights and fostering a dynamic AI ecosystem is essential.
The importance of understanding and measuring the impact of AI within different contexts was highlighted. The need to define ways to measure AI compliance, performance, and trust in AI systems was emphasized. It was suggested that pre-normative standards could provide a helpful framework but acknowledged the lengthy time frame required for their development and establishment as standards.
Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of resources, case studies, and knowledge. The mutual benefit between academia and industry in research and development efforts was acknowledged, emphasizing the significance of partnerships for effective regulation and innovation.
In conclusion, the discussion on regulating AI delved into various challenges and considerations. Striking a balance in the regulation of generative AI, measuring risks associated with AI usage, addressing context-specific trade-offs, and promoting multi-stakeholder engagement were key points raised. The impact of data granularity, power dynamics, and the role of the private sector were also highlighted. Observations were made regarding the inadequacy of current AI regulation models, the need for a rights-based approach focused on property rights, and the importance of understanding and measuring the impact of AI within different contexts. Collaboration with industry was emphasized as crucial, and various arguments and evidence were presented throughout the discussion to support these points.
Courtney Radsch
In the United States, there is a strong focus on developing frameworks for the governance and regulation of artificial intelligence (AI). The White House Office of Science and Technology Policy is taking steps to create a blueprint for an AI Bill of Rights, which aims to establish guidelines and protections for the responsible use of AI. The National AI Commission Act is another initiative that seeks to promote responsible AI regulation across various government agencies.
Furthermore, several states in the US have already implemented AI legislation to address the growing impact of AI in various sectors. This reflects a recognition of the need to regulate and govern AI technologies to ensure ethical and responsible practices.
However, some argue that the current AI governance efforts are not adequately addressing the issue of market power held by a small number of tech giants, namely Meta (formerly Facebook), Google, and Amazon. These companies dominate the AI foundation models and utilize aggressive tactics to acquire and control independent AI firms. This dominance extends to key cloud computing platforms, leading to self-preference of their own AI models. Critics believe that the current market structure needs to be reshaped to eliminate anti-competitive practices and foster a more balanced and competitive environment.
Another important aspect highlighted in the discussion is the need for AI governance to address the individual components of AI. This includes factors like data, computational power, software applications, and cloud computing. Current debates on AI governance mostly focus on preventing harm and exploitation, but fail to consider these integral parts of AI systems.
The technical standards set by tech communities also come under scrutiny. While standards like HTTP, HTTPS, and robots.txt have been established, concerns have been raised regarding the accumulation of rights-protected data by big tech companies without appropriate compensation. These practices have significant political and economic implications, impacting other industries and limiting the overall fairness of the system. It is argued that more diverse representation in the tech community is needed to neutralise big tech’s unfair data advantage.
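For readers unfamiliar with the last of these standards, the following is a minimal sketch (not from the session) of how robots.txt expresses crawler permissions, using Python's standard-library parser; the crawler names are hypothetical placeholders. Because compliance is purely voluntary, the standard cannot by itself stop the accumulation of rights-protected training data, which is the concern raised above.

```python
# Minimal illustration of the robots.txt convention using Python's
# standard library. The user-agent names below are hypothetical examples.
from urllib import robotparser

# A site that asks one (hypothetical) AI-training crawler to stay out
# while allowing all other agents.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The excluded crawler is asked not to fetch any page...
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/1"))  # False
# ...while other agents remain welcome.
print(parser.can_fetch("SomeOtherBot", "https://example.com/articles/1"))  # True
```

Note that `can_fetch` only reports what the file requests; nothing in the protocol enforces it, so a crawler that ignores the file faces no technical barrier.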
The notion of unfettered innovation is challenged, as some argue that it may not necessarily lead to positive outcomes. The regulation of AI should encompass a broader set of policy interventions that prioritize the public interest. A risk-based approach to regulation is deemed insufficient to address the complex issues associated with AI.
The importance of data is emphasized, highlighting that it extends beyond individual user data, encompassing environmental and sensor data as well. The control over and exploitation of such valuable data by larger firms requires careful consideration and regulation.
A notable challenge highlighted is the lack of independent oversight of powerful companies; researchers, particularly those outside the EU, are often underfunded for this work. This raises concerns that risky research findings may be suppressed or buried by companies conducting their own risk assessments, and suggests the need for independent oversight and accountability mechanisms to ensure that substantial risks associated with AI are properly addressed.
In conclusion, the governance and regulation of AI in the United States are gaining momentum, with initiatives such as the development of an AI Bill of Rights and state-level legislation. However, there are concerns regarding the market power of tech giants, the need to focus on individual components of AI, the political and economic implications of technical standards, the lack of diversity in the tech community, and the challenges of overseeing powerful companies. These issues highlight the complexity of developing effective AI governance frameworks that strike a balance between promoting innovation, protecting the public interest, and ensuring responsible and ethical AI practices.
Session transcript
Michael Karanicolas:
Hi, Simon. How are you? Can you hear us? I can indeed. Great to see you also. And we can hear you. You can hear me? Just give me a thumbs up if you can. Can you hear? I can hear you. I’m not sure if I’m coming through on your side. Welcome. So just off the top, I want to invite folks that are sitting in the back to come join us at the table. We want this to be as interactive as possible. So please don’t be shy. It’s OK if you’re doing your emails. We won’t judge. We just want people to be participating in the conversation as opposed to, you know, 75 or 90 minutes of us talking at you. Welcome to today’s session, Searching for Standards: The Global Competition to Govern AI. My name is Michael Karanicolas. I’m the executive director of the UCLA Institute for Technology, Law and Policy, which is a collaboration between the School of Law and the School of Engineering. And this session is co-organized with the Yale Information Society Project and the Georgetown Institute for Technology, Law and Policy. Our objective today is to foster a conversation on the development of new regulatory trends around the world, particularly through the influence of a few major regulatory blocs, particularly China, the U.S. and the EU, whose influence is increasingly being felt globally. And the tension between rulemaking within these centers of power and the impacts of AI as they’re being felt outside of this privileged minority. As part of that conversation, we have a fantastic set of panelists. We’re not going to be setting aside a specific time at the end for Q&A. Rather, we’re hoping to run this session more as an inclusive conversation. So what that means is that after an initial round of short three-minute interventions from each of our panelists, strictly policed three minutes, we’ll have a set of discussion questions. And for each of those discussion questions, after a couple of interventions from our panel, we’re going to be inviting interventions and comments from the rest of you to engage on these questions as well. So please, again, for those of you who are just joining us, come join us here at the table and participate. So without further ado, let’s kick things off with a set of short introductory comments from our panelists to discuss trends in AI governance related to their region and area of specialization. Out of deference to our wonderful host country, I’m going to start with Kyoko Yoshinaga, who is a project associate professor at the Graduate School of Media and Governance at Keio University and also an expert at GPAI’s Future of Work Working Group. Kyoko.
Kyoko Yoshinaga:
Thank you, Michael. Welcome to Japan. I’m Kyoko in Kyoto. Okay. So let me, first of all, give you a brief overview of AI regulations in Japan. Japan adopts a soft law approach to AI governance horizontally while revising some sector-specific laws. It’s not really known worldwide that Japan took the lead in introducing principles for AI research and development designed to guide related G7 and OECD discussions. In 2016, the then Internal Affairs Minister, Minister Takaichi, proposed eight AI R&D principles (transparency, controllability, safety, security, privacy, ethics, user assistance, and accountability) as a non-binding international framework, which was agreed by participating G7 and OECD countries. And it contributed to the OECD’s AI principles. Japan has the Social Principles of Human-Centric AI, which were developed by a Cabinet Office council as principles for implementing AI in an AI-ready society. There are seven principles to which society, especially state legislative and administrative bodies, should pay attention: human-centricity, education and literacy, privacy, ensuring security, fair competition, fairness, accountability and transparency, and innovation. Then we have the AI R&D Guidelines, made in 2017, which added collaboration to the eight AI R&D principles I mentioned earlier, for developers and business operators of AI. We also have the AI Utilization Guidelines, which consist of 10 principles to address the dangers associated with AI systems, and which were also developed by the Ministry of Internal Affairs and Communications, but for the developers, users, and data providers of AI. These user-perspective guidelines were made because AI may change its implications and output continuously by learning from data in the course of its use. Also, we have the Governance Guidelines for Implementation of AI Principles, issued by the Ministry of Economy, Trade and Industry, which guide how to analyze the risks associated with AI system implementations and offer some examples to help organizations adopt the suggested principles. So these non-regulatory, non-binding soft laws are used by prominent Japanese companies to develop their AI policies and communicate them to external parties. As for the sector-specific laws, I won’t go into detail right now, but Japan is amending sector-specific hard laws, such as the Act on Improving Transparency and Fairness of Digital Platforms and the Financial Instruments and Exchange Act, which require businesses to take appropriate measures and disclose information about risks. Also, for doctors, there is a notification from the ministry that the doctor bears the responsibility for the final decision in treatment that uses AI. So we have a soft law approach at the horizontal level, combined with some hard law elements through revisions to existing laws. Thank you.
Michael Karanicolas:
Let’s go next to Carlos Affonso Souza, the Director of the Institute for Technology and Society of Rio de Janeiro and a professor at Rio de Janeiro State University Law School.
Carlos Affonso Souza:
So thanks, Michael. It’s a pleasure to be here among friends to discuss this very important topic on how we think about regulation of AI in the region. So in this, my brief introduction, just to say a bit that it seems that even though AI national strategies do end up sharing a common language, of course different states will have different priorities, they will have different approaches, and they will have, of course, different long-term visions about how AI will end up producing relevant economic, political and cultural changes in society. And especially when we look at the region, and by region I think about Latin America, we see that different countries are looking at the issue of governance and regulation of AI. And we have, for that specific purpose, Argentina, Brazil, Colombia, Peru, Mexico, all being very active in this discussion. But one thing that I would like to pinpoint that we can see right now in the region, and I think that’s something that we might scale up to a discussion in different regions, is how we’re moving through almost like this three-step process, in which it all began with very broad ethical principles about AI, that ended up turning to a second phase in which different countries end up designing their different national strategies to think about AI, and now it seems like we are in this third phase in which different countries are actually regulating AI through hard law, through different mechanisms. And that’s, I think, one of the greatest moments for us to take a look at, especially because governance and regulation is itself a form of technology, and we need to understand how we are approaching those different topics concerning the future of AI and making sure that regulation and governance is appropriate to deal with the challenges that we’re facing right now, and at the same time come up with solutions that could be future-proof in terms of the challenges that we are going to face going forward. So I’ll just stop here in this very brief introduction, but just to provide this quick look at the region, seeing different countries going through these different stages of thinking about national strategies, regulation and governance tools for AI. So thanks, Michael.
Michael Karanicolas:
Perfect. Courtney Radsch is the director of the Center for Journalism and Liberty at the Open Markets Institute and a member of the IGF’s Multistakeholder Advisory Group.
Courtney Radsch:
Thank you so much. So in the United States, the focus right now is on creating frameworks for figuring out what governance of AI should look like and what regulation should look like. And I think one of the challenges is that we talk about AI as if it is a brand new thing, without actually thinking about its components and breaking down what exactly it is we mean by AI, including the infrastructure, data, cloud computing, computational power, as well as decision-making. So right now, a few of the major regulatory or standard-setting initiatives include the Blueprint for an AI Bill of Rights by the White House Office of Science and Technology Policy, which is mainly focused on risk management and mitigation. It includes a set of five principles and associated practices that are designed to help guide the design, use and deployment of automated systems. These are, like, automated decision-making systems, so again only one small component of AI, designed to protect the rights of the American public in this age through safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration and fallbacks. And it is intended to inform policy decisions and guide regulatory agencies and rulemaking, but it is non-binding. The OSTP is soliciting input to develop a comprehensive national AI strategy, and it is focused on promoting fairness and transparency in AI. Meanwhile, the National AI Commission Act, which is a proposal that would create a 20-member multi-stakeholder commission to explore AI regulation within the federal government itself, is focused on responsible AI, and specifically how the responsibility for regulation is distributed across agencies, their capacity to address regulatory challenges, alignment among enforcement actions, and a binding risk-based approach, much like the EU, I would say. So there is support for the creation of a new federal agency dedicated to regulating AI, which could include licensing activities for AI technology, although there are alternative views which think that some of this regulatory expertise should be embedded within each individual agency. There is also, at the federal level, the Safe Innovation Framework, which sets priorities for AI legislation, focusing on security, accountability, protecting foundations and explainability, as well as a proposed privacy bill, the American Data Privacy and Protection Act, which would set out rules for AI, including, again, risk assessment obligations. The federal agencies are providing guidance to regulated entities. So for example, the FTC is regulating deceptive and unfair practices attributed to AI, and agencies are increasingly using their antitrust authority, looking at whether they can break up some companies. I’d also just add that at least nine states have enacted AI legislation, with another 11 with proposed legislation. And we need to, I think, look at competition interventions as well, which is not yet part of the regulatory landscape, but is occurring, with some court cases happening alongside these regulatory standard-setting initiatives. Thank you.
Michael Karanicolas:
So we have three fantastic panelists in the Zoom as well. Let’s go first to Simon Chesterman, who is the David Marshall Professor and Vice Provost for Educational Innovation at the National University of Singapore, as well as the Senior Director of AI Governance at AI Singapore.
Simon Chesterman:
Thanks so much, and I’m sorry not to be there in person. But coming from the Singapore and Southeast Asian perspective, I think one of the challenges that every jurisdiction is facing is that we’re wary both of under-regulating and of over-regulating. Under-regulate and you expose your citizens to risk; over-regulate and, particularly for small jurisdictions, you risk driving innovation elsewhere. And so when the European Union adopts the AI Act, Meta might determine that it’s not going to roll out Threads, but it’s not going to withdraw from that market completely. If a very small jurisdiction like Singapore adopted something like that, it might lead some of the tech companies to opt out of that jurisdiction completely. So that’s one of the sort of baseline considerations that I think is operative here. A second consideration is that in these discussions, certainly over the last eight or so years, the tendency has been to try and come up with new sets of rules, very much like sort of Isaac Asimov’s laws of robotics, that will address this problem of AI. But as Courtney just said, AI is not that new, and indeed laws are not that new. And I think that kind of approach often misunderstands the problem as too hard and too easy. Too hard in that it assumes that you’ve got to come up with entirely new rules, whereas a lot of my own work has been essentially arguing that most laws can govern most AI use cases most of the time. But that approach also misunderstands the problem as being too easy, because I think it fails to understand that the real devil is in the application. It’s in the application of rules to new use cases. So in Singapore’s context, rather than coming up with a whole slew of new laws, we have had some tweaks. So for example, the Road Traffic Act had to be adjusted so that leaving a vehicle unattended wasn’t necessarily a crime, which would be a problem for autonomous vehicles and so on. But at the larger level, what we’ve really focused on is two things: human centricity and transparency. And the majority of the model AI governance framework that was adopted here back in 2019 is looking at use cases, what this actually means in practice. Because saying that AI shouldn’t be biased is merely repeating anti-discrimination laws. Discrimination should be illegal, whether it’s done by a person, a company, or by a machine. But applying that to particular use cases can be a challenge. So recently, Singapore released AI Verify, which is a tool which is intended to help companies police themselves, help organizations police themselves and determine whether or not they’re actually holding themselves up to the standards that they’ve been espousing and whether more work needs to be done. So I’m looking forward to a really interesting discussion, but I’ll hand the time back to the chair. Thank you very much.
Michael Karanicolas:
Thanks. Let’s go next to Tomiwa Ilori. Tomiwa is a postdoctoral research fellow at the Centre for Human Rights at the University of Pretoria. Tomiwa, are you there? Oh, yes. Yes, I am.
Tomiwa Ilori:
Thank you very much, Michael. And quickly, to my presentation: I’ll be focusing more on the regional initiatives in Africa on AI governance. Quickly, according to the African Observatory on Responsible Artificial Intelligence, there are at least 466 AI policy and governance items (or, as used in this conversation, initiatives) that make direct reference to AI in the African region. And it covers quite a broad period, from 1960 to 2023. Those initiatives are categorized in various ways. First, some are categorized as laws, some are categorized as policies, some as reports, some as organizations or projects. Currently, across the region, there is no major treaty or law or standard when it comes to AI governance. When it comes to policies, there are just about two to three of them. And when it comes to organizations and projects that are currently working on AI governance, there are about 25 of them. So I wanted to give just a high-level summary of what is happening with respect to initiatives across the region. These initiatives cover at least 17 policy areas, including access to information and accountability, data sharing and management, digital connectivity and computing, and so on. Generally, these initiatives are led by government, multilateral organizations, publicly funded research, academia and the private sector. And the jurisdictions these initiatives cover include the national level: we’re looking at countries like Mauritius, Kenya and Egypt, which already have a kind of national AI policy. Then we have regional initiatives such as the AU Working Group on AI, and also documents that refer tangentially to the regulation and governance of artificial intelligence systems, such as the Digital Transformation Strategy that covers 2020 to 2030, and the African Union Data Policy Framework. Then we also have the global jurisdiction, like the OECD AI initiatives, and also subnational initiatives. Quickly, that said, artificial intelligence governance in Africa is still very much in its infancy. Most approaches for now are soft, but we are already seeing growing interest towards a hard law approach. A recent example comes from the Kenyan government, which has signified interest in passing a law with respect to regulating AI systems. However, while governance may tarry for a while, interest is increasing from diverse key stakeholders such as governments, businesses, civil society, regional institutions, and many others. What this signals is that governance will not only have to catch up, but when it does, it needs to be dynamic and respond to the unique challenges faced by Africa as a region, in order to ensure that we do not replicate ongoing inequalities. I will stop there for now. Thank you.
Michael Karanicolas:
Thank you. And finally, let’s go to Irakli Khodeli, who is a program specialist for UNESCO to introduce their initiatives in this area.
Irakli Khodeli:
Thank you very much, Michael. Good day, everyone. Thank you for inviting UNESCO to join this panel. My name is Irakli Khodeli. As announced, I’m from the Ethics of Science and Technology team of UNESCO, and I’ll be contributing to our discussion today from a specific angle, the angle of UNESCO’s Recommendation on the Ethics of AI, focusing on this tool for its proven potential to guide countries on AI governance and AI regulation. In a way, I’ll be very happy to be bringing in a global perspective on AI governance, global because the recommendation that I’ve mentioned was adopted almost two years ago by 193 member states of UNESCO. It is grounded in overarching fundamental values, such as human rights, human dignity, diversity, environmental sustainability, peaceful societies, and then these broad values are translated into 10 principles. There was a lot of mention of the principles already. There is perhaps nothing new in the UNESCO principles; for instance, Kyoko has mentioned some of the principles that were guiding the national discussions in Japan, and the OECD principles were also mentioned. What does make UNESCO’s framework distinctive is the specific emphasis on gender, because UNESCO believes that this should actually be disassociated from the general discussion on discrimination, because there are some specific and severe harms and threats to gender diversity and gender equality. And there is also an emphasis on environmental sustainability, because oftentimes in the global discussions this dimension is overlooked. And then finally these values and principles are translated into concrete policy action areas by the recommendation, to show the governments how you can actually operationalize these principles in specific policy contexts, whether this is education and scientific research, whether it’s economy and labor, whether it’s healthcare and social well-being, or communication and information. There are 11 different policy areas in the recommendation. Now there has been, as you’re aware, a lot of discussion globally focusing on the risks posed by AI, ranging from benign to catastrophic and from unintended to very much intended and deliberate harms. And we understand that the risks are significant, and these risks are also cross-border. AI also is closely related to pillars of the UN, such as sustainable development, human rights, gender equality, peace. So in this sense, a UN-led effort in our view is critical, not only because AI requires a global multilateral forum for governance, but also because unregulated AI could undermine other multilateral priorities like the sustainable development goals and others. So what I would like to postulate today in our discussion is that UNESCO’s recommendation represents a comprehensive normative background that can guide the design and operation of a global governance mechanism. I will end by saying that despite this focus on global governance, we must admit that successful regulation happens at the national level. Ultimately, it is the national governments that are responsible for setting up institutions and laws for AI governance.
And here again, the recommendation on the ethics of AI comes in handy, because we are currently working with governments around the world, both in the global north and global south, to help them make concrete use of this recommendation by reinforcing their institutions and regulatory frameworks based on this overall ethical framework. Thank you very much. Really looking forward to engaging with these discussions with you today.
Michael Karanicolas:
Thanks. So I think that’s a fantastic framing of the different initiatives taking place in different parts of the world and by different agencies. I want to start now by opening things up with a discussion of the north-south (global north/global south, majority world/minority world) dynamics that are at play in the broader regulatory landscape, and particularly the pressures from standard-setting emerging from the major regulatory blocs and the challenges that creates in trying to make space, particularly for smaller nations and for voices from the majority world. I think Simon might be a good place to start there, in terms of the challenges that smaller nations face in trying to make their own way from a regulatory perspective, and then we’ll maybe go to someone else from there.
Simon Chesterman:
Sure, thanks so much, and again it’s great to be part of this conversation. I think as Carlos said earlier, there are phases that countries go through. It starts with principles, but indeed those principles themselves, this sort of set of ideas that, as Irakli said now, we’ve seen in the UNESCO document, you can actually trace their origins back through primarily Western technology companies. It was around 2016 to 2018 that Western technology companies took these up, partly because it was around that time that the Cambridge Analytica scandal revealed to many that the risks of errant AI went beyond a weird Amazon recommendation or a biased credit or hiring decision to actually potentially impacting elections, and now we’ve seen with generative AI everyone’s suddenly realizing that AI could actually affect their jobs. So we’ve seen this spread around the world, and it is now I think a truly global discourse, but there are three challenges I think facing small countries in particular. The first is the one I’ve already highlighted, the sort of whether to regulate, because if you’re a small jurisdiction and you regulate too quickly, one of the concerns is that all you will do is drive innovation elsewhere. That can happen to big countries as well. An example of this is when, in 2001, the United States imposed a moratorium on stem cell research, and that really just led to a lot of that research moving elsewhere. So that’s the first question, whether to regulate for fear of driving innovation elsewhere. The second is when to regulate. And here, there’s a useful idea that some of you might be familiar with called the Collingridge Dilemma. This goes back to David Collingridge’s book called The Social Control of Technology. And basically what he argued back in 1980 is that in an early stage of innovation, regulation is easy, but you don’t know what the harms are. You don’t know what you should do, and the risk of over-regulation is significant. The longer you wait, however, the clearer the harms become, but also the cost of regulation goes way up. And so I think, again, for smaller jurisdictions, there is this wariness of losing out on the benefits of artificial intelligence. And as we carry on this discussion, I do think it’s important to keep in mind that there are risks associated with over-regulating as well as under-regulating AI. The third challenge, and this faces many countries around the world, but again, in particular, smaller countries, is that in many ways, the biggest shift over the past decade of machine learning is the extent to which fundamental research as well as application has moved from public to private hands. Back 10 years ago, at the start of the machine learning revolution, a lot of the research was going on in publicly funded universities. Now, a decade later, almost all of it is happening in private institutions, in companies. And that means a couple of things. Firstly, it greatly shortens the time from an idea to deployment as an application. We’ve seen that in generative AI in particular. But secondly, again, it limits the ability of governments to constrain behavior, to nudge behavior, or even to be involved in the deployment cycle. So with those ideas, I’ll hold off, but again, I’m really looking forward to an exchange of views. Thank you.
Michael Karanicolas:
Let's go to Carlos next, and then I would like to hear from someone in the room. So please express your interest now if you're interested in intervening in this discussion.
Carlos Affonso Souza:
And since we're talking about regulation: of course, architecture is a form of regulation as well, and the architecture of this room might be uninviting for people to come up, offer their ideas, and join the conversation. So please feel free to do that. Just to offer a quick segue from what Simon was saying: one challenge we might face when we think about governance and regulation of AI in the majority world is that AI might be invisible, something really ethereal, hard to grasp, making it hard for regulators to enter this discussion to begin with. And that ends up meaning that the examples we can take from the countries that have already regulated this topic exert a very strong influence on the models, the categories, and the way these conversations are set up elsewhere. We now face a challenge, because we want countries from the majority world to be protagonists in the discussion about governance and regulation of AI. But at the same time, especially when we think about the largest countries in the majority world, they end up serving mostly as a source of users for the most famous AI applications rather than anything else. This is something we need to pay extra attention to, because when we think about the regulation and governance of AI, we need to think about what we are trying to address. One thing is the deployment, the creation, and the design of AI; another thing is the usage of an AI application. When it comes to the majority world, we can see pretty often that the applications are not designed or created in those countries, but they will be used heavily there. So it's quite obvious that the discussion about how to regulate not only the creation but also the use of those applications will be key for the success of those initiatives on regulation and governance. I'll just stop here, Michael.
Michael Karanicolas:
Perfect. Let’s go here to the back and then over this way afterwards.
Audience:
Thank you so much. Dr. Ali Mahmood, I'm from Pakistan. I'm heading a provincial government entity that is involved in policymaking. It was interesting to listen to Simon, who mentioned that there has to be a balance between under-regulation and over-regulation. The thing is that we have a national AI policy. It's in the draft stage and we're currently getting input from a lot of stakeholders. It does touch upon the aspect of generative AI, because that's the newer phenomenon, and it has had a really disruptive effect in so many ways. We talk about the ethical use of AI, but take a use case of generative AI: as long as it's assistive in nature, it's acceptable, but beyond that, it can be considered unethical. So I just want to learn from the panelists how we can strike a balance there. Because at the government level, if we look at the education sector, there are a lot of problems already being raised by different government institutions, educational institutions, and universities about generative AI being misused. So at a policy level, I would like to know how we can address this problem. Thank you.

All right, thanks very much. Liming Zhu from the Australian national science agency. Our staff chair Australia's AI standards body, and we also developed Australia's AI ethics principles back in 2019. As a science agency, interestingly, we are not allowed to comment on policy and regulation, because we provide scientific evidence into these policy discussions. But I want to raise two points. One is that no matter the standardization, the policy, or the regulation, you need to measure the risks, the size of the risks. That's a scientific question, and we need an international research alliance on how to measure those risks. Once you fully understand how to measure those risks, then you can probably reduce them and make a very informed decision. The second point is that a lot of these are trade-off decisions, and only when those risks are well understood and measured can you make those trade-offs. For example, when the US Census Bureau released their data for further research, they had to make a very concrete trade-off between data utility and privacy. You have to make a conscious decision to sacrifice some privacy for the gain of some benefits, and that informed decision is made with stakeholder groups, including privacy advocates. But it's even more complex than that. Since then, studies have come out saying that privacy, utility, and fairness all have trade-offs: privacy-preserving approaches sometimes harm fairness and sometimes promote it, so you have to have the fairness foundation for that as well. And many of these are context-driven. I don't know whether people have seen the recent ChatGPT vision model. They basically said one of the use cases is blind people wanting to have face recognition; that's the number one feature they requested, because blind people say, I just want the same ability as a sighted person in a room to recognize faces. But based on face recognition risks and various legislations, they are not allowed to have that, even though it's the number one feature they requested. I think that's an interesting discussion about how the standardization of policy and the science will enable these kinds of trade-off decisions. Thank you.

Milton, did you want to come in? So I just wanted to raise a question about something.
I can't remember which panelist said that when we talk about regulation, we're necessarily talking fundamentally about national governments. If you look at what AI consists of and break it down into its component parts, as Courtney said, you're looking at a combination of data resources, software programs, networks, and computing devices. And all of those are globalized markets. And with the internet, and here's where I'm trying to create a link to internet governance, which is what this forum is supposed to be about, although we might want to rename it the AIGF, it is all very easy to distribute applications and data resources very quickly, and hard to control. So I understand that many forms of applications will be regulated at the national level, like medical devices or something, where you have a nicely defined thing. But AI as a whole is going to be a very globalized form of human interaction, and I don't think that national governments are all going to solve this by themselves.
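To make the utility-privacy trade-off described above concrete, here is a minimal editorial sketch. It assumes a simple Laplace-mechanism formulation of differential privacy, far simpler than, though similar in spirit to, what the US Census Bureau actually deployed; the dataset, value range, and epsilon values are purely illustrative.

```python
# Toy Laplace mechanism: smaller epsilon means stronger privacy but a noisier,
# less useful released statistic. Illustrative only; not any agency's real system.
import numpy as np

rng = np.random.default_rng(0)
incomes = rng.normal(50_000, 15_000, size=10_000)  # hypothetical survey data

def private_mean(values: np.ndarray, epsilon: float, value_range: float) -> float:
    """Release the mean with Laplace noise scaled to the query's sensitivity."""
    sensitivity = value_range / len(values)  # one record's max influence on the mean
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(values.mean() + noise)

true_mean = incomes.mean()
for eps in (0.01, 0.1, 1.0, 10.0):
    released = private_mean(incomes, eps, value_range=200_000)
    print(f"epsilon={eps}: released mean={released:,.0f} "
          f"(error={abs(released - true_mean):,.0f})")
```

Smaller epsilon buys stronger privacy at the cost of larger expected error, which is exactly the conscious, stakeholder-informed sacrifice the speaker describes.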
Michael Karanicolas:
Let's hear from Tomiwa, and then maybe one more intervention, and then we'll move on to the next question.
Tomiwa Ilori:
Thank you very much, Michael. Discussing north-south dynamics, especially from an African perspective on AI governance: for me, I think the race towards global AI governance will favor the bold. And while I will not delve into the ethics of that sentence, it is the reality, especially in Africa, especially with how the region is often bedeviled by importation, especially of standards, sometimes even being referred to only as standards takers, not people who design standards for themselves. And we know that in international law, as in international politics, smaller nations are seldom bold, and they often end up as pawns or testing grounds for bad governance attempts. However, in my view, smaller nations in this context can be bold if they strategize and work together with like-minded initiatives or systems. And when I use the word small, I also mean small in terms of progress with AI governance and initiatives on the ground. The way I see it, it is a long way for a small nation to move alone, but the journey towards responsible AI governance could be shorter if we work with others who may share similar goals and intended results. That would be my quick contribution on that. Thanks.
Michael Karanicolas:
So let's go to one more intervention from the room, and then I'm going to move on to the next question. Yeah.
Audience:
Jeanette Hofmann, Germany. I have a question for Simon Chesterman on the situation in Singapore. You pointed out that Singapore is a small jurisdiction and thereby always faces the risk of driving companies out of the country. But I was thinking of the fact that Singapore has quite a number of really successful companies under public ownership. So I was wondering whether that does not create perfect conditions for regulatory sandboxes, where you can in fact test what type of regulation works and what effects it has on the companies.
Michael Karanicolas:
Sure. Simon, did you want to respond to that? Sure.
Simon Chesterman:
So it's a great question. And indeed, regulatory sandboxes are something we've been exploring, in particular in the fintech sector. The Monetary Authority of Singapore has used this technique, which is not unique to Singapore. The basic idea is you give a kind of safe regulatory playground, where there are reduced risks, that enables companies to test out new use cases. But the larger point about the danger of driving innovation elsewhere really is a concern not limited to Singapore's domestic economy, but extends to attracting the big tech companies, apart from anything else, to Singapore, which we saw 11 years ago when Singapore adopted the Personal Data Protection Act. That legislation was explicitly said to be aimed at balancing the rights of users against the legitimate needs of business. And so I think the combination of the small size, the openness to the world, and the regulatory flexibility of a country like Singapore does give us an opportunity, but we've still got to operate within those kinds of constraints. Maybe, if it's appropriate, I can very quickly respond to earlier comments. I didn't catch his name, but to the gentleman from Pakistan: one of the key arguments that I think needs to be spread around the world about the use of generative AI is that if you're going to use these things, in particular in a public sector context, you've got to keep in mind two things. Firstly, if you share data with generative AI systems like ChatGPT and similar capabilities, you're essentially sharing that data with private agencies, so you need to be very careful what you share. Secondly, it needs to be clear that whatever comes out of it, if you use it, you are responsible for it. And then really quickly, Liming, great to see you even at a distance, and I think it was Milton: I'd link those two comments to say that you need to think of three levels of regulation. We do need the regulatory hammer. As Irakli said, states are the only entities with real coercive powers, and that's going to be essential for harsh regulation when it's needed, so that's an important level. But above and below that, you also need self-regulation, industry standards, and interoperability; that kind of standard setting will in practice be the most common form of regulatory intervention. And you do need some measure of coordination, not just coordination of standards, which is what I think Milton was talking about, but also, to Liming's point, the ability to share information about crises. I won't get into it now, but elsewhere I've written, as others have, about possible comparisons with the International Atomic Energy Agency and the efforts to share information about safe uses of nuclear power in exchange for a promise not to weaponize that technology.
Michael Karanicolas:
I want to pick up on something you mentioned previously, the Collingridge dilemma, for the next question. We're in relatively early phases of this technology; there are a lot of unknowns, but there are also a lot of clear manifestations of potential and existing harms. So the regulatory questions are certainly not speculative, but we are in the relatively early phases of implementation, wide-scale implementation at least. Are there lessons to be drawn from previous eras of tech governance in how we approach the regulatory picture? Are there successes and failures of previous regulatory frameworks that can teach us about what works and what doesn't? Maybe I'll go to Courtney on this first. Thanks so much.
Courtney Radsch:
Yeah, so my work is primarily focused on the so-called global south or majority world, with a focus on the Middle East. And I think if you look at previous eras of tech governance, whether social media, search, app stores, online marketplaces, even standards, they were all rolled out by, and remain controlled by, a few monopolistic tech firms. We need to take this as instructive. The debate about AI governance has failed to grapple with the issue of market power; we are taking the economic ownership and control of AI as a given. And while the discussions around how to prevent AI from inflicting harm are important, and the issues of preventing exploitation and discrimination are absolutely necessary, they will meet with limited success if they are not accompanied by bold action to prevent a few firms from dominating the market. I think that is the biggest takeaway. No matter how well we design our rules, we will struggle to enforce them effectively on corporations that are too large to control, that can treat fines as the cost of doing business, and that can decide, for example, to simply censor news in an entire country if they don't want to comply with the law, as we saw Meta do in Canada recently. So I think we have to look at AI in its component parts, as I mentioned earlier, and think about the dominance we're already seeing by literally a handful of big tech firms that are providing the leading AI foundation models, taking aggressive steps to co-opt independent rivals through investment and partnership agreements, and exercising dominance over, for example, key cloud computing platforms. We know that between Meta, Google, and Amazon, for example, nearly a thousand startup firms were bought with no merger oversight, no FTC intervention. This has to change, because the small-versus-large divide between economies that we've discussed is relevant, but also in a way irrelevant, when you have massive firms creating new capabilities and technologies that national governments do not have power over. So I think we have to look at reshaping the structure of markets: cracking down on anti-competitive practices in the cloud market, and looking at common carrier rules. Regulators should be considering, for example, forcing Microsoft, Amazon, and Google to divest their cloud businesses in order to eliminate the conflict of interest that incentivizes them to self-preference their own AI models over those of rivals, as we have seen in app stores, in search, and in the way that Amazon constrains small businesses and forces them to comply with the rules it sets, because if you're not on Amazon marketplace, it's very hard for you to do business, and in many countries, if you're not on Google search, you might as well not exist if you're a news organization. So there's much to be learned, but we need to get out of this idea that this is somehow some new, really scary thing that we're trying to govern. Again, look at the components: data, computational power, software applications, cloud computing. Think about each of those component parts, as well as about risk assessments and risk frameworks, which really sit at the far end of the application layer and concern the implications of a certain subset of AI systems.
Michael Karanicolas:
The multidimensional nature of how power is concentrating is certainly well taken. If we're thinking about a longer view of technological developments and regulation, maybe let's go to Irakli from UNESCO, which has certainly been present through a lot of these different areas of governance; I would be interested to hear their thoughts.
Irakli Khodeli:
Sure, thank you very much, Michael, and also to all the other participants for very insightful comments and discussion. This is actually a really nice question for me, and a chance to get back to something Milton mentioned: the difficulty for member states to govern something so cross-border in nature, which has to do with things like the flow of data across borders, the internet, et cetera. Because it relates to whether there are cases where we have successfully regulated an emerging technology. My answer, and I might be reiterating some of the points that Simon and other speakers have made, is that the successful regulation of any technology, in our view, takes regulatory frameworks existing at different levels. At the global level, and this responds precisely to Milton's question, that is why you need a global governance mechanism that coordinates and ensures compatibility and interoperability between different layers of regulation. Usually at the global level you have the softest level of regulation; it could be a declaration or a recommendation, but it could also be a convention, which would be a more binding document. At the international level right now, at the UN level, that is what the conversation is about: what kind of regulatory mechanism you have. And let's not forget the importance of regional organizations and regional arrangements. Of course, the European Union comes immediately to mind, and it has been mentioned many, many times, but ideally we would want the same type of movement within the African Union and within ASEAN in Asia. The Council of Europe already has a concrete process towards an instrument, and of course there is the OECD, so regional regulation would be very important. Then there is the national level: we cannot avoid the fact that when it comes to redressing cases where harm has been done, or to the enforcement of different mechanisms, the national level is indispensable. And let's not forget the sub-national level. Courtney has mentioned, for instance, a lot of state-level activity on AI regulation. We're also aware that similar processes exist in other countries; in India, for instance, there is a lot of legislative activism at the state level, below the national level. So all these different levels, I think, can effectively work together to regulate the technology. I'll end with a concrete example. Bioethics is something UNESCO has been engaged in for a long time. Simon mentioned stem cell research; perhaps that is not the best specific example, because for the US it may be an example of over-regulation. But in bioethics we have the Universal Declaration on Bioethics and Human Rights at UNESCO, which all member states, basically all countries around the world, have signed on to. Then you have an example of a stronger framework, the Oviedo Convention of the Council of Europe, also in bioethics, which provides a more stringent framework. And that is translated into specific, very binding and strong regulation at the country level in European countries to protect people against the risks emerging from biological and medical science and technology. So that's a concrete example, thanks.
Michael Karanicolas:
Yeah, I think that the structural framing is helpful. I would add trade associations and private sector standard-setting bodies as well, which can be enormously influential. I'll also note, though, that while these different levels of regulation, these different structures, can work together, they can also compete and work at cross-purposes, which I think adds an interesting dimension to how norms get set and applied. Let's go to another comment in the room and then to Carlos.
Audience:
Ingrid Volkmer, University of Melbourne, Digital Policy. I think this debate about power is really interesting. It's about new power dynamics between the global North and South, with Western companies producing a lot of data across the world, et cetera; we've addressed that. But I think there is another dimension, and that is the granularity of data. Because in the global South, perhaps there is not the same quality of finely granulated data that's available in the global North. Through that process alone, I think there is a lot of risk that could be produced through AI. And I don't know a solution to that. I know that the ITU has a lot of initiatives around AI for good, with farming and medical applications, et cetera, in the global South. But I think this issue of data granularity is perhaps another one that could be addressed in the power debate we're having. Thank you.
Carlos Affonso Souza:
Super quickly, since we are discussing the lessons learned from at least 25 years of thinking about internet regulation: maybe one thing we should take into account is that copyright and freedom of expression were the two issues addressed early on in the regulation of the internet. And by the time social media appeared on the global scene, we had a surge of personal data protection laws, which was fundamental for understanding what internet regulation has looked like in the last decade. So when we shift gears into the discussion about AI regulation, it looks like we have at least two very interesting questions in comparison with the experience we had with the regulation of the internet. The first is how much the modeling of personal data protection laws, such as the concern with risk analysis, will end up influencing the way AI regulation is shaped. The second is how the decisions that ended up being taken on the issue of platform liability, in different countries and regions, can in a certain way be carried into the discussion about the damages caused by AI. Because, first of all, we need to ask ourselves what type of AI and what type of damages we're talking about. And here I think we have an entirely different discussion, because with AI we have this opportunistic discourse: if the AI application ends up causing trouble and damage to other people, the AI application is said to be super smart, and the robot, the application, decided on its own to cause the harm. But on the opposite end, when the AI application ends up providing you profits, such as in this discussion about copyright, you want the machine, the application, to be as dumb as possible, so that you as the developer, you as the deployer of the AI, end up keeping all the profits of having this type of application out there in the market. This is a type of discussion that we didn't have back then, in the debates about internet regulation; this opportunistic usage of the autonomy of the AI application is very unique to the discussions we're having right now on AI. I think it puts us in front of a very different set of questions.
Michael Karanicolas:
I think the copyright example, anytime you're talking about learning from previous eras of regulation, is incredibly salient. And I would say that even today, enforcement of IP rights online is vastly stronger than, say, enforcement of privacy rights. The reason for that is entirely a legacy of the early prioritization of the harms that were viewed as the most pressing and urgent to address in early regulatory efforts. So I think the point about needing to be deliberate and careful in selecting how harms are understood and prioritized, as we grapple with developing new regulations now, is incredibly important, because it will ripple forward over time as these technologies continue to proliferate.
Courtney Radsch:
Yeah, just to build on that: I think we also have to think about the political economy of the protocols created by technical standards-setting bodies. For example, robots.txt is a standard, and HTTP versus HTTPS are standards, that were created in technical communities without necessarily considering the political-economic implications of the capabilities being created through those standards. And to build on your two points on copyright: the ability to just hoover up all of this rights-protected data to create large language models, without any compensation to content creators, news producers, et cetera, has huge repercussions on the economics of certain industries. In my own work I focus on the journalism industry, but it also matters for broader society, for work, et cetera. So I think we need to take into consideration that technical standards are not neutral; they have political-economic impacts. And we have to think about proactively neutralizing big tech's unfairly acquired data advantage. We should also think about the fact that a representative from Meta stood on the AI high-level panel. We are recreating a lot of the problems of the past by elevating the same big tech companies instead of seeing a greater diversity of technology and technical communities; there is an overabundance of big tech and corporate representatives in a lot of the multi-stakeholder processes. So we need, I think, to reorient that.
Michael Karanicolas:
OK, so we're doing a lot of beating up on big tech and the tech sector at the moment. So let me ask the next question I want to get to: what is the appropriate role for industry in regulation and standard setting? What does it mean to have meaningful multi-stakeholder engagement? I want to go to the room. So we have one over there. And I want to know, is anybody here from industry, from the tech sector, who could contribute as well, either here or from the back? No? Yes? Well, let's go to the corner first.
Audience:
Can you hear me now? Okay. This is Guo Wu, actually from TWIGF. And it's kind of interesting: I learned natural language processing in the 1980s. I don't know how many of you were learning natural language processing in the early days, but there's a really big difference between 1984 and now. In the early days we were talking about the algorithm, but these days we are talking about massive databases: the machine is learning from a massive database. But I don't know of anybody who has studied this kind of situation. Today, AI is learning from massive databases. But think about two groups. Group A is a group of people that produces a huge amount of data, so the machine can learn a lot from group A. And think about another group of people, group B, who don't produce a lot of data, meaning the machine cannot learn enough from group B. Now, when the AI machine, after all this learning, tries to generate its comments or whatever the result is, is it possible that, because one group's data is scarce and the other's is abundant, it will generate a kind of discrimination against group B and prefer group A? I don't know of anybody studying such a case.

Michael Karanicolas:
Yeah, I'm going to make one more call for anybody from the private sector and industry to discuss the role of the private sector in regulation, in crafting good regulations and standard setting. If nobody speaks up, you don't get to complain if our outcome document is not fair. So, well, let me frame it this way then: we hear a lot about multi-stakeholderism. What does it mean to have a meaningful multi-stakeholder process for crafting either standards or regulation in this space? Why don't we go to Kyoko first, and then someone in the room who would like to chime in, or someone on Zoom.
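Guo Wu's concern can be made concrete with a small, purely hypothetical sketch: a classifier trained on pooled data in which group A contributes far more examples than group B, and the two groups follow opposite patterns, will fit A's pattern and err badly on B. The data, model choice, and group sizes below are illustrative assumptions, not a study result.

```python
# Toy illustration of data imbalance: the model learns the data-rich group's rule
# and systematically misclassifies the data-poor group whose pattern differs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n: int, weight: float) -> tuple[np.ndarray, np.ndarray]:
    """Each group labels the same feature differently (weight flips the rule)."""
    x = rng.normal(size=(n, 1))
    y = (weight * x[:, 0] + rng.normal(scale=0.3, size=n) > 0).astype(int)
    return x, y

x_a, y_a = make_group(10_000, weight=+1.0)  # group A: abundant data
x_b, y_b = make_group(100, weight=-1.0)     # group B: scarce data, opposite pattern

model = LogisticRegression().fit(np.vstack([x_a, x_b]), np.concatenate([y_a, y_b]))

print("accuracy on group A:", model.score(x_a, y_a))  # high: A's rule was learned
print("accuracy on group B:", model.score(x_b, y_b))  # low: B's pattern drowned out
```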
Kyoko Yoshinaga:
So let me talk about the industry role. Industry should consider developing or using responsible AI as part of their corporate social responsibility, or as part of their environmental, social and governance (ESG) practices, since the way in which they develop, sell, or use AI will have a huge impact on society as a whole. I would like to point to three main things organizations can do. One is to create guidelines on the development and use of AI, including a code of conduct, internal R&D guidelines, and AI utilization principles, and to provide publicly accessible documents, such as an AI policy on how the organization develops and utilizes AI systems, like we did for privacy policies. This is very meaningful, and I know this because I was working in a think tank developing AI systems, where I was in charge of AI risk management and compliance. When we made our AI policy publicly available, all the people involved in the AI development process became really responsible about building it ethically. So it's like a manifestation, but I think this manifestation is very important and effective for developer companies and user companies to be responsible in making and using AI appropriately. Many companies in Japan, like Sony, Fujitsu, NEC, and NTT Data, have already developed AI policies based on the guidelines I mentioned at the beginning, and it seems to be working well, even though these are non-binding guidelines, a soft-law approach. I'm seeing a similar situation now to what I saw back in 2005. In 2005, the Ministry of Economy, Trade and Industry created what we call the Information Security Governance Policy Framework. At that time there were many information security incidents, and the government realized that we needed to do something. I was in the think tank assisting in making that framework, and we made three tools for establishing information security governance. One, we made an information security benchmark to help organizations rigorously and comprehensively self-assess the gap between their current condition and the desirable level of information security. Second, we made a model for information security reporting to encourage companies to disclose their information security efforts. And third, we made a guideline for business continuity planning to encourage companies to develop such plans. This initiative has led many companies to build robust information security governance. So in the context of AI governance, creating similar frameworks may encourage management to establish robust AI governance within their organizations, perhaps functioning as part of their ESG efforts.
Michael Karanicolas:
So let's go to Simon next, and then someone else in the room if they want to chime in.
Simon Chesterman:
Thanks. Yeah, on the role of companies, I do think it's sort of amazing how things have changed. Back in 2011, Ryan Calo, who's a great scholar in this area, wrote something very silly, I think, where he argued that in order to encourage research into AI, we needed to give companies immunity from suit; otherwise the risks would be so great that they wouldn't innovate. Now, clearly that hasn't happened. Jump forward to today, and you've got companies lining up to call for regulation. But they're doing that for at least three reasons. One reason is that I think many of them do actually accept that some regulation would be useful. Second, they know that some kind of regulation is coming, and they'd like to be part of that conversation. But thirdly, especially for the big market players, they know that if regulatory costs go up, that becomes an additional barrier to entry for their competitors, so it's good for them. So by all means, I think it's important to involve companies in these processes. And I echo what Kyoko and others have said: these emerging interoperable standards are going to be very, very important. But we've also got to be clear-eyed about the incentives that drive these companies, which is to make money. That's one reason why a lot of what's being deployed now seems to be making money in the two ways that have been revealed as the money-making aspects of AI. The first is to monetize human attention, and we've seen that through surveillance capitalism and the experience of social media. The second is to replace human labor. So for all these reasons, I do think it's important to involve companies, but also to understand that, yes, they've got to pay attention to ESG and so on, the triple bottom line, but ultimately they are businesses. And if we, the community, or if regulators, make a determination that these companies are too big, then you've got three choices. You've got the litigation path, which the US is going down at the moment, with the slim, slim possibility that some of these FTC or DOJ actions might actually lead to the breakup of companies. You've got the European approach, which is to say, okay, we're just going to identify gatekeepers, and these six companies are now going to be subject to much heavier regulation. Or you've got the Chinese approach, which is to say, through executive action, Alibaba is going to be broken up into six companies, and we'll address the problem that way. So yes, by all means involve companies, but also understand their perspectives, where they're coming from, and don't expect them to be turkeys that vote in favor of Christmas.
Michael Karanicolas:
Yeah, I think that leads pretty neatly to the next question I wanted to raise, which is related to risk-based versus rights-based approaches to regulation. I won't say self-regulation, because there's probably good consensus in this room, and in most rooms, that we're not satisfied with a self-regulatory solution; but there is an emphasis within a lot of early regulatory models on self-assessment and risk assessment as a critical component of regulatory structures. That's an important part of the EU's draft regulation in this space, and in the US, the AI Commission Bill has explicitly endorsed a risk-based assessment model. Are there thoughts on the role of this kind of assessment in effective regulation, the role of companies in carrying out these assessments, and the challenge of developing an effective framework if it relies on internal assessments by companies? Milton?
Audience:
Oh, yeah. Well, I think the risk assessment approach that is in one American bill and in the European bills is kind of a joke. Basically, they are asking for self-assessments. And this is not because I'm anti-industry and don't trust them to do it; I think it's just going to be a box-ticking exercise. The point we need to think about is that in many cases you don't know what the risks are going to be. These things don't exist yet, right? That's what makes me laugh about the European model: people are supposed to sort themselves into the different risk levels, but how do you know what the risk is until it happens? So I don't believe in these kinds of ex-ante forms of regulation, where the government pretends that it is all-knowing and thinks it can decide in advance. I'd like to bring your attention instead to a rights-based approach based on property rights: whenever you have a new technology, you create new forms of property. We saw this in the domain name industry, Michael. These things were nothing; they were given out for free. Then suddenly they were valuable, and they conflicted with trademark rights. So we had a policymaking process ex post, where we figured out who had a right to what. And now, with so-called surveillance capitalism, we are discovering the value of data resources, and we have to renegotiate the boundary between who owns or controls what data when users interact with platforms. I think it's a mistake to view that as an extraction process where a helpless human just gets data taken away from them; you're engaged in an exchange. You are getting something, and you are giving up something, and we have to decide how that data gets monitored, owned, and regulated. That's not an easy problem. So I would think that with AI, the issue is going to be a lot about property rights. And it's interesting to see how we're replaying some of the conservative, protectionist arguments about copyright now. Remember, when we started the internet, some copyright people were saying, every time you move a file from one server to another, you're making an illegitimate copy. That would have killed the internet: they wanted the definition of property rights in digital items to be so strict that we simply would not have had an internet.
Michael Karanicolas:
So we have to be careful about that. I'll also say it's interesting, because regulatory ambiguity in other contexts can lead to overly cautious approaches if it's accompanied by either aggressive enforcement or extremely severe penalties. In the speech realm, a vague law is always viewed as really dangerous, because if it can be aggressively enforced, people steer really clear of the line. But it's just not clear to me that any of the proposed AI regulatory frameworks would, or could, incorporate that level of enforcement. So I think it's interesting that ambiguity can work both ways, but it's unlikely to work that way in this context. Let's go, Courtney, and then, yeah.
Courtney Radsch:
Yeah, to definitely agree with Milton on the risk-based approach: you just don't know, and that limits what you're even talking about. We're not talking about, for example, regulation aimed at reclassifying some types of companies as common carriers, or imposing public-utility-type requirements. And yes, the way we addressed property rights on the internet gave rise to the internet. On the other hand, the way we implemented some of those copyrights, or the lack thereof, and some of the digital advertising structures that emerged, has killed off a large part of the news media industry, which is considered an essential component of democratic systems. So there is a trade-off; unfettered innovation is not necessarily good. I think the rights-versus-risk framing does not get at many of the issues at stake. We talk a lot about individual-level data, and I think you're right, Milton, that with user data there is some exchange. But there's a lot of data that is not individual data: sensor data, environmental data, data about movement, and data about data. That is also incredibly valuable, and it is currently dominated, again, by larger firms that have more access to data, et cetera. So I feel the rights-based and risk-based approaches are important for a specific subset of AI, particularly when we're talking about generative AI or decision-making AI systems in certain sectors, but that is only a small component of AI. We have to think about public-interest-oriented regulation and a wider set of policy interventions.
Michael Karanicolas:
So let's, did you raise your hand? Oh, I'm sorry. Oh, yeah.
Audience:
Thank you. I wanted to respond to Milton and politely disagree. First of all, risk assessment is nothing new. My objection concerns the fact that you think it's ridiculous to ask platforms to assess the risks. All companies have lots of experience with risk modeling as a technique; it's not new to them, and they're used to doing it. Now they are asked to assess the risks vis-a-vis some fairly specific groups, vulnerable groups, risks to wellbeing, to all sorts of things. And as we know through various leaks, they know themselves what they're doing to specific user groups; that is not new to them either. Finally, the DSA now gives researchers privileged access to data produced by platforms in the area of risk, and I think it will be possible, to some extent, for research to assess how platforms assess the risks they impose on societies. It will be fairly interesting to see, in particular, how platforms deal with the question of general risks to society. I have no clue how they're going to operationalize that; they will have to do it to tick boxes, but there are research groups that will be able to hold them to account for the way they approach this problem.
Michael Karanicolas:
Yeah, I think a lot of us poor non-EU researchers are jealous of our colleagues who are going to be able to do some really interesting research based on that. I have-
Courtney Radsch:
If they at least have funding to do that, right? Otherwise you're relying on underfunded civil society and academia to provide oversight of powerful, wealthy companies that do their own risk assessments but may fire the people who find the risks, or bury that research. So it's not a perfect solution.
Michael Karanicolas:
All right, so let's… So Carlos, did you want to come in? And then I think we have a comment on the call.
Carlos Affonso Souza:
So, just very quickly, to react to Milton's provocative comments on the status of regulation: I think there's something for us to take into account here, which is that for countries, having something on the regulation of AI is almost like a brand, a signal of being part of a group that is thinking about the future. That has been leading us to situations in which we come up with regulations that are far from perfect, but we keep hearing, in different countries and different discussions, people say it's better than nothing. So I think this is a moment in which we should ask: should we be happy with having something that is merely better than nothing, just to be part of the group of countries that have already done something? I think quite the opposite. We are in a very important moment in which we could learn from the experience from abroad, drawing on lessons learned and best practices to come up with interesting and innovative solutions. But just to react to Milton's comments: when we look especially at the influence of the European solutions on some of these topics, we have shadows of the European solutions appearing in different countries, solutions that might not even function properly, but legislators will say, hey, we have done something. So, at the end of the day, better than nothing.
Michael Karanicolas:
Yeah, I think there's an interesting tension between the undoubted need for, and benefits of, engagement, mutual learning, and sharing best practices, and the importance of factoring local contexts into regulatory processes, right? Obviously, I don't know that the world benefits from 195 radically different frameworks, but it can also be problematic when countries simply cut and paste, say, an EU model or an American model into their local context. That context can lack the appropriate surrounding structure of related legislation, like adopting the EU AI Act without the GDPR, the DSA, and the Digital Markets Act, which are all important components of the same regulatory ecosystem; or it can mean importing a conception of harm that's not fit for purpose in the local context. We had a comment in the corner.
Audience:
Thank you. My name is Sonny, and I'm from the National Physical Laboratory of the United Kingdom. A few of the words used here were drilling towards what I was trying to say earlier on, and it has maybe helped me that everyone else has now spoken. So the word "assessment" is where I'm going to start: that is expressly a measurement activity. How are we going to measure all these things? How are we going to measure compliance and performance? How am I going to measure whether I can trust a system, or that it's safe, depending on the context, because every context is different? There's a bit of a paradigm shift coming, and it's coming in our part of the world as well; and by our part of the world I mean the measurement part of the world rather than any geographical part. So, how do we measure these things? Before standards come things called pre-normative standards, which can take anywhere from two to twenty years of development before you get to the standard; this is where you establish how to measure that something does what it says on the tin. So there's a lot of work that needs to happen on that side of things. The kind of work that NIST does in America is what NPL does in the UK, and there are around a hundred nation-state signatories to this system, so it could be an interesting platform where collaboration and a multi-stakeholder approach occur. The reason I say that is that organizations like ours sit on the cusp between industry, academia, and civil society. And then, to touch on the "what can industry do for us" part: we need to collaborate with industry. They provide access to resources, be that compute or the models themselves; they bring us access to case studies and use cases; and with that knowledge and understanding, we can help them and they can help us to help them, because then we open things up and different lenses can be brought to bear on various things. And the last thing is this question of context. It's not just about quantitative measurement; it's about qualitative measurement too. What does a socio-technical test bed to measure the trustworthy outputs of AI actually look like? That's something the world needs to work on together. Thank you.
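As a minimal illustration of the kind of quantitative measurement such a socio-technical test bed might report, here is a toy sketch computing one common fairness metric, the demographic parity difference; the scores, groups, and decision threshold are illustrative assumptions, and no single metric would suffice on its own.

```python
# Toy sketch: one quantitative measurement an AI test bed might report.
# Demographic parity difference: the gap in positive-decision rates between groups.
import numpy as np

rng = np.random.default_rng(7)

scores = rng.uniform(size=1_000)           # hypothetical model scores
group = rng.integers(0, 2, size=1_000)     # hypothetical group membership (0 or 1)
decisions = (scores > 0.5).astype(int)     # illustrative decision threshold

rate_g0 = decisions[group == 0].mean()
rate_g1 = decisions[group == 1].mean()
print(f"positive rate, group 0: {rate_g0:.3f}")
print(f"positive rate, group 1: {rate_g1:.3f}")
print(f"demographic parity difference: {abs(rate_g0 - rate_g1):.3f}")
```

A real test bed would pair quantitative metrics like this with the qualitative, context-specific evaluation the speaker calls for.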
Kyoko Yoshinaga:
Yes, I understand that the EU, US, and Japan are all taking a risk-based approach right now. And I think it is important to examine what the risks are beforehand. But regulating these risks in a precautionary way with hard law is somewhat dangerous, because the risks vary according to context, and the level of AI technology varies among countries. So we should not impose hard law on other countries, but rather agree on basic principles and leave it to each country to decide whether to take hard-law or soft-law approaches. Factors like corporate culture, whether companies are compliant or not, safety, and the level of technology should all be taken into account in deciding how to regulate AI. For example, one of the threats posed by AI is intrusion on privacy, for example through surveillance or real-time biometric ID systems. In that case, it is important to have a personal data protection law. These factors vary among countries, so we should not say one law is better than another. I think each government should make regulations in its own way, considering these factors in its own context. Thanks.
Michael Karanicolas:
So, that just about takes us to time. I was a bit daunted when I saw the IGF schedule get released and saw that there were so many different sessions on AI. I'm not going to say that ours is the best, though I might put that in our outcome report. But I certainly learned a lot from the perspectives expressed here, both among our panelists and from the rest of you in the room. And I think it's an incredibly important conversation, given both the importance and urgency of these challenges, and this unusual combination of something that urgently needs attention but is also incredibly important to get right. So thanks again to all of our panelists, thanks again to all of you who participated, and I look forward to keeping the conversation going. Thanks, Michael. Thanks everyone. Thanks, Michael. Thank you. Have a nice day.
Speakers
Audience
Speech speed
172 words per minute
Speech length
2755 words
Speech time
964 secs
Arguments
Striking a balance in regulation over generative AI is challenging
Supporting facts:
- The speaker is heading a provincial government entity in Pakistan involved in policymaking
- The national AI policy of Pakistan is in draft stage and is receiving input from stakeholders
- Generative AI has caused disruptive effects
Topics: Generative AI, Regulation, National AI Policy, Ethics
Singapore’s small jurisdiction size can hinder the implementation of regulations
Supporting facts:
- Singapore’s small jurisdiction could potentially drive businesses away due to regulations
Topics: Regulation, Jurisdiction, Business
Structural framing of the interaction of different bodies in norm-setting is helpful
Supporting facts:
- Trade associations and private sector standard-setting bodies can be highly influential
Topics: data governance, innovation policies
Different levels of regulation can work against each other, creating conflicts
Supporting facts:
- These structures can work at cross-purposes and compete.
Topics: regulation, trade associations
Data granularity in the global South could pose risks for AI
Supporting facts:
- In the global South, there might not be the same fine granularity of data that’s available in the global North, which may produce risks through AI.
Topics: data granularity, AI risks, global South
There’s need to address the power dynamics between the global North and South
Supporting facts:
- Western companies produce a lot of data across the world
Topics: Power dynamics, global North and South
The role of the private sector in regulation and standard setting is important in AI
Supporting facts:
- The host Michael Karanicolas called for private sector participation in the discussion
Topics: regulation, AI, private sector
AI systems learning from massive data may create discrimination
Supporting facts:
- Guo Wu noted a shift in AI learning, from algorithms in 1984 to massive data learning today
- He expressed concerns about potential discrimination for groups that do not produce a lot of data for AI to learn from
Topics: AI, data bias, discrimination
Risk assessment approach in AI regulation is ineffective
Supporting facts:
- The EU and US bills ask for self-assessments
- The risk levels are unclear as they are sorted before the technology is fully realized
- The approach assumes a government can forecast risks before they occur
Topics: AI Regulation, Risk Assessment
It’s not unreasonable to ask platforms to assess risks
Supporting facts:
- Companies have a lot of experience with risk modeling.
- Platforms have knowledge about their impacts on specific user groups.
- The DSA allows researchers access to data produced by platforms regarding risk.
Topics: Risk Modeling, Data Management, Digital Services Act (DSA)
Need to define ways to measure AI compliance and performance
Supporting facts:
- The audience member mentions the need to understand how to measure various factors like compliance, performance, and trust in AI systems.
- The concept of pre-normative standards was brought up, which can take from 2 to 20 years to develop before being established as a standard.
Topics: AI regulation, AI measurement
Collaboration with industry is essential
Supporting facts:
- The speaker expressed the need to collaborate with industry, referring to their ability to provide resources, case studies, and knowledge.
- The statement was made within the context of mutual benefit – that these organizations can help industry, and vice versa.
Topics: Industry collaboration, Resource sharing, Knowledge sharing
The need for understanding and measuring the impact of AI within different context
Supporting facts:
- The speaker mentioned that every context is different, so the impact and effectiveness of AI need to be measured accordingly.
- A socio-technical test bed was brought up as a possible tool for measuring the trustworthy outputs of AI.
Topics: AI Impact, Contextual measurement
Report
During the discussion on regulating artificial intelligence (AI), several key challenges and considerations were brought forward. One of the main challenges highlighted was the need to strike a balance in regulating generative AI, which has caused disruptive effects. This task proves to be challenging due to the complex nature of generative AI and its potential impact on multiple sectors.
It was noted that the national AI policy of Pakistan, for example, is still in the draft stage and is open for input from various stakeholders. Another crucial consideration is the measurement of risks associated with AI usage. The speaker from the Australian National Science Agency emphasized the importance of assessing the risks and trade-offs involved in AI applications.
There was a call for an international research alliance to explore how to effectively measure these risks. This approach aims to guide policymakers and regulators in making informed decisions about the use of AI. The discussion also explored the need for context-based trade-offs in AI usage.
One example provided was the case of face recognition for blind people. While blind individuals desire the same level of facial recognition ability as sighted individuals, legislation that inhibits the development of face recognition for blind people due to associated risks was mentioned.
This highlights the need to carefully consider the trade-offs and context-specific implications of AI applications. The global nature of AI was another topic of concern. It was pointed out that AI applications and data can easily be distributed globally through the internet, making it difficult for national governments alone to regulate AI effectively.
This observation indicates the necessity of international collaboration and partnerships in regulating AI in order to mitigate any potential risks and ensure responsible use. The impact of jurisdiction size on regulation was also discussed. The example of Singapore’s small jurisdiction size potentially driving businesses away due to regulations was mentioned.
However, it was suggested that Singapore’s successful publicly-owned companies could serve as testing grounds for regulation implementation. This would allow for experimentation and learning about what works and what consequences may arise. Data governance and standard-setting bodies were also acknowledged as influential in AI regulation.
Trade associations and private sector standard-setting bodies were highlighted for their significant role. However, it was noted that these structures can sometimes work at cross-purposes and compete, potentially creating conflicts. This calls for a careful consideration of the interaction between different bodies involved in norm-setting processes.
The issue of data granularity in the global South was raised, highlighting a potential risk for AI. It was noted that the global South might not have the same fine granularity of data available as the global North, which may lead to risks in the application of AI.
This disparity emphasizes the need to address power dynamics between the global North and South to ensure a fair and equitable AI practice. Several arguments were made regarding the role of the private sector in AI regulation and standard-setting. The host called for private sector participation in the discussion, recognizing the importance of their involvement.
However, concerns were expressed about potential discrimination in AI systems that learn from massive data. The shift in AI learning from algorithms in the past to massive data learning today raises concerns about potential biases and discrimination against groups that do not produce a lot of data for AI to learn from.
The speakers also emphasized the importance of multi-stakeholder engagement in regulation and standard-setting. Meaningful multi-stakeholder processes were deemed necessary for crafting effective standards and regulations for AI. This approach promotes inclusivity and ensures that various perspectives and interests are considered.
Current models of AI regulation were criticized for being inadequate, with companies sorting themselves into risk levels without comprehensive assessment. Such models were seen as box-ticking exercises rather than effective regulation measures. This critique underscores the need for improved risk assessment approaches that take into account the nuanced and evolving nature of AI technologies.
A rights-based approach focused on property rights was argued to be crucial in AI regulation. New technologies, such as AI, have created new forms of property, raising discussions around ownership and control of data. Strict definitions of digital property rights were cautioned against, as they might stifle innovation.
Striking a balance between protecting property rights and fostering a dynamic AI ecosystem is essential. The importance of understanding and measuring the impact of AI within different contexts was highlighted. The need to define ways to measure AI compliance, performance, and trust in AI systems was emphasized.
Pre-normative standards were suggested as a potentially helpful framework, although the lengthy time frame required to develop them and establish them as formal standards was acknowledged. Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of resources, case studies, and knowledge.
The mutual benefit between academia and industry in research and development efforts was acknowledged, emphasizing the significance of partnerships for effective regulation and innovation. In conclusion, the discussion on regulating AI delved into various challenges and considerations. Striking a balance in the regulation of generative AI, measuring risks associated with AI usage, addressing context-specific trade-offs, and promoting multi-stakeholder engagement were key points raised.
The impact of data granularity, power dynamics, and the role of the private sector were also highlighted. Observations were made regarding the inadequacy of current AI regulation models, the need for a rights-based approach focused on property rights, and the importance of understanding and measuring the impact of AI within different contexts.
Collaboration with industry was emphasized as crucial, and various arguments and evidence were presented throughout the discussion to support these points.
Carlos Affonso Souza
Speech speed
150 words per minute
Speech length
1572 words
Speech time
627 secs
Arguments
Regulation of AI is moving through a three-step process: broad ethical principles, national strategies, and hard law
Supporting facts:
- Several countries in Latin America, including Argentina, Brazil, Colombia, Peru, and Mexico are very active in the discussion about governance and regulation of AI.
- Governance and regulation itself is a form of technology.
Topics: AI regulation, Ethics of AI, National Strategies
Regulation of AI in the majority world is a challenge due to the technology's invisible and intangible nature.
Supporting facts:
- AI might be invisible, something really ethereal, hard to grasp.
Topics: AI regulation, AI governance
There is a need for countries in the majority world to create their own regulations and governance of AI.
Supporting facts:
- Large countries in the majority world primarily serve as users of the AI applications rather than developers.
Topics: AI governance, Regulatory frameworks
Regulations should address not only the creation but also the use of AI applications.
Supporting facts:
- The applications are not going to be designed or created in the majority world countries, but they will be heavily used there.
Topics: AI usage, AI governance
The experience of internet regulation can be useful when considering AI regulation.
Supporting facts:
- Copyright and freedom of expression were the two issues addressed early on in the internet regulation.
- The surge of personal data protection laws is fundamental to understanding what internet regulation has looked like over the last decade.
Topics: AI regulation, Internet regulation
Personal data protection laws and decisions on platform liability will likely have significant influence on the shape of AI regulation.
Topics: AI regulation, Data Protection Laws, Platform liability
Understanding the type of AI and the nature of its damages is essential to the regulation of AI.
Topics: AI regulation, AI types, AI damages
Countries are regulating AI to signal that they are future-forward
Supporting facts:
- Countries are coming up with imperfect regulations but consider them better than nothing
- Having some regulation on AI is seen as a status symbol of being future-oriented
Topics: AI regulation, International relations, Branding
European AI regulation solutions are being adopted by other countries
Supporting facts:
- Legislators adopt European AI regulation solutions even if they’re aware of their issues
- This adoption is done in an attempt to show that something is being done towards regulating AI
Topics: AI regulation, European Union, Legal adoption
Report
In Latin America, several countries, including Argentina, Brazil, Colombia, Peru, and Mexico, are actively engaging in discussions and actions related to the governance and regulation of Artificial Intelligence (AI). This reflects a growing recognition of the need to address the ethical implications and potential risks associated with AI technology.
The process of implementing AI regulation typically involves three stages: the establishment of broad ethical principles, the development of national strategies, and the enactment of hard laws. However, different countries in Latin America are at varying stages of this regulatory process, which is influenced by their unique priorities, approaches, and long-term visions.
Each country has its specific perspective on how AI will drive economic, political, and cultural changes within society. Accordingly, they are implementing national strategies and specific regulations through diverse mechanisms. One of the challenges in regulating AI in the majority world lies in the nature of the technology itself.
AI can often be invisible and intangible, making it difficult to grasp and regulate effectively. This creates a need for countries in the majority world to develop their own regulations and governance frameworks for AI. Moreover, these countries primarily serve as users of AI applications rather than developers, making it even more crucial to establish regulations that address not only the creation but also the use of AI applications.
This highlights the importance of ensuring that AI technologies are used ethically and responsibly, considering the potential impact on individuals and society. Drawing from the experience of internet regulation, which has dealt with issues such as copyright, freedom of expression, and personal data protection, can provide valuable insights when considering AI regulation.
The development of personal data protection laws and decisions on platform liability are also likely to significantly influence the shape of AI regulation. Understanding the different types of AI and the nature of the damages they can cause is essential for effective regulation.
It is argued that AI should not be viewed as purely autonomous or dumb, but rather as a tool that can cause both harm and profit. Algorithmic decisions are not made autonomously or unknowingly; rather, they reflect biases in their design or fulfill their intended functions.
Countries’ motivations for regulating AI vary. Some view it as a status symbol of being future-oriented, while others believe it is important to learn from regulation efforts abroad and develop innovative solutions tailored to their own contexts. There is a tendency to adopt European solutions for AI regulation, even if they may not function optimally.
This adoption is driven by the desire to demonstrate that efforts are being made towards regulating AI. In conclusion, Latin American countries are actively engaging in discussions and actions to regulate AI, recognizing the need to address its ethical implications and potential risks.
The implementation of AI regulation involves multiple stages, and countries are at different phases of this process. Challenges arise due to the intangible nature of AI, which requires countries to create their own regulations. The use of AI applications, as well as the type and nature of damages caused by AI, are important considerations for regulation.
The experience of internet regulation can provide useful insights for AI regulation. The motivations for regulating AI vary among countries, and there is a tendency to adopt European solutions. Despite the shortcomings of these solutions, countries still adopt them to show progress in AI regulation.
Courtney Radsch
Speech speed
166 words per minute
Speech length
1868 words
Speech time
674 secs
Arguments
In the United States, the focus is on creating frameworks for governance and regulation of AI
Supporting facts:
- The White House Office of Science and Technology Policy is creating a blueprint for an AI Bill of Rights
- National AI Commission Act is focused on responsible AI and how the responsibility for regulation is distributed across agencies
- At least nine states have enacted AI legislation
Topics: Artificial Intelligence, Regulation, Governance, United States
AI governance is failing to grapple with market power
Supporting facts:
- Previous eras of tech governance like social media, search, app stores, online marketplaces, even standards, were all rolled out and remain controlled by a few monopolistic tech firms
- Nearly a thousand startup firms were bought by Meta, Google, and Amazon with no FTC intervention
Topics: Tech Governance, AI, Market Power
Current structure of markets needs reshaping to eliminate anti-competitive practices
Supporting facts:
- Dominance over key cloud computing platforms incentivizes firms like Microsoft, Amazon, and Google to self-preference their own AI models
- National governments do not have power over the capabilities and technologies created by massive firms
Topics: Market Structure, Anti-Competitive Practices
Technical standards set by communities have political and economic implications
Supporting facts:
- Examples of standards set include HTTP, HTTPS, and robots.txt
- Big tech companies are able to accumulate vast amounts of rights-protected data without compensation, affecting the economy of other industries
Topics: Technical standards setting, Political economy, Tech communities
Unfettered innovation is not necessarily good
Supporting facts:
- The way some copyright rules were implemented has killed off a large part of the news media industry
Topics: AI Regulation, Tech Companies
Data is not limited to individual user data; it also includes environmental and sensor data, as well as data about data (metadata).
Supporting facts:
- This type of data is incredibly valuable and dominated by larger firms
Topics: Data, AI, Information Management
Non-EU researchers struggle to provide oversight of powerful, wealthy companies due to underfunding
Supporting facts:
- Non-EU researchers rely on underfunded civil society and academia for oversight
Topics: Research, Funding, Oversight
Report
In the United States, there is a strong focus on developing frameworks for the governance and regulation of artificial intelligence (AI). The White House Office of Science and Technology Policy is taking steps to create a blueprint for an AI Bill of Rights, which aims to establish guidelines and protections for the responsible use of AI.
The National AI Commission Act is another initiative that seeks to promote responsible AI regulation across various government agencies. Furthermore, several states in the US have already implemented AI legislation to address the growing impact of AI in various sectors.
This reflects a recognition of the need to regulate and govern AI technologies to ensure ethical and responsible practices. However, some argue that the current AI governance efforts are not adequately addressing the issue of market power held by a small number of tech giants, namely Meta (formerly Facebook), Google, and Amazon.
These companies dominate the AI foundation models and utilize aggressive tactics to acquire and control independent AI firms. This dominance extends to key cloud computing platforms, leading to self-preference of their own AI models. Critics believe that the current market structure needs to be reshaped to eliminate anti-competitive practices and foster a more balanced and competitive environment.
Another important aspect highlighted in the discussion is the need for AI governance to address the individual components of AI. This includes factors like data, computational power, software applications, and cloud computing. Current debates on AI governance mostly focus on preventing harm and exploitation, but fail to consider these integral parts of AI systems.
The technical standards set by tech communities also come under scrutiny. While standards like HTTP, HTTPS, and robots.txt have been established, concerns have been raised regarding the accumulation of rights-protected data by big tech companies without appropriate compensation. These actions have significant political and economic implications, impacting other industries and limiting the overall fairness of the system.
It is argued that a more diverse representation in the tech community is needed to neutralize big tech’s unfair data advantage. The notion of unfettered innovation is challenged, as some argue that it may not necessarily lead to positive outcomes.
The regulation of AI should encompass a broader set of policy interventions that prioritize the public interest. A risk-based approach to regulation is deemed insufficient to address the complex issues associated with AI. The importance of data is emphasized, highlighting that it extends beyond individual user data, encompassing environmental and sensor data as well.
The control over and exploitation of such valuable data by larger firms requires careful consideration and regulation. A notable challenge highlighted is the lack of oversight of powerful companies, particularly for non-EU researchers due to underfunding. This raises concerns about the suppression or burying of risky research findings by companies conducting their own risk assessments.
It suggests the need for independent oversight and accountability mechanisms to ensure that substantial risks associated with AI are properly addressed. In conclusion, the governance and regulation of AI in the United States are gaining momentum, with initiatives such as the development of an AI Bill of Rights and state-level legislation.
However, there are concerns regarding the market power of tech giants, the need to focus on individual components of AI, the political and economic implications of technical standards, the lack of diversity in the tech community, and the challenges of overseeing powerful companies.
These issues highlight the complexity of developing effective AI governance frameworks that strike a balance between promoting innovation, protecting the public interest, and ensuring responsible and ethical AI practices.
Irakli Khodeli
Speech speed
138 words per minute
Speech length
1211 words
Speech time
527 secs
Arguments
UNESCO’s recommendation on AI ethics offers a critical guide for AI governance on a global level
Supporting facts:
- The recommendation was adopted two years ago by 193 member states of UNESCO
- The principles are grounded in values such as human rights, human dignity, diversity, environmental sustainability, peaceful societies
- The principles are operationalized into 11 different policy contexts
Topics: AI governance, ethics of AI, UNESCO, policy context, principles
AI governance needs to be grounded in gender and environmental sustainability
Supporting facts:
- UNESCO principles disassociate gender discussion from the general discussion on discrimination
- Strong emphasis on environmental sustainability recognizing that it’s often overlooked in the global discussions
Topics: AI governance, gender diversity, environmental sustainability
The global governance of AI is critical to avoid undermining other multilateral priorities.
Supporting facts:
- The risks posed by AI are significant – from benign to catastrophic, unintended to deliberate harms
- AI is closely related to pillars of the UN, such as sustainable development, human rights, gender equality, peace
Topics: AI governance, risks of AI, UN priorities, multilateral priorities
National governments play a significant role in AI governance
Supporting facts:
- Successful regulation happens at the national level
- It’s the national governments’ responsibility to set up institutions and laws for AI governance
Topics: AI governance, national governments, UNESCO
Successful regulation of any technology takes regulatory frameworks existing at different levels including global, regional, national, and sub-national.
Supporting facts:
- The conversation at the UN level right now is about what kind of regulatory mechanism to have
- The European Union, African Union, and ASEAN are examples of regional organizations playing a role in regulation
- At the national level, countries are indispensable in enforcement of different mechanisms
- Examples exist of legislative activism at the sub-national level in the United States and India
Topics: Global governance, Data flow, Internet regulation, Technology regulation, Artificial Intelligence
Bioethics provides a concrete example of how a multi-level governance model can function.
Supporting facts:
- UNESCO’s Universal Declaration on Bioethics and Human Rights and the Council of Europe’s Oviedo Convention are given as global and regional governance examples respectively
- These are translated into binding regulations at the country level
Topics: Bioethics, Multi-level governance model, International Law, Stem Cell Research
Report
The UNESCO recommendation on AI ethics has become a critical guide for global AI governance. It was adopted two years ago by 193 member states, demonstrating its widespread acceptance and importance. The principles put forward by UNESCO are firmly rooted in fundamental values such as human rights, human dignity, diversity, environmental sustainability, and peaceful societies.
These principles aim to provide a solid ethical foundation for the development and deployment of AI technologies. To ensure the practical application of these principles, UNESCO has operationalized them into 11 different policy contexts. This highlights the organization’s commitment to bridging the gap between theoretical principles and practical implementation.
By providing specific policy contexts, UNESCO offers concrete guidance for governments and other stakeholders to incorporate AI ethics into their decision-making processes. One of the key arguments put forth by UNESCO is that AI governance should be grounded in gender equality and environmental sustainability.
The organization believes that these two aspects are often overlooked in global discussions on AI ethics and governance. By highlighting the need to disassociate gender discussions from general discrimination discussions and emphasising environmental sustainability, UNESCO aims to bring attention to these crucial issues.
Furthermore, UNESCO emphasises the significant risks posed by AI, ranging from benign to catastrophic harms. The organization argues that these risks are closely intertwined with the pillars of the United Nations, such as sustainable development, human rights, gender equality, and peace.
Therefore, global governance of AI is deemed critical to avoid jeopardizing other multilateral priorities. While global governance is essential, UNESCO also recognises the significant role of national governments in AI governance. Successful regulation and implementation of AI policies ultimately occur at the national level.
It is the responsibility of national governments to establish the necessary institutions and laws to govern AI technologies effectively. This highlights the importance of collaboration between national governments and international organisations like UNESCO. In terms of regulation, it is evident that successful regulation of any technology, including AI, requires a multi-layered approach.
Regulatory frameworks must exist at different levels – global, regional, national, and even sub-national – to ensure comprehensive and effective governance. The ongoing conversation at the United Nations revolves around determining the appropriate regulatory mechanisms for AI. Regional organisations such as the European Union, African Union, and ASEAN already play significant roles in AI regulation.
Meanwhile, countries themselves are indispensable in enforcing regulatory mechanisms at the national level. To achieve coordination and compatibility between different layers of regulation, various stakeholders, including the UN, European Union, African Union, OECD, and ASEAN, are mentioned as necessary participants.
The creation of a global governance mechanism is advocated to ensure interoperability and coordination among different levels of regulation, ultimately facilitating effective AI governance on a global scale. Additionally, bioethics is highlighted as a concrete example of how a multi-level governance model can function successfully.
UNESCO’s Universal Declaration on Bioethics and Human Rights, along with the Council of Europe’s Oviedo Convention, serve as global and regional governance examples, respectively. These principles are then translated into binding regulations at the country level, further supporting the notion that a multi-level approach can be effective in governing complex issues like AI ethics.
In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding AI ethics in fundamental values, providing specific policy contexts, and emphasising the importance of gender equality and environmental sustainability, UNESCO aims to ensure that AI technologies are developed and deployed responsibly.
This requires collaboration between international organisations, national governments, and other stakeholders to establish regulatory frameworks at different levels. Ultimately, a global governance mechanism is advocated to coordinate and ensure compatibility between these levels of regulation.
Kyoko Yoshinaga
Speech speed
124 words per minute
Speech length
1186 words
Speech time
574 secs
Arguments
Japan adopts soft law approach to AI governance
Supporting facts:
- Japan introduced principles for AI R&D as a non-binding international framework
- Soft laws are used by Japanese companies to develop AI policies
Topics: AI policy in Japan, AI governance
Japan is amending sector-specific hard laws such as the Act on Improving Transparency and Fairness of Digital Platforms and the Financial Instruments and Exchange Act
Supporting facts:
- The Act requires businesses to disclose information about risks
Topics: Japanese AI laws, Transparency in AI
Industry should consider developing or using responsible AI as part of its corporate social responsibility or as part of its environmental, social, and governance (ESG) practices.
Supporting facts:
- Major companies in Japan, such as Sony, Fujitsu, NEC, and NTT Data, have already developed AI policies based on particular guidelines.
- Kyoko Yoshinaga was involved in a think tank developing AI systems and was in charge of AI risk management and compliance.
Topics: Artificial Intelligence, Corporate Social Responsibility, ESG
The creation of informed frameworks may encourage management to establish robust AI governance within their organization.
Supporting facts:
- Back in 2005, the Information Security Governance Policy Framework was created by the Ministry of Economy, Trade and Industry, which helped many companies build robust information security governance.
- This initiative can be applied in the context of AI governance.
Topics: AI Governance, Corporate Management
Each government should make AI regulations considering their own context.
Supporting facts:
- The level & threats of AI technology varies among countries
- Factors like corporate culture, safety, technology level should be accounted.
Topics: AI regulation, contextual factors, national approach
Personal data protection law is important for threats caused by AI like privacy intrusion.
Supporting facts:
- Threats caused by AI include surveillance and real-time biometric ID systems
Topics: AI, privacy, personal data protection law
Report
Japan takes a soft law approach to AI governance, using non-binding international frameworks and principles for AI R&D. These soft laws guide Japanese companies in developing their own AI policies, ensuring flexibility and adaptation. Additionally, Japan amends sector-specific hard laws to enhance transparency and fairness in the AI industry.
Companies like Sony and Fujitsu have already developed AI policies, focusing on responsible AI as part of corporate social responsibility and ESG practices. Publicly accessible AI policies are encouraged to promote transparency and accountability. Japan also draws on existing frameworks, such as the Information Security Governance Policy Framework, to establish robust AI governance.
Each government should tailor AI regulations to its own context, considering factors like corporate culture and technology level. Because AI risks vary widely and evolve quickly, rigid hard laws targeting them may prove problematic; personal data protection laws, meanwhile, are essential for addressing AI-driven privacy threats such as surveillance and real-time biometric identification.
Michael Karanicolas
Speech speed
159 words per minute
Speech length
2197 words
Speech time
828 secs
Arguments
Michael Karanicolas hosted a session on AI governance, aiming to foster a discussion on the development of new regulatory trends around the world, especially considering the influence of major regulatory blocks like China, the US, and the EU.
Supporting facts:
- The session was organized through a collaboration between the School of Law and the School of Engineering at UCLA, Yale Information Society Project, and the Georgetown Institute for Technology, Law and Policy.
- The aim of this session was to recognize the global influence of major regulatory blocks on AI development and to understand the tension between rulemaking within these power centres and AI impacts outside of this privileged minority.
Topics: AI governance, regulatory trends, China, US, EU
AI as a whole is going to be a very globalized form of human interaction
Supporting facts:
- AI consists of data resources, software programs, networks, and computing devices which are all part of globalized markets.
Topics: AI, Globalization, Internet Governance
The enforcement of IP rights online is vastly stronger than enforcement of privacy rights.
Supporting facts:
- This is a legacy of the early prioritization of harms that were viewed as the most pressing to address early on in regulatory efforts.
Topics: Internet regulation, IP rights, Privacy rights
Engagement, mutual learning and sharing best practices is beneficial in the field of AI regulation
Topics: AI Regulation, Engagement, Learning, Best Practices
Factoring local contexts into regulatory processes is important
Supporting facts:
- The problem of countries pasting an EU model or an American model into their local context
Topics: AI Regulation, Local Context
Cut and paste model of adopting international regulatory structures can be problematic
Supporting facts:
- Countries that copy the EU Act may lack the appropriate local regulatory structures to support it
- The imported model might not be fit for purpose in the local context
Topics: Regulatory structures, International Policy, AI Regulation
Report
During a session on AI governance, organized by the School of Law and the School of Engineering at UCLA, the Yale Information Society Project, and the Georgetown Institute for Technology, Law and Policy, Michael Karanicolas hosted a discussion on the development of new regulatory trends around the world.
The focus was on major regulatory blocks such as China, the US, and the EU, and their influence on AI development globally. The session aimed to explore the tension between the rule-making within these major regulatory blocks and the impacts of AI outside of this privileged minority.
It recognized their dominant position and sought to understand their global influence in shaping AI governance. The discussion highlighted the need to recognize the power dynamics at play and ensure that the regulatory decisions made within these blocks do not ignore the wider issues and potential negative ramifications for AI development on a global scale.
Michael Karanicolas encouraged interactive participation from the audience, inviting comments and engagement from all present. He stressed the importance of active participation over passive listening, fostering an environment that encouraged inclusive and thoughtful discussions. The speakers also delved into the globalised nature of AI and the challenges posed by national governments in regulating it.
As AI consists of data resources, software programs, networks, and computing devices, it operates within globalised markets. The internet has enabled the rapid distribution of applications and data resources, making it difficult for national governments to control and regulate the development of AI effectively.
The session emphasised that national governments alone cannot solve the challenges and regulations of AI, calling for partnerships and collaborative efforts to address the global nature of AI governance. Another topic of discussion revolved around the enforcement of intellectual property (IP) rights and privacy rights in the online world.
It was noted that the enforcement of IP rights online is significantly stronger compared to the enforcement of privacy rights. This discrepancy is seen as a result of the early prioritisation of addressing harms related to IP infringement, while privacy rights were not given the same level of attention in regulatory efforts.
The session highlighted the need to be deliberate and careful in selecting how harms are understood and prioritised in current regulatory efforts to ensure a balance between different aspects of AI governance. Engagement, mutual learning, and sharing of best practices were seen as crucial in the field of AI regulation.
The session emphasised the benefits of these collaborative approaches, which enable regulators to stay updated on the latest developments and challenges in AI governance. It also emphasised the importance of factoring local contexts into regulatory processes. A one-size-fits-all approach, where countries simply adopt an EU or American model without considering their unique circumstances, was deemed problematic.
It was concluded that effective AI regulation requires regulatory structures that are fit for purpose and sensitive to the local context. In conclusion, the session on AI governance hosted by Michael Karanicolas shed light on the influence of major regulatory blocks on AI development globally.
It emphasised the need for inclusive and participatory approaches in AI governance and highlighted the challenges posed by national governments in regulating AI. The session also underscored the need for a balanced approach to prioritise different aspects of AI governance, including intellectual property rights and privacy rights.
The importance of engagement, mutual learning, and the consideration of local contexts in regulatory processes were also highlighted.
Simon Chesterman
Speech speed
201 words per minute
Speech length
2278 words
Speech time
681 secs
Arguments
Every jurisdiction is wary both of under-regulating and over-regulating AI.
Supporting facts:
- Over-regulation, especially in smaller jurisdictions like Singapore, might cause tech companies to opt elsewhere for innovation.
- Under-regulation may expose citizens to unforeseen risk.
Topics: AI Regulation, Jurisdiction
A new set of rules is not necessary to regulate AI.
Supporting facts:
- The real challenge lies in the application of existing rules to new use cases of AI.
- Most laws can govern most AI use cases most of the time.
Topics: AI Regulation, Existing Laws
Human-centricity and transparency have been the main focus of Singapore's approach to AI governance.
Supporting facts:
- The majority of Singapore’s AI governance framework focuses on various use cases and determining what implications the regulations have in practice.
- Rather than creating new laws, Singapore has made adjustments to existing ones to accommodate AI, for example changing the Road Traffic Act to allow autonomous vehicles.
Topics: AI Governance, Human Centricity, Transparency
AI shouldn’t be biased and this notion is covered under anti-discrimination laws.
Supporting facts:
- Stating that AI shouldn't be biased is a repetition of anti-discrimination laws, which state that no entity, whether a person, company, or machine, should discriminate.
Topics: AI Ethics, Anti-discrimination Laws
Smaller jurisdictions face three major challenges concerning AI regulation – whether to regulate, when to regulate, and how to address the concentration of power in private hands.
Supporting facts:
- If a jurisdiction regulates AI too quickly, it could drive innovation elsewhere.
- The Collingridge Dilemma illustrates the tension between regulating early, before harms are clear, and delaying regulation, by which point the cost of regulating has risen.
- Most of AI research and development has moved from public institutions to private companies, impacting the ability of governments to constrain behavior.
Topics: AI regulation, small jurisdictions, Collingridge Dilemma, innovation shift
The regulations of AI are influenced by primarily western technology companies.
Supporting facts:
- Principles of AI regulation can be traced back to western technology companies.
- Public awareness and concern about the risks of AI were triggered by events like the Cambridge Analytica scandal.
Topics: AI regulations, western technology companies
Regulatory sandboxes in the fintech sector is a useful technique to foster innovation
Supporting facts:
- The Monetary Authority of Singapore has utilized regulatory sandboxes to reduce risks and enable new use-cases testing
Topics: Regulatory Sandbox, Fintech
Need for balance in regulation to avoid driving innovation elsewhere
Supporting facts:
- Singapore’s Personal Data Protection Act aims to balance users’ rights and the needs of businesses
Topics: Regulation, Innovation
Regulation needs to be at multiple levels – state regulations, self-regulations and industry standards
Topics: Regulation, State Regulation, Self-regulation, Industry Standards
Role of companies in AI research and regulation
Supporting facts:
- Companies are becoming more open to regulation for various reasons.
- Ryan Calo in 2011 suggested immunity from suit for companies to encourage AI research.
- Bigger corporations might use increased regulatory costs as barriers for their competitors.
Topics: AI, research, regulation, Ryan Calo, innovation
Report
The discussion revealed several key points regarding AI regulation and governance. Firstly, it is highlighted that jurisdictions are wary of both over-regulating and under-regulating AI. Over-regulation, especially in smaller jurisdictions like Singapore, might cause tech companies to opt for innovation elsewhere.
On the other hand, under-regulation may expose citizens to unforeseen risks. This underscores the need to find the right balance in AI regulation. Secondly, it is argued that a new set of rules is not necessary to regulate AI. It was suggested that existing laws are capable of effectively governing most AI use cases.
However, the real challenge lies in the application of these existing rules to new and emerging use cases of AI. Despite this challenge, the prevailing sentiment is positive towards the effectiveness of current regulations in governing AI. Thirdly, Singapore’s approach to AI governance is highlighted.
The focus of Singapore's AI governance framework is on human-centricity and transparency. Rather than creating new laws, Singapore has made adjustments to existing ones to accommodate AI, such as changing the Road Traffic Act to allow for the use of autonomous vehicles.
This approach reflects Singapore's commitment to ensuring human-centricity and transparency in AI governance. Additionally, it is mentioned that the notion of AI not being biased is covered under anti-discrimination laws. This highlights the importance of ensuring that AI systems are not prejudiced or discriminatory, in alignment with existing laws.
The need for companies to police themselves regarding AI standards was also emphasised. Singapore has released a tool called AI Verify, which assists organizations in self-assessing their AI systems and evaluating whether further improvements are needed. This self-regulation approach is viewed positively, highlighting the responsibility of companies in ensuring ethical and compliant AI practices.
Furthermore, it was acknowledged that smaller jurisdictions face challenges when it comes to AI regulation, including deciding whether and when to regulate and addressing the concentration of power in private hands. These issues reflect the delicate balance that smaller jurisdictions must navigate to regulate AI effectively.
The influence of Western technology companies on AI regulations is another notable observation. The principles of AI regulation can be traced back to these companies, and public awareness and concern about the risks of AI have been triggered by events like the Cambridge Analytica scandal.
This implies that the regulations of AI are being influenced by the practices and actions of primarily Western technology companies. Regulatory sandboxes, particularly in the fintech sector, are highlighted as a useful technique for fostering innovation. The Monetary Authority of Singapore has utilized regulatory sandboxes to reduce risks and enable testing of new use cases for AI in the fintech sector.
In terms of balancing regulation and innovation, a careful approach was emphasized. The Personal Data Protection Act in Singapore aims to strike a balance between users' rights and the needs of businesses. This underscores the importance of avoiding excessive regulation that may drive innovation elsewhere.
Furthermore, the responsibility for the output generated by AI systems is mentioned. It is emphasized that accountability must be taken for the outcomes and impact of AI systems. This aligns with the broader goal of achieving peace, justice, and strong institutions.
In conclusion, the discussion highlighted various aspects of AI regulation and governance. The need to strike a balance between over-regulation and under-regulation, the effectiveness of existing laws in governing AI, and the importance of human-centricity and transparency in AI governance are key points.
It is also noted that smaller jurisdictions face challenges in AI regulation, and the influence of Western technology companies is evident. Regulatory sandboxes are seen as a useful tool, and the responsibility for the output of AI systems is emphasized.
Overall, the analysis provides valuable insights into the complex landscape of AI regulation and governance.
Tomiwa Ilori
Speech speed
141 words per minute
Speech length
743 words
Speech time
316 secs
Arguments
AI governance in Africa is in its infancy
Supporting facts:
- There are at least 466 AI policy and governance items referred to in the African region
- There is no major treaty, law or standard when it comes to AI governance in Africa
- Countries like Mauritius, Kenya and Egypt already have a national AI policy
Topics: Artificial intelligence, Governance, Africa
Interest in AI governance is growing among various stakeholders in Africa
Supporting facts:
- Artificial intelligence governance initiatives are led by government, multilateral organizations, public funded research, academia and the private sector
- The Kenyan government has signified interest to now pass a law with respect to regulating AI systems
Topics: Artificial intelligence, Governance, Africa
The race towards AI governance will favor the bold, especially from an African perspective
Supporting facts:
- The region often imports standards and is usually positioned as a standards taker rather than a designer of standards for itself
- Smaller nations often end up as pawns or testing grounds for bad governance attempts
Topics: AI governance, African perspective, Standardization
Report
AI governance in Africa is still in its infancy, with at least 466 policy and governance items referred to in the African region. However, there is currently no major treaty, law, or standard specifically addressing AI governance in Africa. Despite this, some countries in Africa have already taken steps to develop their own national AI policies.
For instance, countries like Mauritius, Kenya, and Egypt have established their own AI policies, indicating the growing interest in AI governance among African nations. Interest in AI governance is not limited to governments alone. Various stakeholders in Africa, including multilateral organizations, publicly funded research institutions, academia, and the private sector, are increasingly recognizing the importance of AI governance.
This indicates a collective recognition of the need to regulate and guide the development and use of artificial intelligence within the region. In fact, the Kenyan government has expressed its intention to pass a law aimed at regulating AI systems, further demonstrating the commitment towards responsible AI governance in Africa.
However, the region often relies on importing standards rather than actively participating in the design and development of these standards. This makes African nations more vulnerable and susceptible to becoming pawns or testing grounds for potentially inadequate AI governance attempts.
This highlights the need for African nations to actively engage in the process of shaping AI standards rather than merely adapting to standards set by external entities. On a positive note, smaller nations in Africa have the potential to make a significant impact by strategically collaborating with like-minded initiatives.
International politics often stifle the boldness of smaller nations, but when it comes to AI governance, smaller nations can leverage partnerships and collaborations to amplify their voices and push for responsible AI practices. By working together with others who share similar goals and intended results, the journey towards achieving effective AI governance in Africa could be expedited.
In conclusion, AI governance in Africa is still in its early stages, but the interest and efforts to establish responsible AI policies and regulations are steadily growing. While there is currently no major treaty or law specifically addressing AI governance in Africa, countries like Mauritius, Kenya, and Egypt have already taken steps to develop their own national AI policies.
Moreover, various stakeholders, including governments, multilateral organizations, academia, and the private sector, are recognizing the significance of AI governance in Africa. Despite the challenges that smaller nations in Africa may face, strategic collaborations and partnerships can empower them to actively shape the future of AI governance in the region.