Searching for Standards: The Global Competition to Govern AI | IGF 2023
Event report
Speakers and Moderators
Speakers:
- Kyoko Yoshinaga, Civil Society, Asia-Pacific Group
- Tomiwa Ilori, Civil Society, African Group
- Simon Chesterman, Government, Asia-Pacific Group
- Carlos Affonso Souza, Civil Society, Latin American and Caribbean Group (GRULAC)
- Gabriela Ramos, Intergovernmental Organization
- Courtney Radsch, Civil Society, Western European and Others Group (WEOG)
Moderators:
- Michael Karanicolas, Civil Society, Western European and Others Group (WEOG)
Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.
Session report
Michael Karanicolas
During a session on AI governance, organized by the School of Law and the School of Engineering at UCLA, the Yale Information Society Project, and the Georgetown Institute for Technology, Law and Policy, Michael Karanicolas hosted a discussion on the development of new regulatory trends around the world. The focus was on major regulatory blocs such as China, the US, and the EU, and their influence on AI development globally.
The session aimed to explore the tension between the rule-making within these major regulatory blocs and the impacts of AI outside of this privileged minority. It recognized their dominant position and sought to understand their global influence in shaping AI governance. The discussion highlighted the need to recognize the power dynamics at play and to ensure that regulatory decisions made within these blocs do not ignore wider issues and potential negative ramifications for AI development on a global scale.
Michael Karanicolas encouraged interactive participation from the audience, inviting comments and engagement from all present. He stressed the importance of active participation over passive listening, fostering an environment that encouraged inclusive and thoughtful discussions.
The speakers also delved into the globalised nature of AI and the challenges posed by national governments in regulating it. As AI consists of data resources, software programs, networks, and computing devices, it operates within globalised markets. The internet has enabled the rapid distribution of applications and data resources, making it difficult for national governments to control and regulate the development of AI effectively. The session emphasised that national governments alone cannot solve the challenges and regulations of AI, calling for partnerships and collaborative efforts to address the global nature of AI governance.
Another topic of discussion revolved around the enforcement of intellectual property (IP) rights and privacy rights in the online world. It was noted that the enforcement of IP rights online is significantly stronger compared to the enforcement of privacy rights. This discrepancy is seen as a result of the early prioritisation of addressing harms related to IP infringement, while privacy rights were not given the same level of attention in regulatory efforts. The session highlighted the need to be deliberate and careful in selecting how harms are understood and prioritised in current regulatory efforts to ensure a balance between different aspects of AI governance.
Engagement, mutual learning, and sharing of best practices were seen as crucial in the field of AI regulation. The session emphasised the benefits of these collaborative approaches, which enable regulators to stay updated on the latest developments and challenges in AI governance. It also emphasised the importance of factoring local contexts into regulatory processes. A one-size-fits-all approach, where countries simply adopt an EU or American model without considering their unique circumstances, was deemed problematic. It was concluded that for effective AI regulation, it is essential to develop regulatory structures that fit the purpose and are sensitive to the local context.
In conclusion, the session on AI governance hosted by Michael Karanicolas shed light on the influence of major regulatory blocs on AI development globally. It emphasised the need for inclusive and participatory approaches in AI governance and highlighted the challenges national governments face in regulating AI. The session also underscored the need for a balanced approach to prioritising different aspects of AI governance, including intellectual property rights and privacy rights. The importance of engagement, mutual learning, and the consideration of local contexts in regulatory processes was also highlighted.
Tomiwa Ilori
AI governance in Africa is still in its infancy, with at least 466 AI-related policy and governance instruments identified across the region. However, there is currently no major treaty, law, or standard specifically addressing AI governance in Africa. Despite this, some African countries have already taken steps to develop their own national AI policies. For instance, Mauritius, Kenya, and Egypt have established national AI policies, indicating growing interest in AI governance among African nations.
Interest in AI governance is not limited to governments alone. Various stakeholders in Africa, including multilateral organizations, publicly funded research institutions, academia, and the private sector, are increasingly recognizing the importance of AI governance. This indicates a collective recognition of the need to regulate and guide the development and use of artificial intelligence within the region. In fact, the Kenyan government has expressed its intention to pass a law aimed at regulating AI systems, further demonstrating the commitment towards responsible AI governance in Africa.
However, the region often relies on importing standards rather than actively participating in the design and development of these standards. This makes African nations more vulnerable and susceptible to becoming pawns or testing grounds for potentially inadequate AI governance attempts. This highlights the need for African nations to actively engage in the process of shaping AI standards rather than merely adapting to standards set by external entities.
On a positive note, smaller nations in Africa have the potential to make a significant impact by strategically collaborating with like-minded initiatives. International politics often stifle the boldness of smaller nations, but when it comes to AI governance, smaller nations can leverage partnerships and collaborations to amplify their voices and push for responsible AI practices. By working together with others who share similar goals and intended results, the journey towards achieving effective AI governance in Africa could be expedited.
In conclusion, AI governance in Africa is still in its early stages, but the interest and efforts to establish responsible AI policies and regulations are steadily growing. While there is currently no major treaty or law specifically addressing AI governance in Africa, countries like Mauritius, Kenya, and Egypt have already taken steps to develop their own national AI policies. Moreover, various stakeholders, including governments, multilateral organizations, academia, and the private sector, are recognizing the significance of AI governance in Africa. Despite the challenges that smaller nations in Africa may face, strategic collaborations and partnerships can empower them to actively shape the future of AI governance in the region.
Carlos Affonso Souza
In Latin America, several countries, including Argentina, Brazil, Colombia, Peru, and Mexico, are actively engaging in discussions and actions related to the governance and regulation of Artificial Intelligence (AI). This reflects a growing recognition of the need to address the ethical implications and potential risks associated with AI technology. The process of implementing AI regulation typically involves three stages: the establishment of broad ethical principles, the development of national strategies, and the enactment of hard laws.
However, different countries in Latin America are at varying stages of this regulatory process, which is influenced by their unique priorities, approaches, and long-term visions. Each country has its specific perspective on how AI will drive economic, political, and cultural changes within society. Accordingly, they are implementing national strategies and specific regulations through diverse mechanisms.
One of the challenges in regulating AI in the majority world lies in the nature of the technology itself. AI can often be invisible and intangible, making it difficult to grasp and regulate effectively. This creates a need for countries in the majority world to develop their own regulations and governance frameworks for AI.
Moreover, these countries primarily serve as users of AI applications rather than developers, making it even more crucial to establish regulations that address not only the creation but also the use of AI applications. This highlights the importance of ensuring that AI technologies are used ethically and responsibly, considering the potential impact on individuals and society.
Drawing from the experience of internet regulation, which has dealt with issues such as copyright, freedom of expression, and personal data protection, can provide valuable insights when considering AI regulation. The development of personal data protection laws and decisions on platform liability are also likely to significantly influence the shape of AI regulation.
Understanding the different types of AI and the nature of the damage they can cause is essential for effective regulation. Souza argued that AI should be viewed neither as fully autonomous nor as a dumb tool, but as a technology that can both cause harm and generate profit. Algorithmic decisions are not made autonomously or unknowingly; rather, they reflect biases in design or fulfill their intended functions.
Countries' motivations for regulating AI vary. Some view it as a status symbol of being future-oriented, while others believe it is important to learn from regulation efforts abroad and develop innovative solutions tailored to their own contexts. There is a tendency to adopt European solutions for AI regulation, even if they may not function optimally. This adoption is driven by the desire to demonstrate that efforts are being made towards regulating AI.
In conclusion, Latin American countries are actively engaging in discussions and actions to regulate AI, recognizing the need to address its ethical implications and potential risks. The implementation of AI regulation involves multiple stages, and countries are at different phases of this process. Challenges arise due to the intangible nature of AI, which requires countries to create their own regulations. The use of AI applications, as well as the type and nature of damages caused by AI, are important considerations for regulation. The experience of internet regulation can provide useful insights for AI regulation. The motivations for regulating AI vary among countries, and there is a tendency to adopt European solutions. Despite the shortcomings of these solutions, countries still adopt them to show progress in AI regulation.
Irakli Khodeli
The UNESCO recommendation on AI ethics has become a critical guide for global AI governance. It was adopted two years ago by 193 member states, demonstrating its widespread acceptance and importance. The principles put forward by UNESCO are firmly rooted in fundamental values such as human rights, human dignity, diversity, environmental sustainability, and peaceful societies. These principles aim to provide a solid ethical foundation for the development and deployment of AI technologies.
To ensure the practical application of these principles, UNESCO has operationalized them into 11 different policy contexts. This highlights the organization's commitment to bridging the gap between theoretical principles and practical implementation. By providing specific policy contexts, UNESCO offers concrete guidance for governments and other stakeholders to incorporate AI ethics into their decision-making processes.
One of the key arguments put forth by UNESCO is that AI governance should be grounded in gender equality and environmental sustainability. The organization believes that these two aspects are often overlooked in global discussions on AI ethics and governance. By highlighting the need to disassociate gender discussions from general discrimination discussions and emphasising environmental sustainability, UNESCO aims to bring attention to these crucial issues.
Furthermore, UNESCO emphasises the significant risks posed by AI, with potential harms ranging from the benign to the catastrophic. The organization argues that these risks are closely intertwined with the pillars of the United Nations, such as sustainable development, human rights, gender equality, and peace. Therefore, global governance of AI is deemed critical to avoid jeopardizing other multilateral priorities.
While global governance is essential, UNESCO also recognises the significant role of national governments in AI governance. Successful regulation and implementation of AI policies ultimately occur at the national level. It is the responsibility of national governments to establish the necessary institutions and laws to govern AI technologies effectively. This highlights the importance of collaboration between national governments and international organisations like UNESCO.
In terms of regulation, it is evident that successful regulation of any technology, including AI, requires a multi-layered approach. Regulatory frameworks must exist at different levels – global, regional, national, and even sub-national – to ensure comprehensive and effective governance. The ongoing conversation at the United Nations revolves around determining the appropriate regulatory mechanisms for AI. Regional organisations such as the European Union, African Union, and ASEAN already play significant roles in AI regulation. Meanwhile, countries themselves are indispensable in enforcing regulatory mechanisms at the national level.
To achieve coordination and compatibility between different layers of regulation, various stakeholders, including the UN, European Union, African Union, OECD, and ASEAN, are mentioned as necessary participants. The creation of a global governance mechanism is advocated to ensure interoperability and coordination among different levels of regulation, ultimately facilitating effective AI governance on a global scale.
Additionally, bioethics is highlighted as a concrete example of how a multi-level governance model can function successfully. UNESCO's Universal Declaration on Bioethics and Human Rights, along with the Council of Europe's Oviedo Convention, serve as global and regional governance examples, respectively. These principles are then translated into binding regulations at the country level, further supporting the notion that a multi-level approach can be effective in governing complex issues like AI ethics.
In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding AI ethics in fundamental values, providing specific policy contexts, and emphasising the importance of gender equality and environmental sustainability, UNESCO aims to ensure that AI technologies are developed and deployed responsibly. This requires collaboration between international organisations, national governments, and other stakeholders to establish regulatory frameworks at different levels. Ultimately, a global governance mechanism is advocated to coordinate and ensure compatibility between these levels of regulation.
Kyoko Yoshinaga
Japan takes a soft law approach to AI governance, using non-binding international frameworks and principles for AI R&D. These soft laws guide Japanese companies in developing their own AI policies, ensuring flexibility and adaptation. Additionally, Japan amends sector-specific hard laws to enhance transparency and fairness in the AI industry.
Companies like Sony and Fujitsu have already developed AI policies, focusing on responsible AI as part of corporate social responsibility and ESG practices. Publicly accessible AI policies are encouraged to promote transparency and accountability. Japan also draws on existing frameworks, such as the Information Security Governance Policy Framework, to establish robust AI governance.
Yoshinaga argued that each government should tailor AI regulations to its own context, considering factors like corporate culture and technology level. Hard laws targeting AI risks may be dangerous given how varied those risks are, while personal data protection laws remain essential for addressing privacy concerns with AI.
Simon Chesterman
Simon Chesterman raised several key points regarding AI regulation and governance. Firstly, he highlighted that jurisdictions are wary of both over-regulating and under-regulating AI. Over-regulation, especially in smaller jurisdictions like Singapore, might cause tech companies to take their innovation elsewhere, while under-regulation may expose citizens to unforeseen risks. This underscores the need to find the right balance in AI regulation.
Secondly, he argued that a new set of rules is not necessary to regulate AI, since existing laws are capable of effectively governing most AI use cases. The real challenge lies in applying these existing rules to new and emerging uses of AI. Despite this challenge, the prevailing sentiment was positive about the effectiveness of current regulations in governing AI.
Thirdly, Singapore's approach to AI governance is highlighted. The focus of Singapore's AI governance framework is on human-centrality and transparency. Rather than creating new laws, Singapore has made adjustments to existing ones to accommodate AI, such as changing the Road Traffic Act to allow for the use of autonomous vehicles. This approach reflects Singapore's commitment to ensuring human-centrality and transparency in AI governance.
Additionally, he noted that requirements for AI systems to be free of bias are already covered under anti-discrimination laws. This highlights the importance of ensuring that AI systems are not prejudiced or discriminatory, in alignment with existing legislation.
Chesterman also emphasised the need for companies to police themselves on AI standards. Singapore has released a tool called AI Verify, which assists organizations in self-assessing their AI systems and evaluating whether further improvements are needed. This self-regulation approach is viewed positively, highlighting the responsibility of companies in ensuring ethical and compliant AI practices.
Furthermore, the text acknowledges that smaller jurisdictions face challenges when it comes to AI regulation. These challenges include deciding when and how to regulate and addressing the concentration of power in private hands. These issues reflect the delicate balance that smaller jurisdictions must navigate to effectively regulate AI.
The influence of Western technology companies on AI regulations is another notable observation. The principles of AI regulation can be traced back to these companies, and public awareness and concern about the risks of AI have been triggered by events like the Cambridge Analytica scandal. This implies that the regulations of AI are being influenced by the practices and actions of primarily Western technology companies.
Regulatory sandboxes, particularly in the fintech sector, are highlighted as a useful technique for fostering innovation. The Monetary Authority of Singapore has utilized regulatory sandboxes to reduce risks and enable testing of new use cases for AI in the fintech sector.
In terms of balancing regulation and innovation, Chesterman emphasised the need for a careful approach. The Personal Data Protection Act in Singapore aims to strike a balance between users' rights and the needs of businesses. This underscores the importance of avoiding excessive regulation that may drive innovation elsewhere.
Furthermore, the responsibility for the output generated by AI systems is mentioned. It is emphasized that accountability must be taken for the outcomes and impact of AI systems. This aligns with the broader goal of achieving peace, justice, and strong institutions.
In conclusion, Chesterman's remarks covered various aspects of AI regulation and governance. The need to strike a balance between over-regulation and under-regulation, the effectiveness of existing laws in governing AI, and the importance of human-centrality and transparency in AI governance were key points. He also noted that smaller jurisdictions face particular challenges in AI regulation, and that the influence of Western technology companies is evident. Regulatory sandboxes were presented as a useful tool, and accountability for the output of AI systems was emphasised. Overall, his remarks offered valuable insights into the complex landscape of AI regulation and governance.
Audience
During the discussion on regulating artificial intelligence (AI), several key challenges and considerations were brought forward. One of the main challenges highlighted was the need to strike a balance in regulating generative AI, which has caused disruptive effects. This task proves to be challenging due to the complex nature of generative AI and its potential impact on multiple sectors. It was noted that the national AI policy of Pakistan, for example, is still in the draft stage and is open for input from various stakeholders.
Another crucial consideration is the measurement of risks associated with AI usage. The speaker from the Australian National Science Agency emphasized the importance of assessing the risks and trade-offs involved in AI applications. There was a call for an international research alliance to explore how to effectively measure these risks. This approach aims to guide policymakers and regulators in making informed decisions about the use of AI.
The discussion also explored the need for context-based trade-offs in AI usage. One example was facial recognition for blind people: blind individuals want access to the same facial recognition capability that sighted people have, yet legislation aimed at the risks of facial recognition can inhibit the development of such assistive applications. This highlights the need to carefully weigh the trade-offs and context-specific implications of AI applications.
The global nature of AI was another topic of concern. It was pointed out that AI applications and data can easily be distributed globally through the internet, making it difficult for national governments alone to regulate AI effectively. This observation indicates the necessity of international collaboration and partnerships in regulating AI in order to mitigate any potential risks and ensure responsible use.
The impact of jurisdiction size on regulation was also discussed. The example of Singapore's small jurisdiction size potentially driving businesses away due to regulations was mentioned. However, it was suggested that Singapore's successful publicly-owned companies could serve as testing grounds for regulation implementation. This would allow for experimentation and learning about what works and what consequences may arise.
Data governance and standard-setting bodies were also acknowledged as influential in AI regulation. Trade associations and private sector standard-setting bodies were highlighted for their significant role. However, it was noted that these structures can sometimes work at cross-purposes and compete, potentially creating conflicts. This calls for a careful consideration of the interaction between different bodies involved in norm-setting processes.
The issue of data granularity in the global South was raised, highlighting a potential risk for AI. It was noted that the global South might not have the same fine granularity of data available as the global North, which may lead to risks in the application of AI. This disparity emphasizes the need to address power dynamics between the global North and South to ensure a fair and equitable AI practice.
Several arguments were made regarding the role of the private sector in AI regulation and standard-setting. The host called for private sector participation in the discussion, recognizing the importance of their involvement. However, concerns were expressed about potential discrimination in AI systems that learn from massive data. The shift in AI learning from algorithms in the past to massive data learning today raises concerns about potential biases and discrimination against groups that do not produce a lot of data for AI to learn from.
The speakers also emphasized the importance of multi-stakeholder engagement in regulation and standard-setting. Meaningful multi-stakeholder processes were deemed necessary for crafting effective standards and regulations for AI. This approach promotes inclusivity and ensures that various perspectives and interests are considered.
Current models of AI regulation were criticized for being inadequate, with companies sorting themselves into risk levels without comprehensive assessment. Such models were seen as box-ticking exercises rather than effective regulation measures. This critique underscores the need for improved risk assessment approaches that take into account the nuanced and evolving nature of AI technologies.
A rights-based approach focused on property rights was argued to be crucial in AI regulation. New technologies, such as AI, have created new forms of property, raising discussions around ownership and control of data. Strict definitions of digital property rights were cautioned against, as they might stifle innovation. Striking a balance between protecting property rights and fostering a dynamic AI ecosystem is essential.
The importance of understanding and measuring the impact of AI within different contexts was highlighted. The need to define ways to measure AI compliance, performance, and trust in AI systems was emphasized. It was suggested that pre-normative standards could provide a helpful framework but acknowledged the lengthy time frame required for their development and establishment as standards.
Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of resources, case studies, and knowledge. The mutual benefit between academia and industry in research and development efforts was acknowledged, emphasizing the significance of partnerships for effective regulation and innovation.
In conclusion, the discussion on regulating AI delved into various challenges and considerations. Striking a balance in the regulation of generative AI, measuring risks associated with AI usage, addressing context-specific trade-offs, and promoting multi-stakeholder engagement were key points raised. The impact of data granularity, power dynamics, and the role of the private sector were also highlighted. Observations were made regarding the inadequacy of current AI regulation models, the need for a rights-based approach focused on property rights, and the importance of understanding and measuring the impact of AI within different contexts. Collaboration with industry was emphasized as crucial, and various arguments and evidence were presented throughout the discussion to support these points.
Courtney Radsch
In the United States, there is a strong focus on developing frameworks for the governance and regulation of artificial intelligence (AI). The White House Office of Science and Technology Policy is taking steps to create a blueprint for an AI Bill of Rights, which aims to establish guidelines and protections for the responsible use of AI. The National AI Commission Act is another initiative that seeks to promote responsible AI regulation across various government agencies.
Furthermore, several states in the US have already implemented AI legislation to address the growing impact of AI in various sectors. This reflects a recognition of the need to regulate and govern AI technologies to ensure ethical and responsible practices.
However, some argue that the current AI governance efforts are not adequately addressing the issue of market power held by a small number of tech giants, namely Meta (formerly Facebook), Google, and Amazon. These companies dominate the AI foundation models and utilize aggressive tactics to acquire and control independent AI firms. This dominance extends to key cloud computing platforms, leading to self-preference of their own AI models. Critics believe that the current market structure needs to be reshaped to eliminate anti-competitive practices and foster a more balanced and competitive environment.
Another important aspect highlighted in the discussion is the need for AI governance to address the individual components of AI. This includes factors like data, computational power, software applications, and cloud computing. Current debates on AI governance mostly focus on preventing harm and exploitation, but fail to consider these integral parts of AI systems.
The technical standards set by tech communities also come under scrutiny. While standards like HTTP, HTTPS, and robots.txt have been established, concerns have been raised regarding the accumulation of rights-protected data by big tech companies without appropriate compensation. These actions have significant political and economic implications, impacting other industries and limiting the overall fairness of the system. It is argued that more diverse representation in the tech community is needed to neutralize big tech's unfair data advantage.
The notion of unfettered innovation is challenged, as some argue that it may not necessarily lead to positive outcomes. The regulation of AI should encompass a broader set of policy interventions that prioritize the public interest. A risk-based approach to regulation is deemed insufficient to address the complex issues associated with AI.
The importance of data is emphasized, highlighting that it extends beyond individual user data, encompassing environmental and sensor data as well. The control over and exploitation of such valuable data by larger firms requires careful consideration and regulation.
A notable challenge highlighted is the lack of oversight of powerful companies, particularly for non-EU researchers due to underfunding. This raises concerns about the suppression or burying of risky research findings by companies conducting their own risk assessments. It suggests the need for independent oversight and accountability mechanisms to ensure that substantial risks associated with AI are properly addressed.
In conclusion, the governance and regulation of AI in the United States are gaining momentum, with initiatives such as the development of an AI Bill of Rights and state-level legislation. However, there are concerns regarding the market power of tech giants, the need to focus on individual components of AI, the political and economic implications of technical standards, the lack of diversity in the tech community, and the challenges of overseeing powerful companies. These issues highlight the complexity of developing effective AI governance frameworks that strike a balance between promoting innovation, protecting the public interest, and ensuring responsible and ethical AI practices.
Speakers
Audience
Speech speed
172 words per minute
Speech length
2755 words
Speech time
964 secs
Arguments
Striking a balance in regulation over generative AI is challenging
Supporting facts:
- The speaker is heading a provincial government entity in Pakistan involved in policymaking
- The national AI policy of Pakistan is in draft stage and is receiving input from stakeholders
- Generative AI has caused disruptive effects
Topics: Generative AI, Regulation, National AI Policy, Ethics
Singapore's small jurisdiction size can hinder the implementation of regulations
Supporting facts:
- Singapore's small jurisdiction could potentially drive businesses away due to regulations
Topics: Regulation, Jurisdiction, Business
Structural framing of the interaction of different bodies in norm-setting is helpful
Supporting facts:
- Trade associations and private sector standard-setting bodies can be highly influential
Topics: data governance, innovation policies
Different levels of regulation can work against each other, creating conflicts
Supporting facts:
- These structures can work at cross-purposes and compete.
Topics: regulation, trade associations
Data granularity in the global South could pose risks for AI
Supporting facts:
- In the global South, there might not be the same fine granularity of data that's available in the global North, which may produce risks through AI.
Topics: data granularity, AI risks, global South
There is a need to address the power dynamics between the global North and South
Supporting facts:
- Western companies produce a lot of data across the world
Topics: Power dynamics, global North and South
The role of the private sector in AI regulation and standard-setting is important
Supporting facts:
- The host Michael Karanicolas called for private sector participation in the discussion
Topics: regulation, AI, private sector
AI systems learning from massive data may create discrimination
Supporting facts:
- Guo Wu noted a shift in AI learning, from algorithms in 1984 to massive data learning today
- He expressed concerns about potential discrimination for groups that do not produce a lot of data for AI to learn from
Topics: AI, data bias, discrimination
Risk assessment approach in AI regulation is ineffective
Supporting facts:
- The EU and US bills ask for self-assessments
- The risk levels are unclear as they are sorted before the technology is fully realized
- The approach assumes a government can forecast risks before they occur
Topics: AI Regulation, Risk Assessment
It's not unreasonable to ask platforms to assess risks
Supporting facts:
- Companies have a lot of experience with risk modeling.
- Platforms have knowledge about their impacts on specific user groups.
- The DSA allows researchers access to data produced by platforms regarding risk.
Topics: Risk Modeling, Data Management, Digital Services Act (DSA)
Need to define ways to measure AI compliance and performance
Supporting facts:
- The audience member mentions the need to understand how to measure various factors like compliance, performance, and trust in AI systems.
- The concept of pre-normative standards was brought up, which can take from 2 to 20 years to develop before being established as a standard.
Topics: AI regulation, AI measurement
Collaboration with industry is essential
Supporting facts:
- The speaker expressed the need to collaborate with industry, referring to their ability to provide resources, case studies, and knowledge.
- The statement was made within the context of mutual benefit - that these organizations can help industry, and vice versa.
Topics: Industry collaboration, Resource sharing, Knowledge sharing
The need for understanding and measuring the impact of AI within different contexts
Supporting facts:
- The speaker mentioned that every context is different, so the impact and effectiveness of AI need to be measured accordingly.
- A socio-technical test bed was brought up as a possible tool for measuring the trustworthy outputs of AI.
Topics: AI Impact, Contextual measurement
Report
During the discussion on regulating artificial intelligence (AI), several key challenges and considerations were brought forward. One of the main challenges highlighted was the need to strike a balance in regulating generative AI, which has caused disruptive effects. This task proves to be challenging due to the complex nature of generative AI and its potential impact on multiple sectors.
It was noted that the national AI policy of Pakistan, for example, is still in the draft stage and is open for input from various stakeholders. Another crucial consideration is the measurement of risks associated with AI usage. The speaker from the Australian National Science Agency emphasized the importance of assessing the risks and trade-offs involved in AI applications.
There was a call for an international research alliance to explore how to effectively measure these risks. This approach aims to guide policymakers and regulators in making informed decisions about the use of AI. The discussion also explored the need for context-based trade-offs in AI usage.
One example provided was the case of face recognition for blind people. While blind individuals desire the same level of facial recognition ability as sighted individuals, it was mentioned that legislation inhibits the development of face recognition for blind people because of the associated risks.
This highlights the need to carefully consider the trade-offs and context-specific implications of AI applications. The global nature of AI was another topic of concern. It was pointed out that AI applications and data can easily be distributed globally through the internet, making it difficult for national governments alone to regulate AI effectively.
This observation indicates the necessity of international collaboration and partnerships in regulating AI in order to mitigate any potential risks and ensure responsible use. The impact of jurisdiction size on regulation was also discussed. The example of Singapore's small jurisdiction size potentially driving businesses away due to regulations was mentioned.
However, it was suggested that Singapore's successful publicly-owned companies could serve as testing grounds for regulation implementation. This would allow for experimentation and learning about what works and what consequences may arise. Data governance and standard-setting bodies were also acknowledged as influential in AI regulation.
Trade associations and private sector standard-setting bodies were highlighted for their significant role. However, it was noted that these structures can sometimes work at cross-purposes and compete, potentially creating conflicts. This calls for a careful consideration of the interaction between different bodies involved in norm-setting processes.
The issue of data granularity in the global South was raised, highlighting a potential risk for AI. It was noted that the global South might not have the same fine granularity of data available as the global North, which may lead to risks in the application of AI.
This disparity emphasizes the need to address power dynamics between the global North and South to ensure fair and equitable AI practices. Several arguments were made regarding the role of the private sector in AI regulation and standard-setting. The host called for private sector participation in the discussion, recognizing the importance of its involvement.
However, concerns were expressed about potential discrimination in AI systems that learn from massive data. The shift in AI learning from algorithms in the past to massive data learning today raises concerns about potential biases and discrimination against groups that do not produce a lot of data for AI to learn from.
The speakers also emphasized the importance of multi-stakeholder engagement in regulation and standard-setting. Meaningful multi-stakeholder processes were deemed necessary for crafting effective standards and regulations for AI. This approach promotes inclusivity and ensures that various perspectives and interests are considered.
Current models of AI regulation were criticized for being inadequate, with companies sorting themselves into risk levels without comprehensive assessment. Such models were seen as box-ticking exercises rather than effective regulation measures. This critique underscores the need for improved risk assessment approaches that take into account the nuanced and evolving nature of AI technologies.
A rights-based approach focused on property rights was argued to be crucial in AI regulation. New technologies, such as AI, have created new forms of property, raising discussions around ownership and control of data. Strict definitions of digital property rights were cautioned against, as they might stifle innovation.
Striking a balance between protecting property rights and fostering a dynamic AI ecosystem is essential. The importance of understanding and measuring the impact of AI within different contexts was highlighted. The need to define ways to measure AI compliance, performance, and trust in AI systems was emphasized.
It was suggested that pre-normative standards could provide a helpful framework, although the lengthy time frame required for their development and establishment as standards was acknowledged. Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of resources, case studies, and knowledge.
The mutual benefit between academia and industry in research and development efforts was acknowledged, emphasizing the significance of partnerships for effective regulation and innovation. In conclusion, the discussion on regulating AI delved into various challenges and considerations. Striking a balance in the regulation of generative AI, measuring risks associated with AI usage, addressing context-specific trade-offs, and promoting multi-stakeholder engagement were key points raised.
The impact of data granularity, power dynamics, and the role of the private sector were also highlighted. Observations were made regarding the inadequacy of current AI regulation models, the need for a rights-based approach focused on property rights, and the importance of understanding and measuring the impact of AI within different contexts.
Collaboration with industry was emphasized as crucial, and various arguments and evidence were presented throughout the discussion to support these points.
CA
Carlos Affonso Souza
Speech speed
150 words per minute
Speech length
1572 words
Speech time
627 secs
Arguments
Regulation of AI is moving through a three-step process: broad ethical principles, national strategies, and hard law
Supporting facts:
- Several countries in Latin America, including Argentina, Brazil, Colombia, Peru, and Mexico are very active in the discussion about governance and regulation of AI.
- Governance and regulation itself is a form of technology.
Topics: AI regulation, Ethics of AI, National Strategies
Regulation of AI in the majority world is a challenge due to its invisible and intangible nature.
Supporting facts:
- AI might be invisible, something really ethereal, hard to grasp.
Topics: AI regulation, AI governance
There is a need for countries in the majority world to create their own regulations and governance of AI.
Supporting facts:
- Large countries in the majority world primarily serve as users of the AI applications rather than developers.
Topics: AI governance, Regulatory frameworks
Regulations should address not only the creation but also the use of AI applications.
Supporting facts:
- The applications are not going to be designed or created in the majority world countries, but they will be heavily used there.
Topics: AI usage, AI governance
The experience of internet regulation can be useful when considering AI regulation.
Supporting facts:
- Copyright and freedom of expression were the two issues addressed early on in the internet regulation.
- The surge of personal data protection laws is fundamental to understanding what internet regulation has been like over the last decade.
Topics: AI regulation, Internet regulation
Personal data protection laws and decisions on platform liability will likely have significant influence on the shape of AI regulation.
Topics: AI regulation, Data Protection Laws, Platform liability
Understanding the type of AI and the nature of its damages is essential to the regulation of AI.
Topics: AI regulation, AI types, AI damages
Countries are regulating AI to signal that they are future-forward
Supporting facts:
- Countries are coming up with imperfect regulations but consider it better than nothing
- Having some regulation on AI is seen as a status symbol of being future-oriented
Topics: AI regulation, International relations, Branding
European AI regulation solutions are being adopted by other countries
Supporting facts:
- Legislators adopt European AI regulation solutions even if they're aware of their issues
- This adoption is done in an attempt to show that something is being done towards regulating AI
Topics: AI regulation, European Union, Legal adoption
Report
In Latin America, several countries, including Argentina, Brazil, Colombia, Peru, and Mexico, are actively engaging in discussions and actions related to the governance and regulation of Artificial Intelligence (AI). This reflects a growing recognition of the need to address the ethical implications and potential risks associated with AI technology.
The process of implementing AI regulation typically involves three stages: the establishment of broad ethical principles, the development of national strategies, and the enactment of hard laws. However, different countries in Latin America are at varying stages of this regulatory process, which is influenced by their unique priorities, approaches, and long-term visions.
Each country has its specific perspective on how AI will drive economic, political, and cultural changes within society. Accordingly, they are implementing national strategies and specific regulations through diverse mechanisms. One of the challenges in regulating AI in the majority world lies in the nature of the technology itself.
AI can often be invisible and intangible, making it difficult to grasp and regulate effectively. This creates a need for countries in the majority world to develop their own regulations and governance frameworks for AI. Moreover, these countries primarily serve as users of AI applications rather than developers, making it even more crucial to establish regulations that address not only the creation but also the use of AI applications.
This highlights the importance of ensuring that AI technologies are used ethically and responsibly, considering the potential impact on individuals and society. Drawing from the experience of internet regulation, which has dealt with issues such as copyright, freedom of expression, and personal data protection, can provide valuable insights when considering AI regulation.
The development of personal data protection laws and decisions on platform liability are also likely to significantly influence the shape of AI regulation. Understanding the different types of AI and the nature of the damages they can cause is essential for effective regulation.
It is argued that AI should be viewed neither as purely autonomous nor as a dumb tool, but as a technology that can cause harm as well as generate profit. Algorithmic decisions are not made autonomously or unknowingly; rather, they reflect biases in their design or fulfill their intended functions.
Countries' motivations for regulating AI vary. Some view it as a status symbol of being future-oriented, while others believe it is important to learn from regulation efforts abroad and develop innovative solutions tailored to their own contexts. There is a tendency to adopt European solutions for AI regulation, even if they may not function optimally.
This adoption is driven by the desire to demonstrate that efforts are being made towards regulating AI. In conclusion, Latin American countries are actively engaging in discussions and actions to regulate AI, recognizing the need to address its ethical implications and potential risks.
The implementation of AI regulation involves multiple stages, and countries are at different phases of this process. Challenges arise due to the intangible nature of AI, which requires countries to create their own regulations. The use of AI applications, as well as the type and nature of damages caused by AI, are important considerations for regulation.
The experience of internet regulation can provide useful insights for AI regulation. The motivations for regulating AI vary among countries, and there is a tendency to adopt European solutions. Despite the shortcomings of these solutions, countries still adopt them to show progress in AI regulation.
CR
Courtney Radsch
Speech speed
166 words per minute
Speech length
1868 words
Speech time
675 secs
Arguments
In the United States, the focus is on creating frameworks for governance and regulation of AI
Supporting facts:
- The White House Office of Science Technology and Policy is creating a blueprint for an AI Bill of Rights
- The National AI Commission Act is focused on responsible AI and on how the responsibility for regulation is distributed across agencies
- At least nine states have enacted AI legislation
Topics: Artificial Intelligence, Regulation, Governance, United States
AI governance is failing to grapple with market power
Supporting facts:
- Previous eras of tech governance like social media, search, app stores, online marketplaces, even standards, were all rolled out and remain controlled by a few monopolistic tech firms
- Nearly a thousand startup firms were bought by Meta, Google, and Amazon with no FTC intervention
Topics: Tech Governance, AI, Market Power
Current structure of markets needs reshaping to eliminate anti-competitive practices
Supporting facts:
- Dominance over key cloud computing platforms incentivizes firms like Microsoft, Amazon, and Google to self-preference their own AI models
- National governments do not have power over the capabilities and technologies created by massive firms
Topics: Market Structure, Anti-Competitive Practices
Technical standards set by communities have political and economic implications
Supporting facts:
- Examples of standards set include HTTP, HTTPS, and robots.txt
- Big tech companies are able to accumulate vast amounts of rights-protected data without compensation, affecting the economy of other industries
Topics: Technical standards setting, Political economy, Tech communities
Unfettered innovation is not necessarily good
Supporting facts:
- The way we implemented some copyrights has killed off a large part of the news media industry
Topics: AI Regulation, Tech Companies
Data is not limited to individual user data; it also includes environmental and sensor data, as well as data about data (metadata).
Supporting facts:
- This type of data is incredibly valuable and dominated by larger firms
Topics: Data, AI, Information Management
Non-EU researchers struggle to provide oversight of powerful, wealthy companies due to underfunding
Supporting facts:
- Non-EU researchers rely on underfunded civil society and academia for oversight
Topics: Research, Funding, Oversight
Report
In the United States, there is a strong focus on developing frameworks for the governance and regulation of artificial intelligence (AI). The White House Office of Science and Technology Policy is taking steps to create a blueprint for an AI Bill of Rights, which aims to establish guidelines and protections for the responsible use of AI.
The National AI Commission Act is another initiative that seeks to promote responsible AI regulation across various government agencies. Furthermore, several states in the US have already implemented AI legislation to address the growing impact of AI in various sectors.
This reflects a recognition of the need to regulate and govern AI technologies to ensure ethical and responsible practices. However, some argue that the current AI governance efforts are not adequately addressing the issue of market power held by a small number of tech giants, namely Meta (formerly Facebook), Google, and Amazon.
These companies dominate the AI foundation models and utilize aggressive tactics to acquire and control independent AI firms. This dominance extends to key cloud computing platforms, leading to self-preference of their own AI models. Critics believe that the current market structure needs to be reshaped to eliminate anti-competitive practices and foster a more balanced and competitive environment.
Another important aspect highlighted in the discussion is the need for AI governance to address the individual components of AI. This includes factors like data, computational power, software applications, and cloud computing. Current debates on AI governance mostly focus on preventing harm and exploitation, but fail to consider these integral parts of AI systems.
The technical standards set by tech communities also come under scrutiny. While standards like HTTP, HTTPS, and robots.txt have been established, concerns have been raised regarding the accumulation of rights-protected data by big tech companies without appropriate compensation. These actions have significant political and economic implications, impacting other industries and limiting the overall fairness of the system.
It is argued that a more diverse representation in the tech community is needed to neutralize big tech's unfair data advantage. The notion of unfettered innovation is challenged, as some argue that it may not necessarily lead to positive outcomes.
The regulation of AI should encompass a broader set of policy interventions that prioritize the public interest. A risk-based approach to regulation is deemed insufficient to address the complex issues associated with AI. The importance of data is emphasized, highlighting that it extends beyond individual user data, encompassing environmental and sensor data as well.
The control over and exploitation of such valuable data by larger firms requires careful consideration and regulation. A notable challenge highlighted is the lack of oversight of powerful companies, particularly for non-EU researchers due to underfunding. This raises concerns about the suppression or burying of risky research findings by companies conducting their own risk assessments.
It suggests the need for independent oversight and accountability mechanisms to ensure that substantial risks associated with AI are properly addressed. In conclusion, the governance and regulation of AI in the United States are gaining momentum, with initiatives such as the development of an AI Bill of Rights and state-level legislation.
However, there are concerns regarding the market power of tech giants, the need to focus on individual components of AI, the political and economic implications of technical standards, the lack of diversity in the tech community, and the challenges of overseeing powerful companies.
These issues highlight the complexity of developing effective AI governance frameworks that strike a balance between promoting innovation, protecting the public interest, and ensuring responsible and ethical AI practices.
IK
Irakli Khodeli
Speech speed
138 words per minute
Speech length
1211 words
Speech time
527 secs
Arguments
UNESCO's recommendation on AI ethics offers a critical guide for AI governance on a global level
Supporting facts:
- The recommendation was adopted two years ago by 193 member states of UNESCO
- The principles are grounded in values such as human rights, human dignity, diversity, environmental sustainability, peaceful societies
- The principles are operationalized into 11 different policy contexts
Topics: AI governance, ethics of AI, UNESCO, policy context, principles
AI governance needs to be grounded in gender and environmental sustainability
Supporting facts:
- UNESCO principles disassociate gender discussion from the general discussion on discrimination
- Strong emphasis on environmental sustainability recognizing that it's often overlooked in the global discussions
Topics: AI governance, gender diversity, environmental sustainability
The global governance of AI is critical to avoid undermining other multilateral priorities.
Supporting facts:
- The risks posed by AI are significant - from benign to catastrophic, unintended to deliberate harms
- AI is closely related to pillars of the UN, such as sustainable development, human rights, gender equality, peace
Topics: AI governance, risks of AI, UN priorities, multilateral priorities
National governments play a significant role in AI governance
Supporting facts:
- Successful regulation happens at the national level
- It's the national governments' responsibility to set up institutions and laws for AI governance
Topics: AI governance, national governments, UNESCO
Successful regulation of any technology requires regulatory frameworks at multiple levels, including global, regional, national, and sub-national.
Supporting facts:
- The conversation at the UN level right now is about what kind of regulatory mechanism to have
- The European Union, African Union, and ASEAN are examples of regional organizations playing a role in regulation
- At the national level, countries are indispensable in enforcement of different mechanisms
- Examples exist of legislative activism at the sub-national level in the United States and India
Topics: Global governance, Data flow, Internet regulation, Technology regulation, Artificial Intelligence
Bioethics provides a concrete example of how a multi-level governance model can function.
Supporting facts:
- UNESCO's Universal Declaration on Bioethics and Human Rights and the Council of Europe's Oviedo Convention are given as global and regional governance examples respectively
- These are translated into binding regulations at the country level
Topics: Bioethics, Multi-level governance model, International Law, Stem Cell Research
Report
The UNESCO recommendation on AI ethics has become a critical guide for global AI governance. It was adopted two years ago by 193 member states, demonstrating its widespread acceptance and importance. The principles put forward by UNESCO are firmly rooted in fundamental values such as human rights, human dignity, diversity, environmental sustainability, and peaceful societies.
These principles aim to provide a solid ethical foundation for the development and deployment of AI technologies. To ensure the practical application of these principles, UNESCO has operationalized them into 11 different policy contexts. This highlights the organization's commitment to bridging the gap between theoretical principles and practical implementation.
By providing specific policy contexts, UNESCO offers concrete guidance for governments and other stakeholders to incorporate AI ethics into their decision-making processes. One of the key arguments put forth by UNESCO is that AI governance should be grounded in gender equality and environmental sustainability.
The organization believes that these two aspects are often overlooked in global discussions on AI ethics and governance. By highlighting the need to disassociate gender discussions from general discrimination discussions and emphasising environmental sustainability, UNESCO aims to bring attention to these crucial issues.
Furthermore, UNESCO emphasises the significant risks posed by AI, ranging from benign to catastrophic harms. The organization argues that these risks are closely intertwined with the pillars of the United Nations, such as sustainable development, human rights, gender equality, and peace.
Therefore, global governance of AI is deemed critical to avoid jeopardizing other multilateral priorities. While global governance is essential, UNESCO also recognises the significant role of national governments in AI governance. Successful regulation and implementation of AI policies ultimately occur at the national level.
It is the responsibility of national governments to establish the necessary institutions and laws to govern AI technologies effectively. This highlights the importance of collaboration between national governments and international organisations like UNESCO. In terms of regulation, it is evident that successful regulation of any technology, including AI, requires a multi-layered approach.
Regulatory frameworks must exist at different levels – global, regional, national, and even sub-national – to ensure comprehensive and effective governance. The ongoing conversation at the United Nations revolves around determining the appropriate regulatory mechanisms for AI. Regional organisations such as the European Union, African Union, and ASEAN already play significant roles in AI regulation.
Meanwhile, countries themselves are indispensable in enforcing regulatory mechanisms at the national level. To achieve coordination and compatibility between different layers of regulation, various stakeholders, including the UN, European Union, African Union, OECD, and ASEAN, are mentioned as necessary participants.
The creation of a global governance mechanism is advocated to ensure interoperability and coordination among different levels of regulation, ultimately facilitating effective AI governance on a global scale. Additionally, bioethics is highlighted as a concrete example of how a multi-level governance model can function successfully.
UNESCO's Universal Declaration on Bioethics and Human Rights, along with the Council of Europe's Oviedo Convention, serve as global and regional governance examples, respectively. These principles are then translated into binding regulations at the country level, further supporting the notion that a multi-level approach can be effective in governing complex issues like AI ethics.
In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding AI ethics in fundamental values, providing specific policy contexts, and emphasising the importance of gender equality and environmental sustainability, UNESCO aims to ensure that AI technologies are developed and deployed responsibly.
This requires collaboration between international organisations, national governments, and other stakeholders to establish regulatory frameworks at different levels. Ultimately, a global governance mechanism is advocated to coordinate and ensure compatibility between these levels of regulation.
KY
Kyoko Yoshinaga
Speech speed
124 words per minute
Speech length
1186 words
Speech time
574 secs
Arguments
Japan adopts soft law approach to AI governance
Supporting facts:
- Japan introduced principles for AI R&D as a non-binding international framework
- Soft laws are used by Japanese companies to develop AI policies
Topics: AI policy in Japan, AI governance
Japan is amending sector-specific hard laws such as the Act on Improving Transparency and Fairness of Digital Platforms and the Financial Instruments and Exchange Act
Supporting facts:
- The Act requires businesses to disclose information about risks
Topics: Japanese AI laws, Transparency in AI
Industry should consider developing or using responsible AI as part of its corporate social responsibility or its environmental, social, and governance (ESG) practices.
Supporting facts:
- Major companies in Japan like Sony, Fujitsu, NEC, NTT data, have already developed AI policies based on particular guidelines.
- Kyoko Yoshinaga was involved in a think tank developing AI systems and was in charge of AI risk management and compliance.
Topics: Artificial Intelligence, Corporate Social Responsibility, ESG
The creation of informed frameworks may encourage management to establish robust AI governance within their organization.
Supporting facts:
- Back in 2005, the Information Security Governance Policy Framework was created by the Ministry of Economy, Trade and Industry, which helped many companies build robust information security governance.
- This initiative can be applied in the context of AI governance.
Topics: AI Governance, Corporate Management
Each government should make AI regulations considering their own context.
Supporting facts:
- The level and threats of AI technology vary among countries
- Factors such as corporate culture, safety, and technology level should be taken into account.
Topics: AI regulation, contextual factors, national approach
Personal data protection law is important for addressing AI-driven threats such as privacy intrusion.
Supporting facts:
- Threats caused by AI include surveillance and real-time biometric ID systems
Topics: AI, privacy, personal data protection law
Report
Japan takes a soft law approach to AI governance, using non-binding international frameworks and principles for AI R&D. These soft laws guide Japanese companies in developing their own AI policies, ensuring flexibility and adaptation. Additionally, Japan amends sector-specific hard laws to enhance transparency and fairness in the AI industry.
Companies like Sony and Fujitsu have already developed AI policies, focusing on responsible AI as part of corporate social responsibility and ESG practices. Publicly accessible AI policies are encouraged to promote transparency and accountability. Japan also draws on existing frameworks, such as the Information Security Governance Policy Framework, to establish robust AI governance.
Each government should tailor AI regulations to its own context, considering factors like corporate culture and technology level. Because the nature of AI risks varies and evolves quickly, rigid hard laws addressing them may prove counterproductive, while personal data protection laws remain essential for addressing the privacy threats AI poses.
MK
Michael Karanicolas
Speech speed
159 words per minute
Speech length
2197 words
Speech time
828 secs
Arguments
Michael Karanicolas hosted a session on AI governance, aiming to foster a discussion on the development of new regulatory trends around the world, especially considering the influence of major regulatory blocks like China, the US, and the EU.
Supporting facts:
- The session was organized through a collaboration between the School of Law and the School of Engineering at UCLA, Yale Information Society Project, and the Georgetown Institute for Technology, Law and Policy.
- The aim of this session was to recognize the global influence of major regulatory blocks on AI development and to understand the tension between rulemaking within these power centres and AI impacts outside of this privileged minority.
Topics: AI governance, regulatory trends, China, US, EU
AI as a whole is going to be a very globalized form of human interaction
Supporting facts:
- AI consists of data resources, software programs, networks, and computing devices which are all part of globalized markets.
Topics: AI, Globalization, Internet Governance
The enforcement of IP rights online is vastly stronger than enforcement of privacy rights.
Supporting facts:
- This is a legacy of the early prioritization of harms that were viewed as the most pressing to address early on in regulatory efforts.
Topics: Internet regulation, IP rights, Privacy rights
Engagement, mutual learning and sharing best practices is beneficial in the field of AI regulation
Topics: AI Regulation, Engagement, Learning, Best Practices
Factoring local contexts into regulatory processes is important
Supporting facts:
- The problem of countries pasting an EU model or an American model into their local context
Topics: AI Regulation, Local Context
A cut-and-paste model of adopting international regulatory structures can be problematic
Supporting facts:
- A country adopting the EU Act might lack the appropriate local regulatory structure
- The model might not be fit for purpose in a given local context
Topics: Regulatory structures, International Policy, AI Regulation
Report
During a session on AI governance, organized by the School of Law and the School of Engineering at UCLA, the Yale Information Society Project, and the Georgetown Institute for Technology, Law and Policy, Michael Karanicolas hosted a discussion on the development of new regulatory trends around the world.
The focus was on major regulatory blocks such as China, the US, and the EU, and their influence on AI development globally. The session aimed to explore the tension between the rule-making within these major regulatory blocks and the impacts of AI outside of this privileged minority.
It recognized their dominant position and sought to understand their global influence in shaping AI governance. The discussion highlighted the need to recognize the power dynamics at play and ensure that the regulatory decisions made within these blocks do not ignore the wider issues and potential negative ramifications for AI development on a global scale.
Michael Karanicolas encouraged interactive participation from the audience, inviting comments and engagement from all present. He stressed the importance of active participation over passive listening, fostering an environment that encouraged inclusive and thoughtful discussions. The speakers also delved into the globalised nature of AI and the challenges posed by national governments in regulating it.
As AI consists of data resources, software programs, networks, and computing devices, it operates within globalised markets. The internet has enabled the rapid distribution of applications and data resources, making it difficult for national governments to control and regulate the development of AI effectively.
The session emphasised that national governments alone cannot solve the challenges and regulations of AI, calling for partnerships and collaborative efforts to address the global nature of AI governance. Another topic of discussion revolved around the enforcement of intellectual property (IP) rights and privacy rights in the online world.
It was noted that the enforcement of IP rights online is significantly stronger compared to the enforcement of privacy rights. This discrepancy is seen as a result of the early prioritisation of addressing harms related to IP infringement, while privacy rights were not given the same level of attention in regulatory efforts.
The session highlighted the need to be deliberate and careful in selecting how harms are understood and prioritised in current regulatory efforts to ensure a balance between different aspects of AI governance. Engagement, mutual learning, and sharing of best practices were seen as crucial in the field of AI regulation.
The session emphasised the benefits of these collaborative approaches, which enable regulators to stay updated on the latest developments and challenges in AI governance. It also emphasised the importance of factoring local contexts into regulatory processes. A one-size-fits-all approach, where countries simply adopt an EU or American model without considering their unique circumstances, was deemed problematic.
It was concluded that for effective AI regulation, it is essential to develop regulatory structures that fit the purpose and are sensitive to the local context. In conclusion, the session on AI governance hosted by Michael Karanicolas shed light on the influence of major regulatory blocks on AI development globally.
It emphasised the need for inclusive and participatory approaches in AI governance and highlighted the challenges posed by national governments in regulating AI. The session also underscored the need for a balanced approach to prioritise different aspects of AI governance, including intellectual property rights and privacy rights.
The importance of engagement, mutual learning, and the consideration of local contexts in regulatory processes were also highlighted.
SC
Simon Chesterman
Speech speed
201 words per minute
Speech length
2278 words
Speech time
681 secs
Arguments
Every jurisdiction is wary both of under-regulating and over-regulating AI.
Supporting facts:
- Over-regulation, especially in smaller jurisdictions like Singapore, might cause tech companies to opt elsewhere for innovation.
- Under-regulation may expose citizens to unforeseen risk.
Topics: AI Regulation, Jurisdiction
A new set of rules is not necessary to regulate AI.
Supporting facts:
- The real challenge lies in the application of existing rules to new use cases of AI.
- Most laws can govern most AI use cases most of the time.
Topics: AI Regulation, Existing Laws
Human-centricity and transparency have been the main focus in Singapore's approach of AI governance.
Supporting facts:
- The majority of Singapore's AI governance framework focuses on various use cases and determining what implications the regulations have in practice.
- Rather than creating new laws, Singapore has made adjustments to existing ones to accommodate AI, for example changing the Road Traffic Act to allow autonomous vehicles.
Topics: AI Governance, Human Centricity, Transparency
AI shouldn't be biased and this notion is covered under anti-discrimination laws.
Supporting facts:
- Stating that AI shouldn't be biased is a repetition of anti-discrimination laws, which state that no entity, whether a person, company or machine, should discriminate.
Topics: AI Ethics, Anti-discrimination Laws
Smaller jurisdictions face three major challenges concerning AI regulation: whether to regulate, when to regulate, and the concentration of power in private hands.
Supporting facts:
- If a jurisdiction regulates AI too quickly, it could drive innovation elsewhere.
- The Collingridge Dilemma illustrates the tension between regulating early but without clarity on harms against delaying regulation but with the cost of regulation rising.
- Most of AI research and development has moved from public institutions to private companies, impacting the ability of governments to constrain behavior.
Topics: AI regulation, small jurisdictions, Collingridge Dilemma, innovation shift
The regulations of AI are influenced by primarily western technology companies.
Supporting facts:
- Principles of AI regulation can be traced back to western technology companies.
- Public awareness and concern about the risks of AI were triggered by events like the Cambridge Analytica scandal.
Topics: AI regulations, western technology companies
Regulatory sandboxes in the fintech sector is a useful technique to foster innovation
Supporting facts:
- The Monetary Authority of Singapore has utilized regulatory sandboxes to reduce risks and enable new use-cases testing
Topics: Regulatory Sandbox, Fintech
Need for balance in regulation to avoid driving innovation elsewhere
Supporting facts:
- Singapore's Personal Data Protection Act aims to balance users' rights and the needs of businesses
Topics: Regulation, Innovation
Regulation needs to operate at multiple levels: state regulation, self-regulation, and industry standards
Topics: Regulation, State Regulation, Self-regulation, Industry Standards
Role of companies in AI research and regulation
Supporting facts:
- Companies are becoming more open to regulation for various reasons.
- In 2011, Ryan Calo suggested immunity from suit for companies to encourage AI research.
- Bigger corporations might use increased regulatory costs as barriers for their competitors.
Topics: AI, research, regulation, Ryan Calo, innovation
Report
The analysis of the given text reveals several key points regarding AI regulation and governance. Firstly, it is highlighted that jurisdictions are wary of both over-regulating and under-regulating AI. Over-regulation, especially in smaller jurisdictions like Singapore, might cause tech companies to opt for innovation elsewhere.
On the other hand, under-regulation may expose citizens to unforeseen risks. This underscores the need for finding the right balance in AI regulation. Secondly, it is argued that a new set of rules is not necessary to regulate AI. The text suggests that existing laws are capable of effectively governing most AI use cases.
However, the real challenge lies in the application of these existing rules to new and emerging use cases of AI. Despite this challenge, the prevailing sentiment is positive towards the effectiveness of current regulations in governing AI. Thirdly, Singapore's approach to AI governance is highlighted.
The focus of Singapore's AI governance framework is on human-centricity and transparency. Rather than creating new laws, Singapore has made adjustments to existing ones to accommodate AI, such as amending the Road Traffic Act to allow for the use of autonomous vehicles.
This approach reflects Singapore's commitment to ensuring human-centricity and transparency in AI governance. Additionally, it is mentioned that the notion of AI not being biased is covered under anti-discrimination laws. This highlights the importance of ensuring that AI systems are not prejudiced or discriminatory, in alignment with existing laws.
The text also emphasises the need for companies to police themselves regarding AI regulations. Singapore has released a tool called AI Verify, which assists organizations in self-regulating their AI standards and evaluating if further improvements are needed. This self-regulation approach is viewed positively, highlighting the responsibility of companies in ensuring ethical and compliant AI practices.
Furthermore, the text acknowledges that smaller jurisdictions face challenges when it comes to AI regulation. These challenges include deciding when and how to regulate and addressing the concentration of power in private hands. These issues reflect the delicate balance that smaller jurisdictions must navigate to effectively regulate AI.
The influence of Western technology companies on AI regulations is another notable observation. The principles of AI regulation can be traced back to these companies, and public awareness and concern about the risks of AI have been triggered by events like the Cambridge Analytica scandal.
This implies that the regulations of AI are being influenced by the practices and actions of primarily Western technology companies. Regulatory sandboxes, particularly in the fintech sector, are highlighted as a useful technique for fostering innovation. The Monetary Authority of Singapore has utilized regulatory sandboxes to reduce risks and enable testing of new use cases for AI in the fintech sector.
In terms of balancing regulation and innovation, the text emphasises the need for a careful approach. The Personal Data Protection Act in Singapore aims to strike a balance between users' rights and the needs of businesses. This underscores the importance of avoiding excessive regulation that may drive innovation elsewhere.
Furthermore, the responsibility for the output generated by AI systems is mentioned. It is emphasised that accountability must be taken for the outcomes and impact of AI systems. This aligns with the broader goal of achieving peace, justice, and strong institutions.
In conclusion, the text highlights various aspects of AI regulation and governance. The need to strike a balance between over-regulation and under-regulation, the effectiveness of existing laws in governing AI, and the importance of human-centricity and transparency in AI governance are key points.
It is also noted that smaller jurisdictions face challenges in AI regulation, and the influence of Western technology companies is evident. Regulatory sandboxes are seen as a useful tool, and the responsibility for the output of AI systems is emphasized.
Overall, the analysis provides valuable insights into the complex landscape of AI regulation and governance.
TI
Tomiwa Ilori
Speech speed
141 words per minute
Speech length
743 words
Speech time
316 secs
Arguments
AI governance in Africa is in its infancy
Supporting facts:
- There are at least 466 AI policy and governance items referred to in the African region
- There is no major treaty, law or standard when it comes to AI governance in Africa
- Countries like Mauritius, Kenya and Egypt already have a national AI policy
Topics: Artificial intelligence, Governance, Africa
Interest in AI governance is growing among various stakeholders in Africa
Supporting facts:
- Artificial intelligence governance initiatives are led by government, multilateral organizations, public funded research, academia and the private sector
- The Kenyan government has signified interest to now pass a law with respect to regulating AI systems
Topics: Artificial intelligence, Governance, Africa
The race towards AI governance will favor the bold, especially from an African perspective
Supporting facts:
- The region often imports standards and is usually described as a standards-taker, not a designer of standards for itself
- Smaller nations often end up as pawns or testing grounds for bad governance attempts
Topics: AI governance, African perspective, Standardization
Report
AI governance in Africa is still in its infancy, with at least 466 policy and governance items referred to in the African region. However, there is currently no major treaty, law, or standard specifically addressing AI governance in Africa. Despite this, some countries in Africa have already taken steps to develop their own national AI policies.
For instance, countries like Mauritius, Kenya, and Egypt have established their own AI policies, indicating the growing interest in AI governance among African nations. Interest in AI governance is not limited to governments alone. Various stakeholders in Africa, including multilateral organizations, publicly funded research institutions, academia, and the private sector, are increasingly recognizing the importance of AI governance.
This indicates a collective recognition of the need to regulate and guide the development and use of artificial intelligence within the region. In fact, the Kenyan government has expressed its intention to pass a law aimed at regulating AI systems, further demonstrating the commitment towards responsible AI governance in Africa.
However, the region often relies on importing standards rather than actively participating in their design and development. This leaves African nations vulnerable to becoming pawns or testing grounds for potentially inadequate AI governance attempts.
This highlights the need for African nations to actively engage in the process of shaping AI standards rather than merely adapting to standards set by external entities. On a positive note, smaller nations in Africa have the potential to make a significant impact by strategically collaborating with like-minded initiatives.
International politics often stifle the boldness of smaller nations, but when it comes to AI governance, smaller nations can leverage partnerships and collaborations to amplify their voices and push for responsible AI practices. By working together with others who share similar goals and intended results, the journey towards achieving effective AI governance in Africa could be expedited.
In conclusion, AI governance in Africa is still in its early stages, but the interest and efforts to establish responsible AI policies and regulations are steadily growing. While there is currently no major treaty or law specifically addressing AI governance in Africa, countries like Mauritius, Kenya, and Egypt have already taken steps to develop their own national AI policies.
Moreover, various stakeholders, including governments, multilateral organizations, academia, and the private sector, are recognizing the significance of AI governance in Africa. Despite the challenges that smaller nations in Africa may face, strategic collaborations and partnerships can empower them to actively shape the future of AI governance in the region.