A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288

10 Oct 2023 02:30h - 04:00h UTC

Event report

Speakers and Moderators

Speakers:
  • Marlena Wisniak, Civil Society, Western European and Others Group (WEOG)
  • Michel Souza, Civil Society, Latin American and Caribbean Group (GRULAC)
  • Yunwei Aaryn, Government, Western European and Others Group (WEOG)
  • Naimi Shahla, Private Sector, Intergovernmental Organization
  • Khodeli Irakli, Intergovernmental Organization, Intergovernmental Organization
  • Rumman Chowdhury, Civil Society, Intergovernmental Organization
  • Oluseyi Oyebisi, Civil Society, African Group
Moderators:
  • Ian Barber, Civil Society, Western European and Others Group (WEOG)
  • Marina Atoji, Civil Society, Latin American and Caribbean Group (GRULAC)

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Tara Denham

Canada is taking AI governance seriously by integrating digital policy with human rights. The Director General of the Office of Human Rights, Freedoms, and Inclusion at Global Affairs Canada is actively working on the geopolitics of artificial intelligence, seeking to ensure that AI development and governance uphold human rights principles.

The Canadian government is actively developing regulation, policy, and guiding principles for AI. They have implemented a directive on how government will handle automated decision-making, including an algorithmic impact assessment tool. To ensure responsible development and management of AI, the government has published a voluntary code of conduct and is working on AI and Data Act legislation. Additionally, the government requires engagement with stakeholders before deploying generative AI, demonstrating its commitment to responsible AI implementation.

Stakeholder engagement is considered essential in AI policy making, and Canada has taken deliberate steps to involve stakeholders from the start. They have established a national table that brings together representatives from the private sector, civil society organizations, federal, provincial, and territorial governments, as well as Indigenous communities to consult on AI policies. This inclusive approach recognizes the importance of diverse opinions and aims to develop policies that are representative of various perspectives. However, it is acknowledged that stakeholder engagement can be time-consuming and may lead to tensions due to differing views.

Canada recognizes the significance of leveraging existing international structures for global AI governance. They have used the Freedom Online Coalition to shape their negotiating positions on the UNESCO Recommendation on the Ethics of Artificial Intelligence. Additionally, they are actively participating in Council of Europe negotiations on AI and human rights. However, it is noted that more countries and stakeholder groups should be encouraged to participate in these international negotiations to ensure a comprehensive and inclusive global governance framework for AI.

There is also a need for global analysis of which approaches to AI governance are working and which are not. This analysis aims to build global capacity and better understand the risks and impacts of AI in different communities and countries. Advocates emphasize the importance of leveraging existing work on AI capacity building and research, supported by organizations like the International Development Research Centre (IDRC).

Furthermore, there is a strong call for increased support for research into AI and its impacts. IDRC in Canada plays a pivotal role in funding and supporting AI capacity-building initiatives and research. This support is crucial in advancing our understanding of AI's potential and ensuring responsible and beneficial implementation.

In conclusion, Canada is taking significant steps towards effective AI governance by integrating digital policy with human rights, developing regulations and policies, and engaging stakeholders in decision-making processes. By leveraging existing international structures and conducting global analysis, Canada aims to contribute to a comprehensive and inclusive global AI governance framework. Additionally, their support for research and capacity-building initiatives highlights their commitment to responsible AI development.

Marlena Wisniak

The analysis highlights several important points regarding AI governance. One of the main points is the need for mandatory human rights due diligence and impact assessments in AI governance. The analysis suggests that implementing these measures globally presents an opportunity to ensure that AI development and deployment do not infringe upon human rights. This approach is informed by the UN Guiding Principles for Business and Human Rights, which provide a framework for businesses to respect human rights throughout their operations. By incorporating human rights impact assessments into AI governance, potential adverse consequences on human rights can be identified and addressed proactively.

Another key point raised in the analysis is the importance of stakeholder engagement in AI governance. Stakeholder engagement is viewed as a collaborative process in which diverse stakeholders, including civil society organizations and affected communities, can meaningfully contribute to decision-making processes. The inclusion of external stakeholders is seen as crucial to ensure that AI governance reflects the concerns and perspectives of those who may be affected by AI systems. By involving a range of stakeholders, AI governance can be more comprehensive, responsive, and representative.

Transparency is regarded as a prerequisite for AI accountability. The analysis argues that AI governance should mandate that AI developers and deployers provide transparent reporting on various aspects, such as datasets, performance metrics, human review processes, and access to remedy. This transparency is seen as essential to enable meaningful scrutiny and assessment of AI systems, ensuring that they function in a responsible and accountable manner.

Access to remedy is also highlighted as a crucial aspect of AI governance. This includes the provision of internal grievance mechanisms within tech companies and AI developers, as well as state-level and judicial mechanisms. The analysis argues that access to remedy is fundamental for individuals who may experience harm or violations of their rights due to AI systems. By ensuring avenues for redress, AI governance can provide recourse for those affected and hold accountable those responsible for any harm caused.

The analysis also cautions against over-broad exemptions for national security or counter-terrorism purposes in AI governance. It argues that such exemptions, if not carefully crafted, have the potential to restrict civil liberties. To mitigate this risk, any exemptions should have a narrow scope, include sunset clauses, and prioritize proportionality to ensure that they do not unduly infringe upon individuals' rights or freedoms.

Furthermore, the analysis uncovers a potential shortcoming in AI governance efforts. It suggests that while finance, business, and national security are often prioritized, human rights are not given sufficient consideration. The analysis calls for a greater focus on human rights within AI governance initiatives, ensuring that AI systems are developed and deployed in a manner that respects and upholds human rights.

The analysis also supports banning AI systems that are fundamentally incompatible with human rights, such as biometric surveillance in public spaces. This viewpoint is based on concerns about mass surveillance and discriminatory targeted surveillance enabled by facial recognition and remote biometric recognition technologies. Banning such technologies is seen as necessary to safeguard privacy and freedom and to prevent potential violations of human rights.

In addition to these key points, the analysis reveals a couple of noteworthy observations. One observation is the importance of multistakeholder participation and the need to engage stakeholders in the process of policymaking. This is seen as a means to balance power dynamics and address the potential imbalance between stakeholders, particularly as companies often possess financial advantages and greater access to policymakers. The analysis highlights the need for greater representation and involvement of human rights advocates in AI governance processes.

Another observation relates to the capacity and resources of civil society, especially marginalized groups and organizations based in the global majority. The analysis urges international organizations and policymakers to consider the challenges faced by civil society in terms of capacity building, resources, and finance. It emphasizes the need for more equitable and inclusive participation of all stakeholders to ensure that AI governance processes are not dominated by powerful actors and do not leave marginalized groups behind.

Finally, the analysis suggests that laws in countries like Canada can have a significant influence on global regulations, especially in countries with repressive regimes or authoritarian practices. This observation draws attention to the concept of the "Brussels effect," wherein EU regulations become influential worldwide. It highlights the potential for countries with stronger regulatory frameworks to shape AI governance practices globally, emphasizing the importance of considering the implications and potential impacts of regulations beyond national borders.

In conclusion, the analysis underscores the importance of incorporating mandatory human rights due diligence, stakeholder engagement, transparency, access to remedy, and careful consideration of exemptions in AI governance. It calls for greater attention to human rights within AI governance efforts, the banning of AI systems incompatible with human rights, and the inclusion of diverse perspectives and voices in decision-making processes. The analysis also draws attention to the challenges faced by civil society and the potential influence of one country's laws on global regulations. Overall, it provides valuable insights for the development of effective and responsible AI governance frameworks.

Speaker

Latin America faces challenges in meaningful participation in shaping responsible AI governance. These challenges are influenced by the region's history of authoritarianism, which has left its democracies weak. Moreover, there is a general mistrust towards participation, further hindering Latin America's engagement in AI governance.

One of the main obstacles is the tech industry's aggressive push for AI deployment. While there is great enthusiasm for AI technology, there is a lack of comprehensive understanding of its limitations, myths, and potential risks. Additionally, the overwhelming number of proposals and AI guidance make it difficult for Latin America to keep up and actively contribute to the development of responsible AI governance.

Despite these challenges, Latin America plays a crucial role in the global chain of AI technological developments. The region is a supplier of vital minerals like lithium, which are essential for manufacturing AI systems. However, the mining processes involved in extracting these minerals often have negative environmental impacts, including pollution and habitat destruction. This has led to mixed sentiments regarding Latin America's involvement in AI development.

Latin America also provides significant resources, data, and labor for AI development. The region supplies the raw materials needed for hardware manufacturing and offers diverse datasets collected from various sources for training AI models. Additionally, Latin America's workforce contributes to tasks such as data labeling for machine learning purposes. However, these contributions come at a cost, with negative impacts including environmental consequences and labor exploitation.

It is crucial for AI governance to prioritize the impacts of AI development on human rights. Extracting material resources for AI development has wide-ranging effects, including environmental degradation and loss of biodiversity. Moreover, the health and working conditions of miners are often disregarded, and there is a lack of attention to data protection and privacy rights. Incorporating human rights perspectives into AI governance is necessary.

Another concerning issue is the use of AI for surveillance purposes and welfare decisions by governments, without adequate transparency and participation standards. The deployment of these technologies without transparency raises concerns about citizen rights and privacy.

To address these challenges, it is necessary to strengthen democratic institutions and reduce asymmetries among regions. While Latin America provides resources and labor for AI systems designed elsewhere, AI governance processes often remain distant from the region. To ensure an inclusive and fair AI governance process, reducing regional disparities, strengthening democratic institutions, and promoting transparency and participation are essential.

In conclusion, Latin America faces obstacles in meaningful participation in shaping responsible AI governance due to the aggressive push for AI deployment and its history of authoritarianism. However, the region plays a crucial role in the global AI technological chain by providing resources, data, and labor. It is important to consider the impacts of AI development on human rights and promote transparency and participation in AI governance. Strengthening democratic institutions and addressing regional asymmetries are necessary for a more inclusive and equitable AI governance process.

Ian Barber

The analysis conducted on AI governance, human rights, and global implications reveals several key insights. The first point highlighted is the significant role that the international human rights framework can play in ensuring responsible AI governance. Human rights are deeply rooted in various sources, including conventions and customary international law. Given that AI is now able to influence many aspects of life, from job prospects to legal verdicts, it becomes essential to leverage the international human rights framework to establish guidelines and safeguards for AI governance.

Another important aspect is the ongoing efforts at various international platforms to develop binding treaties and recommendations on AI ethics. The Council of Europe, the European Union, and UNESCO are actively involved in this process. For instance, the Council of Europe is working towards the development of a binding treaty on AI, while the European Union has initiated the EU AI Act, and UNESCO has put forth recommendations on the ethics of AI. These efforts are crucial to prevent the exacerbation of inequality and the marginalization of vulnerable groups.

Stakeholder engagement is identified as a vital component of responsible AI governance. The path towards effective governance cannot be traversed alone, and it is crucial to ensure meaningful engagement from relevant stakeholders. These stakeholders include voices from civil society, private companies, and international organizations. Their input, perspectives, and expertise can contribute to the development of comprehensive AI governance policies that consider the diverse needs and concerns of different stakeholders.

One noteworthy observation made during the analysis is the importance of amplifying the voices of the global majority. Historically, many regions across the world have been left out of global dialogues and efforts at global governance. It is crucial to address this imbalance and include voices from diverse backgrounds and regions in discussions on AI governance. This workshop was framed as a call to action, the beginning of an ongoing collective effort to address the complexities brought about by AI.

The analysis also emphasizes the need to consider regional perspectives and involvement in global AI development. Regional developments are essential factors to take into account when formulating AI policies and strategies. This ensures that the implications and impact of AI are effectively addressed at a regional level.

Furthermore, the analysis highlights the significance of African voices in the field of responsible AI governance and the promotion of human rights. Advocating for strategies or policies on emerging technologies specifically tailored for African countries can contribute to better outcomes and equitable development in the region.

Another noteworthy point is the need to bridge the gaps in discourse between human rights and AI governance. The analysis identifies gaps in understanding how human rights principles can be effectively integrated into AI governance practices. Addressing these gaps is essential to ensure that AI development and deployment are in line with human rights standards and principles.

In conclusion, the analysis underscores several important considerations for AI governance. Leveraging the international human rights framework, developing binding treaties and recommendations on ethics, fostering stakeholder engagement, considering global majority voices, including regional perspectives, and amplifying African voices are all critical aspects of responsible AI governance. Additionally, efforts should be made to bridge the gaps in discourse between human rights and AI governance. By integrating human rights principles and adhering to the international rights framework, AI governance can be ethically sound and socially beneficial.

Shahla Naimi

The analysis explores the impact of AI from three distinct viewpoints. The first argument suggests that AI has the potential to advance human rights and create global opportunities. It is argued that AI can provide valuable information to human rights defenders, enabling them to gather comprehensive data and evidence to support their causes. Additionally, AI can improve safety measures by alerting individuals to potential natural disasters like floods and fires, ultimately minimizing harm. Moreover, AI can enhance access to healthcare, particularly in underserved areas, by facilitating remote consultations and diagnoses. An example is provided of AI models being developed to support the 1000 most widely spoken languages, fostering better communication across cultures and communities.

The second viewpoint revolves around Google's commitment to embedding human rights into its AI governance processes. It is highlighted that the company considers the principles outlined in the Universal Declaration of Human Rights when developing AI products. Google also conducts human rights due diligence to ensure their technologies respect and do not infringe upon human rights. This commitment is exemplified by the company-wide stance on facial recognition, which addresses ethical concerns surrounding the technology.

The third perspective emphasizes the need for multi-stakeholder and internationally coordinated AI regulation. It is argued that effective regulation should consider factors such as the structure, scope, subjects, and standards of AI. Without international coordination, fragmented regulations with inconsistencies may arise. Involving multiple stakeholders in the regulatory process is vital to consider diverse perspectives and interests.

Overall, the analysis highlights AI's potential to advance human rights and create opportunities, particularly in information gathering, safety, and healthcare. It underscores the importance of embedding human rights principles into AI governance processes, as demonstrated by Google's commitments. Furthermore, multi-stakeholder and internationally coordinated AI regulation is crucial to ensure consistency and standards. These viewpoints provide valuable insights into the ethical and responsible development and implementation of AI.

Pratek Sibal

A recent survey conducted across 100 countries revealed a concerning lack of awareness among judicial systems worldwide regarding artificial intelligence (AI). This lack of awareness poses a significant obstacle to the effective implementation of AI in judicial processes. Efforts are being made to increase awareness and understanding of AI in the legal field, including the launch of a Massive Open Online Course (MOOC) on AI and the Rule of Law in seven different languages. This course aims to educate judicial operators about AI and its implications for the rule of law.

Existing human rights laws in Brazil, the UK, and Italy have successfully addressed cases of AI misuse, suggesting that international human rights law can be implemented through judicial decisions without waiting for a specific AI regulatory framework. By proactively applying existing legal frameworks, countries can address and mitigate potential AI-related human rights violations.

In terms of capacity building, it is argued that institutional capacity building is more sustainable in the long term compared to individual capacity building. Efforts are underway to develop a comprehensive global toolkit on AI and the rule of law, which will be piloted with prominent judicial institutions such as the Inter-American Court of Human Rights and the East Africa Court of Justice. This toolkit aims to enhance institutional capacity to effectively navigate the legal implications of AI.

Community involvement is crucial, and efforts have been made to make content available in multiple languages to ensure inclusivity and accessibility. This includes the development of a comic strip available in various languages and a micro-learning course on defending human rights in the age of AI provided in 25 different languages.

Canada's AI for Development projects in Africa and Latin America have been highly appreciated for their positive impact. These projects have supported the growth of communities in creating language datasets and developing applications in healthcare and agriculture, thereby increasing the capacity of civil society organizations in these regions.

The evolution of international standards and policy-making has seen a shift from a traditional model of technical assistance to a more collaborative, multi-stakeholder approach. This change involves engaging stakeholders at various levels in the development of global policy frameworks, ensuring better ownership and effectiveness in addressing AI-related challenges.

Pratek Sibal, a proponent of the multi-stakeholder approach, emphasizes the need for meaningful implementation throughout the policy cycle. Guidance on developing AI policies in a multi-stakeholder manner has been provided, covering all phases from agenda setting to drafting to implementation and monitoring.

Dealing with authoritarian regimes and establishing frameworks for AI present complex challenges with no easy answers. Pratek Sibal acknowledges the intricacies of this issue and highlights the need for careful consideration and analysis in finding suitable approaches.

In conclusion, the survey reveals a concerning lack of awareness among judicial systems regarding AI, hindering its implementation. However, existing human rights laws are successfully addressing AI-related challenges in several countries. Efforts are underway to enhance institutional capacity and involve communities in strengthening human rights in the age of AI. The positive impact of Canada's AI for Development projects and the shift towards a collaborative, multi-stakeholder approach in international standards and policy-making are notable developments. Dealing with authoritarian regimes in the context of AI requires careful consideration and exploration of suitable frameworks.

Audience

Different governments and countries are adopting varied approaches to AI governance. The transition from policy to practice in this area will require a substantial amount of time. However, there is recognition and appreciation for the ongoing multi-stakeholder approach, which involves including various stakeholders such as governments, industry experts, and civil society.

It is crucial to analyze and assess the effectiveness of these different approaches to AI governance to determine the most successful strategies. This analysis will inform future decisions and policies related to AI governance and ensure their efficacy in addressing the challenges posed by AI technologies.

UNICEF has played a proactive role in the field of AI for children by creating policy guidance on the topic. Importantly, they have also involved children in the process. This approach of engaging children in policy creation has proven to be valuable, as their perspectives and experiences have enriched the final product. Inclusion and engagement of children in policy creation and practices around AI are viewed as both meaningful and necessary.

Furthermore, efforts are being made to ensure responsible AI in authoritarian regimes. Particularly, there is ongoing work on engaging Technical Advisory Groups (TAG) for internet freedoms in countries such as Myanmar, Vietnam, and China. This work aims to promote responsible AI practices and address any potential human rights violations that may arise from the use of AI technologies.

Implementing mechanisms to monitor responsible AI in authoritarian regimes is of utmost importance. These mechanisms can help ensure that AI technologies are used in ways that adhere to principles of human rights and minimize potential harms.

Interestingly, it is noted that implementing policies to monitor responsible AI is easier in human rights-friendly countries than in authoritarian ones. This observation underscores the challenges faced in authoritarian regimes, where governments may exert greater control over AI technologies and policies.

In conclusion, the various approaches to AI governance taken by governments and countries need careful analysis to determine their effectiveness. Engaging children in policy creation and promoting responsible AI in authoritarian regimes are fundamental steps in fostering a safe and inclusive AI ecosystem, even though monitoring responsible AI remains particularly challenging in authoritarian contexts. These insights highlight the ongoing effort required to develop AI governance frameworks that protect human rights and promote responsible AI use.

Oluseyi Oyebisi

The analysis highlights the importance of including the African region in discussions on AI governance. It notes that the African region has come late to these discussions and needs to be included to ensure its interests are represented. The argument presented is that African governments, civil society, and businesses should invest in research and engage more actively in global conversations regarding AI governance.

One of the main points raised is the need for Africa to build technical competence to effectively participate in international AI negotiations. It is mentioned that African missions abroad must have the right capacity to take part in these negotiations. Furthermore, it is noted that universities in Africa are not yet prepared for AI development and need to strengthen their capabilities in this area.

Additionally, the analysis suggests that African governments should consider starting with soft laws and working with technology platforms before transitioning to hard laws. It is argued that this approach would allow them to learn from working with technology platforms and progress towards more rigid regulations. The need for regulation that balances the needs of citizens is emphasized.

The analysis also highlights the need for African governments, civil society, and businesses to invest in research and actively engage in global platforms related to AI governance. It is mentioned that investment should be made in the right set of meetings, research, and engagements. Bringing Africans into global platforms is seen as a crucial step towards ensuring their perspectives and needs are considered in AI governance discussions.

Overall, the expanded summary emphasizes the need to incorporate the African region into the global AI governance discourse. It suggests that by building technical competence, starting with soft laws, and actively engaging in research and global platforms, African countries can effectively contribute to AI governance and address their specific development challenges.

Speakers

Speech statistics (speed, length, time) per speaker:

  • Audience: 168 words per minute, 450 words, 160 secs
  • Ian Barber: 203 words per minute, 3949 words, 1168 secs
  • Marlena Wisniak: 169 words per minute, 1895 words, 671 secs
  • Oluseyi Oyebisi: 156 words per minute, 1058 words, 407 secs
  • Pratek Sibal: 168 words per minute, 2632 words, 941 secs
  • Shahla Naimi: 197 words per minute, 1782 words, 542 secs
  • Speaker: 171 words per minute, 680 words, 239 secs
  • Tara Denham: 195 words per minute, 2361 words, 728 secs