A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288
Event report
Speakers and Moderators
Speakers:
- Marlena Wisniak, Civil Society, Western European and Others Group (WEOG)
- Michel Souza, Civil Society, Latin American and Caribbean Group (GRULAC)
- Yunwei Aaryn, Government, Western European and Others Group (WEOG)
- Shahla Naimi, Private Sector, Intergovernmental Organization
- Irakli Khodeli, Intergovernmental Organization, Intergovernmental Organization
- Rumman Chowdhury, Civil Society, Intergovernmental Organization
- Oluseyi Oyebisi, Civil Society, African Group
Moderators:
- Ian Barber, Civil Society, Western European and Others Group (WEOG)
- Marina Atoji, Civil Society, Latin American and Caribbean Group (GRULAC)
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Full session report
Tara Denham
Canada is leading the way in taking AI governance seriously by integrating digital policy with human rights. The Director General of the Office of Human Rights, Freedoms, and Inclusion at Global Affairs Canada is actively working on the geopolitics of artificial intelligence, ensuring that AI development and governance uphold human rights principles.
The Canadian government is actively involved in developing regulation, policy, and guiding principles for AI. They have implemented a directive on how the government will handle automated decision-making, including an algorithmic impact assessment tool. To ensure responsible development and management of AI, the government has published a voluntary Code of Conduct and is working on AI and Data Act legislation. Additionally, the government requires engagement with stakeholders before deploying generative AI, demonstrating its commitment to responsible AI implementation.
Stakeholder engagement is considered essential in AI policy making, and Canada has taken deliberate steps to involve stakeholders from the start. They have established a national table that brings together representatives from the private sector, civil society organizations, federal, provincial, and territorial governments, as well as Indigenous communities to consult on AI policies. This inclusive approach recognizes the importance of diverse opinions and aims to develop policies that are representative of various perspectives. However, it is acknowledged that stakeholder engagement can be time-consuming and may lead to tensions due to differing views.
Canada recognizes the significance of leveraging existing international structures for global AI governance. They have used the Freedom Online Coalition to shape their negotiating positions on the UNESCO Recommendation on the Ethics of AI. Additionally, they are actively participating in Council of Europe negotiations on AI and human rights. However, it is noted that more countries and stakeholder groups should be encouraged to participate in these international negotiations to ensure a comprehensive and inclusive global governance framework for AI.
There is also a need for global analysis on what approaches to AI governance are working and not working. This analysis aims to build global capacity and better understand the risks and impacts of AI in different communities and countries. Advocates emphasize the importance of leveraging existing research on AI capacity building and research, supported by organizations like the International Development Research Centre (IDRC).
Furthermore, there is a strong call for increased support for research into AI and its impacts. IDRC in Canada plays a pivotal role in funding and supporting AI capacity-building initiatives and research. This support is crucial in advancing our understanding of AI’s potential and ensuring responsible and beneficial implementation.
In conclusion, Canada is taking significant steps towards effective AI governance by integrating digital policy with human rights, developing regulations and policies, and engaging stakeholders in decision-making processes. By leveraging existing international structures and conducting global analysis, Canada aims to contribute to a comprehensive and inclusive global AI governance framework. Additionally, their support for research and capacity-building initiatives highlights their commitment to responsible AI development.
Marlena Wisniak
The analysis highlights several important points regarding AI governance. One of the main points is the need for mandatory human rights due diligence and impact assessments in AI governance. The analysis suggests that implementing these measures globally presents an opportunity to ensure that AI development and deployment do not infringe upon human rights. This approach is informed by the UN Guiding Principles on Business and Human Rights, which provide a framework for businesses to respect human rights throughout their operations. By incorporating human rights impact assessments into AI governance, potential adverse consequences on human rights can be identified and addressed proactively.
Another key point raised in the analysis is the importance of stakeholder engagement in AI governance. Stakeholder engagement is viewed as a collaborative process in which diverse stakeholders, including civil society organizations and affected communities, can meaningfully contribute to decision-making processes. The inclusion of external stakeholders is seen as crucial to ensure that AI governance reflects the concerns and perspectives of those who may be affected by AI systems. By involving a range of stakeholders, AI governance can be more comprehensive, responsive, and representative.
Transparency is regarded as a prerequisite for AI accountability. The analysis argues that AI governance should mandate that AI developers and deployers provide transparent reporting on various aspects, such as datasets, performance metrics, human review processes, and access to remedy. This transparency is seen as essential to enable meaningful scrutiny and assessment of AI systems, ensuring that they function in a responsible and accountable manner.
Access to remedy is also highlighted as a crucial aspect of AI governance. This includes the provision of internal grievance mechanisms within tech companies and AI developers, as well as state-level and judicial mechanisms. The analysis argues that access to remedy is fundamental for individuals who may experience harm or violations of their rights due to AI systems. By ensuring avenues for redress, AI governance can provide recourse for those affected and hold accountable those responsible for any harm caused.
The analysis also cautions against over-broad exemptions for national security or counter-terrorism purposes in AI governance. It argues that such exemptions, if not carefully crafted, have the potential to restrict civil liberties. To mitigate this risk, any exemptions should have a narrow scope, include sunset clauses, and prioritize proportionality to ensure that they do not unduly infringe upon individuals’ rights or freedoms.
Furthermore, the analysis uncovers a potential shortcoming in AI governance efforts. It suggests that while finance, business, and national security are often prioritized, human rights are not given sufficient consideration. The analysis calls for a greater focus on human rights within AI governance initiatives, ensuring that AI systems are developed and deployed in a manner that respects and upholds human rights.
The analysis also supports the ban of AI systems that are fundamentally incompatible with human rights, such as biometric surveillance in public spaces. This viewpoint is based on concerns about mass surveillance and discriminatory targeted surveillance enabled by facial recognition and remote biometric recognition technologies. Banning such technologies is seen as necessary to safeguard privacy and freedom and to prevent potential violations of human rights.
In addition to these key points, the analysis reveals a couple of noteworthy observations. One observation is the importance of multistakeholder participation and the need to engage stakeholders in the process of policymaking. This is seen as a means to balance power dynamics and address the potential imbalance between stakeholders, particularly as companies often possess financial advantages and greater access to policymakers. The analysis highlights the need for greater representation and involvement of human rights advocates in AI governance processes.
Another observation relates to the capacity and resources of civil society, especially in marginalized groups and global majority-based organizations. The analysis urges international organizations and policymakers to consider the challenges faced by civil society in terms of capacity building, resources, and finance. It emphasizes the need for more equitable and inclusive participation of all stakeholders to ensure that AI governance processes are not dominated by powerful actors or leave marginalized groups behind.
Finally, the analysis suggests that laws in countries like Canada can have a significant influence on global regulations, especially in countries with repressive regimes or authoritarian practices. This observation draws attention to the concept of the “Brussels effect,” wherein EU regulations become influential worldwide. It highlights the potential for countries with stronger regulatory frameworks to shape AI governance practices globally, emphasizing the importance of considering the implications and potential impacts of regulations beyond national borders.
In conclusion, the analysis underscores the importance of incorporating mandatory human rights due diligence, stakeholder engagement, transparency, access to remedy, and careful consideration of exemptions in AI governance. It calls for greater attention to human rights within AI governance efforts, the banning of AI systems incompatible with human rights, and the inclusion of diverse perspectives and voices in decision-making processes. The analysis also draws attention to the challenges faced by civil society and the potential influence of laws in one country on global regulations. Overall, it provides valuable insights for the development of effective and responsible AI governance frameworks.
Speaker
Latin America faces challenges in meaningful participation in shaping responsible AI governance. These challenges are influenced by the region’s history of authoritarianism, which has left its democracies weak. Moreover, there is a general mistrust towards participation, further hindering Latin America’s engagement in AI governance.
One of the main obstacles is the tech industry’s aggressive push for AI deployment. While there is great enthusiasm for AI technology, there is a lack of comprehensive understanding of its limitations, myths, and potential risks. Additionally, the overwhelming number of proposals and AI guidance make it difficult for Latin America to keep up and actively contribute to the development of responsible AI governance.
Despite these challenges, Latin America plays a crucial role in the global chain of AI technological developments. The region is a supplier of vital minerals like lithium, which are essential for manufacturing AI systems. However, the mining processes involved in extracting these minerals often have negative environmental impacts, including pollution and habitat destruction. This has led to mixed sentiments regarding Latin America’s involvement in AI development.
Latin America also provides significant resources, data, and labor for AI development. The region supplies the raw materials needed for hardware manufacturing and offers diverse datasets collected from various sources for training AI models. Additionally, Latin America’s workforce contributes to tasks such as data labeling for machine learning purposes. However, these contributions come at a cost, with negative impacts including environmental consequences and labor exploitation.
It is crucial for AI governance to prioritize the impacts of AI development on human rights. Extracting material resources for AI development has wide-ranging effects, including environmental degradation and loss of biodiversity. Moreover, the health and working conditions of miners are often disregarded, and there is a lack of attention to data protection and privacy rights. Incorporating human rights perspectives into AI governance is necessary.
Another concerning issue is the use of AI for surveillance purposes and welfare decisions by governments, without adequate transparency and participation standards. The deployment of these technologies without transparency raises concerns about citizen rights and privacy.
To address these challenges, it is necessary to strengthen democratic institutions and reduce asymmetries among regions. While Latin America provides resources and labor for AI systems designed elsewhere, AI governance processes often remain distant from the region. To ensure an inclusive and fair AI governance process, reducing regional disparities, strengthening democratic institutions, and promoting transparency and participation are essential.
In conclusion, Latin America faces obstacles in meaningful participation in shaping responsible AI governance due to the aggressive push for AI deployment and its history of authoritarianism. However, the region plays a crucial role in the global AI technological chain by providing resources, data, and labor. It is important to consider the impacts of AI development on human rights and promote transparency and participation in AI governance. Strengthening democratic institutions and addressing regional asymmetries are necessary for a more inclusive and equitable AI governance process.
Ian Barber
The analysis conducted on AI governance, human rights, and global implications reveals several key insights. The first point highlighted is the significant role that the international human rights framework can play in ensuring responsible AI governance. Human rights are deeply rooted in various sources, including conventions and customary international law. Given that AI is now able to influence many aspects of life, from job prospects to legal verdicts, it becomes essential to leverage the international human rights framework to establish guidelines and safeguards for AI governance.
Another important aspect is the ongoing efforts at various international platforms to develop binding treaties and recommendations on AI ethics. The Council of Europe, the European Union, and UNESCO are actively involved in this process. For instance, the Council of Europe is working towards the development of a binding treaty on AI, while the European Union has initiated the EU AI Act, and UNESCO has put forth recommendations on the ethics of AI. These efforts are crucial to prevent the exacerbation of inequality and the marginalization of vulnerable groups.
Stakeholder engagement is identified as a vital component of responsible AI governance. The path towards effective governance cannot be traversed alone, and it is crucial to ensure meaningful engagement from relevant stakeholders. These stakeholders include voices from civil society, private companies, and international organizations. Their input, perspectives, and expertise can contribute to the development of comprehensive AI governance policies that consider the diverse needs and concerns of different stakeholders.
One noteworthy observation made during the analysis is the importance of amplifying the voices of the global majority. Historically, many regions across the world have been left out of global dialogues and efforts at global governance. It is crucial to address this imbalance and include voices from diverse backgrounds and regions in discussions on AI governance. The workshop itself was framed as a call to action, the beginning of an ongoing collective effort to address the complexities brought about by AI.
The analysis also emphasizes the need to consider regional perspectives and involvement in global AI development. Regional developments are essential factors to take into account when formulating AI policies and strategies. This ensures that the implications and impact of AI are effectively addressed at the regional level.
Furthermore, the analysis highlights the significance of African voices in the field of responsible AI governance and the promotion of human rights. Advocating for strategies or policies on emerging technologies specifically tailored for African countries can contribute to better outcomes and equitable development in the region.
Another noteworthy point is the need to bridge the gaps in discourse between human rights and AI governance. The analysis identifies gaps in understanding how human rights principles can be effectively integrated into AI governance practices. Addressing these gaps is essential to ensure that AI development and deployment are in line with human rights standards and principles.
In conclusion, the analysis underscores several important considerations for AI governance. Leveraging the international human rights framework, developing binding treaties and recommendations on ethics, fostering stakeholder engagement, considering global majority voices, including regional perspectives, and amplifying African voices are all critical aspects of responsible AI governance. Additionally, efforts should be made to bridge the gaps in discourse between human rights and AI governance. By integrating human rights principles and adhering to the international rights framework, AI governance can be ethically sound and socially beneficial.
Shahla Naimi
The analysis explores the impact of AI from three distinct viewpoints. The first argument suggests that AI has the potential to advance human rights and create global opportunities. It is argued that AI can provide valuable information to human rights defenders, enabling them to gather comprehensive data and evidence to support their causes. Additionally, AI can improve safety measures by alerting individuals to potential natural disasters like floods and fires, ultimately minimizing harm. Moreover, AI can enhance access to healthcare, particularly in underserved areas, by facilitating remote consultations and diagnoses. An example is provided of AI models being developed to support the 1000 most widely spoken languages, fostering better communication across cultures and communities.
The second viewpoint revolves around Google’s commitment to embedding human rights into its AI governance processes. It is highlighted that the company considers the principles outlined in the Universal Declaration of Human Rights when developing AI products. Google also conducts human rights due diligence to ensure their technologies respect and do not infringe upon human rights. This commitment is exemplified by the company-wide stance on facial recognition, which addresses ethical concerns surrounding the technology.
The third perspective emphasizes the need for multi-stakeholder and internationally coordinated AI regulation. It is argued that effective regulation should consider factors such as the structure, scope, subjects, and standards of AI. Without international coordination, fragmented regulations with inconsistencies may arise. Involving multiple stakeholders in the regulatory process is vital to consider diverse perspectives and interests.
Overall, the analysis highlights AI’s potential to advance human rights and create opportunities, particularly in information gathering, safety, and healthcare. It underscores the importance of embedding human rights principles into AI governance processes, as demonstrated by Google’s commitments. Furthermore, multi-stakeholder and internationally coordinated AI regulation is crucial to ensure consistency and standards. These viewpoints provide valuable insights into the ethical and responsible development and implementation of AI.
Prateek Sibal
A recent survey conducted across 100 countries revealed a concerning lack of awareness among judicial systems worldwide regarding artificial intelligence (AI). This lack of awareness poses a significant obstacle to the effective implementation of AI in judicial processes. Efforts are being made to increase awareness and understanding of AI in the legal field, including the launch of a Massive Open Online Course (MOOC) on AI and the Rule of Law in seven different languages. This course aims to educate judicial operators about AI and its implications for the rule of law.
Existing human rights laws in Brazil, the UK, and Italy have successfully addressed cases of AI misuse, suggesting that international human rights law can be implemented through judicial decisions without waiting for a specific AI regulatory framework. By proactively applying existing legal frameworks, countries can address and mitigate potential AI-related human rights violations.
In terms of capacity building, it is argued that institutional capacity building is more sustainable in the long term compared to individual capacity building. Efforts are underway to develop a comprehensive global toolkit on AI and the rule of law, which will be piloted with prominent judicial institutions such as the Inter-American Court of Human Rights and the East African Court of Justice. This toolkit aims to enhance institutional capacity to effectively navigate the legal implications of AI.
Community involvement is crucial, and efforts have been made to make content available in multiple languages to ensure inclusivity and accessibility. This includes the development of a comic strip available in various languages and a micro-learning course on defending human rights in the age of AI provided in 25 different languages.
Canada’s AI for Development projects in Africa and Latin America have been highly appreciated for their positive impact. These projects have supported the growth of communities in creating language datasets and developing applications in healthcare and agriculture, thereby increasing the capacity of civil society organizations in these regions.
The evolution of international standards and policy-making has seen a shift from a traditional model of technical assistance to a more collaborative, multi-stakeholder approach. This change involves engaging stakeholders at various levels in the development of global policy frameworks, ensuring better ownership and effectiveness in addressing AI-related challenges.
Prateek Sibal, a proponent of the multi-stakeholder approach, emphasizes the need for meaningful implementation throughout the policy cycle. Guidance on developing AI policies in a multi-stakeholder manner has been provided, covering all phases from agenda setting to drafting to implementation and monitoring.
Dealing with authoritarian regimes and establishing frameworks for AI present complex challenges with no easy answers. Prateek Sibal acknowledges the intricacies of this issue and highlights the need for careful consideration and analysis in finding suitable approaches.
In conclusion, the survey reveals a concerning lack of awareness among judicial systems regarding AI, hindering its implementation. However, existing human rights laws are successfully addressing AI-related challenges in several countries. Efforts are underway to enhance institutional capacity and involve communities in strengthening human rights in the age of AI. The positive impact of Canada’s AI for Development projects and the shift towards a collaborative, multi-stakeholder approach in international standards and policy-making are notable developments. Dealing with authoritarian regimes in the context of AI requires careful consideration and exploration of suitable frameworks.
Audience
Different governments and countries are adopting varied approaches to AI governance. The transition from policy to practice in this area will require a substantial amount of time. However, there is recognition and appreciation for the ongoing multi-stakeholder approach, which involves including various stakeholders such as governments, industry experts, and civil society.
It is crucial to analyze and assess the effectiveness of these different approaches to AI governance to determine the most successful strategies. This analysis will inform future decisions and policies related to AI governance and ensure their efficacy in addressing the challenges posed by AI technologies.
UNICEF has played a proactive role in the field of AI for children by creating policy guidance on the topic. Importantly, they have also involved children in the process. This approach of engaging children in policy creation has proven to be valuable, as their perspectives and experiences have enriched the final product. Inclusion and engagement of children in policy creation and practices around AI are viewed as both meaningful and necessary.
Furthermore, efforts are being made to ensure responsible AI in authoritarian regimes. Particularly, there is ongoing work on engaging Technical Advisory Groups (TAG) for internet freedoms in countries such as Myanmar, Vietnam, and China. This work aims to promote responsible AI practices and address any potential human rights violations that may arise from the use of AI technologies.
Implementing mechanisms to monitor responsible AI in authoritarian regimes is of utmost importance. These mechanisms can help ensure that AI technologies are used in ways that adhere to principles of human rights and minimize potential harms.
Interestingly, it is noted that implementing policies to monitor responsible AI is relatively easier in human rights-friendly countries compared to authoritarian ones. This observation underscores the challenges faced in authoritarian regimes where governments may exert greater control over AI technologies and policies.
In conclusion, the various approaches to AI governance taken by governments and countries need careful analysis to determine their effectiveness. Engaging children in policy creation and promoting responsible AI in authoritarian regimes are fundamental steps in fostering a safe and inclusive AI ecosystem. Implementing mechanisms to monitor responsible AI poses a particular challenge in authoritarian contexts. However, policies for monitoring responsible AI are relatively easier to implement in human rights-friendly countries. These insights highlight the ongoing efforts required to develop effective AI governance frameworks that protect human rights and promote responsible AI use.
Oluseyi Oyebisi
The analysis highlights the importance of including the African region in discussions on AI governance. It notes that the African region is coming late to the party in terms of participating in AI governance discussions and needs to be included to ensure its interests are represented. The argument presented is that African governments, civil society, and businesses should invest in research and engage more actively in global conversations regarding AI governance.
One of the main points raised is the need for Africa to build technical competence to effectively participate in international AI negotiations. It is mentioned that African missions abroad must have the right capacity to take part in these negotiations. Furthermore, it is noted that universities in Africa are not yet prepared for AI development and need to strengthen their capabilities in this area.
Additionally, the analysis suggests that African governments should consider starting with soft laws and working with technology platforms before transitioning to hard laws. It is argued that this approach would allow them to learn from working with technology platforms and progress towards more rigid regulations. The need for regulation that balances the needs of citizens is emphasized.
The analysis also highlights the need for African governments, civil society, and businesses to invest in research and actively engage in global platforms related to AI governance. It is mentioned that investment should be made in the right set of meetings, research, and engagements. Bringing Africans into global platforms is seen as a crucial step towards ensuring their perspectives and needs are considered in AI governance discussions.
Overall, the expanded summary emphasizes the need to incorporate the African region into the global AI governance discourse. It suggests that by building technical competence, starting with soft laws, and actively engaging in research and global platforms, African countries can effectively contribute to AI governance and address their specific development challenges.
Session transcript
Ian Barber:
Hope everyone’s doing well. Thank you so much for joining this session. One of the many this week on AI and AI governance, but with a more focused view and perspective on a global human rights approach to AI governance. My name is Ian Barber. I’m legal lead at Global Partners Digital. We’re a civil society organization based in London working to foster an online environment underpinned by human rights. We’ve been working on AI governance and human rights for several years now. So I’m very happy to be co-organizing and facilitating this alongside Transparencia Brazil, who is our online moderator. So thank you very much. What I’ll be doing over the next few minutes is providing a bit of introduction to this workshop, setting the scene, introducing our fantastic speakers, both in person and online, and providing a bit of structure as well for the discussion that we’re having today and some housekeeping rules. Really, this workshop is meant to acknowledge that we stand at the intersection of two realities, the increasing potential of artificial intelligence on one hand and the ongoing relevance of the international human rights framework on the other. When we think of a human rights-based approach to AI governance, a few things come to mind. Firmly and truly grounding policy approaches in the international human rights framework, the ability to assess risks to human rights, promoting open and inclusive design and deployment of AI, as well as ensuring transparency and accountability amongst other elements and measures. And given this, it’s probably not news to anyone in the room that the rapid design, development, and deployment of AI demands our attention, our understanding, and our collaborative efforts across various different stakeholders. Human rights, which are enshrined in various sources such as conventions and customary international law, and their dynamic interpretation and evolution, work to guide us towards a world where people can continually exercise and enjoy their human rights and thrive without prejudice or discrimination or other forms of injustice. And like any technology, AI poses both benefits and risks to the enjoyment of human rights. I’m sure you’ve attended other sessions this week where you spoke in a bit more detail about what those look like in various sectors and on different civil, political, economic and social rights. But today, what we’re gonna be doing is narrowing in on a few key questions. The first is how can the international human rights framework be leveraged to ensure responsible AI governance in a rapidly changing context and world that we live in? And I think this question is important because it underscores how AI is now able to influence so many things, from our job prospects to our ability to express ourselves to legal verdicts. And so how we ensure that human rights continue to be respected, protected and promoted is key. Secondly, we must reflect upon the global implications for human rights in the kind of ongoing proliferation of AI governance frameworks that we’re seeing today. And also, in the potential absence of effective frameworks, what is the result and what are we looking at? There has been this ongoing proliferation of efforts at the global, regional, national level to provide frameworks, rules and other types of normative structures and standards that are supposed to promote and safeguard human rights. For example, just to highlight a few, there’s ongoing efforts at the Council of Europe to develop a binding treaty on AI.
There’s the European Union’s efforts with the EU AI Act. There’s UNESCO’s Recommendation on the Ethics of AI, which is finalized but currently undergoing implementation. And other efforts, such as the more recently proposed UN High-Level Advisory Body on AI. But at this point, we’ve yet to see comprehensive and binding frameworks enacted which might be considered, you know, effective and sufficient to protect human rights. And without these safeguards and protections, we therefore risk kind of exacerbating inequality, silencing marginalized groups and voices and inadvertently creating a world where AI serves more as a divider than it does a promoter of equality. So what do we wanna see and what do we want to do to ensure that this is not the case and not the future that we’re looking at? And lastly, over the next 80 or so minutes, the path towards responsible AI governance is not one that can be kind of traversed alone. So we need to navigate these challenges together, fostering meaningful engagement by all relevant stakeholders. That’s why on this panel, we have voices from civil society, from private companies, from international organizations, which are all needed. And we also need to particularly amplify voices from the global majority. Historically, many regions across the world have been left out of global dialogues and efforts at global governance. And that’s very much the case when it comes to AI as well. So this workshop is, it’s not just a gathering I see, it is one, it’s one for information sharing, but it’s also a call to action. It’s really, I think, the beginning of an ongoing collective effort to address a range of complexities that have come about from AI and to really work to ensure the ongoing relevance of our shared human values and of human rights. So with that intro and framing, I’d like to get started, get the ball rolling and, kind of drawing from the diverse range of experiences here, really talk about what we want in terms of a global human rights approach to responsible AI governance. And to do that, we have an all-star lineup of speakers from, again, a number of different stakeholders. I’m going to briefly introduce them, but I encourage you all, when you make your interventions, to provide a bit more background on where you come from, the type of work you do, and really why you’re here today and your motivations. And in no particular order, we have Marlena Wisniak from the European Center for Nonprofit Law, to my left. We have Vladimir Jure from Derechos Digitales, who’s over there. We also have Tara Denham from Global Affairs Canada, and we have Prateek as well from UNESCO. So thank you for all being here in person. And online, we have Shahla Naimi from Google, and Oluseyi Oyebisi from the Nigeria Network of NGOs, or NNNGO. In terms of structure, we have a bit of time on our hands. And what we’re going to do then is divide the session into two parts. The first part is going to be looking at a particular focus on the international human rights framework, and also this ongoing proliferation of regulatory processes on AI that I’ve kind of alluded to already. We’ll then take a pause for questions from the audience, as well as those joining online as well. And I want to give a special shout out to Marina from Transparencia Brazil, who is taking in questions and feeding them into me so that we can have a hybrid conversation.
And then after this first part, we’ll stop and we’ll have a second part, and that’ll look a bit more at inclusion of voices in these processes, and how engagement from the global majority is imperative. And that will be followed by a final brief Q&A session, and then closing remarks. So I hope that makes sense. I hope that sounds structured enough and productive, and I look forward to your questions and interventions later. But let’s get into the meat of things. Looking at the international human rights framework, we’re at a point where there are various efforts on global AI governance happening at a breakneck speed. And there’s a number of them that I’ve mentioned, including the Hiroshima process that was just spoken about yesterday, if you guys read the main event. So my first question and kind of my prompt is to my left, Marlena. Really, given your work at ECNL and kind of the ongoing efforts you have to advocate for rights-respecting approaches in these types of AI regulatory processes, what do you consider or think is missing in terms of aligning them with the international human rights framework? And again, if you could provide a brief background and introduction, that’d be great, thanks.
Marlena Wisniak:
Sure, thanks so much Ian and hi everyone. Welcome to day two, I think it is, of IGF. It feels like a week already. So my organization, the European Center for Nonprofit Law, is a human rights org that focuses on civic space, freedom of assembly and association. And also we work a lot on freedom of expression and privacy. And over the past five years, we’ve noticed that AI was a big risk and, to some extent, an opportunity, but with great potential for harm as well for activists, journalists and human rights defenders around the world. So the first five years of our work in this space were rather quiet, or I’d say it was more of a niche area with only a handful of folks working at the intersection of human rights and AI. And by handful, I really mean like 10 to 15. And this year, the discussion around AI has really expanded very, very quickly, and it may be a ChatGPT kind of trailblazer issue, but it’s great to see that at the UN there is interest for this topic and panels like this that bring a human rights-based approach to AI. So Ian mentioned a couple of the ongoing regulations. I won’t bore you this morning with a lot of legalese, but the core frameworks where we focus on and advocate for a human rights-based approach at ECNL are obviously the EU AI Act, and trilogues are happening as I speak right now, the Council of Europe Convention on AI, and national laws as well; we’ve seen these expand a lot around the world recently. We engage in standardization bodies, so like the US NIST, the National Institute of Standards and Technology, and the EU’s CEN-CENELEC, and of course, international organizations like the OECD and the UN, and you mentioned, Ian, the Hiroshima process, that’s one we’re following closely as well. In the coming years, as the AI Act is said to be accepted in the next couple of weeks, and definitely by early 2024, we’ll be following the implementation of the Act, and so I’ll use this as a segue to talk to you a little bit about what are the core elements that we see should be part of any AI framework and AI governance from a human rights-based approach, and that begins with human rights due diligence and meaningful human rights impact assessments in line with the UN Guiding Principles on Business and Human Rights. So we really see, with AI, an opportunity to implement mandatory human rights due diligence, including human rights impact assessments. In the EU space that also involves other laws, but beyond the EU, globally, the UN and other institutions and fora have an opportunity right now to actually mandate meaningful, inclusive, and rights-based impact assessments. That means meaningfully engaging stakeholders as well, especially external stakeholders like civil society organizations and affected communities around the world. So stakeholder engagement is a necessary and cross-cutting component of AI governance, development, and use, and at ECNL, we look both at how to govern AI and then how it’s developed and how it’s deployed around the world. We understand stakeholder engagement as a collaborative process where diverse stakeholders, both internal and external, internal meaning those that develop the technologies themselves, can meaningfully influence decision making. So on the governance side of things: when we’re consulted in these processes, including a multi-stakeholder forum like IGF, are our voices actually heard? Can they impact the final text and provisions of any laws or policies that are implemented?
And on the AI design and development side of things, when tech companies or any deployer of AI consults external stakeholders, do they actually implement, do they include their voices and do these voices inform and shape final decision making? In the context of human rights impact assessments of AI systems, stakeholder engagement is particularly effective to understand what kind of AI systems are even helpful or useful and how they work. So looking at the product and service side of AI, machine learning or any algorithmic-based data analytics systems, we really can shape better regulation and develop better systems by including these stakeholders. Importantly, external stakeholders can identify specific potential positive or adverse impacts on human rights, such as the implications, benefits and harms of these systems on people, looking at marginalized and already vulnerable groups in particular. If you’re interested to learn more about stakeholder engagement, check out our framework for meaningful engagement. So, shameless plug, Google it or go on our website and look up Framework for Meaningful Engagement, where we provide concrete recommendations for engaging internal and external stakeholders in AI systems. And these recommendations can also be used for AI governance as a whole. Moving on, I’d like to touch on transparency briefly, which, in addition to human rights impact assessments and stakeholder engagement, we see as a prerequisite for AI accountability and a rights-based global AI governance. So, not to go too much into detail, but we believe that AI governance should mandate that AI developers and deployers report on data sets, including training data sets, performance and accuracy metrics, false positives and false negatives, human in the loop and human review, and access to remedy. If you’d like to learn more about that, I urge you to look at our recent paper, published with Access Now just a couple weeks ago, on the EU Digital Services Act, with a spotlight on algorithmic systems, where we outline our vision for what meaningful transparency would look like. Finally, access to remedy is a key part of any governance mechanism. That includes both internal grievance mechanisms within tech companies and AI developers, as well as, obviously, remedy at the state level and judicial mechanisms; as a reminder, states have the primary responsibility to protect human rights and provide remedy when these are harmed. And one aspect, I’d say, that we often see in AI governance efforts, especially by governments, is the inclusion of an exemption for national security or counter-terrorism and, broadly, emergency measures. And at ECNL, we caution against over-broad exemptions that are too vague or broadly defined, as these can be, at best, misused and, at worst, weaponized to restrict civil liberties. So, if there are any exemptions for things like national security or counter-terrorism in AI governance, we really urge a narrow scope, sunset clauses for emergency measures, meaning that if any exemptions are in place, they will end within due time, and a focus on proportionality. And finally, what is missing? So what we see today, both in the EU and globally as well, is that AI governance efforts mostly take a risk-based approach. And the risk considered is often to finance, business, and, as I mentioned, national security, terrorism, these kinds of things, but rarely human rights.
And the AI Act itself in the EU is regulated under a product liability and market approach, not fundamental rights. In our research paper of 2021, we outlined key criteria for evaluating the risk level of AI systems from a human rights-based approach. And that means that we recommend determining the level of risk based on the product design, the severity of the impact, any internal due diligence mechanisms, the causal link between the AI system and adverse human rights impacts, and the potential for remedy. And all these examples help us really focus on the harms of AI to human rights. Last thing, and then I’ll stop here: where AI systems are fundamentally incompatible with human rights, such as biometric surveillance deployed in public spaces, including facial and emotional recognition, we, along with a coalition of civil society organizations, advocate for a ban of such systems. And we’ve seen a proliferation of such bans, like in the US, for example, at the state level, and right now in the latest version of the AI Act adopted by the European Parliament. So that means prohibiting the use of facial recognition and remote biometric recognition technologies that enable mass surveillance and discriminatory targeted surveillance in public and publicly accessible spaces by the government. And we urge the UN and other processes, such as the Hiroshima process, to include such bans. Thank you, Ian.
Ian Barber:
Thank you, Marlena. That was amazing. I think you actually just followed up on my immediate question, which was what is really needed when it comes to AI systems that do pose an unacceptable risk to human rights? So thank you for preemptively responding. And I very much agree that having mandatory due diligence, including impact assessments of human rights, is imperative. I think what you spoke to in terms of stakeholder engagement rings true, as well as the issue of transparency and the need for that to foster meaningful accountability and also introduce remedies. So thank you very much for that overview. I think based on that, considering that there are these initiatives and there are so many different elements to consider, whether it’s transparency, accountability, or scope, I’ll turn to you, Tara, and ask, given all this, how is a government such as Canada approaching AI governance and considering human rights, both in terms of your domestic priorities and in terms of regional or international engagement? So if you could speak a bit to how these are all feeding together, that’d be great. Thank you.
Tara Denham:
Sure. Thank you. And thank you for inviting me to participate on the panel. So as I said, I’m Director General of the Office of Human Rights, Freedoms, and Inclusion at Global Affairs Canada, which I think also warrants perhaps a bit of an explanation, but I think actually aligns really well as a starting position. Because within the Office of Human Rights, Freedoms, and Inclusion is actually where the responsibility for digital policy and cybersecurity policy has been embedded from a global affairs perspective. And so that was our starting point, from which, since the integration of those policy positions and that policy work a number of years ago, it was always starting from a human rights perspective. And so this goes back, I think, about six or seven years that we actually created this office and integrated the human rights perspective into our digital policy from the beginning and some of our initial positions on the development of AI considerations and the geopolitics of artificial intelligence. So I think that, in and of itself, is perhaps unique in some of the structures. Having said that, I would also acknowledge that across a lot of the government structures, we are all trying to figure out how to approach this, but as the DG responsible for these, it does give a great opportunity to, from the beginning, integrate that human rights policy position. When we were first starting to frame some of our AI thinking from a foreign policy lens, it was always from the human rights perspective. I can’t say that that has always meant we’ve known how to do it, but I could say that’s always been pushing us to think and challenge ourselves: how can we use the existing human rights frameworks, how can we advocate that at every venue, including domestically. I wanted to give, perhaps, a snapshot of the framing of how we’re approaching it in Canada, some of our national perspectives, and then how we’re linking that to the international, and of course, how we’re integrating a diversity of voices into that in a concrete way. I would say when we started talking about this a number of years ago, there was the debate, and I’m sure many of you participated in this debate, a lot around should it be legislation first, should it be guiding principles, are there frameworks, are we going to do voluntary. For a number of years, that was the cycle we were in, and I would say over the last year and a half to two years, that’s not a debate anymore. We have to do all of them, and they’re going to be going at the same time. Right now, I think where I’m standing is it’s more about how are we going to integrate and how are we going to feed off of each other as we’re moving domestic at the same time as the international. Typically, from a policy perspective, you would have your national positions defined and those would inform your international positions. Right now, the world is just moving at an incredible pace, so we’re doing it at the same time, and we have to find those intersections, but it also takes a conscious decision across government, and when I say across government, I mean across our national government. And of course, this is within the framework, which we’re all very familiar with, which is: domestically, we are also all aiming to harness AI to the greatest capacities, because of all of the benefits that there are, but we’re always very aware of the risks.
And so that is a very real tension that we need to always be integrating into the policy discussions that we’re having. And our belief and our position in our national policy development and international is that that is where the diversity of voices is absolutely required, because the risk views will be very different depending on the voice and the community that you’re inviting and that you’re actually engaging in the conversation in a meaningful way. So it’s not just inviting them to the conversation, it’s actually listening and then shaping your policy position. So in Canada, what we’ve seen is, and I’m not going to go into great detail, but just to give you a snapshot of where we’ve started: within the last four years, we’ve had a directive on how automated decision making will be handled by the government of Canada, and that was accompanied by an algorithmic impact assessment tool. That was sort of the first wave of direction that we gave in terms of how the government of Canada was going to engage with automated decision making. Then over the last little while, again, in the last year, there’s been a real push related to generative AI. So now, I think it was just in the last couple months, there was the release of a guide on how to use generative AI within the public sector. A key point I wanted to note here is that it is a requirement to engage stakeholders before deploying generative AI by the government of Canada. Before we’re actually going to roll it out, we have to engage with those that will actually be impacted, whether it be for public use or service delivery. And then just last month, there was a voluntary Code of Conduct on Responsible Development and Management of Advanced Generative AI Systems. This, again, we’ve seen the U.S. with similar announcements. We’ve seen the G7, work that we’re doing in the G7. And a lot of these codes of conduct and principles are coming out at the same time, and this is also accompanied in Canada by working through legislation, so that we also have an AI and Data Act going through legislation. So, as I said, these are the basis of the regulations and the policy world that we’re working in within Canada. And what I’d comment there is that these are all then developed by multiple departments. Okay, so that’s where I think we’re challenging ourselves as policymakers, because we have to also increase our capability to work across the sectors, across the departments. And I would say from where we started when we were developing Canada’s Directive on Automated Decision Making, through to the actual Code of Conduct that was just announced, that was moving from, you know, informal consultations across the country, trying to engage with private sector and academia, to the voluntary code being consulted. So, we have a national table set up now, which does include private sector, civil society, federal, provincial, territorial governments, Indigenous communities. So, we’ve also had to make a journey through what it means to go from sort of ad hoc consultation to formalized consultation when we’re actually developing these codes. So then, how does that translate internationally? As we’re learning domestically at a rapid pace, perhaps I can just pull on a few examples of how we’ve then tried to reflect that internationally. And I’m going to harken back to the UNESCO Recommendations on the Ethics of AI from 2021.
So, this is where, again, it was making that conscious decision about harnessing our national tables that were in place to define our negotiating positions when we would be going internationally, given that, again, our national positions weren’t as defined. And then we also wanted to leverage the existing international structures, and I think that’s really important as we talk about the plethora of international structures at play. So, this is where we’ve used the Freedom Online Coalition. So, you have to look at the structures that you have, the opportunities that exist, and what are the means by which we can do wide consultation on the negotiating positions that we’re taking. So, for the UNESCO Recommendations, that’s where we used the Freedom Online Coalition, and they have the advisory network, which also includes civil society and tech companies. So, again, it’s about proactively seeking those opportunities, shaping your negotiating positions in a conscious way, and then bringing those to the table. We’re also involved in the Council of Europe negotiations on AI and human rights, which is, again, leveraging our tables, but it’s also advocating to have a more diverse representation of countries at the table. So, you have to seize the opportunity. We do see this as an opportunity to engage effectively in this negotiation, and we want to continue to advocate that more countries are participating, and that more stakeholder groups can engage. So, maybe I’ll just finish by saying some of the lessons that we’ve learned from doing this. It’s really easy to recite that and make it sound like it was, you know, easy to do. It’s not. Some of the lessons I would pull out: number one, stakeholder engagement requires a deliberate decision to integrate from the start. And I guess the most important word in that one is deliberate. You have to think about it from the beginning, you have to put that in place. As I’ve said a few times, you have to think about and make sure that you’re creating that space for the voices to be heard, and then actually following through on that. The second one, it does take time, it’s complex, and there will be tensions, and there should be tensions, because if there’s not tensions in the perspectives, then you probably haven’t created a wide enough table of a diversity of voices. So you have to, and I think my team is probably tired of me saying this, but you have to get comfortable with living in a zone of discomfort. If you’re not in a zone of discomfort, you’re probably not pushing your policy, your own, your view, and again, I’m coming from a policy perspective, and you have to do that to find the best solutions. As policymakers, it is going to also drive us to sort of increase our expertise. So we’re seeing a lot of, you know, yes, we would traditionally come to the tables with our policy knowledge, and our human rights experience, and those sorts of elements, but I think, you know, we’ve tried a lot of different things in terms of integrating expertise into our teams, integrating expertise into our consultations, so you have to sort of think about what it’s going to mean in a policy world to now do this. And finally, I’ll just say, again, leveraging the structures that are in place. We have to optimize what we have.
It’s, I think, sometimes easier to say, well, it’s broken, and let’s create something new, but I do want to think that we can continue to optimize, and if we’re going to create something new, we, again, it’s a conscious decision to think about what is missing from what we have that needs to be improved upon. Perhaps I’ll stop there.
Ian Barber:
Thank you, Tara. That was great and really comprehensive. I think in the beginning, you alluded to the challenges in applying the international human rights system to the work that you’re doing, but I’m glad Canada is very much doing that and taking this multi-pronged approach that puts human rights front and center at both the national and international levels. And I really agree that there is very much a need for deliberate stakeholder engagement, and I appreciate the work that you’ve been doing on that, and also the need to leverage existing structures, ensuring that these conversations are truly global and inclusive, and ensuring that the expertise is there as well. So thank you so much. And I think your comments on UNESCO actually serve as a perfect segue to my next prompt, which I’ll be turning to Pratek to discuss. UNESCO developed the Recommendation on the Ethics of AI a couple of years ago. As has been alluded to, the conversation has moved from “do we need voluntary, self-regulatory, or non-binding instruments?” towards “do we perhaps need something more binding?”, and I think that is very much the direction of travel now. But I’m curious to hear from you a bit more about your experience at UNESCO in terms of implementing the Recommendation at this point, and how UNESCO in general will be playing a larger role in AI governance and human rights moving forward. So thank you.
Pratek Sibal:
Thanks, Ian. How much time do I have? You have five to six minutes, but there’s no rush; I want to hear your comments and your interventions. First of all, thanks for organizing this discussion on human rights-based approaches to AI governance. I will perhaps focus more on the implementation part and share some really concrete examples of the work that we are doing with both rights holders and duty bearers. Perhaps first, it’s good to mention that the Recommendation on the Ethics of AI is human rights-based: it has human rights as a core value, and it is really informed by human rights. Now, I would focus first on the judiciary. While we are talking about the development of voluntary frameworks, binding instruments, and so on, there’s a separate discussion about whether it’s even possible, in this fractured world that we are living in, to have a binding instrument. It’s very difficult: if you are going to go and negotiate something today, it’s very difficult to get a global view. So we have a Recommendation which is adopted by 193 countries, and that’s an excellent place to start, and I’m really looking forward to the work that colleagues at the Council of Europe are doing to have a regional instrument, also working with other countries. So in my team, in my files, we also started looking at the judiciary, because you can already start working with duty bearers and implement international human rights law through their decisions. But the challenge that you face is that a lot of the time they don’t have enough awareness about what AI is and how it works; there’s a lot of myth involved. And there is also this assumption about the technology: if an AI system is being used, and in a lot of countries they are used for predictive purposes, people will say, oh yeah, it’s the computer algorithm which is giving the score, it must be right. So all these kinds of things need to be broken down and explained, and then the relevant links with international human rights law need to be established. This is what we started to do sometime around 2020. We at UNESCO have an initiative called the Global Judges Initiative, which started in 2013, where we work on freedom of expression, access to information, and the safety of journalists. Through this work, we’ve reached about 35,000 judicial operators in 160 countries, through online trainings in the form of massive open online courses, in-person trainings, and helping national judicial training institutions develop curricula. Around 2020, we started to discuss artificial intelligence. Of course, the Recommendation was under development, and we were already thinking about how we could actually implement it beyond the great agreement that we have amongst countries. We first launched a survey to this network, and about 1,200 judicial operators, and when I say judicial operators I mean judges, lawyers, prosecutors, and people working in legal administrations, responded from about 100 countries. And they said two things. First, we want to learn how AI can be used within judicial and administrative processes, because in a lot of countries they are overworked and understaffed. I’ve been talking to judges and they’re like, yeah, if I take a holiday, my colleagues have to work like 16 hours a day. And that is a key driver for them to look at how the workload can be streamlined. The next aspect is really about the legal and human rights implications of AI.
And when it comes to, say, freedom of expression or access to information, let me give you some examples here. For instance, in Brazil there was a case in the Sao Paulo metro system: they were using a facial recognition system on the doors to detect your emotions and then show advertisements. And, I think it was the data protection authority in Brazil which said that you can’t do that: you have no permission to collect this data, and so on. And this did not require an AI framework. So my point is that we should not think in just one direction, that we have to work on a framework and then implement human rights. We already have international human rights law, which is part of jurisprudence in a lot of countries and which can be used directly. So let’s not give people a reason to wait until there is an AI regulation in their country. Giving you some other examples: in Italy, for instance, they have these food delivery apps like Deliveroo, and there’s another one called Foodinho. And there were two cases there where basically one of those apps, I don’t remember which one, was penalizing food delivery drivers if they canceled their scheduled deliveries, for whatever reason, by giving them a negative score. The algorithm was found to be biased: it was giving more negative points to those who canceled vis-a-vis the others. And the Data Protection Authority, on the basis of the GDPR, basically said that this could not continue. Then we had the case Marlena was mentioning about facial recognition in the public sphere: I think it was in the UK, where the South Wales Police was using facial recognition systems in the public sphere, and this went to the Court of Appeal, which said, you can’t do this. These are just examples of what is already happening and how people have already applied international human rights standards. Now, what are we doing next? In our work with the judiciary, we launched in 2022 a massive open online course on AI and the rule of law, which covers all these dimensions, and we made it available in seven languages. And it was a kind of participative dialogue: we had the president of the Inter-American Court of Human Rights, we had the Chief Justice of India, we had professors, we had people from civil society coming and sharing their experiences from different parts of the world, because everyone wants to learn in this domain. As Canada was mentioning, there’s a lot of scope to learn from practices in other countries. That was our first product, and it reached about 4,500 judicial operators in 138 countries. Now, we realized that individual capacity building is one thing, but we need to focus more on institutional capacity building, because that’s more sustainable in the long term. So we’ve now, with the support of the European Commission, developed a global toolkit on AI and the rule of law, which is essentially a curriculum with four modules, covering, among other things, the human rights impact assessments that Marlena was talking about before. We are actually going to go to the judiciary and say, okay, this is how you can break things down; this is how you look at data. What is the quality of the data? When you’re using an AI system, how do you check what data was used and whether it was representative or not?
So we are breaking these things down practically for them, so they can at least start questioning. You don’t expect judges to become AI experts at all, but at least to have the mindset to say, oh, it’s a computer, but it is not infallible. We need to create that. So we have this curriculum, which we developed through an almost year-long process of reviews and so on. Now we have the pilot toolkit available, which we are implementing first with the Inter-American Court of Human Rights in November, actually next month, for a regional training. We will also get their feedback, because it’s important to work with the community, including the trainers, on what works for them. We are hopefully going to do it for the EU, and we are going to do it in East Africa with the East African Court of Justice next year; in fact, we are hosting a conference with them later this month in Kigali. So at this moment we are piloting this work, organizing these national and regional trainings with the judiciary, and then, as a next step, hoping that this curriculum is picked up by national judicial training institutions and integrated, so that they own it, they shape it, they use it. And that is how we see international human rights standards percolating down into enhanced capacities through this kind of program. And as an open invitation: the toolkit that we have, we are just piloting it, so we are open to feedback from the human rights experts here on how we could further improve and strengthen it. Perhaps I’ll briefly just mention the rights holders’ side as well. We’ve also developed some tools for youth, or even the general public, you could say, to engage them in a more interesting way. We have a comic strip on AI, which is now available in English, French, Spanish, and Swahili, and I think a language of Madagascar as well, with German and Slovenian coming soon. These are tools that we make available to communities so they can also co-own them and develop their own language versions, because part of strengthening human rights globally is making that content available in different languages, so people can associate with it better. We have a course on defending human rights in the age of AI, which is available in 25 languages. It’s a micro-learning course on a mobile phone, which we developed in a very collaborative way with UNITAR, the United Nations training and research institution, as well as a European project called Saltopi, which involved youth volunteers who wanted to take it to their communities and say, actually, in our country we want this to be shared, and so on. So there are a number of tools that we have, and communities of practice with whom we work on capacity building, actually translating some of these high-level principles, frameworks, and policies, hopefully a few years down the line, into judgments, which become binding on governments, on companies, and so on. I’ll stop here. Thank you.
Ian Barber:
Thank you. That’s great. And thank you for reminding us that we already have a lot of frameworks and tools that can be leveraged and that are already being applied in domestic contexts as well. I really commend your work on AI and human rights in the judiciary. I think it’s important to consider that we need to work on the institutional knowledge and capacity that you were speaking to, and also to work with various stakeholders in an inclusive manner. So thank you. At this point, we’ve heard from Marlena about what’s truly needed from a human rights-based approach to AI governance; we’ve heard from Tara what some governments and states like Canada are doing to champion this approach at the domestic and national levels; and we’ve heard from Pratek about the complementary implementation work being done by international organizations. So I want to pause at this point to see if anyone on the panel has any immediate reactions to anything that’s been said. And then we might have time for one quick question before we change directions a little bit. But if there are any immediate reactions, feel free to jump in. If not, that’s OK, too. And from online as well. If not, we can go to a brief question, if that’s possible; please feel free to jump in. I think there’s a microphone there, but we can also hand one over. If you could introduce yourself, that’d be great too, thank you.
Audience:
Okay, thank you. I’m Steven Vosloo, a policy specialist at UNICEF, and it’s really great to hear about the different initiatives that are happening and the different approaches. And maybe it’s natural, as Thomas Schneider was saying in the previous session, that we will see many different governments and countries approaching this differently, because nobody really knows how to do this. So this is more of a request: to think about not just the what of governance, but also the how, and to do analysis of these different approaches to see what works, from voluntary codes of conduct to more industry-specific legislation. I think that’s really the next phase as we go from policy to practice. This will play out over a number of years, but that analysis would really be helpful from the UNESCOs and the OECDs, who are already starting to build up this knowledge base. Clearly, there are going to be some things that work well and some that don’t. We also engage children. We created policy guidance on AI for children and engaged children in that process, and it was a very meaningful and necessary process that really informed and enriched the product. So it’s really encouraging to hear about the multi-stakeholder approach that’s ongoing, not just ad hoc. But yeah, that’s the request. And perhaps, if you have any thoughts on how these approaches may play out if we look ahead, and what role the organizations that you’re in might play, not just in documenting what may be governed, but also how. Thank you.
Tara Denham:
First of all, as a mom, I would love to see that information about AI for children. That’s fantastic. On your comment about needing to do the analysis of what’s working and what’s not: this is where we need to build that capacity globally, because it’s one thing for Canada to do analysis on what’s working in Canada, but we have to really understand what the risks are and how AI is having an impact in different communities and different countries. This is where we have been working with, and I don’t know if there are any colleagues in the room, the International Development Research Centre in Canada, IDRC, which does a lot of funding and capacity building in different nodes around the world, specifically on AI capacity building and research. And so that’s where we’ve also had to really link up, so that we can leverage, as fast as possible, the research that they’re supporting. So again, it’s challenging ourselves as policymakers to keep seeking it out, but that research is there, and we just need more of it. I just wanted to advocate for that. Thank you.
Marlena Wisniak:
Yeah, thanks so much for that question. I definitely support multistakeholder participation and engaging stakeholders in the process of policymaking itself. One challenge that we see a lot is that there’s no level playing field between different stakeholders. I don’t know if there are many companies in the room, but we often see that companies have a disproportionate advantage, I’d say, financially and in access to policymakers. When I mentioned at the beginning of my intervention that there’s a handful of human rights folks that participate in AI governance, it really is a handful compared to the hundreds, actually thousands, of folks in the public policy sections of companies. So that’s something that I would urge international organizations and policymakers at the national level to consider: for civil society it really is an uphill battle in terms of capacity, resources, and finances, and obviously marginalized groups and global majority-based orgs are disproportionately hit by that. So for Canada, as the Canadian government, I imagine you primarily engage with national stakeholders, which is obviously important, and I also encourage you to think about how Canadian laws can influence, for example, global majority-based regulation. That’s something we think about a lot in the EU with the so-called Brussels effect, understanding that many countries around the world, especially those with more repressive regimes or authoritarian practices, do not necessarily have the democratic institutional pillars that the EU or Canada would have. So that’s just added nuance to multistakeholderism: yes, and in a way that really enables inclusive and meaningful participation of all. Thank you.
Pratek Sibal:
A couple of quick points. First, on Canada: I think they’re doing a fantastic job, for instance, in Africa and Latin America with the AI for Development project, and since 2019 I have seen the kind of communities that have come up and been supported to develop, say, language datasets, which can then lead to the development of applications in healthcare or agriculture, or simply strengthen, in a more sustained way, the capacities of civil society organizations that can inform decision-making and policy-making. We at UNESCO have particularly benefited from this, because where the Recommendation on the Ethics of AI is being implemented in a lot of countries, we work in a multi-stakeholder manner, right? We generally have a national multi-stakeholder group which convenes and works. And there, the capacity of civil society organizations to actually analyze the national context and contribute to these discussions is very important. So the work that Canada and IDRC and so on are doing: I have seen results of that in my own work over the past four or five years already. There’s good credit due there. On your point about policymaking at the international level, recommendations and so on: the process of international standard-setting and policymaking has evolved over the years. We used to be in a mode of technical assistance many years ago, where someone would go to a country and help them develop a policy; an expert would fly in, stay there for some months, and work. That model is changing, in the sense that you now develop policies or frameworks at the global level with stakeholders from the national level involved in their development. So what happens is that when they have developed something at the global level and then have to translate it to the national level, they naturally go towards this framework, on which they have worked and of which they have great knowledge. It’s an implicit way of policy development, and it has actually been the model since the early 2000s, because otherwise there is not enough funding available, and it’s also not sustainable. Global frameworks developed in a more consultative manner carry more ownership, and they then become the natural go-to tools at the national level as well. So that’s, I think, an interesting way to develop. And that’s why we are talking about multi-stakeholderism. A lot of times in fora like this, multi-stakeholderism just becomes a buzzword: yes, we should have everyone at the table. That is not all it means. We’ve actually produced a guidance on how to develop AI policies in a multi-stakeholder manner along the policy cycle, from agenda setting to drafting to implementation and monitoring. And there’s a short video as well, which I’m happy to share with the community later. Thank you very much.
Ian Barber:
I know we have one speaker. Just really quickly, if you could ask your question; I have three more interventions from people, including online, so maybe they can consider your question in their responses. And if not, then we can come back to it at the end. I just want to ensure that we make time for them. So if you can be brief, that’d be very much appreciated.
Audience:
Can I ask a question? Okay, thank you so much. Svetlana Zenz, Article 19. I’m working on engaging TAG for internet freedoms in Asian countries: Myanmar, Vietnam, China. And my question is actually more for UNESCO and Canada at some point, because they are the ones providing global policies. Would you recommend some mechanisms which we could implement in authoritarian-regime countries to monitor responsible AI, especially on the private sector side? Because in the Western world, or the world which is more human rights friendly, it’s easier to implement those policies than in authoritarian countries. Thank you.
Ian Barber:
Thank you very much. We’ll be coming back to these questions as well, and I think that’s actually a bit of a good segue to the next intervention. I’m going to turn to Shahla, who’s joining online from Google, from the private sector, as it’s important to also consider all stakeholders in the room. Shahla, if you’re connected with us, my question for you is: aside from these government and multilateral efforts, it’s obviously clear that the private sector plays a key role in promoting human rights and AI governance frameworks. So could you speak about your work at Google, its perspective and ongoing efforts on AI governance, and how you’re working to promote human rights? And if you can speak to the questions that have been asked, that’d be fantastic as well. Thank you so much for joining, and for your patience.
Shahla Naimi:
Sure, thank you so much for having me today. And apologies, I was unable to join in person, but I really do appreciate the chance to join virtually. I’ll try to keep this brief; I want to make sure we get to a more dynamic set of questions, and I know there are other speakers as well. But to take a step back, I sit on Google’s human rights program. For those who are not familiar, it’s a central function responsible for ensuring that we’re upholding our human rights commitments, and I can share more on that later, but it really applies across all the company’s products and services, across all regions. This includes overseeing the strategy on human rights, advising product teams on potential and actual human rights impacts, and, quite relevant to this discussion, conducting human rights due diligence and engaging external experts, rights holders, stakeholders, et cetera. And maybe just to take a brief step back, I’ll share a little bit of our starting point as a company, which is really true excitement about the ways that AI can advance human rights and really create opportunities for people across the globe. That doesn’t just mean potential advancements, but progress that we’re already seeing: putting more information in the hands of human rights defenders in whatever country they are in; keeping people safer from floods and fires, particularly knowing that these disproportionately affect the global majority; increasing access to health care; and, one that I’m particularly excited about, something we call our 1,000 Languages Initiative, which is working on building AI models that support the 1,000 most widely spoken languages. We obviously live in a world where there are over 7,000 languages, so I think it’s a drop in the bucket, but we hope it’s a useful starting point. But to turn again to our topic at hand, none of this is possible if AI is not developed responsibly, and, as was noted in the introduction, this really is an effort that necessarily needs to have government, civil society organizations, and the private sector involved in a deeply collaborative process, maybe one that we haven’t even seen before. For us as a company, the starting point for responsible AI development and deployment is human rights. For those who are maybe less familiar with the work that we do in this space: Google has made a number of commitments to respecting the rights enshrined in the Universal Declaration of Human Rights, which is turning 75 this year, and its implementing treaties, as well as the UN Guiding Principles on Business and Human Rights, which I think Marlena mentioned in the beginning. So what does that actually look like in practice? As part of this, years ago in 2018, when we established our AI Principles, we embedded human rights into them. For those who are not familiar, our AI Principles describe our objectives to develop technology responsibly, but also outline specific application areas that we will not pursue, and that includes technologies whose purpose contravenes international law and human rights. To provide a bit of a tangible example, let’s imagine that we’re thinking of developing a new product like Bard, which we released earlier this year.
This would go through our AI Principles review via our responsible innovation team, and as part of that process, my team would also conduct human rights due diligence to identify any potential harms and develop, alongside various teams, legal and product in particular, appropriate mitigations around them. One example of this, a public case study that we’ve released, is around our Celebrity Recognition API. Back in 2019, we already saw that the streaming era had brought a really remarkable explosion of video content. In many ways, that was fantastic: more documentaries, more access for filmmakers to showcase and share their work globally, and so on. But there was also a really big challenge, which was that video was pretty much unsearchable without expensive, labor-intensive tagging processes. This made things really difficult and expensive for creators. So a discussion popped up about better image and video capabilities to recognize an international roster of celebrities as a starting point. Our AI Principles review in this process triggered additional human rights due diligence, and we brought on Business for Social Responsibility, BSR, which some are familiar with, to help us conduct a formal human rights assessment of the potential impact of a tool like this on human rights. Fast forward: the outcome was a very tightly scoped offering, one that defined “celebrity” quite carefully, established manual customer review processes, and instituted an expanded terms of service. All of this actually ended up later informing our company-wide stance on facial recognition, and it took into consideration quite a bit of stakeholder engagement in the process. Though it was developed more recently than this particular human rights assessment, I’ll also plug the ECNL framework for meaningful engagement, because it has served as a really helpful guide for us since its release. I want to share this example for two reasons. One, human rights, and the established ways of assessing impact on human rights, have been embedded into our internal AI governance processes from the beginning. And two, as a result of that, we’ve actually been doing human rights due diligence on AI-related products and features for three years, and it’s been a priority for us as a company for quite a long time. To turn very briefly to the second part of your question, I’ll just flag that I think we really do need everybody at the table, and that’s not always the case right now, as others have mentioned. We were excited, just as an example, to be part of the moment at the US White House over the summer that brought together industry to commit to advancing responsible practices in the development of AI. And earlier this fall, we released our company’s progress against those commitments.
That included launching a beta of SynthID, a new tool we developed for watermarking and identifying AI-generated images. A really core component informing the development of that particular product was concerns from civil society organizations, academics, and individuals in the global majority, keeping in mind that we have some 75 elections happening globally next year, really concerns around misinformation and its potential proliferation. Our progress also included establishing a dedicated AI red team and co-establishing the Frontier Model Forum to develop standards and benchmarks for emerging safety issues. We think these commitments, and companies’ progress against them, are an important step in the ecosystem of governance, but they really are just a step. So we’re particularly eager to see more space for industry to come together with governments and civil society organizations, more conversations like this. I think Tara mentioned the Freedom Online Coalition, so it could be through existing spaces like the FOC or the Global Network Initiative, but also potentially new spaces, as we find them necessary. And I’ll just mention one last thing briefly, because I know I’m probably over my time, and because it did come up more specifically. When thinking about AI regulation, at Google at the very least, we think about it in terms of something we call the four S’s. The structure of the regulation: is it international, is it domestic, is it vertical, is it horizontal? The scope of the regulation: how is AI being defined, which is not the easiest thing in the world to do? The subjects of the regulation: developers, deployers. And finally, the standards of the regulation: what risks, and how do we consider those difficult trade-offs that were mentioned earlier, I think by the person who asked the first question? These are just some of the things we’re taking into consideration in this process, but we’re really hoping that more multi-stakeholder conversations will lead to some international coordination on this front, because our concern is that otherwise we’ll have a bit of a hodgepodge of regulation around the world. In the worst-case scenario, that makes it difficult for companies to comply, stifles innovation, and potentially cuts off populations from what could be transformative technology. It might not be so much the case for us at Google, where we have the resources to make significant investments in compliance and regional expertise, but we do think it could be an issue for smaller players and future players in this space. So I’ll pause there, because I think I probably took up too much time, but I appreciate it and am looking forward to the Q&A.
Ian Barber:
Thank you so much for that overview. That was great. And thank you for highlighting the work that’s happening at Google to support human rights in this context, particularly your work on due diligence, as well as your noting the need for collaboration and for considering global majority perspectives. I think that’s key as well. What I’d like to do now is turn to Vladimir for our second-to-last intervention of the session, before hopefully turning to a couple of questions at the end. We’ve heard from a couple of different stakeholders at this point, but my question for you is: do you think the global majority is able to engage in these processes? Are they able to effectively shape the conversations that are happening at this point? Derechos Digitales has spoken about the need to consider local perspectives, and I’m curious to hear from you why this is so critical, and what the work is that you’re doing now. If we can keep the intervention to about four or five minutes, that’d be fine, but I don’t wanna cut you off. Thank you.
Speaker:
Okay, I’ll try to be brief. Well, first of all, thank you so much for the question. It’s a hard question. And thank you also for the invitation to be part of this panel; I’m very glad to be here. I’m Vladimir Garay, part of Derechos Digitales, a Latin American digital rights organization, and for the last couple of years we’ve been researching the deployment of AI systems in the region in the context of public policy. Part of that work has been funded by IDRC, so thank you. I’m going to tell you a little bit more about that later, but if you’re interested, you can go to ia.derechosdigitales.org, and if the URL in Spanish confuses you, come to me and I can give you one of these so you can find it more easily. So, regarding your question: even though there are interesting efforts being developed right now, I think Latin America has mostly lacked the ability to meaningfully engage in and shape processes for responsible AI governance, and this is a consequence of different challenges faced by the Latin American region in the local, the regional, and the global context. For example, in the local context, one of the main challenges has to do with designing governance instances that are inclusive and that can engage meaningfully with a wide range of actors, which is at least partly a consequence of a long history of authoritarianism that results in frail democracies that are suspicious of participation, that are dismissive of human rights impacts, or that lack the necessary institutional capacities to implement solutions built on broad, inclusive, transparent participation. In the global context, we have to address the eagerness of the tech industry to aggressively push a technology that is still not completely mature in terms of our understanding of it, how we think about it, how we think about its limitations, and how we demythologize it. One of the consequences of this is the proliferation of different proposals for guidance, legal, ethical, and more, so many that it’s hard to keep up. So there’s an overwhelming sense of necessity and inability, which is a difficulty in itself. Also in the global context, I think Latin American and global majority perspectives are often overlooked and disregarded in the international debate about technology governance, probably because, from a technical or an engineering standpoint, the number of artificial intelligence systems being developed in Latin America might seem marginal, which is true, especially when compared to those created in North America, Europe, and parts of Asia. But our region has a fundamental role in the production of AI systems, and a better understanding of global majority and Latin American countries’ relationship with AI can be illuminating, not just for Latin America, but for the AI governance field as a whole. What should that look like and what should it include? First, I think it’s important to consider the different roles of global majority countries, and in particular Latin American countries, in the global chain of artificial intelligence development. Our region has a fundamental role in the production of AI systems, for example as a provider of lithium and other minerals necessary for manufacturing different components of AI systems. Of course, as you all know, mining consumes big amounts of non-renewable energy and has important environmental impacts, including air pollution and water contamination that can lead to the destruction of habitats
and the loss of biodiversity. It also has a severe impact on the health of the miners, many of whom work in precarious conditions. Latin America also provides data, raw data, that is collected from different sources by different means and that is used to train and refine AI models, data that is often collected as a consequence of the lack of proper protection of people’s rights to their personal information. And most of the time, people’s data gets fed into AI systems without their consent or even their knowledge. Latin America also provides labor: the labor necessary to train AI systems by labeling data for machine learning. These are usually low-paid jobs, performed under very precarious conditions, that can have harmful impacts on the emotional and mental health of people, for example when reviewing data for content moderation purposes. This labor is the very foundation of any AI system, but its value is severely underestimated and not properly compensated. In summary, Latin America provides the material resources necessary for the development of AI systems that are designed somewhere else and later sold back to us and deployed in our countries, perpetuating logics of dependency and extractivism. So we are both the providers of the inputs and the paying clients for the outputs, but the processes that determine AI governance are often far removed from our region. In general, AI governance should consider the different impacts of AI development on human rights, including the ones that result from the extraction of these material resources: environmental human rights, workers’ rights, and the rights to data protection, privacy, and autonomy, which are greatly impacted in regions like Latin America. Now, at Derechos Digitales, we have been looking into different implementations of AI systems through public policy, because the main way most people in the region interact with this type of technology is in their relationship with the state, even if they’re not always aware of it. And what we’ve seen is that states are using AI to mediate the relationship with citizens, for surveillance purposes, for making decisions regarding welfare assistance, and for controlling access to and use of welfare programs. However, most of the time, our research shows that these technologies are deployed without meeting transparency or participation standards; they lack human rights approaches and do not involve open, transparent, and participatory evaluation processes. There are many reasons for this, from corruption to the lack of capacities and the disregard for human rights impacts, as I mentioned earlier. But we need to overcome this reality,
which implies addressing the asymmetries among different regions in relation to the strengthening of democratic institutions. International cooperation is key, and civil society organizations in the region are playing a major role in promoting that change. So I’ll leave it here for now. Thank you.
Ian Barber:
Thank you, Vladimir, for speaking about the need for regional perspectives and highlighting how these need to feed into global conversations, including specifically how regional dimensions need to be considered in the context of AI development. I think that’s really helpful. I’m going to turn to our last speaker now, Oyebisi, who I believe is joining us at about 5 a.m. local time and has been online for a very long time, so he definitely deserves a round of applause; last but definitely not least. My question to you, finally, building on the previous comments, is: how do we ensure, similarly, that African voices are represented in efforts on responsible AI governance and the promotion of human rights? And I’m going to weave in a related question we’ve received online, if you’re able to respond to that as well: what suggestions can be given to African countries as they prepare strategies or policies on emerging technologies such as AI, specifically considering the risks and benefits? So again, thank you so much for your patience and thank you for being with us. Cheers.
Oluseyi Oyebisi:
Yes, and thank you so much, Ian, for inviting me to speak this morning. In terms of African voices, we would all agree that the African region is coming late to the party at this time, and we now need to find a way of peer-pressuring the continent to get into the debate. Doing this would mean that we are also doing other regions a favour, understanding that the continent has a very large population and that human rights abuses on the continent would snowball into developmental challenges that we do not want across the world. So this is the context in which we have to ensure that we are not leaving the African continent behind, especially given the fact that our governments have not yet figured this out. And this speaks to the question that was asked by that colleague: our governments have not prioritized the governance of AI. Of course, we need to think of the governance of AI in terms of both hard and soft law, but also with an understanding of the life cycle of the AI itself. How do we ensure that, along the whole life cycle, we have a government that understands that, a civil society that understands that, and a business that understands that? It was great listening to the colleague from Google talking about how Google has a human rights program. How do we then, within a multi-stakeholder approach, bring that understanding to anticipate some of the rights challenges we might see with artificial intelligence, and also plan, as a truly multi-stakeholder group, to mitigate those? This is where governments need to see civil society organizations not as enemies but as allies helping to bring those voices together. Of course, we should understand that at some point the politics of AI will also come to bear, because on the continent itself we do not have all of the resources, in terms of intellectual property, to develop the coding and all of the algorithms that follow; our universities are not prepared for that yet. But in dealing with the technicalities, we also have to build some level of competence. We must also understand that, in terms of the international governance of AI and the setting up of international bodies, the African region has to ensure that our missions abroad, especially those relating to the UN, have the right capacity to take part in the negotiations. And that’s why I like how the colleague from Canada said that we will have these contestations and that they are very necessary, because it is within these contestations that we are able to bring the diversity of opinions and thoughts to the table, such that we have policies that can help us address some of the challenges that we might see now and in the future. But how are we going to prepare ourselves as Africans to negotiate, and negotiate better? This speaks to the role of the African Union, including ECOWAS and other regional bodies. I do think the European Union is also setting the agenda, and a kind of model for Africa and other regions to follow, in terms of the deep dive that they have done with the AI treaty and how they are using that to help shape a good human rights approach to AI itself. So now, answering the question you posed to me directly: whatever advice we would give African governments would also be within the context of what we have seen.
I want us to understand that hard laws may not necessarily be the starting point for African governments. It might be soft laws, working with technology platforms on codes of conduct and using lessons from that to progress to hard laws. Governments must also begin to think about regulation in ways that balance the needs of citizens against the disadvantages that we do not see or do not want to see, and that bring citizens themselves into the conversation, such that we are also encouraging innovation. As much as we encourage innovation, we must also ensure that the rights of others are not abused. It’s going to be a long walk to freedom. However, that journey must start with Africans, African civil society, African businesses, and African governments investing in the right set of meetings, the right set of research, and the right set of engagements that can get us to become part of the global conversation, while also understanding that the regional elements of the conversation must be taken on board. Especially given the fact that human rights abuses across the region are becoming alarming, and that we now have more governments that are interested not in opening up the space but in muffling voices, freedom of association itself is also affected. So when you look at the CIVICUS civic space ratings for the region, it again gives a picture of how, some way somehow, some of these conversations might not necessarily be something that would excite the region. But again, this is an assumption; we can still look for that stakeholder pressure in ways that bring African governments to the table, that help them see the need for this, and the need for us to get our voices onto global platforms.
Ian Barber:
Thank you, Oyebisi, that’s great, and thank you for stressing again the importance of the multi-stakeholder approach, the need for civil society and governments to work together, and for bringing this diversity of perspectives, and African voices and governments, to the table, which requires preparation as well. So thank you. I guess, to the organizers at the IGF, I’m not sure what the timing is in terms of whether we’ll be kicked out of the room or not; if there’s a session immediately afterwards, I’m not entirely certain, but I don’t see anyone cutting me off. I think it’s a lunch break, so what I’ll do is say some brief final comments, and then if anyone has any particular questions or wants to come up to the speakers, that might be a more helpful way of moving forward. I don’t want to stand between people and their food; never a good position to be in. Pratek, if you want to make one final… I think there was a question from…
Pratek Sibal:
I mean, I have no answer, but I think it’s an important question. It’s always tricky, particularly when we are dealing with authoritarian regimes, to put in place frameworks which may then be used in whatever way possible. So we should give some time to that.
Ian Barber:
Thank you. I just want to say that I think we began this session with a really crucial acknowledgement that there are truly glaring gaps in the existing discourse between human rights and AI governance, and that it’s really key for all stakeholders to come in with global perspectives: from industry, from civil society, from governments, and from other champions on these issues. I think we’ve just started to shine a spotlight on these issues. We’ve also journeyed through what is really needed in terms of a human rights approach to AI governance. It’s one piece of the pie, but a critical one. And I think it’s key that we continue to firmly root all efforts on AI governance in the international human rights framework. So thank you so much to the speakers here in person and those online. Thank you for your patience, and apologies for going over and for not being able to field all the questions. But I would encourage you to come up and speak to the speakers yourselves. Thank you. Thank you.
Speakers
Audience
Speech speed
168 words per minute
Speech length
450 words
Speech time
160 secs
Arguments
Analysing different approaches to AI governance and determining what works is essential
Supporting facts:
- Different governments and countries are approaching AI governance differently
- Transition from policy to practice will take a number of years
- Appreciation for the ongoing multi-stakeholder approach
Topics: AI governance, Industry-specific legislation, Voluntary codes
Engagement and inclusion of children in policy creation and practices around AI is meaningful and necessary
Supporting facts:
- UNICEF created a policy guidance on AI for children and engaged children in that process
- The process enriched the final product
Topics: AI for children, Policy guidance, Child participation
Need for implementation of mechanisms to monitor responsible AI in authoritarian regimes
Supporting facts:
- There is ongoing work on engaging TAG for internet freedoms in Asian countries like Myanmar, Vietnam, and China
Topics: Responsible AI, Internet Freedoms, Authoritarian Regimes, Policies, Private Sector AI
Report
Different governments and countries are adopting varied approaches to AI governance. The transition from policy to practice in this area will require a substantial amount of time. However, there is recognition and appreciation for the ongoing multi-stakeholder approach, which involves including various stakeholders such as governments, industry experts, and civil society.
It is crucial to analyze and assess the effectiveness of these different approaches to AI governance to determine the most successful strategies. This analysis will inform future decisions and policies related to AI governance and ensure their efficacy in addressing the challenges posed by AI technologies.
UNICEF has played a proactive role in the field of AI for children by creating policy guidance on the topic. Importantly, they have also involved children in the process. This approach of engaging children in policy creation has proven to be valuable, as their perspectives and experiences have enriched the final product.
Inclusion and engagement of children in policy creation and practices around AI are viewed as both meaningful and necessary. Furthermore, efforts are being made to ensure responsible AI in authoritarian regimes. Particularly, there is ongoing work on engaging Technical Advisory Groups (TAG) for internet freedoms in countries such as Myanmar, Vietnam, and China.
This work aims to promote responsible AI practices and address any potential human rights violations that may arise from the use of AI technologies. Implementing mechanisms to monitor responsible AI in authoritarian regimes is of utmost importance. These mechanisms can help ensure that AI technologies are used in ways that adhere to principles of human rights and minimize potential harms.
Interestingly, it is noted that implementing policies to monitor responsible AI is relatively easier in human rights-friendly countries compared to authoritarian ones. This observation underscores the challenges faced in authoritarian regimes where governments may exert greater control over AI technologies and policies.
In conclusion, the various approaches to AI governance taken by governments and countries need careful analysis to determine their effectiveness. Engaging children in policy creation and promoting responsible AI in authoritarian regimes are fundamental steps in fostering a safe and inclusive AI ecosystem.
Implementing mechanisms to monitor responsible AI poses a particular challenge in authoritarian contexts, whereas such policies are easier to implement in human rights-friendly countries. These insights highlight the ongoing efforts required to develop effective AI governance frameworks that protect human rights and promote responsible AI use.
Ian Barber
Speech speed
203 words per minute
Speech length
3949 words
Speech time
1168 secs
Arguments
The international human rights framework can be leveraged to ensure responsible AI governance
Supporting facts:
- Human rights are enshrined in various sources, such as conventions and customary international law
- AI is now able to influence many aspects of life, from job prospects to legal verdicts
Topics: AI governance, human rights
Global implications for human rights need to be considered in the ongoing proliferation of AI governance frameworks
Supporting facts:
- There are ongoing efforts at the Council of Europe to develop a binding treaty on AI
- There’s the European Union’s efforts with the EU AI Act, there’s UNESCO’s recommendations on the ethics of AI
- Without these safeguards and protections, we risk exacerbating inequality, silencing marginalized groups
Topics: AI governance, global implications, human rights
It is important to foster meaningful engagement by all relevant stakeholders in AI governance
Supporting facts:
- The path towards responsible AI governance is not one that can be traversed alone
- The panel represents voices from civil society, private companies, international organizations
Topics: AI governance, stakeholder engagement
The need for regional perspectives and involvement in global AI development
Supporting facts:
- Regions’ developments are necessary to consider in the context of AI development
Topics: Regional Involvement, AI development, Global Cooperation
Importance of African voices in efforts on responsible AI governance and promotion of human rights
Topics: Regional Involvement, AI Governance, Human Rights
There are glaring gaps in the discourse between human rights and AI governance.
Supporting facts:
- They began the session with acknowledgement of gaps in discourse between human rights and AI governance.
Topics: AI Governance, Human Rights
It’s crucial for all stakeholders to come in for global perspectives from industry, civil society, governments and other champions on these issues.
Topics: AI Governance, Stakeholder Engagement, Global Perspectives
A human rights approach to AI governance is needed.
Topics: AI Governance, Human Rights
Efforts on AI governance should be firmly rooted in the international rights framework.
Topics: AI Governance, International Rights Framework
Report
The analysis conducted on AI governance, human rights, and global implications reveals several key insights. The first point highlighted is the significant role that the international human rights framework can play in ensuring responsible AI governance. Human rights are deeply rooted in various sources, including conventions and customary international law.
Given that AI is now able to influence many aspects of life, from job prospects to legal verdicts, it becomes essential to leverage the international human rights framework to establish guidelines and safeguards for AI governance. Another important aspect is the ongoing efforts at various international platforms to develop binding treaties and recommendations on AI ethics.
The Council of Europe, the European Union, and UNESCO are actively involved in this process. For instance, the Council of Europe is working towards the development of a binding treaty on AI, while the European Union has initiated the EU AI Act, and UNESCO has put forth recommendations on the ethics of AI.
These efforts are crucial to prevent the exacerbation of inequality and the marginalization of vulnerable groups. Stakeholder engagement is identified as a vital component of responsible AI governance. The path towards effective governance cannot be traversed alone, and it is crucial to ensure meaningful engagement from relevant stakeholders.
These stakeholders include voices from civil society, private companies, and international organizations. Their input, perspectives, and expertise can contribute to the development of comprehensive AI governance policies that consider the diverse needs and concerns of different stakeholders. One noteworthy observation made during the analysis is the importance of amplifying the voices of the global majority.
Historically, many regions across the world have been left out of global dialogues and efforts at global governance. It is crucial to address this imbalance and include voices from diverse backgrounds and regions in discussions on AI governance. The workshop has been framed as a call to action, the start of an ongoing collective effort to address the complexities brought about by AI.
The analysis also emphasizes the need to consider regional perspectives and involvement in global AI development. Regional developments are essential factors to be taken into account when formulating AI policies and strategies. This ensures that the implications and impact of AI are effectively addressed on a regional level.
Furthermore, the analysis highlights the significance of African voices in the field of responsible AI governance and the promotion of human rights. Advocating for strategies or policies on emerging technologies specifically tailored for African countries can contribute to better outcomes and equitable development in the region.
Another noteworthy point is the need to bridge the gaps in discourse between human rights and AI governance. The analysis identifies gaps in understanding how human rights principles can be effectively integrated into AI governance practices. Addressing these gaps is essential to ensure that AI development and deployment are in line with human rights standards and principles.
In conclusion, the analysis underscores several important considerations for AI governance. Leveraging the international human rights framework, developing binding treaties and recommendations on ethics, fostering stakeholder engagement, considering global majority voices, including regional perspectives, and amplifying African voices are all critical aspects of responsible AI governance.
Additionally, efforts should be made to bridge the gaps in discourse between human rights and AI governance. By integrating human rights principles and adhering to the international rights framework, AI governance can be ethically sound and socially beneficial.
Marlena Wisniak
Speech speed
169 words per minute
Speech length
1895 words
Speech time
671 secs
Arguments
Human rights due diligence and meaningful human rights impact assessments are essential in AI governance.
Supporting facts:
- ECNL sees an opportunity to implement mandatory human rights due diligence, including human rights impact assessments, in AI governance globally.
- This approach aligns with the UN Guiding Principles for Business and Human Rights.
Topics: AI governance, Human rights due diligence, Impact assessments
Stakeholder engagement is a necessary and cross-cutting component of AI governance.
Supporting facts:
- Stakeholder engagement is a collaborative process where diverse stakeholders can meaningfully influence decision making.
- External stakeholders, like civil society organizations and affected communities, should be included in the process.
Topics: AI governance, Stakeholder Engagement
Transparency is a prerequisite for AI accountability.
Supporting facts:
- AI governance should mandate that AI developers and deployers report on datasets, performance metrics, human review, and access to remedy.
Topics: AI governance, Transparency, Accountability
Access to remedy is a key part of any governance mechanism.
Supporting facts:
- This includes both internal grievance mechanisms within tech companies and AI developers, as well as state-level and judicial mechanisms.
Topics: AI governance, Access to remedy
AI governance should be careful about over-broad exemptions for national security or counter-terrorism.
Supporting facts:
- Over-broad exemptions can potentially restrict civil liberties.
- Any exemptions should have a narrow scope, include sunset clauses, and focus on proportionality.
Topics: AI governance, National Security, Counter-terrorism
AI governance efforts mainly take a risk-based approach focused on finance, business, and national security, but rarely on human rights.
Supporting facts:
- The risk level of AI systems should be determined based on several criteria focused on human rights, including product design, severity of impact, internal due diligence mechanisms, causal link between the AI system and adverse human rights impacts, and potential for remedy.
Topics: AI governance, Risk-based approach, Human Rights
Support multistakeholder participation and engaging stakeholders in the process of policymaking.
Supporting facts:
- Remarks about imbalance between stakeholders, highlighting that companies often have disproportionate advantages in financial resources and access to policymakers.
- Mentioned the small number of human rights advocates participating in AI governance when compared to the plethora of individuals in the public policy sector of companies.
Topics: policymaking, stakeholder engagement, AI governance
Report
The analysis highlights several important points regarding AI governance. One of the main points is the need for mandatory human rights due diligence and impact assessments in AI governance. The analysis suggests that implementing these measures globally presents an opportunity to ensure that AI development and deployment do not infringe upon human rights.
This approach is informed by the UN Guiding Principles for Business and Human Rights, which provide a framework for businesses to respect human rights throughout their operations. By incorporating human rights impact assessments into AI governance, potential adverse consequences on human rights can be identified and addressed proactively.
Another key point raised in the analysis is the importance of stakeholder engagement in AI governance. Stakeholder engagement is viewed as a collaborative process in which diverse stakeholders, including civil society organizations and affected communities, can meaningfully contribute to decision-making processes.
The inclusion of external stakeholders is seen as crucial to ensure that AI governance reflects the concerns and perspectives of those who may be affected by AI systems. By involving a range of stakeholders, AI governance can be more comprehensive, responsive, and representative.
Transparency is regarded as a prerequisite for AI accountability. The analysis argues that AI governance should mandate that AI developers and deployers provide transparent reporting on various aspects, such as datasets, performance metrics, human review processes, and access to remedy.
This transparency is seen as essential to enable meaningful scrutiny and assessment of AI systems, ensuring that they function in a responsible and accountable manner. Access to remedy is also highlighted as a crucial aspect of AI governance. This includes the provision of internal grievance mechanisms within tech companies and AI developers, as well as state-level and judicial mechanisms.
The analysis argues that access to remedy is fundamental for individuals who may experience harm or violations of their rights due to AI systems. By ensuring avenues for redress, AI governance can provide recourse for those affected and hold accountable those responsible for any harm caused.
The analysis also cautions against over-broad exemptions for national security or counter-terrorism purposes in AI governance. It argues that such exemptions, if not carefully crafted, have the potential to restrict civil liberties. To mitigate this risk, any exemptions should have a narrow scope, include sunset clauses, and prioritize proportionality to ensure that they do not unduly infringe upon individuals’ rights or freedoms.
Furthermore, the analysis uncovers a potential shortcoming in AI governance efforts. It suggests that while finance, business, and national security are often prioritized, human rights are not given sufficient consideration. The analysis calls for a greater focus on human rights within AI governance initiatives, ensuring that AI systems are developed and deployed in a manner that respects and upholds human rights.
The analysis also supports the ban of AI systems that are fundamentally incompatible with human rights, such as biometric surveillance in public spaces. This viewpoint is based on concerns about mass surveillance and discriminatory targeted surveillance enabled by facial recognition and remote biometric recognition technologies.
Banning such technologies is seen as necessary to safeguard privacy and freedom and to prevent potential violations of human rights. In addition to these key points, the analysis reveals a couple of noteworthy observations. One observation is the importance of multistakeholder participation and the need to engage stakeholders in the process of policymaking.
This is seen as a means to balance power dynamics and address the potential imbalance between stakeholders, particularly as companies often possess financial advantages and greater access to policymakers. The analysis highlights the need for greater representation and involvement of human rights advocates in AI governance processes.
Another observation relates to the capacity and resources of civil society, especially in marginalized groups and global majority-based organizations. The analysis urges international organizations and policymakers to consider the challenges faced by civil society in terms of capacity building, resources, and finance.
It emphasizes the need for more equitable and inclusive participation of all stakeholders to ensure that AI governance processes are not dominated by powerful actors or leave marginalized groups behind. Finally, the analysis suggests that laws in countries like Canada can have a significant influence on global regulations, especially in countries with repressive regimes or authoritarian practices.
This observation draws attention to the concept of the “Brussels effect,” wherein EU regulations become influential worldwide. It highlights the potential for countries with stronger regulatory frameworks to shape AI governance practices globally, emphasizing the importance of considering the implications and potential impacts of regulations beyond national borders.
In conclusion, the analysis underscores the importance of incorporating mandatory human rights due diligence, stakeholder engagement, transparency, access to remedy, and careful consideration of exemptions in AI governance. It calls for greater attention to human rights within AI governance efforts, the banning of AI systems incompatible with human rights, and the inclusion of diverse perspectives and voices in decision-making processes.
The analysis also raises attention to the challenges faced by civil society and the potential influence of laws in one country on global regulations. Overall, it provides valuable insights for the development of effective and responsible AI governance frameworks.
Oluseyi Oyebisi
Speech speed
156 words per minute
Speech length
1058 words
Speech time
407 secs
Arguments
African region needs to be included in AI governance discussion.
Supporting facts:
- The African region is coming late to the party.
- Human rights abuses on the continent itself would also snowball into developmental challenges.
Topics: AI governance, Regional inclusion, African technologic development
Africa needs to build technical competence and participate actively in international AI negotiations.
Supporting facts:
- African missions abroad must have the right capacity to take part in the negotiations.
- Universities in Africa are not prepared for AI development yet.
Topics: AI governance, Technical Competence
African governments should consider soft laws, working with technology platforms at first before moving to hard laws.
Supporting facts:
- Governments might start with soft laws, working with technology platforms on codes of conduct, and use the lessons from that to progress to hard laws.
- Governments must begin to think about regulation that balances the needs of citizens.
Topics: AI governance, Policy development
Report
The analysis highlights the importance of including the African region in discussions on AI governance. It notes that the African region is coming late to the party in terms of participating in AI governance discussions and needs to be included to ensure its interests are represented.
The argument presented is that African governments, civil society, and businesses should invest in research and engage more actively in global conversations regarding AI governance. One of the main points raised is the need for Africa to build technical competence to effectively participate in international AI negotiations.
It is mentioned that African missions abroad must have the right capacity to take part in these negotiations. Furthermore, it is noted that universities in Africa are not yet prepared for AI development and need to strengthen their capabilities in this area.
Additionally, the analysis suggests that African governments should consider starting with soft laws and working with technology platforms before transitioning to hard laws. It is argued that this approach would allow them to learn from that collaboration and progress towards binding regulation.
The need for regulation that balances the needs of citizens is emphasized. The analysis also highlights the need for African governments, civil society, and businesses to invest in research and actively engage in global platforms related to AI governance. It is mentioned that investment should be made in the right set of meetings, research, and engagements.
Bringing Africans into global platforms is seen as a crucial step towards ensuring their perspectives and needs are considered in AI governance discussions. Overall, the expanded summary emphasizes the need to incorporate the African region into the global AI governance discourse.
It suggests that by building technical competence, starting with soft laws, and actively engaging in research and global platforms, African countries can effectively contribute to AI governance and address their specific development challenges.
Pratek Sibal
Speech speed
168 words per minute
Speech length
2632 words
Speech time
941 secs
Arguments
Judicial systems lack awareness about what AI is
Supporting facts:
- Survey of 1,200 judicial operators across 100 countries
- Launch of a Massive Open Online Course on AI and the Rule of Law in seven languages
Topics: AI awareness, International human rights law, Rule of Law
Institutional capacity building is more sustainable in the long term, as opposed to individual capacity building
Supporting facts:
- Development of a global toolkit on AI and the rule of law
- Piloting the toolkit with Inter-American Court of Human Rights and East Africa Court of Justice
Topics: AI governance, Capacity building
Pratek Sibal appreciates the efforts made by Canada in AI for Development projects in Africa and Latin America
Supporting facts:
- He has witnessed the growth of communities that these projects have supported to create language datasets and application development in healthcare and agriculture.
- He states that the increased capacity of civil society organizations is noticeable in his work at UNESCO.
Topics: AI for Development, Canada, Africa, Latin America
Pratek Sibal talks about the evolution in the process of international standard and policy-making, shifting from the traditional model of technical assistance to a more collaborative, multi-stakeholder approach.
Supporting facts:
- In the traditional model, an expert from an international organization would go to a country and help them develop a policy.
- The new model involves stakeholders from all levels in the development of global policy frameworks.
- Developing global frameworks in a more consultative manner leads to better ownership of these frameworks at the national level.
Topics: Policy-making, International Standards, Multistakeholder Approach
Pratek Sibal finds the issue of dealing with authoritarian regimes and putting frameworks into place to be tricky
Topics: authoritarian regimes, policy making
Report
A recent survey conducted across 100 countries revealed a concerning lack of awareness among judicial systems worldwide regarding artificial intelligence (AI). This lack of awareness poses a significant obstacle to the effective implementation of AI in judicial processes. Efforts are being made to increase awareness and understanding of AI in the legal field, including the launch of a Massive Open Online Course (MOOC) on AI and the Rule of Law in seven different languages.
This course aims to educate judicial operators about AI and its implications for the rule of law. Existing human rights laws in Brazil, the UK, and Italy have successfully addressed cases of AI misuse, suggesting that international human rights law can be implemented through judicial decisions without waiting for a specific AI regulatory framework.
By proactively applying existing legal frameworks, countries can address and mitigate potential AI-related human rights violations. In terms of capacity building, it is argued that institutional capacity building is more sustainable in the long term compared to individual capacity building.
Efforts are underway to develop a comprehensive global toolkit on AI and the rule of law, which will be piloted with prominent judicial institutions such as the Inter-American Court of Human Rights and the East Africa Court of Justice. This toolkit aims to enhance institutional capacity to effectively navigate the legal implications of AI.
Community involvement is crucial, and efforts have been made to make content available in multiple languages to ensure inclusivity and accessibility. This includes the development of a comic strip available in various languages and a micro-learning course on defending human rights in the age of AI provided in 25 different languages.
Canada’s AI for Development projects in Africa and Latin America have been highly appreciated for their positive impact. These projects have supported the growth of communities in creating language datasets and developing applications in healthcare and agriculture, thereby increasing the capacity of civil society organizations in these regions.
The evolution of international standards and policy-making has seen a shift from a traditional model of technical assistance to a more collaborative, multi-stakeholder approach. This change involves engaging stakeholders at various levels in the development of global policy frameworks, ensuring better ownership and effectiveness in addressing AI-related challenges.
Pratek Sibal, a proponent of the multi-stakeholder approach, emphasizes the need for meaningful implementation throughout the policy cycle. Guidance on developing AI policies in a multi-stakeholder manner has been provided, covering all phases from agenda setting to drafting to implementation and monitoring.
Dealing with authoritarian regimes and establishing frameworks for AI present complex challenges with no easy answers. Pratek Sibal acknowledges the intricacies of this issue and highlights the need for careful consideration and analysis in finding suitable approaches. In conclusion, the survey reveals a concerning lack of awareness among judicial systems regarding AI, hindering its implementation.
However, existing human rights laws are successfully addressing AI-related challenges in several countries. Efforts are underway to enhance institutional capacity and involve communities in strengthening human rights in the age of AI. The positive impact of Canada’s AI for Development projects and the shift towards a collaborative, multi-stakeholder approach in international standards and policy-making are notable developments.
Dealing with authoritarian regimes in the context of AI requires careful consideration and exploration of suitable frameworks.
Shahla Naimi
Speech speed
197 words per minute
Speech length
1782 words
Speech time
542 secs
Arguments
AI can advance human rights and create global opportunities
Supporting facts:
- AI can provide more information to human rights defenders
- It keeps people safer from floods and fires
- Increases access to health care
- Building AI models that support the 1000 most widely spoken languages
Topics: AI, Human Rights, Global Opportunities
AI regulation must be multi-stakeholder and internationally coordinated
Supporting facts:
- Regulation should take into account: structure, scope, subjects, and standards
- Without international coordination, there might be a hodgepodge of regulation
Topics: AI Regulation, Multilateralism, International Coordination
Report
The analysis explores the impact of AI from three distinct viewpoints. The first argument suggests that AI has the potential to advance human rights and create global opportunities. It is argued that AI can provide valuable information to human rights defenders, enabling them to gather comprehensive data and evidence to support their causes.
Additionally, AI can improve safety measures by alerting individuals to potential natural disasters like floods and fires, ultimately minimizing harm. Moreover, AI can enhance access to healthcare, particularly in underserved areas, by facilitating remote consultations and diagnoses. An example is provided of AI models being developed to support the 1000 most widely spoken languages, fostering better communication across cultures and communities.
The second viewpoint revolves around Google’s commitment to embedding human rights into its AI governance processes. It is highlighted that the company considers the principles outlined in the Universal Declaration of Human Rights when developing AI products. Google also conducts human rights due diligence to ensure their technologies respect and do not infringe upon human rights.
This commitment is exemplified by the company-wide stance on facial recognition, which addresses ethical concerns surrounding the technology. The third perspective emphasizes the need for multi-stakeholder and internationally coordinated AI regulation. It is argued that effective regulation should consider factors such as the structure, scope, subjects, and standards of AI.
Without international coordination, fragmented regulations with inconsistencies may arise. Involving multiple stakeholders in the regulatory process is vital to consider diverse perspectives and interests. Overall, the analysis highlights AI’s potential to advance human rights and create opportunities, particularly in information gathering, safety, and healthcare.
It underscores the importance of embedding human rights principles into AI governance processes, as demonstrated by Google’s commitments. Furthermore, multi-stakeholder and internationally coordinated AI regulation is crucial to ensure consistency and standards. These viewpoints provide valuable insights into the ethical and responsible development and implementation of AI.
Speaker
Speech speed
171 words per minute
Speech length
680 words
Speech time
239 secs
Arguments
Latin America lacks meaningful participation in shaping responsible AI governance
Supporting facts:
- Frail democracies influenced by a long history of authoritarianism
- Suspicion towards participation
Topics: AI Governance, Participation, Responsible AI
The eagerness of the tech industry to aggressively push for AI deployment hinders Latin America’s engagement in AI governance
Supporting facts:
- Understanding of AI technology, its limitations, and the myths surrounding it is not yet fully mature
- It is difficult to keep up with AI guidance given the overwhelming number of proposals
Topics: AI Deployment, Tech Industry, Latin American Perspectives
Latin America plays a crucial role in the global chain of AI technological developments
Supporting facts:
- Latin America is a provider of lithium and other minerals necessary for the manufacturing of AI systems
- Mining for these minerals leads to environmental impacts including air and water pollution and destruction of habitats
Topics: Latin America, AI Development, Global Chain
Latin America provides resources, data, and labor for AI development while facing negative impacts
Supporting facts:
- Latin America provides the raw materials necessary for hardware manufacturing.
- They provide data collected from various sources to train AI models.
- The region also provides labor for tasks such as data labelling for machine learning purposes.
Topics: AI Development, Resource Extraction
Governments are using AI to mediate relationships with citizens without transparency or participation
Supporting facts:
- AI is being used for surveillance purposes and welfare decisions.
- These technologies are deployed without meeting transparency or participation standards.
Topics: AI Governance, Government Transparency
Report
Latin America faces challenges in meaningful participation in shaping responsible AI governance. These challenges are influenced by the region’s history of authoritarianism, which has left its democracies weak. Moreover, there is a general mistrust towards participation, further hindering Latin America’s engagement in AI governance.
One of the main obstacles is the tech industry’s aggressive push for AI deployment. While there is great enthusiasm for AI technology, there is a lack of comprehensive understanding of its limitations, myths, and potential risks. Additionally, the overwhelming number of proposals and AI guidance make it difficult for Latin America to keep up and actively contribute to the development of responsible AI governance.
Despite these challenges, Latin America plays a crucial role in the global chain of AI technological developments. The region is a supplier of vital minerals like lithium, which are essential for manufacturing AI systems. However, the mining processes involved in extracting these minerals often have negative environmental impacts, including pollution and habitat destruction.
This has led to mixed sentiments regarding Latin America’s involvement in AI development. Latin America also provides significant resources, data, and labor for AI development. The region supplies the raw materials needed for hardware manufacturing and offers diverse datasets collected from various sources for training AI models.
Additionally, Latin America’s workforce contributes to tasks such as data labeling for machine learning purposes. However, these contributions come at a cost, with negative impacts including environmental consequences and labor exploitation. It is crucial for AI governance to prioritize the impacts of AI development on human rights.
Extracting material resources for AI development has wide-ranging effects, including environmental degradation and loss of biodiversity. Moreover, the health and working conditions of miners are often disregarded, and there is a lack of attention to data protection and privacy rights.
Incorporating human rights perspectives into AI governance is necessary. Another concerning issue is the use of AI for surveillance purposes and welfare decisions by governments, without adequate transparency and participation standards. The deployment of these technologies without transparency raises concerns about citizen rights and privacy.
To address these challenges, it is necessary to strengthen democratic institutions and reduce asymmetries among regions. While Latin America provides resources and labor for AI systems designed elsewhere, AI governance processes often remain distant from the region. To ensure an inclusive and fair AI governance process, reducing regional disparities, strengthening democratic institutions, and promoting transparency and participation are essential.
In conclusion, Latin America faces obstacles in meaningful participation in shaping responsible AI governance due to the aggressive push for AI deployment and its history of authoritarianism. However, the region plays a crucial role in the global AI technological chain by providing resources, data, and labor.
It is important to consider the impacts of AI development on human rights and promote transparency and participation in AI governance. Strengthening democratic institutions and addressing regional asymmetries are necessary for a more inclusive and equitable AI governance process.
Tara Denham
Speech speed
195 words per minute
Speech length
2361 words
Speech time
728 secs
Arguments
AI and its governance is taken seriously in Canada with a dedicated office integrating digital policy with human rights.
Supporting facts:
- The Director General of the Office of Human Rights, Freedoms, and Inclusion at Global Affairs Canada integrates digital policy with human rights. The office has been working on the geopolitics of artificial intelligence.
Topics: AI governance, human rights
The government of Canada is developing regulation, policy, and guiding principles simultaneously in the realm of AI.
Supporting facts:
- Canada has a directive on how automated decision making will be handled by the government, an algorithmic impact assessment tool, and a guide on how to use generative AI within the public sector.
- Before deploying generative AI, the government is required to engage with those that will be impacted.
- The government has also published a voluntary Code of Conduct on Responsible Development and Management of Advanced Generative AI Systems and is working on AI and Data Act legislation.
Topics: AI regulation, policy development
Need for global analysis on what approaches to AI governance are working and not working
Supporting facts:
- Need to build capacity globally for understanding risks and impacts of AI in different communities and countries
- Advocates leveraging existing research on AI capacity building and research supported by entities like IDRC
Topics: AI governance, Global Analysis, Policy-making
Report
Canada is leading the way in taking AI governance seriously by integrating digital policy with human rights. The Director General of the Office of Human Rights, Freedoms, and Inclusion at Global Affairs Canada is actively working on the geopolitics of artificial intelligence, ensuring that AI development and governance uphold human rights principles.
The Canadian government is actively involved in developing regulation, policy, and guiding principles for AI. They have implemented a directive on how government will handle automated decision making, including an algorithmic impact assessment tool. To ensure responsible development and management of AI, the government has published a voluntary Code of Conduct and is working on AI and Data Act legislation.
Additionally, the government requires engagement with stakeholders before deploying generative AI, demonstrating their commitment to responsible AI implementation. Stakeholder engagement is considered essential in AI policy making, and Canada has taken deliberate steps to involve stakeholders from the start. They have established a national table that brings together representatives from the private sector, civil society organizations, federal, provincial, and territorial governments, as well as Indigenous communities to consult on AI policies.
This inclusive approach recognizes the importance of diverse opinions and aims to develop policies that are representative of various perspectives. However, it is acknowledged that stakeholder engagement can be time-consuming and may lead to tensions due to differing views. Canada recognizes the significance of leveraging existing international structures for global AI governance.
They have used the Freedom Online Coalition to shape their negotiating positions on UNESCO Recommendations on AI ethics. Additionally, they are actively participating in Council of Europe negotiations on AI and human rights. However, it is noted that more countries and stakeholder groups should be encouraged to participate in these international negotiations to ensure a comprehensive and inclusive global governance framework for AI.
There is also a need for global analysis on what approaches to AI governance are working and not working. This analysis aims to build global capacity and better understand the risks and impacts of AI in different communities and countries.
Advocates emphasize the importance of leveraging existing research on AI capacity building and research, supported by organizations like the International Development Research Centre (IDRC). Furthermore, there is a strong call for increased support for research into AI and its impacts.
IDRC in Canada plays a pivotal role in funding and supporting AI capacity-building initiatives and research. This support is crucial in advancing our understanding of AI’s potential and ensuring responsible and beneficial implementation. In conclusion, Canada is taking significant steps towards effective AI governance by integrating digital policy with human rights, developing regulations and policies, and engaging stakeholders in decision-making processes.
By leveraging existing international structures and conducting global analysis, Canada aims to contribute to a comprehensive and inclusive global AI governance framework. Additionally, their support for research and capacity-building initiatives highlights their commitment to responsible AI development.