AI for Humanity: AI based on Human Rights (World Bank)

4 Dec 2023 11:30h - 13:00h UTC



Disclaimer: This is not an official record of the UNCTAD eWeek session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the UNCTAD website.

Full session report

Moira Thompson Oliver

This analysis focuses on various topics related to AI and its impact on different sectors. It begins by noting that Microsoft had stopped supplying AI tools to the US State Department pending regulations, demonstrating the importance of having proper regulations in place to ensure the ethical and responsible use of AI technology.

The analysis also notes the challenges in defining AI within organisations. It states that creating a common definition of AI can take months, indicating the complex nature of AI and the need for clarity in understanding its role within organisations.

One of the key points emphasised in the analysis is the importance of diversity in AI development and embedding human rights awareness. It highlights the fact that most engineers in the tech industry are predominantly white and male. This brings attention to the need for greater diversity and inclusion in the development of AI systems to ensure that they are representative and fair.

The analysis also stresses the need for end-use evaluations of AI technology, considering both its usage and geographical locations. This highlights the importance of assessing how AI technology is being used and the potential impact it may have in different contexts.

Training on AI and human rights is another crucial point raised in the analysis. It emphasises the importance of implementing training at all levels of an organisation, from the board level to the engineers. This ensures that all stakeholders involved in AI development are knowledgeable about the ethical implications and human rights considerations.

Furthermore, the analysis discusses the potential use of AI as an accountability tool in security sectors. It states that AI can be used to promote accountability in the security sector, which can have significant implications for peace, justice, and strong institutions.

The impact of AI on human rights is also explored. The analysis mentions that AI risks can be enumerated using a guide such as the B-Tech taxonomy, which sets out the ways in which human rights can be affected by AI. This underscores the importance of considering and safeguarding human rights in the development and deployment of AI systems.

Unintended biases in AI applications are also highlighted in the analysis. It provides an example of a pothole fixing programme that favoured rich neighbourhoods due to more mobile phone users. This illustrates how unintended biases can seep into AI applications and exacerbate existing inequalities.

The potential for AI to be used as a tool for misinformation is another area of concern. The analysis mentions a discussion at the UN Forum on Business and Human Rights, which highlighted the role of generative AI in creating misinformation in public discourses. This raises the need for vigilance and measures to address the potential misuse of AI technology.

The analysis also recognises the accessibility and impact of AI in daily life. It mentions an example where AI was used by an individual’s son for a simple task like finding a recipe based on available ingredients, illustrating the widespread use and relevance of AI in everyday situations.

The potential use of AI in conflict settings is discussed, highlighting the role of generative AI in such situations. It acknowledges that AI can be utilised in conflict settings, potentially impacting peace, justice, and strong institutions.

The analysis further explores the use of AI in detecting climate risks and change. It mentions AI tools that enable farmers to monitor weather conditions and soil quality, helping them determine the optimal time to harvest. It also highlights the use of AI in detecting air quality, emphasising its potential for addressing climate-related challenges.

Due diligence and risk assessment are identified as crucial aspects of AI deployment. The analysis stresses the importance of constantly evaluating the impacts of AI tools and technologies and addressing any potential risks promptly.

Lastly, the analysis supports ongoing discussions and the creation of international frameworks for AI. It acknowledges the need for frameworks at an international level to ensure the responsible and ethical development, deployment, and use of AI technology.

In conclusion, this analysis provides valuable insights into various aspects of AI and its impact on different sectors. It highlights the need for regulation, diversity, and human rights awareness in AI development. It emphasises the importance of end-use evaluations, training, and accountability in AI deployment. It also explores unintended biases, misinformation, accessibility, and AI’s potential in conflict and climate settings. Additionally, it underlines the significance of due diligence, risk assessment, and international cooperation in shaping the future of AI.

Olivier Elas

The International Telecommunication Union (ITU) is actively working to address the challenges posed by Artificial Intelligence (AI) while placing a strong emphasis on human rights and the achievement of the Sustainable Development Goals (SDGs). They began working on AI challenges in 2017 and co-lead an interagency working group on AI with UNESCO. The primary objective of this group is to deliver concrete outcomes based on human rights principles.

The ITU’s AI for Good initiative is an annual summit that aims to bring tangible benefits to society. This initiative plays a vital role in delivering technical outcomes such as machine learning for 5G and health. Furthermore, it has made significant contributions to the establishment of technical standards in the field of AI.

The ITU also recognises the importance of embedding human rights into the standardisation process. They are actively working with different study groups to develop technical recommendations that focus on human rights. The UN High Commissioner for Human Rights has asked the ITU, ISO, and IEC to integrate human rights into their standards. It’s worth noting that the ITU has been working on various digital rights issues for many years, including ICT for girls, gender balance, universal access, and accessibility.

Olivier Elas, representing the ITU, strongly advocates for the application of human rights principles within the context of AI and digital technology. He highlighted the ITU’s leading initiatives to apply human rights in the area of AI, specifically mentioning the ‘AI for Good’ initiative. Elas also mentioned that the ITU is aligning its efforts with general assembly resolutions focused on AI and human rights.

Additionally, the ITU has focus groups studying the impact of quantum computing on AI. This demonstrates their commitment to exploring emerging technologies and their potential implications.

There is growing recognition of the need for accountability and transparency in AI. While AI systems work with models and data sets, many of them lack transparency. This lack of transparency hinders the ability to audit AI systems. However, some companies, such as Hugging Face, are starting to address this issue by opening their models and data sets.

It is important to note that the ITU’s role is primarily to develop standards and recommendations, so it can only recommend that member states embed human rights principles; it cannot require them to do so. Olivier Elas acknowledges this limitation, noting that it is challenging for the ITU to do more than make recommendations given its mandate.

In conclusion, the ITU is actively engaged in addressing the challenges of AI while prioritising human rights and the achievement of the SDGs. Through its AI for Good initiative, it works towards delivering concrete outcomes based on human rights principles and towards embedding human rights into the standardisation process. Olivier Elas, representing the ITU, supports the application of human rights principles in AI and digital technology and highlights the alignment of these efforts with global resolutions. The ITU’s study groups are also exploring the impact of quantum computing on AI. However, it is important to recognise that the ITU’s role is limited primarily to making recommendations to member states, and it faces challenges in taking action beyond such recommendations.

Mila Romanoff

Artificial intelligence (AI) poses significant risks to human rights and the environment, according to the arguments and evidence presented. AI is increasingly used in public decision-making, which raises concerns about potential harm to individuals and groups. Notably, the World Bank lacks guidelines to address AI-related human rights risks, further exacerbating the issue.

Furthermore, AI’s carbon emissions present environmental challenges. While AI can optimise energy use, a single training session for an AI model emits 25 times more carbon than a one-way flight from New York to San Francisco. Stricter standards and regulations for AI are needed, as ethical standards alone are insufficient. Countries and regions such as China, Brazil, and the EU are taking steps to implement more stringent regulations.

The potential for discrimination and bias in AI-driven predictive policing is another cause for concern. This can lead to unfair enforcement against specific communities, violating their rights to equality and fair legal processes. Land rights projects utilising AI can also result in property disputes and interfere with communities’ rights to property and secure housing.

In addition to surveillance risks, AI-powered tools used to gauge political tension can disrupt democratic processes and elections, particularly in countries with limited democratic safeguards. It is important to consider risks beyond surveillance, including predictive policing, land rights management, and the analysis of political tension.

The regulation of AI parallels the evolution of data privacy rights, highlighting the need for robust regulatory frameworks. Self-learning algorithms in AI systems escalate the risks associated with data usage, necessitating adequate regulation. While data privacy has received attention, the threats posed by AI are greater, with concerns raised by numerous individuals and experts.

The UK government’s Bletchley Declaration is commended for its efforts towards AI governance. However, international consensus on AI safety risks is crucial for effective regulation. Overall, AI presents significant risks that require a balanced approach to safeguard human rights while managing uncertainties.

Tim Engelhardt

The analysis explores the integration of human rights into the development and governance of artificial intelligence (AI). It emphasises the importance of conducting risk-proportionate human rights due diligence by states and businesses to effectively manage AI. This approach ensures that potential risks and human rights concerns associated with AI are adequately addressed. Including a human rights framework in AI governance helps to structure discussions and mitigate the risks involved.

Transparency and stakeholder engagement are crucial in AI governance. It is vital for states to inform the public about the use of AI systems, creating an atmosphere of openness and accountability. Human Rights Due Diligence guidance plays a pivotal role in tracking and communicating the impacts and methods used in AI systems. This enables stakeholders to effectively monitor the implications of AI technology.

However, AI can become harmful when deployed in problematic contexts. The analysis warns that AI, which permeates various aspects of society, can become a weapon in environments with existing problems. This highlights the importance of carefully considering the context in which AI is deployed so as not to exacerbate existing issues.

Furthermore, the analysis highlights how AI can infringe upon various rights. It emphasises the impact of AI on security issues that affect life and liberty, particularly in law enforcement and due process rights. Additionally, facial recognition technologies used for monitoring assemblies can encroach upon the right to freedom of assembly. Moreover, attempts by AI to recognise emotions can interfere with freedom of thought, opinion, and individual autonomy.

The military applications of AI are often overlooked in discussions. The analysis notes that these applications are frequently neglected, indicating a potential blind spot when considering the ethical and strategic implications of AI in military operations.

In the healthcare sector, the analysis points out that AI tools can have a negative impact on people’s access to healthcare. This is exemplified by instances where health insurance claims have been denied based on AI assessments. The denial of claims can result in restricted access to necessary healthcare services.

The analysis further highlights the potential for AI to centralise power and shape environments. It asserts that AI has the capability to concentrate decision-making authority and influence the dynamics of power in society.

Community involvement and empowerment in shaping AI tools are important considerations. The analysis suggests that communities affected by the implementation of AI are often excluded from its development. Strengthening community abilities to shape AI tools can lead to more inclusive and beneficial outcomes.

The analysis suggests that ongoing discussions and evaluations are necessary for effective AI governance. It acknowledges the existence of advisory bodies convening regularly to deliberate on AI governance. However, it emphasises the need for increased dialogue and evaluations to ensure that AI governance aligns with human rights standards and addresses the concerns raised by the technology.

Overall, the analysis highlights the significance of integrating human rights considerations in the development and governance of AI. It emphasises the need for risk-proportionate human rights due diligence, transparency, stakeholder engagement, and careful consideration of the societal context in which AI is deployed. The analysis also points out potential infringements on various rights, the often-overlooked military applications of AI, and the impact of AI on healthcare access. It calls for community involvement and empowerment in shaping AI tools and underscores the necessity of ongoing discussions and evaluations for effective AI governance.

David Satola

David Satola, an influential voice in the field of artificial intelligence (AI) and human rights, emphasises the importance of understanding the complex relationship between these two domains in World Bank-funded projects. He highlights the potential implications of AI on human rights, specifically in the context of social protection programs.

Satola acknowledges that AI technology alone cannot address underlying policy flaws and can even exacerbate certain issues. He cautions against the misuse of data collected for social protection programs, which could worsen problems instead of solving them.

Furthermore, Satola expresses concerns about the concentration of power that AI can create. He stresses the need for balance in the use and control of AI to prevent power from being overly concentrated in the hands of a few. It is crucial to ensure that beneficiaries of AI tools also benefit from its impact.

Satola also highlights the interconnected nature of AI with other emerging technologies such as 5G and quantum computing. As technology advances, it is vital to establish regulations regarding data usage and system operation to address the challenges posed by faster data processing.

In terms of governance, Satola advocates for a multi-stakeholder approach based on the successful model of internet governance in the late 90s and early 2000s. He suggests collaboration among governments, private sector entities, and civil society in finding an appropriate solution for AI governance, drawing on the Internet Governance Forum as a potential model.

Although Satola presents a neutral stance, he emphasises the need for a comprehensive and collaborative approach that involves various stakeholders. This is necessary to effectively address the complex issues surrounding AI, ensuring the protection of human rights and the promotion of equitable outcomes.

By analysing Satola’s perspectives, we gain valuable insights into the challenges and considerations at the intersection of AI and human rights. This underscores the importance of careful navigation and proactive measures to harness AI’s potential while safeguarding human rights and minimising social inequalities.

Audience

During a discussion on the intersection of AI and human rights, several key points were raised. DCAF, the Geneva Centre for Security Sector Governance, which is investigating AI as an accountability tool, highlighted the potential of AI to provide oversight and promote accountability in security sectors. This suggests that AI can play a crucial role in holding security sectors around the world accountable for their actions.

Privacy concerns related to AI were also a topic of discussion. There was a universal concern about the right to privacy when it comes to AI. This implies that there is widespread recognition of the need to protect individuals’ privacy in the face of advancing AI technologies.

The impact of AI on other human rights beyond privacy was also explored. The speaker expressed curiosity about the specific human rights that are affected by AI, indicating a desire for a broader understanding of the potential implications of AI on human rights.

Maria Dmitriadou, a representative from the World Bank, was particularly interested in the implementation of AI in line with human rights. She emphasised the importance of AI applications that demonstrate sensitive approaches to human rights. Additionally, she highlighted the potential of AI to support goals such as reducing poverty and addressing vulnerabilities. This suggests that AI has the potential to contribute positively to the achievement of these social and economic objectives.

An audience member, a digital and AI trade lead from the British government’s Department for Science, Innovation, and Technology, questioned the role of the international community in controlling AI that poses infringements on human rights. In particular, the audience member proposed applying sanctions or banning AI that is misused by countries to infringe on citizens’ rights. This highlights the need for international cooperation and regulation to ensure that AI is used responsibly and does not compromise human rights.

In conclusion, the discussions on AI and human rights touched upon various important aspects. The potential of AI as an accountability tool in security sectors was highlighted, as well as concerns about privacy and the broader impact of AI on human rights. The World Bank representative highlighted the potential positive contributions of AI, especially in reducing poverty and addressing vulnerabilities. The role of the international community in controlling AI that infringes on human rights was also brought into question, with suggestions for sanctions or bans. These discussions shed light on the complex relationship between AI and human rights and underscore the importance of careful application and regulation of AI technologies to ensure their alignment with human rights principles.

Speakers’ statistics

Speaker                  Speech speed   Speech length   Speech time
Audience                 186 wpm        549 words       177 secs
David Satola             161 wpm        2101 words      781 secs
Mila Romanoff            156 wpm        3132 words      1208 secs
Moira Thompson Oliver    175 wpm        3511 words      1200 secs
Olivier Elas             141 wpm        1326 words      565 secs
Tim Engelhardt           145 wpm        2778 words      1150 secs