Practical Toolkits for AI Risk Mitigation for Businesses

9 Oct 2023 00:45h - 01:15h UTC

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Soujanya Sridharan

The analysis focuses on the impact of artificial intelligence (AI) on human rights in various sectors. It acknowledges that AI technology is being widely deployed and highlights the need for regulation and governance to manage the risks associated with its use.

One of the key findings is the development of toolkits to help businesses govern their human rights risks. These toolkits provide a framework for companies to understand and improve their human rights governance. However, it is noted that businesses are not always ready to apply these toolkits in full; even so, they can use them to better understand pathways to enhanced human rights governance.

The analysis also emphasises that risks related to AI technology vary depending on the sector. It mentions specific risks in financial services, healthcare, and retail sectors. In financial services, risks include privacy concerns, challenges around financial access, and difficulties with grievance redressal. In healthcare, risks involve threats to life, privacy, equality, and individual autonomy. Similarly, the retail sector anticipates risks to livelihood, standard of living, and worker autonomy. This highlights the need for sector-specific approaches to manage and mitigate these risks.

Furthermore, the analysis underscores the importance of regulation and standards in the context of AI. It suggests that there should be a concerted focus on regulating data use, access, and safeguarding, as well as auditing AI systems. Strengthening regulatory frameworks and standards is seen as crucial for ensuring responsible and ethical AI deployment.

Another significant finding is related to the use of generative AI, which reduces content production and dissemination costs to almost zero. This cost reduction raises concerns about the spread of hate speech, misinformation, and disinformation. The analysis argues that the widespread use of generative AI could result in an increase in these issues, requiring measures to effectively address them.

Moreover, the analysis highlights the potential impact of generative AI on employment opportunities, particularly in the software and legal services sectors. As generative AI can undercut traditional labour costs, it may lead to job losses in areas where the cost of labour is a significant factor. This observation raises questions about the potential economic and societal consequences of AI adoption.

In conclusion, the analysis recognises the immense potential of AI technology but stresses the need to govern and regulate its use to protect human rights. It recommends the development of robust regulatory frameworks, the application of toolkits for human rights governance, and sector-specific approaches to manage associated risks. The analysis also identifies the risks posed by generative AI, such as the spread of hate speech and job displacement, calling for further research and proactive measures to address these challenges. Overall, the analysis provides valuable insights into the complex relationship between AI and human rights, advocating for responsible and ethical AI deployment.

Nusrat Khan

The United Nations Development Programme (UNDP) is actively working with the Business and Human Rights Program to address the potential risks that AI technology poses to human rights. This collaboration is based on the UN guiding principles, which consist of frameworks for protection, respect, and remedy. The goal is to ensure that AI technology is developed and used in a manner that upholds human rights.

The UNDP’s digital strategy is focused on harnessing the power of digital technology for positive change, with the vision of creating a world where digital technology contributes to sustainable development. By partnering with the Business and Human Rights Program, the UNDP aims to mitigate the potential hazards associated with AI technology, such as privacy infringements, the dissemination of hate speech, and algorithmic discrimination.

AI technology has the potential to bring both positive and negative impacts. On the positive side, it can create job opportunities, drive economic growth, and empower civil and human rights defenders. However, it also poses risks that must be addressed. It is the responsibility of businesses to protect human rights and provide mechanisms to address grievances. The UN guiding principles on business and human rights establish the state’s duty to protect human rights, businesses’ duty to respect human rights, and the obligation of both states and businesses to provide access to grievance redressal mechanisms. This highlights the importance of businesses actively participating in mitigating human rights risks.

To assist businesses in managing their human rights risks, the UNDP has developed toolkits based on the three-pillar framework of the UN guiding principles. These toolkits offer guidance on how to govern human rights risks and follow the human rights due diligence process. The UNDP also assists businesses in identifying gaps in mitigating human rights risks through ongoing monitoring and reporting.

Improving data representation is essential for enhancing the reliability of algorithms. Stakeholder consultations have revealed that the data sets used to create algorithms may not be adequately representative. Addressing this issue is crucial to ensure that AI technology is fair, unbiased, and inclusive.

Additionally, businesses have a role to play in upskilling their workers to adapt to automation in the retail sector. As the use of data technology increases, it is important for workers to acquire the necessary skills to effectively utilize these technologies in their jobs.

Feedback mechanisms are vital for accountability and transparency. Whether it is a gig worker trying to address a grievance or an individual who has been denied a loan, clear reasons need to be communicated. This can help build trust and ensure that decisions made by AI systems are fair and explainable.

In conclusion, the collaboration between the UNDP and the Business and Human Rights Program highlights the importance of addressing the potential risks of AI technology on human rights. The UNDP’s digital strategy aims to leverage digital technology for positive change. It is crucial for businesses to actively participate in mitigating human rights risks and providing mechanisms for addressing grievances. Improving data representation, upskilling workers, and establishing feedback mechanisms are key steps towards ensuring that AI technology is developed and used in a manner that upholds human rights. Research is also being conducted to study the differential impact of generative AI on gender in India, underscoring the need for a gender-responsive approach to AI development and implementation.

Audience

During the discussion, the speakers touched upon various key topics such as artificial intelligence (AI), human needs, organisational policies, and the digital sector. One of the speakers raised the question of the placement of AI within the model, suggesting that the focus should be on prioritising human needs. They believed that instead of placing AI at the core, it would be more beneficial to place human needs at the centre of AI development and policy-making.

Another speaker supported this viewpoint by implying that centring human needs in AI development and policy-making could yield better outcomes compared to giving AI precedence. They emphasised the need to consider the impact of such an arrangement on the facilitation of AI technology and policies.

The discussion also delved into specific risks associated with generative AI. However, no further information or evidence was provided to support this argument.

One participant expressed a desire to understand how traditional business analysis methods can be applied to the digital sector. Unfortunately, no additional details or examples were given to elucidate this point.

Similarly, another participant expressed an interest in exploring the relevance of downstream supply chain approaches in the digital sector. However, no specific supporting information was provided.

On a positive note, one speaker sought recommendations for governments or corporations regarding downstream supply chain approaches in the digital sector. Unfortunately, no further information was given to elaborate on this stance.

The discussion briefly touched upon the importance of understanding the research methodology used behind the framework and recommendations for AI. However, no specific details or examples were mentioned.

Another concern raised was the risk of AI in relation to gig workers. Unfortunately, no further information or evidence was provided to support or expand upon this concern.

There was also a discussion surrounding the impact of generative AI on the labour market, particularly in countries like India. Supporting facts included the near-zero cost of generative AI for content generation and the low labour costs in India’s information technology sector. This raised concerns that low-cost labour markets may struggle to compete with AI’s near-zero cost.

Lastly, one participant asked about the potential indirect impacts of AI and methods to quantify such effects. They specifically mentioned the difficulty and subjectivity involved in measuring the impact of AI on labour reduction, and requested suggestions for quantifying these impacts.

In conclusion, the discussion revolved around crucial topics such as the relationship between AI and human needs, the risks associated with generative AI, downstream supply chain approaches in the digital sector, and measuring the indirect impacts of AI. While some arguments were supported by evidence and specific examples, others lacked further elaboration. The speakers’ views and recommendations demonstrate the need for further exploration and research in these areas.

Moderator

During a discussion on the role of gig workers in the AI industry, the moderator invited the audience to contribute further and provide input on the topic. When no additional comments or questions were initially forthcoming, the discussion was handed back to the speakers for concluding remarks. Just as the session seemed to be ending, however, an audience member spoke up. They thanked the speakers for the engaging conversation and mentioned their involvement in other AI-related businesses, particularly data labelling. In the Indian context, data labelling is seen as a form of cost-effective labour for AI, involving the task of sitting and labelling images. The audience member expressed gratitude for the discussion and for the audience’s active participation. The session concluded with Nusrat taking over and bidding farewell to the participants.
