Practical Toolkits for AI Risk Mitigation for Businesses
9 Oct 2023 00:45h - 01:15h UTC
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Full session report
Soujanya Sridharan
The analysis focuses on the impact of artificial intelligence (AI) on human rights in various sectors. It acknowledges that AI technology is being widely deployed and highlights the need for regulation and governance to manage the risks associated with its use.
One of the key findings is the development of toolkits to help businesses govern their human rights risks. These toolkits provide a framework for companies to understand and improve their human rights governance. However, it is noted that businesses are not always ready to apply these toolkits, and instead they can use them as a means to better understand pathways to enhanced human rights governance.
The analysis also emphasises that risks related to AI technology vary depending on the sector. It mentions specific risks in financial services, healthcare, and retail sectors. In financial services, risks include privacy concerns, challenges around financial access, and difficulties with grievance redressal. In healthcare, risks involve threats to life, privacy, equality, and individual autonomy. Similarly, the retail sector anticipates risks to livelihood, standard of living, and worker autonomy. This highlights the need for sector-specific approaches to manage and mitigate these risks.
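To make the sector mapping concrete, the following minimal Python sketch encodes the risks listed above (plus gig work, covered in the fuller session findings) as a simple lookup table. The identifiers are editorial shorthand, not labels from the report or toolkit.

```python
# Illustrative encoding of the sector-wise risks summarised above.
# Sector and risk names are editorial shorthand, not the report's labels.
from typing import Dict, List

SECTOR_RISKS: Dict[str, List[str]] = {
    "financial_services": ["privacy", "financial_access", "grievance_redressal"],
    "healthcare": ["life", "privacy", "equality", "individual_autonomy"],
    "retail": ["livelihood", "standard_of_living", "worker_autonomy"],
    # Gig work appears in the session's fuller findings, discussed later.
    "gig_work": ["standard_of_living", "privacy", "social_security", "remediation"],
}

def risks_for(sector: str) -> List[str]:
    """Return the human rights risks flagged for a sector (empty if unknown)."""
    return SECTOR_RISKS.get(sector, [])

print(risks_for("healthcare"))  # ['life', 'privacy', 'equality', 'individual_autonomy']
```

A structure like this is only a starting point; the report's point is precisely that each sector needs its own mitigation approach layered on top of such a map.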
Furthermore, the analysis underscores the importance of regulation and standards in the context of AI. It suggests that there should be a concerted focus on regulating data use, access, and safeguarding, as well as auditing AI systems. Strengthening regulatory frameworks and standards is seen as crucial for ensuring responsible and ethical AI deployment.
Another significant finding is related to the use of generative AI, which reduces content production and dissemination costs to almost zero. This cost reduction raises concerns about the spread of hate speech, misinformation, and disinformation. The analysis argues that the widespread use of generative AI could result in an increase in these issues, requiring measures to effectively address them.
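The near-zero-cost argument can be illustrated with back-of-the-envelope arithmetic. The sketch below uses entirely hypothetical figures; it only shows how total-cost curves diverge once the marginal cost per item approaches zero.

```python
# Hypothetical cost model: total cost = fixed setup + marginal cost per item.
def total_cost(fixed: float, marginal: float, items: int) -> float:
    return fixed + marginal * items

ITEMS = 100_000  # articles, posts, or messages produced

# Figures below are invented for illustration only.
human_cost = total_cost(fixed=0.0, marginal=5.00, items=ITEMS)       # $5 of labor per item
genai_cost = total_cost(fixed=1_000.0, marginal=0.001, items=ITEMS)  # setup, then ~zero per item

print(f"human: ${human_cost:,.0f} vs generative AI: ${genai_cost:,.0f}")
# human: $500,000 vs generative AI: $1,100
```

The same arithmetic applies to dissemination: when both production and distribution cost next to nothing per item, volume, including the volume of harmful content, faces no economic brake.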
Moreover, the analysis highlights the potential impact of generative AI on employment opportunities, particularly in the software and legal services sectors. As generative AI can disrupt traditional labor costs, it may lead to job losses in areas where the cost of labor is a significant factor. This observation raises questions about the potential economic and societal consequences of AI adoption.
In conclusion, the analysis recognizes the immense potential of AI technology but stresses the need to govern and regulate its use to protect human rights. It recommends the development of robust regulatory frameworks, the application of toolkits for human rights governance, and sector-specific approaches to manage associated risks. The analysis also identifies the risks posed by generative AI, such as the spread of hate speech and job displacement, calling for further research and proactive measures to address these challenges. Overall, the analysis provides valuable insights into the complex relationship between AI and human rights, advocating for responsible and ethical AI deployment.
Nusrat Khan
The United Nations Development Programme (UNDP) is actively working with the Business and Human Rights Program to address the potential risks that AI technology poses to human rights. This collaboration is based on the UN guiding principles, which consist of frameworks for protection, respect, and remedy. The goal is to ensure that AI technology is developed and used in a manner that upholds human rights.
The UNDP’s digital strategy is focused on harnessing the power of digital technology for positive change, with the vision of creating a world where digital technology contributes to sustainable development. By partnering with the Business and Human Rights Program, the UNDP aims to mitigate the potential hazards associated with AI technology, such as privacy infringements, hate speech dissemination, and algorithm discrimination.
AI technology has the potential to bring both positive and negative impacts. On the positive side, it can create job opportunities, drive economic growth, and empower civil and human rights defenders. However, it also poses risks that must be addressed. It is the responsibility of businesses to protect human rights and provide mechanisms to address grievances. The UN guiding principles on business and human rights establish the state’s duty to protect human rights, businesses’ duty to respect human rights, and the obligation of both states and businesses to provide access to grievance redressal mechanisms. This highlights the importance of businesses actively participating in mitigating human rights risks.
To assist businesses in managing their human rights risks, the UNDP has developed toolkits based on the three-pillar framework of the UN guiding principles. These toolkits offer guidance on how to govern human rights risks and follow the human rights due diligence process. The UNDP also assists businesses in identifying gaps in mitigating human rights risks through ongoing monitoring and reporting.
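Based on this description, the toolkit's question sets can be pictured as groups keyed to the three UNGP pillars. The sketch below is an assumed shape with made-up question texts; the actual questions are in the report itself.

```python
# A minimal sketch of a three-pillar question set. The pillar names come from
# the UNGPs (Protect, Respect, Remedy); the question texts are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Pillar:
    name: str
    questions: List[str] = field(default_factory=list)

toolkit = [
    Pillar("Protect", ["Which laws and standards apply to our deployment of this AI system?"]),
    Pillar("Respect", [
        "Have we mapped human rights risks across the system's supply chain and operations?",
        "Do we monitor those risks on an ongoing basis, not as a one-time exercise?",
    ]),
    Pillar("Remedy", ["Can an affected consumer or worker raise and track a grievance?"]),
]

for pillar in toolkit:
    # Due diligence is ongoing, so questions are revisited, not answered once.
    print(f"{pillar.name}: {len(pillar.questions)} question(s)")
```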
Improving data representation is essential for enhancing the reliability of algorithms. Stakeholder consultations have revealed that the data sets used to create algorithms may not be adequately representative. Addressing this issue is crucial to ensure that AI technology is fair, unbiased, and inclusive.
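One way to act on the data-representation finding, offered here as an assumption rather than the report's prescribed method, is to compare group shares in a training set against reference population shares and flag large gaps:

```python
# Hypothetical representativeness audit: flag groups whose training-data share
# deviates from a reference population share by more than a chosen tolerance.
from typing import Dict, Tuple

def representation_gaps(
    train_shares: Dict[str, float],
    population_shares: Dict[str, float],
    tolerance: float = 0.05,
) -> Dict[str, Tuple[float, float]]:
    """Return {group: (train_share, population_share)} for mis-represented groups."""
    return {
        group: (train_shares.get(group, 0.0), pop_share)
        for group, pop_share in population_shares.items()
        if abs(train_shares.get(group, 0.0) - pop_share) > tolerance
    }

# Invented figures for illustration only.
train = {"women": 0.22, "men": 0.78}        # shares observed in the training set
population = {"women": 0.48, "men": 0.52}   # shares in the reference population
print(representation_gaps(train, population))
# {'women': (0.22, 0.48), 'men': (0.78, 0.52)}
```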
Additionally, businesses have a role to play in upskilling their workers to adapt to automation in the retail sector. As the use of data technology increases, it is important for workers to acquire the necessary skills to effectively utilize these technologies in their jobs.
Feedback mechanisms are vital for accountability and transparency. Whether it is a gig worker trying to address a grievance or an individual who has been denied a loan, clear reasons need to be communicated. This can help build trust and ensure that decisions made by AI systems are fair and explainable.
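For a linear credit-scoring model, such reasons can be derived directly from feature contributions. The sketch below, with hypothetical feature names and weights, picks out the features that pulled an applicant's score down the most; this is one simple, assumed way to generate the reason codes a feedback mechanism needs.

```python
# Hypothetical reason codes for a linear scoring model: rank each feature's
# contribution (weight * value) and report the most negative ones.
from typing import Dict, List

def reason_codes(weights: Dict[str, float], applicant: Dict[str, float], top_n: int = 2) -> List[str]:
    contributions = {f: weights[f] * applicant[f] for f in weights}
    negatives = sorted(
        ((f, c) for f, c in contributions.items() if c < 0),
        key=lambda item: item[1],
    )[:top_n]
    return [f"{feature} lowered the score by {abs(c):.2f}" for feature, c in negatives]

# Invented example values.
weights = {"income": 0.8, "missed_payments": -1.5, "credit_history_years": 0.3}
applicant = {"income": 0.4, "missed_payments": 2.0, "credit_history_years": 1.0}
print(reason_codes(weights, applicant))  # ['missed_payments lowered the score by 3.00']
```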
In conclusion, the collaboration between the UNDP and the Business and Human Rights Program highlights the importance of addressing the potential risks of AI technology on human rights. The UNDP’s digital strategy aims to leverage digital technology for positive change. It is crucial for businesses to actively participate in mitigating human rights risks and providing mechanisms for addressing grievances. Improving data representation, upskilling workers, and establishing feedback mechanisms are key steps towards ensuring that AI technology is developed and used in a manner that upholds human rights. Research is also being conducted to study the differential impact of generative AI on gender in India, underscoring the need for a gender-responsive approach to AI development and implementation.
Audience
During the discussion, the speakers touched upon various key topics such as artificial intelligence (AI), human needs, organizational policies, and the digital sector. One of the speakers raised the question of the placement of AI within the model, suggesting that the focus should be on prioritising human needs. They believed that instead of placing AI at the core, it would be more beneficial to place human needs at the centre of AI development and policy-making.
Another speaker supported this viewpoint by implying that centring human needs in AI development and policy-making could yield better outcomes compared to giving AI precedence. They emphasised the need to consider the impact of such an arrangement on the facilitation of AI technology and policies.
The discussion also delved into specific risks associated with generative AI. However, no further information or evidence was provided to support this argument.
One participant expressed a desire to understand how traditional business analysis methods can be applied to the digital sector. Unfortunately, no additional details or examples were given to elucidate this point.
Similarly, another participant expressed an interest in exploring the relevance of downstream supply chain approaches in the digital sector. However, no specific supporting information was provided.
On a positive note, one speaker sought recommendations for governments or corporations regarding downstream supply chain approaches in the digital sector. Unfortunately, no further information was given to elaborate on this stance.
The discussion briefly touched upon the importance of understanding the research methodology used behind the framework and recommendations for AI. However, no specific details or examples were mentioned.
Another concern raised was the risk of AI in relation to gig workers. Unfortunately, no further information or evidence was provided to support or expand upon this concern.
There was also a discussion surrounding the impact of generative AI on the labour market, particularly in countries like India. Some supporting facts highlighted the near-zero cost of generative AI for content generation and the low labour costs in India’s information technology sector. This raised concerns about low-cost countries struggling to compete with AI’s nearly zero cost.
Lastly, one participant expressed an inquiry regarding the potential indirect impacts of AI and methods to quantify such effects. They specifically mentioned the difficulty and subjectivity in measuring the impact of AI on labour reduction. The speaker requested suggestions for quantifying these impacts.
In conclusion, the discussion revolved around crucial topics such as the relationship between AI and human needs, the risks associated with generative AI, downstream supply chain approaches in the digital sector, and measuring the indirect impacts of AI. While some arguments were supported by evidence and specific examples, others lacked further elaboration. The speakers’ views and recommendations demonstrate the need for further exploration and research in these areas.
Moderator
During the discussion, the moderator invited the audience to contribute further questions and input. Just as the discussion seemed to be coming to an end, an audience member raised a final question: they thanked the speakers for the engaging conversation and asked about the indirect labour-market impacts of generative AI. In closing, the moderator noted their organization's own work on other AI-related businesses, particularly data labeling, which in the context of India is a form of cost-effective labour for AI involving people sitting and labeling images. The moderator thanked the speakers and the audience for their active participation, and the discussion concluded with Nusrat bidding farewell to the participants.
Session transcript
Soujanya Sridharan:
Class, teacher's calling people to the front. But thank you very much, everybody, for taking the time to be here. This is Nusrat, and I'm Sarayu. We're here to launch our report and toolkits focused on artificial intelligence and its impact on human rights. The work focused on India, but our strong viewpoint has been that artificial intelligence and its impacts are something that every society will feel, so it's very much part of a global conversation. And thank you also, especially, for taking the time to come here while the opening event is going on. I do know that it's a choice to be in this room. In terms of structure, we hope this is very much a conversation and not us talking one way. We'll start by introducing why we undertook this work of building out the toolkits and writing the report. I'll quickly highlight some key findings from the research, and then we can have a discussion of about seven or eight minutes, or more if people are willing to stay back. I hope that sounds all right, and we look forward to the discussion. Over to you, Nusrat.
Nusrat Khan:
Thanks, Sarayu. Good morning, everyone, and thank you for being with us. My name is Nusrat Khan, and I work with the United Nations Development Programme in India, for a specific program called the Business and Human Rights Program. It is under this program that we partnered with Aapti to develop this piece of research, which has taken on a good life of its own, I must say. I'm very, very pleased with the way it has been received. But speaking just briefly to introduce the program itself, it focuses on achieving sustainable economic development overall, and we premise our entire conversation in doing so on the UN Guiding Principles on Business and Human Rights. This is, of course, a normative framework which was adopted by the UN Human Rights Council in 2011, almost unanimously, by all member states. On that basis, the entire program rests on the premise that businesses are, of course, a force for good. There are a lot of positive outcomes of business enterprise, like job creation, infrastructure development, and information and knowledge dissemination at the speed which information technology allows for. But sometimes there can also be adverse impacts of business and enterprise. In pursuing profit maximization, it is important for businesses to also be mindful of the sometimes unintended consequences of their actions. The UN Guiding Principles that I just spoke about are a three-pillar framework: the protect, respect, remedy framework. The first pillar outlines the state duty to protect human rights, which really concerns regulation and laws; many of these are being debated at the IGF currently with respect to AI technology and digital technology overall. The respect pillar places an obligation on businesses to respect human rights and, in doing so, to take certain actions to address, mitigate, and prevent human rights risks across their supply chains. In this case, Sarayu will speak about a set of questions we've developed in our toolkit, which business enterprises across four sectors adopting AI technology may take up to mitigate risks across their operations. And finally, the third pillar, on remedy, is an extremely important one, which calls on both states and businesses to provide access to grievance redressal mechanisms. Should there be any violation of a right, there needs to be a channel through which it can be addressed. Within the state structure, this could mean the court of law; within a business operation, it could mean an in-house mechanism put in place by the enterprise to address a grievance that a consumer or any other stakeholder may have. Next slide, please. Thank you. So of course, we all know that new digital technology, including AI technology, has brought unimaginable change to the lives of many people on the planet. At its very best, the positive outcomes have included job creation, economic growth, empowering civil and human rights defenders, et cetera, and even just the efficiency of science overall. Digital technology and its use has also accelerated the achievement of the sustainable development goals and allows us to progress toward fulfilling our mandate by 2030, the end date for the achievement of the sustainable development goals.
There are, of course, some shadow sides of innovation as well, which have come into focus very sharply in the recent past. There are enough reports and evidence, generated also by tech companies themselves, speaking to the dangers of privacy infringements, the dissemination of hate speech, which also fuels conflict in regions, and algorithmic discrimination. This is something that our research also speaks about in quite a bit of detail. These harms could, of course, limit the way one accesses the job market, but also access to public services, financial services, or, in many cases, even the criminal justice system. And I think there is enough consensus amongst businesses, governments, and other stakeholders in the ecosystem that these risks must definitely be addressed. The UNGPs, as I spoke about, provide a very comprehensive and consultative framework that can inform efforts by a range of actors, including governments and companies, to identify, prevent, mitigate, and even remedy human rights harms related to digital technologies. Before I hand it back to Sarayu, I'd like to also talk a little bit about UNDP's own digital strategy, which has a long-term vision to create a world in which digital technology is an empowering force for people and the planet. We intend, in doing so, to create digitally enabled programming that amplifies development outcomes by embedding digital mediums across UNDP programming, to empower digital ecosystems to be more inclusive and resilient, and overall to create a workforce that can support these two objectives. Within this context, the Business and Human Rights Program collaborated with the Aapti Institute to acknowledge the role of digital technology, like artificial intelligence, in creating massive positive outcomes for society, but also to be mindful of, and investigate a little, some of the possible harmful impacts it could have on rights across four sectors: financial services, healthcare, retail, and gig work. To elaborate on the findings and some of the solutions we proposed, I request Sarayu to step in. Thank you.
Soujanya Sridharan:
Thank you very much, Nusrat. We were indeed very excited to have participated in this piece of work. It was interesting, not just from the perspective of the way in which we think about human rights mitigation, but also because it brought together various layers of what we see as technological protections that might operate. We had two levels of insights, and I'll very quickly summarize both. The first level is that we understood a little bit about how AI harms might emerge and operate. The second is sector-specific findings around the ways in which human rights risks emerge and might be mitigated. As Nusrat mentioned, we focused on four sectors in our work: platform work, retail, health, and financial services. Just to explain the selection of sectors: health and financial services were selected because they have consumer impacts, that is, users and consumers experience the impact of AI deployment there. We looked at retail and platform work to understand the impacts of AI on individuals as workers. With this division, we proceeded to try to understand how AI risks might emerge and then be mitigated. The first insight from quite a lot of our work was that we tend to think of AI as a technological artifact that operates alone, in isolation from company policies and regulatory frameworks. But across the four sectors, we learned that the technological artifact that underlies AI works very closely and in tandem with the company policy and governance that determines how it's operated and deployed. Surrounding that is the layer of the regulatory framework, which determines what you can and cannot do. To give you examples at each level: the AI technology might be the credit scoring algorithm that sits at the core of a product or a service. Company policies are decisions that sit as a layer outside that. For example, the operation of the algorithm that does work allocation in platform gig work is determined by decisions at the company level: incentive structures, required hours, log-ons and log-offs, deactivation policies, et cetera. Outside that sits a layer of, for example, data protection regulation, which determines how you might use, operate, or engage with the data that you collect. So to think of AI regulation or human rights risks as disembodied from this three-tier structure might be problematic and limit what you can do to manage human rights risks. In order to understand human rights risks, in consultation with the UNDP's program, we decided to take a wide lens, because there are international frameworks that operate across the world and particularly in India. But specifically in India, we decided to look at the Indian constitution, which does encapsulate some rights, as well as various statutes that might have relevance either sectorally or for all Indian citizens. This wide lens allowed us to take an expanded view of human rights and, in particular, to account for emerging concerns that might come from the nature of the technology itself. The big findings were, first, that the risks and the nature of the risks varied sector-wise, but, having said that, there is no sector that is risk-free.
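The three-tier structure described here can be rendered as a small sketch. The class names are editorial inventions; the example rules and statute simply echo the ones given in this passage.

```python
# A minimal sketch of the three-tier view: a technical artifact at the core,
# wrapped by company policy, wrapped by the regulatory framework.
from dataclasses import dataclass
from typing import List

@dataclass
class AIArtifact:
    """Innermost layer: the model itself, e.g. a work-allocation algorithm."""
    name: str

@dataclass
class CompanyPolicy:
    """Middle layer: company decisions that determine how the artifact operates."""
    artifact: AIArtifact
    rules: List[str]

@dataclass
class RegulatoryFramework:
    """Outermost layer: what the company can and cannot do."""
    policy: CompanyPolicy
    statutes: List[str]

deployment = RegulatoryFramework(
    policy=CompanyPolicy(
        artifact=AIArtifact("work_allocation_algorithm"),
        rules=["incentive structures", "required log-on hours", "deactivation policy"],
    ),
    statutes=["data protection regulation"],
)

# Assessing human rights risk means inspecting all three layers, not the model alone.
print(deployment.policy.artifact.name, "operates under", deployment.statutes[0])
```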
In financial services, with a focus on AI-based credit scoring, we learned that there were risks to privacy and financial access, and challenges around grievance redressal. With respect to gig work and AI algorithmic intermediation, we found that risks emerged with respect to the standard of living, due to volatility of income, the absence of social security, the absence of privacy, as well as challenges around effective remediation. In the context of healthcare, particularly predictive healthcare analytics, we found risks to life, equality, privacy, and individual autonomy that emerged from the deployment of AI. With respect to retail, while this is a sector where AI-based automation is as yet emerging, we did see, and did anticipate, risks to livelihood, standard of living, and worker autonomy. The upshot of all of this is that we need to take a networked view of both impact and human rights mitigation and governance. We think there are three paradigms to it. The first is a need for regulation and active work on data, data use, data access, and guardrails, including the ways in which we audit and deal with AI, as well as systems and standards, some of which might need to emerge from global conversations. Second, we think business is a very key stakeholder in all of this, given that businesses are the ones very often making AI, although deployment contexts might vary. So discovering and highlighting business incentives, particularly trust, consumer adoption, as well as headline risks, might be useful, but this, we believe, is a conversation that needs to continue to happen. As a result of all of this, particularly the emphasis on the role of businesses, we have built out our business and human rights report and toolkit. For those in the audience, and I don't know if it applies online as well, you can scan that QR code to access the report digitally. But we also have a gargantuan copy here for those who wish to refer to it offline. This is the report. It comprises two parts. There are all of the learnings we had from trying to unpack the application of AI and human rights risks in these four sectors. But what we have also built, because of the role of companies in this process, is toolkits that companies themselves can apply to govern their human rights risks. If they're not at a state of readiness where they can apply these human rights toolkits, what they can do is use the toolkits as a way to understand pathways to better human rights governance. I'll pause here, and I'm happy to take any questions, either about the methodology, the findings, or the toolkits themselves. Over to you, Nusrat.
Nusrat Khan:
Just one thing: the toolkits are based on the three-pillar framework of the UNGPs that I spoke about earlier, Protect, Respect, Remedy. They also speak to the human rights due diligence process, which the UNGPs talk about. It's really a set of questions that businesses must reflect upon to identify where the gaps lie as far as human rights risks are concerned, and then, of course, proactively try to mitigate them. It's also an ongoing process, hardly a one-time one. And we've realized that that is very, very true for the use of artificial intelligence technology; of course, it's also very true for other sectors beyond the use of digital technology. So you will find that the questions are bifurcated into three sets, calling for action by the state, most importantly by the business itself, and, finally, on remediation, by the state and the business together.
Soujanya Sridharan:
So yes, we would love any questions. Just an administrative note: we've placed the report with the QR code link, should you wish to scan and access it. We're happy to also leave our emails, should you wish to receive the slide deck. I saw a few people taking pictures, so we're happy to email the slide deck, and the full slide deck, should you wish to look at a 100-page document as well. But we can pause now for questions. My colleague, Aastha Kapoor, is here to moderate this discussion, and my colleague, Vinay Narayan, is online. So should there be any online listeners, please post your questions in the chat, and we can take them. Thank you. Yes, please. There's a mic over there. Sorry for the inconvenience. If you could just introduce yourself and ask a question, that'd be great.
Audience:
I think it's on, yeah. My name is Shizuka Morika. Thank you for having us today. I would like to know more about the model you had on the slide titled, AI is not the tech alone. I am wondering why AI is at the center rather than human needs, and what the impact of that model would be if human needs were at the center, radiating outward to facilitate AI technology and organizational policies. So I'm wondering if you have thought about it, and how we should take that.
Moderator:
Okay, sure. There’s a question as well. Sorry, I’m just gonna, can you pass this along to him?
Audience:
Yeah, I'm Chang Ho, I'm a lawyer coming from Japan. Just two quick questions. The first question is: among the risks you presented, is there any particular, peculiar, or special risk in relation to generative AI? I mean, will the same kind of analysis apply, or is there any additional risk which you have identified? And my second question, for both of you: I have some expertise on business and human rights but not much on the digital side, and a lot of the toolkits around business and human rights focus quite heavily on the downstream side of the supply chain, I mean, how we can source, and a lot of human rights due diligence and so on. So I just want to know, what is your recommendation to governments or corporations on this kind of downstream supply chain approach, which is more relevant to the digital sector?
Moderator:
Thank you.
Audience:
Hi, I’m Richard. I represent the tech community of Nepal. So I definitely want to understand the research methodology, like how you came across those recommendations and framework. And the second is, I want to understand a little bit more about the risk of AI with gig workers. Thank you.
Soujanya Sridharan:
Thank you. I'll try to take a few of these questions and pass the more difficult ones on to Nusrat. But to answer your question, ma'am: AI is not the tech alone was our way of understanding where human rights risks emerge from, rather than a way of thinking about the societal value of AI. I don't know if that speaks to your question, but what we were trying to argue with that, and I'm happy to have a more detailed discussion offline as well, was that there is a core technological artifact which sits in a company, with a layer of governance over and above it, and outside both of those sits the layer of regulatory governance, which permits or lays the framework for what a company might or might not be able to do. We hope that that's not taken as a model by which the utility, usefulness, impact, or even feasibility of AI itself is addressed. Our starting point, and we do remark on that in the report, is that the deployment of AI technology in some of these sectors is already underway, and the question then is how we understand and problematize where risks emerge from. So that's how we've built that framework and that model. To answer the question on generative AI: this report specifically does not tackle generative AI. We think there is a case, and we hope to undertake some work on this soon, to understand generative AI almost as a specific category. But we think the problems with generative AI arise from a couple of standpoints. The first is that generative AI reduces the cost of content production to zero: you basically put in a prompt, and with a few variations and some practice, the cost of content production in a variety of languages is quite close to zero. Now, overlay that with social media or other dissemination technologies. The cost of dissemination is also zero, which means that basically, with no friction, you can generate and disseminate content, which might contribute to some of the concerns that Nusrat mentioned in her introductory remarks, such as hate speech, misinformation, disinformation, et cetera. The usefulness of generative AI is not to be written off either, because it can be useful in enabling very targeted, distinct communications from the state, though this needs to be governed with some degree of caution. But having said that, the challenges of generative AI, we believe, emerge from both the cost of production and the cost of dissemination being reduced to zero. And of course, it's all overlaid with the fact that very often business interests are central and at the forefront of driving some of these products and services. So I'll pause there, and I'm happy to have a longer discussion offline. I'll come back to the one on digital rights and the supply chain; I feel it requires a little more delineation, unless Nusrat has some thoughts to offer. But very quickly, on the methodology: we had a multi-step methodology. We started, of course, with understanding the UNDP framework as well as mapping out the potential focus areas with respect to human rights themselves: where would we source our understanding of human rights from? Then we spent a bit of time selecting the specific type of AI technology within each category, though we did start with an understanding that we would focus on consumers and workers. After that, within each sector, we followed a combined approach of secondary literature review and expert interviews.
That included, where available, data analysis, as well as doctrinal analysis to understand the implications of the law in these sectors. So that was the methodology followed within each of the sectors, and we followed a roughly similar pattern across them. For gig work, we did have a small qualitative segment where we spoke to gig workers, but that was not the central piece of the research. A lot of it relied on expert interviews and doctrinal analysis. So I'll pause here, but I'm happy to come back to the conversation on gig workers later.
Nusrat Khan:
You're right. A lot of the business and human rights focus has been on due diligence downstream, and the recommendations the report presents reflect that. For example, for the state, a lot of the recommendation in the Indian context pointed to the absence of a law. Now we do have that law: this report was released a year back, and parliament has since passed a data protection law. But that was a glaring gap a year back, from a regulation perspective. From a business perspective, and this again goes into the thematic details, as I understand it, the data sets on which the algorithms were created weren't representative enough; that was a finding which also emerged from the stakeholder consultations. So one of the recommendations we had was: improve your data representation, across all four sectors. But also details like upskilling. With the movement, for example in retail, toward automation, there is a clear obligation, a primary obligation I would say, on the state to upskill its workers for the use of this data technology, but also on businesses to take up some of that upskilling role. Then, with respect to gig workers, some of the issues that came out concerned surveillance: the massive amount of surveillance that a lot of these apps allow for, right from health data, like temperature, to movement, but also other, slightly off-technology consequences, like the lack of social security that gig workers generally have. Of course, we do have some states in India, I think one state, which now provides for that social security. But these were, overall, the kinds of recommendations that the report presents. We also talk about the need for feedback mechanisms, whether it's a gig worker trying to get in touch with a company about a grievance, or somebody who has applied for a loan and been rejected: there need to be clear reasons communicated about why a certain loan has been rejected. And one thing for me personally, because I'm not a thematic expert on the subject, was the whole idea of explainability of AI. It came out very clearly that companies need to create these algorithms and AI technology in a way that is explainable and transparent. That is, in many ways, the foundation for mitigating any human rights grievance that any stakeholder which interfaces with that technology may have in the future. Yeah, I think that, in a nutshell, is that.
Moderator:
I'm happy to come back to the gig workers question if you would like more detail, but happy to defer to you on that. I'm cognizant of the time, that's why. Are there any other questions or comments from the room? I've checked. Any comments, questions? Great, we'll hand it back to the speakers for closing remarks. Oh, you have a question, great. Wait, there's a mic there.
Audience:
Hi, my name is Christoph Zeng. I'm the founder of the AAA.AI Association, based in Geneva, Switzerland. And my concern is about exactly what was mentioned earlier: generative AI and its near-zero cost of content generation. Now, a country like India heavily depends on its relatively low cost of labor, especially in the information technology sector. But even Indians cannot compete with generative AI, right? There's no point in competing with something that's close to zero cost, let alone for Americans or Europeans. So, have you so far conducted research or surveys on the potential indirect impacts? Because these are quite difficult to measure: AI being used on one computer, and man-hours being reduced on another, is something very subjective to measure. How do you suggest we at least try to quantify this impact?
Soujanya Sridharan:
Happy to take that, and thank you very much for this question. I think it makes the case for very specific, sector-focused inquiry into the ways in which generative AI would affect job loss. It's definitely a second-order effect that emerges from the use of generative AI. Some of it is in areas that have already been noted, such as code and software development. Legal services is another area where a significant amount of work and employment opportunities could be disrupted. Actually, any kind of job that… So, we as an organization hope to undertake that research in the immediate future, particularly understanding the specific implications of generative AI on a sectoral basis. But because the cost of content production is zero and the cost of dissemination is already zero, there are also very visible, immediately discernible first-order effects, such as disinfo and misinfo, which might be harder to quantify but necessarily exist as well. So we think it's both categories. Thank you for bringing that up. Nusrat, any last remarks from you?
Nusrat Khan:
I think I'm still learning about generative AI and the effects that it may have. In fact, we've been talking about unpacking the impact it has on gender, across the entire spectrum. So we're hoping to come up with a piece of research that allows for that unpacking, especially in a country like India, where there are already certain skewed notions of gender and gender equality, et cetera. A technology such as this one can only amplify the harms that already exist in society. So we're hoping to unpack the use of generative AI and the differential impact it may have on gender, yeah.
Moderator:
Great, thank you so much, both of you. And to add, we also work on other kinds of businesses in AI, such as data labeling, which in the context of India is cheap labor re-imagined for AI: people sitting and labeling images. That is a large piece of work we've also done in the past. But thank you so much for this conversation and this perspective, and to the audience for engaging. Thanks.
Soujanya Sridharan:
Thank you, and just to reiterate, the report is available here with us if you want to read our 172-page report. It’s also available via the QR code. And we’re both, Aastha, I, Nusrat, we’re all around, so please catch us. QR code, I think it’s at the table.
Moderator:
I’ll just hand it over to Nusrat. Thank you very much, bye-bye. Thank you.
Speakers
Audience
Speech speed
158 words per minute
Speech length
501 words
Speech time
191 secs
Arguments
AI model and its relation with human needs and organizational policies
Supporting facts:
- She questioned the placement of AI in the model shared by the panel, suggesting that human needs should be central.
- She also asked about the impact of a model in which human needs sit at the centre, extending outward to facilitate AI technology and organizational policies.
Topics: AI, Human needs, Organizational policies, Technology
Specific risks related to generative AI
Topics: Artificial Intelligence, Risk Management
Interested in the relevance of downstream supply chain approach in the digital sector
Topics: Digital Sector, Supply Chain, Business Analysis
Understanding the research methodology behind the framework and recommendations
Topics: AI, Research Methodology
Concerns regarding the risk of AI with gig workers
Topics: AI, Gig Economy
Concern about the impact of generative AI on the labor market in countries like India
Supporting facts:
- Generative AI has nearly zero cost for generating content
- India’s labor costs are low, particularly in the information technology sector
- Even low-cost countries may struggle to compete with near-zero cost AI
Topics: Generative AI, Labor Market, India
Moderator
Speech speed
214 words per minute
Speech length
217 words
Speech time
61 secs
Nusrat Khan
Speech speed
158 words per minute
Speech length
1934 words
Speech time
733 secs
Arguments
The UNDP works with the Business and Human Rights Program to address potential hazards of AI technology on human rights.
Supporting facts:
- The program is based on the UN guiding principles on business and human rights
- The principles comprise the protect, respect, and remedy frameworks
- The UNDP’s digital strategy focuses on creating a world where digital technology is a force for positive change
Topics: UNDP, Business and Human Rights Program, AI technology, Human Rights
Businesses have obligations to protect human rights and to provide means for addressing grievances.
Supporting facts:
- The UN guiding principles on business and human rights establish the state duty to protect human rights, the duties of businesses to respect human rights, and both states’ and businesses’ duties to provide access to grievance redressal mechanisms
Topics: Business Responsibility, Human Rights, Grievance Redressal
The toolkits are based on the three-pillar framework of the UNGPs
Supporting facts:
- The UNGPs framework includes Protect, Respect, Remedy
- The toolkits facilitate businesses to govern their human rights risks
- They follow the human rights due diligence process of the UNGPs
Topics: Human rights risks, Artificial Intelligence, Business Governance
Improve your data representation across all four sectors
Supporting facts:
- The findings from the stakeholder consultation revealed that the data set representation on which the algorithms were created weren’t representative enough
Topics: data representation, AI technology
Businesses should take up some of the upskilling role
Supporting facts:
- With automation in retail, there is a need to upskill workers for the use of data technology
Topics: businesses, upskilling, workers
Need for feedback mechanisms
Supporting facts:
- Whether it’s a gig worker trying to get in touch with a certain company with a certain grievance, or someone who’s applied for a loan and has been rejected, there needs to be clear reasons communicated
Topics: feedback mechanisms, AI technology, gig workers
Importance of explainability of AI
Supporting facts:
- Companies need to create these algorithms and AI technology in a way that it’s explainable and transparent
Topics: AI technology, algorithms, explainability
Nusrat Khan is learning about generative AI and the effects it may have
Supporting facts:
- Nusrat Khan is exploring generative AI
Topics: Generative AI, Artificial Intelligence
Generative AI could amplify existing societal harms, interpretations of gender and gender equality
Supporting facts:
- A technology such as this one can sort of only amplify the harms that already exist in society
Topics: Generative AI, Gender Equality, Society
Soujanya Sridharan
Speech speed
186 words per minute
Speech length
2658 words
Speech time
856 secs
Arguments
Launching of toolkits and a report about how AI impacts human rights
Supporting facts:
- The research focused on India, but it is applicable globally
Topics: Artificial Intelligence, Human Rights
AI technology works very closely and in tandem with company policy and governance and also the regulatory frameworks
Supporting facts:
- AI technology might be the credit scoring algorithm that underlies and sits at the core of a product or a service
- Company policies are decisions that sit as a layer outside that
- Surrounding that is the layer of the regulatory framework
Topics: AI, Company Policy, Governance, Regulation
Risks and the nature of risks vary sector-wise and there is no sector that is risk-free.
Supporting facts:
- In financial services, there were risks to privacy, financial access, and challenges around grievance redressal.
- With respect to healthcare, risks to life, equality, privacy, and individual autonomy
- In the context of retail, anticipated risks to livelihood, standard of living, and worker autonomy.
Topics: AI, Risk Management, Sector Transparency
There is a need for concerted focus on the regulation of AI and the strengthening of systems and standards.
Supporting facts:
- There is a need for regulation and active work in terms of regulation on data, data use, data access, guardrails, including the ways in which we audit AI and deal with AI, as well as systems and standards.
Topics: AI, Regulation, Standards
Businesses are a key stakeholder in managing AI risks and enhancing consumer trust.
Supporting facts:
- Discovering and highlighting business incentives, particularly trust, consumer adoption, as well as headline risks might be useful
Topics: AI, Business Responsibility, Consumer Trust, Risk Management
Toolkits have been developed for businesses to govern their human rights risks.
Supporting facts:
- Toolkits that companies themselves can apply to govern their human rights risks. If they’re not at a state of readiness where they can apply these human rights toolkits, what they can do is to use the toolkits as a way to understand pathways to better human rights governance.
Topics: Toolkits, Business Responsibility, Human Rights
Artificial Intelligence (AI) technology is being widely deployed in several sectors.
Supporting facts:
- AI is being used in companies, with a layer of governance over it
Topics: AI, technology, deployment
Human rights risks emerge from the use of AI technology.
Supporting facts:
- The research focuses on understanding where human rights risks emerge from AI
Topics: AI, human rights, risks
Research methodology involved secondary literature review, expert interviews within each sector, data and doctrinal analysis.
Supporting facts:
- Methodology started with understanding the UNDP framework, selecting a specific type of AI technology within each category, and a focus on consumers and workers
Topics: Research, methodology, expert interviews, data analysis, doctrinal analysis
Generative AI reduces the cost of content production and dissemination to almost zero.
Supporting facts:
- Generative AI being utilized enables very targeted, distinct communications
Topics: Generative AI, Content production, Dissemination
Generative AI could affect job loss, especially in sectors like software and legal services.
Supporting facts:
- Generative AI has the potential to disrupt employment opportunities in areas where cost of labor is a significant factor.
- Software development and legal services could be particularly affected.
Topics: Generative AI, Employment loss, Software industry, Legal services
The implications of generative AI need to be researched and understood on a sectoral basis.
Topics: Generative AI, Sector-based research, Employment impact
Zero-cost content production and dissemination through generative AI could result in first-order effects such as disinformation and misinformation.
Topics: Generative AI, Content generation, Disinformation, Misinformation