WS #205 Contextualising Fairness: AI Governance in Asia
Session at a Glance
Summary
This panel discussion focused on contextualizing fairness in AI governance for diverse cultural contexts, particularly in Asia and the Global South. The speakers explored how fairness in AI is understood and implemented differently across cultures. Tejaswita Kharel highlighted that in India, fairness encompasses equality, non-discrimination, and inclusivity, with unique considerations like caste that may not be relevant in Western contexts. Yik Chan Chin presented research on narratives of digital ethics, showing how concepts of fairness vary between cultures, such as China’s emphasis on harmony and role adequacy versus Western focus on individual rights.
Milton Mueller cautioned against overstating AI’s capabilities and emphasized the importance of understanding the technology’s limitations. He noted that many issues of contextualization have existed in computing for decades. The panel discussed challenges in creating representative datasets and evaluation frameworks for AI systems, with Chin suggesting that different regions could contribute best practices to develop an interoperable framework for fairness.
The discussion touched on concerns about hyper-contextualization and the feasibility of adapting AI to all cultural nuances. Mueller argued that market forces would largely determine the level of contextualization in AI applications. The panel also addressed issues of gender bias in AI datasets and the complexities of “cleaning” biased data. Overall, the conversation highlighted the need for nuanced, context-sensitive approaches to AI fairness that consider diverse cultural perspectives and practical implementation challenges.
Keypoints
Major discussion points:
– The need to contextualize AI fairness and ethics principles for different cultural contexts, especially in the Global South
– Challenges in developing representative and inclusive evaluation frameworks for AI systems
– The tension between global AI governance principles and local/regional implementation
– Limitations of current AI training data and models in representing diverse perspectives
– Practical challenges in “cleaning” biased data vs. expanding datasets
Overall purpose:
The goal of this discussion was to explore how AI fairness and ethics principles need to be adapted for different cultural contexts, particularly in Asia and the Global South. The panelists examined challenges in developing culturally-appropriate AI governance frameworks and evaluation methods.
Tone:
The tone was academic and analytical, with speakers presenting research findings and theoretical perspectives. There was general agreement on the need for contextualization, though some debate emerged around practical implementation challenges. The tone became slightly more urgent when discussing representation of marginalized groups in AI systems.
Speakers
– Nidhi Singh: Moderator
– Tejaswita Kharel: Project Officer at the Centre for Communication Governance at the National Law University Delhi. Works on information technology law and policy, including data protection, privacy, and emerging technologies.
– Yik Chan Chin: Associate Professor in the School of Journalism and Communication at Beijing Normal University. Research interests include internet governance, digital ethics, policy, regulation and law, and AI and data governance.
– Milton Mueller: Professor specializing in the political economy of information communication. Co-founder of the Internet Governance Project.
Additional speakers:
– Emad Karim: Representative from UN Women’s Regional Office for Asia and the Pacific
Full session report
Contextualising Fairness in AI Governance: A Global South Perspective
This panel discussion, part of the Internet Governance Forum (IGF), explored the complexities of contextualising fairness in AI governance for diverse cultural contexts, with a particular focus on Asia and the Global South. The conversation delved into how fairness in AI is understood and implemented differently across cultures, highlighting the challenges in developing culturally-appropriate AI governance frameworks and evaluation methods.
Understanding Fairness Across Cultures
The panellists presented varied perspectives on how fairness is conceptualised in different regions. Tejaswita Kharel, from the Centre for Communication Governance at the National Law University Delhi, emphasised that in India, fairness encompasses three key aspects: equality, non-discrimination, and inclusivity. She noted that unique considerations, such as caste, may be relevant in the Indian context but not in Western settings.
Yik Chan Chin, Associate Professor at Beijing Normal University, shared insights from a two-year research project on digital ethics narratives. While many countries accept a core set of ethical principles, Chin highlighted that the major differences lie in the narratives surrounding these principles. For instance, Chinese narratives of fairness focus on harmony and role ethics, contrasting with Western narratives that emphasise individual autonomy and formal equality. She provided specific examples of how concepts like privacy and data protection are understood differently in various Asian contexts.
Challenges in AI Fairness and Governance
Milton Mueller, a professor specialising in the political economy of information communication, introduced a sceptical perspective on AI, questioning its existence as a distinct entity and emphasising that many issues attributed to AI are longstanding challenges in computing technology, including problems of contextualisation that have existed for decades. He also cautioned against overstating AI’s capabilities.
The panel discussed several challenges in creating representative datasets and evaluation frameworks for AI systems:
1. Bias in training data: Mueller pointed out that AI models are predominantly trained on English-language data, leading to inherent biases.
2. Limitations of bias measures: Existing bias measures can be gamed and may not truly address cultural issues.
3. Exclusion of diverse perspectives: Emad Karim, representing UN Women’s Regional Office for Asia and the Pacific, highlighted that women’s perspectives are often excluded from AI datasets.
4. Difficulty in “cleaning” biased data: Mueller argued that historical data inherently reflects past biases and cannot be easily “cleaned”. He suggested focusing on expanding and diversifying datasets rather than attempting to remove bias retroactively.
Mueller also discussed the CAML (Cultural Appropriateness Measure Set for LLMs) framework, highlighting its potential and limitations in addressing cultural biases in AI systems.
Approaches to Improving AI Fairness
The panellists proposed various approaches to enhance AI fairness and governance:
1. Collaboration: Kharel emphasised the need for collaboration between ethics experts and AI developers to bridge the gap between ethical principles and practical implementation.
2. Interoperable frameworks: Chin suggested developing interoperable frameworks that incorporate regional best practices.
3. Market-driven approach: Mueller proposed a market-driven approach to contextualisation based on demand.
4. Community-based models: Chin mentioned the potential of community-based small models to serve specific needs.
Tensions and Debates
The discussion revealed several areas of tension and debate:
1. Hyper-contextualisation: Concerns were raised about the feasibility of adapting AI to all cultural nuances, with Mueller arguing that market forces would largely determine the level of contextualisation in AI applications.
2. Data cleaning vs. expansion: While some advocated for “cleaning” biased data, Mueller emphasised the importance of expanding datasets to improve representation.
3. Global principles vs. local implementation: The panel grappled with the tension between developing global AI governance principles and adapting them for local or regional implementation.
4. Definition and measurement of fairness: The speakers diverged significantly in their definitions and approaches to fairness in AI, highlighting the complexity of implementing fairness across different cultural contexts.
Mueller also discussed Google’s Fairness in Machine Learning program and the controversy surrounding it, illustrating the challenges in implementing fairness measures in practice.
Practical Implications
Nidhi Singh raised an important point about the potential impact of AI on resource allocation in India’s public distribution system, highlighting the need to consider practical implications of AI fairness in governance contexts.
Conclusion and Future Directions
The discussion underscored the need for nuanced, context-sensitive approaches to AI fairness that consider diverse cultural perspectives and practical implementation challenges. Key takeaways include:
1. The importance of understanding local priorities and concerns when contextualising AI fairness.
2. The need for collaboration between ethics experts, AI developers, and regional stakeholders.
3. The potential for developing interoperable frameworks that incorporate best practices from different regions.
4. The ongoing challenge of addressing bias in AI systems, particularly in data collection and representation.
The panel acknowledged time constraints that limited the depth of discussion on some topics. Nidhi Singh suggested longer panels for future events to allow for more comprehensive exploration of these complex issues.
Future research and dialogue should focus on bridging cultural and methodological gaps in AI ethics and fairness, developing more sophisticated approaches to bias mitigation, and exploring ways to increase representation of marginalised groups in AI datasets and outputs.
Session Transcript
Nidhi Singh: Hello everyone. Hi and welcome to our session on contextualizing fairness in AI governance in Asia. I know that this is the last session on the third day of the IGF, so we’re very thankful for all of the people who’ve come. I also know it’s quite late, considering all of our Asian participants are joining us at this hour, so we’re very thankful you could all be here. We have a very interesting panel and a very interesting discussion happening today, so I would like to keep some time at the end for audience participation, which means I will be enforcing time limits a little strictly during the panel’s introductory remarks. So I think we’ll jump right into it. I’m just going to talk a little bit about how this panel is based around the idea that while there’s a lot of work happening around AI ethics and AI governance, there’s no real one-size-fits-all approach that can be directly implemented in every context. As we start looking into AI applications and how their use can benefit societies, we have to consider that a lot of these applications are in fact made in the Global North according to Global North norms, and directly introducing them into the Global South tends to cause a lot of problems. It leads to a lot of exclusion. So in this context, we are specifically talking about what fairness means, and then how you can make these systems fair, specifically in something as diverse as the Asian context, where a lot of countries are from the Global South, many have large populations, many are developing economies, and there are linguistic barriers. So in these cases, how would you make something like AI ethics work in these kinds of cultural contexts? I’m going to give a very brief remark here, then introduce all of our panelists really quickly, and then we’ll move on to a quick round of questions. To start with, we have Yik Chan Chin. She’s an Associate Professor in the School of Journalism and Communication at Beijing Normal University. She has previously worked at the University of Nottingham and the University of Oxford School of Law. Her research interests include internet governance, digital ethics, policy, regulation and law, and AI and data governance. Her ongoing projects include digital governance in China and global AI and data governance. Dr. Chin is a co-leader of the UN IGF policy network on artificial intelligence, which released an excellent report yesterday. So if you haven’t checked it out, I highly recommend you do so. She’s a member of the Asia Pacific Internet Governance Forum and a multi-stakeholder steering group member of the China Internet Governance Forum. Online we have with us Tejaswita Kharel. Tejaswita is a project officer at the Centre for Communication Governance at the National Law University Delhi. Her work relates to various aspects of information technology law and policy, including data protection, privacy, and emerging technologies such as AI and blockchain. Her work on ethical governance and regulation of technology is guided by human rights-based perspectives, democratic values, and constitutional principles. And finally, a more recent addition to our panel is Milton Mueller. Professor Mueller, when we were looking through your bio, it was so long that I think we would have taken most of the panel just going over your work, so we have had to greatly cut it down. Please check him out. 
You can just Google him; there are several links that pop up. We’ve just got a very brief bio introducing him here. Professor Mueller is a prominent scholar specializing in the political economy of information communication. He has written seven books that we could find on Google Scholar and many, many articles in journals. He’s the co-founder of the Internet Governance Project, a policy analysis center for global internet governance. His areas of interest include cybersecurity, internet governance, and telecommunications and internet policy. So now I will just jump right into the questions. We’ll start with you, Tejaswita. The entire conversation today is based around AI and bias and how you contextualize fairness. So can you talk to us a little about what fairness means specifically in the Indian context? What are the kinds of contextual bias that you see in India which are perhaps not fully accounted for in global conversations around AI bias at the moment? Every speaker strictly has five minutes, and we want to have time at the end, so I will be enforcing that. Thank you, Tejaswita.
Tejaswita Kharel: All right. Hi, I’m Tejaswita. So I’m going to be breaking this question into two parts: first, how do we look at fairness in AI in the context of India, and then I’ll talk about what contextual biases there are in India. To start, fairness as a concept is a very subjective thing. There is no specific understanding or definition of what fairness can even mean, which means that we must look at other factors that will guide our understanding of what fairness can mean, which in the Indian context involves three aspects: the first being equality, the second being non-discrimination, and the third being inclusivity. Equality in the Indian context, especially for AI, comes from the Constitution, which guarantees the right to equality. So when we look at equality in AI, the expectation of an AI system is, number one, that it treats individuals equally under the same circumstances and protects human rights; second, that it ensures equitable technology access; and third, that it guarantees equal opportunities and benefits from AI. Now, when we move on to the second part, non-discrimination predominantly addresses the question of bias in AI, which is more of a technical aspect, in the sense that we’re trying to ensure that the data we use when creating AI systems is not biased. So when we look at non-discrimination, what we’re trying to do is prevent AI from deepening existing historical and social divisions that may be based on various factors in India, such as religion, caste, sex and other factors that may be deeply rooted in the complex social fabric. Then, when we look at the third aspect of what fairness means, it is inclusivity. When we consider inclusivity, we’re looking at it in the sense that it prevents exclusion from access to services and benefits that AI tools can guarantee, and it’s also in the context of ensuring that your grievance redressal mechanisms are inclusive. You want to ensure that whenever you’re creating a fair system, it treats all persons equally, it provides access to everyone in the same manner, it ensures that the data is not biased and therefore not perpetuating existing biases or even exacerbating them, and it also ensures that each person has access to grievance redressal. So overall, the idea of fairness in AI, when we look at the Indian context, is encompassed by these three factors. Now I’ll go to the second aspect of the question, which is: what are the contextual biases that you see in India that may not already be there in the Global North, or what might the differences be? I will talk about this in the Indian context as well as a slightly more generic context, which is that I think the existing idea of what biases are comes from the Global North, in the sense that to date, when we talk about AI bias, we predominantly use examples from the US. One such example is the COMPAS case study, where we realized that race was a very important factor when we were considering bias in AI. 
So a lot of the discussion around what AI fairness is, what AI bias is, is predominantly revolving around harms that have already showed up in the global north, which is now starting to translate in the global south. However, these factors that we’ve already identified, they may not apply in the same manner. What I mean by that is that there, of course, there are existing factors that may be similar in the context of the US or the other global north countries, which will also exist here, such as gender, religion, ability, class, ethnicity. But what is different in the Indian context is perhaps caste, not just in India, but also in other regions, such as Nepal, Bangladesh, Pakistan, there may be other factors which are not necessarily limited only to ethnicity, gender, religion, etc. So in specifically the Indian context, caste is a major factor which does not really show up when we’re considering biases in AI when we look it in the context of the Global North. So what the harm is when we’re considering AI bias or like, sorry, what the harm is when we’re considering factors of fairness only from the Global North perspectives is that we lose out on a lot of existing context, which means that the AI systems will not actually work. For example, if you tried to simply adopt an existing AI tool in the Indian context, which is, let’s say, created in the US, it would not work, because the existing context has not been taken into account, the data is not taken into account, which means that it will simply cause a lot of harm, and it will also be extremely ineffective. So that being said, the larger point that I’m trying to make right now is that when we’re looking at AI ethic principles, and I think in this context, specifically fairness, we have to ensure that these principles are tailored to the specific national context. And even within national context, there may be regional context, because especially in a country like India, where there’s so much diversity is important to consider all of the different contexts that are going to affect an AI system, which means that we cannot just have a one size fits all approach. And this approach is the key point of our broader discussion on contextualizing AI fairness, which is that we cannot just develop a general theory of algorithmic fairness, solely based on global north understandings. Each nation, with its own unique historical, cultural and social dynamics, has to carefully consider how fairness translates into its specific context when it’s intending to develop and or deploy any AI system. That is my point. Thank you for your attention. I look forward to hearing from the other panelists now.
Nidhi Singh: Thanks, Tejaswita. So I’ll move to one of our in person panelists right now, Yik Chan, you work extensively on AI in China, and from a more global perspective as well. So can you tell us a little about how you go about conducting this sort of research and what your methodology is? So what I mean is, in practical terms, contextualizing ethics to individual context is resource intensive for several countries. So how do you look at this in your work?
Yik Chan Chin: Yeah, I’m going to share presentation because it’s a bit complicated. So can I share now? Can you hear me? Okay, can I share it? Okay. Okay, can you see the presentation now? Okay, so thank you for inviting me. And so I’m going to present a work, which is a methodology about how to do it. So I hope this can clarify it, or maybe helpful from the academical point of view. And just a moment. So basically, this work is about narratives of digital ethics is conducted by the Austrian Academy of Science is a two year project, we collaborated with the Austrian Academy of Science, which actually fitted to today’s topics really well. So that’s why I use it as example. So what this project actually is about, actually, what we found is that in terms of digital ethics, you know, there’s not much difference in terms of the values, the core value, but what are the differences in narratives? So what is narratives? Narratives are stories that are told repeatedly, consists of a series of selective events, which have a particular character, so which will shape people’s understanding, you know, collective behave, or particular society. So this is what we call the narratives. So what we found from our two years research is that, and that’s not most of the country accept a core set of the principle, like a theorist, but what are the major difference is narratives. Okay, so in terms of fairness, and what what fairness means, globally, there’s a global consensus, fairness means non-discrimination, which includes a prevention of bias, inclusiveness in design impact, representatives and a high quality of data, and as well as equality. So this is from global consent. And, but with from our research, because it’s a bottom up research, we found actually we need a contextualized the principle of the digital ethics. So therefore, especially from the cultural dimensions, and so we, so we use the approach, probably, you know, it’s a situate situatedness. So which is a very common methodology, I think a mutant know well, in the STA’s research, science and technology study research. So we should actually focus on the differences in the social, cultural, political, and economical, and the institutional conditions. So they look at the differences instead of the commonalities. Okay, so this is what we are going to, we use in this, our research, we look at the differences, we use the situatedness approach, and especially, we get a lot of evidence from global south, because there’s a lack of the voice. So here is the methodology, we did a kind of semi-structural expert interview, if I remember correctly, it’s a 75 expert interview, and then it’s a workshop series, we’ve invited the talks, and we use also user cases discussion, like a panel discussions. And so this is how do we generate our data. So this is a building block of digital ethics narratives. So when we look at that, if you look at this site, we look at the key dimensions of the digital ethics, for example, what is the notion of good in different society, for example, like a harmony, virtual as a good, denotological, and the consequentialist as a good, and what the fairness means. So there are different building blocks, like a role adequacy as a fairness, and the material equality as a fairness, and the formal equality as a fairness. 
So then we have like a reference point, so who is the major actor, is whether community, or individual, or ecosystem, and whether the technology is beneficial, or victim, or actors opportunities, whether the technology, what the ethical concern is marginalization, safety, or autonomy, whether the actor is a government, or technical industry, or others, whether what kind of tools government should use, like education, law, regulation, or technology, or whether the legitimacy should be organically involved, or determined by the able, or self-determination. So we use these measures to analyze the narrative of the ethics. So then, so if you look at the fairness, that actually there’s three different categories of the narrative of fairness. The first one is called the role adequacy. What role adequacy means? It means your role, what is fair or not fair, is not determined by the society, but by someone, you know, some kind of, often based on an assumption that different role has been assigned by power outside of human society, such as God, or religions, you know. So like we have a lot of examples from the African tribes. So what do they mean? Fairness actually is determined aside by the God, by the religions, nature, or faith, or even the spiritual world, okay. So the second one is called material equality. What material equality means? The idea of equality is a result, okay. So we look at the result, whether the result is equal or not, and the otherwise formal equality, which is look at equal treatment procedures. So we accept, because we have a different starting point, so maybe the result is unfair, but at least the procedure, the treatment is equal. So they have three different narratives about the fairness. So look at the Chinese case. So we look at a different case around the world. So I just use China, but if you look at the, if you read our report, you can see cases around the world, from Europe, from Africa, from Japan, from India, from USA. So I just look at the Chinese case. What are they, the features? So they feature the fundamental ethical assumptions, harmony, and then it’s a role ethic, means it’s determined by the tradition, you know, what kind of traditions, or belief, and the whole, they look at, actually look at the equality, the whole system, the digital ethical system, and also they think that the technology is the opportunities, not a kind of threat. So they look at the technology as opportunities, and the conflict, the major conflict is marginalization, whether technology can bring the prosperities, and the major role to shape all this development is by government, and whether, what kind of tools the government should use is education, and the culture, and who should make decision is determined by the able, which is wisdom people. Okay, so this is a kind of harmony of type of the Chinese narratives in terms of fairness. And so, but actually recently there’s some new development with in China regarding to these narratives. The major change is about here. So what is changing recently, two or five years, we have a Chinese guest, so you can ask their opinion as well. So it’s actually, they do not, no longer see the technology purely as opportunities. They start to realize the risk and the victim, and they may become a victim of the new technology. And then before they are more focused on prosperity development, and now they’re also shift a little bit to safety and harm. 
And also now it’s more and more and more wars come from the technical industry, instead only from the government, and start to use law and the regulation, the ethics of the digital, for example. But still, they’re still determined by the able, and still the fundamental ethical assumptions, harmony and the role adequacy. So if we look at the American, the Silicon Valley types, so we can see very different, okay? So they are more look at the consequentialist, which is result, why is that this kind of technology will result in the fairest, okay? And the formal equality, which means the procedure, we have equal treatment to everybody, but it doesn’t, not necessarily look at the result, okay? And then it is up to individual to decide, and they see the technology as opportunities rather than the threat. And the main concern is autonomy, lack of freedoms, and also the self-determination. So it’s the individual to regulate themselves rather than government, and the economy, the market-driving approach rather than the culture and education. So we can see quite different, you know? So I think I stop here because I take too much time, and I leave for the question, okay? Thank you.
Nidhi Singh: Thank you so much for that. It was so interesting to see how fairness is very practically being defined in different contexts and how it’s been changing over the last couple of years. That was a very useful intervention, and I think it’ll form the basis of our conversation. Milton, we’ll turn to you, speaking of practicality. You have a lot more practical experience with AI applications. So how far is it possible to contextualize an AI application to a cultural context? So far, we’ve been talking about ethics, but how far is it actually feasible to take these AI applications and contextualize them to culture? Have you seen any of these systems that have worked out well? How have they worked exactly? Thank you.
Milton Mueller: Can you hear me? Am I on? Okay, thank you very much. Yeah, I am going to first issue a few caveats and generally frame the topic. So first of all, I’m what you might call an AI skeptic. That is to say, I don’t really believe AI exists. I think people have created this monster around it and mostly don’t know what the technology actually is or what it does. So one thing to keep in mind is that most of the time, we’re just talking about computing technology, and many of the issues of contextualization have been around in computing for a long time. Think of the keyboard, for example: the whole keyboard was designed for the Roman alphabet, right? And what do Chinese or Arabic people do about this? Well, they have to deal with all kinds of workarounds based on their different scripts. And what impact does this have? Well, in some ways it excluded people, but they adapted and came up with workarounds, or they just learned the Roman script, right? Another example is multilingual domain names, where again, the domain names were in ASCII script. We went through some processes in ICANN to try to come up with a way of representing Arabic or Chinese script in the domain name system. And we thought we were extending access by doing this, making the domain name system available to everybody. Turns out we were not. It turns out that people in these countries with different scripts don’t adopt these alternative domain names, and it actually reduces their visibility and access to people who don’t speak that language. So it would have fragmented the domain name system. These multilingual domains exist, but they’re just not being adopted and not being used. So now let’s turn to AI. I got some notes here that I need to see. So a lot of what I’m going to say is based on research at Georgia Tech, particularly by an Arabic AI specialist in our computer science department. Oh my God, this is recording everything I say. So the first thing you have to know about AI is that all of these big models were trained on what we call Common Crawl, right? Which is a way of crawling the internet and picking up all of this textual and image information. And the top languages on Common Crawl: English, 46%. Next is Russian at 7%, then German and Chinese at 6%. You get down to Arabic, it’s 1%. So Tarek, who by the way, I have to ask the other panelists, since he’s working at Georgia Tech, is this knowledge coming from the Global North, or because he’s Arabic, is it coming from the Global South? But we’ll deal with that question later. So he’s explored the way that this rootedness in English text produces AI applications that produce bad outputs for Arabic cultural contexts. I’ll give you an example. You ask it to fill in a word. And let’s say you say, my grandma is Arab. For dinner, she always makes us fill in the blank. Now a standard AI application is going to fill in something like lasagna, because it’s all based on statistical prediction, right? But it should say something like Majboos or some kind of Arabic dish, right? Another interesting example he gives is GPT-4 generating a story about a person with an Arab name. If you use an English name or a French name, a European name, the story will be something like, oh, Philippe was this very smart boy who grew up and did this. If you use an Arab name, it’s sort of like, Alas was a poor family where life was a daily battle for survival, right? So, you know, that can be very irritating. 
So what Tarek has done, he’s developed a measure set for LLMs that tries to determine the cultural level of appropriateness as he calls it. And it’s called CAML, Cultural Appropriateness Measure Set for LLMs, CAML. And again, this is not my research. This is Tarek Naus and Georgia Tech computer scientist team. So, and there’s also somebody at CDT named Alia Bhatia who’s done some research on how AI affects very small language groups, very small linguistic groups. And you can see how they kind of get erased. And again, you have to go back to other forms of information technology like the English language was homogenized by the invention of the printing press, right? So similar processes are going to happen with the massive scalability of AI, but also, and this is something we discussed with Tarek, you know, people are going to develop different models based on different training sets, right? And so the part of the solution to that is for, this is actually an opportunity as well as a threat for the so-called global South, which means that if they develop using their own resources, training sets and models that are trained on their cultural context, then they will have a product differentiation, a marketable difference with these big platform products, and they might be able to out-compete them in certain markets. So I think, again, these kinds of disparities and hiccups occur across the development of technology. And I think it’s bad to look at this kind of discrimination as a static thing that is some form of oppression. It’s more like a, a obvious flaw in the training set of the data sets used to train these systems. And it’s a remediable flaw. It can be fixed. It will be fixed. It’s a matter of investing in the resources to do so. I think that’s all I have time to say, right?
Nidhi Singh: Thank you so much. Yes, I think we are out of time. For the second question, we’re going to go a lot faster, with everybody getting cut off at three minutes, because then we can have a little bit more time for questions. So speaking of how we can work towards fixing this, Tejaswita, I’ll address the question to you. How do you think we can have more representative and inclusive evaluation frameworks for these AI systems? I think Milton talked about the CAML framework, but are there any other ways that you know from your work on ethics in India that we can have these frameworks?
Tejaswita Kharel: Thank you. I think when we’re thinking of how we can create these frameworks, I think the issue is more of how there is disjointedness between people who want ethics and the people who can possibly deliver it in the actual AI system. So I think the first step to dealing with this problem is by actually resolving that problem, where when I’m saying equality, then we look at how you make things equal. How do you ensure that if equality means that you’re ensuring that your AI application treats everybody the same, then you must ensure that your AI system is being able to do that. Similarly, when we’re looking at non-discrimination, the major factor is bias, which means that we first need to remove bias from the data sets, which again is something that we will speak about, but the ones who are working on the AI systems will be the ones who create or work on creating clean databases. We will access these and then work towards implementing our ideas of AI fairness, which means that I think my main recommendation is that we understand bias, fairness, and all of these ethical principles and factors from the perspectives of the ones who actually do the work and get them to understand it from our side of things so that we can implement it in a way that’s actually reasonable instead of just demanding AI ethics and AI fairness. Yeah. Thank you.
Nidhi Singh: Thank you so much, Tejaswita. That actually leads me really nicely to my question, because of your point on how the gap between the ethics of AI, or the ethics of anything, and the practicality of how it’s delivered seems to be widening. So I’ll turn to Yik Chan now. My question to you is somewhat related. AI governance right now, the ideas that you have around it, are centered around principles and best practices and ethics. And yes, you have a few laws, but most of the world is going with best practices and ethics. Do you really think these are enough to guide issues like fairness? And if we weren’t using these, then what would we use as a central tenet to guide fairness?
Yik Chan Chin: Yeah, I think the other work we are doing at the PNAI is interoperability. So first of all, we have to respect the regional diversity. And for example, when I say the fairness in the Chinese context, so a lot of the fairness we talk about in the Western society, for example, in Britain, we’re talking about gender, age, all this racial discrimination bias. But this will never be a problem. It’s not a major problem in China. We do not address gender, racial. It’s not a major concern. So what is a major concern at the moment, actually, first of all, is more about the consumer protections. So we have the algorithms, the provisions, which regulate what kind of algorithms, the automatic decision-making, the preference you can give. So basically, they have a special provision that says that you cannot damage people’s consumer rights. For example, you cannot discriminate people in terms of the price. So if I buy a ticket from one website, I got 800. Then I use the other different mobile, maybe Apple. Then I got 1,000. So this kind of discrimination is more from the consumer protections, but not from racial, gender. So this is one of the major concerns at the moment. We call it the protection of the consumer rights. The other one China is doing now is antitrust, because they want to, oh, this is also a major concern in terms of fairness. But this is not a major concern in most of the Western countries at the moment, in Britain, in America. But in China, it’s a major concern, how to provide a fair play field for everybody, for all the air company and the digital platform. So they are pushing forward the antitrust regulation and implementation in China. So I think we can see each society have different priorities. But if you ask what is the best practice, so it’s really difficult. So we also want to choose the best practice from all these case studies. So I think in the end, every country, they can contribute their best practice. For example, China can contribute their best practice in terms of how to address the consumer protection, or even antitrust. Maybe from the Western, I mean, Europe, they can contribute in terms of the discrimination against the racial or the gender issue. And so I think the best practice has to be coming from different regions. And in the end, we need to have an interoperable framework in terms of fairness. So each country, they have different priorities. But in the end, we probably have a minimum consensus on what are the building blocks of the fairness. OK, I think that’s the approach I would recommend. Thank you.
Nidhi Singh: Yes, thank you so much. I think that’s actually a very important conversation that we’ve sort of been having, I think, all week now, where we’re talking about maybe having more collaborative platforms where countries can come up. There’s no real point to, I think, building all of these solutions in isolation if we’re not going to share them. A lot of the countries do share commonalities. And then I think a lot of them are actually, they’re probably something that we can all look at when we’re looking at our own solutions. So for my final question before we open up to the audience, this is going to be, I think, a slightly different question from where the conversation’s going so far. So today, we’ve been talking for the last, I’d say, 40 minutes about contextualization. There are, however, some concerns around hyper-contextualization. So we can always say that, yeah, it’s great that you should always contextualize things to all of the contexts. Is that really even possible to contextualize to all of the concepts? There’s so many cultures. There’s so many languages. Would it actually be feasible to have an idea of fairness or AI systems or any sort of a computer system that’s contextualized to all of the cultural contexts and nuances that you have?
Milton Mueller: I turned the mic on. I just turned it on. That helps, right? So can we get too hyper-contextualized? And I think when we talk about, we’re talking about this in a governance context, right? So unfortunately, almost everybody in the IGF and in the UN system, when they talk about governance, they’re talking about hierarchical regulation by government. And they’re almost never talking about bottom-up regulation by markets, which is actually what’s going to be doing most of the governing. I mean, I just hate to inform you of this if you’re not aware of it already. But we get these AI applications produced because somebody thinks they’re going to get making money on them, right? So how much contextualization will we get? Will we get too much? Well, it depends on what the market will provide. If there is intensive demand for incredibly micro-contextualized applications, and I think there will be eventually. It will build up over time, of course. Then we will get micro-contextualized things. Think of a business in Indonesia in some very specific industry sector. Maybe these companies are building machine screws for nuclear power plants. I don’t know. That’s highly specialized. And the AI decisions, the inputs and outputs that would be relevant for those industrial players would be extremely contextualized. To be useful, they would have to be. And just a word about discrimination. So one of the things we have to understand is that so many of the mistakes and biases that you’re talking about have to do with the fairly primitive early origins. Like I said, we’re using Common Crawl to look at 46% English. Our facial recognition training has been based on US populations with 80% to 70% white people. So of course, the facial recognition. recognition is not the greatest, but again, that database will be expanded in multiple countries around the world, and the applications will have the potential to get better. The most famous case of facial recognition bias, racial bias, is actually not a case of racial bias. It was a police search in Detroit, Michigan, where we had a very grainy bad picture of a man who stole things from a store, and it was a black man. They went off and they told them that the record matched some guy that was innocent, so they went off and arrested this innocent black man. Now, the point was, the real person who stole the stuff was, in fact, black, so it was not racial discrimination. It was not racial bias. It was bad accuracy. Then, even more important than the bad accuracy was bad police practice. This guy did not go off and check whether this person he arrested had an alibi, which he had an airtight alibi. He could prove he wasn’t there in that store, and yet he arrested him anyway just because he was lazy. A lot of what we, again, talk about embeddedness and situatedness, look at the way AI fits into a specific context and how it’s used, and that is going to be determining how harmful or how beneficial its uses are going to be.
Nidhi Singh: Thank you so much. That’s actually really interesting. We’ve also been looking at how, depending on how AI is being used, it may not necessarily be just the AI; it can magnify things that are already happening. You’re just rubber-stamping those decisions along, or maybe the human in the loop isn’t really being human enough to be counted there, so these problems are something that is coming up. Okay, I had another question, but I’m not going to ask it. We will move on to audience questions because we have 15 minutes, and I’m cognizant that a lot of people in the room seem to want to ask questions. If you have a question, you can just put your hand up, and Fawaz can help bring the mic around. Otherwise, Tejaswita, if there are any questions in the chat, please let us know. Please introduce yourself once before you ask the question.
Emad Karim: Thank you. Can you hear me? Yes. My name is Emad Karim. I’m from UN Women’s Regional Office for Asia and the Pacific, so I’m going to put on my UN Women hat. Considering that whole perception of fairness, we are also excluding half of the population and their perspective on AI. There is a lot of research coming out to say that women’s perspectives, history and narratives are not even included in those datasets, because we inherited 10,000 years of civilization that was written by men for men, and that creates a huge gap in the AI outputs related to women and for women. Where do you see this as well? The more we go into those layers, we’re talking about women, but also women in remote areas, women with disabilities. The more you get into those layers, the less they will be represented in AI infrastructure, datasets and outputs. I wonder if you have any reflections on how we can increase and fix those datasets, or even have better roles when it comes to women’s representation in AI.
Yik Chan Chin: In our two-year workshop, we actually had a discussion on this question for a long time. We had a lot of representatives from Australia and from Africa, especially from village communities. The approach that they propose is community-based. In the end, you have those big, big models like OpenAI’s; in China, they have Tencent and Alibaba; and I think it’s also already happening in India. They have the Indian model, the small model. We do not call it a big model; it’s a small model. We have one example from the Australian Aboriginal community. They use their own language to develop their own dataset and develop their own small model. So in the end, I think there’s diversity at different levels. We don’t need a big model. If you just serve your community, you can simply develop a small model, which does not really consume a lot of data and energy. That’s a specific model. I think this is how, in the end, just like Milton said, if there’s a demand, then there’s a supply. I think that will be the way to tackle it in the end.
Nidhi Singh: Are there any other questions?
Audience: Hello, this is Xiao from the CENIC. I’m also a MAG member. I think it’s really a very interesting topic and a good discussion. I have a question. I think the bias of the AI in the data is closely linked with the culture and the nation’s history. Your bias is not my bias. My question is, because you have to, the data is already the past data. It’s already rooted with the history, with the culture. It’s biased. The data is already biased. How would you use your methodology or something else, your regulatory, to make the bias data no bias, make it fair? Thank you.
Milton Mueller: Yeah, I think the idea that you govern bias in AI by making, I think, what did she say was, Tejewada said something about, we need to clean the data. You’re going to go in and you’re going to scrub the bias out of the data. The data is like the dirty spots has it and you’re going to scrub them out. That’s just not how the system works, not at all. The data is the data. The data is a record of something that happened in an information system somewhere at some time. What you are going to change is you’re going to look at other collections of data, bigger collections of data. In that sense, you can engage in AI output governance via data governance by saying, well, for example, many people have spoken about being transparent about what data sets you have used. Then you have these metrics, these measures that the Georgia Tech researcher, Tarek, he does a critical analysis of these measures and points out how some of the main measures can be gamed. It’s just like if you know how Google will rank, what their algorithm uses to rank you in the search results, then you can put a bunch of junk into your website that pushes you up in those standings so people can game whatever metric is out there to optimize it, but they still may not have good results from a cultural bias perspective. One thing I would emphasize is that you’re dealing partly with the inherent limits of machine learning. Machine learning is taking all of these records, a whole bunch of them, processing them into a neural network that identifies patterns in the data. The data can be changed to change the outputs, or you can retroactively look at the outputs and say, we’re going to change them. I know Google has a whole program called Fairness in Machine Learning, which is somewhat controversial, but I think everybody here would kind of like it. Their idea is, we know that existing data will be biased. If you ask a straightforward question of a search engine, show me a CEO, most of the pictures, if not all of them, will be men. They said, we’re going to tweak our algorithm so that we will show more women in response to this question. They will deliberately make an inaccurate, from a statistical sense, an inaccurate representation of the data set based on tweaking their algorithms. That’s one. They even call this their Fairness in AI program. So they’re very concerned with fairness and the definition of fairness, which means some form of equal… representation. Now, that sort of got them into trouble because somebody asked their image generator to show a picture of the American Constitution, and their fairness algorithm had black people and Asian people in the Constitutional Convention of 1783, which is a complete misrepresentation of reality, but from a diversity representation standpoint is kind of cool, right? Oh, maybe there was a Chinese American there writing the Constitution, and there wasn’t, but wouldn’t it be kind of interesting to show that as happening? So that was a very controversial output, and a lot of criticism of Google came because of that, and as you probably know in the US now, DEI is very controversial and on the defensive, if you know what DEI is. So there’s two sides to this question, and the deeper question, the philosophical almost question is like, if you have a statistical regularity, you don’t necessarily know how it got there, but you definitely have a statistical regularity, is it biased? Is it unfair to act upon that statistical regularity? 
So if it is in fact true that German origin Georgia Tech professors are more risky driving their car, if it’s a statistical fact, let’s say my risk is 10% more than an Asian woman, can the insurance company charge me higher premiums, right? And you can say, oh, you’re biased against me. No, they’re saying, no, you’re more risky. So that’s the big deal.
Nidhi Singh: That’s a really interesting perspective, actually, because I think this is something that we’ve been working on, and I think this also circles back to a lot of the conversations around cultural context, because I think for a large part when we were having conversations about AI, for us, a lot of it is about things like the public distribution system, and how if you don’t have records, then certain villages get lesser allotment of rations, because they’re not counting women in the public distribution system. But it’s actually really interesting to see how you have metrics for fairness. And if you didn’t fake that metric for fairness, like you’re saying, then you just won’t have enough grains going to that village, which is like some of the issues that we’ve been seeing. So clearly, there needs to be some work done in this. I’m also just going to let Tejaswita come in really quick on this, because you were talking about fairness as well. Really quick, Tejaswita, because I think there are some questions in the chat, and I want to take at least one online question before we close.
Tejaswita Kharel: So I would say, when I’m talking about cleaning up data, I think it comes firstly in the sense of number one, before you start using it, if you know that your data is likely to be biased, for example, if you’re trying to create, like Nidhi said, public distribution, and you know that you don’t have enough information on certain people, or you know that there’s going to be issues arising out of it, or there’s proxies involved, you’ve cleaned that in the sense that I know it’s difficult to clean data with like foresight as to what’s going to happen next, in the sense that it’s very closely linked to possible harms, right? If you know that your data is like, if you know that your AI system is likely to be biased against people from certain groups, then you have to ensure that you’re cleaning your data set in a manner that removes certain proxies and makes everyone seem equal before the system. So I sort of agree in what you’re saying, in the sense that it’s not really easy to clean the data beforehand, because you can’t really identify what you’re supposed to clean, you can’t just be like, okay, these are the issues, it usually comes out of identifying what’s gone wrong, and then fixing it later. But now that we have seen a lot of things happen in the sense of, we’ve recognized what these larger harms are going to be, we know, to a large extent, who these harms are going to be against, there are possible ways to identify what’s going to happen next, and therefore clean this data beforehand, and work on it accordingly. Yeah, I’ll just limit to that much, because I see there’s one, I think there’s two questions online. Should I read them out? Okay. The first question is, given that fairness itself is subjective, and varies not just in regional contexts, but also in the application or use of AI in question, what may be some of the ways to reconcile these differences in the development of the tech of these technologies? So yeah, we got the second one as well. And then I think we literally have one minute to answer that. The second question is what emerging technologies or methodologies show promise in creating more nuanced context sensitive AI fairness assessments? Okay, I’m going to give
Nidhi Singh: all of our speakers like 30 to 45 seconds to answer. Sorry, I know that’s not enough time, but I think we’re literally at the close of the session. Yixuan, would you like to go first?
Yik Chan Chin: Yeah, I think in the end, we do have a global framework, like the two UN resolutions and the UNESCO ethics guidance, and every country signed up to that, so we do have a minimum agreement on that. The other thing is that what we’re talking about here is the language model, but that may not be the only AI; there are different AI systems. So in the future, we may have reason-based AI, logic-based AI. So I think there’s a transitional period; we will see.
Milton Mueller: I don’t know how to answer either of those questions. Really, not in 30 seconds. So I’ll just pass.
Tejaswita Kharel: I mean, I do think they’re very difficult questions to answer really quickly. But for the first question, in terms of what may be some of the ways to reconcile the differences when you’re looking at context-based AI applications, I think the answer is in the question, which is that you contextualize AI fairness based on your specific AI use. If it’s being used a certain way, you identify how it’s being used, then identify what factors are important, and therefore implement fairness into it. Yeah, unfortunately, I think I don’t have enough time to answer the other one. Thank you.
Nidhi Singh: Thank you so much, everyone. I think we all learned a lot. For those of you who are here, please do come up to us and talk to us afterwards. My main learning from today is that we should apply for a 90-minute panel next time, just so that there is more time for everybody to ask questions. Thank you so much; that was an extremely interesting discussion, and we will definitely be following up on many of the things that came up today. Thank you.
Tejaswita Kharel
Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second
Fairness in India means equality, non-discrimination, and inclusivity
Explanation
Tejaswita Kharel explains that fairness in the Indian context encompasses three main aspects: equality, non-discrimination, and inclusivity. These principles are derived from constitutional guarantees and aim to ensure equal treatment, prevent bias, and promote access to AI benefits for all.
Evidence
Examples include treating individuals equally under the same circumstances, protecting human rights, ensuring equitable technology access, and preventing exclusion from AI services and benefits.
Major Discussion Point
Contextualizing AI fairness in different cultural settings
Agreed with
Yik Chan Chin
Milton Mueller
Agreed on
Need for contextualization in AI fairness
Need for collaboration between ethics experts and AI developers
Explanation
Tejaswita Kharel emphasizes the importance of collaboration between ethics experts and AI developers. She argues that this collaboration is necessary to bridge the gap between ethical principles and their practical implementation in AI systems.
Evidence
She suggests that ethics experts need to understand the technical aspects of AI, while developers need to grasp the ethical implications of their work.
Major Discussion Point
Approaches to improving AI fairness and governance
Need to identify potential harms before cleaning data
Explanation
Tejaswita Kharel argues for the importance of identifying potential harms before attempting to clean AI training data. She suggests that understanding likely biases and their impacts can guide more effective data preparation and system design.
Evidence
She gives an example of public distribution systems, where knowing that certain groups might be underrepresented in the data can help in addressing potential biases proactively; a minimal illustrative sketch of such a pre-training check follows at the end of this point.
Major Discussion Point
Limitations and complexities of addressing AI bias
Agreed with
Milton Mueller
Emad Karim
Agreed on
Challenges in addressing bias in AI systems
Differed with
Milton Mueller
Differed on
Approach to addressing bias in AI data
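To make the idea of proactive cleaning concrete, the following is a minimal illustrative sketch, not something presented in the session, of the kind of pre-training check Kharel describes: features suspected of acting as proxies for a protected attribute are dropped, and the representation of each group is reported so that gaps can be addressed before a model is trained. The column names (caste_group, pincode, ration_card_type, eligible) and the proxy list are hypothetical and used only for illustration.

    # Minimal sketch of a pre-training fairness check on a hypothetical
    # public-distribution dataset; all column names are made up.
    import pandas as pd

    PROTECTED_ATTRIBUTE = "caste_group"                   # hypothetical protected attribute
    SUSPECTED_PROXIES = ["pincode", "ration_card_type"]   # hypothetical proxy features

    def prepare_training_data(df: pd.DataFrame) -> pd.DataFrame:
        """Report group representation and drop suspected proxy features."""
        # 1. Report how well each group is represented, so under-represented
        #    groups can be flagged and data collection expanded beforehand.
        print(df[PROTECTED_ATTRIBUTE].value_counts(normalize=True))
        # 2. Remove features believed to act as proxies for the protected
        #    attribute. This addresses only known proxies, not all bias.
        return df.drop(columns=[c for c in SUSPECTED_PROXIES if c in df.columns])

    # Tiny synthetic example:
    df = pd.DataFrame({
        "caste_group": ["A", "A", "B", "C"],
        "pincode": ["110001", "110002", "560001", "560002"],
        "ration_card_type": ["APL", "BPL", "BPL", "APL"],
        "eligible": [1, 0, 1, 1],
    })
    train_df = prepare_training_data(df)
    print(list(train_df.columns))  # proxies removed before model training

The sketch is intentionally simple; in practice, identifying proxies in advance is itself difficult, which is part of Mueller’s objection discussed below.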
Yik Chan Chin
Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second
Chinese narratives of fairness focus on harmony and role ethics
Explanation
Yik Chan Chin describes the Chinese perspective on fairness in AI as emphasizing harmony and role ethics. This approach is rooted in traditional values and focuses on the roles assigned by society rather than individual rights.
Evidence
The Chinese narrative views technology as an opportunity and prioritizes government-led education and cultural approaches to shape AI development.
Major Discussion Point
Contextualizing AI fairness in different cultural settings
Agreed with
Tejaswita Kharel
Milton Mueller
Agreed on
Need for contextualization in AI fairness
Western/Silicon Valley narratives emphasize individual autonomy and formal equality
Explanation
Yik Chan Chin contrasts the Chinese approach with Western/Silicon Valley narratives of fairness. These narratives focus on individual autonomy, formal equality, and market-driven approaches to AI development and regulation.
Evidence
The Western approach views technology as an opportunity and emphasizes self-determination and individual freedoms.
Major Discussion Point
Contextualizing AI fairness in different cultural settings
Consumer protection and antitrust are major fairness concerns in China
Explanation
Yik Chan Chin highlights that in China, fairness in AI is primarily focused on consumer protection and antitrust issues. This differs from Western concerns about racial or gender discrimination.
Evidence
Examples include regulations against price discrimination based on user data and efforts to provide a fair playing field for AI companies through antitrust measures.
Major Discussion Point
Challenges in developing fair and unbiased AI systems
Developing interoperable frameworks with regional best practices
Explanation
Yik Chan Chin suggests developing interoperable frameworks that incorporate best practices from different regions. This approach respects regional diversity while working towards a minimum consensus on fairness in AI.
Evidence
She mentions that each country can contribute their best practices, such as China’s approach to consumer protection or Europe’s focus on racial and gender discrimination.
Major Discussion Point
Approaches to improving AI fairness and governance
Differed with
Milton Mueller
Differed on
Role of market forces in AI contextualization
Community-based small models to serve specific needs
Explanation
Yik Chan Chin proposes the development of community-based small AI models to address specific local needs. This approach allows for greater diversity and contextualization in AI applications.
Evidence
She cites an example of an Australian Aboriginal community that developed its own small model in its own language to serve its specific needs.
Major Discussion Point
Approaches to improving AI fairness and governance
Milton Mueller
Speech speed: 138 words per minute
Speech length: 2193 words
Speech time: 949 seconds
Contextualizing AI requires understanding local priorities and concerns
Explanation
Milton Mueller emphasizes the importance of understanding local priorities and concerns when contextualizing AI. He argues that market demand will drive the level of contextualization in AI applications.
Evidence
He gives an example of a specialized AI application for a specific industry in Indonesia, which would require highly contextualized inputs and outputs to be useful.
Major Discussion Point
Contextualizing AI fairness in different cultural settings
Agreed with
Tejaswita Kharel
Yik Chan Chin
Agreed on
Need for contextualization in AI fairness
AI models are predominantly trained on English-language data, leading to biases
Explanation
Milton Mueller points out that current AI models are primarily trained on English-language data, which leads to biases. This results in poor performance or inappropriate outputs when applied to non-English contexts.
Evidence
He cites research showing that 46% of the data used to train large language models is in English, while languages like Arabic only represent 1% of the training data.
Major Discussion Point
Challenges in developing fair and unbiased AI systems
Agreed with
Tejaswita Kharel
Emad Karim
Agreed on
Challenges in addressing bias in AI systems
Existing bias measures can be gamed and may not truly address cultural issues
Explanation
Milton Mueller argues that current measures for addressing bias in AI can be manipulated and may not effectively solve cultural issues. He suggests that these measures might lead to inaccurate representations of reality in an attempt to achieve fairness.
Evidence
He mentions research from Georgia Tech that critically analyzes existing bias measures and shows how they can be gamed.
Major Discussion Point
Challenges in developing fair and unbiased AI systems
Market-driven approach to contextualization based on demand
Explanation
Milton Mueller proposes a market-driven approach to AI contextualization. He argues that the level of contextualization will depend on market demand for specific applications.
Evidence
He suggests that if there is intensive demand for micro-contextualized applications, the market will provide them over time.
Major Discussion Point
Approaches to improving AI fairness and governance
Differed with
Yik Chan Chin
Differed on
Role of market forces in AI contextualization
Difficulty in “cleaning” inherently biased historical data
Explanation
Milton Mueller challenges the idea of “cleaning” biased data, arguing that historical data inherently reflects past biases. He suggests that the focus should be on expanding data sets and adjusting algorithms rather than trying to remove bias from existing data.
Evidence
He gives an example of Google’s Fairness in Machine Learning program, which adjusts search results to show more diverse representations, even if they don’t accurately reflect historical data.
Major Discussion Point
Limitations and complexities of addressing AI bias
Differed with
Tejaswita Kharel
Differed on
Approach to addressing bias in AI data
Tension between statistical accuracy and fair representation
Explanation
Milton Mueller highlights the tension between maintaining statistical accuracy and achieving fair representation in AI outputs. He questions whether acting on statistical regularities, even if they reflect societal biases, should be considered unfair.
Evidence
He provides an example of insurance premiums based on statistical risk factors, questioning whether such differentiation based on accurate data should be considered biased or unfair; a short illustrative sketch of this tension follows at the end of this point.
Major Discussion Point
Limitations and complexities of addressing AI bias
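To illustrate the tension Mueller points to, here is a minimal sketch, not from the session, built on his insurance analogy with entirely made-up numbers: premiums derived from accurate group-level claim rates cover expected payouts within each group, yet a parity-style fairness check would still flag the resulting gap between groups. The group names, claim rates and payout figure are hypothetical.

    # Minimal sketch of the accuracy-versus-parity tension, using the insurance
    # analogy with entirely synthetic numbers.
    claim_rate = {"group_a": 0.05, "group_b": 0.10}  # hypothetical claim rates
    average_payout = 10_000                          # hypothetical cost per claim

    # "Statistically accurate" premium: the expected payout per policyholder
    # in each group, so premiums exactly cover expected claims within the group.
    premiums = {g: rate * average_payout for g, rate in claim_rate.items()}
    for g, premium in premiums.items():
        expected_payout = claim_rate[g] * average_payout
        print(f"{g}: premium={premium:.0f}, expected payout={expected_payout:.0f}")

    # A parity-style check looks only at the outcome gap between groups,
    # which is large even though the underlying statistics are accurate.
    gap = abs(premiums["group_a"] - premiums["group_b"])
    print(f"Premium gap between groups: {gap:.0f}")

Whether that gap counts as unfair differentiation or as accurate risk pricing is precisely the question Mueller leaves open.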
Emad Karim
Speech speed: 137 words per minute
Speech length: 187 words
Speech time: 81 seconds
Women’s perspectives are often excluded from AI datasets
Explanation
Emad Karim points out that women’s perspectives, history, and narratives are often excluded from AI datasets. This exclusion leads to biased AI outputs that do not adequately represent or serve women’s needs.
Evidence
He mentions research showing that inherited datasets reflect 10,000 years of civilization written by men for men, creating a significant gap in AI outputs related to women.
Major Discussion Point
Challenges in developing fair and unbiased AI systems
Agreed with
Milton Mueller
Tejaswita Kharel
Agreed on
Challenges in addressing bias in AI systems
Nidhi Singh
Speech speed: 183 words per minute
Speech length: 1924 words
Speech time: 630 seconds
Challenge of hyper-contextualization given diverse cultures and languages
Explanation
Nidhi Singh raises concerns about the feasibility of hyper-contextualization in AI systems. She questions whether it’s possible to create AI systems that are contextualized to all cultural contexts and nuances, given the vast diversity of cultures and languages.
Major Discussion Point
Limitations and complexities of addressing AI bias
Agreements
Agreement Points
Need for contextualization in AI fairness
Tejaswita Kharel
Yik Chan Chin
Milton Mueller
Fairness in India means equality, non-discrimination, and inclusivity
Chinese narratives of fairness focus on harmony and role ethics
Contextualizing AI requires understanding local priorities and concerns
All speakers agreed that AI fairness needs to be contextualized to different cultural and regional settings, recognizing that fairness has different meanings and priorities in various contexts.
Challenges in addressing bias in AI systems
Milton Mueller
Tejaswita Kharel
Emad Karim
AI models are predominantly trained on English-language data, leading to biases
Need to identify potential harms before cleaning data
Women’s perspectives are often excluded from AI datasets
Speakers acknowledged the challenges in developing unbiased AI systems, particularly due to limitations in training data and the need to proactively identify potential harms.
Similar Viewpoints
Both speakers suggest that the development of AI fairness frameworks should be driven by regional needs and market demands, rather than a one-size-fits-all approach.
Yik Chan Chin
Milton Mueller
Developing interoperable frameworks with regional best practices
Market-driven approach to contextualization based on demand
Unexpected Consensus
Limitations of current bias mitigation approaches
Milton Mueller
Tejaswita Kharel
Existing bias measures can be gamed and may not truly address cultural issues
Need to identify potential harms before cleaning data
Despite coming from different perspectives, both speakers unexpectedly agreed on the limitations of current approaches to addressing bias in AI systems, emphasizing the need for more nuanced and proactive methods.
Overall Assessment
Summary
The main areas of agreement centered around the need for contextualizing AI fairness, recognizing the challenges in developing unbiased AI systems, and the limitations of current bias mitigation approaches.
Consensus level
Moderate consensus was observed among the speakers on the importance of contextualization and the challenges in addressing AI bias. This implies that future discussions on AI fairness and governance should prioritize regional and cultural considerations, as well as more sophisticated approaches to bias mitigation.
Differences
Different Viewpoints
Approach to addressing bias in AI data
Tejaswita Kharel
Milton Mueller
Need to identify potential harms before cleaning data
Difficulty in “cleaning” inherently biased historical data
Kharel advocates for proactively identifying potential harms to guide data preparation, while Mueller argues that historical data inherently reflects past biases and cannot be easily ‘cleaned’.
Role of market forces in AI contextualization
Milton Mueller
Yik Chan Chin
Market-driven approach to contextualization based on demand
Developing interoperable frameworks with regional best practices
Mueller proposes a market-driven approach to AI contextualization, while Chin suggests developing interoperable frameworks that incorporate best practices from different regions.
Unexpected Differences
Definition and measurement of fairness in AI
Tejaswita Kharel
Yik Chan Chin
Milton Mueller
Fairness in India means equality, non-discrimination, and inclusivity
Chinese narratives of fairness focus on harmony and role ethics
Tension between statistical accuracy and fair representation
The speakers unexpectedly diverge significantly in their definitions and approaches to fairness in AI. This highlights the complexity of defining and implementing fairness across different cultural contexts, which is a crucial challenge in global AI governance.
Overall Assessment
Summary
The main areas of disagreement revolve around approaches to addressing bias in AI data, the role of market forces in AI contextualization, and the definition and measurement of fairness in AI across different cultural contexts.
Difference level
The level of disagreement among the speakers is moderate to high, particularly on fundamental issues such as the nature of fairness and how to address bias in AI systems. These differences highlight the significant challenges in developing globally applicable AI governance frameworks and underscore the need for continued dialogue and research to bridge cultural and methodological gaps in AI ethics and fairness.
Partial Agreements
Both speakers agree on the need for collaboration and integration of diverse perspectives in AI development, but differ in their specific approaches. Chin focuses on regional best practices, while Kharel emphasizes collaboration between ethics experts and AI developers.
Yik Chan Chin
Tejaswita Kharel
Developing interoperable frameworks with regional best practices
Need for collaboration between ethics experts and AI developers
Thought Provoking Comments
Fairness, in terms of its own concept, is a very subjective thing. There is no specific understanding or definition of what fairness can even mean, which means that we must look at other factors that will guide our understanding of what fairness can mean, which in the Indian context is three aspects: the first being equality, the second being non-discrimination and the third being inclusivity.
speaker
Tejaswita Kharel
reason
This comment introduces a nuanced framework for understanding fairness in the Indian context, breaking it down into three key aspects. It challenges the notion of a universal definition of fairness and emphasizes the need for contextual understanding.
impact
This set the tone for the discussion by highlighting the complexity and subjectivity of fairness, especially in diverse cultural contexts. It led to further exploration of how fairness is understood and implemented in different regions.
What we found from our two years of research is that most countries accept a core set of principles; the major difference is in the narratives.
speaker
Yik Chan Chin
reason
This insight from a two-year research project reveals that while there may be broad agreement on core principles of digital ethics, the major differences lie in the narratives surrounding these principles. It introduces the concept of ‘narratives’ as a key factor in understanding cultural differences in AI ethics.
impact
This comment shifted the discussion towards examining how different cultures and regions construct narratives around AI ethics, rather than focusing solely on the principles themselves. It deepened the conversation by adding a layer of cultural analysis.
I’m what you might call an AI skeptic. That is to say, I don’t really believe AI exists. I think people have created this monster around it and mostly don’t know what the technology actually is or what it does.
speaker
Milton Mueller
reason
This provocative statement challenges the fundamental assumptions about AI that underpin much of the discussion on AI ethics and governance. It introduces a skeptical perspective that questions the very nature of what we’re discussing.
impact
This comment introduced a critical perspective that encouraged participants to question their assumptions about AI. It led to a more grounded discussion about the actual capabilities and limitations of current AI technologies.
The data is the data. The data is a record of something that happened in an information system somewhere at some time. What you are going to change is you’re going to look at other collections of data, bigger collections of data.
speaker
Milton Mueller
reason
This comment provides a pragmatic perspective on dealing with bias in AI, challenging the notion that we can simply ‘clean’ data of bias. It emphasizes the importance of expanding and diversifying data sets rather than trying to retroactively remove bias.
impact
This shifted the conversation from idealistic notions of removing bias to more practical approaches of managing and mitigating bias through data governance and collection practices. It added complexity to the discussion of fairness in AI.
Overall Assessment
These key comments shaped the discussion by introducing nuanced perspectives on fairness, cultural narratives, and the nature of AI itself. They moved the conversation beyond simplistic notions of AI ethics to explore the complexities of implementing fairness in diverse cultural contexts. The discussion evolved from theoretical concepts to more practical considerations of data governance and bias mitigation. Overall, these comments deepened the analysis, introduced critical perspectives, and encouraged a more nuanced understanding of the challenges in contextualizing AI fairness across different cultures and regions.
Follow-up Questions
How can we expand and fix datasets so that AI better represents women, especially those from marginalized groups?
speaker
Emad Karim
explanation
This is important to address the exclusion of women’s perspectives and experiences in AI datasets and outputs, which can perpetuate biases and inequalities.
How can we make biased historical data fair for use in AI systems?
speaker
Xiao
explanation
This is crucial for addressing inherent biases in existing datasets that reflect historical and cultural prejudices, which can lead to unfair AI outputs.
What are some ways to reconcile differences in fairness across regional contexts and specific AI applications?
speaker
Online participant
explanation
This is important for developing AI systems that can be ethically applied across diverse cultural and regional settings while maintaining fairness.
What emerging technologies or methodologies show promise in creating more nuanced, context-sensitive AI fairness assessments?
speaker
Online participant
explanation
This is crucial for advancing the field of AI ethics and ensuring that fairness assessments can accurately capture and address the complexities of different contexts.
How can we bridge the gap between those who want ethics in AI and those who can actually implement it in AI systems?
speaker
Tejaswita Kharel
explanation
This is important for ensuring that ethical principles are effectively translated into practical implementations in AI systems.
How can different countries contribute their best practices to create an interoperable framework for AI fairness?
speaker
Yik Chan Chin
explanation
This is crucial for developing a global approach to AI fairness that respects regional diversity while establishing common standards.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Related event

Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online