WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy

17 Dec 2024 13:45h - 14:45h

Session at a Glance

Summary

This discussion focused on the challenges and opportunities of artificial intelligence (AI) in relation to human rights, inclusivity, and responsible development. Panelists from diverse backgrounds explored how AI systems can perpetuate societal biases and inequalities, particularly affecting marginalized communities and individuals with disabilities. They emphasized the need for a human-centered approach to AI development, incorporating diversity in teams, rigorous auditing, and transparency in algorithmic processes.

The conversation highlighted the importance of comprehensive regulatory frameworks and standardization to ensure AI accountability and fairness. Panelists stressed the critical role of governments in establishing clear guidelines and independent oversight mechanisms. They also discussed the significance of public awareness and education about AI systems to empower users and drive demand for responsible AI practices.

The discussion touched on specific examples of AI applications in healthcare, welfare systems, and assistive technologies, illustrating both the potential benefits and risks of these systems. Panelists agreed on the need for multi-stakeholder collaboration, including youth engagement, to address AI-related challenges effectively.

The importance of data quality, representation, and transparency in AI development was a recurring theme. Panelists advocated for proactive bias mitigation techniques and the establishment of clear mechanisms for individuals to challenge algorithmic decisions.

While acknowledging the complexities of making AI responsible and transparent, the participants concluded that pausing AI development is not a viable option. Instead, they called for continued efforts to improve AI explainability, enhance public understanding, and foster collaboration among all stakeholders to shape a more inclusive and ethical AI-driven future.

Key Points

Major discussion points:

– The potential for algorithmic bias and discrimination in AI systems, especially impacting marginalized groups

– The need for human-centered approaches, diversity, and inclusion in AI development

– The importance of transparency, explainability, and accountability in AI systems

– The role of governments in regulating AI and establishing frameworks for responsible development

– The need for public awareness, education, and AI literacy

The overall purpose of the discussion was to explore the challenges and potential solutions for developing responsible, ethical, and inclusive AI systems that respect human rights and do not perpetuate or amplify existing societal biases and inequalities.

The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, there were also notes of optimism, especially towards the end, as speakers emphasized the potential for positive change through collaboration, education, and proactive policymaking. The tone shifted slightly from highlighting problems to focusing on potential solutions and calls to action.

Speakers

– Monica Lopez: CEO and co-founder of Cognitive Insights for Artificial Intelligence, expert in human intelligence, machine intelligence, human factors, and system safety

– Paola Galvez: Worked on AI readiness assessment and national AI strategy for Peru

– Ananda Gautam: Moderator

– Yonah Welker: Visiting lecturer at Massachusetts Institute of Technology, ambassador of EU projects to the MENA region

– Abeer Alsumait: Public policy expert with experience in cybersecurity, ICT regulation and data governance in the Saudi government

Additional speakers:

– Meznah Alturaiki: Representative of the Saudi Green Building Forum

– Aaron Promise Mbah: No specific role mentioned

Full session report

Expanded Summary of AI and Human Rights Discussion

Introduction

This discussion brought together experts from diverse backgrounds to explore the challenges and opportunities presented by artificial intelligence (AI) in relation to human rights, inclusivity, and responsible development. The panel included Monica Lopez, CEO and co-founder of Cognitive Insights for Artificial Intelligence and member of the global partnership on AI; Paola Galvez, who worked on the national AI strategy for Peru and has experience with Microsoft; Yonah Welker, a visiting lecturer at MIT; and Abeer Alsumait, a public policy expert with experience in cybersecurity, ICT regulation, and data governance in the Saudi government. The conversation was moderated by Ananda Gautam.

Key Themes and Discussion Points

1. Algorithmic Bias and Its Impact

A central theme of the discussion was the potential for algorithmic bias and discrimination in AI systems, particularly affecting marginalised communities and individuals with disabilities. Monica Lopez emphasised that algorithms are not neutral tools but powerful social mechanisms that can perpetuate or challenge existing power structures. This sentiment was echoed by Abeer Alsumait, who noted that AI systems have demonstrated bias against marginalised groups in various domains, including healthcare, as evidenced by the Pennsylvania University study she mentioned.

2. Addressing Algorithmic Bias and Promoting Responsible AI

To combat algorithmic bias, the speakers proposed several strategies:

a) Diversity in AI development teams: Monica Lopez stressed the crucial importance of diverse teams in AI development to mitigate bias. Paola Galvez emphasized the need for gender equity in AI.

b) Rigorous algorithmic auditing and transparency: Lopez advocated for comprehensive auditing processes and increased transparency in algorithmic decision-making, calling for standardization in AI audit documentation and metrics.

c) Proactive bias mitigation techniques: The implementation of proactive measures to identify and address bias before AI systems are deployed was recommended (a minimal illustration of one such check is sketched after this list).

d) Comprehensive regulatory frameworks: There was consensus on the need for robust regulatory frameworks to guide responsible AI development, with multiple speakers referencing the EU AI Act as a potential model.

e) Ongoing community engagement: The speakers emphasised the importance of continuous dialogue with affected communities throughout the AI development process.
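
To make the auditing and bias-mitigation points above concrete, the sketch below shows one of the simplest checks such an audit can include: comparing selection rates across demographic groups and computing their ratio (often called the disparate impact ratio). It is a minimal, illustrative Python example with invented data and hypothetical group labels, not a metric, threshold, or tool endorsed by the panel.

```python
# Minimal sketch of one bias-audit check: compare selection rates across groups.
# All data below is invented purely for illustration.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, decision) pairs, where decision is 1 (favourable) or 0."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest; values well below 1.0 flag potential bias."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outputs from a hiring model, grouped by a protected attribute.
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]

rates = selection_rates(decisions)
print(rates)                          # {'group_a': 0.67, 'group_b': 0.33} (approximately)
print(disparate_impact_ratio(rates))  # 0.5, which would warrant further investigation
```

In practice, the audits discussed by the panel would combine several such disaggregated metrics with documentation of the training data and continuous monitoring as new data are collected and models drift.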

3. AI in Assistive Technologies

Yonah Welker highlighted the potential of AI to support people with disabilities through assistive technologies. He emphasized that creating disability-centric AI is a complex process requiring a multi-modal and multi-sensory approach. Welker also stressed the need for legal frameworks to complement assistive technologies and called for dedicated safety models and regulatory sandboxes for AI testing.

4. Government Role in Responsible AI Development

The panelists agreed on the critical role of governments in regulating AI adoption and ensuring responsible development. Abeer Alsumait discussed the Saudi Data and Artificial Intelligence Authority’s role in advancing AI governance. Paola Galvez advocated for structured public participation in AI policy development and stressed the importance of investment in AI skills development.

5. Education and Public Awareness on AI

Monica Lopez argued that increased public knowledge would drive demand for more responsible AI practices and create more engaged and critical AI users. An audience member raised the point that even professionals like judges require specialised understanding of AI systems, underscoring the breadth of educational needs across society.

6. Localization and Language in AI

Yonah Welker emphasized the need for localized AI solutions, especially for non-English languages, to ensure inclusivity and effectiveness across diverse populations.

7. Environmental Impact of AI

Paola Galvez raised concerns about the environmental impact of AI, highlighting the need to consider sustainability in AI development and deployment.

8. AI in Healthcare

Abeer Alsumait discussed challenges in healthcare AI, referencing the Pennsylvania University study that revealed biases in healthcare algorithms.

9. Content Recommendation and User Safety

An audience question addressed the issue of AI recommending potentially harmful content to vulnerable users, highlighting the need for responsible content curation and user protection.

10. Balancing AI Progress and Responsible Development

While acknowledging the complexities of making AI responsible and transparent, the participants concluded that pausing AI development is not a viable option. They advocated for continued efforts to improve AI explainability, enhance public understanding, and foster collaboration among all stakeholders to shape a more inclusive and ethical AI-driven future.

Conclusion and Future Directions

The discussion concluded with several key takeaways and action items, including the need for diverse development teams, rigorous auditing, proactive bias mitigation, comprehensive regulatory frameworks, ongoing community engagement, investment in education, and the development of national AI strategies.

Unresolved issues included how to effectively standardise AI audit documentation and metrics, balance rapid AI development with responsible implementation, make complex AI systems easily explainable to the general public, and ensure AI policies are effectively implemented and enforced.

The panelists, particularly Yonah Welker, expressed optimism about stakeholders working together to address these challenges and shape a more inclusive, ethical, and human-centred approach to AI development and governance.

Session Transcript

Monica Lopez: Okay, yes. So, can you hear me okay? Yes? All right. Well, first of all, thank you for the forum organizers for continuing to put together this summit on really such critical issues related to digital governance. I’m really excited to be here, at least online. And I also want to thank Paola Galvez for really bringing all of us from across the world together, whether virtually or in person. So, as a brief introduction, I’m Dr. Monica Lopez and I come from a technical background. So, I’m trained in the cognitive and brain sciences and I’ve been in the intersecting fields of human intelligence, machine intelligence, human factors, and system safety now for 20 years. I’m an entrepreneur and the CEO, co-founder of Cognitive Insights for Artificial Intelligence. And I essentially work with product developers and organizational leadership at large to develop robust risk management framework from a human centered perspective. I’m also an AI expert on scaling responsible AI for the global partnership on AI. So, I certainly do recognize many, many individuals. So, as for my contribution, I really do hope to complement the group here. I’m coming from the private sector perspective. So, certainly as we all know, today’s rapidly evolving digital landscape, we know that algorithms have essentially become the invisible architects, perhaps we can call that of our social, economic, and political experiences. And so, what we have are very complex mathematical models designed to process information and make decisions many times fully automated that now essentially underpin every aspect of our lives. And as we all well know at this point, from job recruitment to financial services, to criminal justice and social media interactions. And so, this promise of technological neutrality essentially masks a reality. One where algorithmic systems are not objective, but instead they are essentially reflections of the biases, the historical inequities and the systemic prejudices that are across our societies. And so they are essentially embedded in the design and training of data. And so this, as we all know, as well as this has direct human rights implications of algorithmic bias that are profound at this point and really far reaching. And these systems essentially perpetuate and amplify these existing inequalities and are creating digital mechanisms of exclusion that are systematically disadvantaging marginalized communities. And so just very quick before I enter into why we need a human centered perspective on this, but I’m sure very clear examples that you may be familiar with already are with facial recognition technology or FRT and that they have demonstrated significantly higher error rates for women and people of color. We continue to see that problem. AI driven hiring algorithms that have shown to discriminate against candidates based on gender, on race and other protected characteristics. And AI enabled criminal justice risk assessment tools. And we’ve certainly seen this, I’m based in the United States. And so we’ve shown that it has continued to perpetuate racial biases leading to more severe sentencing recommendations for black defendants compared to white defendants with similar backgrounds. So essentially, why do we have this? And the root of these challenges really lies in the fundamental nature of algorithmic development. 
And we know that machine learning models are trained on historical data that inherently reflect, as I mentioned earlier, these societal biases, power structures and systemic inequalities. And I want you to take a moment right now to consider what a data point even means. How a single- data point has limits. As those of you know, for those who work closely with data on a daily basis, and by that I mean whether you’re collecting it, whether you’re cleaning it, analyzing it, making conclusions from it, you know that the basic methodology of data is such that it systematically leaves out all kinds of information. And why? Because data collection techniques, they have to be repeatable across vast scales, and they require standardized categories, and while repeatability and standardization make database methods powerful, we have to acknowledge that they have power as a price. So it limits the kind of information we can collect. So when these models are then deployed, without any sort of critical examination, they don’t just reproduce existing inequities, they actually normalize and scale them. And so here is where I would argue, and I know the rest of the panel will continue to discuss this, but why a human-centered approach to algorithmic development offers essentially a critical, at this point, pathway to addressing these systemic challenges. And essentially what this means is that we need to reimagine technology as a tool for empowerment and well-being, instead of a tool for exclusion. And so in this regard, prioritizing human rights, equity, and meaningful inclusion, every single step of technological design through implementation. And by that, I mean across the entire AI lifecycle becomes essential. And so I work with a lot of clients, as I mentioned earlier, I am in the private sector, and there are key strategies right now that are very clear that we know that we can advance this human-centered approach. And I’ll just briefly mention five of them real quick. So first is we need comprehensive diversity across algorithmic development. I’m sure you’ve been hearing that a lot, but the problem is that the change has not, the transformative change has not really begun. And we know that if we diversify teams, we do get more responsible development of algorithmic systems. We do get new perspectives at the table. And so I would say that’s absolutely essential no matter what moving forward. The second element is rigorous algorithmic auditing and transparency. Again, that is another element that we have seen. It is now part, in fact, in part related to the European Union’s AI Act requirement. But what we need is we need to see this across the three perspectives of equality, equity, and justice. And this is not just for big tech companies to be engaging in. This is truly for everyone. And we know that irrespective of emerging legal requirements in some jurisdictions and some where there isn’t much work happening on the legal side, all organizations must implement mandatory algorithmic impact assessment to thoroughly examine the potential discriminatory outcomes before deployment. And then not just that, but continuously monitor those outcomes as more data are collected and models drift. And I have noticed that when companies do that, whether they’re small or medium-sized or large, we do see better outcomes. A third element is the establishment of proactive bias mitigation techniques. Now, there are all sorts of technical strategies for that. 
Some of them are based essentially on what I was mentioning with regard to how you really need to think about what data means. So, careful curation of the training data: we need to make sure it truly is representative and balanced across the data sets. It does matter and it does change outcomes. Implementation of fairness constraints. Also the development of testing protocols that specifically examine the potential for discriminatory outcomes. We know that when you identify that beforehand and you actually look for it, you will see it and you can actually mitigate, change, and improve on the issue. The fourth element is, of course, the classic need for legal and regulatory frameworks. And here I can’t stress enough at this point that governments and international bodies have to truly come up with comprehensive regulatory frameworks that treat algorithmic discrimination as a fundamental human rights issue. And from a business perspective, what this means is that there need to be clear legal standards for algorithmic accountability. There also need to be very clear mechanisms for individuals to be able to challenge algorithmic decisions. There certainly are not enough. And even in some cases where we have the requirement for companies to actually put their auditing results on their website, that is still not enough. And then, of course, we need significant penalties for those systems. And then the last issue, which is the fifth, is that we need ongoing community engagement. I also cannot stress enough that inclusion does matter, and it requires continuous dialogue with the communities most likely to be impacted by algorithmic systems. And this is not an easy task. It’s a lot to ask for, but we know, and I’ve seen it with companies that actually make concerted efforts, that you can create participatory design processes across the AI lifecycle. That essentially means you’re establishing relevant feedback mechanisms and communications as you create and design these systems; you pilot them and you work with those individuals. And then you essentially end up empowering marginalized communities to actually want to actively provide their input, because it is of value. So what I’m calling for here, essentially, to conclude, is that we need this fundamental re-imagining of technological innovation. We know at this point that algorithms are not neutral tools, but very powerful social mechanisms that either perpetuate or challenge existing power structures. So if we change our methods now, every single one of us, in the design and deployment choices of today, then I think we will very profoundly shape the future of human rights in the digital age in a very positive way. And so I look forward to your questions, and I know we’re going to discuss this more in detail. So thank you. Thank you for listening.

Ananda Gautam: Thank you, Monica, for all your thoughts, and I think you have also covered the second part of the questions already. My apologies. I should have mentioned the time before. So I’ll go to Paola to give a short introduction, and for the first round, let’s wrap within five minutes, and then we’ll go for the second round of questions. For Monica, I think we’ll be going a bit short on the second round. We’ve covered almost most of the things. So, Paola, over to you.

Paola Galvez: Thank you, Ananda. Hello, everyone. Thank you so much for joining us for this very, very critical conversation. I’d like to start by posing a question: what does it take to make society more inclusive? You know, my interest in social impact began early, inspired in part by my grandfather, who was a judge in the Central Highlands of Peru and often spoke about the societal disparities he witnessed. I went to law school believing it would really equip me with the tools to drive meaningful change in a country with high levels of inequality and social disparities, like this country where I’m from. But my first year lacked inspiration; I think my courses were disconnected from real-world problems. My perspective really changed in 2013, when I began an internship at Microsoft. I was looking at a demonstration video of the Seeing AI prototype. It was 10 years ago. But it was this project, which used artificial intelligence to help the visually impaired perceive their surroundings, that opened my eyes and really showed me the profound potential this technology can have as a catalyst for social change. So I said, as a lawyer I can really help leverage this technology as a force for inclusion, and I can use public policy to help drive human-centric and evidence-based policy. And that’s when my commitment started to transform Peru into a more inclusive and digital society. I think that’s the path that led me to what I’m doing now, and I hope it will lead beyond. So I worked in the private sector for a long time. I was in the position that Dr. Monica Lopez was describing, seeing how the private sector operates. Then I received a proposition from the government to work there, to help them with the national AI strategy and the digital assistance strategy. And most of my friends told me, you’re going to be so frustrated, the bureaucracy is going to kill you; come on, you’re used to Microsoft, big tech. But I said, no, I can actually bring and shed light on disruptive ways to govern. So I decided to do it. I’m a firm believer in participatory, bottom-up processes. So the first thing I did was form a multi-stakeholder committee to develop this policy. And we’re here at IGF, a global forum, talking about AI and data at a global level. And I have seen firsthand, in a local experience, what it means to bring civil society, academia and the private sector together to find solutions to challenges. And one of the most challenging things is AI policy. I do believe that protecting democracy, human rights and the rule of law, and establishing clear guidelines on AI, is a shared responsibility that a government cannot carry alone, nor a private sector company, nor academia; it is an endeavor that must be undertaken with a multi-stakeholder approach. But I do think that one stakeholder is crucial in this pursuit, and that is youth civil society. The youth must be included, and youth engagement is a critical area that we need to protect now. That is what I believe, and what I wanted to mention in these first remarks. Because I do see generative AI producing fake and biased synthetic content, large language models reinforcing polarization, and poorly designed AI-powered applications that are not compatible with assistive technologies, leading to discrimination against youth with disabilities. And I have the expert here; Yonah will say more about that. But apart from that, I sincerely believe that AI holds immense potential as a technology if we use it wisely. AI systems can break down language barriers.
I mean, if IGF is as powerful as it is, and the youth IGF and the Youth SIG of the Internet Society are powerful, with a community of more than 2,000 youth connected, and sometimes we use translation that is powered by AI, so that’s powerful. Or, of course, making resources more accessible to diverse youth populations. Sadly, AI has yet to live up to its potential. Dr. Monica Lopez mentioned most of its challenges, which I absolutely agree with. AI is reproducing society’s biases. It is deepening inequalities. I heard someone saying, but that’s just the way the world is. The world is biased, Paola. What do you think? That’s what AI is going to do. And yes, that’s true, but only up to a point, because it depends on us how we want to develop this technology. The results that this technology provides as output depend on us. Because data is the oxygen of AI, and transparency should be at its core. So it’s up to us to shape the future of AI now, and to talk about data that should be more representative. And the focus of the IGF on bringing youth to the discussion is, I think, something to really congratulate, because we have a big youth community at this IGF. So I’m really looking forward to this discussion, and over to you, Ananda.

Ananda Gautam: Thank you so much, Paola, for touching a bit on how powerful it can be. And the work of your government in bringing together a multi-stakeholder committee was really commendable. I’d like to go to Yonah Welker; I’ll also give you five minutes to briefly introduce yourself and build on the base that Dr. Monica and Paola have set up. Over to you.

Yonah Welker: Yes, thank you so much. It’s a pleasure to be back in Riyadh. Three years ago, I had the opportunity to curate the Global AI Summit on AI for the Good of Humanity, and we continued this movement. I’m a visiting lecturer for the Massachusetts Institute of Technology, but I’m also an ambassador of EU projects to the MENA region. And my goal is to bring all of these voices and ideas to actual policies, let’s say the EU AI Act or the Code of Practice. And today I would specifically love to address how these may affect the most vulnerable groups and, as Paola mentioned, individuals with disabilities. That’s why I would love to quickly share my screen; hopefully you can see it. So 28 countries signed the agreement about AI safety, including not only Western countries, but countries of the Global South, Nigeria, Kenya, and countries of the Middle East, Saudi Arabia and the UAE. And the big question is how these actual frameworks can address designated and vulnerable groups. For instance, currently one billion people, 15% of the world, live with disabilities, according to the World Health Organization. And it’s important to understand that sometimes these disabilities are invisible; let’s say neurodisabilities, with at least one in six people living with one or more neurological conditions. And it’s actually a very complex task to bring all of these things into the frameworks. That’s why, for the EU, we have a whole combination of laws and frameworks. We address classifications and taxonomies in the Accessibility Act and the standardization directive. We’re trying to address manipulation and addictive design at the level of the AI Act, the Digital Services Act and the GDPR. We’re trying to understand and identify higher risks for systems related to certain critical infrastructure, transparency risks, and prohibiting particular uses of affective computing. But still it’s not enough, because we need to understand how many systems we actually have, how many cases we have. For instance, for assistive technologies, we have over 120 technologies per the recent OECD report, and I had the opportunity to contribute to this report. We use AI to augment smart wheelchairs, walking sticks, geolocation and city tools. We use AI to support hearing impairment, using computer vision to turn sign language into text. We support cognitive accessibility, including ADHD, dyslexia, autism. But we should also understand all the challenges which come with AI, including recognition errors, when individuals with facial differences or asymmetry, or craniofacial syndrome, are just not properly identified by facial recognition systems, as was mentioned by my colleague; or cue identification errors, when individuals cannot understand the AI interface, cannot hear or see its signal, or when they deal with excluding patterns and errors or exclusion by generative AI and language-based models. Also, we have all the complexity driven by different machine learning techniques: supervised learning, which is connected to errors induced by humans; unsupervised learning, which brings all the errors and social disparities from history; or reinforcement learning, which is limited by training environments, including robotics and assistive technologies. And finally, we should understand that AI is not limited to software. It’s also about hardware, about the human centricity of physical devices, about safety: motion and sensing component safety, power components and environmental safety, and the production and training cycle.
So overall, working on disability-centric AI is not just about words; it’s an extremely complex process of building environments where we have a multi-modal and multi-sensory approach, where we deal with families, caregivers, patients and different types of users, and try to understand and identify scenarios of misuse, actions and non-actions, so-called omission, potential manipulation or addictive design. That is why the next level of AI safety institutes, offices and oversight will include all these comprehensive parameters. We talk not only about a risk-based approach, but about understanding different scenarios, workplaces, education, law enforcement, immigration; we think about taxonomies, frameworks and accident repositories, working with the UN, the World Health Organization, UNESCO and the OECD; and finally, we try to understand the intersectionality of disabilities, thinking about children and minors, women and girls, and all the complexity of the history behind these systems and contexts. Thank you.

Ananda Gautam: Thank you, Yonah, for your wonderful thoughts on how AI could be used in assistive technologies. But there are challenges: even a very minor issue matters, because we cannot accept a minimal level of error in the use of AI in healthcare systems, and we’ll come back to you on these questions. So I’ll ask Abeer to talk about herself, and I’ll give you five minutes as well. Please introduce yourself and give your opening remarks. Thank you.

Abeer Alsumait: Thank you. Hello everyone. It’s a privilege to be a part of this discussion, and I would also like to thank Paola for initiating this and kick-starting it. I’d like to thank the rest of the panel and the moderators as well, and the event organizers. Just to introduce myself briefly, this is Abeer Alsumait. I’m a public policy expert with a little over a decade of experience in cybersecurity, ICT regulation and data governance in the Saudi government. I hold a master’s degree in public policy from Oxford University and a bachelor of science in computer and information sciences. My interest lies in shaping inclusive and sustainable digital policies that drive innovation and advance the digital economy. I would like to briefly start the conversation of this session by mentioning examples that show that, while algorithms and AI promise efficiency and innovation, they have the power to replicate and amplify societal inequalities when not governed responsibly. The first example I would like to mention is from France. In France, a welfare agency used an algorithm to detect fraud and errors in welfare payments, and this algorithm, while in theory a wonderful idea, in practice ended up impacting specific segments of the population and marginalized groups, specifically single parents and individuals with disabilities, far more than any others. It ended up tagging them as high risk more frequently than the rest of the beneficiaries of the system. The impact on those individuals was profound: it led to more investigations, a lot more stress, and in some cases even suspension of benefits. So in October of this year, a coalition of human rights organizations launched legal action against the French government over this algorithm used by the welfare agency, arguing that it actually violates privacy laws and anti-discrimination regulations. This case is a reminder of how risks can be inherent in opaque and poorly governed AI tools. Another example I would like to quickly highlight is in the healthcare sector, where a 2019 study from Pennsylvania University highlighted an AI-driven healthcare system that was used to allocate medical resources for a little over 200 million patients, and that system relied on historical healthcare expenditure as a proxy for healthcare needs. This algorithm did not consider the systematic disparity in healthcare access and spending in society at that time, and it ended up resulting in black patients being up to 50 percent less likely to be flagged as needing enhanced care than their white counterparts. So even though this system was intended to streamline healthcare delivery, it ended up perpetuating inequality and deepening distrust in AI systems and in technology overall. This example underscores one undeniable truth: algorithms are not neutral when built on biased data or flawed assumptions, and they might amplify existing injustices and exacerbate exclusion, often impacting the most vulnerable populations.
These challenges and issues generated actions from governments at an international level, one of which, as mentioned by Dr. Lopez, is the EU AI Act that entered into force this year. It classifies AI systems based on risk, designating areas such as welfare, employment, and healthcare as high risk, where very high standards of transparency, quality, and human intervention are required. A lot of nations and governments followed suit, I believe. One example is here in my country, Saudi Arabia, where the Saudi Data and Artificial Intelligence Authority, established a few years ago, recently adopted AI ethics principles that emphasize transparency, fairness, and accountability. Therefore, I believe governments play a very important role. While every actor and every player is really important in discussions and conversations, governments have critical roles in regulating, establishing responsibility, and advancing the way forward for AI adoption in an equitable and fair way. Thank you.

Ananda Gautam: Thank you, Abeer. So, I’ll come back to Dr. Monica. You have touched on how algorithmic biases arise and what the role of the private sector could be. I’d like to ask you what measures the private sector could take to overcome those biases, along with the role of other stakeholders. If there are any best practices that could be shared, kindly share them, and I’ll ask you to wrap up very soon. Thank you.

Monica Lopez: Absolutely, thank you. Thank you for that question. I know I briefly mentioned some of them, but I’ll now highlight some that, in fact, many of our fellow colleagues have already been mentioning. So the first one, and one that is starting to happen, but not to the extent that I believe it should, is the whole question of diversity in teams. Again, we hear this a lot. We hear that we need to bring different perspectives to the table, but at the end of the day, unfortunately, I have seen even startups, small and medium-sized enterprises, make the argument: we don’t have enough resources, we can’t. And they actually do. Sometimes it’s as simple as bringing the very customers, the very clients that they intend their product or service to serve, into the discussion. So I would say that is one very key element, and we just need to make it a requirement at this point. It needs to essentially become a best practice, frankly. The other one is bias audits. We are certainly seeing across legislation the requirement that one now needs to comply with providing audits for these systems, particularly on the topic of bias, to ensure that they are non-discriminatory and non-biased. So that is a good thing. However, the problem ends up being that we haven’t yet standardized the type of documentation, the type of metrics and the benchmarks. That is right now the conversation, not just in the private sector but certainly also in academia, as to what these should be. I am also in communication and work with individuals from IEEE and ISO, who set the industry standards. So this is a very big topic right now, a debate as to how we standardize what these audits should look like, and how we make sure that not only do we standardize that, but we actually have the right committees in place, experts who can then review this documentation. So I would say that, while extremely important, this sometimes does become a barrier of sorts, precisely because individuals, or rather organizations, companies, don’t know exactly what needs to be put into these audits. So that’s the second element. And the third and final point here is the whole issue of transparency and explainability of these systems. We’ve heard many, many times about the black box nature of these systems. But to be quite honest, we know much more about these systems. Developers do know the data that is involved. We do make mathematical assumptions. So there’s a lot of information at the very beginning stage of data collection and system creation that we have a lot of information about, and we’re not necessarily being very transparent about it in the first place. So I would say that in and of itself is extremely important, but it is also becoming a type of best practice, because if you can establish that from the beginning, it has a downstream effect across the entire AI life cycle, which then becomes extremely important when you start integrating a system. And let’s say you have a problem, a negative outcome, someone ends up being harmed. Then you can essentially reverse engineer back again, if you have that initial, very clear transparency put out in the beginning.
We are starting to see some good practices around that, particularly around model cards and nutrition-like labels; there are examples especially in healthcare. I do a lot of work with the healthcare industry, and there is a very big push right now to essentially standardize and normalize nutrition-like labels around AI model transparency, which I think should then be utilized across all systems, frankly, at this point, all contexts and domains. Thank you.
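
A minimal sketch of what such a nutrition-like label or model card might contain is shown below. The structure and field names are hypothetical and serve only to make the idea concrete; they do not follow any specific published standard.

```python
import json

# Hypothetical, minimal "nutrition label" for an AI model; field names are
# illustrative only and do not represent any particular standard or framework.
model_card = {
    "model_name": "example-risk-screening-model",
    "intended_use": "decision support with a human in the loop, not fully automated decisions",
    "training_data": {
        "sources": "summary of where the data came from and how it was collected",
        "known_gaps": ["groups under-represented in the training set"],
    },
    "evaluation": {
        "overall_metrics": "to be filled in from audit results",
        "disaggregated_metrics": "error rates reported per demographic group",
    },
    "known_limitations": ["performance may drift as new data are collected"],
    "redress": "how an affected individual can challenge a decision",
}

print(json.dumps(model_card, indent=2))
```

Standardizing fields like these, and deciding who reviews them, is exactly the open debate around audit documentation and metrics described above.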

Ananda Gautam: Thank you, Dr. Monica. So I’ll go to Paola. I think after you both have finished, we can continue. So, Paola, you have already worked on the AI readiness assessment for your country, and countries and regions are making declarations; how can these be transitioned into action, based on your experience? Can you share, please?

Paola Galvez: Sure, Ananda. And what you say is key, right? How to pass from declarations to actions. We’ve seen so many commitments already, so great call; thank you for the question. I’d say, first of all, we need to start by adhering to the international frameworks on AI. Countries that have not adopted them will be left out. That way, we ensure alignment with global standards and best practices, and that also helps local businesses to join and makes it easier for them to go beyond borders. This is first. But second, when formulating national AI policies, governments need to develop a structured and meaningful public participation process. This means receiving comments from all stakeholders, but it’s not only that, because that happens a lot in my country, I can tell you. By law, they need to publish any regulation for 30 days. Actually, it just happened: the second draft of the AI Act regulation was published. But what we need for meaningful participation is the government saying how it took these comments into account, and if it is not considering them, why not. I believe that citizens and all the civil society organizations, the private sector, those who are committed, need to know what happened after they commented on any bill. Third, enhance transparency and accessibility. Any AI policy material must be readily accessible, complete, and accurate for the public. Then, independent oversight, I think, is a must, Ananda: creating or designating an independent agency. Here, Abeer mentioned the Saudi Data and AI Authority; I think that is a very good example. Sometimes governments have a challenge with this, because they say, oh, it’s a huge amount of effort, people, resources, right? But if it’s not possible to have a new one, then let’s think: maybe the Data Protection Authority can take over AI. Also, and I think this cannot be left behind, investment in AI policy and AI skills development is a must. We can have the best AI law, but if we don’t help our people understand what AI is and know that AI can hallucinate, we will be lost. So AI skills for the people are a must. And just to finish, as I have always said, with a gender lens, because gender equity and diversity in AI are a must, and this is not being looked at as it should be. You mentioned that I conducted the UNESCO AI readiness assessment methodology, and I’m proud to say that the UNESCO Recommendation on the Ethics of AI is the only document at the moment that has a chapter on gender. And it should be reviewed, because it’s very comprehensive and it has practical policies that should be taken into consideration and put into practice. And of course, environmental sustainability in AI policy should be considered; it is often overlooked. What is the impact on energy? Should we promote energy-efficient AI solutions? Definitely. Minimizing the carbon footprint, of course, and fostering sustainable practices. I will finish with this data point: when you send a question to a large language model, as we all know, ChatGPT, Claude, Gemini, et cetera, it’s the same consumption that an airplane has in a year from Tokyo to New York. So we should be thoughtful about what we are sending to AI, or maybe Google can do it for us, too. Thank you.

Ananda Gautam: Thank you, Paola, for your strong thoughts. I’ll come back to Yonah. You have mentioned AI in assistive technologies. So now I’ll come to how legal frameworks can complement assistive technologies while protecting the vulnerable populations that are using those technologies. We have briefly underlined that a minor error might be a major one in the case of assistive technologies. Over to you, Yonah.

Yonah Welker: Yes. So first of all, we have a few main elements of these frameworks. The first one is related to taxonomies, repositories and cases. And here I would love to echo my colleagues, Dr. Monica and Paola: we actually need to involve all of the stakeholders. For instance, cooperating with the OECD, we involved over 80 organizations to understand the existing barriers of access to these technologies: affordability, accessibility, energy consumption, safety, adoption techniques. That is the first thing. The second thing is accuracy in regional solutions. One of the lessons we learned working in both the EU and the MENA region is that we can’t localize OpenAI, we can’t localize Microsoft solutions, but we can build our own solutions, sometimes not large language models but small language models, not with 400 billion parameters, maybe with 5, 10, 15 billion parameters, but for more specific purposes or languages. For instance, when we did the research for the Hungarian language, we had 1,000 times fewer training sources for ChatGPT in comparison to English. We have a similar situation for many other non-English languages. It just doesn’t work, not only from a regional perspective, but from a scientific research and development perspective. Another thing is dedicated safety models. Sometimes we can’t fix all of the issues within the model, but we can build dedicated agents or additional solutions which track or improve our existing systems. For instance, currently, for the Commission, I evaluate a few companies and technologies which address privacy concerns, compliance with the GDPR, data leakages and breaches, and also online harassment, hate speech, and other parameters. This is also complemented by safety environments and oversight. It’s the job of the government to create so-called testbeds and regulatory sandboxes. These are a kind of specialized center where startups can come in order to test their AI models, to make sure that on the one hand they are compliant and on the other that they actually build safe systems. This specifically relates to areas of so-called critical infrastructure: health, education, smart cities, and, for instance, Saudi Arabia is known for so-called cognitive cities. All these areas are a part of our work when we’re trying to build efficient, resilient, and sustainable solutions. And finally, there is cooperation with intergovernmental organizations. For instance, we work on frameworks called digital solutions for girls with disabilities with UNICEF. We work with UNESCO on AI for children. So we’re trying to reflect more specific scenarios and adoption techniques related to specific ages, let’s say from eight to 12 years old, or specific regions, or specific genders, including both the specifics of adoption and safety considerations, and even unique conditions or illnesses which are very specific to a particular region. For instance, we have very different statistics related to diabetes, craniofacial syndrome, and different types of cognitive and sensory disabilities if we compare the MENA region and the EU. So it’s a very complex process. And as I’ve mentioned, our policies are now becoming overlapping. So even for privacy, for manipulation, for addictive design, we have an overlap not only in the AI Act but also in other frameworks, the Digital Services Act, data regulation. So some essential pieces of our vision exist in different frameworks, and not even governmental employees are aware of it. And the final thing is AI literacy and adoption.
So we’re working to improve the literacy of the governmental workers and governors who will implement these policies and bring them to life.

Ananda Gautam: Thank you, Yonah, so much. So I’ll come back to Abeer. We have been talking about the complexity of making AI responsible, and making AI responsible demands ensuring accountability and transparency. While we are seeing many automated AI systems, who will be responsible if an automated car kills a man in the street? This has been a serious question, and there are other consequences. So in this context, how can governments ensure responsible AI while ensuring accountability and transparency? Kindly go ahead. Thank you.

Abeer Alsumait: Thank you. So I think this question actually relates to what Dr. Lopez mentioned. The keywords here are transparency and explainability. Of course, regulations and law establish responsibilities and make sure every actor involved in any event knows their role and knows when to be responsible. But the fact that they can explain and be transparent about how they work, how they operate, and how they might impact others and individuals, specifically vulnerable populations, is also really key. And as Dr. Lopez mentioned, the private sector knows more than maybe we understand, but we’re not very clear on how we want transparency and explainability to work. And maybe my thought on that is that governments should work hand in hand, should push for standardization to happen as soon as possible, should be clear in establishing responsibility, and should be clear about what it means to have a point of transparency for AI and algorithms. One extra thing that I think governments should also focus on is to establish a right, a way for individuals to challenge such systems and the impactful algorithms in their lives. So my idea is that there should be continuous evaluation and risk assessment of how a system is actually working in real life, and in case any incident of bias or discrimination happens, there should be a clear way, a clear procedure, for individuals and for governments to start auditing and reviewing any system that is in operation and impacting the lives of individuals.

Ananda Gautam: Thank you, Abeer. Maybe we’ll come back to you in the Q&A session. There is one contributor in our audience. I’ll ask her to provide her contribution, and then Martilda will bring what we have in the discussions in the chat and any questions online, and then we’ll go to the question and answer. Over to you, please.

Audience: Thank you so much. My name is Meznah Alturaiki, and I’m representing the Saudi Green Building Forum, which is a non-governmental and non-profit organization that supports and promotes green practices as well as decreasing carbon emissions and energy consumption. Of course, it contributes to the digital transformation that the world is now witnessing. And for that, I would like to participate and offer a critical perspective: algorithms offer immense potential to enhance our daily lives, yet we face fundamental challenges relating to biases and exclusion. Many of these systems function as opaque, as Dr. Monica said, lacking transparency, which of course perpetuates social disparities and exacerbates discrimination against marginalized communities. In the absence of proper scrutiny and accountability, algorithms sometimes contribute to human rights violations instead of addressing them. What should we do about that as civil society? We need to take action, and we need to call for greater transparency and accountability to ensure algorithms are open to scrutiny and include clear mechanisms for identifying and addressing biases. Of course, we need to integrate human rights into algorithm design, which means we need to focus on developing human-centered algorithms that prioritize the needs of marginalized groups. And finally, we need to foster multilateral collaboration to engage all stakeholders, as you all mentioned, to ensure algorithms are fair and inclusive, considering diverse cultural and social dimensions. Now, we recommend the following. First, launch a global algorithmic transparency initiative that establishes an international platform to set standards for evaluating the impact of algorithms on human rights and promoting transparency. Second, design inclusion-oriented algorithms, developing algorithmic tools that prioritize accessibility, improve service delivery for people with disabilities, and ensure greater inclusivity. And last but not least, implement training programs that build the capacity of developers and decision-makers to understand the risks of algorithmic bias and address them effectively. Thank you.

Ananda Gautam: Thank you so much. So, do we have any questions on site? There are no online questions, I believe. While asking questions, please also mention whom you are asking so that it is easier to answer, or if it is for everyone, let us know as well. Please. OK. Thank you.

Audience: My name is Aaron Promise Mbah, and I worked on this with Paola and all of you here. So I’m very excited because of the insights we’ve been sharing. So I have a question, and I would like Dr. Monica to help me address it. I understand where you talked about algorithms helping with marketing and some other business, right? And then the kind of divide that comes with it, the risk that comes with it, that it can actually amplify the digital divide, especially for persons with disabilities, right? And then I’ve worked with some persons with disabilities using social media and all of that. And then there’s a particular case, where I think Abe also mentioned something about depression, suicide, right? Suicide, right? So now you have someone click on Spotify to listen to music; maybe he’s feeling down. And then after that, you see Spotify recommending music about suicide, right? That kind of music. So how do we address this, right? And Paola also mentioned something about, sorry, let me get it, standardization, right? Having a policy. And then countries are making declarations, right? How to take action on this. Then she talked about ownership, right? Public participation. Now, when you were talking, you talked about a particular policy that Nigeria, I’m from Nigeria, right, a particular policy that Nigeria has adopted. So I wanted to know: Nigeria has a lot of policies, even an AI policy, right? We are always at the forefront of adopting; we look at other countries doing a lot of things and then we start doing our own. And then we have a lot of these documents, but then there’s no implementation and enforcement, right? So now, how do we ensure that it’s not just paperwork, right? That we don’t just do all of these policies and leave them as the creators wrote them, but that they are actually enforced and followed through into implementation and all of that. So if you can share some of your insights about that. Thank you very much.

Monica Lopez: Thank you for that question. It’s very complex; I mean, you really touched upon many, many aspects. But I think something that really stands out, and perhaps Paola had also mentioned this at one point, is that there really needs to be, well, let me backtrack a second. So yes, everybody’s talking about regulation. Everybody’s talking about standards, normalization. Everybody’s talking about, we need implementation, how do we do enforcement? But I think part of the problem lies in the fact that we simply do not have enough public awareness and understanding. Because I think if we actually did have more of that, there would be more of a demand. And I see this in terms of, I mean, yes, we hear some very tragic examples. You did mention someone who has depression and may use Spotify and then get recommended different new types of music to apparently, quote unquote, improve or fix, and one has to be careful with the words one uses here, deal with that situation. And we’ve even seen two recent suicides as a result of chatbot use, because of the anthropomorphization of these systems. And I think it really goes back to this question that many times, many users, unfortunately, maybe most users, do not understand these systems fundamentally. That’s an education issue. That’s an education question. Because if you know and understand, then you can critically evaluate these systems. You can be more proactive because you know what’s wrong, or you see the gap, you see what needs to be improved. And I say this as, well, I didn’t mention this, but I’m also in academia, and I teach in the School of Engineering at Johns Hopkins University in the Washington DC, Maryland region in the United States. And I teach the courses on AI ethics and policy and governance to computer scientists and engineers. And I love when they come at the beginning of class with no awareness, and at the end they are absolutely more engaged, and they all say, we want to go and be those engineers who can talk to policymakers. And so to me, that is very clear evidence; whether they’re high schoolers, undergraduate students, working professionals going back to school, graduate students, whatever it is, I see this change. And it’s change because of the power of knowledge. So really, my call here is that we need far more incentivization to create much more educated users, everyone, of all ages. Then we’re going to see the demand, and I really think there is going to be that demand on companies: we want to ensure that our data is private, we want to ensure that we’re not being harmed, we want to ensure that we actually benefit from these technologies. I’ll stop there. I think, yeah, others can add to it, I’m sure.

Ananda Gautam: Hello. Thank you, Monica, for your wonderful response. We have only five minutes left. I have been already one. So Martinda, is there any online discussion or question or any contribution? No? If there is any question, please feel free and contributions are also welcome. We have five minutes. Please keep the time in mind, both speakers and like speakers.

Audience: Thank you, I’ll be quick. It’s been a great discussion. We do get this, the point on education is very well made and we’ve realized in our work, I work in New Delhi in India, and we’ve realized even with very specialized sector of the population like judges and lawyers, it takes a lot of conversation, a lot of detailing to get to a point where something like bias that judges work with daily, for them to start to understand what bias in an AI system might look like. So my question, I guess what I’m trying to ask is when something requires such specialized and detailed understanding, then clearly the problem isn’t with people not being able to understand, maybe it’s with the technology not being at a stage where it’s readily explainable, where it’s easily explainable for societal use. So is there any merit to, frequently we keep getting these discussions on maybe there’s a need to pause, especially with technologies like deep fakes, which everyone who does research in this area knows are primarily going to be used for harm, or not primarily, but massively going to be used for harmful ends. So is there any credence or is there any currency to pushing for a pause at certain levels, or are we way past that? that point already, and we just have to mitigate now. That’s a small question. Sorry if it’s a little depressing in there. Yeah.

Ananda Gautam: Thank you so much for the question. If there are any other questions, let's take them, and then I'll give each speaker one minute to wrap up. Any questions or contributions from the floor? No. None from online. So each speaker can have one minute to respond, or just give a one-liner for the wrap-up. Thank you. You can start, Abeer, maybe.

Abeer Alsumait: A quick answer to end with. I think we have all pondered this, and I don't think there is a definitive answer. Are we beyond that point? I don't think so. But should we pause? Honestly, I also don't think so. I think we can put more effort into making technology more explainable and bridge the gap little by little. That, I think, is what every actor and every player should work towards. Those are my thoughts on that.

Paola Galvez: Totally agree. Absolutely, we cannot pause, because if some group decides to do it, others will simply continue; it would be like putting a blindfold over our own eyes. So we cannot do it, but we can use what we have. And if our countries do not have a data protection law or a national AI strategy, we need to push for it to happen, because if a country has no vision of how it wants this technology to develop, what is the future for us as citizens? I leave that question with us. Let's reflect on how we can contribute to the future of AI.

Ananda Gautam: Thank you, Paola. Now, Monica and Yonah, please.

Monica Lopez: Yeah, I would absolutely agree with both comments. We can't pause and we can't ban; that is not going to work. We are moving far too fast at this point anyway. But I would say that where there's a will, there's a way. So if we all come to the agreement and acknowledgement that we need to do this, and I mean all of us, not just those of us here right now and our colleagues, but everyone, then I think it's possible, and we need to act.

Ananda Gautam: Yonah, please.

Yonah Welker: Yes, I'm always on the positive side, because finally we have all the stakeholders together, and that includes the European Commission. I would love to quickly respond to Aaron's earlier question about keywords and suicide, because it really is about awareness. If you know that recommendation engines use so-called stop words, and you know how the history of these engines works, you can address it through regulatory sandboxes: emerging companies and start-ups come into these centers, and you can provide the oversight to fix these issues. The same goes for bias. Once you know that bias is not an abstract category but a problem of under- or over-representation, simply a bigger error rate for smaller groups, a purely data-driven and mathematical issue that originates in society, you can clearly identify it. It can be a technical issue or a social issue, and once you see it, you can fix it. That is why we now have these tools, testbeds, regulatory sandboxes, and policy frameworks, with all the stakeholders working together to arrive at real-life terms and a shared understanding, so finally we can fix it together. Thank you.

Ananda Gautam: Thank you, Yonah. Thank you to all of our panelists, and thank you, Paola, for organizing this. To our on-site audience and our audiences online: this is not the end of the conversation; we have only just begun it. You can connect with our speakers on LinkedIn or wherever you are. Thank you so much, everyone. Have a good rest of the day. Thank you all. Panelists, could you please stay? We would like to take a picture with you on the screen. Thank you.

Monica Lopez

Speech speed: 158 words per minute

Speech length: 2839 words

Speech time: 1075 seconds

Algorithms reflect societal biases and perpetuate inequalities

Explanation

Algorithms are not neutral tools but reflections of existing biases and systemic prejudices in society. These biases are embedded in the design and training data of algorithmic systems.

Evidence

Examples include facial recognition technology showing higher error rates for women and people of color, AI-driven hiring algorithms discriminating based on gender and race, and criminal justice risk assessment tools perpetuating racial biases.

Major Discussion Point

Algorithmic Bias and Its Impact

Agreed with

Abeer Alsumait

Paola Galvez

Agreed on

Algorithmic bias perpetuates inequalities

Algorithmic bias has profound human rights implications

Explanation

The biases in algorithmic systems have far-reaching consequences for human rights. These systems can systematically disadvantage marginalized communities and create digital mechanisms of exclusion.

Major Discussion Point

Algorithmic Bias and Its Impact

Diversity in AI development teams is crucial

Explanation

Having diverse teams in algorithmic development is essential for responsible AI. This diversity brings new perspectives to the table and helps in creating more inclusive and unbiased systems.

Major Discussion Point

Addressing Algorithmic Bias and Promoting Responsible AI

Rigorous algorithmic auditing and transparency are necessary

Explanation

There is a need for comprehensive auditing of algorithmic systems to ensure they are non-discriminatory and unbiased. Transparency in these audits and in the overall functioning of AI systems is crucial.

Evidence

The European Union’s AI Act requirement for algorithmic auditing was mentioned.

Major Discussion Point

Addressing Algorithmic Bias and Promoting Responsible AI

Proactive bias mitigation techniques should be implemented

Explanation

Organizations must implement proactive measures to mitigate bias in AI systems. This includes careful curation of training data, implementation of fairness constraints, and development of testing protocols to examine potential discriminatory outcomes.

Major Discussion Point

Addressing Algorithmic Bias and Promoting Responsible AI
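To make the testing protocols mentioned in this point more concrete, the sketch below shows one minimal way a team might disaggregate a model's decisions by group and compare selection rates and false-positive rates, the "bigger error for smaller groups" that Yonah Welker describes later in the session. It is an illustration only: the data, group labels, and metric choices are hypothetical, and a real audit would rely on established fairness toolkits and domain-appropriate metrics.

```python
# Minimal sketch of a disaggregated fairness check (illustrative only).
# Assumes three parallel lists: model decisions (1 = flagged), ground-truth
# outcomes, and a hypothetical group label for each individual.
from collections import defaultdict

def per_group_rates(y_pred, y_true, groups):
    """Return selection rate and false-positive rate for each group."""
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "fp": 0, "negatives": 0})
    for pred, true, group in zip(y_pred, y_true, groups):
        s = stats[group]
        s["n"] += 1
        s["selected"] += pred
        if true == 0:
            s["negatives"] += 1
            s["fp"] += pred
    report = {}
    for group, s in stats.items():
        report[group] = {
            "selection_rate": s["selected"] / s["n"],
            "false_positive_rate": s["fp"] / s["negatives"] if s["negatives"] else None,
        }
    return report

# Toy data: decisions, true outcomes, and group tags (all hypothetical).
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
y_true = [1, 0, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

for group, rates in per_group_rates(y_pred, y_true, groups).items():
    print(group, rates)
```

Even this simple per-group breakdown surfaces the kind of disparity that aggregate accuracy can hide, which is why disaggregated reporting is typically an early step in the bias testing protocols described above.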

Comprehensive regulatory frameworks are needed

Explanation

Governments and international bodies need to develop comprehensive regulatory frameworks that treat algorithmic discrimination as a fundamental human rights issue. These frameworks should include clear legal standards for algorithmic accountability and mechanisms for individuals to challenge algorithmic decisions.

Major Discussion Point

Addressing Algorithmic Bias and Promoting Responsible AI

Agreed with

Paola Galvez

Agreed on

Need for comprehensive regulatory frameworks

Ongoing community engagement is essential

Explanation

Continuous dialogue with communities most likely to be impacted by algorithmic systems is crucial. This involves creating participatory design processes across the AI lifecycle and establishing relevant feedback mechanisms.

Evidence

Companies that make concerted efforts to engage with affected communities have seen better outcomes.

Major Discussion Point

Addressing Algorithmic Bias and Promoting Responsible AI

Agreed with

Paola Galvez

Agreed on

Importance of public participation and awareness

Public awareness and understanding of AI systems are lacking

Explanation

There is a general lack of public awareness and understanding about how AI systems work. This lack of knowledge makes it difficult for users to critically evaluate these systems and be proactive in identifying issues.

Evidence

Recent suicides as a result of chatbot use were mentioned, highlighting the dangers of anthropomorphizing AI systems.

Major Discussion Point

Education and Public Awareness on AI

Education is key to creating more engaged and critical AI users

Explanation

Educating users about AI systems is crucial for creating a more engaged and critical user base. This education can lead to more demand for responsible AI practices from companies and policymakers.

Evidence

The speaker’s experience teaching AI ethics and policy to computer scientists and engineers, who become more engaged and want to bridge the gap between technology and policy after learning about these issues.

Major Discussion Point

Education and Public Awareness on AI

Abeer Alsumait

Speech speed: 134 words per minute

Speech length: 1054 words

Speech time: 471 seconds

AI systems have demonstrated bias against marginalized groups in various domains

Explanation

AI systems have shown biases that disproportionately affect marginalized groups. These biases have been observed in various sectors including welfare, healthcare, and criminal justice.

Evidence

Examples include a French welfare agency’s algorithm that disproportionately flagged single parents and individuals with disabilities as high risk for fraud, and a healthcare algorithm that underestimated the healthcare needs of black patients compared to white patients.

Major Discussion Point

Algorithmic Bias and Its Impact

Agreed with

Monica Lopez

Paola Galvez

Agreed on

Algorithmic bias perpetuates inequalities

Pausing AI development is not a viable option

Explanation

Despite the challenges and risks associated with AI, pausing its development is not considered a viable solution. The focus should be on addressing issues and improving the technology rather than halting progress.

Major Discussion Point

Balancing AI Progress and Responsible Development

Differed with

Paola Galvez

Differed on

Pausing AI development

Focus should be on making AI more explainable and bridging knowledge gaps

Explanation

Instead of pausing AI development, efforts should be directed towards making AI systems more explainable and understandable. This involves bridging knowledge gaps between AI developers and users.

Major Discussion Point

Balancing AI Progress and Responsible Development

Paola Galvez

Speech speed: 152 words per minute

Speech length: 1513 words

Speech time: 597 seconds

AI can deepen inequalities if not developed responsibly

Explanation

While AI has the potential to be a catalyst for social change, it can also exacerbate existing inequalities if not developed and implemented responsibly. The output of AI systems depends on how we choose to develop this technology.

Major Discussion Point

Algorithmic Bias and Its Impact

Agreed with

Monica Lopez

Abeer Alsumait

Agreed on

Algorithmic bias perpetuates inequalities

Structured public participation is needed in AI policy development

Explanation

Governments need to develop a structured and meaningful public participation process when formulating national AI policies. This involves not only receiving comments from all stakeholders but also providing feedback on how these comments were considered.

Major Discussion Point

Government Role in Responsible AI Development

Agreed with

Monica Lopez

Agreed on

Importance of public participation and awareness

Investment in AI skills development is crucial

Explanation

There is a need for investment in AI skills development for the general public. Understanding AI is crucial for people to critically engage with these technologies and make informed decisions.

Major Discussion Point

Government Role in Responsible AI Development

Agreed with

Monica Lopez

Agreed on

Importance of public participation and awareness

Independent oversight of AI systems is necessary

Explanation

An independent agency or body should be established to oversee AI development and implementation. This oversight is crucial for ensuring responsible AI practices.

Evidence

The example of the Saudi Data and Artificial Intelligence Authority (SDAIA) was mentioned as a good practice.

Major Discussion Point

Government Role in Responsible AI Development

Countries need to develop national AI strategies

Explanation

It is crucial for countries to develop national AI strategies to guide the development and use of AI technologies. Without such strategies, the future of citizens in relation to AI remains uncertain.

Major Discussion Point

Balancing AI Progress and Responsible Development

Agreed with

Monica Lopez

Agreed on

Need for comprehensive regulatory frameworks

Yonah Welker

Speech speed: 125 words per minute

Speech length: 1479 words

Speech time: 706 seconds

AI has potential to support people with disabilities

Explanation

AI technologies have significant potential in supporting people with disabilities. Various assistive technologies powered by AI can help improve the lives of individuals with different types of disabilities.

Evidence

Examples include AI-augmented smart wheelchairs, walking sticks, geolocation tools, and technologies that support hearing impairment and cognitive accessibility.

Major Discussion Point

AI in Assistive Technologies

Challenges exist in developing inclusive AI for assistive technologies

Explanation

Developing inclusive AI for assistive technologies comes with various challenges. These include recognition errors for individuals with facial differences, exclusion by generative AI, and issues related to different machine learning techniques.

Evidence

Examples of challenges include facial recognition systems not properly identifying individuals with facial differences or asymmetry, and errors in AI interfaces that some individuals cannot understand, hear, or see.

Major Discussion Point

AI in Assistive Technologies

Legal frameworks need to complement assistive technologies

Explanation

Legal frameworks should be developed to complement and support the use of AI in assistive technologies. These frameworks need to consider various aspects including accessibility, safety, and potential misuse scenarios.

Major Discussion Point

AI in Assistive Technologies

Collaborative efforts are needed to address AI challenges

Explanation

Addressing the challenges in AI development and implementation requires collaborative efforts from all stakeholders. This includes the use of tools like testbeds, regulatory sandboxes, and policy frameworks.

Evidence

The speaker mentioned the existence of tools like testbeds and regulatory sandboxes where emerging companies and startups can come to fix issues related to AI systems.

Major Discussion Point

Balancing AI Progress and Responsible Development

Audience

Speech speed: 153 words per minute

Speech length: 905 words

Speech time: 353 seconds

Specialized understanding is needed even for professionals like judges

Explanation

Even specialized professionals like judges and lawyers require detailed conversations and explanations to understand concepts like bias in AI systems. This highlights the complexity of AI technologies and the challenges in making them easily explainable for societal use.

Evidence

The speaker’s experience working with judges and lawyers in New Delhi, India, to help them understand AI bias.

Major Discussion Point

Education and Public Awareness on AI

Agreements

Agreement Points

Algorithmic bias perpetuates inequalities

Monica Lopez

Abeer Alsumait

Paola Galvez

Algorithms reflect societal biases and perpetuate inequalities

AI systems have demonstrated bias against marginalized groups in various domains

AI can deepen inequalities if not developed responsibly

The speakers agree that AI systems and algorithms can reflect and amplify existing societal biases, potentially deepening inequalities if not developed and implemented responsibly.

Need for comprehensive regulatory frameworks

Monica Lopez

Paola Galvez

Comprehensive regulatory frameworks are needed

Countries need to develop national AI strategies

Both speakers emphasize the importance of developing comprehensive regulatory frameworks and national AI strategies to guide responsible AI development and implementation.

Importance of public participation and awareness

Monica Lopez

Paola Galvez

Ongoing community engagement is essential

Structured public participation is needed in AI policy development

Investment in AI skills development is crucial

The speakers agree on the need for public engagement, participation in AI policy development, and investment in AI skills development to create a more informed and engaged public.

Similar Viewpoints

All speakers agree that pausing AI development is not the solution. Instead, they advocate for continued development with a focus on making AI more explainable, addressing challenges collaboratively, and implementing national strategies and frameworks.

Monica Lopez

Abeer Alsumait

Paola Galvez

Yonah Welker

Pausing AI development is not a viable option

Focus should be on making AI more explainable and bridging knowledge gaps

Countries need to develop national AI strategies

Collaborative efforts are needed to address AI challenges

Unexpected Consensus

Importance of diversity in AI development

Monica Lopez

Yonah Welker

Diversity in AI development teams is crucial

Challenges exist in developing inclusive AI for assistive technologies

While coming from different perspectives (general AI development and assistive technologies), both speakers emphasize the importance of diversity and inclusivity in AI development, highlighting an unexpected area of consensus.

Overall Assessment

Summary

The speakers generally agree on the existence of algorithmic bias, the need for comprehensive regulatory frameworks, the importance of public participation and awareness, and the necessity of continued AI development with a focus on responsible practices.

Consensus level

There is a high level of consensus among the speakers on the main challenges and necessary actions for responsible AI development. This consensus suggests a shared understanding of the critical issues in AI governance and ethics, which could facilitate more coordinated efforts in addressing these challenges across different sectors and stakeholders.

Differences

Different Viewpoints

Pausing AI development

Abeer Alsumait

Paola Galvez

Pausing AI development is not a viable option

We cannot pause, because if some group decides to do it, then some others will continue; it would be like putting a blindfold over our own eyes. So we cannot do it.

While both speakers agree that pausing AI development is not feasible, they have slightly different reasons. Abeer Alsumait focuses on the need to improve the technology, while Paola Galvez emphasizes the risk of falling behind if some groups continue development.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were minimal, with speakers largely agreeing on the importance of responsible AI development, the need for regulatory frameworks, and the challenges of algorithmic bias. The primary differences were in emphasis and specific approaches rather than fundamental disagreements.

Difference level

The level of disagreement among the speakers was low. This general consensus implies a shared understanding of the challenges and potential solutions in AI development and governance, which could facilitate more unified approaches to addressing these issues.

Partial Agreements

All speakers agree on the need for regulatory frameworks and strategies for AI development, but they emphasize different aspects. Monica Lopez focuses on human rights and algorithmic accountability, Paola Galvez stresses the importance of national strategies, and Yonah Welker highlights the need for frameworks specific to assistive technologies.

Monica Lopez

Paola Galvez

Yonah Welker

Comprehensive regulatory frameworks are needed

Countries need to develop national AI strategies

Legal frameworks need to complement assistive technologies


Takeaways

Key Takeaways

Algorithmic bias is a significant issue that perpetuates and amplifies societal inequalities

A human-centered approach is crucial for developing responsible AI

Diversity in AI development teams is essential to mitigate bias

Governments play a critical role in regulating AI and ensuring responsible development

Public awareness of and education about AI systems are lacking but necessary

AI has potential benefits for assistive technologies but also poses challenges

Transparency and explainability of AI systems are crucial for accountability

Resolutions and Action Items

Implement comprehensive diversity across algorithmic development teams

Conduct rigorous algorithmic auditing and increase transparency

Establish proactive bias mitigation techniques

Develop comprehensive legal and regulatory frameworks for AI

Engage in ongoing community engagement, especially with marginalized groups

Invest in AI skills development and public education

Create independent oversight mechanisms for AI systems

Develop standardized documentation and metrics for AI audits

Unresolved Issues

How to effectively standardize AI audit documentation and metrics

How to balance rapid AI development with responsible implementation

How to make complex AI systems easily explainable to the general public

How to ensure AI policies are effectively implemented and enforced, not just created

Suggested Compromises

Instead of pausing AI development, focus on making technology more explainable and bridging knowledge gaps

Use existing regulatory frameworks and adapt them for AI governance

Develop smaller, more specific language models for regional needs instead of relying solely on large, generalized models

Thought Provoking Comments

Algorithms are not neutral tools, but they’re very powerful social mechanisms that either perpetuate or challenge existing power structures.

Speaker

Monica Lopez

Reason

This comment reframes algorithms from neutral technical tools to powerful shapers of society, highlighting their profound social impact.

Impact

Set the tone for discussing the ethical implications and societal effects of AI throughout the conversation.

AI holds immense potential as a technology if we use it wisely. AI systems can break down language barriers.

Speaker

Paola Galvez

Reason

Provides a balanced perspective by highlighting AI’s positive potential alongside its risks.

Impact

Shifted the discussion to consider both opportunities and challenges of AI, leading to more nuanced analysis.

Working on disability-centric AI is not just about words, it’s an extremely complex process of building environments where we have a multimodal and multi-sensory approach.

Speaker

Yonah Welker

Reason

Highlights the complexity of developing truly inclusive AI systems, especially for those with disabilities.

Impact

Deepened the conversation around inclusivity in AI, prompting discussion of specific challenges and approaches.

Governments play a very important role. While every actor and every player is really important in discussions and conversations, governments have critical roles in regulating and establishing responsibility and advancing the way forward for AI adoption in an equitable and fair way.

Speaker

Abeer Alsumait

Reason

Emphasizes the crucial role of government regulation in ensuring responsible AI development.

Impact

Shifted focus to policy and regulatory aspects of AI governance.

We simply do not have enough public awareness and understanding. Because I think if we actually did have more of that, there would be more of a demand.

Speaker

Monica Lopez

Reason

Identifies lack of public understanding as a key barrier to effective AI governance.

Impact

Prompted discussion on the importance of AI literacy and education for the general public.

Overall Assessment

These key comments shaped the discussion by highlighting the complex societal impacts of AI, the need for balanced consideration of risks and opportunities, the importance of inclusivity, the role of government regulation, and the critical need for public education on AI. The conversation evolved from identifying problems to exploring multifaceted solutions involving various stakeholders, emphasizing a holistic approach to responsible AI development and governance.

Follow-up Questions

How to standardize the type of documentation, metrics, and benchmarks for AI bias audits?

Speaker

Dr. Monica Lopez

Explanation

Standardization is crucial for effective bias audits across the industry, but there’s currently a lack of consensus on how these audits should be conducted and documented.
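One way to picture what standardized documentation for bias audits could look like is a machine-readable record with agreed fields for the system, the evaluation data, the disaggregated metrics, and the benchmark used. The sketch below is a hypothetical illustration of such a record; every field name and value is an assumption made for the sake of the example, not an existing standard.

```python
# Minimal sketch of a standardized bias-audit record (illustrative only).
# All field names, values, and the record layout are hypothetical.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class GroupMetric:
    group: str
    selection_rate: float
    false_positive_rate: float
    sample_size: int

@dataclass
class BiasAuditRecord:
    system_name: str
    system_version: str
    intended_use: str
    evaluation_dataset: str
    metrics: list            # list of GroupMetric entries
    benchmark_reference: str
    auditor: str
    audit_date: str
    findings: list = field(default_factory=list)

record = BiasAuditRecord(
    system_name="eligibility-screener",            # hypothetical system
    system_version="2.3.1",
    intended_use="pre-screening of benefit applications",
    evaluation_dataset="held-out 2023 applications, n=12000",
    metrics=[
        GroupMetric("group A", 0.41, 0.08, 9000),
        GroupMetric("group B", 0.29, 0.15, 3000),
    ],
    benchmark_reference="internal fairness benchmark v1 (hypothetical)",
    auditor="independent review board",
    audit_date="2024-12-17",
    findings=["false-positive rate is roughly twice as high for the smaller group"],
)

# Serializing to JSON yields a machine-readable artifact that different
# organizations could exchange and compare.
print(json.dumps(asdict(record), indent=2))
```

Serializing to a common schema is only one design choice; an actual standard would also need agreed metric definitions, benchmark datasets, and review procedures, which is precisely the open question raised here.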

How can we improve AI literacy and adoption among government workers and governors who will implement AI policies?

Speaker

Yonah Welker

Explanation

Improving AI literacy among policymakers is essential for effective implementation and enforcement of AI regulations.

How can we ensure that AI policies and regulations are not just paperwork but are actually enforced and implemented?

Speaker

Audience member (Aaron Promise Mbah)

Explanation

Many countries adopt AI policies, but there’s often a gap between policy creation and actual implementation, which needs to be addressed.

How can we address the issue of AI systems (like music recommendation algorithms) potentially exacerbating mental health issues?

Speaker

Audience member (Aaron Promise Mbah)

Explanation

This raises concerns about the unintended consequences of AI systems on vulnerable individuals and the need for safeguards.

Is there merit to pushing for a pause in the development of certain AI technologies, particularly those with high potential for harm like deepfakes?

Speaker

Audience member (unnamed)

Explanation

This question addresses the ethical dilemma of whether to slow down AI development in potentially harmful areas to allow for better safeguards and regulations.

How can we make AI technology more readily explainable for societal use?

Speaker

Audience member (unnamed)

Explanation

The complexity of AI systems makes it difficult for the general public to understand and critically evaluate them, which is crucial for responsible AI adoption.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.