Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235
Table of contents
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Knowledge Graph of Debate
Session report
Full session report
Christian von Essen
The implementation of AI language understanding has yielded promising results in reducing inappropriate sexual content in search results. In 2022, Google reported a 30% reduction in unwanted sexual results compared with the previous year, achieved through AI language understanding, and a similar improvement followed in the subsequent year, with ongoing work to reduce harmful content further.
Addressing bias in AI is a crucial aspect of promoting equality, and specific measures have been taken to ensure that training data covers protected minority groups. To counteract bias, corrective training data now includes queries referencing groups such as "Caucasian girls," "Asian girls," and "Irish girls." Because many problematic query patterns recur across groups (for example, "black girl videos" and "white girl videos"), those patterns are used to expand training examples automatically from one group to another, reducing bias both in the AI and in the human raters who produce the training data.
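A minimal sketch of how such cross-group expansion might work is shown below. The group terms, query templates, and labels are hypothetical and invented purely for illustration; they are not Google's actual data or pipeline.

```python
# Hypothetical sketch of expanding corrective training data across group terms.
# GROUP_TERMS and PATTERN_TEMPLATES are invented examples, not real data.

GROUP_TERMS = ["black", "white", "asian", "irish", "latina"]

# Each template is a (query pattern, label) pair; the label records whether the
# query, on its own, should be treated as seeking explicit content.
PATTERN_TEMPLATES = [
    ("{group} girl videos", "no_explicit_intent"),
    ("{group} woman dress", "no_explicit_intent"),
]

def expand_training_data(templates, groups):
    """Instantiate every template once per group so each group is covered evenly."""
    examples = []
    for template, label in templates:
        for group in groups:
            examples.append({"query": template.format(group=group), "label": label})
    return examples

if __name__ == "__main__":
    for example in expand_training_data(PATTERN_TEMPLATES, GROUP_TERMS):
        print(example)
```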
Success in mitigating bias is measured by comparing the performance of classifiers across different data slices, including LGBTQ-, gender-, and race-related queries. The goal is that, given the same ground-truth label, the probability of predicting inappropriate content is the same for every slice, so that no group is disproportionately affected.
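The shape of that slice comparison can be illustrated with a short sketch. The record schema, slice names, and example data below are hypothetical; the point is only to show the check: the rate of "explicit" predictions is computed per slice and per ground-truth label and compared with the overall rate.

```python
# Hypothetical per-slice fairness check: for each slice and ground-truth label,
# compute how often the classifier flags content as explicit and compare that
# rate with the overall ("ALL") rate. The records here are toy examples.
from collections import defaultdict

def per_slice_flag_rates(records):
    """records: dicts with 'slice', 'label', and 'prediction' keys."""
    counts = defaultdict(lambda: {"n": 0, "flagged": 0})
    for rec in records:
        for slice_name in ("ALL", rec["slice"]):
            bucket = counts[(slice_name, rec["label"])]
            bucket["n"] += 1
            bucket["flagged"] += int(rec["prediction"] == "explicit")
    return {key: bucket["flagged"] / bucket["n"] for key, bucket in counts.items()}

evaluation_records = [
    {"slice": "lgbtq",  "label": "benign",   "prediction": "explicit"},
    {"slice": "lgbtq",  "label": "benign",   "prediction": "benign"},
    {"slice": "gender", "label": "benign",   "prediction": "benign"},
    {"slice": "race",   "label": "explicit", "prediction": "explicit"},
]

# A large gap between a slice's rate and the ALL rate for the same label would
# signal the kind of bias that corrective training data is meant to close.
for (slice_name, label), rate in sorted(per_slice_flag_rates(evaluation_records).items()):
    print(f"{slice_name:>6} | {label:<8} -> flag rate {rate:.2f}")
```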
The inclusion of corrective training data, together with additional methods, has led to significant improvements in quality parity across data slices compared with baseline models, and adding further methods and data produces additional gains.
Counterfactual fairness in AI involves making sure that the outcome of a classifier doesn’t significantly change when certain terms related to marginalized minority groups are modified. For example, if a search query includes the term “black woman video,” the classifier should predict a similar outcome if the term is replaced with “black man video” or “white woman video.” This approach ensures fairness across all user groups, regardless of their background or identity.
Ablation, also part of counterfactual fairness, focuses on maintaining consistent behavior when specific terms are removed from a query: for example, the classifier's output should not change significantly between "black woman dress" and simply "woman dress." This helps ensure fairness by reducing the impact of individual identity terms or keywords.
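A rough sketch of such a check, combining term substitution and ablation, might look as follows. The identity-term list and the classify function are stand-ins invented for the example; a real system would use its production classifier and curated term lists.

```python
# Hypothetical counterfactual-fairness check: swap identity terms for one
# another, or drop them entirely (ablation), and verify that the classifier's
# score barely moves. `classify` is a toy stand-in, not a real model.

IDENTITY_TERMS = {"black", "white", "asian", "woman", "man"}

def classify(query: str) -> float:
    """Toy stand-in returning a pseudo-probability that the query seeks explicit content."""
    return 0.9 if "explicit" in query else 0.1

def counterfactual_variants(query: str):
    """Yield the query with each identity term swapped for another, or removed."""
    tokens = query.split()
    for i, token in enumerate(tokens):
        if token in IDENTITY_TERMS:
            for replacement in IDENTITY_TERMS - {token}:
                yield " ".join(tokens[:i] + [replacement] + tokens[i + 1:])  # swap
            yield " ".join(tokens[:i] + tokens[i + 1:])                      # ablation

def max_score_gap(query: str) -> float:
    """Largest change in classifier score over all counterfactual variants."""
    base = classify(query)
    return max((abs(classify(v) - base) for v in counterfactual_variants(query)), default=0.0)

# A gap above some tolerance (say 0.05) would flag the query pattern for
# additional corrective training data or model adjustments.
print(max_score_gap("black woman video"))
```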
Fairness in AI systems should not be limited to gender and race-related terms. The behavior of classifiers and systems should remain consistent across all data slices, including categories such as LGBTQ queries. This comprehensive approach ensures fairness for all users, irrespective of their identities or preferences.
Counterfactual fairness is considered a necessary initial step in augmenting training data and creating fair classifiers. By ensuring that classifiers’ predictions remain consistent across different query modifications or term replacements related to marginalized minority groups, AI systems can strive for fairness and inclusivity.
While the initial focus in developing language models like BERT was on creating credible and useful models, efforts to address bias and fine-tune these models were incorporated later, once their credibility and usefulness had been established.
As AI models continue to grow in size, selecting appropriate training data becomes increasingly challenging. This recognition highlights the need for meticulous data selection and representation to ensure the accuracy and fairness of AI systems.
Ensuring the representativeness of training data is seen as a priority before fine-tuning the models. By incorporating representative data from diverse sources and groups, AI systems can better account for the various perspectives and experiences of users.
The distinction between fine-tuning and the initial training step is becoming more blurred, making it difficult to identify where one ends and the other begins. This intermingling of steps in the training process further emphasizes the complexity and nuances involved in effectively training AI models.
In conclusion, the use of AI language understanding has made significant progress in reducing inappropriate sexual content in search results. Efforts to address bias and promote equality through the inclusion of training data for protected minority groups, comparing classifier performance across different data slices, and ensuring counterfactual fairness have proven successful. However, it is essential to extend fairness beyond gender and race to encompass other categories such as LGBTQ queries. The ongoing efforts to improve the credibility, bias correction, and selection of training data highlight the commitment to creating fair and inclusive AI systems.
Emma Gibson – audience
The Equal Rights Trust has recently launched a set of equality by design principles, which has received support from Emma Gibson. Emma, a strong advocate for gender equality and reduced inequalities, believes in the importance of incorporating these principles at all stages of digital technology development. Her endorsement highlights the significance of considering inclusivity and fairness during the design and implementation of digital systems.
Emma also emphasizes the need for independent audits to prevent digital systems from perpetuating existing biases and discriminatory practices, and to ensure they instead promote fairness and justice. Conducting regular audits allows any biases or discriminatory patterns within these digital systems to be identified and addressed effectively.
The alignment between these principles and audits with the Sustainable Development Goals (SDGs) further reinforces their importance. Specifically, they contribute to SDG 5 on Gender Equality, SDG 10 on Reduced Inequalities, and SDG 16 on Peace, Justice, and Strong Institutions. By integrating these principles and performing regular audits, we can strive towards bridging the digital divide, reducing inequalities, and fostering a more inclusive and just society.
In conclusion, the equality by design principles introduced by the Equal Rights Trust, with support from Emma Gibson, offer valuable guidance for digital technology development. Emma’s advocacy for independent audits underscores the necessity of bias-free systems. By embracing these principles and conducting regular audits, we can work towards creating a more inclusive, equal, and just digital landscape.
Audience
The discussions surrounding gender inclusivity in AI highlight several concerns. One prominent issue is the presence of biased outputs, which are often identified after the fact and require corrections or fine-tuning. This reactive approach implies that more proactive measures are needed to address these biases. Furthermore, the training data used for AI might perpetuate gender gaps, as there is a lack of transparency regarding the percentage of women-authored data used. This opacity poses a challenge in accurately assessing the gender inclusivity of AI models.
Another factor contributing to gender gaps in AI is the digital divide between the Global North and the Global South. It has been observed that most online users in the Global South are male, which suggests a lack of diverse representation in the training data. This further widens the gender gap within AI systems.
To promote gender inclusivity, there is a growing consensus that greater diversity in training data is necessary. While post-output fine-tuning is important, it is equally essential to ensure the diversity of inputs. This can be achieved by using more representative training data that includes contributions from a wide range of demographics.
There are also concerns about the interaction between AI and gender inclusivity, particularly with regards to surveillance. The use of AI in surveillance systems raises questions about privacy, biases, and potential infringements on individuals’ rights. This highlights the need for careful consideration of the impact of AI systems on gender equality, as they can unintentionally reinforce existing power dynamics.
In terms of governance, there is a debate about the value of non-binding principles in regulating AI. Many international processes have attempted to set out guidelines for AI governance, but few are binding, and the inconsistency and overlap among these initiatives raise doubts about the effectiveness of non-binding principles.
On the other hand, there is growing support for the implementation of independent audit mechanisms to assess AI outcomes. An independent audit would allow for the examination of actions taken by companies like Google to determine whether they are producing the desired outcomes. This mechanism would provide a more objective assessment of the impact of AI and help hold companies accountable.
Investing in developing independent audit mechanisms for AI is seen as a more beneficial approach than engaging in non-binding conversations or relying solely on voluntary principles. This suggests that tangible actions and oversight are needed to ensure that AI systems operate in an ethical and inclusive manner.
The representation of women in the tech field remains extremely low. Factors such as language barriers and a lack of representation in visual search results contribute to this underrepresentation. To address this, there needs to be a greater focus on upskilling, reskilling, and the introduction of the female voice in AI. This includes encouraging more girls to pursue technology-related studies and creating opportunities for women to engage with AI-based technologies.
Overall, while there are challenges and concerns surrounding gender inclusivity in AI, there is also recognition of the positive vision and opportunities that AI adoption can provide for female workers. By addressing these issues and actively working towards gender equality, AI has the potential to become a powerful tool for promoting a more inclusive and diverse society.
Emma Higham
Google is leveraging the power of Artificial Intelligence (AI) to enhance the safety and inclusivity of their search system. Emma Higham, a product manager at Google, works closely with the SafeSearch engineering team to achieve this goal. By employing AI technology, they can test and refine their systems, ensuring a safer and more inclusive user experience.
Google’s mission is to organize the world’s information and make it universally accessible and useful. Emma Higham highlights this commitment, emphasizing Google’s dedication to making information available to all. AI technology plays a vital role in this mission, enabling efficient pattern matching at scale and helping to address inclusion issues.
Google’s approach prioritizes providing search results that do not shock or offend users with explicit or graphic content unrelated to their search. Emma Higham mentions that this principle is one of their guidelines, reflecting Google’s commitment to user safety and a positive search experience.
Guidelines are crucial for assessing search result quality and improving user satisfaction. Google has comprehensive guidelines for raters, aiming to enhance search result quality. These guidelines include the principle of avoiding shocking or offending users with unsought explicit content. Adhering to these guidelines ensures search results that meet user needs and expectations.
Addressing biases in AI systems is another important aspect for Google. Emma Higham acknowledges that AI algorithms can reflect biases present in training data. To promote fairness, Google systematically tests the fairness of their AI systems across diverse user groups. This commitment to accountability ensures equitable search results and user experiences for everyone.
Google actively collaborates with NGOs worldwide to enhance safety and handle crisis situations effectively. Its powerful AI system, MUM, enables more efficient handling of personal crisis searches. Operating across 75 locales and supported by partnerships with NGOs, MUM helps Google improve user safety on a global scale.
In the development process of AI technology, Google follows a cyclical approach. It involves creating the technology initially, followed by fine-tuning and continuous improvement. If the technology does not meet the desired standards, it goes back to the first step, allowing Google to iterate and refine their AI systems.
Safety and inclusivity are essential considerations in the design of AI technology. Emma Higham emphasizes the importance of proactive design to ensure new technologies are developed with safety and inclusivity in mind. By incorporating these principles from the beginning, Google aims to create products that are accessible to all users.
AI has also made significant strides in language and concept understanding. Emma Higham highlights improvements in Google Translate, where AI technology has enhanced gender inclusion by allowing users to set their preferred gendered form rather than relying on default assumptions about a user’s gender, promoting inclusivity in language translation.
User feedback is paramount in improving systems and meeting high standards. Emma Higham provides an example of how user feedback led to improvements in the Google Search engine during the Women’s World Cup. Holding themselves accountable to user feedback drives Google to deliver better services and ensure their products consistently meet user expectations.
In conclusion, Google’s use of AI technology is instrumental in creating a safer and more inclusive search system. Through collaboration with the SafeSearch engineering team, Google continuously tests and improves its systems. Guided by the mission to organize the world’s information and make it universally accessible, Google uses AI for pattern matching at scale to understand the intent behind queries. Its commitment to avoiding unsought explicit content, addressing biases, and incorporating user feedback strengthens its efforts towards a safer and more inclusive search experience. Additionally, its partnerships with NGOs and the development of MUM showcase a dedication to improving safety and handling crisis situations effectively. By embracing proactive design and incorporating user preferences, AI technology expands inclusivity in products such as Google Translate.
Bobina Zulfa
A recent analysis of different viewpoints on AI technologies has revealed several key themes. One prominent concern raised by some is the need to understand the concept of “benefit” in relation to different communities. The argument is that as AI technologies evolve and are adopted across various communities, it is vital to discern what “benefit” means for each community. This is crucial because technologies may produce unexpected outcomes and may potentially harm rather than help in certain instances. This negative sentiment stems from the recognition that the impact of AI technologies is not uniform and cannot be assumed to be universally advantageous.
On the other hand, there is a call to promote emancipatory and liberatory AI, which is seen as a positive development. The proponents of this argument are interested in moving towards greater agency, freedom, non-discrimination, and equality in AI technologies. The emphasis is on AI technologies being relevant to communities’ needs and realities, ensuring that they support the ideals of non-discrimination and equality. This perspective acknowledges the importance of considering the socio-cultural context in which AI technologies are deployed and the need to design and implement them in a way that reflects the values and goals of diverse communities.
Another critical view that emerged from the analysis is the need to move away from techno-chauvinism and solutionism. Techno-chauvinism refers to the belief that any and every technology is inherently good, while techno-solutionism often overlooks the potential negative impacts of technologies. The argument against these views is that it is crucial to recognize that not all technologies are beneficial for everyone and that some technologies may not be relevant to communities’ needs. It is essential to critically evaluate the potential harms and benefits of AI technologies and avoid assuming their inherent goodness.
The analysis also highlighted concerns regarding data cleaning work and labour. It is important to acknowledge and support the people who perform this cleaning work, as their labour has implications for their quality of life. This perspective aligns with the goal of SDG 8: Decent Work and Economic Growth, which emphasizes promoting decent work conditions and ensuring fair treatment of workers involved in data cleaning processes.
Furthermore, the analysis identified issues with consent in Femtech apps. Femtech refers to technology aimed at improving women’s health and well-being. The concerns raised encompass confusing terms and conditions and possible data sharing with third parties. The lack of meaningful consent regimes in Femtech apps can have significant implications for gender inequality. This observation underscores the need for robust privacy measures and clear and transparent consent processes in Femtech applications.
The analysis also noted the importance of considering potential issues and impacts of AI technologies from the early stages of development. Taking a proactive approach, rather than a reactive one, can help address and mitigate any potential negative consequences. By anticipating and addressing these issues, the development and implementation of AI technologies can be more socially responsible and in line with the ideals of sustainable development.
Skepticism was expressed towards the idea of using small data sets to detect bias. The argument is that limited data sets may not represent a significant portion of the global majority. If the data used in AI algorithms is not representative, it could lead to biased outcomes in the end products. This skepticism highlights the need to ensure diverse and inclusive data sets that reflect the diversity of communities and avoid reinforcing existing biases.
Finally, the analysis highlighted initiatives such as OECD’s principles that could help address the potential issues surrounding AI technologies. These principles stimulate critical thinking about the potential social, economic, and ethical impacts of AI technologies from the outset. Several organizations are actively promoting these principles, indicating a positive and proactive approach towards ensuring responsible and trustworthy AI development and deployment.
In conclusion, the analysis of different viewpoints on AI technologies revealed a range of concerns and perspectives. It is important to understand the notion of benefit for different communities and recognize that technologies may have unintended harmful consequences. However, there is also a call for the promotion of emancipatory and liberatory AI that is relevant to communities’ needs and supports non-discrimination and equality. Critical views on techno-chauvinism and solutionism emphasized the need to move away from assuming the inherent goodness of all technologies. Additional concerns included issues with data cleaning work and labour, consent in Femtech apps, potential issues and impacts from the start of AI technology development, skepticism towards using small data sets to detect bias, and the importance of initiatives like the OECD’s principles. This analysis provides valuable insights into the complex landscape of AI technologies and highlights the need for responsible and ethical decision-making throughout their development and deployment.
Jim Prendergast
Dr. Luciana Bonatti, a representative from the National University of Cordoba in Argentina, was unable to present due to an outbreak of wildfires in the area. The severity of the situation forced her and her family to evacuate their home, resulting in her unavoidable absence.
The wildfires that plagued the region prompted Dr. Bonatti’s evacuation, highlighting the immediate danger posed by the natural disaster. The outbreak is a significant concern not only for Dr. Bonatti but for the affected community as a whole.
Jim Prendergast, a fellow session participant, demonstrated empathy and solidarity towards Dr. Bonatti during this challenging time. Acknowledging her circumstances, he expressed sympathy and conveyed his well wishes, hoping for a positive resolution for Dr. Bonatti and her family. His positive sentiment demonstrates support and concern for her well-being.
It is worth noting the related Sustainable Development Goals (SDGs). The wildfire outbreak in Argentina aligns with SDG 13: Climate Action, as efforts are necessary to address and mitigate the impacts of climate change-induced disasters like wildfires. Additionally, the mention of SDG 3: Good Health and Well-being and SDG 11: Sustainable Cities and Communities in relation to Jim Prendergast’s remarks signals the broader implications of the situation for public health and urban resilience.
In conclusion, Dr. Luciana Bonatti’s absence from the presentation was a result of the wildfire outbreak in Argentina, which compelled her and her family to evacuate. This unfortunate circumstance received empathetic support from Jim Prendergast, who expressed sympathy and wished for a positive outcome. The summary highlights the implications of the natural disaster in the context of climate action and sustainable development goals.
Lucia Russo
The Organisation for Economic Cooperation and Development (OECD) has developed a set of principles aimed at guiding responsible and innovative artificial intelligence (AI) development. These principles promote gender equality and are based on human-centered values and fairness, with a focus on inclusive growth and sustainable development. Currently, 46 countries have adhered to these principles.
To implement these principles, countries have taken various policy initiatives. For example, the United States has established a program to improve data quality for AI and increase the representation of underrepresented communities in the AI industry. Similarly, the Alan Turing Institute in the United Kingdom has launched a program to increase women’s participation in AI and examine gender gaps in AI design. The Netherlands and Finland have also worked on developing guidelines for non-discriminatory AI systems in the public sector. These policy efforts demonstrate a commitment to aligning national strategies with the OECD AI principles.
The OECD AI Policy Observatory serves as a platform for sharing tools and resources related to reliable AI. This platform allows organizations worldwide to submit their AI tools for use by others. It also includes a searchable database of tools aimed at various objectives, including reducing bias and discrimination. By facilitating the sharing of best practices and tools, the Observatory promotes the development of AI in line with the OECD principles.
In addition to the policy-focused initiatives, the OECD has published papers on generative AI and big trends in AI analysis. These papers provide analysis on AI models, their evolution, policy implications, safety measures, and the G7 Hiroshima process involving generative AI. While the OECD focuses on analyzing major trends in AI, it is not primarily focused on providing specific tools or resources.
There is an acknowledgement of the need for more alignment and coordination in the field of AI regulation. Efforts are being made to bring stakeholders together and promote coordination. For instance, the United Kingdom is promoting a safety summit to address AI risks, and the United Nations is advancing work in this area. The existence of ongoing discussions and developments demonstrates that the approach to AI regulation is still in the experimental phase.
The representation of women in the AI industry is a significant concern. Statistics show a low representation of women in the industry, with more than twice as many young men as women capable of programming in OECD countries. Only 1 in 4 researchers publishing on AI worldwide are women, and female professionals with AI skills represent less than 2% of workers in most countries. To address this issue, policies encouraging women’s involvement in science, technology, engineering, and mathematics (STEM) fields are important. Role models, early exposure to coding, and scholarships are mentioned as ways to increase women’s participation in these areas.
Furthermore, there is a need to promote and invest in the development of large language models in languages other than English. This would contribute to achieving Sustainable Development Goals related to industry, innovation, infrastructure, and reduced inequalities.
Overall, the OECD’s principles and initiatives provide a framework for responsible and inclusive AI development. However, there is a need for greater coordination, alignment, and regulation in the field. Efforts to increase women’s representation in the AI industry and promote diversity in language models are essential for a more equitable and sustainable AI ecosystem.
Jenna Manhau Fung
The analysis of the speeches reveals several significant findings. Firstly, it highlights the argument that AI can help eliminate unintentional human bias and bring greater impartiality. This is seen as valuable because it supports fairer decision-making and reduces discrimination that may arise from human biases. Leveraging AI technology can enable organizations to improve their practices and achieve greater objectivity.
Another important point emphasized in the analysis is the significance of involving users and technical experts in the policymaking process, particularly in relation to complex technologies like AI. By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leading to the creation of more comprehensive and effective policies. This ensures that policies address the diverse needs and concerns of different stakeholders and promote equality and inclusivity.
Moreover, the analysis underscores the importance of international standards in the context of AI and related industries. International standards can assist countries in modernizing their legal frameworks and guiding industries in a way that aligns with ethical considerations and societal needs. These standards promote consistency and harmonization across different regions and facilitate the adoption of AI technologies in an accountable and inclusive manner.
In addition to these main points, the analysis highlights the need for an inclusion mechanism for small-scale writers. It argues that such a mechanism is essential to address situations where the content of these writers does not appear in search engine results due to certain policies. This observation is supported by a personal experience shared by one of the speakers, who explained that her newsletter did not appear in Google search results because of existing policies. Creating an inclusion mechanism would ensure fair visibility and opportunities for small-scale writers, promoting diversity and reducing inequality in the digital domain.
Overall, the analysis emphasizes the transformative potential of AI in eliminating biases and promoting neutrality. It underscores the importance of involving users and technical experts in policymaking, the significance of international standards, and the need for an inclusion mechanism for small-scale writers. These insights reflect the importance of considering diverse perspectives, fostering inclusivity, and striving for fairness and equality in the development and implementation of AI technologies.
Moderator – Charles Bradley
Charles Bradley is hosting a session that aims to explore the potential of artificial intelligence (AI) in promoting gender inclusivity. The session features a panel of experienced speakers who will challenge existing beliefs and encourage participants to adopt new perspectives. This indicates a positive sentiment towards leveraging AI as a tool for good.
Bradley encourages the panelists to engage with each other’s presentations and find connections between their work. By fostering collaboration, he believes that the session can achieve something interesting. This highlights the importance of collaborative efforts in advancing gender inclusivity through AI. The related sustainable development goals (SDGs) identified for this topic are SDG 5: Gender Equality and SDG 17: Partnerships for the Goals.
Specific mention is made of Jenna Manhau Fung’s experiences in youth engagement in AI and policy-making, as well as her expertise in dealing with Google’s search policies. This recognition indicates neutral sentiment towards the acknowledgement of Fung’s insights and experiences. The related SDGs for this discussion are SDG 4: Quality Education and SDG 9: Industry, Innovation and Infrastructure.
Furthermore, Bradley invites audience members to contribute to the discussion and asks for questions, fostering an open dialogue. This reflects a positive sentiment towards creating an interactive and engaging session.
Another topic of interest for Bradley is Google’s approach to counterfactual fairness, which is met with a neutral sentiment. This indicates that Bradley is curious about Google’s methods of achieving fairness within AI systems. The related SDG for this topic is SDG 9: Industry, Innovation and Infrastructure.
The discussion on biases in AI systems highlights the need for trust and the measurement of bias. Google’s efforts in measuring and reducing biases are acknowledged, signaling neutral sentiment towards their work in this area. The related SDG for this topic is SDG 9: Industry, Innovation and Infrastructure.
Bradley believes that the work on principles will set the stage for upcoming regulation, indicating a positive sentiment towards the importance of establishing regulations for AI. The enforceable output of regulation is seen as more effective than principles alone. The related SDG for this topic is SDG 9: Industry, Innovation, and Infrastructure.
The session also explores the positive aspects of generative AI in the fields of coding and learning. It is suggested that generative AI can speed up the coding process and serve as a tool for individuals to learn coding quickly. This perspective is met with a positive sentiment and highlights the potential of AI in advancing coding and learning. The related SDGs for this topic are SDG 4: Quality Education and SDG 9: Industry, Innovation, and Infrastructure.
Moreover, Bradley emphasizes the importance of investing in AI training in languages other than English, implying a neutral sentiment towards the necessity of language diversity in AI. This recognizes the need to expand AI capabilities beyond the English language. The related SDG for this topic is SDG 9: Industry, Innovation, and Infrastructure.
Lastly, the role of role models in encouraging more young women to enter the fields of science and coding is discussed with a positive sentiment. Policies and actions to motivate women in science are emphasized, highlighting the importance of representation in these fields. The related SDGs for this topic are SDG 4: Quality Education and SDG 5: Gender Equality.
In conclusion, Charles Bradley’s session focuses on exploring the potential of AI in promoting gender inclusivity. The session aims to challenge existing beliefs, foster learning new perspectives, and encourage collaboration among panelists. It covers a range of topics, including youth engagement in AI, counterfactual fairness, measuring biases, guiding principles, generative AI in coding and learning, investing in language diversity, and the importance of role models. The session promotes open dialogue and aims to set the stage for future AI regulation.
Session transcript
Moderator – Charles Bradley:
Hi, everybody. This is the session after lunch, and we’re quite far away from lunch physically, so we’re just waiting for a few more people to walk into the room, so we’ll wait another minute Well, hi, everybody. My name’s Charles Bradley. I work at ADAPT, a tech policy and human rights consultancy based in London. I’m very excited to be here on the last day of this IGF. We’re in a very large room, and we would encourage people who are here to come to the table as we’ll try and ensure we have a good conversation in a bit. The more we can see your lovely faces, the more we can engage with you, and the more interesting this is going to be. I think this is my ninth IGF. It’s been a fun one, and there have been an inordinate number of discussions about AI, and this is another one. We have a great panel with lots of experience and a range of expertise on the topic that we’re going to talk about, and I’m going to try and make this as sort of focused and as practical as possible. There have been lots of conversations floating at all different levels of feet, and we really want to make sure that we leave this having learned something or having something that we believe being challenged. So that’s our sort of task today. So we actually leave the room with something new or thinking about something that we haven’t thought about in the same way before. The session is titled Leveraging AI to Support Gender Inclusivity, and there are obviously many, many routes that this could take. We really want to focus the session on leveraging AI as a tool for good. So how can AI actually be used to solve some of these problems? We’re going to sort of kick off, as nearly every session at the IGF does, with a round of sort of presentations and opening remarks from the panel. Rather than me go through a very long introduction of their names, organizations, which you will immediately forget, I will ask them to introduce themselves as they speak, and then we’re going to have plenty of time today for a discussion, both across the panelists and within the room. So I’d like us to leave the room knowing something new or having an existing sort of belief or something sort of being challenged. The other challenge I pose to you, which is unique, is that we actually engage with what people are saying in the room. So we’d like our speakers to think about what the other speakers have said and try to connect their work to their peers, and also for when we’re asking questions, to really engage with what’s already been said. I think that will really help us try and get to something interesting for today. So with that, I will pass to our first speakers, Christian and Emma from Google, who are joining us virtually. Christian and Emma, over to you.
Emma Higham:
Hi there, Charles. Can you see and hear me okay? Yes, we can. Fantastic. Thanks all so much for having us. My name is Emma Higham, and I’m here from Google, where I work with the SafeSearch engineering team as a product manager. I’m here with my colleague Christian von Essen, who’s a lead engineer on the team, and we want to talk about one of the ways that Google is using AI to make search safer, but also more inclusive. This sometimes poses unique challenges, which we can dive into in a second. But in general, we’re really excited about the technology and the way that it is actually enabling us to test our systems and provide a more inclusive system, a more inclusive experience in a way that we can validate and return back to users. Now, Christian, I’ll pass to you to introduce yourself and then we can kick off with a few slides. I’ll just get them up.
Christian von Essen:
Sure, but you did a good job introducing me already, I think. So, hi, my name is Christian. I work for Google as a tech lead and a manager. I’ve been doing this for close to 10 years now, and the kind of work that we’re going to present here has been one of the biggest breakthroughs that we had in the last 10 years that I’ve been doing this.
Emma Higham:
Awesome. Well, if you don’t mind guys we’ll just spend a few minutes sharing a few slides because I think this will make it more tangible and then we’re looking forward to the discussion. So, I’ll start by saying that you know everything we do at Google goes back to our mission, organizing the world’s mission, organizing the world’s information to make it universally accessible and useful. And one of the things about the world’s information is it’s a lot of information, and the information needs that we see are also at a huge scale. And they’re dynamic, people come to us with new kinds of questions every day. In fact, 15% of all searches are new, daily. That means that we need systems that are also dynamic. We have hundreds of billions of web pages, 15% of queries are new every day. The question that Christian and I really ask ourselves in our job is how can we do content moderation, how can we offer safe systems which we designed to be inclusive. And how can we do it at scale. We want to do that while still returning useful search results, ones that answer your questions. So it’s a dynamic challenge, and what we find is with these kind of scaled dynamic problems pattern matching is really helpful. And one thing that I found as I’ve deep dived on AI is AI is really pattern matching at scale. It’s using computers to do pattern matching in a way that we perhaps weren’t able to do before. It’s a way to understand patterns that help us do math, but also that help us understand sometimes inclusion problems. So, I’ll start by just kind of one of the fundamental principles that guide our work here, and then I’m going to pass to Christian to share some of the tangible ways that we have tried to improve on this approach. So the first thing I’ll share is that one of our principles in search is that we never want to shock or offend people with explicit or graphic content when that’s not what they’re looking for. You know, this is part of the fundamental thing of helping you find quality and relevant information. And people often ask us, how do your algorithms work? Like, how should we understand what you think of as quality? And something that kind of, I was really impressed by as I started working with the search teams, is they actually publish 100, and I think it’s 160 pages now, of guidelines to raters that we use to help us understand the quality of results. And it’s in these guidelines that you see this principle codified. The principle that we never want to shock or offend you with explicit or graphic content when it’s not what you’re looking for. And the way we do that is really by understanding the intent behind your query. Understanding the intent behind your query requires language understanding. Now, in the most sort of brute force way, this would be, you type in a query, I’m sitting here in Mountain View, California, you type in a query Mountain View, and we understand that Mountain View doesn’t actually just mean a view from the top of a mountain. It means a place, because we have an understanding that Mountain View refers to a place. And we know that because it matches a bunch of web documents about the place. What we’re seeing with natural language processing is that this is getting a lot smarter. Our ability to do pattern matching goes far beyond just understanding that Mountain View is a place. And that’s making us much more effective at understanding when you were seeking out something that may have been a little racy, versus when you had more innocent interpretation of the query. 
But many of you may be wondering, why was there ever a problem with encountering the shocking racy content in the first place? So I’m going to hand over to Christian to shed a bit more light on that.
Christian von Essen:
Thank you. So, in particular in the past but still nowadays to a large extent at the core search algorithms like what Google is really work by finding documents that have the same words that appear in your query. And so, these results really are a reflection of what the internet has to offer for these particular words. And for a query, say like amateur, the vast majority of these documents on the internet is pornographic, right? Amateur porn is a very popular thing. But amateur doesn’t really necessarily have porn intent, right? The user might be looking for something else and might be very surprised to be confronted with pornography. To counteract this effect, we have that requires special subsystems. And these subsystems always had also to focus on queries that touch on identity terms, right? So that they are not unevenly affected by shocking content. Can we move to the next slide? Yes. Thank you. In 2022, we shared that we reduced unnecessary sexual results by 30% in the previous year. And we used AI language understanding, natural language understanding to achieve this huge reduction. And we’ve seen a similar improvement in the following year. And we’re still working on reducing the bad content further. Now, how can we use AI language understanding like BERT to do so? Let’s go to the next slide. You might say that it’s as simple as just trying to classify to predict when sexual content is okay, right? Yeah, but as we all here know, that’s why we’re here, AI comes with its own challenges. In particular, an AI can suffer from biases that would limit the usefulness, right? If AI thinks amateur means porn, then it doesn’t help us. So how do we address the bias in AI? Can we move to the next slide? Thank you. We specifically include training data, in this case, for protected minority groups. For example, Caucasian girls, Asian girls, Irish girls. And as you can see here, many patterns that we see as problematic are the same across groups. Black girl videos, white girl videos, something like that. And then when generating this training data for protected minority groups, we make use of these patterns to expand from one group to another automatically. And we can exploit the same kind of patterns, not only to address issues in biases of AI, but also in the biases of human raters that generate this training data. Can we go to the next slide? Now, we have this wonderful approach, but does it actually work? To know that, we need to measure. To measure if they actually are successful in mitigating the biases, we see how our classifiers do as we compare across different slices. So are we, for example, as good or as bad as in a random other slice or in the whole slice? When we look at just queries touching on LGBTQ or touching on gender, touching on race. So a bit more formal, the probability of predicting this sort of porn could be the same for any slice of data, no matter what the slice is, given the same labels, given that it actually looks for porn or doesn’t. And compared to the baseline models that we had earlier or that we have without this corrective training data, we do see significant gains in equity, in being the same quality for in-slices and out-slices. And then as we added more methods and more data, we saw even further gains. And that’s this part. And then back to Emma.
Emma Higham:
Yes, I think this is really exciting to me because I think we often worry about, is the system working fairly for all user groups? Is the system working fairly and really representing the world in the way that it is fair to all user groups? What we’ve found here is that there’s a way to actually test that. And does that mean that every single system, when first naively built, is going to be fair? No, because it’s going to reflect biases and training data because it’s going to reflect the biases of people that may make it. That’s kind of true of any institution or system that we build. So that’s one way to hold our systems accountable. And what I’ve been really excited about with AI is both the power of the natural language processing that we’re seeing, the ability to understand users at scale across a wide range of locales, and understand the nuances of what they’re saying, while also holding that system accountable to making sure that it’s working fairly across all of these different groups. And I wanted to share that because what we’re also seeing is that similar to BERT, which is one form of natural language processing, we are also able to apply MUM, another very powerful system, to making our search results safer. So a critical example that’s really close to my heart is how we’ve applied MUM to improve personal crisis searches. We see queries like how to get help in search, queries, unfortunately, like, I want to kill myself. These are queries, which show the severity of a moment that a user is in. And they are not always written in naive terms, they’re not always written in a way that is easy for us to understand. With natural language processing, we’re able to translate the queries and say, this looks like a user may be in a moment of crisis, which makes us more able to return relevant results and return helpful resources. And, you know, for some of the severe queries I just mentioned, we really focus on partnering with NGOs around the world to provide helpful resources. And what we’re particularly excited about with MUM is that we’re able to be really effective across languages. There’s 75 locales where MUM is trained and operating highly effectively and that was the kind of power we were able to bring to the problem of personal crisis searches, leading to major improvements last year. So that’s it for today. We’re really excited to talk more about AI and how we’ve seen it work, not just be effective to the problem of being more inclusive across genders, but also to making systems safer at scale.
Moderator – Charles Bradley:
Thank you, Emma and Christian. I think it’s really useful to set us up with that. We needed to learn something new from today. I’ve got BERT, MUM, pattern matching, slices, lots of things that I have questions about. And I’m sure people want to dig into which we’ll get into in a bit, but that’s really, really sort of set the scene in very practical ways that this technology or technologies can be used for gender inclusivity. I’m going to come to Babina next from policy. So Babina, please introduce yourself and the floor is yours. Thank you.
Bobina Zulfa:
Sure. Good morning. Can you hear me? Yes, thanks. Perfect. Morning. It’s morning where I am. I understand it’s afternoon over there. A pleasure to be a part of this discussion. My name is Babina Zulfa and I’m a data digital rights researcher with Policy. So Policy is a feminist collective of researchers, academics, designers, etc. We work at the intersection of data, tech and society. So a lot of our work is socio-technical in a sense. So we are Pan-African and so a lot of our work just looks at how technologies are being adopted across the continent and how that is impacting communities in just different ways for the better or for worse. And we do that, especially through our research. We document that and come up with recommendations, particularly for government, but also now for other groups, civil society and technologists as well. I took this session over from Nima, who is our outgoing ED. She wasn’t able to be part of this, but it’s a pleasure to just be able to jump in and take this on. You did talk about tying in with the previous speakers and it’s interesting because I was thinking around, I guess I’ll jump into that in a bit. But I just did want to say that from the work we’ve been doing, we have a three-part report called Women in AI. So we’ve been looking at the intersection of gender and AI for the past maybe three years. And we’ve documented that and just looked at how these technologies are being used by African women who are in many ways, much less involved in terms of access, in terms of usage, meaningful usage, where there are limitations in terms of language, in terms of literacy, etc. But just recently, actually yesterday, we have a new handbook that just published. We’ve been doing the work with IDRC and this is sort of putting across draft principles to guide policymakers in thinking about how to govern these systems, but not just policymakers, civil society and technologists as well as they’re developing these systems. So just of the background, I think for my sharing today, I just wanted to point out that a lot of our work has been in a sense critical because we’re feminists and so we use the Afrofeminist lens to analyze this intersection that I’ve been talking about. And I’ll just start from a point of, I think something that for us we’ve been, especially with the last piece of work, the handbook that I’ve been talking about, is we’ve been broadly questioning the notion of as technologies are being developed and adopted across the continent. I’m noting this very much within our work, which is on the African continent, but I am. open to open this up, is that the notion of benefit, right, that these technologies are benefiting people in such and such a way. I think that’s a very broad term and our work has been working to, you know, sort of demystify that or just make that very clear what does benefit mean for different communities as maybe a model is being, there is satellite models we’re seeing that are being brought about to just maybe look at how much communities are getting electrified. What does that do for the communities as they’re maybe getting, you know, more surveilled and then losing their privacy. So we’ve just been working to understand that notion of benefit, what does benefit mean indeed. 
And so from that, we’ve been moving to a point of, you know, I think we did, we’ve seen that a lot of the research that’s being done around, you know, understanding ethics and responsibility when it comes to development and adoption of AI is the notion of safety and security. But I think we’re trying to move more to a place of emancipatory and liberatory AI. How do these technologies bring just more agency, more freedom, more non-discrimination, more equality for the people who these technologies are being, you know, created for or as governments are bringing them down onto the people for, you know, public benefit or private sector using them for whatever reasons. And so I’ll just say then that, you know, a number of things, I’ll just again, I think maybe quickly tie in with what Emma and Christian were sharing, which was something that I think I’d wanted to talk about a little, very interesting to hear about, for example, the MUM model and the crisis, you know, touches, that’s really, really interesting to hear about when you’re talking about the, you know, trying to shelter the users from the, say, explicit or graphic information. That’s something I think first we’ve been exploring on the other end, and just the, the, the, the, the broader trying to bring to question, how does that happen as you’re trying to clean up those data sets. So visibilizing of the workers who are behind doing that work. So I’ve been very interested in hearing that from both of you, Emma and Christian, because we’ve been talking so much about that, you know, in the broader, you know, data, just data justice and data exploitation conversation. Because we do know that these models, well, are, you know, of course, advancing greatly and are able to, in, you know, many ways, do sort of self cleaning. But there is again, you know, human labor that is doing that, that cleaning. And so what does that mean for the people that are doing that work? Is it what are the, you know, what’s their quality of life from doing that work? So that’s, that’s one of the things I just want to quickly tie that in with that with the, you know, bias and just trying to debias the systems. And then, just broadly, I think we’ve been looking at as our societies are increasingly data buying. And so part of that is, you know, intelligence systems are being taken up in different, you know, parts of our societies. We’ve been looking at, for example, femtech, which is, I think, something that’s becoming popular, especially here on the continent, where, for example, women haven’t typically had easy access to medical services. And now, there are these, you know, these, these are femtech apps that you could use, whether they’re menstrual health apps, or pregnancy apps. And now we’ve read work, for example, I think Mozilla has done a lot of work on this, showing that, you know, there is the consent, the regimes are faulty, or then they’re not very meaningful in the sense that the terms and conditions that are offered in there are sometimes just the legalese is too much for the people to understand, or they’re confusing, or they do live on certain notions where maybe your data will be shared with a third party. So these are just a number of issues that we’re exploring in our work as well. Meaningful consent, etc. We’re looking at also techno-chauvinism, as a lot of these technologies are being brought up. 
This is, I think, from Meredith Brewster’s work, we were looking at, you know, again, going back from where I started, which is, you know, the notion of benefits. Sometimes technologies are brought onto communities, and they do not do more good, and they do more harm. And so we’re questioning the notion of this notion of any and every technology is for the good. And so the moving away from the idea of techno-solutionism, and, you know, moving to a place of, you know, getting solutions on board that actually are relevant to communities needs and their realities, etc. So I think for us in our broader conversation, we find that we engage a lot with the conversation of power symmetries. Again, there is the developer, there is the end user. And along that, especially for the end user, how do these technologies, you know, impact their lives for better or for worse. And we look at that, and we find that usually it’s not uni-dimensional, usually it’s intersectional in a way, you know, you find if it is harm, it’s happening at a very intersectional level, at different levels. And so just to wrap up my submission, I just want to say for us, we’re very much interested in moving towards a place of, you know, realising AI technologies that are more, you know, liberatory and emancipatory to the communities that these technologies are being brought to. Thank you.
Moderator – Charles Bradley:
Thank you very much. Yeah, and really sort of helped paint a picture of the wide variety of ways that these technology can, you know, can be very beneficial and really improve on some of these values that we’ve been talking about. Jim, I’m going to pass you.
Jim Prendergast:
Testing. Oh, there we go. Sorry about that. So Charles, I just wanted to point out that we were supposed to have another academic present, Dr. Luciana Bonatti from the National University of Cordoba in Argentina. I guess being on the other side of the world, sometimes you miss news, but apparently there’s an outbreak of wildfires in that part of Argentina, and she and her family had to evacuate. So if she watches this down the road, we just want to let you know, we’re thinking of you, and we hope everything works out for you. And we look forward to working with you in the future.
Moderator – Charles Bradley:
Thanks, Jim. We’re going to go to Lucia at the OECD next. Over to you.
Lucia Russo:
Hello, good morning. Good afternoon. Thanks for the invitation for this very interesting panel. My name is Lucia Russo. I’m from the OECD, the Artificial Intelligence Unit, and I will talk a little bit about the OECD AI principles and the way they, excuse me, they promote gender equality in AI. So just as a bit of a background, what are the OECD AI principles? The OECD principles are a set of principles, an intergovernmental standard on artificial intelligence that were adopted in 2019, and were developed through a multi-stakeholder process that involved over 50 experts with the objective of coming up with principles that would be a common guideline for countries and AI actors in developing trustworthy AI and to steer technology in an innovative way, but also in a responsible way. These principles were also endorsed later on by the G20, and so we are today over, today 46 countries have adhered to these principles. These are principles that are not binding in nature, but still they represent a commitment from countries that adhere to them to steer technology in a way that is embedding those principles, and they are ten principles which are organized into five value-based principles and five recommendations to policy makers. So, in terms of the value-based principles, they call for promoting AI, which is aimed at inclusive growth, sustainable development and well-being, that embed human-centered values and fairness, AI that is transparent and explainable, safe, secure, and robust, and they call for actors to be accountable throughout the AI life cycle. And then, the five recommendations to government concern policy recommendations around investing in AI, research and development, fostering a digital ecosystem for AI, shaping and enabling policy environment, building human capacity and preparing for labor market transformation. And then, the five value-based principles, they touch, obviously, on gender equality, but in particular, the first and the second call on stakeholders to proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people on the planet, and in advancing inclusion of underrepresented people. And then, the second principle calls on AI actors to respect the rule of law, human rights, democratic values, and including non-discrimination and equality, diversity, and fairness. So, I would point to these two as perhaps the most relevant in this conversation, and then, obviously, these are very high-level guidelines for countries. So, what have we been doing and what are countries doing to implement those principles? So, since 2019, we have been working at the OECD to help countries implement in practical ways these principles, and we have been monitoring through the OECD AI Policy Observatory policies that countries have been putting in place to meet, to address all of these principles. So, here, obviously, I won’t be exhaustive. I wanted just to point to a few examples of policies that have been adopted, implemented in countries. For instance, in the United States, when we talk about, well, we know that to make AI more inclusive and also to reduce bias and increase fairness, one important aspect that was discussed by Google is about data quality.
And so, in the United States, an example is the Artificial Intelligence Machine Learning Consortium to Advance Health Equity and Researcher Diversity, a program that aims to make electronic health record data more representative, so that training data is of higher quality, but also to increase the participation and representation of researchers from underrepresented communities in AI and machine learning. Basically, algorithmic bias is reduced by including data from different genders, ethnicities, and backgrounds, but also by a more diverse representation in AI development. Another example, fostering inclusivity and equity in AI development, is a program in the UK promoted by the Alan Turing Institute, Women in Data Science and AI. There are three pillars to this program. First, map the participation of women in data science and AI in the UK, but also globally, with the ultimate objective of increasing women's participation in these fields. Second, examine diversity and inclusion in online and physical workplaces. And last, explore how the gender gap affects scientific knowledge and technological innovation, and then promote gender-inclusive AI design. So, these are two examples. And then the last two points I would make: there are also other approaches taken by countries. For instance, in the Netherlands and Finland, there have been attempts to build guidelines and assessment frameworks for non-discriminatory AI systems that help identify and manage the risk of discrimination, especially in public sector AI systems. These are guidelines especially for public servants when they use or procure AI systems. And the last point is that, last year, we launched a catalogue of tools, still on the same platform, the OECD AI Policy Observatory. This is really a platform intended to share tools for trustworthy AI; institutions around the globe can submit tools so that other organizations can use them in their work. And just having a quick check, it's a searchable database where you can search by the objectives these tools are aimed at achieving, for instance reducing bias and discrimination and ensuring fairness. We have over 100 tools, and one that came up when I was checking yesterday is, at Google, the People Plus AI Research multidisciplinary team that explores the human side of AI. So, this is one example. Another example is a tool called CounterGen, which is a framework for auditing and reducing bias in NLP; basically, it generates counterfactual data sets, comparing the output of a model between cases where the input refers to a member of a protected category and cases where it does not. So, these are just examples; one can search and browse for more. I wanted to give a bit of an overview of things that exist, but obviously this is all illustrative, and I look forward to questions and discussion. Thank you.
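For readers who want a concrete feel for what such an auditing tool does, here is a minimal sketch, in Python, of the general idea described above: generate counterfactual inputs that differ only in a protected attribute and compare a model's outputs. This is not the actual CounterGen API; the templates, the pairs, and the `score_text` hook are illustrative assumptions.

```python
# Rough sketch of counterfactual auditing in the spirit described above:
# build pairs of inputs that differ only in a protected attribute and
# compare a model's outputs. `score_text` is a hypothetical model hook,
# not the real CounterGen interface.

from statistics import mean

TEMPLATES = [
    "{group} applied for the engineering job.",
    "{group} asked a question about the loan.",
]
PAIRS = [("The woman", "The man"), ("The Black applicant", "The white applicant")]

def audit(score_text) -> float:
    """Return the mean absolute output gap across counterfactual pairs."""
    gaps = []
    for template in TEMPLATES:
        for a, b in PAIRS:
            gap = abs(score_text(template.format(group=a))
                      - score_text(template.format(group=b)))
            gaps.append(gap)
    return mean(gaps)

# Example with a placeholder model that ignores its input;
# a real audit would plug in the model under test.
print(audit(lambda text: 0.5))  # prints 0.0 for the constant model
```

In practice the resulting gap would be compared against an agreed tolerance, and large gaps would be flagged for mitigation.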
Moderator – Charles Bradley:
Thank you so much, Lucia. There’s so much in what you said. I’m sort of trying to scrabble around your website to find all the amazing resources that you shared. So, maybe we can pick back up some of those points because they also tie back into some of the key ones earlier around data, like proving that we know what is happening, baselining, and trying to improve outcomes, and it feels like that might be something that we sort of want to dig into a bit more as we get into this discussion. I’m going to pass to our last speaker, Jenna.
Jenna Manhau Fung:
Thank you. Thank you for having me on this panel today. My name is Jenna Fung. I am the program coordinator of the Asia Pacific Youth Internet Governance Forum. As I share my thoughts, I will perhaps change my hat a little bit, but to start with, I will mostly refer to the outcomes of our regional forum, as well as respond to the information we just got. I come from a background that's totally not technical, and I don't have a research background either, so the earlier sharing was really fruitful for me. I was actually assigned to give reactions, so I was paying a lot of attention, and it made me think of a few points, but I will share those at the end of my remarks, because I first want to point out a few things that the Asia Pacific youth actually talked about. We have had enough sessions at IGF that talk a lot about how concerned we are about the impacts and risks of AI and its implications, but maybe because of our lack of experience, expectations and knowledge, the youth are quite positive. That's my observation from working closely with them, although of course that group of youth is just an Asia Pacific voice. We know that the majority of the online population is formed by young people, but we don't really get to invite all of them to our conference, so this is still just a representation of voices. But what we see is that when we set aside the knowledge and baggage that adults would usually carry, the younger generations are quite positive, and the reality is that we must implement these things in our everyday life, because I personally see it that way as well. And with the technology, especially after Christian's and Emma's sharing, I really think that AI can help eliminate human bias, which is something we unconsciously act out without knowing, so I am positive about that. Just to name an example: I'm Asian, and sometimes we might use "marginalised group" or "minority" to describe certain groups of people, but that means we unconsciously subscribe to certain ideas; that's why we have that kind of concept, right? Earlier, Lucia used a different adjective; I think she actually said "underrepresented group" instead, which is rather neutral. We do not do this intentionally, but we can carry this kind of bias sometimes, and I do think technologies can help us with that. And of course, we will have policies in place, and I believe everyone in this room subscribes to the idea of a multi-stakeholder approach, that is my assumption, to form these policies; if those policies are in place, I believe we can proactively eliminate the kind of bias that we don't intentionally send out. And so, just bringing in some ideas from the youth forum that we had: I think it's really important to get the users, the consumers, to co-design all these policies, and also to have the technical community involved in policy-making as well, because they have the knowledge about the technologies, but not all of them are currently included at all levels of policy-making. So if we have them participate more in this process of making policy for such complex technologies as AI, I think that will be really important as well.
And I believe international standards are really important, because that's how different countries can modernize their legal frameworks so that they can cater to the needs of their own nations, and it will also help different industries to follow them and to govern their space. For example, what I see is that Big Tech is running most of the service platforms that I live on. I am a Gen Z. These are privately owned public spaces, governed and regulated by the private sector. And I think international standards are really important because they will provide a comprehensive guideline for that, one which is human-centric, as another speaker mentioned. And before I wrap up, I want to take this opportunity to bring up something really personal; I hope I do not appear to be too rude. Other than my usual work with the youth, I am a writer, and I have a Substack newsletter. But I am a really small-scale writer, so I don't really have the money to pay for my own domain, and my newsletter is actually not really appearing in Google search results, probably because of the policies between, well, I don't really have the knowledge, but I assume it's something like the policy between Google and Substack. I think Substack changed a policy at some point, after which my newsletter stopped showing on Google. So that's just one personal example that I want to throw in here, because Google is one of the biggest search engines, adopted by most people in this world. And I just wonder, if we are talking about inclusivity, how can we, or how can enterprises, put a mechanism in place to ensure that small-scale writers, for example in my case, are included as well? But yeah, thank you so much.
Moderator – Charles Bradley:
Thank you very much. Yes, really good points raised from the conversations you've been having with the youth community. And a very specific question at the end that we might want to take offline to someone who might know the answer to that one. We're going to start getting people involved and have a proper conversation. So if there are things you'd like to raise, please do put them in the chat if you're online, or raise your hand, and I'm definitely going to come to people who have good questions. There was also a point from Lucia about the counterfactual fairness work at Google, and I wanted to see whether Emma or Christian, you could share a bit more about your experience of that, if you can answer that.
Christian von Essen:
Yeah. I’m happy to talk about that. We’ve had a similar approach. And I have this slide with these. We see similar patterns, right? This replacement there is exactly the counterfactual similarity that we are trying to get here. This has been central and super useful to us. What also is helpful is ablation of certain terms. Sorry, yes?
Moderator – Charles Bradley:
I was going to ask, could you give us a 10-second definition of what that means for people who might not know what counterfactual fairness means in that context?
Christian von Essen:
Yes, of course. So the idea is when you take a user’s query, for example, and it has a marginalized minority group in there, like, I don’t know, black woman video, then the likelihood that a classifier predicts something about this person for this query should be the same for black woman video as for the counterfactual query, where you replace black woman video with black man video, or white woman video. If you replace these terms, the output of the classifier should not change significantly. The other part, then, is ablation. It shouldn’t matter much whether you talk about black woman video, black woman dress, or just woman dress. That is also essential to what we’ve been doing here. But if you do this counterfactual fairness, you’re still sticking, in a certain sense, to a slice of the data. We are still sticking to gender terms, race terms. Also, outside of these slices, these particular slices, the behavior of these methods of these classifiers and systems should be the same. Doesn’t matter if we’re talking about genders or LGBTQ queries. The quality of the classifiers between these slices needs to be the same as well. That’s the metric part that we had. So counterfactual is great, ablation is great, and then we go beyond that. But it’s a fantastic first step to augment your training data to get the classifiers to say the right things and be fair.
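To make the two checks Christian describes concrete, here is a minimal sketch, assuming a hypothetical binary classifier exposed as `predict(query) -> probability`. The term lists and the tolerance are invented for illustration and are not Google's actual pipeline.

```python
# Minimal sketch of counterfactual and ablation checks on a hypothetical
# classifier `predict(query) -> float` returning a probability. Term lists
# and the tolerance are illustrative assumptions, not Google's pipeline.

TOLERANCE = 0.05  # maximum allowed shift in predicted probability

def counterfactual_variants(query: str, swaps: dict) -> list:
    """Swap identity terms to build counterfactual queries."""
    variants = []
    for term, replacements in swaps.items():
        if term in query:
            variants.extend(query.replace(term, r) for r in replacements)
    return variants

def ablated(query: str, terms: list) -> str:
    """Drop the listed identity terms from the query (ablation)."""
    return " ".join(w for w in query.split() if w not in terms)

def check_fairness(query, predict, swaps, ablate_terms):
    """Return variants whose prediction moves more than TOLERANCE from the base query."""
    base = predict(query)
    variants = counterfactual_variants(query, swaps) + [ablated(query, ablate_terms)]
    return [(v, predict(v)) for v in variants if abs(predict(v) - base) > TOLERANCE]

# Example with a stub model; an empty list means the checks passed.
swaps = {"black": ["white"], "woman": ["man"]}
stub_predict = lambda q: 0.1  # placeholder classifier
print(check_fairness("black woman video", stub_predict, swaps, ablate_terms=["black"]))
```

A real system would run such checks over large query sets and across many identity slices, not just the two terms shown here.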
Moderator – Charles Bradley:
And then, sorry, Emma.
Emma Higham:
Yeah, I was just going to say, I think a lot of this is about asking your system questions and seeing how it performs. And what you really want is to be able to ask the question of black woman hairstyles, white woman hairstyles, see are we getting results that we consider to be equivalent, see what happens if we type in the query hairstyles. There will always be some disparity because these systems are operating at mass scale. But we aim to have a way to hold the system accountable and reduce any disparity that we see. And I think I did hear a question earlier on. I think it was from Zulfa around data justice. I think one thing that I’ve been impressed by here is that these systems are able to learn from patterns such that sometimes you can have a relatively small amount of data to start to interrogate the system. And you can see that a system is not behaving well with just a few examples. You don’t have to find every potential item in a large set of potential identity groups in order to interrogate the system. You just need a few to say, is this system behaving wrong? And that already helps. So this idea of small data being enough to interrogate the system has been very powerful.
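One way to picture Emma's point that small data is enough to interrogate a system is a probe like the sketch below, which compares a quality score for a handful of paired queries and flags large gaps. The probe pairs, the tolerance, and the `quality` hook (for example, a rater-derived score) are hypothetical, not Google's internal tooling.

```python
# Sketch of a small-probe disparity check: a handful of paired queries can
# already reveal a system behaving worse for one group than another.
# `quality(query) -> float` is a hypothetical hook returning a 0-1 quality
# score (for example, from human raters); this is not Google's tooling.

PROBE_PAIRS = [
    ("black woman hairstyles", "white woman hairstyles"),
    ("women's world cup france vs brazil", "men's world cup france vs brazil"),
]
MAX_GAP = 0.1  # illustrative tolerance for the quality gap

def flag_disparities(quality):
    """Return probe pairs whose quality scores differ by more than MAX_GAP."""
    flagged = []
    for query_a, query_b in PROBE_PAIRS:
        gap = abs(quality(query_a) - quality(query_b))
        if gap > MAX_GAP:
            flagged.append((query_a, query_b, round(gap, 3)))
    return flagged

# Placeholder scorer; a real audit would plug in rater or metric data.
print(flag_disparities(lambda q: 0.8))  # empty list: no disparity flagged
```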
Moderator – Charles Bradley:
Are there any questions on this point particularly? So we can carry on this thought. Yes, please.
Audience:
Thank you very much for all the sharing. It's really interesting. So I have a bit of a specific question. It's on leveraging AI to reach the goal of gender inclusivity: to what extent are the corrections you're talking about happening after the fact, in terms of fine-tuning, rather than beforehand, when you're feeding in the training data? Because I think there was a recently published article about a study from the University of Pittsburgh about how there is no clear data, no clear percentage, of how much of the training data being used to train these LLMs is women-authored data, and so it perpetuates the gender gap. Because, OK, when you're looking at the digital divide between the Global North and the Global South, if you look closely at those online in the Global South, they are more likely to be male users. So I just want your thoughts on this particular problem: is it more fine-tuning that's happening after you find these biased outputs, or how much of the effort is going into using more diverse training data?
Moderator – Charles Bradley:
Thank you very much. I think that’s for Google.
Christian von Essen:
Yeah, so in the beginning, a few years ago, when we started with BERT and the language models became bigger, the first step was to create models that are credible and useful at all, and it was more of a fine-tuning step later that addressed and corrected these biases. But as we're getting into even larger models, where training data selection now becomes a more challenging problem, and where these kinds of concerns have spread more through the community and get more scrutiny not only from outside, but also from communities inside Google, this gets more and more into the first step of training. So correcting the data and making sure that it is representative moves ever more into that first step, before fine-tuning happens. And the first step and fine-tuning also get ever more mixed and intermingled, so the question as such becomes very tricky to answer: where does the first step end and fine-tuning start, when we're talking about mixtures of training?
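As a very rough illustration of what moving the correction upstream can mean, the sketch below rebalances a toy corpus so that under-represented slices are upweighted before any fine-tuning happens. The slice rule, the corpus, and the weighting scheme are invented for illustration and say nothing about how Google actually curates training data.

```python
# Toy sketch of rebalancing training data before fine-tuning: sample so that
# under-represented slices are upweighted. The slice rule, corpus, and
# weighting are invented for illustration only.

from collections import Counter
import random

def slice_of(doc: str) -> str:
    """Assign a document to a coarse identity slice (hypothetical rule)."""
    words = set(doc.lower().split())
    if words & {"she", "her", "woman", "women"}:
        return "women"
    if words & {"he", "his", "man", "men"}:
        return "men"
    return "other"

def rebalanced_sample(corpus, k, seed=0):
    """Sample k documents with weight inversely proportional to slice size."""
    counts = Counter(slice_of(doc) for doc in corpus)
    weights = [1.0 / counts[slice_of(doc)] for doc in corpus]
    random.seed(seed)
    return random.choices(corpus, weights=weights, k=k)

corpus = [
    "He fixed the engine.",
    "She fixed the engine.",
    "He wrote the code.",
    "He ran the test suite.",
    "The team shipped the release.",
]
print(rebalanced_sample(corpus, k=4))
```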
Emma Higham:
Yeah, I mean, I would just plus-one that. I think these things are increasingly very, very intermingled. But what you do see is: here is an amazing technology, so let's see what this technology can do. As we're applying this new technology, how could we design it in a safe way? How could we design it in a way that is inclusive? You look at that first version of the technology, and then the first thing you do, before you think about bringing it to market, is interrogate it. You do the fine-tuning based on those tests, and then, if it didn't work well, you go back to the first step again. So this is really cyclical, and there are many layers at which we can hold our systems accountable. Often you have foundational models that you're using for lots of different use cases, and you want to make sure those are working well, as well as the specific use cases, seeing how it's behaving in context and making sure that in context it's working well for users for a specific product experience. Great question.
Moderator – Charles Bradley:
Yeah, very good. While we’re still on the point, Lucia, is there advice, tools, resources on this particular point on the OECD that we should be looking at?
Lucia Russo:
Well, not on this specific point. No, we are more on analyzing the big trends. So, just to mention that we have two papers on generative AI: one that really analyzes some preliminary considerations around these aspects that have been discussed, like what these models are, how they are evolving, what kind of policy implications they have around safety, for instance, and what kind of measures developers are implementing. That is one paper. And then there is another paper that we did to support the G7 Hiroshima process around generative AI, which is basically an analysis, based on a questionnaire to G7 members, of what countries feel are the main risks around generative AI, but also the main opportunities, and what kinds of actions can be undertaken internationally. So this is more in terms of policy responses; this is the contribution from the OECD. I am very much enjoying the conversation to understand better at which points one can intervene; this is very enlightening for us as well.
Moderator – Charles Bradley:
Great, yeah, absolutely. And obviously, the role that you’re playing on the bigger picture of this conversation, it’s sort of critical to get into the real weeds here, because the devil is really in the detail, isn’t it? We have a question online, and then.
Audience:
Yeah, thanks, Charles. So it’s from Samridhi Kumar. It’s a bit of a comment and a question. I think I still remain a tad bit skeptical about how AI and gender inclusivity may interact, especially when AI may present itself as a popular tool for surveilling people based on gender. What are the possible solutions for this dilemma?
Moderator – Charles Bradley:
Bobina, what do you think? What could be a solution to this dilemma? What are the solutions here?
Bobina Zulfa:
I think that's a lot of what the panel is trying to speak directly to here. But I share the sentiment of the person who asked the question, in that I'm also very skeptical as to how realistic some of these things are, or how feasible they are. So, for example, the people from Google have been sharing, on the previous question about training data sets as opposed to fine-tuning, and Emma, I think you did share your optimism that we do have a good tool and that we can get it in a much better place before we send it out to the market, to the communities. But I think that, in a sense, and this may tie in with Lucia's work as well, and with some of the work we're doing on the regulatory arm of things, it comes down to balancing out competing interests. Because there is Google, which is developing these technologies and has a number of interests, from information sharing to being a profit-making company; and there are the communities, the persons these technologies are being pushed out to, the end users, on whom these technologies could have real-life impacts. And so, for me, I think we just need to be very intentional about thinking about these things from the get-go. And I think it's a lot of what all of us are reiterating here. It could be with the OECD's principles: thinking about these things from the get-go as we're getting into development, even from the ideation stage, and then thinking about how to factor these things in more intersectionally, as opposed to waiting until, oh, we pushed this out, and now we're trying to put fires out, in a sense. And so, for example, on the data issue, I am very skeptical as well when you mention those small data sets. I do totally agree that these technologies have immensely evolved and are able to use just small data sets to do so many of the things you've been talking about, like looking out for bias, et cetera. But then again, someone mentioned this: if we do have limited data sets that are not representative of a big part of maybe the global majority, how do we realistically expect that not to be reflected in the products that are pushed out at the end? And so a lot of this caution, or skepticism, has been expressed through a lot of scholars' work over the last year or two. And a number of principles, like the ones the OECD is doing, UNESCO, et cetera, the work we're doing, and so many other organizations, civil society, et cetera, are saying: factor these things in and think about them from the get-go. That could counter the skepticism, because then we're sure that we are pushing out products that are safe and are actually going to be of benefit to the people these products are being pushed out to.
Moderator – Charles Bradley:
Thank you. Yeah, we gave you the really hard question, so thank you for giving us such an eloquent answer to it. We have another question in the room. Andrew.
Audience:
Thanks, Anne. Thanks, Google, for the opening presentation, which was kind of interesting to get a bit more into the weeds about how you actually are trying to manage these problems. And I guess my question is a bit about the value of non-binding principles. There are currently over 40 international processes setting out how to govern AI. A couple are binding, like the European ones. There's a cluster of UN ones which may go nowhere. And there are 25-plus voluntary non-binding initiatives being developed by a variety of industry and other types of bodies. And I just query the value of endlessly producing high-level sets of principles, which don't overlap or aren't consistent, but may all offer slightly different variations. And it strikes me that what was interesting about the Google presentation is that what would be of real value to the wider public would be something that I think doesn't yet exist, which is a mechanism to independently audit what you're doing, to assess whether the steps you're taking at the engineering level are actually producing the outcomes that you want to be desirable. And if they do, you get some kind of kite mark or some recognition that what you're doing with AI is actually fulfilling those wider social goals. And it strikes me that, given the time, money, and effort that goes into things like the IGF, which is a whole series of fairly non-binding conversations, or these voluntary principles, investing some of that time and money in developing those independent audit mechanisms might be a more useful use of the planet's resources in terms of getting at what we want to get at.
Moderator – Charles Bradley:
I think I’ll let the OECD respond first. Lucia.
Lucia Russo:
OK. Well, thank you. So I've never done an analysis of all of the principles that exist, so I don't know to what extent it's fair to say that they don't overlap, because I would assume that there is a large overlap among these principles. For instance, the UK recently came up with its approach to AI regulation, and that is based, again, on high-level, cross-sectoral principles, and they do overlap to a large extent; almost all of those principles overlap with the OECD principles. The same with the NIST risk management framework in the US; it's really closely linked to the work of the OECD. We did a classification framework for AI systems, and basically what it says is that not all AI systems are equal: they don't have the same risks, and they don't have the same impact in the different contexts they work in. So there needs to be this risk-based approach, which is becoming the approach taken in most jurisdictions. Even the EU AI Act takes a risk-based approach, by classifying the risks of AI systems and having provisions for the different systems based on the risk category they fall into. So I'm just saying that I understand the concern about having a plethora of principles. I don't think there is a hierarchy of principles, yet there are, I think, some principles that are being implemented more uniformly across countries, with some variation, of course. And I have not done this exercise, but perhaps one should try and check where they overlap, because I'm sure a lot of it has to do, again, with fairness, with transparency, with accountability, and with safety and security. So I understand, and I think it's a fair concern, that everyone is doing their own principles. But this is also a very new field; everything is in the making, so even regulation is really experimenting and trying to understand what the best approach is. So I would say, yes, perhaps there needs to be more alignment, and there are attempts lately to have more international coordination. As I was saying, the G7 is one; the UK is promoting this safety summit at the beginning of November; the UN is also advancing work. So I think there are activities to come together and have more coordination on that. And on the mechanism for auditing systems, I agree there is not such a thing yet. Certainly, with standards and with the EU AI Act, there will be checks on systems, so it won't perhaps be the same thing as what was proposed, but I think there is a lot that is in the making; all of this is just being developed right now. So I don't have a full answer. I'm sorry, it's a very difficult question. But I just want to say that there are a lot of discussions and there are a lot of commonalities, despite the fact that there seems to be a lack of convergence. And yeah, that's what I wanted to say.
Moderator – Charles Bradley:
Thank you. Yeah, and I think the principles work that's been going on for a while now has started to give us the train tracks for the regulation that is coming, and that obviously has a lot more teeth to it. And that might get at some of the points that Andrew was raising. Does anyone else want to come in on this point before I ask another question of the panel? No. Any more questions in the room before we move on? One of the things this gets to is trust in measurement. Google, and Christian, have given us this great presentation around what you're already doing to measure bias and reduce certain biases in your work, and also how you've been able to reduce shocking, offensive content through some of the technologies that you've used. But we've also heard the flip side of that, which is Google marking its own homework: showing how you're measuring against your own known biases and how you're improving your own system against your own measurements. So I think some of this is really about how we build trust in that measurement and in that system, and I wonder whether any of the panelists had some thoughts on that. So, if we're going to use AI, and we believe in the potential of AI to increase gender inclusivity, how do we know that it's actually doing that? Do we trust that? And how might we trust it more? Any reflections or thoughts on that from the panel, or anyone in the room? Thank you.
Audience:
Can I just repeat my plea for an independent audit process? I mean, the only way you know is, if you don't trust the company to mark its own homework, someone else has to mark the homework. And my point, going back to the OECD, is that I'm not saying there isn't agreement; fairness, inclusivity, there's a set of things we already know we want or need AI to do. What we don't have is any method of assessing whether any of the applications are actually doing it. And that's where I'm saying time and investment needs to go within the wider community, rather than into doing yet more sets of principles. So I think the independent audit is the key thing. And I have no reason to distrust what Google are doing. You know, on the basis of what I've heard today, it sounds perfectly credible, perfectly sensible, and they're trying to work with the limitations of data, et cetera. But obviously, for the rest of the wider public, it needs to be audited in some way to satisfy us that gender equality is being promoted through these kinds of systems. And surely that is where the conversation should be and where the investment should be, and not on high-level principles and the endless discussion of high-level principles, which has gone on at IGFs year after year for like 20 years. Thank you, sorry.
Moderator – Charles Bradley:
Yes.
Emma Gibson – audience:
Hi, I'm Emma Gibson from the Alliance for Universal Digital Rights, or AUDRi for short. And I definitely agree with the gentleman who was talking about independent audits. But unfortunately, I also want to introduce another set of principles, which we launched this week: the Principles for a Feminist Global Digital Compact. There are 10 principles, and one of them is around adopting equality-by-design principles and a human rights-based approach throughout all phases of digital technology development. And the Equal Rights Trust last week launched some equality-by-design principles themselves. Really, that includes things like gender rights impact assessments, incorporating them into the development of algorithmic decision-making systems or digital systems prior to deployment. So whatever you call them, there absolutely is appetite for that kind of thing. And they do need to be independent, to make sure that we're not amplifying and perpetuating existing biases.
Moderator – Charles Bradley:
Thank you. I think we should come back to some of the challenges that this technology might also be able to help with. We were trying to get the session to focus on ways in which AI can solve some of these problems, and I wonder whether there are particular challenges that the panel, and people in the room, think we should be spending our time, effort, and money on, so that we can actually promote gender inclusivity and equality. What should we be focusing on, and how might AI help us do that? Or are there other examples of things that are already very practically underway?
Lucia Russo:
Maybe I'll go first. And here, I'm not going to talk about the technical tools; I would go more broadly, again, to what kinds of policy actions can be put in place to increase gender equality in AI. When we talk about gender equality, when we look at data on women's representation in AI, the landscape is still not very positive for women. We know that in OECD countries, more than twice as many young men as women can program, which is essential for AI development, so there is already this discrepancy. Then, in terms of AI researchers, only one in four researchers publishing on AI worldwide is a woman, so there is, again, not fair representation in AI research. And when we look at developers, this share is even lower: from a 2022 survey of Stack Overflow users, only 4% of respondents were female, and LinkedIn data suggests that female professionals with AI skills represent less than 2% of workers in most countries. So I would say that there are still basic policies, concerning the development of AI-specific skills for women, that are essential. As we said at the beginning, one key aspect is to increase women's representation in the design of these systems and in research on these systems. So this is a key policy that countries should look at, and there are countries already doing that, by promoting scholarships or even programs at universities; in Germany, for instance, by providing funding to women-led research teams in AI. So there are some policies that countries can certainly pursue to address one key gender gap, which is the representation of women in AI research. That is what I would suggest: to reduce gender gaps, this is essential.
Moderator – Charles Bradley:
Thank you very much. Emma, I think I want to come to you, actually, if that's okay, because obviously you've shared with us a little bit about how you're using AI for safe search and for ranking. I wonder whether you have any more specific examples that you could talk to, and how inclusion is being built into those products as well.
Emma Higham:
Yeah, absolutely. I mean, one of the things I'm really excited about is how AI is improving our ability to do language understanding and to understand concepts at scale. One area where I've seen this have significant impact is a product I used to work on, Google Translate. With products like Google Translate and Google Search, we are actually all able to test them; many of us use them every day, and we find out when they don't work well for us, and we hear that from users. One thing that we heard in the past was that during the Women's World Cup, women would be typing in queries like France versus Brazil, and you'd find that it would take you to the men's football team. Typing in the England team, you'd see the men's team. That's something that we heard from users, were scrutinized on, and looked to solve. Actually, it was a non-trivial problem to solve, as we had to build the right partnerships. But this year, we were pleased to see that we were able to address it: for the Women's World Cup, you could get easy and accessible results about women's football in just the same form factor that you could for the men's. That's a great example of how users held us accountable and we were able to improve our systems. In the same way, for Google Translate, we've seen that there were some cases where translations were, in the past, not fully inclusive. This can be because language is very complex in the way gender is handled across different languages, and it's not always easy for a computer to translate that well. But as AI has got better at pattern matching, and as our internal accountability and our ability to test these systems at scale have improved, we have seen Google Translate get significantly better in this regard. And we've been able to test and validate that Translate is working across a wide range of languages in a way that we think is really effective for understanding gender in different ways. One specific, recent application of this is that I can actually now tell Translate in what form I want to be speaking: do I want the formal version, or do I want to translate something so that it is in the feminine or masculine form? And this means we no longer need to default, right? We don't need to make assumptions about whether you were talking about a male audience or a female audience; we can set that in the tool. And this is the kind of thing that is newly possible because of this technology. I hope that made sense. But I think the thing I'm excited about here is that you've all been holding us accountable for many years. That's one of the great things about working at Google: users hold us to a high standard. And I'm excited about AI as a tool that helps us meet that high standard better.
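To illustrate the general shape of that idea (and not Google Translate's actual interface), here is a tiny sketch in which the caller states the desired grammatical gender instead of the system silently picking a default; the phrase table and parameter names are invented.

```python
# Tiny sketch of a translation call where the caller chooses the grammatical
# gender instead of the system defaulting to one. The phrase table and the
# parameter names are invented; this is not Google Translate's API.

PHRASES = {
    ("I am a doctor", "es"): {"feminine": "Soy doctora", "masculine": "Soy doctor"},
    ("You are tired", "fr"): {"feminine": "Tu es fatiguée", "masculine": "Tu es fatigué"},
}

def translate(text, target_lang, gender=None):
    """Return one rendering if a gender is requested, otherwise all renderings."""
    options = PHRASES.get((text, target_lang))
    if options is None:
        raise KeyError("phrase not in the toy table")
    return options[gender] if gender else options

print(translate("I am a doctor", "es"))                     # both forms shown
print(translate("I am a doctor", "es", gender="feminine"))  # caller decides
```

The design point is simply that exposing the choice removes the need for the system to guess.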
Moderator – Charles Bradley:
Thank you. Yeah, and it made a lot of sense. And it’s just really good to hear these very practical but very large impact shifts that are really starting to dig into the question here. And it’s things that impact people on a day-to-day basis as well, which I think is really good. And Google’s been particularly good at solving for day-to-day problems. It’s built quite a large business out of it. Everyone here who doesn’t speak Japanese has probably used Google Translate or Lens or something to navigate the street signs or the menus this week. I definitely have. Any final questions or thoughts from the room? Yes, please. Yeah, come and take the mic. Thank you.
Audience:
Hello. Yes, my name is Natalia. I'm working in the field of education, and what Lucia just mentioned really resonates with me. I have worked in Cambodia for the past eight years, and I'm the founder of the first female coding club there. The representation of women in the field of technology is extremely low; it's even lower than Lucia has mentioned. And if you type "Asian programmer" into Google search, out of 20 images you will see maybe one or two Asian faces as programmers. But at the same time, AI adoption and growth is giving me a lot of positive vision, because I do believe that generative AI tools especially may bring a lot of opportunities for female workers in the field. As we know, most girls would choose a social or humanitarian subject, and this is where generative AI can really be a great field for the development and application of these interests in human science and social science mixed with technology. However, my question is, how can policymakers make sure that this component of the broader introduction and engagement of female workers and students is applied across the world? I work in Cambodia, where only 1.2% of girls choose to study technology, which is extremely low. And the Khmer language is not yet working very well in Google. So there are many barriers, and I really want to see much more focus on upskilling, reskilling, and the introduction of the female voice in the field of AI, and I think generative AI is a great pipeline for that. So are there any comments? I would like to hear them. Thank you.
Moderator – Charles Bradley:
Thank you. Thank you very much. Lucia, are there any sort of thoughts, work from the OECD on this? You touched on the same sort of deficit earlier.
Lucia Russo:
Yeah. I mean, one point that I actually forgot to make on the positive side is indeed that generative AI can help, because you have coding co-pilots that can actually speed up the time to code, and I think they can also be a tool for people to learn to code much more quickly. So there may be, as was suggested, some opportunities there from generative AI. The one thing, of course, is that the data for the language the generative AI large language models are trained on needs to be there. So what we also see is a lot of investment in training large language models in languages other than English, and this is one thing that also needs to be promoted by countries, so that these models exist not only for the languages that have the most data. And then, in terms of policies, again, that is the question: how do you build more interest among women? And I think the motivation that was mentioned is key. So we have seen a lot of policies, like coding from an earlier age, but also, as I said, scholarships; and role models are quite important, too, to help young girls identify with the types of jobs they could take on later on. So this is a big question, how you get more women in science, but as I said, there are examples that span these kinds of policy actions.
Moderator – Charles Bradley:
Thank you very much. Yeah, it is definitely a multifaceted challenge to do that. But I think the point here is that this becomes something that's in our day-to-day apparatus, and therefore people are going to be more interested in being part of it. So thank you, Natalia, for that comment and question. We're coming to the end. I want to wrap up in about 30 seconds or so, but I wanted to just see if any of our panelists had anything burning they wanted to share or respond to before I did that. No. Great, well, a huge thank you to our panelists for joining us from a wide variety of time zones; I appreciate you staying up or getting up early to do so. I definitely found it a very interesting conversation. We were able to get into some of the practical aspects of this topic, and we also touched on its multi-layered and complex nature. And I think it's been really good to see that there's a lot of interest in developing solutions that can solve this problem with more people, in a more inclusive way. We've had some principles launched in the session, we've had some discussions about the value of principles in the session, and we've had some very practical data and measures shared. So I've learned something, and thank you for doing that and for being part of this conversation. And with that, I would like to close the session, say thank you again, and hope to see you all again soon. Thanks, bye.
Speakers
Audience
Speech speed
183 words per minute
Speech length
1207 words
Speech time
395 secs
Arguments
Current approach towards gender inclusivity in AI appears to be more reactive than proactive
Supporting facts:
- Discussions revolve around making corrections or fine tuning after identifying biased outputs.
Topics: AI, gender inclusivity, training data
The training data used for AI could be perpetuating gender gaps due to lack of clear percentage being women-authored data.
Supporting facts:
- A study from the University of Pittsburgh touched on the lack of transparency regarding the percentage of women-authored data used in training AI.
- The digital divide between Global North and Global South also contributes to this issue, as most online users in the Global South are male
Topics: AI, training data, gender gap
Concerns about how AI and gender inclusivity interact, particularly in terms of surveillance
Topics: Artificial Intelligence, Gender Inclusivity, Surveillance
The value of non-binding principles in governing AI is questionable due to inconsistency and overlapping
Supporting facts:
- There are currently about over 40 international processes setting out how to govern AI
- Few are binding like European, some are voluntary non-binding initiatives developed by different industries
Topics: AI governance, non-binding principles
An independent audit mechanism is needed to assess AI outcomes
Supporting facts:
- Google’s actions at the engineering level could be audited to see if they are producing the desired outcomes
Topics: AI auditing, AI outcomes
Need for independent audit process in AI applications
Supporting facts:
- Independent audits can assess whether applications are actually promoting fairness and inclusivity
- The wider public can be satisfied via an audit
Topics: AI applications, independent audit, trust in AI
The representation of women in the tech field is extremely low
Supporting facts:
- In Cambodia, the speaker works, only 1.2% of girls choose to study technology
- When typing ‘Asian programmer’ into Google search, only one or two Asian faces appear among 20 images
Topics: Women in tech, Gender Equality, Education
AI adoption and the growth provides positive vision and many opportunities for female workers
Supporting facts:
- Most of the girls would choose a social or humanitarian subject and generative AI can be an intersection between these interests and technology
Topics: AI, Women empowerment, Opportunities
There are many barriers to the broader introduction and engagement of female workers
Supporting facts:
- Google’s services aren’t working well with the Khmer language
- Lack of representation in visual search results
Topics: Barriers, Women in tech, Gender equality
Report
The discussions surrounding gender inclusivity in AI highlight several concerns. One prominent issue is the presence of biased outputs, which are often identified after the fact and require corrections or fine-tuning. This reactive approach implies that more proactive measures are needed to address these biases.
Furthermore, the training data used for AI might perpetuate gender gaps, as there is a lack of transparency regarding the percentage of women-authored data used. This opacity poses a challenge in accurately assessing the gender inclusivity of AI models. Another factor contributing to gender gaps in AI is the digital divide between the Global North and the Global South.
It has been observed that most online users in the Global South are male, which suggests a lack of diverse representation in the training data. This further widens the gender gap within AI systems. To promote gender inclusivity, there is a growing consensus that greater diversity in training data is necessary.
While post-output fine-tuning is important, it is equally essential to ensure the diversity of inputs. This can be achieved by using more representative training data that includes contributions from a wide range of demographics. There are also concerns about the interaction between AI and gender inclusivity, particularly with regards to surveillance.
The use of AI in surveillance systems raises questions about privacy, biases, and potential infringements on individuals’ rights. This highlights the need for careful consideration of the impact of AI systems on gender equality, as they can unintentionally reinforce existing power dynamics.
In terms of governance, there is a debate about the value of non-binding principles in regulating AI. Many international processes have attempted to set out guidelines for AI governance, but few are binding. This lack of consistency and overlapping initiatives raises doubts about the effectiveness of these non-binding principles.
On the other hand, there is growing support for the implementation of independent audit mechanisms to assess AI outcomes. An independent audit would allow for the examination of actions taken by companies like Google to determine whether they are producing the desired outcomes.
This mechanism would provide a more objective assessment of the impact of AI and help hold companies accountable. Investing in developing independent audit mechanisms for AI is seen as a more beneficial approach than engaging in non-binding conversations or relying solely on voluntary principles.
This suggests that tangible actions and oversight are needed to ensure that AI systems operate in an ethical and inclusive manner. The representation of women in the tech field remains extremely low. Factors such as language barriers and a lack of representation in visual search results contribute to this underrepresentation.
To address this, there needs to be a greater focus on upskilling, reskilling, and the introduction of the female voice in AI. This includes encouraging more girls to pursue technology-related studies and creating opportunities for women to engage with AI-based technologies.
Overall, while there are challenges and concerns surrounding gender inclusivity in AI, there is also recognition of the positive vision and opportunities that AI adoption can provide for female workers. By addressing these issues and actively working towards gender equality, AI has the potential to become a powerful tool for promoting a more inclusive and diverse society.
Bobina Zulfa
Speech speed
184 words per minute
Speech length
2153 words
Speech time
702 secs
Arguments
Questioning the notion of benefit from AI technologies for different communities
Supporting facts:
- As AI technologies evolve and are adopted across different communities, it is important to understand what ‘benefit’ means for these communities.
- Technologies may produce unexpected outcomes and may harm more than help in some instances.
Topics: AI adoption, Impact of AI
Promoting emancipatory and liberatory AI
Supporting facts:
- Interested in moving towards greater agency, freedom, non-discrimination, equality in AI technologies.
- Technologies should be relevant to communities’ needs and realities.
Topics: AI Ethics, Technology and Society
Concerns with data cleaning work and labour
Supporting facts:
- There’s human labour involved in data cleaning, which can have implications for workers’ quality of life.
- It is important to acknowledge and support the people who do this cleaning work.
Topics: Data Cleaning, Human Labour, AI and Bias
Need to think about potential issues and impacts of AI technologies from the start of development
Supporting facts:
- It’s important to balance out competing interests
- We need to avoid being reactive and instead be proactive
Topics: AI, Intersectionality, Market Development
Skepticism towards the idea of using small data sets to detect bias
Supporting facts:
- Concern that the limited data sets may not represent a big part of the global majority
- If data is not representative, it could reflect in the end products
Topics: AI, Data Analysis, Bias
Report
A recent analysis of different viewpoints on AI technologies has revealed several key themes. One prominent concern raised by some is the need to understand the concept of “benefit” in relation to different communities. The argument is that as AI technologies evolve and are adopted across various communities, it is vital to discern what “benefit” means for each community.
This is crucial because technologies may produce unexpected outcomes and may potentially harm rather than help in certain instances. This negative sentiment stems from the recognition that the impact of AI technologies is not uniform and cannot be assumed to be universally advantageous.
On the other hand, there is a call to promote emancipatory and liberatory AI, which is seen as a positive development. The proponents of this argument are interested in moving towards greater agency, freedom, non-discrimination, and equality in AI technologies.
The emphasis is on AI technologies being relevant to communities’ needs and realities, ensuring that they support the ideals of non-discrimination and equality. This perspective acknowledges the importance of considering the socio-cultural context in which AI technologies are deployed and the need to design and implement them in a way that reflects the values and goals of diverse communities.
Another critical view that emerged from the analysis is the need to move away from techno-chauvinism and solutionism. Techno-chauvinism refers to the belief that any and every technology is inherently good, while techno-solutionism often overlooks the potential negative impacts of technologies.
The argument against these views is that it is crucial to recognize that not all technologies are beneficial for everyone and that some technologies may not be relevant to communities’ needs. It is essential to critically evaluate the potential harms and benefits of AI technologies and avoid assuming their inherent goodness.
The analysis also highlighted concerns regarding data cleaning work and labour. It is important to acknowledge and support the people who perform this cleaning work, as their labour has implications for their quality of life. This perspective aligns with the goal of SDG 8: Decent Work and Economic Growth, which emphasizes promoting decent work conditions and ensuring fair treatment of workers involved in data cleaning processes.
Furthermore, the analysis identified issues with consent in Femtech apps. Femtech refers to technology aimed at improving women’s health and well-being. The concerns raised encompass confusing terms and conditions and possible data sharing with third parties. The lack of meaningful consent regimes in Femtech apps can have significant implications for gender inequality.
This observation underscores the need for robust privacy measures and clear and transparent consent processes in Femtech applications. The analysis also noted the importance of considering potential issues and impacts of AI technologies from the early stages of development. Taking a proactive approach, rather than a reactive one, can help address and mitigate any potential negative consequences.
By anticipating and addressing these issues, the development and implementation of AI technologies can be more socially responsible and in line with the ideals of sustainable development. Skepticism was expressed towards the idea of using small data sets to detect bias.
The argument is that limited data sets may not represent a significant portion of the global majority. If the data used in AI algorithms is not representative, it could lead to biased outcomes in the end products. This skepticism highlights the need to ensure diverse and inclusive data sets that reflect the diversity of communities and avoid reinforcing existing biases.
Finally, the analysis highlighted initiatives such as OECD’s principles that could help address the potential issues surrounding AI technologies. These principles stimulate critical thinking about the potential social, economic, and ethical impacts of AI technologies from the outset. Several organizations are actively promoting these principles, indicating a positive and proactive approach towards ensuring responsible and trustworthy AI development and deployment.
In conclusion, the analysis of different viewpoints on AI technologies revealed a range of concerns and perspectives. It is important to understand the notion of benefit for different communities and recognize that technologies may have unintended harmful consequences. However, there is also a call for the promotion of emancipatory and liberatory AI that is relevant to communities’ needs, supports non-discrimination and equality.
Critical views on techno-chauvinism and solutionism emphasized the need to move away from assuming the inherent goodness of all technologies. Additional concerns included issues with data cleaning work and labour, consent in Femtech apps, potential issues and impacts from the start of AI technology development, skepticism towards using small data sets to detect bias, and the importance of initiatives like OECD’s principles.
This analysis provides valuable insights into the complex landscape of AI technologies and highlights the need for responsible and ethical decision making throughout their development and deployment.
Christian von Essen
Speech speed
144 words per minute
Speech length
1242 words
Speech time
518 secs
Arguments
The use of AI language understanding in reducing unnecessary sexual results in searches
Supporting facts:
- In 2022, they shared that they reduced unnecessary sexual results by 30% in the previous year using AI language understanding.
- They saw a similar improvement in the following year and are still working on reducing the bad content further.
Topics: AI, Natural Language Processing, Search algorithms
Measurement of success in mitigating bias needs comparison across different data slices
Supporting facts:
- To measure success, they compare how the classifiers perform across different slices such as LGBTQ, gender, race
- The probability of predicting porn should be the same for any slice of data, given the same labels
Topics: Measurement, Bias Mitigation, Data Slicing
Counterfactual fairness in AI involves ensuring the outcome of a classifier should not change significantly when a query is modified by replacing terms related to marginalized minority groups.
Supporting facts:
- When a query such as ‘black woman video’ is used, the likelihood that a classifier predicts something about the person should remain the same when the terms are replaced with ‘black man video’ or ‘white woman video’.
Topics: Counterfactual Fairness, AI Equality
Ablation, an aspect of counterfactual fairness, means that the classifiers should remain fair even when certain terms are removed from a query.
Supporting facts:
- The output of classifiers should not change significantly whether the query is ‘black woman video’, ‘black woman dress’, or just ‘woman dress’.
Topics: Ablation, AI Equality
The behavior of the classifiers and systems should remain the same across all slices of data, not just for gender and race terms.
Supporting facts:
- This fairness should extend beyond terms specific to gender and race to include other categories as well, such as LGBTQ queries.
Topics: AI Equality, Data Slices
Counterfactual fairness is a necessary first step to augment your training data and create fair classifiers.
Topics: Counterfactual Fairness, AI Equality
Initial focus was on creating credible language models; bias correction came later
Supporting facts:
- In the early years of BERT and language models, the priority was to create models that were credible and useful before focusing on fine tuning and bias correction
Topics: BERT, Language Models, Bias Correction
Ensuring representativeness of training data is becoming a prior step before fine tuning.
Supporting facts:
- The first step of training is ever increasingly about making sure that the data is representative
Topics: Training Data, Data Representativeness, Fine Tuning
Report
The implementation of AI language understanding has yielded promising results in reducing the presence of inappropriate sexual content in search results. It was reported in 2022 that there had been a 30% decrease in such content from the previous year, thanks to the application of AI algorithms.
This positive development has continued in subsequent years, with ongoing efforts to further decrease the presence of harmful content. Addressing bias in AI is a crucial aspect of promoting equality, and specific measures have been taken to ensure that training data includes protected minority groups.
To counteract bias, training data now includes groups such as “Caucasian girls,” “Asian girls,” and “Irish girls.” Additionally, patterns across different groups are utilized to automatically expand the scope from one group to another, effectively reducing biases in AI systems.
Success in mitigating bias is measured by comparing the performance of classifiers across different data slices, including LGBTQ, gender, and race. The goal is to ensure that the probability of predicting inappropriate content remains consistent across all data slices, regardless of individual characteristics.
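To illustrate how such a comparison could be carried out, the sketch below computes the rate at which a classifier flags queries as inappropriate within each data slice, restricted to examples that carry the same ground-truth label; the function name predict_inappropriate, the data format, and the example slices are assumptions for the sketch and not a description of Google’s actual pipeline.

```python
# Illustrative sketch only: compare a binary classifier's flag rate across
# data slices (e.g. LGBTQ-, gender-, or race-related queries) on examples
# that share the same ground-truth label.
from collections import defaultdict

def flag_rate_by_slice(examples, predict_inappropriate, label_of_interest=0):
    """examples: iterable of (query, slice_name, true_label) tuples.

    Keeps only examples with `label_of_interest` (e.g. benign queries) so the
    slices are compared on identically labelled data, then returns the share
    of those queries the classifier still flags, per slice.
    """
    flagged = defaultdict(int)
    total = defaultdict(int)
    for query, slice_name, true_label in examples:
        if true_label != label_of_interest:
            continue
        total[slice_name] += 1
        flagged[slice_name] += int(predict_inappropriate(query))
    return {name: flagged[name] / total[name] for name in total}
```

Under this sketch, a benign LGBTQ-related slice being flagged markedly more often than a benign baseline slice would be exactly the kind of disparity the measurement is meant to surface.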
The inclusion of corrective training data and the application of additional methods have led to significant improvements in the equality of quality across different data slices. These improvements are evident when comparing models to baseline models. Furthermore, the introduction of more methods and data further enhances these gains.
Counterfactual fairness in AI involves making sure that the outcome of a classifier doesn’t significantly change when certain terms related to marginalized minority groups are modified. For example, if a search query includes the term “black woman video,” the classifier should predict a similar outcome if the term is replaced with “black man video” or “white woman video.” This approach ensures fairness across all user groups, regardless of their background or identity.
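As a rough illustration of what a counterfactual swap test might look like in practice, the sketch below scores a query and its term-swapped variants and reports any variant whose score moves by more than a tolerance; score_query, the phrase list, and the threshold are assumptions for the example rather than a real API.

```python
# Illustrative sketch of a counterfactual swap check: replacing one identity
# phrase with another should not materially change the classifier's score.
import itertools

IDENTITY_PHRASES = ["black woman", "black man", "white woman", "white man"]

def counterfactual_violations(query, score_query, phrases=IDENTITY_PHRASES, tol=0.05):
    """Return swapped variants of `query` whose score differs from the
    original by more than `tol` -- candidates for fairness debugging."""
    original = score_query(query)
    violations = []
    for old, new in itertools.permutations(phrases, 2):
        if old not in query:
            continue
        variant = query.replace(old, new)
        delta = abs(score_query(variant) - original)
        if delta > tol:
            violations.append((variant, delta))
    return violations
```

For a query such as “black woman video”, variants like “black man video” or “white woman video” should score roughly the same; any variant the check flags marks a gap to investigate.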
Ablation, which is also a part of counterfactual fairness, focuses on maintaining fairness even when specific terms are removed from a query. The output of classifiers should not change significantly, whether the query includes terms like “black woman video,” “black woman dress,” or simply “woman dress.” This helps ensure fairness in AI systems by reducing the impact of specific terms or keywords.
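The ablation check can be sketched in the same spirit: remove one identity term at a time and confirm the score stays stable. Again, score_query and the term list are assumptions for the example.

```python
# Illustrative sketch of an ablation check: dropping an identity term from a
# query should not swing the classifier's output.
IDENTITY_TERMS = {"black", "white", "asian", "woman", "man"}

def ablation_violations(query, score_query, terms=IDENTITY_TERMS, tol=0.05):
    """Return ablated variants of `query` (one identity term removed) whose
    score differs from the original by more than `tol`."""
    original = score_query(query)
    tokens = query.split()
    violations = []
    for i, token in enumerate(tokens):
        if token not in terms:
            continue
        ablated = " ".join(tokens[:i] + tokens[i + 1:])
        delta = abs(score_query(ablated) - original)
        if delta > tol:
            violations.append((ablated, delta))
    return violations
```

Here “black woman dress”, “woman dress”, and “dress” would all be expected to receive similar scores.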
Fairness in AI systems should not be limited to gender and race-related terms. The behavior of classifiers and systems should remain consistent across all data slices, including categories such as LGBTQ queries. This comprehensive approach ensures fairness for all users, irrespective of their identities or preferences.
Counterfactual fairness is considered a necessary initial step in augmenting training data and creating fair classifiers. By ensuring that classifiers’ predictions remain consistent across different query modifications or term replacements related to marginalized minority groups, AI systems can strive for fairness and inclusivity.
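One hedged way to picture how counterfactual checks can feed back into training data is the augmentation sketch below, which emits label-preserving copies of each example with identity terms swapped so that a classifier cannot learn to key on the terms themselves; the term groups and data format are assumptions for the illustration, not the method actually used in production.

```python
# Illustrative sketch of counterfactual data augmentation: each labelled query
# containing an identity term also yields label-preserving swapped copies.
TERM_GROUPS = [("woman", "man"), ("black", "white", "asian"), ("girls", "boys")]

def augment_with_counterfactuals(examples, term_groups=TERM_GROUPS):
    """examples: iterable of (query, label); yields originals plus swapped copies."""
    for query, label in examples:
        yield query, label
        tokens = query.split()
        for group in term_groups:
            for i, token in enumerate(tokens):
                if token in group:
                    for alternative in group:
                        if alternative != token:
                            swapped = tokens[:i] + [alternative] + tokens[i + 1:]
                            yield " ".join(swapped), label
```

Augmented examples of this kind are one possible form of the corrective training data discussed above, though the session does not specify the exact data sources or methods used.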
While the initial focus of language models like BERT was on creating credible and useful models, efforts to address bias and fine-tune these models were incorporated later. It was vital to establish the credibility and usefulness of such models before incorporating bias correction techniques.
As AI models continue to grow in size, selecting appropriate training data becomes increasingly challenging. This recognition highlights the need for meticulous data selection and representation to ensure the accuracy and fairness of AI systems. Ensuring the representativeness of training data is seen as a priority before fine-tuning the models.
By incorporating representative data from diverse sources and groups, AI systems can better account for the various perspectives and experiences of users. The distinction between fine-tuning and the initial training step is becoming more blurred, making it difficult to identify where one ends and the other begins.
This intermingling of steps in the training process further emphasizes the complexity and nuances involved in effectively training AI models. In conclusion, the use of AI language understanding has made significant progress in reducing inappropriate sexual content in search results.
Efforts to address bias and promote equality through the inclusion of training data for protected minority groups, comparing classifier performance across different data slices, and ensuring counterfactual fairness have proven successful. However, it is essential to extend fairness beyond gender and race to encompass other categories such as LGBTQ queries.
The ongoing efforts to improve the credibility, bias correction, and selection of training data highlight the commitment to creating fair and inclusive AI systems.
Emma Gibson – audience
Speech speed
170 words per minute
Speech length
175 words
Speech time
62 secs
Arguments
Emma Gibson supports the idea of adopting equality by design principles in every phase of digital technology development
Supporting facts:
- The Equal Rights Trust launched equality by design principles
- Emma introduced the Principles for a Feminist Global Digital Compact
Topics: Equality by Design Principles, Algorithmic Decision Making Systems, Digital Technology Development
Report
The Equal Rights Trust has recently launched a set of equality by design principles, which has received support from Emma Gibson. Emma, a strong advocate for gender equality and reduced inequalities, believes in the importance of incorporating these principles at all stages of digital technology development.
Her endorsement highlights the significance of considering inclusivity and fairness during the design and implementation of digital systems. Emma also emphasizes the need for independent audits so that digital systems do not perpetuate existing biases or discriminatory practices and instead promote fairness and justice.
Conducting regular audits allows for the identification and effective addressing of any biases or discriminatory patterns within these digital systems. The alignment between these principles and audits with the Sustainable Development Goals (SDGs) further reinforces their importance. Specifically, they contribute to SDG 5 on Gender Equality, SDG 10 on Reduced Inequalities, and SDG 16 on Peace, Justice, and Strong Institutions.
By integrating these principles and performing regular audits, we can strive towards bridging the digital divide, reducing inequalities, and fostering a more inclusive and just society. In conclusion, the equality by design principles introduced by the Equal Rights Trust, with support from Emma Gibson, offer valuable guidance for digital technology development.
Emma’s advocacy for independent audits underscores the necessity of bias-free systems. By embracing these principles and conducting regular audits, we can work towards creating a more inclusive, equal, and just digital landscape.
Emma Higham
Speech speed
186 words per minute
Speech length
2385 words
Speech time
768 secs
Arguments
Google is using AI to make their search system safer and more inclusive
Supporting facts:
- Emma Higham works with the SafeSearch engineering team as a product manager
- The technology enables them to test their systems
Topics: Artificial Intelligence, Inclusivity, Online Safety
Google’s mission is about organizing the world’s information, making it universally accessible and helpful.
Supporting facts:
- Emma Higham mentions Google’s mission in her introduction.
Topics: Google, Information accessibility
AI is instrumental in pattern matching at scale, being effective for complex math and understanding inclusion problems.
Supporting facts:
- Emma Higham described AI as essentially pattern matching at scale.
Topics: AI, Pattern matching
Google strives to never shock or offend people with explicit or graphic content unless it’s what the user is looking for.
Supporting facts:
- Emma mentions that ‘we never want to shock or offend you with explicit or graphic content when it’s not what you’re looking for’ is one of their guidelines.
Topics: Google, Content moderation
Google uses guidelines to help understand the quality of results, and these guidelines include the principle to not shock or offend users with unsought explicit content.
Supporting facts:
- Emma mentions the 160-page-long guidelines for raters, which aim to improve search results quality.
Topics: Google, Ratings, Content moderation
Systems can be held accountable by testing their fairness for all user groups
Supporting facts:
- AI reflects biases in training data
- Addressing biases systematically can lead to significant gains in equity
Topics: AI, Natural Language Processing, Diversity
Systems should be accountable and any disparities should be reduced
Supporting facts:
- Questioning system performance on queries such as ‘black woman hairstyles’ versus ‘white woman hairstyles’
- Finding disparities in search results
Topics: Artificial Intelligence, Data Justice, System Bias
Technological development process is cyclical
Supporting facts:
- The first step involves creating the technology
- This is followed by fine tuning the technology
- If it did not work well, the cycle goes back to the first step
Topics: AI development, Technology
AI is improving language understanding and concept understanding at scale
Supporting facts:
- Emma Higham discusses the use of AI in Google Translate and how it has improved over time for better gender inclusion.
- AI has enabled users to set their preferred form factor in Google Translate, eliminating the need to default to certain assumptions about audience gender.
Topics: AI, language understanding, concept understanding
AI technology has improved inclusivity in products such as Google Translate and Google Search
Supporting facts:
- Google Translate has got better at understanding gender in different languages and allowing users to set their preferred form factor.
- Due to user feedback, Google Search improved to display results for the women’s football team when searching for the Women’s World Cup.
- AI is aiding in eliminating gender bias in language translation and search results.
Topics: AI, Google Translate, Google Search, inclusivity
Report
Google is leveraging the power of Artificial Intelligence (AI) to enhance the safety and inclusivity of their search system. Emma Higham, a product manager at Google, works closely with the SafeSearch engineering team to achieve this goal. By employing AI technology, they can test and refine their systems, ensuring a safer and more inclusive user experience.
Google’s mission is to organize the world’s information and make it universally accessible and helpful. Emma Higham highlights this commitment, emphasizing Google’s dedication to ensuring information is available to all. AI technology plays a vital role in this mission, facilitating efficient pattern matching at scale and addressing inclusion issues effectively.
Google’s approach prioritizes providing search results that do not shock or offend users with explicit or graphic content unrelated to their search. Emma Higham mentions that this principle is one of their guidelines, reflecting Google’s commitment to user safety and a positive search experience.
Guidelines are crucial for assessing search result quality and improving user satisfaction. Google has comprehensive guidelines for raters, aiming to enhance search result quality. These guidelines include the principle of avoiding shocking or offending users with unsought explicit content. Adhering to these guidelines ensures search results that meet user needs and expectations.
Addressing biases in AI systems is another important aspect for Google. Emma Higham acknowledges that AI algorithms can reflect biases present in training data. To promote fairness, Google systematically tests the fairness of their AI systems across diverse user groups.
This commitment to accountability ensures equitable search results and user experiences for everyone. Google actively collaborates with NGOs worldwide to enhance safety and handle crisis situations effectively. Their powerful AI system, MUM, enables more efficient handling of personal crisis searches.
With operability in 75 languages and partnerships with NGOs, Google aims to improve user safety on a global scale. In the development process of AI technology, Google follows a cyclical approach. It involves creating the technology initially, followed by fine-tuning and continuous improvement.
If the technology does not meet the desired standards, it goes back to the first step, allowing Google to iterate and refine their AI systems. Safety and inclusivity are essential considerations in the design of AI technology. Emma Higham emphasizes the importance of proactive design to ensure new technologies are developed with safety and inclusivity in mind.
By incorporating these principles from the beginning, Google aims to create products that are accessible to all users. AI has also made significant strides in language and concept understanding. Emma Higham highlights improvements in Google Translate, where AI technology has enhanced gender inclusion by allowing users to set their preferred form factor.
This eliminates the need for default assumptions about a user’s gender and promotes inclusivity in language translation. User feedback is paramount in improving systems and meeting high standards. Emma Higham provides an example of how user feedback led to improvements in the Google Search engine during the Women’s World Cup.
Holding themselves accountable to user feedback drives Google to deliver better services and ensure their products consistently meet user expectations. In conclusion, Google’s use of AI technology is instrumental in creating a safe and inclusive search system. Through collaboration with the SafeSearch engineering team, Google ensures continuous testing and improvement of their systems.
Guided by their mission to organize information and make it universally accessible, AI aids pattern matching at scale and tackles complex mathematical problems. Google’s commitment to avoiding explicit content, addressing biases, and incorporating user feedback strengthens their efforts towards a safer and more inclusive search experience.
Additionally, their partnership with NGOs and the development of MUM showcases their dedication to improving safety and handling crisis situations effectively. By embracing proactive design and incorporating user preferences, AI technology expands inclusivity in products such as Google Translate.
Jenna Manhau Fung
Speech speed
147 words per minute
Speech length
1011 words
Speech time
412 secs
Arguments
AI can eliminate human bias
Supporting facts:
- AI can eliminate unintentional human bias and bring more neutrality
Topics: Artificial Intelligence, Human Bias, Inclusion
Importance of user and technical community involvement in policy-making
Supporting facts:
- Involving users and technical experts can help in making comprehensive and effective policies for complex technologies like AI
Topics: Policy-making, User Involvement, Technical Community
Importance of international standards
Supporting facts:
- International standards can help countries modernize their legal framework and guide industries
Topics: International Standards, Legal Framework
Report
The analysis of the speeches reveals several significant findings. Firstly, it highlights that AI can eliminate unintentional human bias and bring more impartiality. This is valuable as it ensures fair decision-making processes and reduces discrimination that may arise from human biases.
Leveraging AI technology can enable organizations to improve their practices and achieve greater objectivity. Another important point emphasized in the analysis is the significance of involving users and technical experts in the policymaking process, particularly in relation to complex technologies like AI.
By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leading to the creation of more comprehensive and effective policies. This ensures that policies address the diverse needs and concerns of different stakeholders and promote equality and inclusivity.
Moreover, the analysis underscores the importance of international standards in the context of AI and related industries. International standards can assist countries in modernizing their legal frameworks and guiding industries in a way that aligns with ethical considerations and societal needs.
These standards promote consistency and harmonization across different regions and facilitate the adoption of AI technologies in an accountable and inclusive manner. In addition to these main points, the analysis highlights the need for an inclusion mechanism for small-scale writers.
It argues that such a mechanism is essential to address situations where the content of these writers does not appear in search engine results due to certain policies. This observation is supported by a personal experience shared by one of the speakers, who explained that her newsletter did not appear in Google search results because of existing policies.
Creating an inclusion mechanism would ensure fair visibility and opportunities for small-scale writers, promoting diversity and reducing inequality in the digital domain. Overall, the analysis emphasizes the transformative potential of AI in eliminating biases and promoting neutrality. It underscores the importance of involving users and technical experts in policymaking, the significance of international standards, and the need for an inclusion mechanism for small-scale writers.
These insights reflect the importance of considering diverse perspectives, fostering inclusivity, and striving for fairness and equality in the development and implementation of AI technologies.
Jim Prendergast
Speech speed
248 words per minute
Speech length
118 words
Speech time
29 secs
Arguments
Dr. Luciana Bonatti from the National University of Cordoba in Argentina could not present due to a wildfire outbreak that caused her and her family to evacuate.
Supporting facts:
- There’s an outbreak of wildfires in that part of Argentina, and she and her family had to evacuate
Topics: Dr. Luciana Bonatti, National University of Cordoba, Argentina, wildfire
Report
Dr. Luciana Bonatti, a representative from the National University of Cordoba in Argentina, was unable to present due to an outbreak of wildfires in the area. The severity of the situation forced her and her family to evacuate their home, resulting in her unavoidable absence.
The wildfires that plagued the region prompted Dr. Bonatti’s evacuation, highlighting the immediate danger posed by the natural disaster. The outbreak of wildfires is a significant concern, not only for Dr. Bonatti, but also for the affected community as a whole.
The intensity of the situation can be inferred from the negative sentiment expressed in the summary. Jim Prendergast demonstrated empathy and solidarity towards Dr. Bonatti during this challenging time, acknowledging her circumstances, expressing sympathy, and conveying his well wishes in the hope of a positive resolution for Dr.
Bonatti and her family. His positive sentiment demonstrates support and concern for her well-being. It is worth noting the related Sustainable Development Goals (SDGs) mentioned in the summary. The wildfire outbreak in Argentina aligns with SDG 13: Climate Action, as efforts are necessary to address and mitigate the impacts of climate change-induced disasters like wildfires.
Additionally, the mention of SDG 3: Good Health and Well-being and SDG 11: Sustainable Cities and Communities in relation to Jim Prendergast’s stance signifies the broader implications of the situation on public health and urban resilience. In conclusion, Dr. Luciana Bonatti’s absence from the presentation was a result of the wildfire outbreak in Argentina, which compelled her and her family to evacuate.
This unfortunate circumstance received empathetic support from Jim Prendergast, who expressed sympathy and wished for a positive outcome. The summary highlights the implications of the natural disaster in the context of climate action and sustainable development goals.
Lucia Russo
Speech speed
132 words per minute
Speech length
2608 words
Speech time
1188 secs
Arguments
OECD AI principles, aimed at guiding responsible and innovative AI development, promote gender equality
Supporting facts:
- These principles were developed in a multi-stakeholder process involving over 50 experts
- The principles promote AI that is based on human-centered values and fairness, with an inclusive growth and sustainable development focus
- 46 countries have currently adhered to these principles
Topics: OECD AI principles, AI development, Gender equality
Countries globally have implemented policies in line with the OECD AI principles
Supporting facts:
- The US has set up a program to improve data quality for AI and increase underrepresented communities in the AI realm
- The UK’s Alan Turing Institute has a program to increase women’s participation in AI and explore gender gaps in AI design
- The Netherlands and Finland have worked on guidelines for non-discriminatory AI systems in the public sector
Topics: OECD AI principles, AI policies, National implementation
The OECD AI Policy Observatory serves as a platform to share tools for reliable AI
Supporting facts:
- The Observatory allows organizations globally to submit tools for use by others
- The platform includes a searchable database of tools aimed at various objectives, including reducing bias and discrimination
Topics: OECD AI Policy Observatory, AI tools
The OECD has two papers on generative AI
Supporting facts:
- One paper analyzes the models, their evolution, policy implications, and safety measures
- There is another paper on the G7 Hiroshima process involving generative AI
Topics: Generative AI, AI Policy
There is likely a significant overlap among different AI principles
Supporting facts:
- The UK’s approach to AI regulation is based on high-level principles that overlap with the OECD principles.
- The NIST management framework in the US is closely linked to the work of the OECD.
- The EU AI Act takes a risk-based approach by classifying the risks of AI systems.
Topics: AI Regulation, OECD Principles, AI Management Frameworks
Approach to AI regulation is still in the experimental phase
Supporting facts:
- Not all AI systems are equal and thus different laws and principles may apply depending on the context and risk factor.
- There are ongoing discussions and developments regarding AI regulation.
- There are attempts at international coordination like the G7, safety summit by the UK, and work by the UN.
Topics: AI Regulation, AI Management Frameworks, EU AI Act
There’s a low representation of women in the AI industry
Supporting facts:
- In OECD countries, more than twice as many young men as women can program
- Only 1 in 4 researchers publishing on AI worldwide are women
- Survey of Stack Overflow users in 2022 showed only 4% of respondents were female
- LinkedIn data suggests professional females with AI skills represent less than 2% of workers in most countries
Topics: Gender Equality, AI industry, AI Development
Encourage the promotion of policies that increase women representation in AI
Supporting facts:
- Policies could include development of AI-specific skills for women and promotion of scholarships or programs at universities
- Germany is providing funding to women-led research teams in AI
Topics: AI industry, Government policy, Gender Equality
Generative AI can speed up coding processes and make it easier for people to learn to code.
Supporting facts:
- Coding co-pilots can expedite coding time
Topics: Generative AI, Coding, Education
There is a need for promotion and investment in developing large language models in languages other than English
Topics: Artificial Intelligence, Machine Learning, Language Diversity
Report
The Organisation for Economic Cooperation and Development (OECD) has developed a set of principles aimed at guiding responsible and innovative artificial intelligence (AI) development. These principles promote gender equality and are based on human-centered values and fairness, with a focus on inclusive growth and sustainable development.
Currently, 46 countries have adhered to these principles. To implement these principles, countries have taken various policy initiatives. For example, the United States has established a program to improve data quality for AI and increase the representation of underrepresented communities in the AI industry.
Similarly, the Alan Turing Institute in the United Kingdom has launched a program to increase women’s participation in AI and examine gender gaps in AI design. The Netherlands and Finland have also worked on developing guidelines for non-discriminatory AI systems in the public sector.
These policy efforts demonstrate a commitment to aligning national strategies with the OECD AI principles. The OECD AI Policy Observatory serves as a platform for sharing tools and resources related to reliable AI. This platform allows organizations worldwide to submit their AI tools for use by others.
It also includes a searchable database of tools aimed at various objectives, including reducing bias and discrimination. By facilitating the sharing of best practices and tools, the Observatory promotes the development of AI in line with the OECD principles. In addition to the policy-focused initiatives, the OECD has published papers on generative AI and big trends in AI analysis.
These papers provide analysis on AI models, their evolution, policy implications, safety measures, and the G7 Hiroshima process involving generative AI. While the OECD focuses on analyzing major trends in AI, it is not primarily focused on providing specific tools or resources.
There is an acknowledgement of the need for more alignment and coordination in the field of AI regulation. Efforts are being made to bring stakeholders together and promote coordination. For instance, the United Kingdom is promoting a safety summit to address AI risks, and the United Nations is advancing work in this area.
The existence of ongoing discussions and developments demonstrates that the approach to AI regulation is still in the experimental phase. The representation of women in the AI industry is a significant concern. Statistics show a low representation of women in the industry, with more than twice as many young men as women capable of programming in OECD countries.
Only 1 in 4 researchers publishing on AI worldwide are women, and female professionals with AI skills represent less than 2% of workers in most countries. To address this issue, policies encouraging women’s involvement in science, technology, engineering, and mathematics (STEM) fields are important.
Role models, early exposure to coding, and scholarships are mentioned as ways to increase women’s participation in these areas. Furthermore, there is a need to promote and invest in the development of large language models in languages other than English.
This would contribute to achieving Sustainable Development Goals related to industry, innovation, infrastructure, and reduced inequalities. Overall, the OECD’s principles and initiatives provide a framework for responsible and inclusive AI development. However, there is a need for greater coordination, alignment, and regulation in the field.
Efforts to increase women’s representation in the AI industry and promote diversity in language models are essential for a more equitable and sustainable AI ecosystem.
Moderator – Charles Bradley
Speech speed
168 words per minute
Speech length
2232 words
Speech time
795 secs
Arguments
Charles Bradley is hosting a session on leveraging AI to support gender inclusivity.
Supporting facts:
- The session aims to identify how AI can be used as a tool for good, especially in promoting gender inclusivity.
- The session involves a panel of experienced speakers and seeks to challenge existing beliefs and foster learning new perspectives.
Topics: AI, Gender inclusivity
Acknowledgement of Jenna Manhau Fung’s insights and experiences
Supporting facts:
- Charles Bradley acknowledges Jenna Manhau Fung’s experiences with youth engagement in AI and policy-making
- Cites her experience dealing with Google’s search policies as a small-scale writer
Topics: Youth Engagement, AI, Policy Making
Interested in Google’s counterfactual fairness
Supporting facts:
- Charles Bradley expresses curiosity about Google’s approach to counterfactual fairness
Topics: Google, Counterfactual Fairness, AI
Von Essen’s talk involves counterfactual fairness
Supporting facts:
- They use approaches involving counterfactual similarity
- Ablation of certain terms has been helpful in their work
Topics: Counterfactual Fairness, Data Science
Charles Bradley emphasized the need for trust in AI measurement systems
Supporting facts:
- Google is already working on measuring and reducing biases in their AI systems
- Google was also able to reduce offensive content using their technology
Topics: AI regulation, Measurement of bias
Generative AI can help speed up time to code and can also be a tool for people to learn to code quickly
Supporting facts:
- Discussion about the positive aspect of generative AI in coding and learning
- Investment in training large language models in languages other than English
Topics: Generative AI, Coding, Learning
The importance of role models in encouraging more young women in the field of science and coding
Supporting facts:
- Discussion about policies and actions to motivate more women in science
Topics: Role Models, Women in Science, Coding
Report
Charles Bradley is hosting a session that aims to explore the potential of artificial intelligence (AI) in promoting gender inclusivity. The session features a panel of experienced speakers who will challenge existing beliefs and encourage participants to adopt new perspectives.
This indicates a positive sentiment towards leveraging AI as a tool for good. Bradley encourages the panelists to engage with each other’s presentations and find connections between their work. By fostering collaboration, he believes that the session can achieve something interesting.
This highlights the importance of collaborative efforts in advancing gender inclusivity through AI. The related sustainable development goals (SDGs) identified for this topic are SDG 5: Gender Equality and SDG 17: Partnerships for the Goals. Specific mention is made of Jenna Manhau Fung’s experiences in youth engagement in AI and policy-making, as well as her expertise in dealing with Google’s search policies.
This recognition indicates neutral sentiment towards the acknowledgement of Fung’s insights and experiences. The related SDGs for this discussion are SDG 4: Quality Education and SDG 9: Industry, Innovation and Infrastructure. Furthermore, Bradley invites audience members to contribute to the discussion and asks for questions, fostering an open dialogue.
This reflects a positive sentiment towards creating an interactive and engaging session. Another topic of interest for Bradley is Google’s approach to counterfactual fairness, which is met with a neutral sentiment. This indicates that Bradley is curious about Google’s methods of achieving fairness within AI systems.
The related SDG for this topic is SDG 9: Industry, Innovation and Infrastructure. The discussion on biases in AI systems highlights the need for trust and the measurement of bias. Google’s efforts in measuring and reducing biases are acknowledged, signaling neutral sentiment towards their work in this area.
The related SDG for this topic is SDG 9: Industry, Innovation and Infrastructure. Bradley believes that the work on principles will set the stage for upcoming regulation, indicating a positive sentiment towards the importance of establishing regulations for AI. The enforceable output of regulation is seen as more effective than principles alone.
The related SDG for this topic is SDG 9: Industry, Innovation, and Infrastructure. The session also explores the positive aspects of generative AI in the fields of coding and learning. It is suggested that generative AI can speed up the coding process and serve as a tool for individuals to learn coding quickly.
This perspective is met with a positive sentiment and highlights the potential of AI in advancing coding and learning. The related SDGs for this topic are SDG 4: Quality Education and SDG 9: Industry, Innovation, and Infrastructure. Moreover, Bradley emphasizes the importance of investing in AI training in languages other than English, implying a neutral sentiment towards the necessity of language diversity in AI.
This recognizes the need to expand AI capabilities beyond the English language. The related SDG for this topic is SDG 9: Industry, Innovation, and Infrastructure. Lastly, the role of role models in encouraging more young women to enter the fields of science and coding is discussed with a positive sentiment.
Policies and actions to motivate women in science are emphasized, highlighting the importance of representation in these fields. The related SDGs for this topic are SDG 4: Quality Education and SDG 5: Gender Equality. In conclusion, Charles Bradley’s session focuses on exploring the potential of AI in promoting gender inclusivity.
The session aims to challenge existing beliefs, foster learning new perspectives, and encourage collaboration among panelists. It covers a range of topics, including youth engagement in AI, counterfactual fairness, measuring biases, guiding principles, generative AI in coding and learning, investing in language diversity, and the importance of role models.
The session promotes open dialogue and aims to set the stage for future AI regulation.