Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235

12 Oct 2023 04:30h - 06:00h UTC

Event report

Speakers and Moderators

Speakers:
  • Christian von Essen, Private Sector, Western European and Others Group (WEOG)
  • Jenna Manhau Fung, Technical Community, Asia-Pacific Group
  • Neema Iyer, Private Sector, African Group
  • Luciana Benotti, Civil Society, Latin American and Caribbean Group (GRULAC)
  • Lucia Russo, Intergovernmental Organization, Intergovernmental Organization
Moderators:
  • Takeshi Komoto, Private Sector, Asia-Pacific Group

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Christian von Essen

The implementation of AI language understanding has yielded promising results in reducing the presence of inappropriate sexual content in search results. In 2022 it was reported that such content had decreased by 30% from the previous year, thanks to the application of AI algorithms, and this progress has continued, with ongoing efforts to further reduce the presence of harmful content.

Addressing bias in AI is a crucial aspect of promoting equality, and specific measures have been taken to ensure that training data covers protected minority groups. To counteract bias, training data now explicitly includes queries referencing groups such as “Caucasian girls,” “Asian girls,” and “Irish girls.” In addition, patterns learned for one group are automatically generalized to related groups, widening coverage and further reducing bias in AI systems.

Success in mitigating bias is measured by comparing the performance of classifiers across different data slices, including LGBTQ, gender, and race. The goal is to ensure that the probability of predicting inappropriate content remains consistent across all data slices, regardless of individual characteristics.
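As a rough illustration of this sliced evaluation, the sketch below compares a classifier’s rate of “inappropriate” predictions across data slices. It is a minimal sketch, not the actual pipeline described in the session: the function names, the `classifier(query) -> bool` interface, and the slice labels are all hypothetical.

```python
from collections import defaultdict

def flagged_rate_by_slice(examples, classifier):
    """Return the fraction of queries flagged as inappropriate, per data slice."""
    counts = defaultdict(lambda: [0, 0])  # slice -> [flagged, total]
    for query, slice_label in examples:
        counts[slice_label][0] += int(classifier(query))  # hypothetical interface
        counts[slice_label][1] += 1
    return {s: flagged / total for s, (flagged, total) in counts.items()}

# A fair classifier should show roughly equal rates across slices;
# the max-min gap is one simple disparity measure:
# rates = flagged_rate_by_slice(labelled_queries, classifier)
# disparity = max(rates.values()) - min(rates.values())
```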

The inclusion of corrective training data, combined with additional methods, has led to significant improvements in the consistency of quality across data slices, as becomes evident when the resulting models are compared against baseline models. Introducing further methods and data enhances these gains.

Counterfactual fairness in AI involves making sure that the outcome of a classifier doesn’t significantly change when certain terms related to marginalized minority groups are modified. For example, if a search query includes the term “black woman video,” the classifier should predict a similar outcome if the term is replaced with “black man video” or “white woman video.” This approach ensures fairness across all user groups, regardless of their background or identity.
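A minimal sketch of this substitution check follows, assuming a classifier exposed as a `score(query) -> float` function and hand-picked term sets; the names and the tolerance value are illustrative, not details from the talk.

```python
def swap_word(query, old, new):
    """Replace whole-word occurrences of `old` with `new`."""
    return " ".join(new if w == old else w for w in query.split())

def substitution_variants(query, term_sets):
    """Yield the query with each group term swapped for its counterparts."""
    words = query.split()
    for terms in term_sets:
        for term in terms:
            if term in words:
                for other in terms:
                    if other != term:
                        yield swap_word(query, term, other)

def passes_substitution_check(query, term_sets, score, tol=0.05):
    """True if no term swap moves the classifier score by more than tol."""
    base = score(query)
    return all(abs(score(v) - base) <= tol
               for v in substitution_variants(query, term_sets))

# With the session's example, swapping "black" <-> "white" or "woman" <-> "man"
# in "black woman video" should barely move the score:
# term_sets = [["black", "white"], ["woman", "man"]]
# passes_substitution_check("black woman video", term_sets, score)
```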

Ablation, which is also a part of counterfactual fairness, focuses on maintaining fairness even when specific terms are removed from a query. The output of classifiers should not change significantly, whether the query includes terms like “black woman video,” “black woman dress,” or simply “woman dress.” This helps ensure fairness in AI systems by reducing the impact of specific terms or keywords.
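The ablation variant of the check can be sketched in the same hypothetical style: instead of swapping a term, it is dropped, and the score should again stay within tolerance.

```python
def ablation_variants(query, protected_terms):
    """Yield the query with each protected term removed."""
    words = query.split()
    for term in protected_terms:
        if term in words:
            yield " ".join(w for w in words if w != term)

def passes_ablation_check(query, protected_terms, score, tol=0.05):
    """True if dropping a protected term moves the score by at most tol."""
    base = score(query)
    return all(abs(score(v) - base) <= tol
               for v in ablation_variants(query, protected_terms))

# E.g. the scores for "black woman dress", "woman dress" and "black dress"
# should stay close to one another:
# passes_ablation_check("black woman dress", ["black", "woman"], score)
```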

Fairness in AI systems should not be limited to gender and race-related terms. The behavior of classifiers and systems should remain consistent across all data slices, including categories such as LGBTQ queries. This comprehensive approach ensures fairness for all users, irrespective of their identities or preferences.

Counterfactual fairness is considered a necessary initial step in augmenting training data and creating fair classifiers. By ensuring that classifiers’ predictions remain consistent across different query modifications or term replacements related to marginalized minority groups, AI systems can strive for fairness and inclusivity.
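One plausible way such counterfactuals feed back into training data, reusing the hypothetical `substitution_variants` helper sketched above, is to expand every labelled query with its term-swapped variants under the same label, so the classifier cannot key on the group terms themselves. This is an assumption about the general technique, not a description of the speaker’s pipeline.

```python
def augment_with_counterfactuals(dataset, term_sets):
    """Expand each (query, label) pair with its counterfactual variants."""
    augmented = list(dataset)
    for query, label in dataset:
        for variant in substitution_variants(query, term_sets):
            augmented.append((variant, label))  # the label is unchanged
    return augmented

# train = [("black woman video", 0), ("some policy-violating query", 1)]
# train = augment_with_counterfactuals(train, term_sets)
```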

While the initial focus of language models like BERT was on creating credible and useful models, efforts to address bias and fine-tune these models were incorporated later. It was vital to establish the credibility and usefulness of such models before incorporating bias correction techniques.

As AI models continue to grow in size, selecting appropriate training data becomes increasingly challenging. This recognition highlights the need for meticulous data selection and representation to ensure the accuracy and fairness of AI systems.

Ensuring the representativeness of training data is seen as a priority before fine-tuning the models. By incorporating representative data from diverse sources and groups, AI systems can better account for the various perspectives and experiences of users.

The distinction between fine-tuning and the initial training step is becoming more blurred, making it difficult to identify where one ends and the other begins. This intermingling of steps in the training process further emphasizes the complexity and nuances involved in effectively training AI models.

In conclusion, the use of AI language understanding has made significant progress in reducing inappropriate sexual content in search results. Efforts to address bias and promote equality, through the inclusion of training data covering protected minority groups, comparison of classifier performance across data slices, and counterfactual fairness checks, have proven successful. However, it is essential to extend fairness beyond gender and race to encompass other categories such as LGBTQ queries. The ongoing efforts to improve model credibility, bias correction, and the selection of training data highlight the commitment to creating fair and inclusive AI systems.

Emma Gibson – audience

The Equal Rights Trust has recently launched a set of equality by design principles, which has received support from Emma Gibson. Emma, a strong advocate for gender equality and reduced inequalities, believes in the importance of incorporating these principles at all stages of digital technology development. Her endorsement highlights the significance of considering inclusivity and fairness during the design and implementation of digital systems.

Emma also emphasizes the need for independent audits to prevent digital systems from perpetuating existing biases. She stresses that these systems must not entrench discriminatory practices and should instead promote fairness and justice; conducting regular audits allows any biases or discriminatory patterns within them to be identified and addressed effectively.

The alignment between these principles and audits with the Sustainable Development Goals (SDGs) further reinforces their importance. Specifically, they contribute to SDG 5 on Gender Equality, SDG 10 on Reduced Inequalities, and SDG 16 on Peace, Justice, and Strong Institutions. By integrating these principles and performing regular audits, we can strive towards bridging the digital divide, reducing inequalities, and fostering a more inclusive and just society.

In conclusion, the equality by design principles introduced by the Equal Rights Trust, with support from Emma Gibson, offer valuable guidance for digital technology development. Emma’s advocacy for independent audits underscores the necessity of bias-free systems. By embracing these principles and conducting regular audits, we can work towards creating a more inclusive, equal, and just digital landscape.

Audience

The discussions surrounding gender inclusivity in AI highlight several concerns. One prominent issue is the presence of biased outputs, which are often identified after the fact and require corrections or fine-tuning. This reactive approach implies that more proactive measures are needed to address these biases. Furthermore, the training data used for AI might perpetuate gender gaps, as there is a lack of transparency regarding the percentage of women-authored data used. This opacity poses a challenge in accurately assessing the gender inclusivity of AI models.

Another factor contributing to gender gaps in AI is the digital divide between the Global North and the Global South. It has been observed that most online users in the Global South are male, which suggests a lack of diverse representation in the training data. This further widens the gender gap within AI systems.

To promote gender inclusivity, there is a growing consensus that greater diversity in training data is necessary. While post-output fine-tuning is important, it is equally essential to ensure the diversity of inputs. This can be achieved by using more representative training data that includes contributions from a wide range of demographics.

There are also concerns about the interaction between AI and gender inclusivity, particularly with regards to surveillance. The use of AI in surveillance systems raises questions about privacy, biases, and potential infringements on individuals’ rights. This highlights the need for careful consideration of the impact of AI systems on gender equality, as they can unintentionally reinforce existing power dynamics.

In terms of governance, there is a debate about the value of non-binding principles in regulating AI. Many international processes have attempted to set out guidelines for AI governance, but few are binding. This lack of consistency and overlapping initiatives raises doubts about the effectiveness of these non-binding principles.

On the other hand, there is growing support for the implementation of independent audit mechanisms to assess AI outcomes. An independent audit would allow for the examination of actions taken by companies like Google to determine whether they are producing the desired outcomes. This mechanism would provide a more objective assessment of the impact of AI and help hold companies accountable.

Investing in developing independent audit mechanisms for AI is seen as a more beneficial approach than engaging in non-binding conversations or relying solely on voluntary principles. This suggests that tangible actions and oversight are needed to ensure that AI systems operate in an ethical and inclusive manner.

The representation of women in the tech field remains extremely low. Factors such as language barriers and a lack of representation in visual search results contribute to this underrepresentation. To address this, there needs to be a greater focus on upskilling, reskilling, and the introduction of the female voice in AI. This includes encouraging more girls to pursue technology-related studies and creating opportunities for women to engage with AI-based technologies.

Overall, while there are challenges and concerns surrounding gender inclusivity in AI, there is also recognition of the positive vision and opportunities that AI adoption can provide for female workers. By addressing these issues and actively working towards gender equality, AI has the potential to become a powerful tool for promoting a more inclusive and diverse society.

Emma Higham

Google is leveraging the power of Artificial Intelligence (AI) to enhance the safety and inclusivity of their search system. Emma Higham, a product manager at Google, works closely with the SafeSearch engineering team to achieve this goal. By employing AI technology, they can test and refine their systems, ensuring a safer and more inclusive user experience.

Google’s mission is to organize the world’s information and make it universally accessible and useful. Emma Higham highlights this commitment, emphasizing Google’s dedication to ensuring information is available to all. AI technology plays a vital role in this mission, facilitating efficient pattern matching at scale and addressing inclusion issues effectively.

Google’s approach prioritizes providing search results that do not shock or offend users with explicit or graphic content unrelated to their search. Emma Higham mentions that this principle is one of their guidelines, reflecting Google’s commitment to user safety and a positive search experience.

Guidelines are crucial for assessing search result quality and improving user satisfaction. Google has comprehensive guidelines for raters, aiming to enhance search result quality. These guidelines include the principle of avoiding shocking or offending users with unsought explicit content. Adhering to these guidelines ensures search results that meet user needs and expectations.

Addressing biases in AI systems is another important aspect for Google. Emma Higham acknowledges that AI algorithms can reflect biases present in training data. To promote fairness, Google systematically tests the fairness of their AI systems across diverse user groups. This commitment to accountability ensures equitable search results and user experiences for everyone.

Google actively collaborates with NGOs worldwide to enhance safety and handle crisis situations effectively. Their powerful AI model, MUM, enables more efficient handling of personal crisis searches. Operating across 75 languages and in partnership with NGOs, Google aims to improve user safety on a global scale.

In the development process of AI technology, Google follows a cyclical approach. It involves creating the technology initially, followed by fine-tuning and continuous improvement. If the technology does not meet the desired standards, it goes back to the first step, allowing Google to iterate and refine their AI systems.

Safety and inclusivity are essential considerations in the design of AI technology. Emma Higham emphasizes the importance of proactive design to ensure new technologies are developed with safety and inclusivity in mind. By incorporating these principles from the beginning, Google aims to create products that are accessible to all users.

AI has also made significant strides in language and concept understanding. Emma Higham highlights improvements in Google Translate, where AI technology has enhanced gender inclusion by allowing users to choose their preferred gendered form. This eliminates the need for default assumptions about a user’s gender and promotes inclusivity in language translation.

User feedback is paramount in improving systems and meeting high standards. Emma Higham provides an example of how user feedback led to improvements in the Google Search engine during the Women’s World Cup. Holding themselves accountable to user feedback drives Google to deliver better services and ensure their products consistently meet user expectations.

In conclusion, Google’s use of AI technology is instrumental in creating a safe and inclusive search system. Through collaboration with the SafeSearch engineering team, Google ensures continuous testing and improvement of its systems. Guided by the mission to organize information and make it universally accessible, AI aids pattern matching at scale and helps address inclusion issues. Google’s commitment to avoiding unsought explicit content, addressing biases, and incorporating user feedback strengthens its efforts towards a safer and more inclusive search experience. Additionally, its partnership with NGOs and the development of MUM showcase its dedication to improving safety and handling crisis situations effectively. By embracing proactive design and incorporating user preferences, AI technology expands inclusivity in products such as Google Translate.

Bobina Zulfa

A recent analysis of different viewpoints on AI technologies has revealed several key themes. One prominent concern raised by some is the need to understand the concept of “benefit” in relation to different communities. The argument is that as AI technologies evolve and are adopted across various communities, it is vital to discern what “benefit” means for each community. This is crucial because technologies may produce unexpected outcomes and may potentially harm rather than help in certain instances. This negative sentiment stems from the recognition that the impact of AI technologies is not uniform and cannot be assumed to be universally advantageous.

On the other hand, there is a call to promote emancipatory and liberatory AI, which is seen as a positive development. The proponents of this argument are interested in moving towards greater agency, freedom, non-discrimination, and equality in AI technologies. The emphasis is on AI technologies being relevant to communities’ needs and realities, ensuring that they support the ideals of non-discrimination and equality. This perspective acknowledges the importance of considering the socio-cultural context in which AI technologies are deployed and the need to design and implement them in a way that reflects the values and goals of diverse communities.

Another critical view that emerged from the analysis is the need to move away from techno-chauvinism and techno-solutionism. Techno-chauvinism refers to the belief that any and every technology is inherently good, while techno-solutionism assumes that technology can fix any problem and often overlooks its potential negative impacts. The argument against these views is that not all technologies are beneficial for everyone and that some may not be relevant to communities’ needs. It is essential to critically evaluate the potential harms and benefits of AI technologies rather than assume their inherent goodness.

The analysis also highlighted concerns regarding data cleaning work and labour. It is important to acknowledge and support the people who perform this cleaning work, as their labour has implications for their quality of life. This perspective aligns with the goal of SDG 8: Decent Work and Economic Growth, which emphasizes promoting decent work conditions and ensuring fair treatment of workers involved in data cleaning processes.

Furthermore, the analysis identified issues with consent in Femtech apps. Femtech refers to technology aimed at improving women’s health and well-being. The concerns raised encompass confusing terms and conditions and possible data sharing with third parties. The lack of meaningful consent regimes in Femtech apps can have significant implications for gender inequality. This observation underscores the need for robust privacy measures and clear and transparent consent processes in Femtech applications.

The analysis also noted the importance of considering potential issues and impacts of AI technologies from the early stages of development. Taking a proactive approach, rather than a reactive one, can help address and mitigate any potential negative consequences. By anticipating and addressing these issues, the development and implementation of AI technologies can be more socially responsible and in line with the ideals of sustainable development.

Skepticism was expressed towards the idea of using small data sets to detect bias. The argument is that limited data sets may not represent a significant portion of the global majority. If the data used in AI algorithms is not representative, it could lead to biased outcomes in the end products. This skepticism highlights the need to ensure diverse and inclusive data sets that reflect the diversity of communities and avoid reinforcing existing biases.

Finally, the analysis highlighted initiatives such as OECD’s principles that could help address the potential issues surrounding AI technologies. These principles stimulate critical thinking about the potential social, economic, and ethical impacts of AI technologies from the outset. Several organizations are actively promoting these principles, indicating a positive and proactive approach towards ensuring responsible and trustworthy AI development and deployment.

In conclusion, the analysis of different viewpoints on AI technologies revealed a range of concerns and perspectives. It is important to understand the notion of benefit for different communities and recognize that technologies may have unintended harmful consequences. At the same time, there is a call for the promotion of emancipatory and liberatory AI that is relevant to communities’ needs and supports non-discrimination and equality. Critical views on techno-chauvinism and solutionism emphasized the need to move away from assuming the inherent goodness of all technologies. Additional concerns included data cleaning work and labour, consent in Femtech apps, potential issues and impacts from the start of AI technology development, and skepticism towards using small data sets to detect bias, alongside the importance of initiatives like the OECD’s principles. This analysis provides valuable insight into the complex landscape of AI technologies and highlights the need for responsible and ethical decision-making throughout their development and deployment.

Jim Prendergast

Dr. Luciana Benotti, a representative from the National University of Córdoba in Argentina, was unable to present due to an outbreak of wildfires in the area. The severity of the situation forced her and her family to evacuate their home, resulting in her unavoidable absence.

The wildfires that prompted Dr. Benotti’s evacuation highlight the immediate danger posed by the natural disaster, a significant concern not only for Dr. Benotti but for the affected community as a whole.

Jim Prendergast demonstrated empathy and solidarity towards Dr. Benotti during this challenging time. Acknowledging her circumstances, he expressed sympathy and conveyed his well wishes, hoping for a positive resolution for Dr. Benotti and her family.

It is worth noting the related Sustainable Development Goals (SDGs). The wildfire outbreak in Argentina aligns with SDG 13: Climate Action, as efforts are necessary to address and mitigate the impacts of climate change-induced disasters like wildfires. The mention of SDG 3: Good Health and Well-being and SDG 11: Sustainable Cities and Communities in relation to Jim Prendergast’s remarks signals the broader implications of the situation for public health and urban resilience.

In conclusion, Dr. Luciana Benotti’s absence from the presentation was a result of the wildfire outbreak in Argentina, which compelled her and her family to evacuate. This unfortunate circumstance received empathetic support from Jim Prendergast, who expressed sympathy and wished for a positive outcome.

Lucia Russo

The Organisation for Economic Co-operation and Development (OECD) has developed a set of principles aimed at guiding responsible and innovative artificial intelligence (AI) development. These principles promote gender equality and are based on human-centred values and fairness, with a focus on inclusive growth and sustainable development. To date, 46 countries have adhered to them.

To implement these principles, countries have taken various policy initiatives. For example, the United States has established a program to improve data quality for AI and increase the representation of underrepresented communities in the AI industry. Similarly, the Alan Turing Institute in the United Kingdom has launched a program to increase women’s participation in AI and examine gender gaps in AI design. The Netherlands and Finland have also worked on developing guidelines for non-discriminatory AI systems in the public sector. These policy efforts demonstrate a commitment to aligning national strategies with the OECD AI principles.

The OECD AI Policy Observatory serves as a platform for sharing tools and resources related to reliable AI. This platform allows organizations worldwide to submit their AI tools for use by others. It also includes a searchable database of tools aimed at various objectives, including reducing bias and discrimination. By facilitating the sharing of best practices and tools, the Observatory promotes the development of AI in line with the OECD principles.

In addition to the policy-focused initiatives, the OECD has published papers on generative AI and big trends in AI analysis. These papers provide analysis on AI models, their evolution, policy implications, safety measures, and the G7 Hiroshima process involving generative AI. While the OECD focuses on analyzing major trends in AI, it is not primarily focused on providing specific tools or resources.

There is an acknowledgement of the need for more alignment and coordination in the field of AI regulation. Efforts are being made to bring stakeholders together and promote coordination. For instance, the United Kingdom is promoting a safety summit to address AI risks, and the United Nations is advancing work in this area. The existence of ongoing discussions and developments demonstrates that the approach to AI regulation is still in the experimental phase.

The representation of women in the AI industry is a significant concern. Statistics show a low representation of women in the industry, with more than twice as many young men as women capable of programming in OECD countries. Only 1 in 4 researchers publishing on AI worldwide are women, and female professionals with AI skills represent less than 2% of workers in most countries. To address this issue, policies encouraging women’s involvement in science, technology, engineering, and mathematics (STEM) fields are important. Role models, early exposure to coding, and scholarships are mentioned as ways to increase women’s participation in these areas.

Furthermore, there is a need to promote and invest in the development of large language models in languages other than English. This would contribute to achieving Sustainable Development Goals related to industry, innovation, infrastructure, and reduced inequalities.

Overall, the OECD’s principles and initiatives provide a framework for responsible and inclusive AI development. However, there is a need for greater coordination, alignment, and regulation in the field. Efforts to increase women’s representation in the AI industry and promote diversity in language models are essential for a more equitable and sustainable AI ecosystem.

Jenna Manhau Fung

The analysis of the speeches reveals several significant findings. Firstly, it highlights the view that AI can eliminate unintentional human bias and bring more impartiality. This is valuable insofar as it can support fairer decision-making processes and reduce discrimination that may arise from human biases. Leveraging AI technology can enable organizations to improve their practices and achieve greater objectivity.

Another important point emphasized in the analysis is the significance of involving users and technical experts in the policymaking process, particularly in relation to complex technologies like AI. By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leading to the creation of more comprehensive and effective policies. This ensures that policies address the diverse needs and concerns of different stakeholders and promote equality and inclusivity.

Moreover, the analysis underscores the importance of international standards in the context of AI and related industries. International standards can assist countries in modernizing their legal frameworks and guiding industries in a way that aligns with ethical considerations and societal needs. These standards promote consistency and harmonization across different regions and facilitate the adoption of AI technologies in an accountable and inclusive manner.

In addition to these main points, the analysis highlights the need for an inclusion mechanism for small-scale writers. It argues that such a mechanism is essential to address situations where the content of these writers does not appear in search engine results due to certain policies. This observation is supported by a personal experience shared by one of the speakers, who explained that her newsletter did not appear in Google search results because of existing policies. Creating an inclusion mechanism would ensure fair visibility and opportunities for small-scale writers, promoting diversity and reducing inequality in the digital domain.

Overall, the analysis emphasizes the transformative potential of AI in eliminating biases and promoting neutrality. It underscores the importance of involving users and technical experts in policymaking, the significance of international standards, and the need for an inclusion mechanism for small-scale writers. These insights reflect the importance of considering diverse perspectives, fostering inclusivity, and striving for fairness and equality in the development and implementation of AI technologies.

Moderator – Charles Bradley

Charles Bradley is hosting a session that aims to explore the potential of artificial intelligence (AI) in promoting gender inclusivity. The session features a panel of experienced speakers who will challenge existing beliefs and encourage participants to adopt new perspectives. This indicates a positive sentiment towards leveraging AI as a tool for good.

Bradley encourages the panelists to engage with each other’s presentations and find connections between their work. By fostering collaboration, he believes that the session can achieve something interesting. This highlights the importance of collaborative efforts in advancing gender inclusivity through AI. The related sustainable development goals (SDGs) identified for this topic are SDG 5: Gender Equality and SDG 17: Partnerships for the Goals.

Specific mention is made of Jenna Manhau Fung’s experiences in youth engagement in AI and policy-making, as well as her first-hand experience with Google’s search policies; her insights are acknowledged in a neutral tone. The related SDGs for this discussion are SDG 4: Quality Education and SDG 9: Industry, Innovation and Infrastructure.

Furthermore, Bradley invites audience members to contribute to the discussion and asks for questions, fostering an open dialogue. This reflects a positive sentiment towards creating an interactive and engaging session.

Another topic of interest for Bradley is Google’s approach to counterfactual fairness, which is met with a neutral sentiment. This indicates that Bradley is curious about Google’s methods of achieving fairness within AI systems. The related SDG for this topic is SDG 9: Industry, Innovation and Infrastructure.

The discussion on biases in AI systems highlights the need for trust and the measurement of bias. Google’s efforts in measuring and reducing biases are acknowledged, signaling neutral sentiment towards their work in this area. The related SDG for this topic is SDG 9: Industry, Innovation and Infrastructure.

Bradley believes that the work on principles will set the stage for upcoming regulation, indicating a positive sentiment towards the importance of establishing regulations for AI. The enforceable output of regulation is seen as more effective than principles alone. The related SDG for this topic is SDG 9: Industry, Innovation, and Infrastructure.

The session also explores the positive aspects of generative AI in the fields of coding and learning. It is suggested that generative AI can speed up the coding process and serve as a tool for individuals to learn coding quickly. This perspective is met with a positive sentiment and highlights the potential of AI in advancing coding and learning. The related SDGs for this topic are SDG 4: Quality Education and SDG 9: Industry, Innovation, and Infrastructure.

Moreover, Bradley emphasizes the importance of investing in AI training in languages other than English, implying a neutral sentiment towards the necessity of language diversity in AI. This recognizes the need to expand AI capabilities beyond the English language. The related SDG for this topic is SDG 9: Industry, Innovation, and Infrastructure.

Lastly, the role of role models in encouraging more young women to enter the fields of science and coding is discussed with a positive sentiment. Policies and actions to motivate women in science are emphasized, highlighting the importance of representation in these fields. The related SDGs for this topic are SDG 4: Quality Education and SDG 5: Gender Equality.

In conclusion, Charles Bradley’s session focuses on exploring the potential of AI in promoting gender inclusivity. The session aims to challenge existing beliefs, foster learning new perspectives, and encourage collaboration among panelists. It covers a range of topics, including youth engagement in AI, counterfactual fairness, measuring biases, guiding principles, generative AI in coding and learning, investing in language diversity, and the importance of role models. The session promotes open dialogue and aims to set the stage for future AI regulation.
