Can (generative) AI be compatible with Data Protection? | IGF 2023 #24

10 Oct 2023 08:00h - 09:30h UTC

Event report

Speakers and Moderators

Speakers:
  • Armando Manzueta, Digital Transformation Director, Ministry of Economy, Planning and Development of the Dominican Republic
  • Arianne Jimenez, Meta, APAC (TBC)
  • Gbenga Sesan, Executive Director, Paradigm Initiative, Nigeria
  • Jonathan Mendoza, Secretary for Data Protection, National Institute of Transparency, Access to Information and Protection of Personal Data (INAI) (TBC)
  • Sizwe Snail, Nelson Mandela University, former board member of the South African Information Regulator
  • Smriti Parsheera, Researcher, Indian Institute of Technology/CyberBRICS Project
  • Wei Wang, University of Hong Kong, China
  • Camila Leite, Brazilian Consumer Protection Institute (IDEC), Brazil
Moderators:
  • Luca Belli, FGV Law School, Rio de Janeiro
  • Shilpa Jaswant, Jindal Global Law School, India


Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.


Session report

Kamesh Shekar

The analysis examines the importance of principles and regulation in the field of artificial intelligence (AI). It highlights the need for a principle-based framework that operates at the ecosystem level, involving various stakeholders. The proposed framework suggests that responsibilities should be shared among different actors within the AI ecosystem to ensure safer and more responsible utilization of AI technologies. This approach is seen as crucial for fostering trust, transparency, and accountability in the AI domain.

Additionally, the analysis emphasizes the significance of consensus building in regard to AI principles. It argues for achieving clarity on principles that resonate with all stakeholders involved in AI development and deployment. International discussions are seen as a crucial step towards establishing a common understanding and consensus on AI principles, ensuring global alignment in the adoption of ethical and responsible practices.

Furthermore, the analysis explores the role of regulation in the AI landscape. It suggests that regulation should not only focus on compliance but also be market-oriented. The argument is made that enabling the AI market and providing businesses with a value proposition in regulation can support innovation while ensuring ethical and responsible AI practices. This market-based regulation approach is believed to be beneficial for industry growth (aligning with SDG 9: Industry, Innovation, and Infrastructure) and economic development (aligning with SDG 8: Decent Work and Economic Growth).

Overall, the sentiment towards implementing principles and regulation in AI is positive. Although the analysis does not provide specific principles or regulations, it emphasizes the importance of a principle-based framework, consensus building, and market-based regulation. These insights can be valuable for policymakers, industry leaders, and other stakeholders in developing effective and responsible AI governance strategies.

Jonathan Mendoza Iserte

Artificial intelligence (AI) has the potential to drive innovation across sectors, but it also poses challenges in terms of regulation, ethical use, and the need for transparency and accountability. The field of AI is rapidly evolving and has the capacity to transform development models in Latin America. Therefore, effective regulations are necessary to harness its benefits.

Latin American countries like Argentina, Brazil, and Mexico have taken steps towards AI regulation and have emerged as regional leaders in global AI discussions. To further strengthen regulation efforts, it is proposed to establish a dedicated mechanism in the form of a committee of experts in Latin America. This committee would shape policies and frameworks tailored to the region’s unique challenges and opportunities.

The adoption and implementation of AI will have mixed effects on the economy and labor. By 2030, AI is estimated to contribute around $13 trillion to the global economy. However, its impact on specific industries and job markets may vary. While AI can enhance productivity and create opportunities, it may also disrupt certain sectors and lead to job displacement. Policymakers and stakeholders need to consider these implications and implement measures to mitigate negative consequences.

Additionally, it is crucial for AI systems to respect fundamental human rights and avoid biases. A human-centric approach is necessary to ensure the ethical development and deployment of AI technologies. This includes safeguards against discriminatory algorithms and biases that could perpetuate inequalities or violate human rights.

In conclusion, AI presents both opportunities and challenges. Effective regulation is crucial to harness the potential benefits of AI in Latin America while mitigating potential harms. This requires international cooperation and a human-centric approach that prioritizes ethical use and respect for human rights. By navigating these issues carefully, Latin America can drive inclusive and sustainable development.

Moderator – Luca Belli

The analysis delves into various aspects of AI and Data Governance, shedding light on several important points. Firstly, it highlights the significance of comprehending AI sovereignty and its key enablers. AI sovereignty goes beyond authoritarian control or protectionism and involves understanding and regulating technologies. The enablers of AI sovereignty encompass multiple elements, including data, algorithms, computation, connectivity, cybersecurity, electrical power, capacity building, and risk-based AI governance frameworks. Understanding these enablers is crucial for effective AI and Data Governance.

Secondly, the analysis underscores the need to increase representation and consideration of ideas from the Global South in discussions about data governance and AI. The creation of the Data and AI Governance Coalition aims to address issues related to data governance and AI from the perspective of the Global South. It highlights the criticism that discussions often overlook ideas and solutions from this region. To achieve comprehensive and inclusive AI and Data Governance, it is imperative to involve diverse voices and perspectives from around the world.

Moreover, the analysis emphasizes that AI governance should be considered a fundamental right for everyone, grounding this claim in the right to self-determination set out in Article 1 of the United Nations Charter and of the International Covenants on Civil and Political Rights and on Economic, Social and Cultural Rights. Recognizing AI governance as a fundamental right ensures individuals possess agency and control over their own technological destiny.

Furthermore, the analysis notes that the development of an international regime on AI may take between seven and ten years. This estimate is influenced by the involvement of tech executives who advocate for such an agreement. Due to the complexity of AI and the multitude of considerations involved, reaching international consensus on an AI regime requires ample time for careful deliberation and collaboration.

Lastly, the examination reveals that the process of shaping the UN Convention on Artificial Intelligence could be protracted due to geopolitical conflicts and strategic competition. These external factors introduce additional challenges and intricacies into the negotiating process, potentially prolonging the time required to finalize the convention.

In conclusion, the analysis offers valuable insights into AI and Data Governance. It emphasizes the importance of understanding AI sovereignty and its enablers, advocates for increased representation from the Global South, asserts AI governance as a fundamental right, highlights the time-consuming nature of developing an international regime on AI, and acknowledges the potential delays caused by geopolitical conflicts and strategic competition. These findings contribute to a deeper understanding of the complexities surrounding AI and Data Governance and provide a foundation for informed decision-making in this domain.

Audience

The analysis explores various topics and arguments relating to the intersection of AI and data protection. One concern is whether generative AI is compatible with data protection, as it may pose challenges in safeguarding personal data. There is also an interest in understanding how AI intersects with nationality and statelessness, with potential implications for reducing inequalities and promoting peace and justice. Additionally, there is a desire to know if there are frameworks or successful instances of generative AI working in different regions.

Privacy principles within Gen-AI platforms are seen as crucial, with 17 initial principles identified and plans to test them with 50 use cases. However, the use of AI also raises questions about certain data protection principles, as generative AI systems may lack specified purposes and predominantly work with non-personal data for profiling individuals.

There is a call for a UN Convention on Artificial Intelligence to manage the risks and misuse of AI at an international level. However, the analysis does not provide further details or evidence on the feasibility or implementation of such a convention. Potential geopolitical conflicts and strategic competition between AI powers are also highlighted as potential barriers to developing a UN Convention on Artificial Intelligence.

The “Brussels effect” is mentioned as a factor that may have negative impacts in non-European contexts. Concerns are raised about premature legislation in the field of AI and the need for clear definitions when legislating on AI to ensure comprehensive regulation. The analysis covers a broad range of topics and arguments, though some lack supporting evidence or further exploration. Notable insights include the need for privacy principles in Gen-AI platforms, challenges to data protection principles posed by AI, and the potential hindrances to global cooperation on AI regulation.

In conclusion, the analysis offers valuable insights into the complex relationship between AI and data protection.

Giuseppe Claudio Cicu

Artificial intelligence (AI) is reshaping corporate governance frameworks and business processes, with far-reaching effects on society. Its integration is viewed positively, as it enhances strategy setting, decision-making, monitoring, and compliance in organisations. However, challenges arise in terms of transparency and accountability.

To address this, an ethical approach to AI implementation is proposed, such as the AI by Corporate Design Framework, which blends business process management with the AI lifecycle. This framework incorporates ethical considerations such as the human-in-the-loop and human-on-the-loop principles. It is further suggested that corporations establish an Ethical Algorithmic Legal Committee to regulate AI applications. This committee would act as a filter between stakeholders and AI outputs, ensuring ethical decision-making.

Additionally, there is a call for legislators to recognise technology as a corporate dimension, given its implications for accountability, organisation, and administration. By developing appropriate regulations and norms, responsible and ethical use of AI in corporate governance can be ensured. Overall, AI offers potential benefits for corporate governance and business processes, but careful attention to transparency, accountability, and ethics is necessary.

Armando José Manzueta-Peña

The use of generative AI holds great potential for the modernisation of government services and the improvement of citizens’ lives. By automating the migration of legacy software to flexible cloud-based applications, generative AI can supercharge digital modernisation in the government sector. This automation process can greatly streamline and enhance government operations. AI-powered tools can assist with pattern detection in large stores of data, enabling effective analysis and decision-making. The migration of certain technology systems to the cloud, coupled with AI infusion, opens up new possibilities for enhanced use of data in government services.

To successfully implement AI in the public sector, attention must be given to key areas. Firstly, existing public sector workers should receive training to effectively manage AI-related projects. Equipping government employees with the necessary skills and knowledge is essential. Citizen engagement should be prioritised when developing new services and modernising existing ones. Involving citizens in the decision-making process fosters inclusivity and builds trust. Government institutions must be seen as the most trusted entities holding and managing citizens’ data. Strong data protection rules and ethical considerations are crucial. Modernising the frameworks for data protection safeguards sensitive information and maintains public trust.

The quality of AI systems is heavily dependent on the quality of the data they are fed. Accurate data input is necessary to avoid inaccurate profiling of individuals or companies. Effective data management, collection, and validation policies are vital for meaningful outcomes. Strong data protection measures, collection, and validation processes ensure accurate and reliable AI-driven solutions. Developing nations face challenges in quality data collection, but good quality data and administrative registers are necessary to leverage AI effectively.

In conclusion, successful AI implementation in the public sector requires government institutions to familiarise themselves with the advantages of AI and generative AI. Workforce transformation, citizen engagement, and government platform modernisation are crucial areas. Strong data protection rules and ethical considerations are essential. The quality of AI systems relies on the quality of the data they are fed. Proper data management, collection, and validation policies are necessary. Addressing these aspects allows government institutions to harness the full potential of AI, modernise their services, and improve citizens’ lives.

Michael

The analysis examines the issue of harmonised standards in the context of AI and highlights potential shortcomings. It is argued that these standards might fail to consider the specific needs of diverse populations and the local contexts in which AI systems are implemented. This is concerning as it could result in AI systems that do not effectively address the challenges and requirements of different communities.

One of the reasons for this oversight is that the individuals involved in developing these standards primarily come from wealthier parts of the world. As a result, their perspectives may not adequately reflect the experiences and concerns of marginalised communities who are most impacted by AI technologies.

While some proponents argue that harmonised standards can be beneficial and efficient, it is stressed that they should not disregard the individual needs and concerns of diverse populations. Balancing the efficiency and standardisation of AI systems with the consideration of local contexts and marginalised populations’ needs is paramount.

The tension between the value of harmonised AI standards and the disregard for local contexts is noted. It is suggested that the development of these standards may further entrench global inequities by perpetuating existing power imbalances and neglecting the specific challenges faced by different communities.

In conclusion, the analysis cautions against the potential pitfalls of harmonised AI standards that do not take into account diverse populations and local contexts. While harmonisation can be beneficial, it should not be at the expense of addressing the specific needs and concerns of marginalised communities. By striking a balance between efficiency and inclusivity, AI standards can better serve the needs of all communities and avoid perpetuating global inequities.

Kazim Rizvi

In his paper, Kazim Rizvi delved into the important topic of mapping and operationalising trustworthy AI principles in specific sectors, focusing on finance and healthcare. He discussed the need for responsible implementation and ethical direction in the field of AI, highlighting the potential synergies and conflicts that may arise when applying these principles in these sectors. To address this, Rizvi proposed a two-layer approach to AI, dividing it into non-technical and technical aspects.

The non-technical layer examines strategies for responsible implementation and ethical direction. This involves exploring various approaches to ensure that AI technologies are developed and deployed in a manner that upholds ethical standards and benefits society as a whole. Rizvi emphasised the importance of involving multiple stakeholders from industry, civil society, academia, and government in this process. By collaborating and sharing insights, these diverse stakeholders can contribute to the effective implementation of AI principles in their respective domains.

In addition to the non-technical layer, the technical layer focuses on different implementation strategies for AI. This encompasses the technical aspects of AI development, such as algorithms and models. Rizvi emphasised the need for careful consideration and evaluation of these strategies to align them with trustworthy AI principles.

Moreover, Rizvi highlighted the significance of a multi-stakeholder approach for mapping and operationalising AI principles. By involving various stakeholders, including those from industry, civil society, academia, and government, a more comprehensive understanding of the challenges and opportunities associated with AI can be gained. This approach fosters partnerships and collaborations that can lead to effective implementation of AI principles in relevant domains.

Rizvi also discussed the need for coordination of domestic laws and international regulations for AI. He pointed out that there is currently no specific legal framework governing AI in India, which underscores the importance of harmonising laws in the context of AI. This coordination should take into account existing internet laws and any upcoming legislation to ensure a comprehensive and effective regulatory framework for AI.

Furthermore, Rizvi explored alternative regulatory approaches for AI, such as market mechanisms, public-private partnerships, and consumer protection for developers. While not providing specific supporting facts for these approaches, Rizvi acknowledged their potential in enhancing the regulation of AI and ensuring ethical practices and responsible innovation.

In conclusion, Kazim Rizvi's paper presented an in-depth analysis of the mapping and operationalisation of trustworthy AI principles in the finance and healthcare sectors. He highlighted the need for a multi-stakeholder approach, coordination of domestic laws and international regulations, and alternative regulatory approaches for AI. By addressing these issues, Rizvi argued for the responsible and ethical implementation of AI, ultimately promoting the well-being of society and the achievement of sustainable development goals.

Wei Wang

The discussion centres around the regulation of Artificial Intelligence (AI) across different jurisdictions, with a particular focus on Asia, the US, and China. Overall, there is a cautious approach to regulating AI, with an emphasis on implementing ethical frameworks and taking small, precise regulatory steps. Singapore, for instance, recognises the importance of adopting existing global frameworks to guide their AI regulation efforts.

In terms of specific regulatory models, there is an evolution happening, with a greater emphasis on legal accountability, consumer protection, and the principle of accountability. The US has proposed a bipartisan framework for AI regulation, while China has introduced a model law that includes the principle of accountability. Both of these frameworks aim to ensure that AI systems and their designers are responsible and held accountable for any negative consequences that may arise.

However, one lingering challenge in AI regulation is finding the right balance between adaptability and regulatory predictability. It is vital to strike a balance that allows for innovation and growth while still providing effective governance and oversight. Achieving this equilibrium is essential to ensure that AI technologies and applications are developed and used responsibly.

Because AI is a rapidly evolving field, regulations must be flexible enough to keep up with advancements and emerging challenges. At the same time, regulatory predictability is needed to provide stability and ensure that ethical and responsible AI practices are followed consistently, making a durable balance between the two a central task of AI governance.

In conclusion, the conversation highlights the cautious yet evolving approach to AI regulation in various jurisdictions. The focus is on implementing ethical frameworks, legal accountability, and consumer protection. Striking a balance between adaptability and regulatory predictability is essential for effective governance of AI. Ongoing efforts are required to develop robust and flexible regulatory frameworks that can keep pace with the rapid advancements in AI technology and applications.

Smriti Parsheera

Transparency in AI is essential, and it should apply throughout the entire life cycle of a project. This includes policy transparency, which involves making the rules and guidelines governing AI systems clear and accessible. Technical transparency ensures that the inner workings of AI algorithms and models are transparent, enabling better understanding and scrutiny. Operational and organizational transparency ensures that the processes and decisions made during the project are open to scrutiny and accountability. These three layers of transparency work together to promote trust and accountability in AI systems.

Another crucial aspect where transparency is needed is in publicly facing facial recognition systems. These systems, particularly those used in locations such as airports, demand even greater transparency. This goes beyond simply providing information and requires a more deliberate approach to transparency. A case study of a facial recognition system for airport entry highlights the importance of transparency in establishing public trust and understanding of the technology.

Transparency is not limited to the private sector. Entities outside of the private sector, such as philanthropies, think tanks, and consultants, also need to uphold transparency. It is crucial for these organizations to be transparent about their operations, relationships with the government, and the influence they wield. Applying the right to information laws to these entities ensures that transparency is maintained and that they are held accountable for their actions.

In conclusion, transparency is a key factor in various aspects of AI and the organizations involved in its development and implementation. It encompasses policy, technical, and operational transparency, which ensure a clear understanding of AI systems. Publicly facing facial recognition systems require even higher levels of transparency to earn public trust. Additionally, entities outside of the private sector need to be transparent and subject to right to information laws to maintain accountability. By promoting transparency, we can foster trust, accountability, and responsible development of AI systems.

Gbenga Sesan

The analysis highlights the necessity of reviewing data protection policies to adequately address the extensive data collection activities of AI. It points out that although data protection regimes exist in many countries, they may not have considered the scope of AI’s data needs. The delayed ratification of the Malabo Convention further underscores the urgency to review these policies.

Another key argument presented in the analysis is the centrality of people in AI discourse and practice. It asserts that people, as data owners, are fundamental to the functioning of AI. AI systems should be modelled to encompass diversity, not just for tokenism, but to ensure a comprehensive understanding of context and to prevent harm. By doing so, we can work towards achieving reduced inequalities and gender equality.

The analysis also underscores the need for practical support for individuals when AI makes mistakes or causes problems. It raises pertinent questions about the necessary steps to be taken and the appropriate entities to engage with in order to address such issues. It suggests that independent Data Protection Commissions could provide the requisite support to individuals affected by AI-related concerns.

Additionally, the analysis voices criticism regarding AI’s opacity and the challenges faced in obtaining redress when errors occur. The negative sentiment is supported by a personal experience where an AI system wrongly attributed information about the speaker’s academic achievements and professional appointments. This highlights the imperative of transparency and accountability in AI systems.

Overall, the analysis emphasises the need to review data protection policies, foreground people in AI discourse, provide practical support, and address concerns regarding AI’s opacity. It underscores the significance of transparency and accountability in ensuring responsible development and deployment of AI technologies. These insights align with the goals of advancing industry, innovation, and infrastructure, as well as promoting peace, justice, and strong institutions.

Melody Musoni

The analysis explores the development of AI in South Africa as a means to address African problems. It highlights the significance of policy frameworks and computing infrastructures at the African Union level, which reinforce the message that AI can be used to tackle challenges unique to Africa. The availability of reliable computing infrastructure is deemed crucial for the advancement of AI technology.

Furthermore, the analysis delves into South Africa’s efforts to improve its computational capacity and data centres. It mentions that South Africa aspires to be a hub for hosting data for other African countries. To achieve this goal, the government is collaborating with private companies such as Microsoft and Amazon to establish data centres. This highlights South Africa’s commitment to bolstering its technological infrastructure and harnessing the potential of AI.

The discussion also highlights South Africa’s dedication to AI skills development, with a particular focus on STEM and AI-related subjects in primary schools through to university levels. This commitment emphasises the need to provide quality education and equip the younger generation with the necessary skills to drive innovation and keep up with global advancements in AI technology.

However, it is also stressed that careful consideration must be given to data protection before implementing AI policies. The analysis asserts that existing legal frameworks surrounding data protection should be assessed before rushing into the establishment of AI policies or laws. This demonstrates the importance of safeguarding personal information and ensuring that data processing and profiling adhere to the principles of transparency, data minimisation, data subject rights, and purpose limitation.

Moreover, the analysis sheds light on the challenges faced by South Africa in its AI development journey. These challenges include power outages that are expected to persist for a two-year period, a significant portion of the population lacking access to reliable connectivity, and the absence of a specific cybersecurity strategy. This underscores the importance of addressing these issues to create an environment conducive to AI development and implementation.

Additionally, the analysis points out that while data protection principles theoretically apply to generative AI, in practice, they are difficult to implement. This highlights the need for data regulators to acquire more technical knowledge on AI to effectively regulate and protect data in the context of AI technology.

In conclusion, the analysis provides insights into the various facets of AI development in South Africa. It emphasises the significance of policy frameworks, computing infrastructures, and AI skills development. It also highlights the need for prioritising data protection, addressing challenges related to power outages and connectivity, and enhancing regulatory knowledge on AI. These findings contribute to a better understanding of the current landscape and the potential for AI to solve African problems in South Africa.

Liisa Janssens

Liisa Janssens, a scientist working at the Dutch Applied Sciences Institute, believes that the combination of law, philosophy, and technology can enhance the application of good governance in artificial intelligence (AI). She views the rule of law as an essential aspect of good governance and applies this concept to AI. Liisa’s interdisciplinary approach has led to successful collaborations through scenario planning in military operations. By using scenarios as a problem focus for disciplines such as law, philosophy, and technology, Liisa has achieved commendable results during her seven-year tenure at the institute.

In addition, there is a suggestion to test new technical requirements for AI governance in real operational settings. These settings can include projects undertaken by NATO that utilize Digital Twins or actual real-world environments. Testing and assessing technical requirements in these contexts are crucial for understanding how AI can be effectively governed.

In summary, Liisa Janssens emphasizes the importance of combining law, philosophy, and technology to establish good governance in AI. She advocates for the application of the rule of law to AI. Liisa’s successful engagement in interdisciplinary collaboration through scenario planning highlights its effectiveness in fostering collaboration between different disciplines. The suggestion to test new technical requirements for AI governance in real operational environments provides opportunities for developing effective governance frameworks. Liisa’s insights and approaches contribute to advancing the understanding and application of good governance principles in AI.

Camila Leite Contri

AI technology has the potential to revolutionise various sectors, including finance, mobility, and healthcare, offering numerous opportunities for advancement. However, the rapid progress of innovation in AI often outpaces the speed at which regulation can be implemented, leading to challenges in adequately protecting consumer rights. The Consumer Law Initiative (CLI), a consumer organisation, aims to safeguard the rights of consumers against potential AI misuse.

In the AI market, there are concerns about the concentration of power and control in the hands of big tech companies and foreign entities. These companies dominate the market, resulting in inequality in AI technology access. Developing countries, particularly those in the global south, heavily rely on foreign technologies, exacerbating this issue.

To ensure the proper functioning of the AI ecosystem, it is crucial to uphold not only data protection laws but also consumer and competition laws. Compliance with these regulations helps ensure transparency, fair competition, and protection of consumer rights in AI development and deployment.

A specific case highlighting the need for data protection is the alleged infringement of data protection rights in Brazil in relation to ChatGPT. Concerns have been raised regarding issues such as access to personal data, clarity, and the identity of data controllers. The Brazilian Data Protection Authority has yet to make progress in addressing these concerns, emphasising the importance of robust data protection measures within the AI industry.

In conclusion, while AI presents significant opportunities for advancement, it also poses challenges that require attention. Regulation needs to catch up with the pace of innovation to adequately protect consumer rights. Additionally, addressing the concentration of power in big tech companies and foreign entities is crucial for creating a fair and inclusive AI market. Upholding data protection, consumer rights, and competition laws is vital for maintaining transparency, accountability, and safeguarding the interests of consumers and society as a whole.
