Rule of Law for Data Governance | IGF 2023 Open Forum #50

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator 2

The second session of the roundtable discussion began with Mr. Fang Yu, Director of the Internet Law Research Centre of China Academy of Information and Communications Technology. He provided valuable insights into the field and emphasized the importance of Internet law in the rapidly evolving digital landscape. Following Mr. Fang Yu, Mr. Lee Makiyama from Brussels shared his perspectives, contributing interesting ideas to the discussion.

Next, Ms. Wang Rong, a senior expert from Tencent Research Institute, offered her valuable insights on the topic. She shed light on the significance of research and development in the context of internet technology and its implications for legal regulation.

Continuing the discussion, Mr. Zhu Ran, Vice President of Alibaba Cloud Intelligence Group, shared his profound insights into the advancements of cloud computing in relation to internet law. His perspectives highlighted the need for effective legal frameworks to address the challenges and opportunities presented by cloud computing technologies.

The final speaker, Professor Zhao Jingwu from Beihang University’s Law School, delivered a thought-provoking presentation. He explored various legal aspects of internet governance and emphasized the need for comprehensive legal frameworks to address emerging digital issues.

Despite the time limitation, the forum concluded on an optimistic note, expressing the desire for more in-depth exchanges and discussions in the future. The participants and speakers were sincerely thanked for their wisdom and efforts in contributing to the open forum.

Moreover, the UNIGF was acknowledged for its vital role in providing a relevant dialogue platform for global stakeholders to engage in meaningful discussions on internet law and governance.

In summary, the second session of the roundtable discussion featured a diverse range of speakers who presented their perspectives on various aspects of internet law. The insights and arguments shared highlighted the need for robust legal frameworks to navigate the complexities of the digital era. The forum concluded with a shared commitment to further exploration and collaboration in this important field.

Fang Yu

The digitisation of our world is a key trend in the 21st century, with the digital economy and the internet becoming indispensable global goods. This has resulted in the need for new laws and regulations to effectively govern the cyberspace.

The digital economy is rapidly developing, and the internet plays a crucial role in its growth. It is now an integral part of our lives, facilitating communication, commerce, and innovation on a global scale. The significance of the digital economy is further emphasised by its relation to SDG 9: Industry, Innovation and Infrastructure.

Data legislation plays a fundamental role in enabling the effective use of data in the digital economy. It is divided into three key areas: data security, personal information protection, and data value. Data security focuses on ensuring that data is used effectively without compromising national security and stability. Personal information protection is vital as it ensures individuals have control over their private data and prevents unauthorised access or misuse. Realising the value of data is essential for the digital economy, as it drives innovation, creates new opportunities, and contributes to economic growth. The importance of data legislation aligns with SDG 16: Peace, Justice and Strong Institutions.

Furthermore, the issue of data governance is a long-term concern in the face of the growing digital economy. It is recognised that effective data governance is crucial to address the challenges and risks associated with handling vast amounts of data. Data governance offers the potential to enhance the level of data governance and ensure the benefits of the digital economy are shared among all stakeholders. This aligns with SDG 17: Partnerships for the Goals.

In conclusion, the digitisation of our world and the growth of the digital economy have necessitated the development of new laws and regulations to govern the cyberspace. Data legislation, including data security, personal information protection, and data value, is imperative for the digital economy to thrive. Moreover, data governance is a long-term issue that requires attention to improve the level of data governance and maximise the benefits of the digital economy for all.

Moderator 1

The expanded summary provides a detailed analysis of the importance of fair and effective data governance for public benefits and sustainable development. It emphasizes the role of data in driving economic innovation and social development, highlighting its significance in today’s society where it has become a key driver of innovation and development.

The analysis recognizes that data application and governance present both opportunities and challenges. On one hand, data has the potential to bring about significant positive impact by informing decision-making processes, driving economic growth, and fostering social progress. However, it also raises concerns about privacy, security, and ethical considerations. Therefore, it is crucial to have effective data governance to maximize the benefits of data while mitigating potential risks and negative consequences.

To address the complex nature of data governance, the analysis suggests the need for multi-stakeholder forums that facilitate the exchange of insights on data-related applications and governance. These forums aim to bring together representatives from government, civil society, the technology community, and the private sector on a global scale. By fostering collaboration and knowledge-sharing, these forums can contribute to the development of fair and effective data governance frameworks that address the concerns and interests of all stakeholders.

In conclusion, the analysis highlights the critical role of data in driving economic innovation and social development. It underscores the importance of fair and effective data governance to maximize the benefits of data while addressing the challenges and concerns associated with its use. The suggestion to hold multi-stakeholder forums reflects the need for a collaborative approach to data governance, where different perspectives are considered to develop comprehensive and inclusive frameworks. The expanded summary provides valuable insights into the significance of data governance for public benefits and sustainable development.

Wang Rong

The Personal Information Protection Law (PIPL) in China is gaining recognition for its alignment with international standards. The law covers all sectors, including private and public, ensuring comprehensive protection of personal information.

Platform companies like Tencent stand to benefit from the strict provisions of the PIPL. Tencent has developed systematic tools for data privacy compliance and is committed to protecting user privacy and data compliance. They have also been at the forefront of developing privacy technologies such as the Linxi privacy platform, Federated learning, Trusted computing, and Secure multi-party computing.

In addition to technical solutions, Tencent focuses on giving users more control and transparency in their products. They aim to empower users with informed choices regarding their personal information.

The PIPL in China is seen as a positive development in personal information protection and data compliance. It sets a high standard for businesses, particularly platform companies like Tencent, who demonstrate their dedication to safeguarding user privacy and complying with the law.

Overall, the PIPL and Tencent’s initiatives contribute to the broader goal of data privacy. They encourage companies to prioritize user privacy and comply with regulations, positioning Tencent as an early adopter and industry leader in personal information protection.

Zhu Ran

The Chinese government has consistently upheld the principle of governing the Internet in accordance with the law, recognizing the rule of law on the Internet as vital for digital governance and the advancement of digital civilization. This commitment to legal governance of the Internet reflects a positive sentiment towards ensuring a secure and regulated online environment.

Alibaba Cloud Intelligence Group has played a significant role in cloud-based data governance, offering a range of cloud services to clients from over 200 countries and regions worldwide. Their services include computing, storage, networking, data processing, and security protection, all aimed at effective data management and governance. This demonstrates Alibaba’s strong commitment to data governance and their contributions towards advancing the goal of industry, innovation, and infrastructure.

To strengthen their data compliance governance further, Alibaba Cloud has obtained important certifications, such as ISO 27001 and CSA Star certification. Having also earned PCI DSS certification in the financial field, Alibaba Cloud’s dedication to data compliance is evident. These certifications not only validate their commitment to industry standards but also assure clients of their compliance and security measures.

Alibaba Cloud continues to prioritize data governance by implementing technical guarantees for data security on their cloud platform. They have developed a system that classifies various types of data on the cloud, ensuring secure usage, entry, and exit of data. This commitment to technical guarantees fosters confidence among their clients and ensures that data security remains a top priority.

The release of Alibaba Cloud’s Data Security and Privacy Protection White Paper further emphasizes their focus on safeguarding data. This document highlights the best practices of applying cloud computing to protect data security. Such transparency and information sharing contribute to increasing awareness and understanding of data privacy and security.

Alibaba Cloud takes a proactive stance in supporting data privacy and security. They have launched a data security initiative that emphasizes the importance of cloud computing platforms being solely used for protecting customer data. They believe that platforms have an obligation to help protect the privacy, integrity, and availability of client data. This demonstrates their commitment to ethical data practices and their support for a secure online environment.

In conclusion, the Chinese government’s commitment to governing the Internet in accordance with the law, along with Alibaba Cloud’s significant contributions to cloud-based data governance and commitment to data compliance and security, reflects a positive sentiment towards ensuring secure and regulated online environments. Alibaba Cloud’s technical guarantees, white paper, and proactive stance further reinforce their dedication to data privacy and security. These efforts serve as an example for other organizations and emphasize the importance of upholding data governance standards for the advancement of the industry, innovation, and infrastructure goal.

Hosuk Lee-Makiyama

The analysis reveals significant aspects of cross-border data flow and its legal implications. It suggests that jurisdictional issues have largely been addressed, as many jurisdictions have expanded their reach and established a legal basis for cross-border data regulation. This expansion reflects the recognition of the importance of regulating data flows beyond national borders.

Additionally, the analysis underscores the importance of harmonizing and aligning laws to facilitate cross-border data flow. It argues that built-in transfer mechanisms within privacy laws and expedited data sharing processes can enhance efficiency and collaboration among agencies. This highlights the necessity of a cohesive legal framework for seamless data exchange.

The analysis also highlights the progressive evolution of the rule of law on the internet. By codifying rules and regulations, there is greater legal clarity compared to relying solely on executive orders. This signifies a positive step toward establishing a solid legal foundation to govern online activities and ensure accountability.

Furthermore, the analysis challenges the notion of a fictitious debate around ‘trust’ in data governance. Despite varying societal backgrounds, agencies worldwide are working toward similar data governance goals. This implies the potential for common ground and shared objectives, fostering trust through collaborative efforts.

Overall, the analysis provides a comprehensive overview of the progress made in addressing cross-border data flow challenges. It emphasizes the importance of jurisdictional expansion, harmonization of laws, clear regulations, and collaborative data governance. These insights shed light on the complexities associated with cross-border data flow and the ongoing efforts to navigate them while promoting trust and accountability.

Tang Lei

China has been dedicated to promoting law-based cyberspace governance ever since it became fully connected to the Internet in 1994. The country has taken significant steps to establish a comprehensive legal framework by enacting more than 140 laws pertaining to cyberspace. This legislation serves as the basis for governing and regulating the online environment in China.

One of China’s key arguments is that it champions the interests of all countries in promoting law-based cyberspace governance. The country believes that all nations should adhere to legal principles and frameworks to ensure a safe and secure online space for everyone. China’s commitment to this ideal is evident in the number of laws it has enacted and the efforts it has made to create a robust legal framework for cyberspace governance.

Furthermore, China emphasises the importance of equal participation in global cyberspace governance. It supports the involvement of all nations on an equal footing and actively engages in international exchanges and cooperation in the field of law-based cyberspace governance. By promoting inclusivity and collaboration, China seeks to foster a global community that works together to address the challenges and opportunities in cyberspace.

China also recognises the need for constant innovation and adaptation to keep up with the evolving technological landscape. It acknowledges the challenges posed by new Internet technologies and responds to them in a forward-looking manner. By promoting innovation in the concept, content, approach, and methods of law-based cyberspace governance, China ensures that its legal frameworks remain relevant and effective in addressing emerging issues.

In conclusion, China’s efforts in promoting law-based cyberspace governance are commendable. The country has enacted numerous laws and established a comprehensive legal framework to govern the online environment. China advocates for the interests of all countries and supports equal participation in global cyberspace governance. Additionally, it emphasises innovation in adapting to the challenges brought by new Internet technologies. Overall, China’s commitment to law-based cyberspace governance contributes to a safer and more secure online space for people worldwide.

Zhao Jingwu

In today’s digital society, data security and cross-border data flow have emerged as crucial issues. Data has become a vital element in the national innovative development of countries. The ability to securely transfer data across borders is not only important for domestic data security regulation and commercial utilization but also essential for promoting the global digital economy.

China, for instance, has taken steps to address the governance of cross-border data flow through its domestic law. The country has implemented clear rules that classify cross-border data flow into four categories. However, its governance model is seen as lacking openness and cooperation. While China acknowledges the importance of data security, there is also a need to balance it with utilization. The coexistence of security and utilization is considered essential in the China data governance system.

On the other hand, there are arguments against unrestricted cross-border data flow without proper attention to data security. Pursuing data flow without considering data security risks compromises the exchange value of data and can lead to security vulnerabilities such as data linkage and theft. It is important to strike a balance between the free flow of data and the necessary security measures to safeguard sensitive information.

Another concern is the politicization of data security, particularly in relation to China. There is an international perception that China follows a path of data controlism, essentially politicizing the issue of data security. This perception raises questions and highlights the importance of ensuring data security without unnecessary politicization.

In conclusion, ensuring the security of data flow is critical in today’s digital society. While China has defined rules for cross-border data flow, its governance model is viewed as lacking openness and cooperation. Striking a balance between security and utilization is key to effective governance. Additionally, pursuing unrestricted data flow without considering data security risks compromising the exchange value of data. The observation that the issue of data security can be politicized, particularly with China’s perceived approach, raises further concerns.

Wang Yi

The analysis provides a comprehensive overview of the personal information protection laws and data governance in China. It highlights some unique characteristics of China’s personal information protection laws and their approach to balancing the protection and utilization of personal information. Specifically, China’s civil code distinguishes between the right to privacy and the right to personal information protection. This distinction allows for a nuanced understanding of personal information and ensures that individuals’ rights are safeguarded while also promoting the responsible use of personal data.

In terms of data governance, the analysis reveals that in China, there is a consensus that data carries a wide range of interests beyond personal and property interests. This understanding indicates a broader perspective on data and its potential for various applications. The analysis suggests that data is non-exhaustible and can be used by multiple entities simultaneously. This recognition underscores the importance of developing comprehensive data governance frameworks that account for the diverse interests associated with data.

Notably, the analysis explores two main perspectives on data governance in China. The first perspective advocates for establishing property rights over data. This approach requires defining the boundaries of rights for different entities, ensuring that ownership and control over data are clearly defined. The second perspective focuses on access to data through lawful behaviours. This view prioritises the establishment of regulations and guidelines that govern the appropriate uses and accessibility of data. Both perspectives demonstrate the need to navigate the complex challenges surrounding data governance and strike a balance between individual rights and collective interests.

The analysis also acknowledges that China’s governance practices in this field could offer valuable insights for other jurisdictions facing similar concerns. It highlights the potential for China’s experience to serve as a reference for state governments worldwide grappling with data regulation and governance issues. By examining China’s approach, other jurisdictions may gain useful knowledge and strategies for developing effective policies and frameworks to protect personal information and regulate data usage.

In conclusion, the analysis sheds light on the distinctive features of personal information protection laws and data governance in China. It underscores the importance of balancing the protection of personal information with the need for responsible utilization. Furthermore, it emphasises the recognition of data as carrying a wide range of interests and the necessity of establishing comprehensive data governance frameworks. Overall, the analysis contributes valuable insights and recommendations for the ongoing global conversation on data governance and personal information protection.

Zheng Junfang

Alibaba, a prominent player in the digital economy, has been instrumental in connecting merchants and consumers through the use of data. This connection has revolutionized commercial operations, making them smarter, more transparent, and highly efficient. By harnessing the vast amount of data at their disposal, Alibaba has created a platform that facilitates seamless transactions between merchants and consumers, driving growth and prosperity in the digital ecosystem.

With a global reach that serves nearly 10 million merchants and 1.3 billion consumers worldwide, Alibaba continues to create greater value for customers and society as a whole. Their positive impact on the digital economy is evident through their ability to leverage internet technology to foster innovation and develop industries.

However, Alibaba acknowledges the challenges presented by data governance in the rapidly advancing digital economy. They are advocates for a law-based approach to data governance, recognizing the importance of protecting personal information, intellectual property rights, and ensuring network and data security. By acknowledging the risks associated with extensive data usage, Alibaba emphasizes the need for a robust legal framework to support a thriving digital economy.

In alignment with their commitment to responsible data management, Alibaba emphasizes the importance of AI technology serving humanity’s interests while safeguarding personal privacy and data security. Their efforts in this regard are exemplified by their launch of large-language model R&D in 2019 and the recent introduction of their ethical risk review management system. By proactively adhering to AI regulations, Alibaba ensures the advancement and stability of AI technology while prioritizing personal privacy and data security.

In conclusion, Alibaba’s significant impact on the digital economy is indisputable. Their role in connecting merchants and consumers through intelligent data usage has revolutionized commercial operations, promoting efficiency and transparency. Moreover, Alibaba’s dedication to law-based data governance and ethical AI practices underpin their commitment to responsible data management. Overall, Alibaba’s positive contributions to the digital economy firmly establish them as a global leader in the industry.

Jesus Lau

Mexico is currently facing challenges in data handling, specifically regarding data literacy and data protection. A significant concern is the lack of necessary skills amongst many citizens to understand and effectively utilize data. To address this issue, there is a need to raise awareness and promote the importance of data literacy and its benefits for personal and professional development.

Mexico has taken steps to ensure data protection by including it in Article 16 of its Constitution. The country has a solid rule of law regarding data governance, which encompasses principles and practices that ensure fair, transparent, and consistent management and governance of data within organizations and society. This provides a strong foundation for data protection.

However, Mexico still faces difficulties in data handling. Data breaches, with hackers targeting both private and government repositories, pose significant threats. This highlights the need for robust cybersecurity measures to safeguard sensitive information.

Legislative lag is also a concern. Data protection legislation often falls behind technological advancements and emerging threats, making it difficult to effectively address data security issues. It is essential to update and strengthen legislation to keep up with the evolving landscape of data handling.

Government ethics in data handling is another aspect that needs attention. Ensuring transparency, accountability, and ethical practices in the collection, storage, and use of data by government entities is vital to foster trust and protect citizens’ privacy.

Additionally, the widespread use of social media and the tracking of individuals’ data present further challenges in data handling. Stricter regulations and better controls are required to manage the risks associated with the use of personal data on social media platforms.

To overcome these challenges, Mexico should prioritize data and information literacy education and training. This could involve offering courses on data and algorithmic literacy and providing access to data analysis tools and resources. By prioritizing data literacy education at both individual and organizational levels, Mexico can empower its citizens to understand and make informed choices about their digital presence in cyberspace.

In conclusion, Mexico faces various challenges in data handling, including limited data literacy, data breaches, legislative lag, government ethics, and social media tracking. It is essential for the country to promote data literacy, strengthen data protection measures, update legislation, ensure government ethics, and regulate social media data handling. By prioritizing data and information literacy education and training, Mexico can empower its citizens to engage with data effectively and confidently navigate the digital world.

Xu Zhiyuan

China has implemented a comprehensive legal framework to effectively manage the flow of data across borders. This framework consists of three key laws: cybersecurity laws, data security law, and personal information protection law. These laws serve as a solid foundation for regulating and safeguarding the transfer of data.

To support these laws, the Chinese government has implemented additional measures. These include data export security assessments, which require the export or import of data or a certain amount of personal information to undergo a thorough security evaluation. This assessment ensures that data leaving or entering the country meets the necessary security standards.

Furthermore, standard contracts are signed between personal information processors and overseas recipients. These contracts outline the rights and obligations of both parties, providing a legal framework for the safe and responsible transfer of personal information. Additionally, China has established detailed rules for personal information protection certification. These certifications are conducted by professional institutions approved by the China Administration of Cybersecurity (CAC). They evaluate the measures taken by personal information processors to protect and manage personal information in accordance with regulations set by the China Information and Communication Administration (CIC).

China’s commitment to the safe and orderly flow of data across borders is evident in its efforts to create a supportive environment for data exchange. The State Council of China has issued a special document that establishes a streamlined process, known as the “green channel,” for qualified foreign investment enterprises. This green channel enables these enterprises to effectively conduct outbound security assessments of important data and personal information.

Additionally, the CIC has drafted special regulations on cross-border data flow and is seeking public opinions to further promote the orderly and free flow of data. These proactive measures demonstrate China’s determination to facilitate secure data exchange and support the growth of the digital economy.

China also emphasizes the importance of international cooperation and open engagement to promote the development of the digital economy. The establishment of a comprehensive legal system for managing transborder data flow, along with China’s proposal to explore convenient security management mechanisms for cross-border data flow, reflects its commitment to collaboration and partnerships.

In conclusion, China has taken significant steps to establish a robust legal system for managing the flow of data across its borders. The introduction of cybersecurity laws, data security law, and personal information protection law, along with the implementation of data export security assessments, standard contracts, and personal information protection certifications, demonstrates China’s dedication to the safe and orderly flow of data. With a strong focus on international cooperation and support for the digital economy, China is positioning itself as a key player in facilitating secure and efficient data exchange on a global scale.

Neil Walsh

The importance of data governance in personal, national, and international security is emphasised in the provided information. With data driving our thought processes and daily activities, its governance becomes critical. This highlights the need for effective management and protection of data to ensure security at various levels.

Another key point highlighted is the need for good legislative and governance mechanisms for managing data. It is mentioned that the governance of raw and segmented data often lacks clarity, indicating the importance of establishing clear frameworks and guidelines for data management.

Furthermore, there is an urgent call for a comprehensive law and policy framework to assess threats and prosecute offenders in the cyber realm. This is prompted by a recent devastating cyber attack in Eastern Africa that had significant impacts on the economy and security of the affected country. The lack of involvement from countries in Eastern Africa in convention work is noted, which is discouraging considering the need for collective efforts in addressing cybercrime.

In the context of the Cybercrime Convention, it is advocated that all factions, including NGOs, civil society, and academia, should be involved in the debate. This inclusivity is seen as essential despite the diplomatic difficulty arising from the divergent views of countries involved. It is important to consider different perspectives and input from various stakeholders to ensure a well-rounded and effective approach to tackling cybercrime.

Additionally, the significance of active listening and open communication for preventive diplomacy is highlighted. It is acknowledged that active listening and dialogue among individuals and nations are essential tools in the pursuit of preventive diplomacy. This process enables understanding, cooperation, and the building of partnerships to address conflicts and maintain peace.

In conclusion, the provided information underscores the importance of data governance in personal, national, and international security. It highlights the need for legislative and governance mechanisms to effectively manage data. Furthermore, the urgency for a comprehensive law and policy framework to address cyber threats and prosecute offenders is emphasised. The importance of inclusivity in the debate surrounding the Cybercrime Convention, involving various factions and stakeholders, is also stressed. Finally, the significance of active listening and open communication for preventive diplomacy is acknowledged.

Session transcript

Moderator 1:
Beijing Normal University, welcome to the open forum on the role of law for data governance hosted by the Bureau of Internet Laws and Regulations of the Cyberspace Administration of China. With the deepening of globalization and digitalization, data has become one of the core drivers of economic innovation and social development. Data application and governance face both opportunities and challenges. At the same time, fair and effective data governance is essential for public benefits and sustainable development. This forum aims to gather different stakeholders from government, civil society, and the technology community, as well as private sectors in Asia, Africa, Europe, and America to exchange insights and ideas on the current status and evolution trend of global data-related applications and data governance, to examine and assess important concerns and challenges in global data governance, and to explore the role of law approach for data governance that is beneficial to the common values of humanity. Now let’s start the forum with the first session of keynote speech. Please remind you that each speaker can have eight minutes. First of all, let’s welcome Mr. Tang Lei, Deputy Director General of the Bureau of Internet Laws and Regulations of the Cyberspace Administration of China.

Tang Lei:
Please. Distinguished guests, ladies and gentlemen, good morning. Today we are gathering here in the Internet Governance Forum. to exchange our thoughts on the future of digital governance for humankind. I find it very meaningful and look forward to a constructive outcome of the forum. Since China was fully connected to the Internet in 1994, it has committed itself to law-based cyberspace governance, enhancing continuously the level of law-based cyberspace governance. China is the world’s largest developing country and has the largest number of Internet users. We always uphold people-centered development and we always uphold further development of the Internet, with a keen understanding of the extreme difficulties and complications in cyberspace governance. China has been forward-looking in responding to the challenges brought by new Internet technologies, applications and business forms and models, and promoted innovation in the concept, content, approach and methods of law-based cyberspace governance. Meanwhile, China has played an active part in international exchanges and cooperation in law-based cyberspace governance, is committed to build a multilateral, democratic and transparent global Internet governance system together with other countries. Distinguished guests, friends, China set out from its realities to explore its approach to cyberspace regulation and governance, consolidating the legal system for cyberspace governance. Until March 2023, China has enacted more than 140 laws on cyberspace, forming a cyber legislation framework endorsed by traditional legislation and underpinned by specialized cyber laws governing online content and management, cybersecurity, information technology, and other elements. Keeping order in a rule-based cyberspace, China has taken rigorous measures to ensure fair and rule-based law enforcement in cyberspace, strengthening enforcement in key areas of immediate concern to the people, promoting a healthy cyber environment, promoting public awareness and competence in law-based cyberspace governance. China makes every effort to break new ground in the content, form, and means of spreading legal knowledge via the Internet. The Chinese netizens’ awareness and understanding of the rule of law have generally increased. Respecting, learning, abiding by, and using the law is a shared understanding and basic principle. Increasing international exchanges and cooperation in law-based cyberspace governance, China is fully engaged in international exchanges and cooperation in the field of law-based governance of cyberspace. It plays an active role in rule-making and it resolutely safeguards the international system with the United Nations at its core, supports the participation of all countries. in global cyberspace governance on an equal footing, engage in bilateral and multilateral dialogues and exchanges in law-based cyberspace governance, increase international law enforcement and judicial cooperation on cybersecurity. Distinguished guests, friends, the internet benefits the whole world. China champions the interests of the people of all countries in promoting law-based cyberspace governance. We stand ready to partner with colleagues from all over the world to enhance the level of law-based cyberspace governance. China will further improve legislation on digital governance and endeavor to establish a legal system for the protection of people’s rights and interests in cyberspace, data security, and platform regulation, deepen the implementation of laws and regulations in the digital field. China will also put the role of the United Nations as a main channel into foreplay and has strengthened international exchanges and cooperation in making rules for digital governance through platforms like the BRICS Cooperation Mechanism, Shanghai Corporation Organization, and the World Internet Conference. Distinguished guests, friends, facing the opportunities and the challenges brought by digitalization, China will follow the global governance principle of achieving shared growth through convolution and collaboration and work together with the international community. to ensure global digital governance is law-based, and that digital progress will deliver greater benefit to the people and a better world. In the end, I’d like to conclude by wishing today’s Open Forum a great success. Thank you all.

Moderator 1:
Thank you, Mr. Tang, for the relevant experience of China. Now, I give the floor to Neil Walsh, Head of UNODC Mission and Regional Representative for East Africa.

Neil Walsh:
Good morning, everybody. Can I check that you can hear me okay? Yes, you can. Thank you. Okay, a very good morning to you all from Vienna in Austria, where it’s approaching three o’clock in the morning, and I’m in a very small hotel room. So it is a great pleasure to be with you all. My name is Neil Walsh, and it’s my honor to be the Head of Mission and Regional Representative of the UN Office on Drugs and Crime in Eastern Africa, where I’m normally based in Kenya. My 200 staff and I cover 13 countries in East Africa and the Indian Ocean, and we deliver UNODC’s effects to counter organized crime, terrorism, and corruption. I wish to express at the outset my deep thanks to the Internet Governance Forum, the Bureau of Internet Laws and Regulations of the Cyberspace Administration of China, and to my dear friend and mentor, Professor Wu Shenguo of Beijing Normal University, who I saw on camera a few moments ago. I only wish that I could be with you all in Kyoto, but unfortunately the dates clashed with UNODC’s annual Heads of Mission meeting, and I have to be here in Vienna. The topic of today’s event is the rule of law for data governance and subjects that both the CAC and BNU are world experts in. But these are topics of criticality, not just for the institutions I’ve named, not just for the People’s Republic of China and the IGF, but for every nation, every business and every person on our planet. And in Eastern Africa, I see on a daily basis the absolute need for all of these aspects to come together. Data governance is a broad term, and I suspect that all of us could explain it and define it in different ways, based upon our experience, our education and our culture. And as we all know, definitions in all things cyber are often very politically challenging and academically diverse. But friends, I think we can all agree that we have a broad collective understanding of the importance of data governance in personal, national and international security. Data, be it personally identifiable or more anonymized, drives our world, whether we’re conscious of it or not. Data also drives our thought processes, our desires and our biology. Whether it’s the serotonin boost from a social media like or a revulsion when we see, experience or think about organized crime, data is at the core of everything that we do. And thinking about the adverts that we see every day, the data mining that drives targeted advertising and the surprise that we all feel, followed by a slightly uncomfortable sense when we realize that the new product we’ve been discussing with friends is suddenly across all of our social media feeds without asking or without searching consciously for it can be quite unpleasant. Data is the product of choice for exploitation and profit. And the region of the globe that I lead for UNODC, there is a daily clash between the desire for more data and our ability or not to analyze and exploit it at pace. The legislation and governance mechanisms of the raw and segmented data, whether within one’s own country of residence or nationality or beyond that is often unclear. And from my experience, conversations and guidance from Professor Wu and the CAC over the years, it’s clear to me that there is much more always to be done. And so data is at the heart of the United Nations. It must be at the core of our decision-making on topics as diverse as economic growth, sustainable development, encountering cyber crime. And I was able to listen to the last 15 minutes of the previous session and to see some old friends like Deputy Assistant Secretary Alison Peters on stage where we are discussing these matters together. Data when mined proportionately, lawfully, accountably and necessarily can make the difference between an emotional response and an objective decision. And in my role leading the UN’s work in Eastern Africa, I’ve placed the need for routine, accurate, strategic intelligence data at the core of our business. We can’t give good country level and regional policy advice if we don’t have good data. And good data sourcing without the ability to assess, analyze and exploit it is at best wasted or at worst dangerous. And some years ago, I led the UN’s policy response to cyber crime and my colleague Nayeli Loya led our operational programming globally. And I can remember so many meetings and conversations when we met ministers around the world who saw cyber crime as a future threat. None of us consider cyber crime to be a future threat now. It is the here and now everywhere. And just recently, a country in my region in Eastern Africa suffered a devastating cyber attack. It only lasted for a few hours, but the impact was significant. Electricity failed in some regions, payment systems in shops failed, the economy stalled. This was without doubt a national security incident and an international security incident. And so we need good law. We need good policy nationally and internationally to create the means to assess the threat and to prosecute offenders and to hold to account those who seek to undermine development and cause harm. And we need it now. I’m deeply encouraged by the work being done by UN member states under UNODC’s stewardship to craft the new Cybercrime Convention and the interventions of non-governmental organizations and civil society and academia are absolutely critical in this debate as well. This is diplomatically hard and many countries have divergent views. But most worryingly for me is the lack of involvement of countries in my region in Eastern Africa. The convention work is as important for Africa as it is for Asia, Europe and the Americas. So it is incumbent for all of us to create a supportive, nurturing, challenging environment for those who should engage and get the best out of this debate but are currently absent. We need to use our collective skills to bring them and their insights, their experience and guidance to support and mentor those who are yet to step in to these areas in necessary depth. Because we all know that if we don’t fill this space, others will. Others who don’t have our good, peaceful intent at heart. Others who will seek to harm and to exploit. And that’s why it’s so important to talk together about the rule of law for data governance. We need to talk to one another and most importantly, friends, we need to actively listen to each other too. That’s what the public, the people we serve, expect from us and need from us. This is preventive diplomacy in action. And that is why today’s event right now is so important. So friends, I want to thank you once again to the IGF, to the Cyberspace Administration of China and Beijing Normal University for inviting me to speak with you. I really wish I could be sitting with you right now. But most importantly, I want to say an enormous thanks to all of you who care about the topic, its seriousness and the consequences if we get it wrong. So from the middle of the night in Vienna, thank you for listening. And I hand it back to you in Kyoto. Thank you.

Moderator 1:
Thank you, Mr. Walsh, for your wonderful sharing. Now let’s turn to Professor Wang Yi, Vice President of Renmin University of China, for his speech. Professor Wang.

Wang Yi:
Thank you very much. Thank you very much for your kind introduction. Mr. Speaker, my fellow panelists and our distinguished audience, it is quite an honor to have this opportunity to share some of my thoughts on state governments today. I would like to provide a brief overview from the dual perspective of a participant in the legislative drafting process and a legal scholar in civil law in China. I will introduce some of the consensus reached by the academic community on state governance and the latest progress in this field. They are divided into the following two sections. personal information protection, and corporate data governance. In terms of personal information protection in China, the civil code of the People’s Republic of China has taken the lead in providing rules and standards for personal information protection in both its general provisions and the right to personality section. Since information technology has posed new challenges to the protection of personal information after the enactment of the civil code, the personal information protection law of the People’s Republic of China has overall continued the provisions of the civil code’s relevant articles, but with more specific rules within the existing framework. China’s legislative model of dual protection for personal information through the civil code and the personal information protection law exhibits several unique characteristics. Firstly, the civil code distinguish between the right to privacy and the right to personal information protection, making the boundaries between the two clear. Traditional privacy rights primarily address one-to-one infringement, while personal information protection primarily deals with large-scale macro-level infringements. Secondly, the civil code provides a balance between the protection and the utilization of personal information. The personal information protection framework initially originated from the… handling of personal information by government agencies. However, today, technology companies, especially online platforms, have become the primary actors in information processing. Therefore, personal information protection should be subject to adjustments within civil legal systems. Moreover, China’s civil code also places emphasis on safeguarding personality rights. It complements the personal information protection law, together forming a legal framework with Chinese characteristics in legal practice. The second topic I’d like to discuss is the civil code and corporate data governance in China. In today’s world, where the value of big data is widely recognized, how should we approach the legal framework of this big data? Should big data be exclusively discussed in the context of intellectual property? An emergent and important consensus in China is that there are many types of interests that can be associated with data, not limited to the personal and the property interests typically associated with intellectual property. Based on this consideration, the civil code has included multiple provisions for data within civil law as the object of legal relationships. The Chinese academic community shares the following three points of consensus of the key and the most important. differences between data and others, such as tangible and real property. The first and foremost is that data is non-exhaustible and able to be repeatedly utilized. The second characteristic is that when it comes to collection and utilization, data can be collected and used in parallel among multiple actors. The third characteristic is the complexity of the types of interests that data can carry. It can potentially carry both personal and property interests. Building on such shared notions, the biggest dispute in academia and practice is how the law allocates property interests above data. A typical example is disputes caused by web crawlers. In China, there are mainly two divergent viewpoints. One is to establish property rights over data and resolve disputes through property rights. The other is to access and resolve disputes through the legality of the relevant behaviors. However, given these two models differ, they still lead to similar outcomes. Even when data is subject to general property rights, such rights are often restricted and access is granted to other parties, especially ordinary users. Similarly, establishing legal rules for relevant actions also requires defining the boundaries of rights for different entities. In my personal opinion, it is inevitable to establish property rights. over the monetary interests carried by debt, but the content of debt rights that corporations enjoy may vary in different contexts. As far as I am concerned, governments around the world share similar concerns. I believe that China’s debt governance practice will provide valuable reference for other jurisdictions. This is all I have for today’s forum. Thank you very much.

Moderator 1:
Thanks for Professor Wang’s sharing, which provided us with a new perspective. Next speaker, let’s invite Mr. Xu Zhiyuan, Deputy Chief Engineer from China Academy of Information and Communications Technology. Please.

Xu Zhiyuan:
Hi. Good morning, everyone. I’m here today to talk about China’s framework of transborder data flow management. At present, data has become a key strategic resource. Major countries and regions in the world have implemented different degrees of restrictions on the cross-border data flow, and have constructed their own cross-border data flow management system. However, the international community has not yet formed a general consensus on the specific regulatory rules of cross-border data flow. And there are mainly three models. The United States, the European Union, and the emerging countries. China has always promoted cross-border data flow in accordance with the law. and has basically established a framework based on the rule of law. Since 2016, China has established the three plus three legal system with cyber security laws, data security law, and the personal information protection law as a top level design. And the data export security assessment measures, personal information export standards contract measures, and detailed rules for the implementation of personal information protection certification as supporting rules. In accordance with the three plus three legal system, China has promoted supervision and control of cross-border data flow in an orderly manner. In particularly, a number of demonstration cases have been formed in the security assessment of outbound data. At the same time, China’s local level has actively explored the innovation pilot of data outbound, promoting the safe and orderly cross-border data flow, and has accumulated the value of data elements. Next, I would like to introduce three ways of Chinese data export. First is safety assessment. The three plus three legal system establishes a basic requirement that the data export of import data or certain amount of personal information must pass a security assessment. Second is a standard contract, which is formulated by the Cyberspace Administration of China. China and signed by the information by the personal information processor and the overseas recipient Stimulating the rights and obligations of both parties third is protection certification Protection certification is an activity In which professional institution approved by CAC conducts a comprehensive evaluation of the personal information protection and management measures of personal information processors in accordance with CIC regulations if the merits meet the requirements the institution will issue a certification mark to the processor On the basis of the 3 plus 3 legal system China has further explored and innovated to promote the orderly cross-border data flow Recently the State Council of China issues a special document on foreign investment the title is opinions on further optimizing the environment for foreign investment and increasing the efforts to attract foreign investment. The document proposed to explore a convenient security management mechanism for cross-border data flow we will implement the requirements of the cybersecurity law the data security law and the personal information protection law establish a green channel for qualified foreign investment enterprise effectively carry out outbound security assessment of important data and personal information and promotes a safe orderly and free flow of data. On September 28th, the CIC drafted special regulations on the cross-border data flow to solicit public opinions aiming to further promote the orderly and the free flow of data in accordance with the law. Distinguished guests, China has always opened its door to the development of the digital economy and actively engaged in international cooperation. In the face of the development as a global digital economy, China will continue to regulate the cross-border data flow in accordance with the law, adhere to the vision of building a community with a shared future for mankind and share the dividends of digital development with other countries. Thank you.

Moderator 1:
Thank you, Mr. Xu. Now let’s welcome Professor Jesus Law, co-chair of the International Steering Committee of UNESCO Media and Information Literacy Alliance and the Vice President of University of Veracruz, Zena.

Jesus Lau:
Hello, good evening here. I’m in the southern part of Mexico. And I would like to say thanks to Professor Wu of Virginia Norman University and to CAC for the invitation to be part of this panel. My paper is called, Naked Address, Law for Data, Challenges and Opportunities in Mexico. Citizens in Mexico and in Latin America in general have legal and de facto options to be data naked or dress. In our contemporary data-driven modern society. In other words, they have the right to allow. or restrict the compilation and tracking of the digital footsteps in cyberspace. Mexico has a sound legal framework to protect individuals’ privacy. Data protection is ensured in Article 16 of the Mexican Constitution, as well as in the federal law for the protection of personal data held in private parties published in July 2010 and its regulations published in December 2011. The Mexican rule of law for data governance refers to a set of principles and practices that ensure that data within an organization or society is managed and governed in a fair, transparent, and consistent manner in accordance with established laws, regulations, and ethical standards. The authority responsible for data protection is the National Institute of Transparency, Access to Information, and Personal Data Protection. The acronym or the abbreviation is INAI. INAI oversees compliance with the law and has a primary focus on disclosing governmental activities, budgets, and public information, as well as protecting personal data and individuals’ right to privacy. INAI has the authority to conduct investigations, review and sanction data protection controllers, and authorize, oversee, and revoke certifying entities. The Ministry of Economy is responsible for informing and educating national and international corporations with commercial activities in the Mexican territory about their obligations regarding the protection of personal data. Among other responsibilities, it must issue relevant guidelines for the content and scope of the privacy notice in cooperation with E9. However, there are many challenges in Mexico, such as data breaches, hackers targeting private and government data, repositories pose a significant threat. Number two, legislative lag. Data protection legislation often lags behind technological advancement and emerging threats. Number three, government ethics, ensuring ethical data handling and decision making within the government is essential. Social media tracking. Social media platforms are major data trackers for marketing purposes, and sometimes this can be annoying. Number five, limited data literacy. Many citizens lack the necessary skills to understand, interpret, and effectively use data. The last challenge, in other words, limited data literacy, in the bold list, is certainly the most important because it can help address the rest of the challenges. Mexico needs to foster data literacy among its citizens, empowering them to understand, interpret, and effectively use data in today’s artificial intelligence driven world. To address these challenges, the country should, number one, raise awareness, promote the importance of data literacy and its benefits for personal and professional development. Number two, simplify complex data. Develop strategies and tools to make complex data more accessible and understandable. Number three, manage data overload. Provide guidance on how to navigate and extract meaningful insights from large data sets. Number four, overcome technological barriers. Ensure the technology and offer training on data analysis tools. Number five, address data quality. Promote data quality practices and techniques for cleaning and reprocessing data. Number six, teach a statistical and mathematical concepts. Offer education on a statistical and mathematical concepts relevant to data analysis. Number seven, emphasize data privacy. Educate individuals and organizations on responsible data handling and privacy compliance. Number eight, expand data access and enhance availability and access for all citizens. Number eight, promote change. Encourage organizations to adopt data-driven decision-making and foster a culture that values data. Number 10, address cultural and organizational barriers. Provide support, resources, and a conducive algorithmic culture for data literacy. Number 11, and last, allocate time and resources. Invest in training and development of data literacy skills, considering time and budget constraints. Conclusion, prioritizing data and information literacy education and training at both individual and organizational levels is essential. This may involve offering courses on data and algorithmic literacy, providing access to data analysis tools and resources. Enforcing a culture that values data-driven decision-making. Additionally, continuous efforts to raise awareness about the importance of data literacy can motivate individuals. to acquire these skills and make informed choices about the digital presence in cyber space. As a summary, according to the main message is that we need to offer data literacy training to our citizens. Thank you very much for this opportunity to speak at this session. I wish you success in the following sessions.

Moderator 1:
Thank you, Professor Law, for your wonderful words. I give the floor to the next speaker, Ms. Zheng Junfang, CRO and CFO of Alibaba Cloud Intelligence Group, please.

Jesus Lau:
Thank you.

Zheng Junfang:
Respected Mr. Tang Lei and the other speakers, ladies and gentlemen, friends, good morning. It is a great pleasure to participate this workshop and exchange ideas with you all on this topic. Today, our lives and work are intertwined with digital technology like never before. Indeed, data as a factor of production has emerged as a strategic resources for economic development. Chinese President Xi Jinping stated, we need to build a digital economy with data as a key factor, boost the integrated development of the real and the digital economies, and further integrate the internet, big data, and artificial intelligence with a real economy. As one of the earliest internet companies in China, Alibaba has benefited from the development of internet technology and from opportunities offered by the times. It is from this standpoint that we wish to share our thoughts and experience in the field of data governance. Data is a key factor of the digital economy. Alibaba, a sci-tech enterprise starting with e-commerce, has empowered the digitalization of numerous merchants and products to meet the huge market demand in China. Since its establishment, Alibaba has played an integral part in establishing connections between merchants and consumers throughout data. These data have made commercial operations smarter, information more transparent, and adjustment of supply and demand structure more efficient. It is also through data that we have built a trust system in transactions. Today, Alibaba serves nearly 10 million merchants and 1.3 billion consumers worldwide, and we work to continuously create greater value for customers and the society. In 2009, the first line of code was written for Alibaba Cloud’s self-developed cloud operation system. After 14 years of tireless efforts, our data-centric cloud computing platform has grown into a front-runner worldwide. In the era of cloud computing, both individuals and start-ups can enjoy the benefits of the digital economy. Mihai You is a video game development and publishing company that took shape in a dormitory at the Shanghai Jiao Tong University in 2011. Mihai You began to utilize Alibaba Cloud Computing Service when there were only eight staffers in the company. As of June 2023, the net profit of this young company reached nearly $2.27 billion. It is fair to say that Mihayo is a true cloud-native digital enterprise. Its success is a microcosm of this era, in which numerous innovative enterprises and Alibaba Cloud are mutually reinforcing. The value of data is unlimited, not only for business, but also for public services. During the Asian Games in Hangzhou, they just closed earlier this week. For example, cloud computing supported three core systems, the games management systems, results distribution systems, and game support systems. Alibaba Cloud also enabled the seamless integration of these core systems and provided intelligent applications, such as broadcasting and event communications. With the technical support of Alibaba Cloud, we can say that the event became the first Asian Games on the cloud. Only through law-based data governance can we give full play to the value of data. As an ancient Chinese saying has it, nothing can be accomplished without norms or standards. With the rapid development of the digital economy, data has played a vital role in promoting economic and social development. However, it has also posed challenges to the protection of personal information, intellectual property rights, and network and data security. In the cyberspace, therefore, promoting law-based data governance has become a global consensus. We believe that effective data governance will better facilitate data flow. Likewise, the free and secure flow of data within the framework of the rule of law will give full play to the strength of data as a factor of production. On the one hand, as a unique factor of production, data can be utilized repeatedly by different parties thanks to their inclusiveness. On the other hand, data can generate different values in different scenarios as their generation and utilization involve various stakeholders, thus creating a bucket effect. For this reason, Alibaba Cloud has advocated the whole process management of data throughout their lifecycle. Looking ahead, we would like to continue to participate in efforts to advance the rule of law in regard to data governance together with all clients and partners in this digital ecosystem. We face both opportunities and challenges in the area of AI. AI is one of the most innovative cutting-edge digital technologies in the world. As a high-tech Internet enterprise, Alibaba Cloud launched R&D on our large-language model in 2019, and the latest iteration, Tongyi Qianwen, was made available to the public recently. In the future, we will launch different partnership programs and endeavor to create more enterprise-specific models to ensure that every industry can better share the fruits of intelligent development. In the new era of intelligent development, we are the benefactors of the advance in AI while facing many uncertain risks and confusion. In response, the Cyberspace Administration of China, together with six other authorities, jointly issued in July the Intricate Measures for Administration of Generative Artificial Intelligence Services. the first of its kind globally. It provides a definitive legal environment and basis for the sound development of AI in China. In line with this regulation, Alibaba then released the management system for Scientech ethical risk review, introducing three principles of responsible AI, namely availability, reliability, and credibility. We hold that AI technology should serve the interests of humanity, be advanced and stable, and protect personal privacy and data security. Here, we would like to make three proposals. First, establishing high-quality university public corporate. Second, developing the standard system and the precaution system of data security for opposing racial discrimination, safeguarding the rights and interests of women and children. And third, actively carrying out international exchanges and cooperation in the field of data governance to promote global norms and consensus in this regard. Thank you.

Moderator 1:
Thank you, Ms. Zheng. Thanks again for all speakers of the first session. Next is the second session of round table, the moderator is my colleague, Wu Shengkuo.

Moderator 2:
Thank you. Thank you very much, Professor Liang. Now let’s move on to the second session of the roundtable discussion. Please remind you that each speaker can have seven minutes. Firstly, let’s welcome Mr. Fang Yu, Director of the Internet Law Research Center of China Academy of Information and Communications Technology, please.

Fang Yu:
Thank you, Mr. Wu. Distinguished guests, friends, good morning. Firstly, please allow me to say hello by Japanese. Ohayou gozaimasu, watashi wa Fang Yu desu. This is the first time for me to be here in Kyoto. This is a nice city and a beautiful place. More importantly, it is my great honor to speak at this forum. I am Fang Yu from China Academy of Information and Communications Technology, which refers to CAICT. The CAICT is a think tank in China engaged in research related to the field of network. Now I’m in charge of the Internet Law Research Center. My center mainly studies issues related to network legislation and has participated in several important legislations in China. As a think tank, we carry out some basic research, especially on cutting-edge legal issues. As we all know, the growing digitalization of our world is one of the key trends of the 21st century, and it is fundamentally changing the way we live and work. Digital economy is developing rapidly. The Internet is now an indispensable global public good. We need new laws and regulations to govern the cyberspace. Mr. Tang Lei has given an overview of China’s cyber law system, which is very impressive. Meanwhile, digital economy is driven by data. So I think data law is absolutely an important part in this system. The issue of data legislation is widely concerned by all countries in the world. In China, we generally divide it into three aspects – data security, personal information protection, and data value. First, data security means to use legal measures to ensure the effective use of data without affecting national security and social stability. To reach this goal, we need to make data classification and protect different types of data by different means. It covers many factors, but in which the issue of cross-border data flows is very critical. Mr. Xu Zhiyuan has already explained this topic in great detail, so I will not repeat it. Second, personal information protection can be said to be the fundamental rule in the development of the digital economy. And most countries are faced with the contradiction between personal information protection and personal information utilization. The European Union wants to balance them by the famous GDPR, and many countries have developed their own personal information protection laws with reference to the EU approach. China has a long history of personal information protection practices and in 2021 adopted China’s personal information protection law, which provides a Chinese plan for the protection and usage of personal information. The last is the issue of data value. This is quite essential for the digital economy, but there is no proper solution to this problem, and China is actively starting and exploring it. China has taken the lead in recognizing that data plays a fundamental role in the development of the digital economy, and encourages it to give full play to its essential role. However, many people are still deeply discussing the issue of data rights, hoping to determine the ownership of data through a legal framework to further realize the value of data. Distinguished guests, friends, I believe that data governance will be an issue which needs to be studied for a long time in the background of the digital economy.I hope that countries around the world can work together to make progress, especially through discussions under the framework of the United Nations. and other international mechanisms, so as to jointly improve the level of data governance and share the benefits of the digital economy development. Thank you for listening.

Moderator 1:
Thank you very much, Mr. Fang, for your wonderful point of view. Now I give the floor to Hosaka Lee Makiyama, Director of European Center for International Political Economy, please.

Hosuk Lee-Makiyama :
And once again, I would like to repeat my deepest gratitude to IGF, Benjindorma University and CSE for this invitation to make this very brief intervention. And as a simple legal and economic scholar that has studied global governance of data economy for the last 15 years, I’m very honored to offer my observation on this very difficult topic on the rule of law, because despite all the self-evident societal and developmental benefits that we have heard today in this panel, as well as in the other forums of IGF, it is well understood that the cross-border data flows has raised many important questions regarding rule of law, and especially in the context of international law. And when the internet and the data economy emerged two decades ago, the primary question used to be, and it’s to some degree still is, whether we can avoid internet becoming a legal void, jurisdictional terra nullius, if you like. And to date, I think that these concern has been pretty much addressed, and the issue has been less about determination of jurisdiction or legal forum, as it was believed, as very often the legal questions around new innovation tend to be very, very different than we imagined them at the onset. And many jurisdictions have actually expanded their reach and the legal basis with some form of extraterritoriality. Many speakers already commended the EU GDPR as a model or a template for many laws that have followed. I think EU GDPR is also a good example of the extraterritoriality since it is applied extraterritorially based on the citizenship or residency of the data subject rather than the object, which is the case in many other laws. And it has also established a practice of jurisdiction based on the citizenship or the passport or the fiscal placement of the data subject. And the ecosystem turned out to be much more insulated perhaps than what it was believed. We have seen that due to cultural and linguistical reasons, Internet has actually much more local flavor than we expect them to have. And due to the delivery of the data economy, which is so contingent on a fiscal continuum. So, for example, like local payment systems, banking and or a fiscal delivery of the transaction. We can also see that there is a natural tendency where these jurisdictional questions have been resolved. And despite the use of extraterritoriality issue associated with cross-border compliance and enforcement have been actually quite moderate. And to some extent, it is thanks to the legislative harmonization we have seen, for example, under the COE Budapest Convention on Cybercrime. But first and foremost, it is notable how basic liability principles or contract law or criminal law have actually applied equally online. as well as offline in Europe and many other jurisdictions. And increasingly online services are also subject to various type of licensing and notification requirements, meaning that actually the jurisdictional question has been resolved and also the rule of law has been territorialized at onset. And if that is the development we have seen in the past to first two decades of digital economy and where we have seen the evolution of legal doctrine, which is based on personal information on data management that we know today. We can also see that the current evolution in terms of rule of law on the internet is quite progressive where we have seen codification of in many instances of previous soft laws and executive decision. And the codification has also led to more legal clarity as more and more rules are actually transparent and actually written down rather than executed as executive orders or soft law. We see that there is an improvement of rule of law. However, not all foreign economic actors may not necessarily share the desirability of this clarity or these outcomes that the rules have enabled. So for example, if I just would take an example from Europe once again, Europe has introduced the Digital Services Act, Digital Markets Act and Data Act or EU Cloud Services Scheme which all have shifted the investigative ex post legislation like for example, an antitrust enforcement to an ex ante approach through universal obligations and through regulations rather than investigations. And in other elsewhere, we see cybersecurity laws that have provided more legal clarity with clearer legal basis, distinguishing different cases and practices. And once again, better clarity is always desirable, but some may simply just disagree with the rules. We still see issues with national treatment. So critics say that the EU law set very arbitrary thresholds of what high risk practices entail, and therefore it is very selective in its legal scope. And in fact, many thresholds are naturally ambiguous or subjective. I think there was a disruption in my connection, but I’ll try again. As I was saying, many thresholds that are naturally ambiguous or subjective, and that’s basically the nature of the internet law itself. And it is foreseen that case law will provide the further clarity on these issues. And, but given the dynamism of the legal systems of internet law as a subject, the question is whether we will be required to change the legal framework and update them before any case law actually evolves. This is the risks of governing fast paced technology which we all have to live with. And this basically leads to a final point here that there is a universal problem where enforcement agencies have a natural disadvantage in understanding the current practices, but not necessarily enforcing their rules. Transparency is of course, synonymous with accountability. And I think that the current trend shows that we are regulating to understand commercial practices, user patterns, rather than mitigating actual potential risks associated with data flow or inadequate enforcement. And I think that IGF hosts of this year, Japan has taken significant step here for the global community by taking the initiative for the institutional arrangement under the G7 on the FFT, and which will enable governments to study issues and causality and best practices for better data governance. And to basically to wrap up history and the future of cross-border governance of data flows has been characterized by friction of obligations rather than perceived conflict of laws or values. Different legal system are founded on different societal values. And despite these differences, it is evident that the regulators seek surprisingly similar outcomes in their digital economy and try to address similar issues. And however, these outcomes may have very different commercial consequences if you look at the individual companies. A foreign disruptor in one country is actually an incumbent and a national champion in another country. Some objectives of policy create very different winners and losers. This is not necessarily a product of diverging values or diverging objectives. And extraterritoriality can only be resolved through mutual cooperation and such as mutual legal assistance treaties. And many of the privacy laws have built-in transfer mechanisms. MCC has been mentioned several times under this course of this panel, but also adequacy decision. And these mechanisms for expedited data sharing process can enhance efficiency and collaboration amongst agencies. However, enforcement can only be guaranteed by governments, not by private actors. And this is an understanding which is the basis of the European model and many other legal models we see across Asia. So impetus comes from harmonisation, alignment of laws and especially on data protection, privacy and security, rather than establishing a common international standard or voluntary or trusted global frameworks. And I think we see that the equivalence decisions and other fundamental mechanisms for cross-border data flows are actually 100% legal in their nature. Trust is a matter of question between governments and people, but it does not necessarily relate to the data. It’s a function of equivalence rather than between two jurisdictions, rather than a function of trust. And here’s where many open data advocates tend to talk, prefer to talk about trust rather than equivalence between laws. So it may be a fictionary conflict that we see around trust, where we see, as once again, that many agencies around the world are actually working towards similar goals, despite having very different societal backgrounds. Thank you so much. And I’ll pass the word back to the panel.

Moderator 2:
OK. Thank you very much, Mr. Lee Makiyama from Brussels, for the interesting sharing. Next, we have Ms. Wang Rong, senior expert from Tencent Research Institute.

Wang Rong :
Thank you. Good morning, everyone. I’m Wang Rong from Tencent Research Institute, which is a research platform focusing on public policies in digital economy. I’m very honored to participate in the RGF Data Governance Forum. I guess that everyone must be impressed by the extraordinary accomplishments that China has achieved. just as Tang Lei, Deputy Director, introduced to us. Now, I would like to share the China’s personal information protection from the perspective of corporate compliance practices. First, I would like to share some interesting findings. So for the purpose of corporate compliance, Tencent Research Institute compared the provisions of China’s personal information protection law referred as PIPL, promoted in 2021 with the European Union’s General Data Protection Regulation, as you know, that’s GDPR. Through comparing these two laws, we found some interesting findings. The first, China’s PIPL is fully integrated with the international general principles of personal information protection represented by GDPR. In terms of legislative model, China’s PIPL adopted a globally mainstream model, which is very comprehensive and universal legislative model that is applicable to all sectors, not only to private sector, but also in public sector. And third, in terms of the laws content, the specific rules, the PIPL introduces the basic rules, including the legal basis of data processing, the rights of data subjects, the obligations of data controllers and data processors. Finally, in terms of strictness of rules, China’s PIPL is basically matches the EU GDPR standard. In some aspects, China’s PIPL is even more stringent than GDPR. So as we concluded. Although there are still some subtle differences between the PIPL and GDPR, but in general speaking, China’s PIPL is highly compatible with international legislation standards. The strict PIPL in line with international standards will bring full benefits to the healthy development of platform companies such as Tencent. Companies will fully embrace the implementation of law with a positive attitude. It is constructive to help the digital industry to build consumer trust through legal system protection. As you know, rebuilding consumers’ confidence and security trust is one of the core issues in the digital society. We believe the legal system itself undoubtedly plays an important role in now. As data processing scenarios become more complex, data flows between different institutions increase dramatically. Through the legal system, Clary finds the legal responsibilities of different market players in different aspects of data processing is very constructive to establish a data protection ecosystem. So based on our business, Tencent relies on systematic tools to implement data privacy compliance work. Tencent is one of the earliest internet companies in China to explore personal information protection and data compliance. We emphasize technology itself to empower privacy protection. We have developed the Linxi privacy platform to establish comprehensive technical capabilities to facilitate our service to fully comply with the privacy protection requirements. In addition, Tencent continues to develop privacy technologies such as federated learning, trusted computing, and secure multi-party computing to explore more technical solutions for personal information protection in the whole life cycle of digital service. In the practice of implementation of the PIPL, Tencent continues to improve product transparency, giving our users more choice and control, and provide one-stop privacy solutions for our users. Besides that, we have established an integrated rights response and processing mechanism to ensure that users’ personal information rights requests are responded in a timely and effective way. So, in a conclusion, just as we advocated of technology for good, we hope that our products and services themselves try to take advantage of technology to do good and build up the consumer’s trust. That’s all. Thank you for listening.

Moderator 2:
Thank you, Mr. Wang, for your relevant sharing. Now, let’s welcome Mr. Zhu Ran, Vice President of Alibaba Cloud Intelligence Group, please.

Zhu Ran:
Thank you. Ladies and gentlemen, friends, good morning, everyone. It’s my honor to participate in the workshop on the role of law for data governance of 18th Internet Governance Forum and share my ideas on this topic with all of you. The Chinese government has always adhered to the principle of of governing the Internet in accordance with law in its efforts to promote the healthy and orderly development of the Internet. The rule of law on the Internet is not only an important way of digital governance, but also an important outcome of digital civilization advancement. Alibaba has done a lot of work in data governance in line with national laws and regulations as well as international initiatives. Practice of Alibaba Cloud in Data Governance Alibaba Cloud Intelligent Group has been committed to the cloud-based data governance for years, relying on a self-developed AppSara system which prides clients from more than 200 countries and regions worldwide with cloud services such as computing, storage, networking, data processing, and security protection. The group has explored a complete set of methods for data governance. In terms of compliance governance, as a company that provides cloud computing services for the public, Alibaba Cloud has worked to improve data compliance governance and has become a cloud service provider with the best qualification in Azure as well as an industry leader in protecting data security and privacy of cloud computing. As early as 2013, Alibaba Cloud passed ISO 27 sound and CSA star certification and later passed PCI DSS certification in the financial field. In terms of technical guarantees, Alibaba Cloud continues to strengthen technical guarantees for data governance of the cloud platform. First, Alibaba Cloud has classified various types of data on the cloud and ensured data security in their usage, entry and exit, and other situations by taking advantage of technologies and stepping up operations and maintenance systems. Second, Alibaba Cloud has established well-functioning disaster recovery systems and redundant systems for cloud computing, networking, storage of data. Several disaster recovery systems have been built such as dual active in the same city, backup data in other cities in case of disaster, multi-active in multiple cities, scheme of two places and three centers, and so on. Third, in terms of infrastructure, Alibaba Cloud has established security controls over data with regard to its storage encryption. transmission, encryption, and across-control, ensuring data security of multi-tenancy in cloud computing. As for the policy support, Alibaba Cloud took the lead in launching a data security initiative in 2015, stating that since data are customers’ assets, cloud computing platforms cannot be used for other purposes. Rather, platforms have the obligation to help protect the privacy, integrity, and availability of client data. The initiative also held that cloud computing platforms should provide a privacy and data protection framework and scheme for cloud users. In 2021, Alibaba Cloud released the Data Security and Privacy Protection White Paper, which introduces the best practice of Alibaba applying cloud computing to safeguarding data security. Efforts in security protection involve physical security, data storage, network transmission, computing security, as well as backup and disaster recovery. Recently, Alibaba Cloud officially launched our LLM. This is a next-generation model for LLM. It can understand complex instructions, engage in multi-round dialogue, write copy, perform loggage rezoning, understand multi-modal inputs and support multiple languages. So it can be applied in, for example, planning, office administration, shopping, recommendation, and home design to help customers raise efficiency of their work and services. What’s more, enterprises can build their own LLM models by Tongyi Qianwen to develop more enterprise level applications. We believe that data guidance for LLM determines the scope and depth of LLM’s application. Therefore, R&D and application of AI have always been pursued under the guidance of principles of availability, reliability, credibility, and controllability in Alibaba Cloud. That’s all. Thank you.

Moderator 2:
Thank you, Mr. Zhu, for your profound insight. Now I would like to give the floor to Mr. Zhao Jingwu, Associate Professor of Law School of Beihang University. Please.

Zhao Jingwu:
Thank you, Professor Wu. Good morning, everyone. It’s my honor to have an opportunity to share my thoughts, speech here. I’m Zhao Jingwu from Beihang University. What I would like to talk today is simple issues, is how to ensuring the security of corresponding data flow. through the legal instruments. Well, in modern society, the economic and strategy value of data has become a crucial element of the national innovative development of digital economy. Well, the combination of traditional and digital business has changed the operating model of the real economy, as well as the basic condition of international digital economy development. So cross-border data flow is not just a matter of domestic data security regulation and the commercial utilization, but also a complex issue that affect the promoting of global digital economy. Well, in recent years, we can see that more and more countries, regions, and international organizations, by including China, has tried to explore safe and trustworthy model for cross-border data flow through domestic legislation, bilateral agreements, and international treaties. However, at the same time, there are also many controversy needs to be solved urgently. Well, in this context, we can see that China has actively promoting the governance path of cross-border data flow. However, here is a misleading in the international governance activities, which is to encourage the cross-border data flow without restrictions. Well, perhaps their original intention was to achieve a border and more efficiency data flow effects, but the key is they fail to understand the relationship between the data security and data flow. Well, it’s worth mentioning that in the Article 1 of the data security law in China, the governance idea of data is to ensure data security and to promote data development and utilization. Well, in summary, it means pay equal attention to safety and utilization. So we agree that blindly pursue cross-border data flow without paying attention to data security. is not only fail to realize the exchange value of data, but also breeds security risk, such as data linkage and theft, which would lead in the reduction of the economy value of data resources. So in the international community, there is a view of China follow the data controlism path, which is essentially politicalized the issue of data security. That is because we don’t have a unified standard for the international cross-border data flow around the world. While multilateral cooperation always have to comply with different domestic laws and the international agreement. So there’s no denying that the national data security and the citizen personal privacy are generally recognized primers for cross-border data flow. Furthermore, across the global, there’s no country allowed cross-border data flow without any condition. And the more country, domestic law put data security and the national security as a first place. So what I want to emphasize is China isn’t an open and cooperative governance model for cross-border data flow. It’s not an empty words. China’s domestic law has clearly defined four categories of rules for cross-border data flow, which including security assignments of outboarding data transfer, standard contract for the cross-border transfer of personal information, and third party security certification and special rules for special areas. So all above these rules are supported by the corresponding laws and regulations. So moreover, a few days ago, China’s regulator authorities just released the regulation on regulating and facilitating the cross-border data flow. There is a draft for comments. which further refined China’s governance framework for cross-border data flow and the response practical issues with social general concern. For example, the draft clarified that the outbounding data transfer does not require security assignment, standard contract or security certifications when the non-important data generated in the activities, such as international trades, academic corporations, corresponding manufacturing and the market influence activities. So Chinese supervisation system for corresponding data flow is not simply to restrict data export, but to better protect and promoting data export. So Chinese legislation has established diverse channel for cross-border data flows, which not only catering to the market demands for various industrial and enterprise, but also align with international rules on the cross-border data flows. So all of these help multinational enterprise to solve practical problems for repeating the compliance, multiple compliance, even conflict compliance during the processing of outbounding data transfer. Finally, I hope we can reach a consensus that the governance of data, which especially is a cross-border data flow, cannot ignore data security, nor can it set too many restrictions for security. The concept of security and the utilization coexistence in China data governance system, offering China a wisdom and approaching to solve the problem of cross-border data flow. That is all what I want to say. Thank you. Thank you, Professor Wu.

Moderator 2:
Thank you, Professor Zhao, for your wonderful words. Due to time limitation, we have to conclude this forum. We hope to have more in-depth exchanges and discussions in the future. Once again, we would like to thank all guests and friends for your wisdom and efforts to contribute to this open forum. We also would like to thank UNIGF for providing us with a more than relevant dialogue platform. This open forum is concluded here. We invite all of you to have a photo group here. Thank you.

Xu Zhiyuan

Speech speed

128 words per minute

Speech length

701 words

Speech time

329 secs

Zhao Jingwu

Speech speed

144 words per minute

Speech length

919 words

Speech time

384 secs

Fang Yu

Speech speed

120 words per minute

Speech length

647 words

Speech time

325 secs

Hosuk Lee-Makiyama

Speech speed

144 words per minute

Speech length

1544 words

Speech time

643 secs

Jesus Lau

Speech speed

127 words per minute

Speech length

866 words

Speech time

408 secs

Moderator 1

Speech speed

130 words per minute

Speech length

452 words

Speech time

209 secs

Moderator 2

Speech speed

131 words per minute

Speech length

240 words

Speech time

110 secs

Neil Walsh

Speech speed

167 words per minute

Speech length

1367 words

Speech time

490 secs

Tang Lei

Speech speed

107 words per minute

Speech length

694 words

Speech time

389 secs

Wang Rong

Speech speed

115 words per minute

Speech length

678 words

Speech time

352 secs

Wang Yi

Speech speed

119 words per minute

Speech length

858 words

Speech time

434 secs

Zheng Junfang

Speech speed

131 words per minute

Speech length

1071 words

Speech time

491 secs

Zhu Ran

Speech speed

87 words per minute

Speech length

708 words

Speech time

486 secs

Procuring modern security standards by governments&industry | IGF 2023 Open Forum #57

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Wout de Natris

The Internet Standards, Security and Safety dynamic coalition is dedicated to enhancing the security and safety of the internet. They have formed three working groups to address specific areas: Security by design on the Internet of Things, Education and skills, and Procurement and Supply Chain Management and the Business Case. These groups aim to tackle various challenges and contribute to a more secure and safer online environment.

The coalition is actively engaged in several projects, including the deployment of DNSSEC (Domain Name System Security Extensions) and RPKI (Resource Public Key Infrastructure), as well as exploring emerging technologies and addressing data governance and privacy issues. These initiatives reflect the coalition’s commitment to promoting best practices and robust security measures in the digital landscape.

One of the key objectives of the coalition is to convince decision makers to invest in secure design and deployment of internet standards. To achieve this, they are developing a persuasive narrative that utilises political, economic, social, and security arguments. By providing compelling reasons, they aim to encourage decision-makers to prioritise and allocate resources towards implementing robust security measures.

The Procurement and Supply Chain Management and the Business Case working group have released their first report, which focuses on comparing global procurement policies. This report sheds light on the current landscape and provides insights into various approaches and practices in procurement. Consequently, this information can be utilised to identify areas for improvement and to advocate for more secure and transparent procurement processes.

An important observation highlighted by the coalition is the lack of recognition of open internet standards by government policies. This finding underscores the need for greater alignment and integration of these standards into policy frameworks. Universal recognition and adoption of standards for data protection, network and infrastructure security, website and application security, and communications security are seen as crucial steps toward a safer digital environment.

In addition, the coalition aims to provide a practical tool for decision makers and procurement officers. This tool, which includes a list of urgent internet standards, will help guide decision-making and procurement processes, ensuring that security considerations are effectively integrated into ICT procurement.

The coalition also seeks to improve procurement policies and the validation process for open internet standards in public procurement. They recognise the importance of streamlining and expediting these processes to ensure efficient and effective adoption of open standards. By doing so, procurement policies can be enhanced, leading to more secure and reliable digital infrastructure.

Overall, the Internet Standards, Security and Safety dynamic coalition is making significant efforts to enhance internet security and safety. Their work spans various areas, from promoting secure design and deployment of internet standards to advocating for the recognition and adoption of open internet standards in government policies. By collaborating and addressing key challenges, they aim to create a safer online landscape for individuals, organisations, and governments.

Audience

The speakers discussed the importance of promoting the international use of testing websites to uphold standards such as accessibility and sustainability. They highlighted the effectiveness of a Dutch testing website and advocated for its adoption globally. The positive sentiment was reinforced by the speaker’s personal experience of receiving a T-shirt after testing a website that scored 100%.

Shifting focus to India’s digital transformation, the discussion revealed concerns about the poor compliance status. Although India has made progress in digital public infrastructure, including the development of a vaccine website during the COVID-19 pandemic, there is a need for scaling up existing applications to meet the demands of the country’s population. The lack of multilingual applications and universal acceptance in India’s digital transformation was also brought to attention, with a specific mention of the problem of non-Latin scripts in domain names. The speakers highlighted ICANN’s efforts to resolve this issue and suggested incorporating testing for these aspects in the code of internet.nl.

The importance of digital standards was emphasized, but it was noted that India does not have a law mandating compliance with the latest standards. Instead, the speakers proposed nudging stakeholders through volunteer work and the periodic dissemination of test results.

Overall, the analysis provided a comprehensive overview of the discussions, including key points, arguments, and evidence presented. The speakers’ positive sentiments, concerns, and suggestions offer valuable insights for further exploration in the field of digital transformation and compliance.

Annemiek Toersen

Open standards play a crucial role in enhancing the interoperability, security, accessibility, and vendor neutrality of IT systems within the Dutch government. The Netherlands Standardization Forum, which advises the Dutch government on open standards, has identified about 40 open standards on the “comply or explain” list that are mandated for use in new IT systems or services.

To promote open standards adoption, the Dutch government has implemented a comprehensive strategy that includes mandating specific open standards, investing in community building, and closely monitoring their adoption. The Netherlands Standardization Forum has successfully secured agreements for implementing standards like HTTPS and DNSSEC. They also use internet.nl to regularly measure the usage of open standards across approximately 2,500 government domains.

To achieve wider acceptance, the Dutch government actively cooperates with vendors and international counterparts. For example, the Netherlands Standardization Forum has collaborated with Microsoft to ensure support for the DANE security standard by spring 2022. They are also sharing the code base of internet.nl with countries like Denmark, Australia, and Brazil to encourage broader adoption of open standards.

Despite these efforts, there is still work to be done, as many government tenders do not fully comply with open standards requirements. The Netherlands Standardization Forum regularly reports insufficient compliance to the Dutch cabinet.

Collaboration between internet.nl and other dashboards focusing on website accessibility can strengthen testing standards, including elements like accessibility and sustainability.

Convergence of different internet standards is necessary to avoid duplicating efforts, and the Ministry of Internal Affairs and Infrastructure is working towards a single dashboard to combine various standards.

Validating standards is crucial, and the Netherlands Standardization Forum emphasizes the need for scrutiny to ensure effectiveness and relevance.

The adaptation of standards is supported, but it requires common agreement among multiple organizations in the Netherlands. Overall, open standards are foundational to the Dutch government’s IT systems, and the Netherlands Standardization Forum continues to drive adoption. However, challenges such as compliance and convergence need to be addressed through ongoing cooperation, validation, and adaptation.

Mallory Knodel

The analysis emphasizes the significance of implementing global internet security standards in procurement and supply chain management policies. It highlights that while some countries, like The Netherlands, already incorporate references to standards in their procurement policies, there is a noticeable lack of standardisation across regions and countries. This lack of a unified and syndicated approach poses challenges in ensuring consistent and effective internet security measures throughout supply chains.

To address this issue, the promotion of multi-stakeholderism in procurement and supply chain management is advocated. The suggestion is to utilize platforms such as the Internet Governance Forum (IGF) as a means to advance this initiative. By involving various stakeholders, including governments, private sectors, and civil society, it is believed that a more comprehensive and collaborative approach towards internet security can be achieved.

Moreover, the analysis calls for greater transparency in procurement policies worldwide. Specifically, it points out the need for more countries to openly publish their procurement policies. This transparency not only enhances accountability but also allows for better knowledge-sharing among nations, fostering the adoption of best practices in internet security.

Another key argument made is that cybersecurity standards should be treated as reference points in international treaties. These standards can also be transformed into compliance mechanisms, ensuring that nations adhere to established protocols in internet security. Additionally, there are opportunities to utilize open cybersecurity standards, which provide a basis for common guidelines and practices that can be widely implemented.

In terms of potential future investigations, the relevance of standardisation in the EU procurement process is acknowledged. While not the main focus of the research, the impact of standardisation on EU procurement is considered an area worth exploring further. This suggests that standardisation has the potential to play a significant role in shaping procurement practices within the European market.

Furthermore, the analysis highlights the importance of market entry as a driving factor for companies to pursue standardisation. In some cases, US companies may opt to get their technology standardised at platforms like Etsy in order to meet the requirements of European governments or tender bids. This emphasizes the role of standardisation in facilitating market access and competitiveness in the European market.

In conclusion, the analysis underscores the need for global internet security standards in procurement and supply chain management policies. It calls for a more standardized and syndicated approach across nations, promoting multi-stakeholderism and transparency. By treating cybersecurity standards as reference points and compliance mechanisms, and utilizing open standards, greater consistency and effectiveness in internet security can be achieved. The relevance of standardisation in the EU procurement process and its impact on market entry are also recognized. Overall, this analysis provides valuable insights and recommendations for advancing internet security standards in the procurement and supply chain management domain.

Alisa Heaver

The Dutch government strongly supports the Platform Internet Standards and Forum Standardisation, recognizing the crucial role that standards play in various sectors. They view the adoption of standards as essential for driving innovation and fostering a strong digital infrastructure. The government actively forms public-private partnerships to further promote the adoption of these standards.

These partnerships have been instrumental in advancing the use of standards by the Dutch government. Collaborating with private entities allows them to leverage expertise and resources to implement and develop internet and other types of standards. This collaborative approach strengthens the government’s ability to adopt standards and encourages collective responsibility in their development and implementation.

The Dutch government’s support for internet standards extends beyond its borders. They actively encourage other governments to embrace these standards for procurement and promote global collaboration. Alyssa Iver, a representative of the Dutch government, emphasizes the importance of working with experts in respective countries on internet and other types of standards. This collaborative emphasis ensures that standards are tailored to meet the unique needs and contexts of different countries, contributing to the global adoption and implementation of standards.

In conclusion, the Dutch government’s strong support for the Platform Internet Standards and Forum Standardisation reflects their understanding of the vital role of standards in driving innovation and creating a robust digital infrastructure. Through public-private partnerships and global collaboration, they actively promote the adoption of standards both domestically and internationally. This commitment not only advances their own digital agenda but also contributes to the global framework for standards and collaboration.

Olaf Kolkman

The Internet Governance Forum (IGF) meeting focused on the importance of internet security for the common good. Olaf Kolkman, an advocate for protecting infrastructure, emphasized the need to safeguard the internet to benefit everyone, rather than just individual organizations. This highlights the collective responsibility to ensure the security and stability of the internet.

One of the challenges discussed at the meeting was the slow adoption processes for open internet standards. The adoption of these standards often takes several years before they are widely implemented. However, the meeting recognized that public-private partnerships can play a crucial role in promoting and accelerating the adoption of modern internet standards. By collaborating with various stakeholders, including governments and private organizations, the widespread adoption of these standards can be facilitated.

To further support the implementation of modern internet standards, effective tools were highlighted. The internet.nl test tool, for example, helps organizations and individuals assess if their websites, emails, and local connections are functioning in line with these standards. It is projected that over 1 million tests will be conducted using this tool by 2023. This demonstrates the practical impact and usefulness of such tools in facilitating the adoption of modern internet standards.

Knowledge sharing across countries was also emphasized as a means to promote the adoption of open internet standards. Countries like Brazil, Denmark, and Singapore have already initiated the adoption of these standards and tooling, setting an example for others to follow. The Platform Internet Standards, which was initiated as a public-private initiative, is open to learning from global experiments. This collaborative approach allows for the exchange of knowledge and best practices, enabling more countries to adopt these standards effectively.

Olaf Kolkman strongly supports the use of open internet standards as they enhance user safety, security, and online connectivity. He calls upon organizations to adopt these standards to ensure that the internet functions correctly and benefits everyone. These standards not only safeguard individual users and organizations but also contribute to the overall well-being of society.

Aside from discussions on internet security, the importance of accessibility and captioning in reducing inequalities was also acknowledged. The work done by Rochelle and her team in captioning was appreciated. Accessibility measures play a critical role in ensuring equal access to information and services for all individuals, regardless of abilities.

The Dutch Internet Standards Forum highlighted the need for wider use of testing and procurement methodologies to ensure the proliferation and adoption of internet standards. Olaf Kolkman pointed out the effectiveness of procurement methodologies and tools like internet.nl. He emphasized the practical impact of such initiatives, both in terms of financial considerations and wider deployment. It is imperative that regions and countries beyond the Dutch Internet Standards Forum begin utilizing similar tools to increase their usage and effectiveness.

In conclusion, the IGF meeting emphasized the importance of internet security, the challenges in adopting open internet standards, the role of public-private partnerships, the need for effective tools, and the significance of knowledge sharing and accessibility. It underscored the collective responsibility to protect infrastructure for the common good and to ensure that the internet functions in a safe, secure, and accessible manner for all. The discussions and insights gained from the meeting contribute to advancing the adoption and implementation of modern internet standards globally.

Gerben Klein Baltink

The adoption of modern internet standards is essential for ensuring safety, security, and efficient connectivity in today’s interconnected world. However, the process of accepting and implementing these standards can be slow and challenging. It requires the cooperation and agreement of both IT technicians and board members within an organization.

The Platform Internet Standards and internet.nl play a vital role in making modern internet standards more accessible. Internet.nl, for example, has experienced significant growth, with over one million tests conducted in 2023. It provides a platform that allows users to determine whether their website, email, or local connection is functioning correctly with modern standards. This enables organizations to identify and address any issues that may arise during the implementation process, facilitating the correct adoption of standards.

International cooperation and sharing of resources and strategies are crucial for the global success of modern internet standards. Several countries, such as Brazil, Denmark, and Singapore, have established similar initiatives and platforms to promote the adoption of these standards. The Platform Internet Standards is open to sharing its learnings and experiences with other countries and organizations interested in establishing similar initiatives. This collaborative approach promotes knowledge exchange and fosters a more unified and effective implementation of internet standards worldwide.

The Dutch Internet Standards Forum plays a significant role in implementing new internet standards. The process of adding new standards to internet.nl is based on a consensual agreement within the forum. This ensures that all stakeholders have a say in determining which standards should be included and how they should be implemented.

When integrating new standards, the team at internet.nl investigates existing open-source tests that comply with the desired standard. If suitable tests are not available or do not integrate well with the current test environment, they consider creating their own code. This flexible approach allows for the seamless integration of new standards, ensuring that the testing process aligns with the specific requirements of each organization.

In cases where certain standards, such as accessibility standards, do not integrate well with the current test environment, proactive promotion is recommended. Instead of disregarding or delaying the adoption of these standards, they should be promoted as future inclusions. This approach encourages continuous improvement and ensures that all aspects of internet standards are addressed in due course.

In conclusion, the adoption of modern internet standards is crucial for ensuring safety, security, and efficient connectivity. The Platform Internet Standards and internet.nl play a vital role in making these standards more accessible through testing tools and solutions. International cooperation and the sharing of resources are essential for global success. The Dutch Internet Standards Forum facilitates the implementation of new standards, and the integration process involves investigating existing tests or creating new code. Proactive promotion of standards that cannot be immediately integrated ensures a comprehensive approach to internet standards.

Flavio Kenji Yana

NIC-BR is a non-profit civil entity in Brazil that is responsible for the administrative and operational functions related to the .br domain. Their main focus is on improving the internet infrastructure in Brazil, and their projects and actions aim to benefit various sectors of Brazilian society. One significant project is the Test Padrões (Test Standards) project, which utilizes open source code provided by Dutch implementation. This project promotes the best security practices for websites, email services, and user connections to the internet. It was implemented in December 2021, and its effectiveness can be assessed on top.nic.br. By adopting these security standards, NIC-BR aims to enhance internet security in Brazil.

The Test Padrões project is part of Brazil’s Safer Internet program, which collaborates with ISPs (Internet Service Providers) and internet service providers, including operators. NIC-BR defines Key Performance Indicators (KPIs) to monitor the effectiveness of their actions. By working with ISPs and service providers, NIC-BR ensures widespread adoption of these security recommendations, creating a safer internet environment.

NIC-BR is actively involved in the Manners initiative, which encourages good online behavior. Brazil has the largest number of participants in this initiative, and there has been a significant annual increase in participation. This demonstrates Brazil’s commitment to creating a positive online environment and fostering partnerships for the Sustainable Development Goals (SDGs).

Brazil has a robust internet landscape with over 10,000 ISPs, including small and medium-sized operators nationwide. These ISPs account for approximately 50% of the internet traffic in Brazil. Many ISPs and Internet Service Provider Associations in Brazil actively support NIC-BR’s programs and initiatives, emphasizing their dedication to improving the internet ecosystem.

In summary, NIC-BR plays a crucial role in Brazil’s internet governance and infrastructure. Their projects, such as Test Padrões, and collaborations with ISPs contribute to a safer internet environment. Brazil’s active participation in initiatives like Manners showcases their commitment to responsible online behavior and partnerships for sustainable development. With the support of ISPs and service providers, NIC-BR is working towards enhancing internet security and improving the overall internet experience for users in Brazil.

Session transcript

Olaf Kolkman:
Okay, dear friends, last session at least for me and I think also for most of you, here we are in a meeting of the IS3C or the Internet Standards Security and Safety Coalition, which is actually the name of one of the dynamic coalitions here at the IEGF. The topic of this workshop is, the title of this workshop is Procuring Modern Security Standards by Governance and Industry and that’s part of the interest of this dynamic coalition. In general, when you look at security being deployed in organizations, then there is always an informed self-interest to protect yourself. The problem with securing the internet is that that is security for the common good and usually you’re securing something within your infrastructure to protect yourself partly, but also others. So there are all kinds of economic incentive problems that make the introduction of internet security standards and common practices might be difficult. And this dynamic coalition sets out to both study and stimulate the deployment of those modern internet standards. I’m looking at Voud, seeing if I’m summarizing this well. And we’re here to discuss a number of the work items that the coalition has been working on. Can I have the next slide? Ah, can I have the next slide? Yes. So we’re here with a bunch of speakers and panel members. My name is Olaf Kollekman, I’m from the Internet Society. We have Satish Babu. We have Flavio Kenji-Janai. Liz Orembo will join us later. Wouter Natris is here at the end of the table. Satish and Flavio are also at this table, of course. Gerben Klein-Bolting is online, if everything is well. Annemieke is to my left, to the right for the watchers. And Gilberta Zorrella is in Brazil and online. The layout of the session, you can skip this slide. Everybody knows by now that I’m that person. I’m giving the introduction at this moment. Then Gerben Klein-Bolting and Annemieke Toersen will talk a little bit about the role of open standards, particularly in procurement experience in the Netherlands, a nice presentation. Then, oh wait, wait, wait, then Wouter Natris will talk a little bit with Liz, who will be there. Then we have an opportunity for questions from the audience, both online and here in the room. Next slide. Satish Babu will then present some perspectives. At that time, we’re close to 2.30 already. And then we’ll have a panel discussion. Oh no, we will have Gilberta Zorrella and Flavio giving some perspective from Brazil. And after that, we have only a couple of minutes for a panel discussion and further questions. If everybody is still awake and not fallen asleep from sleep. a long, long week. So let’s go. Without further ado, the session on the platform internet standards in the Netherlands.

Alisa Heaver:
But before we go there, Alyssa Iver from the Dutch government, Ministry of Economic Affairs, is here. And she would like to say a couple of words. Camera swing to the microphone about this initiative. Yes, so my name is Alyssa Iver. I’m from the Dutch government, from the Ministry of Economic Affairs. And the Dutch government has been fully supportive of this platform internet standards and of the forum standardization where Anamika is from. And these two standard public-private partnerships have been really crucial in the Netherlands to, at least for the Dutch government, to further adopt standards that are deemed of importance. And I think it’s good that we’re having this session here. And I would also really like to encourage other governments to work together with experts in their countries on internet standards and on other types of standards to see which standards should be adopted by government and used for procurement. You’ll hear a lot more about that. And yeah, I really think that we should, well, I’m really pleased that we have this good relationship in the Netherlands. And I hope to see this spread across the world. So have a good session here. I guess that’s back to me.

Olaf Kolkman:
Yes, without further ado, I think we are going to listen to Gerben. So if the Zoom room can be opened. so that Gerben can speak, that would be great. Gerben, are you with us? I am with you, but can you hear me? And now we can hear you. Hello, Gerben.

Gerben Klein Baltink:
Good morning. Well, as mentioned by Olaf, talking about standards is not relevant just for the individual user of the internet, but for the common good. And it has been some, I think, 10 years ago that amongst other people, Olaf and I met at a meeting at the Ministry of Economic Affairs in the Netherlands, where we sat together with organizations across the board from, let’s say, the Internet Society in the Netherlands, as well as Dutch government. And all of us were involved in some way in trying to bring open modern standards forward. But we all realized that this was not an easy thing to do. The adoption process is sometimes very slow. It can take many years before the actual take-up of a new standard is realized. And we, discussing this topic, we realized that we could do something, perhaps, in close cooperation in a public-private initiative that then was called the Platform Internet Standards. Our first meeting was around nine years ago of this new body, of this new platform. And we soon realized that we really had to stick together, government, public organizations, private organizations, to make this work. And one of the things that we soon realized is that if we would like to make modern internet standards more acceptable for everybody, it would help if there would be some kind of test tool to make sure that everybody could see if their own website and email or local connection could actually use these modern standards and if they use them, whether these standards are set up in the right way. And of course, this is not something that many individuals will do themselves. So we initially focused at organizations hoping to attract both the technical people in such an organization, as well as the board members, because it’s not something that can be done by IT technicians alone, it has to be accepted by the board of an organization as well. This test tool, and some of you may know it or even use it, it can be found at the website internet.nl. And there we dive into many of these modern open standards, but we do not only explain the standard and test the standard, we also point out how you can go, and this is the procurement part, to your supplier if something is not set up correctly or if a standard is simply not used. So one of the things that we offer is insight in does your website, does your email, does your local connection function correctly with these modern standards? And if not, what would be the kind of solution that you can apply? At this website, you will also find the hall of fame of those websites that are already 100% up to speed with these modern standards, but also a hall of fame of hosting organizations that can help you if you want to have their support. to have your own website and email set up in a correct way. And we have seen that the use of internet.nl by many organizations and many individuals is growing and growing. I think we will pass over one million tests this year in the year 2023 itself. And we come from, let’s say, 650,000 tests last year. And we do also see tests in a more technical environment. Our API and our dashboard, where you can run multiple domains, multiple email servers at once, and see if these are all set up correctly. So these modern standards, we think, will benefit everybody because your safety and security and connectivity online will be enhanced greatly. So what we try to achieve is that as many people have, and organizations have, these modern standards so that we can all benefit from an internet that is functioning correctly. And the good news is, as Alisa mentioned in the beginning, it would be great if other countries would have the same idea about these modern open standards and applying them. And we are more than happy to help other organizations, other countries, to set up something similar. And some countries already have, like Brazil, like Denmark, like Singapore. So we see initiatives around the globe in the adoption of these standards and tooling. And we are open to learn from other experiments as well. And you can’t do without explanation. And the explanation can be found at the website itself, but also in the help and the help team that we have to provide. organizations with support. And we have also made some tooling available and not only from our platform internet standards, but also from international and national organizations that have the same kind of idea. So for now, I would like to hand over to Annemiek and let her explain what the Dutch government does with the forum standardization.

Olaf Kolkman:
And you’re more than welcome to visit internet.nl and make use of our test tools. Thank you.

Annemiek Toersen:
Thank you very much, Germen, for your introduction. And thank you for attending our session, all of you in the here and abroad. And my name is Annemiek Toersen from the Netherlands Standardization. And I like to tell more about how Holland and the Netherlands do something about adoption of open standards. Why, actually, open standards? Sorry. I am from the forum, Netherlands Standardization. And the standardization is a think tank and aims for more interoperability of the Dutch government. Open standards are key to this goal. And therefore, the standards from the forum actively promotes and advises the Dutch government about the usage of open standards. So the forum has about 25 members with various backgrounds, from government, business and science. And the main topic of the forum is the organization of the so-called comply or explain list of open standards. And this list should be applied by the complete public sector organizations, central as well as the central. So why open standards? All open standards we promote regards information exchange between governments and citizens and also between governments. themselves. So with open, we mean that the specifications of the standards is publicly available and that interested parties can participate in the standardization process. So there should be no single party that controls the standard. So open standards are more important because of the interoperability as mentioned here and the security which influences trust, of course, accessibility as government is obliged to inform the whole society, of the society as a whole, and vendor neutrality. When it comes to internet standards, the Dutch government has a threefold strategy shown here in the picture. I will go briefly through it. First, the standardization form can mandate specific open standards. We can do so by including standards on this list, the so-called comply or explain list. This is done after careful research in which we also consult technical experts. Standards on this list should be required when governments are investing in new IT systems or services. As we survey on some bigger IT organizations within the Dutch government, we have seen quite some progress using open standards. However, it also became clear that some organizations hadn’t moved yet. So therefore, in addition to the comply or explain list, standardization form can also make agreements, agreements with ultimate implementation dates. That might be handy because we have already done so for several modern internet standards like, you might know, HTTPS and DNSSEC. We have initial plans to make such an agreement for RPEG-EI as well. Sorry, I go back, because I wasn’t finished yet on that number two. I just finished the number one, the mandatory. If we go to corporations, we work together. Let me show a little bit more. We mandate also, apart from number two, we abide specific open standards law. For instance, the open standard HTTPS is now, since July the 1st in Holland, in the Netherlands, obliged by the law, the WDO, the digital government law. If we go to the second block on the left side, the corporation, we invest in community building. So we try to bridge the gap between technical experts and government officials. So therefore we are already happy with the internet standard platform Gerwin just mentioned, and are actively participating in this platform. This corporation enables us to be more effectively helpful to governments with their technical questions, and also with their questions regarding how to request the modern internet standards from their vendors. And the third block on your right side, we monitor the adoption of standards. So how do we do that? We review tenders and procurement documents, and for modern internet standards we happily use, of course, the internet.nl, Gerwin mentioned already, to frequently measure over about 2,500 government domains. A small note I can mention here is that since internet.nl now also has a test for RPKI, we will perform a large scale measurement for RPKI. The results of this measurement will be used in the decision process to set on ultimate implementation date for RPKI. All right, we go indeed to the next slide. In order to benefit the use of open standards, it’s very important to have a certain critical mass because if only one or two organizations use the standards, the public society has no advantage at all actually. So we need more and more participants using open standards and by creating more transparency, we create also more openness. We refer to an analysis of the Bureau of Economic Policy here in the note under these two downwards in the sheet. You can have a link if you like from us. Furthermore, I go to the mandatory, number one, specifically. As I told you, we have a complier explained list and on that list we have about 40 open standards. These standards are evaluated through four criteria, openness, added value, market support and proportionality, therefore the critical mass as mentioned before. The standards should be actually proven in practice, that’s very important. Open standards vary in different categories like, well, of course, the internet and security standards, document standards and web standards, but also, for instance, for administration like e-invoicing, but there are many more. And when the government invests, they should request for those relevant standards. Government should use these standards. In case they don’t use it, then they should report it and with a specific reason. For instance… If it costs extremely much money, then they can report it in their annual financial report why they didn’t use the open standards. Okay. We go to the next slide, please. I already mentioned 40 open standards of which about 15 are related to the Internet security. These standards prevent, for instance, from spoofing, eavesdropping, and, well, you might know better already, but those are some of those Internet standards. RPKI we already mentioned, but, well, especially DNSSEC and IPv4 6. In addition, security.txt is just a new one on our list. It’s very handy. Next sheet, please. We go, as you recognize it, to number two, the cooperation. So to get further in promoting the use of these open standards, we don’t only mandate, but also, indeed, cooperate, as I mentioned before. We do that in a couple of ways, nationally, internationally. Nationally, we already mentioned platform Internet standards, but also with the Secure Mail Coalition. Last week, my colleagues were together with a lot of European countries talking about international possibilities, and we reuse Internet.nl codes as much as possible, and Denmark, Australia, Brazil already started with it, but we invite you, as well, if you are interested, please take contact with us, because we can help you. The code is in English available, and, well, we can assist whenever you want. In order to create that critical mass again, because more people, then it works more efficient, and we have more knowledge gathering together, and get it better every day. Besides that, we contact vendors and hosters. So think about Cisco, Microsoft, of course, Open Exchange, Google, Akamai, well, we can mention much more. And as an example, Microsoft, we contacted them in order to implement Dane, support Dane security standards. And this inspired Denmark as well to write a letter. And the results are with success, because coming spring 2022, 2024, they will fully support the Dane security standards. So that is very, we look forward to see that next year. Microsoft will work together. Finally the monitoring where I was talking about, we evaluate the tendencies I mentioned on the relevant open standards and we research whether those open standards are included. So apart from that, we take also contact with governments in order to check whether they requested open standards and are included in the offers of suppliers. If they didn’t, then we call them and get in touch and ask why, because some of them don’t explain unfortunately in the reports. We also would like to know why they didn’t ask it. And a lot of procurement departments don’t even know how to start with it. So we support them with the text, special text for tenders. And we support them with a decision tree, which makes it handy for people who are not so technically, don’t have a technical background, but a procurement background, can support them to ask for those specific standards. Unfortunately, we conclude that these tenders still not fully complete with open standards. That’s a pity. And we report this once a year to the cabinet in the Netherlands. The internet.nl mentioned already a couple of times, you see also this nice t-shirt. If you score as a Dutch organization 100%, then you have a very special t-shirt, apart from the Hall of Fame, of course, as Gerwin mentioned. The actual usage of the open standards is measured twice a year. So twice a year we offer this also to the cabinet. And the tooling, we can do that en masse, but also if some organizations like to have their own measurement, that’s also possible. So please contact us. And we conclude that there is quite some growth in using the open standards due to the cooperation. So we mentioned already the cooperation with Microsoft, but also other vendors. And that might, yeah, well, that have results. That’s good to hear. So it works. That’s what it says. Good to know for you is that we sometimes dig deeper. So for instance, vendors who lag behind, we contact. And if there is room, we advise about the standards and so to use, so the use improves. And the last final, well, actually it says already, if you don’t ask it, you don’t get it. So that’s for sure. So there are some lessons learned. Please make sure whenever your government tender, ask for open standards. And check it with the tool, the toolinginternet.nl. Just like Denmark, Australia and Brazil did, who did reuse the code. So I invite you, if you have questions about that, but also hesitate, like, is it something for our country or our government, please feel free to question.

Olaf Kolkman:
Thank you very much. I hand it over to Ola. Thank you. And I just typed in my personal domain, xalx.nl. in internet.nl, and yes, 100%, that t-shirt is mine. Now, I also, just as a remark, I also have to smile a little bit when you talk about modern internet standards, because some of the standards that you refer to as modern are indeed a quarter of the age of the internet itself. However, the security.txt standard has been published as RFC 9116 in April 2022. So that is a really interesting, fresh standard. And just to give you a little bit of a feeling why that standard is so important, the security.txt standard is very simple. It says, publish contact information of the person who is responsible for the security of your website in a specific location of your website, so that somebody who finds a bug, a vulnerability, in your website knows where to find that contact information. It’s a very simple standard about, if you want to know something, look there. And by doing so, you help people that do security research being able to contact the people responsible for the problems. And that makes a great difference in the security of the internet. Again, this is not about your own infrastructure, although this one helps, but it’s also about collaborating in the greater good. And I think that security.txt is an easy, explainable example of this. A quick logistical question, Wout. Will you take your session now, or shall we first move on to a? You’ll take over, okay. Then Wout, you have something to report. I have. Thank you, Olaf.

Wout de Natris:
My name is Wout Ten Atries and I am a consultant based in the Netherlands. And within the IGF community, I’m the coordinator of a dynamic coalition called Internet Standards, Security and Safety, as you can see on this slide. And our strap line is making the internet more secure and safer. And that’s, of course, something that everybody tells you and everybody says. But we actually came up with an action plan to do that. Next slide, please. Next slide, please. And we started at the virtual IGF of 2020 with a concept of a dynamic coalition. In 2021, we were able to present three working groups. And that is number one, two, and three you see on this list. And the first one is security by design on the Internet of Things. And that working group released its report this Tuesday here at the IGF. The second one is education and skills. And that already released its first report last year in Addis. And we’ll come to number three very soon. And number five as well. Number four is internal but also does analysis of our relevance compared to the global digital compact and the sustainable development goals. And that last report was also presented here at the IGF. Number six is data governance and privacy. That was supposed to be released, but that was done together with UNDESA. And they decided not to release so that we could not share that information here. Number seven is a skeleton that never came true. But I had a meeting today that may actually reveal that. revive it very soon. So that is encouraging news. Number eight is on DNSSEC and RPKI deployment, two standards that have been mentioned many times at this table already. But this is not about talking about the technique of deployment, we are going to try and produce a narrative that convinces people in decision-taking positions to actually procure, secure by design. And it may be that they are always asked from a technical point of view, but these people probably need political, or economical, or social, or security arguments to be convinced to invest or demand these levels of security. Number nine we announced is on emerging technologies, and also there we had several talks here at the IGF. These are quite encouraging that we will be able to start this global comparison on policies that are being developed on AI, quantum, and perhaps in the future metaverses. Number 10 you see is a dot, number 11 is a dot, anyone who has an idea that would fit this dynamic coalition can step up and contact me or Mark Revell, who is not here but who is our senior policy advisor, and share your idea and then perhaps we will see what we can do together. So let me proceed to number three and number five, that is what we are presenting on here today. Next slide please. So the working group number three is called Procurement and Supply Chain Management and the Business Case. The person who should be presenting here is Lisa Rambo, but apparently her session took a lot longer than planned, and hopefully she still comes in, and if not I will do the presentation completely, but I have done it before, it is not really an issue. This working group produced its first report here at the IGF, so we released it on Tuesday and what we did, next slide please, is a global comparison of procurement policies of governments. Next slide please. That what this group did was try and see how many procurement documents are available on the internet, but also to see if they are from the government. or from the private sector. What we found are only public documents. So we found 11. Oh, Mallory, you can take over right away. There’s a chair for you. So I’ve only had the first slides. You can sit and present if you like.

Olaf Kolkman:
Yeah. Okay.

Wout de Natris:
Yeah. It’s a great timing because I’m at the first slide. So I’m explaining what we were trying to achieve. So thank you. This is Mallory Nodal and Mallory actually did the whole planning and part of the research that she was responsible together with Lisa Rambo for the report. So Mallory, great to have you here and please take over from me. Yeah.

Olaf Kolkman:
How much time do I have? I don’t want to go on and on. About 10? Okay. Good. Right.

Mallory Knodel:
Sorry to interrupt this whole flow, but I was at a different session and it just ended. So I’m glad to be here. I’m glad the timing’s worked out. So yes, this is then the first slide where we’re really explaining what the goal of this work has been defined as. When we look at the procurement and supply chain management in the business case, that of course is in addition to other tactics where we can further the security standards throughout the internet. But at this very particular point, we also wanted to consider what is then the internet governance’s role in this work. How could the IGF from where it sits and all the stakeholders that participate in it benefit from this sort of research and perspective and guidance when talking at a high level about norm setting around the recommendations for procurement and for supply chain management. So we will go to the next slide, please. Do I have to do that myself? Okay. Great. Great. wanted to then in the plan you know figure out where we’re headed and how we’re actually going to get there and it primarily to me seems as to be a research project assuming that there are in fact many procurement guidances out there already and the question really is do they include and consider security standards and if our guy if we are creating new guidance at the more global level we want it to of course be impactful and to be taken up so part of the research of figuring out what already exists in this space is an exercise in finding out who our main stakeholders would be and ensuring that the work product that comes out of it is any good so that’s what this slide really tells you the the text is of course too small for you to read here but we identify the outcome as meeting global internet security standards is a is a ubiquitous baseline requirement in any public or private sector procurement and supply chain management policy now the different objectives speak to some of the different strategies I’ve just mentioned we want to fully scope and map the variety of procurement policies that already exist to determine what are the what are the current challenges and opportunities for people setting those policies the second objective is to make sure that we can distill that into very actionable guidance for anyone who is writing these policies either or refining them for that matter or even implementing them and then the last thing is of course we want to create a group and a community a dynamic coalition if you will around this work so that it continues and it’s strengthened by iteration by continued research so those are the three different objectives there are different activities under each one that I’m not going to go ahead and elaborate but just suffice to say we’re really just in the first bucket we’re really only looking at this very early stage at the research itself and the scope of what we’re actually up to so that’s what we’ve been able to accomplish with this first research paper the next subsequent where we’re distilling it into real guidance where we’re building a community of practice around this that comes in the years to come so next slide please so yes so this is what our survey achieved. We really just had to, of course, create a research question, create sub-questions, actually go out and find source material to question, to, you know, be curious about. So we were asking what has been done by others on procurement and supply chain management guidance? What is already out there? There’s a really uneven spread. You know, we sort of assumed at some point that we would hit on a goldmine, maybe like a regional document had been created and then all of the countries in that region had followed the document, but that never really actually happened. In fact, it’s really patchy. You do have some European countries who have done something, but then you have, and you have some like Latin American countries, but, you know, it’s not even and it’s actually not clear where this sort of norm-setting could happen, which indicates that there’s a gap and that this is something we can actually do. So the next slide, please. I’m not going to go through the terminology, but for the purposes of the paper, we do try to define what these concepts mean. Next slide, please. So the methods was really quite straightforward. It was desk research. We didn’t get to a stage where we would do actually interviews with people. I feel like that might be next phases where we’re actually looking at, you know, what kind of guidance would be helpful and actionable. You actually talk to people who’ve done this before and try to understand from them what they’ve done in a qualitative kind of way, but this is very just let’s find all the documents we can that seem to fit the brief and read them and break them down. So we created sub questions when we’re asking. We were curious about only procurement that talks about cybersecurity, not all procurement. We don’t care where people’s pencils come from. We then distilled it into are they talking about, well obviously they had to be published, that was the second one, and then we were looking for clues that security standards would actually be present in those documents. Next slide, please. Yep, I think this is really straightforward. I just want to say that we did take care and making sure that we had representative samples we were hoping to have a Spread on language, but that was a challenge because the main researchers were English-speaking We wanted to make sure that we had not just global North countries, but also global South countries. We were looking of course for Places where there was synergy between different procurement policies and then obviously where there are also gaps Next slide, please We in order to present the findings we did actually track the findings We were we were looking as through looking through all of these We struggled to figure out at first how to present all the different findings because it was such a patchwork Sampling and not there was not the synergy that we expected So we went ahead and adopted an existing framework that comes from NIST That at least takes cyber security functions and breaks them out So this just allowed us to be a little bit more incisive as we were Distilling some of the advice that we found in the different documents So I’m not going to go through this but in the report it helps It helps orient the reader a bit to this NIST framework so that you can see why we rationalized Presenting the findings in the way that we did next slide, please There the conclusions I think are the most important part so I won’t rush through them so much of it I’ll let you read them. I will Just point out a few so Actually, I’ll just point out two I’m gonna focus on the Netherlands for both of them So, you know and we can all be proud of how well the Netherlands is done in this area It’s not it’s something that we knew to expect going into it, but specifically the two that we thought were worth mentioning was that one of the very few procurement policies that even mentions standards at all was The coming out of the Dutch ministry and I’m not going to be able to pronounce to pronounce the name of the? Postul of Leghuislijst and we just had a presentation about both of these. Wonderful, so you all know already how terrific they were but they turned up in our research as examples of things that we would like to see others potentially will make it into our guidance to follow. So next slide please. The other sort of real conclusions here that I think are worth mentioning is then where this research points to future work because it was our intention all along to not really do anything new with this research but actually point the way towards what could be done. And so I think we’ve done a good job of identifying and making a case for why we need to take future action and this is for others who really wanna take up this work and want to use the IGF as a platform to move some of this significant work forward. So the open standards, open cybersecurity standards should be points of reference and there’s an opportunity to make use of that. There are some international treaties that also could be translated into compliance mechanisms that could implicate procurement and supply chain. There are many places that do not even have standalone documents and it might be a good opportunity or that haven’t published them openly. I guess we could maybe give that caveat but that’s an opportunity to do that and to encourage it. So if you have procurement policies, please publish them. If you don’t yet have them, maybe you ought to consider it because it’s quite important. The fourth future work area is that we could also develop these frameworks. So this I would imagine would be in the larger work within the dynamic coalition where you connect this strategy, remembering it’s one of many, to the larger work that other people are doing within a framework. There is also a need to do proper documentation not in the sense of norm setting but just in the sense of learning, monitoring and evaluation of how this works when there is an incident. We’re folding that in and trying to learn from it in the context of procurement. And then the very last thing is just it would be really great in the IGF again to leverage the multi-stakeholderism of this and to encourage more coordination. We often feel like this might be a conflict of interest to have industry and governments, especially. when those industry are going after, you know, those contracts, those procurement contracts. But in fact, that ability to collaborate and work more closely, I think, could have good effects. So that should be the last slide. But maybe we’ll go one more. Let’s see. Yeah. So of course, you can contact us. This is all information that’s also in the report itself, so you don’t need to worry about this slide. And I think that’s it then. So thanks. I hope that was on time. I didn’t. Thank you, Marilyn. That was perfect. Wout? No, I’m going on with the next slide. Ah, good. Then it’s not perfect, but we’ll manage.

Wout de Natris:
It’s definitely perfect what Malorie said, as you could, she voices it much, much better than I ever could. So thank you, Malorie, for joining us. But I think, and this is not on the slide, but from the other research that we’ve done, for example, on IoT security, what we see is the same what comes out here, is that this open internet standards that we’re talking about are almost not recognized by government. They’re not in policy papers, let alone in legislation, which we’re not advocating here. But the fact that governments don’t recognize the existence of exactly that what makes the internet work is worrying. Because does it mean that they don’t know it exists? Do they don’t understand what the implications are? If you don’t protect that inner core of the internet. So that is a question that comes up in all our research. As you can see that we went through procurement study and global comparison study with the recommendations and the conclusions that you just saw. We also have a working group that is called Prioritizing and Listing Existing Security-Related Internet Standards and ICT Best Practices. What this working group has done and also just like the procurement, thanks to the RIPE Community Fund that graciously funded all this work, is that if governments are to start procuring, there are probably 10,000 standards that need to be procured at some point in time and it will probably be very overwhelming to explain to somebody who doesn’t even know the first one exists. So we got together a team of experts and we asked them to list the most urgent existing open security standards out there. And it won’t be a surprise that we asked the project manager of forum standardization to step in to help. But with people from India, from Latin America, from Singapore and a few other countries, they got together and started talking. And through the past months, they came up with a list which has been on consultation since last Tuesday. But what we try to do is to provide decision takers and procurement officers involved in ICT procurement with a list containing these most urgent internet standards so that they can actually have a tool to start working with and start understanding why this is so important. And then comes the working group I mentioned on the narrative that is going to be another little component of this whole thing the IS3C is trying to produce. Next slide please. So and well as I said that there is a consultation going on since the 10th and you are happy to join it. The link can be provided at any moment. It closes on Sunday the 5th of November. But what is it exactly that we are consulting? Next slide please. So what did our advisory panel do? First they started to grasp what the meaning is. After that they decided it needs scoping. And that scoping came down to four parts and you can see that three of the four are the same as was presented just now by Annemieke. So the first one the standards have to be interoperable. So that means that you do not only protect yourself but you also protect somebody else but somebody else also has to protect you. So it’s about two sides that need protection to have an effect. The second one is they are all security related. So that leaves out a lot of other sort of standards. All these standards have to have an open process. So available for everybody. You don’t have to pay for them. You can access them. You can start using them without having to become a member of an organization or without nothing. You can just find it on the internet and deploy them. And finally they have to be proven as a success. So others must have deployed them as well and successfully. And that’s number four is different than from the forum standardization. So you can see that this is an influence coming from other parties as well. When we decided on the scoping we came to categories. And after a lot, lot, lot of discussions we came to four categories. The first is data protection and privacy. The second network and infrastructure security. The third website and applications, web application security. And finally communications security. And what was debated the most, should there be a fifth one on cloud security? Because that is one of the biggest topics out there at this moment. But most of the experts said no, because these four categories go for the cloud, so we don’t need a separate cloud component. They all function within the cloud, so cloud should adhere to these four. So the next step was when we had that, we could start thinking about which standards are actually going to be in that list. And that proves a lot easier than the scoping and the categories, because that was done in a few days, and everybody more or less agreed except the ones that I want that one and that one. But we want about 40, so that’s manageable. And we have a concept list at this point in time. Next slide, please. So I’m not going to mention which are in there, but a lot have been mentioned by Anamika, because the most urgent one will be in her list, but there are differences. So people, other people from other places in the world stressed another standard. And that is what we’re going to do next. In this consultation document, we explain what we try to do. We motivate with arguments why we made the decisions that we make, but we want the wider community in the world to come in as well. Tell us if we scoped right, or give us very good arguments to change it. Make good arguments why we need another category, and suggest other standards. So if that happens, then in the second half of November, we come together as an expert team, and I am the coordinator. I’m not an expert. I’ll tell a historian doing a lot of work in this field, but not at hardcore techniques. It’s decision time. So we’re going to decide, the experts are going to decide whether a standard will be in there or not, or that the categories are changed based on the arguments made. So hopefully, by the half of December, we are able to present this tool, and have another tangible outcome of this IGF process. And that then needs to be proliferated, and that’s exactly what Mallory says. It is something that will go immediately under her report. that as much as possible and share it with governments and from there hopefully we’ll get the traction to improve procurement policies in the near future. So that will be a second project and with that I conclude. Thank you, Olaf.

Olaf Kolkman:
Perfect. Thank you. I promised that there would be some question time and I will allow for questions but I hope there are none because then we are exactly in the planned time scale again. I do have a question but I’ll leave it till after the session so that, yeah.

Audience:
I have a question about the testing website for, at least in the Netherlands, for websites which is really working very well. I just tested my own website that was 100% so I won a T-shirt and I think it would be a really good idea to, and that’s what you are doing here as well, to promote the use of these kind of testing websites internationally. There may also be some interesting advancements of the Dutch website. For instance, I’m thinking of a few more soft standards such as accessibility or maybe in the future testing the sustainability elements of your website. So I would love to make a strong case for including those kind of standards on that website as well. I think that the people responsible are in the room. It doesn’t work, yeah.

Annemiek Toersen:
Well we are not responsible for that but people can apply those standards. And accessibility, of course, is already on it, because it’s obliged in the Netherlands, WCHG. And I know that there is developing in the Netherlands, also at the Ministry of Internal Affairs and Infrastructure also, are combining internet.nl with other dashboards like accessibility. So people are thinking about it, but now it’s a thing to get all the ideas together, because everyone is inventing the wheel again. And that’s not good, of course. So it’s a good issue you point out, Valerie. So there must be more experts to get it, to combine that, and one dashboard. So we’re pushing that as well. Good suggestion. Thank you. Person at the mic was Valerie Frissen, just for the record. Good.

Olaf Kolkman:
The next part of the session, but I just noticed something. And we often forget that these sessions are made possible and accessible, actually, on that point of accessibility, by people doing real work. And I just saw a name in the Zoom room. Rochelle is doing the captioning. And I would like to thank Rochelle and her team for her hard work here, because it really makes a difference in these type of environments. Let’s see. Yeah, I think that’s appropriate. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Satish, you have perspectives from India. Thanks very much, Olaf.

Audience:
My name is Satish, and I’m from India. And I’m going to share two slides on what, or three slides on what we’re getting a good picture of. current status of compliance and it is pretty bad. So we are trying to kind of monitor this bunch every six months and we will then see the kind of transition what happens, you know, over the period of time. So in India, the whole digital thing is very, very important for us. India is betting heavily on digital technologies for its growth. It has made several strides in digital transformation. For example, the digital public infrastructure called IndiaStack and multiple digital public goods including the, when the COVID was there, we had this huge, you know, website for vaccines. Now, India is one of the most populous country in the world, if not the most populous. And the IndiaStack, so whatever application we build, it has got to be scalable to that citizen scale, which is 1 billion plus. So these are really large applications and these include, you know, financial, health, logistics, even the smallest villages, we see people using mobile phones to transact money, I mean, move money. Now, some of us are very nervous when you see this growth. It is good in a way, but when you look at the underlying, the core internet itself, we find that they’re not kind of complying to the latest standards. So this is actually worrying and that is why we kind of created, thought about this initiative. This is completely based on volunteer work and currently we’re trying to raise some seed funding for recreating internet.nl kind of a thing for India. Now, India, as was mentioned about accessibility, we have some additional requirements and one important thing is the multilingual part of it. And we also have something called the universal acceptance, which is a challenge. Now, this is when you create a domain name in a script other than Latin, say in Hindi, the Devanagari script, and you create an email out of it and then we find that that email does not work. It does not work in many websites. So the reason is that the programmers who created that software have not programmed for this kind of email IDs. So this is a huge problem. It doesn’t even work in the big tech companies like Google. So the ICANN is trying to now resolve that problem, but for India, when you want to test for these things, we have to test. test on these angles as well. So we’re trying to add to the code, of course, while making it open source itself, so that other people can also use it. So we’re trying to recreate the internet.nl with some more features that are specific to Indian requirements. And we plan to periodically run this test and disseminate the results to all stakeholders in the country. And we hope to be nudging or pushing them to adopt these standards. As was mentioned earlier, India has, like many other countries, India has no law that says you have to comply with all this. So we’re trying to work from bottom up through the community effort to kind of get these institutions to start implementing these standards. I’ll stop here.

Olaf Kolkman:
Thank you very much. That was very fast. Well, we have more discussion time at the end now. Oh, yeah, I need to use the microphone. That’s true. Yeah, thank you for that. That was very clear, very concise, and even comprehensive. Thank you. The Brazilian situation, Gilberto and Flavio, let’s see if Gilberto is audible. So Gilberto on Zoom, can you speak something? Yes. Perfect. We hear you. So I now hand over the microphone to you and to Flavio.

Flavio Kenji Yana:
OK, I’m sharing my presentation. OK, can you see my presentation? Yes, we can. OK, good. Thank you very much for the opportunity to participate in this event. I am Gilberto Zorrello. I am a product manager from Brazilian Network Information Center, NIC.br, that implements the decisions and projects by Brazilian Internet Steering Committee, CGIBI, which is responsible for. the coordination and integration of all internet service initiatives in the country. Presentation is about the top test padrões in Portuguese or test standards in English, based on the internet.nl tool in the security recommendations that must be adopted on networks on Brazil. NIC-BR is proposing these standards to Brazil. That’s the idea. That’s our agenda for this presentation. Of NIC-BR, the Brazilian Network Information Center. NIC-BR is a non-profit civil entity that since 2005 has been assigned with an administrative and operational functions related to the .br domain in Brazil. In addition to providing and maintaining the domain names registration activity, NIC-BR goes beyond similar entities in other countries. We invest in actions and projects that bring a series of benefits to improve the internet infrastructure in Brazil. With a revenue collected exclusively through the provision of the domain registration. Some of our efforts are focused on many sectors of Brazilian society, disseminating knowledge about best practice to be adopted in new networks and related areas. In some cases, we threaten relationships with private governmental and non-profit entities to encourage the adoption of best practice to be adopted in. and internet services. The top project here in Brazil. The project was developed by NICPR to disseminate the best secret press in Brazil for websites, email service and user connection to internet. It uses the open source code provided by Dutch implementation. The project is part of the program of Safer Internet in Brazil, which works with ISPs, internet service providers and including the operators to disseminate the best security practices that they should implement on their respective networks. Then, top BR in Brazil, we are using in this program, as a part of this program, okay? The operation was started in December of 2021 and can be assessed by top.nic.br in this domain. A little about the program, okay? The program is acts in support of internet technical community in reduction of denial of service attacks. A set PR team inside the NICPR says notification to the technical community in Brazil about these problems. Improvement of the network routing security according MANRS recommendations. MANRS is a internet society initiative. We, the program spreads, then execute best practices according top recommendations. Disseminate the best practices to configuring websites and email services according top recommendations. recommendations to encourage the implementation of IPv6 in final users and internet services using top as a testing tool. The plans of action performed by NIC-BR. We have several teams inside the NIC-BR. SEP-BR is a security, SEP-TRO, internet products, registry of domains, ix.br and systems. That these groups creates technical teaching materials and some good practices, raising awareness in the technical community by lectures, course and training, having direct interaction with network operators by bilateral meetings to explain how to implement the best practice and recommended in each situation. Defining KPIs to monitor the effectiveness of actions. That’s the ideas of the plan. Some results of the plan now. We have some statistics. This statistics shows the quantity of IP addresses notified with misconfigured service. Note the reduction of the, since the beginning of the program. And now the reduction is about 70% of this kind of problems. The other issue that we work in inside the program is implementation of manners in Brazil. Manners, this statistic shows the distribution by country of internet providers participating of manners initiative. that Brazil has the largest number of participants in as increasing every year. 20, 25% of the manners participants comes from Brazil. And now we have some statistics for the top implementation. We started at the end of 2021st and we have some, we are increasing the tests. This shows the number of connection tests performing, the percentage of recursive DNS server and users with IPv6 implemented, the percentage of DNS services validating the protocol DNSSEC. Now we have some statistics about the website tests, the number of unique domains tested, the number of percent and percentage of tests that passed by some tests and the number of sites that get tested 100%, the hall of fame in our case. It is similar statistic for email tests. Many associations, ISPs, Internet Service Provider Association support the program here in Brazil, including, of course, TOP and Academia too. Academia is an RNP and the other, the Connexus is Incumbent Operators Association and other association here are Association of Internet Service Providers. Brazil has more than 10,000 Internet Service Providers, small, and medium operators around the country. That’s a specific situation of Brazil, okay? We have, of course, incumbents responsible for about 50% of the internet traffic in Brazil and the rest of the traffic, these small and medium operators are responsible for the rest of the traffic in Brazil. Some remarks of the implementation. TOP was delivered in end of the 21st, greatly running version 1.4 of internet.nl. Today, we don’t have a securitization state yet and RPKI, okay? But the version 1.7 is implemented in test server. We are now validated the implementation. We intend to deliver the end of this year. The best practice recommended by the two are recommended from NIC.br to technical community in Brazil. Then the idea is this best practice NIC proposed for the technical community in Brazil together with best practices of manners and the best practice proposed by SERT.br. The two is being the same net together with the program in the country and the technical events for specific sectors, such government, academia, internet operators. The accounting area of Brazil’s region. legislature carried out many tests some months ago. They said that the government started using the tool to test their sites, but this is in the beginning. That’s the point here in Brazil. The top tool provides important indications of the implementation status of recommended best practice and provides a baseline for operators to implement them in their networks. That’s a main point of the talk. They created this baseline in order these operators under this line, they work to get this baseline. This is a very important tool for our country. Brazil has continental dimensions, and it’s a challenge to keep up with the evolution of the use of these standards here in Brazil. That’s my short presentation. We are ready for any questions if you have.

Olaf Kolkman:
Thank you very much. Flavio, were you adding something or just for the questions? No, no, yes. Yeah. OK. OK. Thank you for this, Gioberto. Very good to have you with us. We are exactly on the dot on time. It’s quarter to 3. Are there any questions? I’m looking around. I’m looking online. There was a question earlier whether these sessions are being recorded, and they are recorded and will be made available on the IGF website later. I do have a substantive question, though. I’m not quite sure who on the panel could answer that. Maybe somebody in the audience. Takes a little bit of introduction. In Europe, we have a regulation, it’s quite involved, regulation number 1025-2012. So this is a regulation from 2012 which allows the identification of technical specifications that are eligible for public procurement. There is a whole procurement law in Europe which I’m not a specialist on. But the idea was that specifications that were not made by formal standards organizations such as, you know, like ETSI, ISO, ITU, and national standards bodies would need to be whitelisted, identified in order to be used in European procurement and perhaps even in the member states. I do not know exactly. The standards from fora and consortia are not by default on those lists. And the fora and consortia that we’re talking about are IEEE, ITF, W3C, and all those type of things. When the forum was set up, we went through a quite extensive process to whitelist a number of standards. And DNSSEC is in there, DKIM is on there, IPv6 is on there. So there are a couple of them. But that standard, that process sort of halted. And so this is not to comment on that process but more on the question if you do procurement, do you run into the situation that the public authorities can only refer to standards made by formal standards bodies? That was a long-winded question but I think that that final question said it all. Yeah. Valtanatris, that the only thing I can share with you here is that when we started the dynamic coalition, the commission pointed us to a person in the commission who was involved in this process with the measure.

Wout de Natris:
And when I talked to them, basically it came down to we’re not doing very much anymore because it took more than one and a half year to even start talking about an open standard, let alone deciding that it was validated by this commission. And this is the last news I have from two years ago, so I don’t know what it is now, but they never came back online to me since. So maybe you know more, Alisa, but it was not an encouraging answer I got from these people. So that’s what I know. The question is, of course, how did the Netherlands come up with the comply and complain list? Did they compel, whatever, I’m tired, sorry. But explain this, that were they validated or just decided it just makes common sense to have this on, do you know? Thank you.

Annemiek Toersen:
I don’t know whether it’s on, that list. No, I don’t know. The European, because you said… The DNSSEC is on your list. Yeah, sorry. Yeah, the DNSSEC is on our list. Have they been whitelisted by the Dutch government first, or we just decided we have to have them on the list?

Wout de Natris:
Because in Europe, they are not validated in the European Commission. Well, those standards are supplied by a maintenance, other people.

Annemiek Toersen:
They offer the standard, like this is very important. So I’m not sure if it is IETF or who’s doing it, yeah, IETF, but a lot of organizations like NCSC says this is a very important standard, we adjust it, and if more organizations in Holland says that, then it’s proven experience that it is practiced. So that is one of the criteria in order to come to the comply or explain list. So, I think we have a research question here.

Mallory Knodel:
Looking at Mallory. Well, so, I mean, just to say, this doesn’t come up in our research because we weren’t looking for it. It could be maybe a separate question that could be done. I actually think the source material for this would be different as well because maybe you’re actually asking, in practice, how does this work? It could also be qualitatively done. I will just say, anecdotally, I know there are some US companies that when they’re considering going for a contract with a government in Europe, or tendering, or so on, they will then often initiate the standardization then. So, it may be just a consideration of workflow, right? If I’ve got a technology and I’d like procurement in the EU, then I need to demonstrate that the standards I’m using in this are either in existing bodies that have been listed, or that you can initiate the whitelisting at that point, or you’ve got technology that hasn’t yet been standardized at all, and you might as well start doing it in Etsy because that will be the quickest track. So, I know that the companies have that calculus in their heads about how to go after contracts. So, maybe that’s another answer to the question is it’s not always a predetermined, oh, I know that this standard is going to be important in the European market. It might come only when the market entrance actually happens.

Olaf Kolkman:
Are there other questions from the audience or from the panel? Oh, go ahead. Thanks, Olaf. Walter Nathus, is Gerwin still online? Yes, he is. Yes, hi, Gerwin. Who is it? How are you? I’m fine, here.

Wout de Natris:
I’ve got a question for you because the internet.nl, the standards that are there are often, something is added to it. What would be the next that you are thinking of and how do you come to the decision

Olaf Kolkman:
to add specifically that standard?

Gerben Klein Baltink:
So what is the next phase for internet.nl? Well, it is more or less the same as explained by Annemieke. Participants in the Dutch Internet Standards Forum can contribute by asking if the others agree that, for example, universal acceptance as one of the standards that we have considered should be added to our test environment. And then the process is simple. If everybody agrees that it is a good standard to dive into, the next step will be that we look into available tests already from the international community, open source. And if they are available, how well they would combine with internet.nl. Can we actually implement them in the test tool? And if not available, we look into the possibility to create our own code. Sometimes that works just as well as finding stuff already open source online. But sometimes you also have to conclude, for example, in relation to the accessibility standards, that they do not integrate too well in our current test environment. So then we decide to promote them. So have a news item featuring universal acceptance or the accessibility standards. And we will keep them more or less as spares for the future whenever we have the resources or the technology available to include them. That’s more or less the process.

Olaf Kolkman:
And as we learned from the other session, sometimes there was another session on internet.nl this week. Sometimes it’s just. impossible to measure something, like route validation. We were talking about routing security in that session. Looking around once more, going, going, gone. That ends this panel. I think what we learned here is that there are tools to increase the visibility of the standards that are needed to secure our global environment. Name and shame in the form of internet NL, more name than shame, granted. But also procurement methodologies, making sure that the initiative is felt where it’s felt most, namely in the wallet. And I think these are great initiatives. I think that the next thing that needs to happen is that more countries or environments or regions start using tools like this. So we have another deployment issue that we need to tackle. And with that, I leave that in the good hands of the Dynamic Coalition and would like to all thank you for being here. Have safe travels home. And have a good sleep. The consultation, yes. Yeah, yeah, yeah. The consultation, maybe that slide can be reprojected quickly. Let me just tell it. We have a website, www.is3, the number three, coalition.org. The reports that I mentioned can be found there. And the consultation is announced there. It has a link to a Google Doc where everything is included. And everybody is allowed to, has that link, is allowed to make remarks. And we close it on the 5th of November. Thank you for the opportunity again, Olaf. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.

Alisa Heaver

Speech speed

128 words per minute

Speech length

233 words

Speech time

110 secs

Annemiek Toersen

Speech speed

143 words per minute

Speech length

2164 words

Speech time

911 secs

Audience

Speech speed

186 words per minute

Speech length

775 words

Speech time

250 secs

Flavio Kenji Yana

Speech speed

120 words per minute

Speech length

1284 words

Speech time

641 secs

Gerben Klein Baltink

Speech speed

158 words per minute

Speech length

1160 words

Speech time

441 secs

Mallory Knodel

Speech speed

190 words per minute

Speech length

2274 words

Speech time

720 secs

Olaf Kolkman

Speech speed

133 words per minute

Speech length

2029 words

Speech time

913 secs

Wout de Natris

Speech speed

163 words per minute

Speech length

2297 words

Speech time

848 secs

Multistakeholder platform regulation and the Global South | IGF 2023 Town Hall #170

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis highlights a negative sentiment towards cooperation between authorities, suggesting that it may result in longer response times and decisions that lean towards soft laws rather than hard laws. This indicates that when multiple authorities are involved in decision-making processes, the cooperation required may slow down the overall process, potentially delaying the timely resolution of issues. Additionally, the preference for soft laws over hard laws implies a level of flexibility and compromise that may not always be in the best interest of regulation and enforcement.

On the other hand, the analysis identifies a positive sentiment towards the diverse approaches taken by different countries in addressing global issues. This variation in responses can be attributed to the regional context in which these issues arise. It showcases the importance of considering local factors and tailoring solutions accordingly. By acknowledging and respecting the different approaches taken by various countries, a more comprehensive and effective response to global issues can be achieved.

Despite the challenges associated with cooperation, there is support for the need to collaborate and work together. It is emphasized that finding ways to harmonise responses and regulations is crucial. This highlights the importance of striking a balance between allowing for diverse approaches rooted in regional context while also ensuring alignment and consistency in addressing global challenges. By doing so, synergies can be formed, facilitating more efficient and effective decision-making processes.

It is worth noting that the analysis does not provide specific evidence or supporting facts for the arguments presented. However, it sheds light on the different perspectives and sentiments related to cooperation between authorities, the impact of regional context on addressing global issues, and the necessity of finding mechanisms to harmonise responses and regulations.

In conclusion, the analysis indicates a negative sentiment towards cooperation between authorities, pointing out the potential drawbacks such as slower response times and favoring soft laws. However, it also recognises the positive aspect of diverse approaches taken by different countries in addressing global issues. The need for cooperation is acknowledged, with an emphasis on finding ways to strike a balance between regional context and harmonising responses and regulations. This expanded analysis provides insight into the complexities and challenges faced in achieving effective international cooperation while highlighting the importance of adaptation and collaboration.

Joanne Cunhe

The analysis draws attention to several challenges related to global platform governance and stakeholder participation, particularly focusing on India and the Global South. One major obstacle is the lack of capacity in terms of both financial resources and personnel. This can make it difficult for these regions to actively engage in global discussions and shape decision-making processes.

Another highlighted challenge is the absence of diverse voices in these global discussions. It is crucial to involve groups that are directly affected by or study platform harms in order to ensure comprehensive and inclusive governance. However, these voices are often underrepresented or not fully involved in decision-making processes, limiting their influence.

The sentiment surrounding these challenges is predominantly negative, reflecting the difficulties faced by India and the Global South in effectively participating in global platform governance. These challenges call for greater attention and support to address disparities and provide equal opportunities for participation.

Regarding stakeholder participation in decision-making processes, a significant challenge is achieving meaningful involvement. Different approaches to stakeholder participation in India’s rulemaking have been observed, but stakeholders are often limited in their involvement at the initial stage, especially when operating at the draft bill level. This limitation prevents stakeholders from having a substantial impact on decision-making processes, raising concerns about inclusivity and transparency.

Power dynamics between various stakeholders also play a crucial role in shaping the type of participation observed. The dynamics between civil society and the state, or between the state and platforms, vary greatly across jurisdictions. Understanding these power dynamics and tailoring approaches based on the context becomes essential to ensure fair and equitable participation.

Another important aspect highlighted is the need for tailored approaches to platform regulation in the global majority. Considering specific contexts in different regions is crucial for effectively addressing challenges and nuances associated with platform governance.

While greater collaboration among different stakeholders within civil society is seen as a necessity, there is existing fragmentation that hinders support at global forums. Efforts should be made to address this fragmentation and foster collaborative approaches that can lead to impactful decision-making and partnerships for sustainable development.

In conclusion, the analysis demonstrates the challenges faced by India and the Global South in terms of global platform governance and stakeholder participation. It emphasizes the need for capacity building, diverse voices, and inclusive decision-making processes. Additionally, power dynamics and tailored approaches to specific contexts are crucial factors to consider. Efforts to address these challenges and promote collaboration among stakeholders will be essential for effective governance and achieving sustainable development goals.

Online Moderator

The first question raises the issue of developing national regulations for digital platforms, considering their cross-sector impact. The term “platformization” refers to the widespread presence of platforms across various industries, requiring regulations that can cover a wide range of issues. The question emphasizes the complexity of this task, as it involves addressing the extensive regulatory agenda associated with platform regulation. It is crucial to develop effective national regulations that strike a balance between the benefits of digital platforms and concerns related to competition, privacy, data protection, and user rights.

The second question explores different governance models for regulating digital platforms. It examines the advantages and disadvantages of a centralized model, where the state plays a dominant role, compared to a polycentric model that involves both the state and civil society. The centralized model offers the advantage of clear hierarchical structure and potential efficiency in decision-making. However, it may also lead to concentration of power, reduced inclusivity, and a risk of regulatory capture. On the other hand, the polycentric model promotes multi-stakeholder involvement, diverse perspectives, and potentially reduces the risk of regulatory capture. However, reaching consensus and making decisions efficiently may be more challenging.

Both questions highlight the complexity and importance of addressing these issues comprehensively and inclusively. Considering the cross-sector impact of platforms and adopting governance models that balance state involvement and civil society participation are crucial in shaping effective regulations for digital platforms. Ongoing discussions, research, and collaboration among policymakers, industry leaders, civil society organizations, and other stakeholders are needed to develop regulatory frameworks that encourage innovation, protect user rights, promote fair competition, and ensure a sustainable and inclusive digital ecosystem.

Miriam Wimmer

Platform regulation encompasses a wide range of laws aimed at promoting competition and combating misinformation. The business models of digital platforms present unique challenges in protecting fundamental rights. Data protection authorities naturally play a role in discussions surrounding platform regulation, as digital platforms involve the large-scale processing of data. Furthermore, traditional data protection principles and rights touch upon issues concerning digital platforms.

Brazil supports the concept of multi-stakeholder participation in digital regulation and has implemented a model of multi-stakeholder internet governance. There is an expectation that regulatory bodies dealing with digital issues should have formal consultation mechanisms in place. This supports the idea that involving multiple stakeholders in the regulatory process can lead to more comprehensive and effective outcomes.

However, challenges exist in coordinating and cooperating between public bodies involved in platform regulation, particularly due to budgetary and resource constraints. The creation of new governmental bodies may be limited, which can hinder effective coordination and cooperation. These challenges highlight the need for efficient and effective cooperation between regulatory bodies involved in platform regulation.

Dealing with digital platforms requires the understanding and enforcement of multiple legislations, which necessitates the involvement of different regulators. The institutional setup to address the complexities of digital platforms is very cross-cutting and transversal, thus requiring the collaboration of various regulatory bodies. This highlights the need for a comprehensive and coordinated approach to platform regulation, involving various stakeholders and regulatory authorities.

A centralized model for platform regulation is deemed unfeasible due to the diverse fields that platforms touch upon, such as labor relations, misinformation, human rights protection, and competition aspects. The complexities of these issues make it impractical to discuss a centralized regulator for the entire digital ecosystem. Instead, a multifaceted approach involving different agencies and stakeholders is required to effectively regulate and address the challenges posed by digital platforms.

Cooperation between different agencies, as well as the involvement of various stakeholders, is necessary for successful platform regulation. However, cooperation does not arise spontaneously; it must be crafted into legislation to ensure its effectiveness and legitimacy. Additionally, public participation is crucial in ensuring the legitimacy and effectiveness of regulatory decisions. While cooperation and public participation may be time-consuming, they are integral to shaping regulations that address the diverse concerns surrounding digital platforms.

In conclusion, platform regulation involves addressing various legal challenges, promoting competition, and addressing concerns such as misinformation and fundamental rights. The business models and data processing involved in digital platforms necessitate the involvement of data protection authorities. Brazil supports multi-stakeholder participation in digital regulation. However, challenges in coordinating and cooperating between public bodies exist, and the decentralized nature of digital platforms requires multiple regulators. A centralized model is impractical, and cooperation, legislation crafting, and public participation are essential for effective and legitimate platform regulation.

Sunil Abraham

The analysis explores a range of topics, including regulation, open-source projects, emerging technologies, discrimination, 5G standards, AI fairness benchmarks, disability rights, compliance engineering, global compliance, and user empowerment. The discussions provide valuable insights into these subjects, highlighting important considerations and challenges.

One key point discussed is the three layers of the regulatory ecosystem: classical regulation, co-regulation, and self-regulation. The example of the Information Technology Act in India is mentioned, which demonstrates reflexive regulation and provides regulated entities with immunity from liability when complying with state-mandated or self-regulatory standards such as ISO 27001.

The analysis discusses META’s active involvement in open-source projects, AI models, and open datasets. META is shown to have over 1,200 open-source projects and has released 650 open-source AI models and 350 open datasets. This showcases their commitment to open collaboration and innovative solutions.

The need for legislation with multi-stakeholder engagement to regulate emerging technologies is another important argument presented. It is emphasized that good laws are necessary to ensure that regulations remain future-proof and effectively address potential harms caused by emerging technologies. The importance of bottoms-up knowledge building and norm setting in the legislative process is also highlighted.

The role of open-source tools in preventing discrimination is emphasized. META’s Massively Multilingual Speech tool, capable of identifying and processing thousands of languages, is mentioned as a means to ensure inclusivity. The release of the open data set Casual Conversations is also noted, enabling the benchmarking of software to prevent discrimination. This highlights the significance of utilising open-source solutions to promote fairness and reduce inequalities.

Regarding 5G standards, the analysis mentions that the Indian proposal for rural and remote connectivity was not included in the main 5G standard due to a lack of structured resources for participation. This underscores the need for structured resources to facilitate regular participation in relevant international platforms.

The potential consequences of adopting alternative indigenous standards for 5G are discussed. It is argued that such adoption could result in the loss of network effects in hardware manufacturing, highlighting the complexities involved in standardisation decisions.

The analysis emphasizes the importance of multiple benchmarks before implementing mandates for AI fairness. It is mentioned that multiple benchmarks are evolving in this area, encompassing both open and proprietary models. This underscores the need for a comprehensive understanding of the technology and its implications before implementing mandates.

The mandate of mature standards for protecting the rights of the marginalized is described as an important argument. Specifically, the need for state-mandated standards such as WCAG to ensure the protection of disabled individuals’ rights is highlighted.

The analysis discusses META’s regulation readiness approach, noting that policy and legal teams within META monitor enacted and proposed laws in different jurisdictions. The aim is to create compliance artifacts that can be applied globally, showcasing the company’s commitment to regulatory compliance.

The challenges posed by conflicting legal obligations in different jurisdictions are highlighted. Such conflicts can hinder global rollouts of certain user rights or features, illustrating the complexities of navigating legal obligations across multiple jurisdictions.

The discussion on new laws requiring corporations to have explicit contact points addresses the potential benefits and challenges associated with these laws. It is mentioned that while these laws can empower users, they can also present challenges in terms of personal criminal liabilities and additional complexities. The Indian IT law is referenced, which requires global corporations to have three individuals stationed in the office who are available to users and government stakeholders. This commitment entails personal criminal liability and presents additional complexity.

In conclusion, the analysis provides valuable insights into various aspects of regulation, open-source projects, emerging technologies, discrimination, 5G standards, AI fairness benchmarks, disability rights, compliance engineering, global compliance, and user empowerment. The discussions underscore the importance of multi-stakeholder engagement, the use of open-source tools to promote inclusivity and fairness, the complexities of standardisation decisions, the need for comprehensive understanding before implementing mandates, the mandate of mature standards for protecting the rights of marginalized individuals, and the challenges and benefits associated with laws requiring corporations to have contact points. Overall, the analysis highlights the complexity and importance of regulatory issues and the need for informed and collaborative approaches to address them.

Marielza Oliveira

The analysis explores various aspects of internet governance and capacity building. One key point highlighted is the significance of multistakeholder engagement in achieving consensus and shared goals. The multistakeholder approach, which involves involving various stakeholders, is deemed the most effective way to build consensus around common goals and values. Multistakeholder initiatives aim to meet expectations by being inclusive, diverse, collaborative, and legitimate.

However, it is acknowledged that the multistakeholder approach needs to adapt to the evolving nature of the internet. Different stakeholders have become dominant in internet governance, and it is argued that the approach should identify which stakeholders should be involved in addressing the diverse challenges faced by the internet today. This necessitates constant evaluation and adjustment to effectively address the complexities of internet governance.

The rise of big tech platforms is also a significant factor in the changing landscape of internet governance. The fast-paced ethos and immense power of these platforms are not always aligned with the pace and authority of other actors, particularly governments. This poses a challenge to the role of governments in internet governance. It is essential to address power imbalances between the private sector and governments to ensure fair and equitable governance of the internet.

Another crucial aspect is the need for capacity building among government and civil society actors. It is noted that many judicial actors have limited understanding of the limitations of technologies like artificial intelligence (AI). In response, UNESCO has initiated training programs for these actors on AI. Additionally, a competency framework for civil servants has been developed to enhance their knowledge and understanding of relevant issues. This capacity building is considered vital to bridge knowledge gaps and empower government and civil society actors to actively participate in internet governance.

Regarding addressing power imbalances in the digital space, the analysis discusses the potential of the Global Digital Compact and emerging internet regulations. The Global Digital Compact presents an opportunity to reimagine the approach to internet governance and establish a fair and inclusive framework. UNESCO’s Internet for Trust guidelines are highlighted as a contribution to this process. These efforts aim to create a more balanced digital space where power imbalances are addressed, and the interests of all stakeholders are considered.

In conclusion, the analysis underscores the importance of multistakeholder engagement, adaptability to the changing nature of the internet, addressing power imbalances, and capacity building for effective internet governance. The Global Digital Compact and emerging internet regulations offer avenues to address these challenges and create a more balanced and inclusive digital space.

Renata Ávila

The analysis explores the topic of multi-stakeholder governance and democratic deficits in the internet governance model. Renata Ávila highlights Brazil’s multi-stakeholder model as an exemplary approach to democratic governance of the internet. This model is considered essential for ensuring transparency and inclusivity in decision-making processes related to the internet.

However, it is noted that the effectiveness of multi-stakeholder governance depends on its structure. If not appropriately designed, these governance models can inadvertently validate the opinions of the most powerful actor at the table, potentially undermining the democratic nature of the process.

Another important argument presented is that companies should refrain from exploiting democratic deficits. The analysis suggests that companies should instead adopt a more transparent and open approach, actively sharing information and being proactive in their commitment to multi-stakeholder governance. This would help address concerns related to potential double standards and unfair practices that arise when legislation is lacking or insufficient.

Furthermore, the analysis highlights the need to address inequalities and exclusions within the multi-stakeholder model. Two specific areas that need attention are the rural-urban divide and gender divides. The analysis advocates for meaningful civil society participation and emphasizes the importance of internal processes within civil society to reach broader consensus.

The analysis also argues for a bottom-up approach and civil society’s active participation in the design and problem-solving processes. The NetMundial process in Brazil is cited as an example where civil society had a significant role in designing and triggering the problem-solving process, making it a successful model to follow.

It is also suggested that civil society should have access to mechanisms that enable them to activate processes when needed. This would allow civil society to effectively address concerns and ensure that their voices are heard in the decision-making processes.

Collaboration between different actors, including civil society, is seen as a valuable asset for effective policy-making and implementing changes. The analysis gives an example of how civil society facilitated the exchange of practices and cases between antitrust and consumer protection authorities through WhatsApp. By engaging various stakeholders, new insights can be gained, and solutions can be developed collectively.

Transparency is considered the best antidote for addressing concerns related to multi-stakeholder governance. Increased transparency can help build trust among stakeholders and promote accountability. It is noted that South-South cooperation plays a vital role in balancing power dynamics and improving multi-stakeholder models.

The analysis also emphasizes the importance of sharing good practices, learning from each other, and holding platforms accountable. By studying platforms and documenting both successful and unsuccessful attempts, improvements can be made, and repetition of errors can be avoided.

In conclusion, the analysis highlights the importance of democratic governance in the internet space through multi-stakeholder models. It emphasizes the need for transparency, inclusivity, and meaningful participation from civil society. The analysis also underscores the significance of addressing inequalities and promoting collaboration between various stakeholders. Through these efforts, a more balanced and effective multi-stakeholder governance model can be achieved, ensuring democratic decision-making processes in the internet governance landscape.

Moderator

The speakers in the discussion highlighted the importance of multistakeholderism in the context of internet governance and platform regulation. They emphasized that digital platforms are crucial tools for global communication, but their regulation can be challenging, especially for developing countries. It was noted that regulation models from Europe, which have been successful in their own context, may not be easily adaptable for countries in the Global South due to different states of institutional development.

The speakers also discussed the different approaches that countries take in governing and regulating digital platforms. They noted that diverse government agencies are often involved in the process, such as ENPD, Senacon, and CADI in Brazil. This demonstrates the complexity and multifaceted nature of platform regulation, which requires the involvement of various stakeholders and government departments.

The concept of multistakeholderism was seen as a valuable approach for regulating platforms and promoting internet governance. It was mentioned that multistakeholderism has played a role in strengthening civil society’s participation in platform regulation in Brazil through the engagement of various stakeholders, as evidenced by the participation of CGIBI and the consultation it conducted. The speakers argued that multistakeholderism allows for a broad range of actors to be considered in decision-making, helping to build consensus around shared goals and values.

The speakers acknowledged that implementing successful multistakeholder approaches is not always guaranteed. They pointed out challenges such as power asymmetries and the potential for participants to not be legitimately chosen. However, they also highlighted the potential improvements that could be made, including building awareness and reducing knowledge imbalances.

The discussion also touched upon thechallenges posed by dominant stakeholders, particularly big tech platforms. The speakers noted that the ethos of big tech does not always align with the pace of other actors, and their accountability can be challenged. This highlights the need for effective regulation and governance of these platforms to ensure fairness and protect fundamental rights.

The speakers stated that the adaptation of the multistakeholder approach is necessary to fit the rapidly transforming digital landscape. They emphasized the need to raise awareness around the benefits of multistakeholder approaches and reduce knowledge imbalances among actors involved in platform regulation.

In conclusion, the speakers agreed that inclusive and transparent governance of digital platforms, through the adoption of multistakeholder models, is essential. They recognized the challenges faced by developing countries in adapting existing regulation models and stressed the importance of sharing and adopting successful practices between countries. Additionally, they emphasized the need for cooperation, collaboration, and active involvement of civil society in decision-making processes. Overall, the discussion provided valuable insights into the complexities and dynamics of platform regulation and the importance of multistakeholderism in achieving effective governance.

Session transcript

Moderator:
Welcome, everyone, to our town hall session, Multistakeholder Platform Regulation and the Global South. My name is Henrique Faulhaber. I’m one of the board members of Brazilian Internet Steering Committee, CGIBR. I represent the private sector there. I will be the moderator of this session, and Juliano will make the online moderation. First, I would like to thank the IGF organization for this space and everyone that’s present here and online. Special thanks to our speakers. You have six speakers. One speaker is here, Ms. Joanne Cunha. She’s a program officer in the Center for Communication Governance and National Law, University of Delhi. And the others are in their respective countries. So you have people from Brazil, Brasilia, Paris, Nigeria. So you have very much those people that are in the late night, late, early morning together with us today. So the first speaker will be Marielle Oliveira. She’s Director of the UNESCO Communication Information Sector Division for Digital Inclusion Policy and Transformation. After her, you have Sunil Abraham. He’s Public Policy Director from Facebook India. And after him, Kadira Elusman. He’s a Senior Program Officer for Paradigm Institute. No, she, in fact, she’s Officer for Paradigm Institute. After you have Miriam Wimmer. She’s Director for the Brazilian Authority on Privacy and Data Security. After you have Joanne. And at the end, you have Renata Afla, who is CEO from Open Knowledge Foundation. We will give eight minutes for each speaker. And after that, we will open for questions for the audience here and also online. As you know, digital platforms have gained significant traction within the internet governance debates as they have become essential tools for global landscape public and private communications. However, a great part of the discussions resolves around the models followed by Europe that, as many point out, are then exported to global south countries. We mean global south countries as developing economies, OK? The name global south may sometimes lead to other questions. So you can understand global south countries as developing countries on this area. But we see that the strong influence of Europe on regulations all over the world, and so-called Brussels effect, affect how regulations are adapted to local regional contents. But countries in the developing economies are in different states, different states of institutional development considering government bureaucracy, civil society organizations, or regional international organizations. So governance arrangements that may work in the global north can fail in Latin America countries, for instance. Therefore, it’s relevant to foster that exchange of experience between global south countries to formulate appropriate regulations to regional reality involving different stakeholders. In this sense, CGIBI, Brazilian Internet Steering Committee, carried out a consultation on platform regulation this year. We received more than 1,000 contributions from individuals and organizations of different sectors. You are in the moment of preparing a summary to present to society the main contributions about the theme that could guide CGIBI as to make the recommendations and guidelines for future work on that field. A relevant part of the consultation was precisely on the arrangements necessary for regulating platforms and about the role of multistakeholderism could play on those arrangements. Countries could have different approaches on how to govern and regulate digital platforms. Normally, several government agents are tackling with sub-themes of platform regulation. In Brazil, for instance, we have ENPD, the data protection agents, dealing with data privacy and data protection. We have Senacon from Justice Department dealing with consumer rights. You have CADI, our antitrust agency, dealing with competitive issues, et cetera. So you have, even in government in several countries, in Brazil, for instance, that have distributed a role on the task of regulating the digital platforms. So to put in place the multistakeholder approach to platform governments, our sector should interact with government, with various parts of the government in order to adjust and implement proper process to deal with this no trivial task that is to regulate digital platforms. Even through expectations over multistakeholderism in the global internet government’s realm have been questioned in several policy arenas. In Brazil, multistakeholderism has played a fundamental role in strengthening civil cyber society participation through the means of Brazilian Internet Steering Committee. CGIBA will play a relevant role in the local government’s internet government’s model through multistakeholder participation and has given substantial contribution and active participation in political debates, such as the construction of the Brazilian Internet Bill of Rights, known as MARC-CIVIL, and the Brazilian General Data Privacy Law, GDPR. Possible due to the Brazilian Internet Steering Committee reputation, multistakeholderism was frequently mentioned in the consultation that we carried this year as a value of platform regulation. In this sense, CGIBA, multistakeholder experiencing, even with the difficulties and possible improvements, can serve as an inspiration for institutional arrangements in platform regulations. Beyond that, national and regional approach aiming to establish a sustainable platform regulation model should also consider the challenge to align then with ongoing international process, such as the UNESCO Guidelines for Regulating Digital Platforms and other task force. This workshop aims to delve into different digital platforms regulations, government’s models, through the exchange of global south and developing economies, practice, and discuss the role of state and non-state stakeholders vis-a-vis the value of internet government multistakeholder model. I hope you have a great conversation, and that each experience presented here might serve as inspiration and improvement for others. Now, I would like to pass the word to our fifth speaker, Ms. Mariela Oliveira from UNESCO. She’s online. Please. Mariela, are you there?

Marielza Oliveira:
Yes, I’m trying to unmute myself. Hello. Hello, everyone. Hello. Thank you very much for inviting me to join this session. It’s really interesting. But let me start by apologizing, because I’ll have to leave before the end of it, because I have another commitment. I have to replace somebody who wasn’t able to participate. So anyways, well, thank you very much again. And you had asked me about the issues of multistakeholderism and how what’s accomplishments are increasing over the last decade, and how the model can adapt to the present situations of the internet, and so on. So let me just start by saying that multistakeholderism is kind of a governance approach that helps policymakers to harness the collective intelligence of communities and a type of participatory democracy that emerged very much because of the increasing complexity of policymaking and the difficulties governance systems were facing in finding sustainable solutions to the various problems that they were facing. Multistakeholder engagement was and continues to be, in our view, the best way to build consensus around a shared set of goals and values, while ensuring that the result is created by taking into account the needs and concerns of a broad range of actors, including government, private sector, and civil society, and even grassroots organizations like women’s groups or youth organizations, technical community, and so on. Most complex policymaking happens at the global level. Global multistakeholder approaches have emerged very much as a complement to multilateral cooperation, since it helps to fill some of the gaps in knowledge and legitimacy that are left when global decisions are taken by a single actor. And digital development is one of the most complex processes because it affects most economic, social, and environmental aspects of our life. And this is why the World Summit on the Information Society created the Internet Governance Forum, specifically as a multistakeholder process to co-develop the principles and norms that shape internet evolution. And beyond the WSIS, various organizations have endorsed the multistakeholder model, including UNESCO, of course, OECD, the Council of Europe, the ITU, the G8, African Union, even the UN General Assembly. And at UNESCO, we advanced this multistakeholder concept by embedding it into UNESCO’s Internet Universality Rome Framework, which proposed four principles on how the internet can support the construction of global knowledge societies, that the internet should be human rights-based, open to all, accessible by all, and multistakeholder-led. And that has already been taken up by over 40 countries, this particular framework. But let me say that one would expect that adding greater expertise, more diversity into decision processes, and encouraging consensus building would always lead to better decisions. But that doesn’t really happen always because multistakeholder initiatives, they don’t necessarily meet expectations every time. An effective multistakeholder approach needs to be inclusive, diverse, collaborative, legitimate, and that doesn’t necessarily happen. You know, for example, the parties involved may not have been legitimately chosen to represent all the interests around a given issue. The time and the resources needed for coordination may be just too much for the actual benefit. Power asymmetries may exist, they exist that prevent the parties from contributing equally, or multistakeholder process is not really linked to a final decision-making process, so that there’s no clarity what’s going to happen once the multistakeholder group arrives at their consensus. And the internet today is very different than when it was created. As it became more and more central to how societies and economies operate, different stakeholders, they started kind of jostling for greater power over its governance. And in addition, the reality of multistakeholder participation is actually challenged by the nature of the internet itself, including the issues of jurisdiction and enforcement, scale, and the pace at which, you know, the digital transformation is taking. So, some stakeholders actually have become too dominant and powerful, particularly big tech platforms, and they have a moving fast ethos that doesn’t really work with the pace of other actors. The relationship between multistakeholder and multilateral governance mechanisms, one at global level and the other at national or regional level, is not very clear yet. And the role of governments is actually challenged by the power of private sector, and they face difficulties in enforcing accountability when harms happen. And so, we need to adapt the multistakeholder approach to today’s internet that really is a platform internet. And for that, I think that there are some things that need to be done. First, the global community needs to really build greater awareness and buy-in on the benefits of multistakeholder approaches. It needs to clearly identify who should be around the table to meet the different internet governance challenges, because they are not the same, and you don’t need everybody around every issue. Cybersecurity issues, for example, may be discussed by a particular group, while digital inclusion may be discussed by a different group, but always multistakeholder and representing different actors that have a stake on the issue. We need to reduce knowledge imbalances by raising capacities of government and civil society actors on frontier technologies. For example, we are training judicial actors on artificial intelligence because we realize that many of them do not understand what the capabilities and the limitations of these technologies are, and have developed a competency framework for civil servants on AI and digital transformation exactly for that. It’s hard for them to sit around the table and argue for certain types of approaches with private sector if they don’t really understand what these technologies are capable of. And the other thing is that we really need to identify relevant and legitimate stakeholders to represent each of the groups and working methods that need to be more transparent and inclusive with participants collaborating on equal footing. And that means better resourcing civil society, academia, etc., to participate. At the global level, for example, we see the difficulties that the IGF itself faces on resourcing. It entirely depends on donations, you know, with a tiny little budget for the secretariat. Global governance on the internet, depending on that, is really insane. And we also need to mostly, you know, the big thing is that we need to clarify how much stakeholderism works with multilateralism, and then what their different roles and mutual accountabilities are. The IGF, for example, is evolving in that direction, has created a leadership panel to engage with the UN Secretary General so that you can build and create a bridge between those two processes. But we still need to account for the power imbalances that have arisen, both within multilateral and multistakeholder processes, with the powerful internet platforms located in a handful of countries. And I’m very hopeful that the Global Digital Compact will offer us a chance to reboot the approach and that the emerging global regulation of internet platforms, such as UNESCO’s Internet for Trust guidelines, will contribute to this process. So, thank you very much for the chance to participate.

Moderator:
Thank you, Mariosa, for your truths. Now, I will invite Mr. Samir Abraham to make his speech. He’s from META. In fact, I would like very much to understand the Indian scenario on this issue of platform regulation. And please, Sunil, the floor is yours.

Sunil Abraham:
Thank you so much for that. And a special thanks to all my friends and colleagues at CGI.br I’m very grateful that they have given me another opportunity to work with them. Also, it’s indeed a privilege to be on the same panel as personal heroes, such as Renata. So, thank you again for that. I’m someone that has been involved in the IGF conversations since 2005. And when I came to Tunis for the second WSIS meeting, I was there as part of civil society. And I was trying to make a documentary film with BBC. At that point, I was also working for the International Open Source Network. And one of the people that I interviewed was Gilberto Gil, who was the Minister of Culture of Brazil at that point on this question of open source. So, very quickly, as Maria Elsa has mentioned, there are three clear layers to the regulatory ecosystem. Classical regulation that emerges from the state. Coregulation, which is when self-regulation meets state regulation. State regulation. And then pure self-regulation, which is called the pure multi-stakeholder model, which we see in organizations like the IETF and so on. Especially the standards-setting organizations. And all of these forms of regulation play a close role with each other. And I’m going to take the example of India to show how this is the case. In our existing IT law, the Information Technology Act, the section for protecting sensitive personal data called 43A is a good example of reflexive regulation. If regulated entities do not comply with the state-mandated standard ISO 27001, then they get the same immunity from liability if they’re able to propose a self-regulatory standard and then comply to it. So, that happened in 2008 with the amendment and in 2012 or 11 with the rules. And then the government lost trust, I think, to some degree in self-regulation. And for quite a period since then, for almost 15 years, there wasn’t very… much new self-regulatory and co-regulatory proposals made. But clearly the Indian government has decided to try again, and when it came to regulating the gaming industry earlier this year and last year, they introduced the very same architecture where the regulated entities were meant to come together, form self-regulatory bodies, introduce norms, then comply with those norms in order to be considered compliant with the law. So how does that old theme of free software and open standards and open data connect to all of this? Here is where I would like to take an example from my current employer. So after spending 25 years in civil society, I’ve spent the last three in the private sector with Meta. So how does that all connect to the work that I’m now doing at Meta? It’s because Meta is not like all the other platforms, and I wouldn’t want to paint all platforms with the very same brush. Meta has more than 1,200 open source projects. Just on AI, we have released 650 open source AI models and another 350 open data sets. So let’s now look at a particular anxiety, and this is the anxiety around algorithmic bias. And let’s take the case study of text-to-speech and speech-to-text technologies. Meta has a tool called Massively Multilingual Speech. It’s an open source tool, and this tool can ID 4,000 languages and do speech-to-text and text-to-speech for 1,100 languages. So how can we be sure that this tool does not discriminate when it comes to gender, when it comes to age, when it comes to sexual orientation, or maybe disability? The way this has been done is Meta has also released an open data set called Casual Conversations, which contains 27,000 hours of people of different identities and with different traits speaking these languages so that the open source software that has been released by Meta, Massively Multilingual Speech, can be benchmarked against the open data set. So what I’m trying to argue is that while there is indeed a requirement for states to legislate and to legislate after doing multi-stakeholder engagement and consultation, there is also this equally important role of bottoms-up knowledge building and norm-setting, and ideally good laws, especially laws in the global south, will have space for all of this. And that is the only way in which we can ensure that regulations remain future-proof and constantly and in a very agile fashion, address the harms that are caused by emerging technologies. Once again, thank you for giving me this opportunity to share my thoughts with all of you.

Moderator:
Thanks, Sunil. You have next Miriam Vime, she is Director from Brazilian DPA. Miriam, are you there?

Miriam Wimmer:
I’m here, yes. Can you hear me? Yes. Good night. It’s very late here in Brazil, it’s 11.30pm, but I’d like to say thank you very much for inviting me and for making it possible for me to participate online. I’m truly delighted to join this panel and I’m really sorry not to be able to be with you in Kyoto. I hope you’re having a great time, lots of fun and very productive discussions. Thank you.

Moderator:
I’m curious about your inputs about digital platform regulation interface with data privacy and protection. I would like very much to receive your feedback on how much take-a-headers should apply on this space. And of course, you’re free to talk about the audiences you want.

Miriam Wimmer:
Okay, so thank you very much for the question, Henrique. And I’d like to begin by mentioning that I speak from the perspective of a government stakeholder. I’m currently one of the commissioners at the Brazilian National Data Protection Authority and I’ve been following the debate on internet governance for quite some time now and currently working with data protection. So there’s a very interesting intersection when we discuss platform regulation. And I’d like to begin by highlighting that one of the most important challenges we face when discussing platform regulation is in fact defining the scope of regulation and consequently identifying the institutional actors, be they governmental or non-governmental, that should be put in charge of such regulation. And I think an important point to start off our discussion is to mention that platform regulation may in fact mean many different things, ranging from ex-ante regulation to promote competition in digital markets, to laws aimed at combating disinformation, to rules on labor relations through online platforms, many other things in between. And the term digital platform itself is also very broad, encompassing, just to give a few examples, marketplaces, platforms for renting rooms or calling cars, social media platforms. And of course, these different business models also have different characteristics and maybe raise different challenges in terms of protecting fundamental rights. Another point I think is quite relevant, mentioned by Sunil just now, is that the term regulation is in itself open to many different interpretations. So it includes not only state-centered regulation, which is perhaps the most traditional form of regulation, when you have a governmental agency that may apply administrative sanctions if a formal rule is not complied with, but also many other regulatory arrangements, which include co-regulation and self-regulatory mechanisms, as well as different levels of multi-stakeholder participation and supervision. And here, I think a point that I’d like to make is that Brazil has historically sustained that multi-stakeholderism and multilateralism are not in themselves antagonic. They’re not mutually excluded necessarily. And in the same way, when we discuss regulation of digital platforms, it seems to me that it is possible to conceive models where you have traditional state regulation coexisting with co-regulation or self-regulation at certain levels. So I touch on these preliminary points to call attention to the fact that when we discuss platform regulation in general and the role of data protection authorities in particular, we are potentially discussing many different things, many different approaches, and many different institutional actors that may be involved with this discussion. And regarding data protection authorities in particular, it’s important to note that all these different business models that relate to digital platforms are essentially based on the large-scale processing of data, including and especially personal data. And therefore, to some extent, data protection authorities are already naturally involved or attracted to this discussion. And here, I think it’s valid also to mention that many traditional data protection principles, such as purpose limitation, transparency, data minimization, for instance, as well as several data protection rights, the traditional article rights, access, rectification, cancellation, opposition, already touch upon many of the issues that are raising concerns when you discuss digital platforms nowadays. Also, the approach that many data protection laws bring in terms of the need to carry out risk assessments and have governance programs and transparency requirements already exist in many data protection laws, as is the case in Brazil, although not specifically geared towards digital platforms. So the feeling I had is that while many of our concerns are already somehow related to the field of data protection, of course, other concerns in other fields may be called to act. And in this sense, data protection legislation may not be in itself sufficient, and other existing regulators may come into play. And I think Mariausa or Henrique mentioned antitrust, consumer protection, as well as several other formal governmental bodies. And in fact, there is also discussion of the need to or not to create new regulators. So the issue, I think, at the end of the day is that there is a huge challenge of coordination and cooperation between public bodies, and also the need to assess how multi-stakeholderism can build into this process in order to make it more transparent and more legitimate. Here in Brazil, specifically, this debate on platform regulation, I think, is currently very, very hot. It’s very high in the public agenda. And I would argue that here in our country, it’s been shaped by some factors that are quite specific to our own reality. So the first aspect is that we have a quite complex institutional setup. We are a continental country with over 5,500 municipalities. We have a federal government that is very large, very complex, many ministries, many different agencies, and complex and existing bodies with regulatory competencies that in some sense will touch upon the issue of digital platforms. And here in Brazil, the fact is that it’s not easy to create new governmental bodies. Usually, there are not enough financial resources. There are not enough human resources. And in fact, and this may be a common theme in the countries of the Global South, it’s very difficult to prioritize these issues in countries where often very basic needs of the population are still not properly met. So it happens sometimes that new legislation comes up, but it appears often with insufficient enforcement mechanisms, creating a real challenge in terms of effectiveness. As an example, our Data Protection Authority, ANPD, was created in 2020, while the legislation was already in force for some months, and we had no staff members, no financial resources, and only very recently have we been able to take some important steps in terms of creating a more robust institutional structure. But a second aspect that I think is very particular to Brazil is that we have a very well-known and very consolidated model of multi-stakeholder internet governance based on the Brazilian Internet Steering Committee, CGI.BR, who is in fact organizing this panel. And over the last decade, considering the experience we had with the CGI, with the Marcos Civil do Internat, which is our civil rights-based framework for the internet, and also with the general data protection legislation, we have almost an implicit requirement that regulation aimed at the digital environment in Brazil should include a very high level of multi-stakeholder participation. This, I think, is very interesting. It’s almost a common expectation that when we discuss regulating the digital environment, all stakeholders should somehow become involved in the discussion. I think it’s a very interesting and very important aspect of our domestic setup here, and this has in fact been true not only for the process leading up to the approval of the Marcos Civil, but also for the process that led to the approval of the general data protection legislation, where we had lots of public consultations, public hearings, and intense negotiations until the very last minute, until the legislation could finally be agreed upon and approved formally by National Congress. Another aspect is that because we have this, I think, this psychological framework where we think that multi-stakeholderism is in fact important when we discuss digital regulation, is that there is also an expectation that regulatory bodies that deal with digital issues should also have formal consultation mechanisms in place. So in the case of the DPA, for instance, we have a legal requirement to carry out regulatory impact assessments, but also public consultations, public hearings. And in fact, we have an advisory board, a consultative committee that is multi-stakeholder, that involves governmental members and different members of society, academia, private sector, and so on, who is responsible for supervising and issuing advice to our DPA in terms of what our priorities should be, what our national policies should look like. So this, I think, is an interesting example, and I feel that one of the lessons we could maybe learn from the Brazilian experience is that when discussing platform regulation, multi-stakeholder participation is practically a de facto requirement to ensure that the legislation, when approved, is considered legitimate and also that it becomes, in fact, effective because of the consensus that is created around the terms that end up being approved. And in this sense, I think Mariela spoke a lot about how the multi-stakeholder model has been evolving, and I fully agree this is a concept that is still a work in progress in many senses. But I do think that what we have learned from the multi-stakeholder governance model to the internet, which also involves understanding the roles and responsibilities of different actors, right? This is certainly a model that could be taken into consideration when discussing platform regulation in general, taking into account also the contribution that existing legislation and existing public organizations can give to facing the challenges that digital platforms raise in our society nowadays. So thank you very much, Henrique. I look forward to the second round.

Moderator:
Thank you, Miriam. So, João, thanks for staying with us today. And maybe you can talk about your research on digital platform, especially in India. Thank you. The floor is yours. Thank you.

Joanne Cunhe:
Hi, I’m João. I’m from the Center for Communication Governance. We’re an academic center that’s out of a national law university in New Delhi, India. Henrique was just referring to some of the recent platform regulation work that we’re trying to do, given a lot of the work that’s happening in India on platform regulation, where we’re looking at seeing a new regulation that’s going to look at platforms, but also emerging technologies. And some of the things that I wanted to focus also in light of the larger discussion here is the challenges that India and some global south countries generally have when it comes to participating in regulation on platform governance. And a lot of these challenges stem from our challenges with just generally more global processes of coming together in forums and participating at those larger discussions. And I think that that would hold true for a lot of other global majority countries as well. Some of these discussions we’ve been having over the course of the last couple of days, and one of the first things that comes to mind more naturally is the issue of resources in general. To be able to participate in any global discussion or even within our national discussions, the capacity is a huge factor. And I think that that capacity, whether it’s financially being able to access these conversations or even just through personnel, there are a lot of challenges that do not allow a lot of voices from the global majority to be present. And then, of course, as a result, you have a lot of perspectives that are missing out in the conversation. And that’s especially when we’re thinking about the involvement of groups that either represent folks directly impacted from platform harms or even people that are studying the impact of some of these harms. So definitely resources is a huge consideration. And just in terms of personal capacity, I would think that when it comes to actually being in terms of meaningful participation at some of these conversations, at the larger levels especially, there are limits to how much civil society can come in on to be representative and even to engage in meaningful dialogue if you don’t have a lot of the other stakeholders like the technical community, the state representatives. And that kind of burden on civil society sometimes is really hard, especially if that’s not collaborative. And just the point on collaboration is what I wanted to talk next. But just before we get to that, when we think about the representatives from state and from industry or technical communities being involved in participation, something that has come from our research, not related to platform regulation, but an indication of one of the same challenges is that in a project that we’re working on with emerging technologies, specifically blockchain, we’re trying to explore standards development in the blockchain ecosystem. And what has come out quite often in terms of some of these representatives from industry and even government representatives being able to participate at the various standards bodies, forums, is quite difficult for them to do given the financial constraints that it sort of places on them to be able to attend these discussions. So not only is it expensive, but it’s also, it takes a lot of time and you have to have that kind of personnel. But coming back to my point on collaboration, I think that something that in the Indian context is slowly developing, but still at, I would say, not a stage to be able to make that much of a difference is that a civil society is slowly building up collaborations within. We have worked with each other over a very long period of time, but to be able to have that kind of collaboration that we’ve had in specific instances to be able to effectuate movement on regulation is somewhat slower, especially, I would say, and this is a challenge that can be there in some parts of the global south, but having that fragmentation can often make it difficult to have support at general forums that we are having these discussions in. But in India, we’ve seen a couple of instances where this has worked really well when we had our data protection bill being contemplated and it’s been going on for years now. I think we just had the bill come into law and some of the initial responses to the data protection bill had a lot of civil society gathering together to offer joint comments, to push the needle on conversations. Similarly, when we had issues with net neutrality in India, I think around 2014-15, there was huge collaboration within not only civil society, but other stakeholder groups. We had the technical community also come together and collaborate to make that change in saying that we needed net neutrality and then the proposal at that point was not okay. But just that this, because our stakeholder system is still developing and how we collaborate. with each other, we’re still in the process of being multi-stakeholder within this context itself. And to bring that even into global forums is in the process of happening. And something from the way that we work at the organization is that we’re, and this is also true for other organizations in this space, is that we’re trying to be more interdisciplinary, have that multi-stakeholder approach to the way that we even have conversations at a national level, trying to get in, being from a space that’s largely academic and legal oriented to ensure that you have social scientists, you have the technologists in the room when you’re trying to think about how policymaking should happen. Just having those voices, I would say, is something that is slowly, there’s a conscious effort towards it. I think that another linked challenge has been that when it comes to decision-making within the Indian context, there are different approaches to stakeholder participation in policy, in policymaking. For example, when I was talking about net neutrality in India, that was something that we saw, we saw proposals in a different manner, it come out in a different manner. For example, our telecom regulation authority offers the point of participation before an entire bill has sort of come into the public forum, so you have the opportunity to comment on a proposal, you have the opportunity to participate in how the idea is being developed. And there’s also a different approach that sometimes happens where with, say, our Data Protection Act that came out and possibly with the new Digital India Act, is that your point of participation starts at a draft bill, and so the involvement that you have at those beginning stages as stakeholders is somewhat limited, and so that also plays a role in how much you can have that level of stakeholder participation. And just one last thing before I close, is that I think that, and this once again holds true, I think for most contexts in the global majority, is that when you’re thinking about participation in terms of having all of your stakeholders together, it makes a huge difference if the kind of dynamics that are involved, if it’s asymmetric, if it’s between civil society and the state, or the state and platforms, the way that it plays out in each jurisdiction has a huge bearing on how you have that kind of participation in your own context, and having to tailor your approach depending on what works best for your context is extremely important, and I think that’s one of the things we can take away even when we’re thinking about multistakeholder participation at the global, in global processes and discussions, is that ensuring that that diversity and representation is not just, it’s not dominated by certain groups, you have actual representation that is properly diverse, to ensure that those kind of asymmetries, those kind of power dynamics no longer are present, and I think that it takes a lot of, it takes conscious effort to try and create those mechanisms that allow for this, but yes, I wanted to end on the point on power dynamics, because I think that that is something that we can take and learn from our own context, even when we approach more global forums and how the Global South participates at some of these global forums. Thank you.

Moderator:
Thank you, Joan. Unfortunately, Ms. Khadija is not online, maybe she has a problem, so our next speaker will be Renata Ivler, she’s CEO from Open Knowledge Foundation. Please, Renata, give your opinions and comments on those issues that we are talking here. The floor is yours.

Renata Ávila:
Thank you so much, I hope that you can hear me well. It is such an honor to be invited to this panel. CGI is an organization that I admire, I consider it a home away from home. It is a place where you feel that, you feel and you sense what can be a democratic governance of the Internet. And now that Brazil is back, as we say outside, we can say from the outside in a more easy way, now that Brazil is back in the multilateral and multilateral arena, I’m coming here with a great hope of what we can achieve with this interaction, multilateral, with the multistakeholder model that Brazil has developed in the processes that Brazil is going to lead in the very near future, including G20. I will tell you a little bit more about those dreams in a minute. But what I want to talk when talking about multistakeholder governance is, I want to talk from the perspective of the place I come from. I want to talk from the perspective of countries with a huge democratic deficit, with brutal inequalities within society, and with a very, very, very weak government hit by austerity measures constantly. And in such context, just to give you an example, we in Guatemala do not even have a privacy and data protection law. It’s 2023 and we don’t even have that law. In such context, something that becomes very, very, very important in this multistakeholder model is for companies to understand the relevance of not taking advantage of the, you know, like taking advantage of democratic deficits, of lack of legislation, just to, you know, go forum shopping and doing whatever they want in some jurisdictions while upholding the very high rights standards, just because the legislation, for example, in the European Union is more sophisticated than the legislation in Guatemala. Something that often is observed from the outside, is observed from civil society in countries like mine is that precisely that makes people using a platform in a different jurisdiction feel very, like, you know, like mistreated and somehow abandoned by a company that applies double standards, higher protection to some users than to others and higher mechanisms of, you know, more efficient mechanisms, just because the legislation in some countries forces companies to do so. And that, I want to highlight it because we have suffered in very complex settings, such as elections in the region, when we see, for example, dedicated offices for the big countries that represent an interesting share of the market for some electoral processes, and a fluid cooperation and dialogue between civil society and the companies. And in other countries, some companies will have like a closed doors meeting with powerful, not necessarily, you know, the most ethical actors, and civil society will have like, you know, a door slammed in the face. And we like, you know, try to like, try to find someone who might know someone inside of the company, and try to go through back channels to make the company aware of a very serious problem. We also have a problem when the government is not the most ethical government in the world. And there’s enough suspicions from the outside that the government is colluding with certain companies to either like, you know, get away with labor, you know, labor violations, or even even more concerning, you know, like with with deals, internal deals to favor, favor, like, the position of the government and enforce it without an adequate law framework. Then I want to also highlight here, the importance when you have a weak government that doesn’t have a dedicated office for or a mechanism for facilitating this multi stakeholder engagement, often what happens is it goes, it goes to the, the organism in charge is often the Ministry of Communications. It’s often like, you know, I don’t know, it’s the automatic default, that it goes to the Ministry of Communications. And it usually, while it might, it might look to the outside that through multi stakeholder processes taking place, what happens is a lot of participation washing, a nice couple of tables, what you would like, you know, invite someone from industry, someone like, you know, like a NOS ARC approach, two from here, two from there, two from there, and a couple of pictures are taken. And then a decision that is affecting a lot of communities not not present at the table is taken without notice. Often missing from table, missing even from the government side is the office of the human rights organism and National Human Rights Office. The ombudsman is not present. They have authorities are not present. Nobody representing the interest of children is present. Nobody is representing the interest of indigenous peoples is present in the case of my country, for example, is very relevant. The the problem when when there’s a not an adequate multi stakeholder model is that this leads to a completely unbalanced while looking from from the outside as participation what it is often the validation of the most powerful actor at the table point of view. I want to refer like very quickly to specific things that different actors in such context could take to to to achieve better multi stakeholder governance. In the in the case of companies, when you are like dealing with a small country, having explicit contact points is incredibly useful as a shortcut of establishing communication between different actors. Personally, I have I have to have had to do it with three countries, finding even the contact person when the company doesn’t have an office in the country, even in moments of emergency when there’s death threats or there’s like very serious situation going on is impossible. So having a specific point of contact in a place that’s accessible to all actors is extremely important. The second is do not do not do the double standard thing. I know that that’s very challenging because I know that the companies have one objective that is profit and that’s it. And sometimes it’s easier not to comply with the highest standard of regulation. But for users, it means a lot. The other thing is try to adopt this approach that you will have to adopt now with the DSA in Europe or have policies, procedures, measures and tools available for people to understand how you are operating in the you are operating your companies. We understand that many of the trade laws protect companies and enable them to be as opaque as possible. But in this context, if you have a commitment to multistakeholderism, you have to be more proactively transparent. And that leads me to academia. You have to open your doors to academics as well to understand really what is going on inside. Access to that data and openness to understand what is going on will provide us with enough information to come at the table and have meaningful participation rather than just performative participation. And the nice picture at the end. For governments, I think that the interagency approach is urgent. It can no longer belong, you know, like to just one unit. It can no longer be an issue of the Ministry of Telecommunications of a technical office somewhere dealing with science and technology. It has to be is an issue that now is central to public law and to the most at the highest level of the institutions that make the government operative for civil society. I think that looking inside, it is also necessary not only have the Internet experts at the table, again, being intergenerational, being intercultural and being aware of the tremendously complex rural-urban divide. And of course, the gender divides is necessary. And having the internal process within civil society to reach broader consensus is also very, very, very, very important. And not to replicate societal exclusions. And then we need to understand that there’s another process that we are seeing as a reaction in many countries of the lack of multistakeholder approaches that are really meaningful. And that is we are seeing more like not only nationally but transnationally the organization of users from different sectors around a platform protesting against the platform. We have seen it with Make Amazon Pay, for example. That’s a very interesting example on when you have a deficit in the governance and participation of people, people will organize and protest against what they see unjust. That said, for countries that are doing it well, the only thing that I can hope is that the model is shared broadly with other countries that need a similar approach. And having a model, a multistakeholder participation, a general frame will be extremely useful for countries, for example, in Latin America and in Africa to replicate. And having a South-South cooperation will become key to be stronger, not only as isolated countries but as regions, to have a better leverage with the legislation that we want and the frames that we want from the companies, but also to increase cooperation and collaboration across academia and across civil society that is meaningful. Yeah, that’s it. I would say the most important part of what I said is being aware that not everything is the EU and not everything is a country as sophisticated as Brazil or India. Let’s look outside and see how we can help.

Moderator:
Thank you, Renata. So, we have time for the debate here. If you have questions here at the audience, please go to the mic, and we also have a couple of questions online. I believe you first respond to our speakers a question that comes online, and after, please go to the mic and make your considerations also in questions. Juliano, do you have something from online?

Online Moderator:
Thank you, Henrique. We have received two questions. I may read them through here. First question is, platform regulation may present a broad regulatory agenda. How to cope with the transversality of the phenomenon of platformization in developing national regulation for digital platform? Second question is, considering the challenges of digital platform regulation governance, what are the pros and cons of a centralized model with more state protagonism in a polycentric model with a more balanced protagonism between state and civil society?

Moderator:
So, all the speakers are free to respond to one of those questions. One of you could exercise on that, please.

Miriam Wimmer:
I could perhaps offer some initial comments, Henrique. So, I think both questions are really interesting, and they discuss the question of the institutional setup to deal with the phenomenon that is very cross-cutting, very transversal, and reflects, in fact, I think in a broader sense, the idea of platformization of everything, government platformization, private business models platformization. And I think both questions are very relevant because they touch upon the idea that multiple regulators may at some point in time be called to act when dealing with digital platforms. And as I mentioned earlier, I think the huge challenge we have is firstly making sure that these different pieces of legislation fit well together in a jigsaw puzzle that creates a coherent picture. And I think this is a challenge not only for the Global South, but also for Europe, for instance, who is discussing DSA, DMA, AIAC, GDPR, and the pieces seem to not be yet quite fully interoperable. And the second issue, then, is how to make sure that in the day-to-day enforcement of legislation, the different regulators are actually speaking to each other and making sure that their approaches are coherent and they are acting under a systematic understanding of the whole legislation applying to the digital scenario, which is huge and cross-cutting, as mentioned. I think the second question raises also an interesting point, when it compares centralized models versus polycentric models involving state and civil society. I would argue that the models, a centralized model for platformization, is actually a very good model. I think the second question raises also an interesting point, when it compares centralized models versus polycentric models involving state and civil society. I would argue that the models, a centralized model for platform regulation does not really seem feasible simply because platforms raise so many questions in so many different fields, such as labor relations, misinformation, human rights protection, competition aspects. So it does seem to me that we, it doesn’t really make sense to discuss a centralized regulator for the entire digital ecosystem. So we would necessarily be discussing the need for multi-agency cooperation and government, which is the point I think Hinata made in her speech previously. And my feeling is that in the same way that we can discuss the compatibility between multi-stakeholder and multilateral approaches, we can also discuss models and frameworks that involve both traditional governmental regulation, but also… associated with multi-stakeholder participation. And I think there are many different models we could debate. I think there is a model in place in Brazil, which could certainly be perfected, which involves agencies with advisory boards, with mandatory participation procedures. And here again, I would like to echo what Hemata said. We cannot simply have this, you know, just for show we need meaningful participation. This is in fact a challenge. But I would say that a polycentric model involving different state organizations, but also civil society, seems to me to be more appropriate considering the complexity of the digital environment and the many different aspects that we are aiming to regulate when we discuss platform regulation. Thank you.

Moderator:
Sunil raised his hand and also Hemata. Please, Sunil, make your comments.

Sunil Abraham:
Yeah, thank you so much. I also wanted to connect what both these questions are interrogating with the point that Joanne made earlier. I’d like to recall the case study of the conflict between 5G and 5GI, indigenous standard proposed by both academics and also Indian government entities. As Joanne argued, the lack of structured resources in order to ensure that there is regular participation at fora such as 3GPP, resulted in the Indian proposal that covered rural connectivity and remote connectivity low mobility, large cell. That proposal was not part of the main 5G standard. And then the Indian community decided to fork the standard and have an alternative indigenous standard that would have come with big consequences, especially losing out on the network effects of hardware manufacturing. But fortunately, that story ended well. And the 5G committee accepted the LMLC extension to the standard. The second thing I’d like to say is when I hear questions, as you have read out, I’m always tempted to answer both and. It is almost that we always need everything that is being proposed. And I’ll use two examples. Let’s look at open standards such as WCAG for disability. Ideally, a mature standard like that should be mandated by the state. That is the way we will protect the rights of the disabled. But another inquiry space, as I had just covered in my first intervention, fairness benchmarks for AI models. That is somewhere where multiple benchmarks are evolving. And perhaps we don’t want a mandate at this point because there are some companies pursuing an open model, other companies pursuing a proprietary model. We need fairness benchmarks that work for everybody. And therefore, a mandate at this stage is premature. And therefore, the polycentric vision of governance is much more appropriate. I’ll end my comments there.

Renata Ávila:
Yeah, and I wanted to also respond to the second question. And I also agree with Sunil that it’s both and. And the and, my and is civil society needs access to the problematization and need mechanisms to trigger the multi-stakeholder mechanism. It is usually top down and does not bottom up. And that’s a problem. Like it is only when we need to launch campaigns and we need to make a lot of noise from the side of civil society to finally trigger the mechanism often or wait for the next meeting or so on. And I think that if civil society is enabled to design the process, if some room is given for civil society to design and trigger this process to be activated when discussing a problem or discussing something urgent, discussing something relevant, the multi-stakeholder will be, this polycentric approach will be like more agile, but also the centralized approach will be more agile. I, again, you know, like I remember the robust case of Marco Civil and then the NetMundial process that took place in Brazil. And how did, I remember that I was dividing my day between two spaces. It was the NetMundial, the arena NetMundial when youth was participating, when it was actually something arena NetMundial was a process that was bottom up. It was a process that, okay, if all the countries are like gathering dirt and there’s a lot of noise, very technical, let’s create a space for expression with our own rules showing instead of just telling people how we are making the internet work. And it was quite wonderful to have this coexistence of more creative and open spaces showing how the technology is lived by different parts of the society. And not only the, you know, like the invitation on the meeting with just one spot for one member of the society. So access to problematization, access to the design of the process and a possibility to activate the processes when needed.

Moderator:
Thank you, Renata. We have a question for the audience, Camila.

Audience:
Thank you. Hi, good afternoon. I’m Camila Leite. I’m from the Brazilian Institute of Consumer Defense, IDEC. And we’re talking lots about how the challenges are transversal to lots of issues and how it’s important to cooperate. You were talking about cooperation in terms of different stakeholders. So multi-sectorialism between authorities, between different laws and also between different countries. But beyond of this important role of cooperation, we also have some challenges related to cooperation. I’m very supportive of that, but we have to face these challenges. And I know that we have lots of challenges when we’re talking about dynamic markets, when we’re talking about new regulations. But for example, in terms of cooperation between authorities, we can take longer to answer. We might have a decision that might be a soft law than a hard law when needed. And it might be necessary, but it might postpone some important actions. And the regulation might solve that, but the time can be longer. And also between countries, we have these global issues, but sometimes we have different answers in different countries, which might be also a response to the national and to the regional context. But we are talking about global problems. We are talking about a national context. So how can we harmonize that? I know that we don’t have a silver bullet to all of that, but how can we face these challenges related to cooperation? Thank you.

Moderator:
Thank you for the question, Camila. Someone from the speakers could respond or comment.

Miriam Wimmer:
I don’t really feel entitled to respond, but I can certainly comment. And I think Camila has a very important point when she states the practical difficulties of cooperation. And I speak in the position of a civil servant who has been working in government for 17 years now almost. And in fact, the truth is that cooperation does not arise spontaneously because people wish to cooperate. It has to be crafted into legislation. There have to be procedures for cooperation to take place. And even when this exists, it’s not always easy. It’s usually hill up. As a regulator, I say we take twice as much time when we have to convince other governmental agencies to reach consensus in a certain position. So it’s not easy, but it is necessary, I think. And I think it’s a bit the same position we have with regard to multistakeholderism and public participation. It takes time. It gives lots of work for regulators to analyze contributions in a substantive manner, not only for show. It makes procedures take longer than they would if we decided in a top-down manner. But it is important to make the end result more legitimate and more effective, in my point of view. So I don’t really have the answer for that, Camila. I think you have an important point, but I don’t see any way around cooperation.

Renata Ávila:
Yeah, very quickly. That sometimes, you know, civil society can be like the secret weapon for that cooperation to happen. Well, it happened with a case on WhatsApp, actually, when WhatsApp wanted, a couple of years ago, to change their policies and, you know, to get lots of people in the Global South to simply accept and update on terms and conditions. It was amazing cooperation among the antitrust and consumer protection authorities behind, you know, behind, it was not very public, but it was happening. And civil society facilitated exchange of practices and exchange of cases that there were, a couple of cases were launched in the EU and in Turkey. And then these authorities shared how the reasoning behind a case of antitrust case in this change of conditions because of dominant position and so on. It was quite amazing to see when we connect, like civil society was like the facilitator for the connection of authorities, of consumer protection and antitrust. And the magic happened because, you know, like they could talk to their peers on a specific case. And that’s also a very important aspect of multi-stakeholder cooperation. Not all the stakeholders need to be invited at the table at the same time, all the times. It can happen, collaboration can happen between actors, not necessarily always involving the same actors in the government and always involving the private sector. We need our time and our space to share without some of the actors and it enables good things to happen.

Moderator:
Thank you, Renata. I have a question. I would like to send you, Abraham, as a person from META. As soon as we have those new regulations on platform on Europe, how you think, how you happen with the META platforms global? Because somehow when we implement some features on the system to comply with European regulation, maybe it will be worthwhile to, that those features should be applied to other geographies. So can you respond about how you think it will happen after the approval in Europe of DSA and DMA? Please, Abraham, if you can answer.

Sunil Abraham:
Thank you so much for that question. So I can shed a little light on how the regulation readiness teams within META operate. What basically happens is there are policy teams and legal teams that keep track of a variety of enacted law and also proposed law across many, many jurisdictions. And when the engineers are analyzing this corpus of laws and obligations, both current and future, their compliance engineering approach is to build for artifacts that can be deployed in multiple jurisdictions. So you’re absolutely right that if a particular user right is enabled, for example, in the GDPR, then that user right is also rolled out in other jurisdictions where that legal obligation may not exist at all. But there are also instances where the legal obligations are conflicting. So for example, there are legal obligations in the Indian context that conflict with legal obligations from other jurisdictions. And then the task before the engineers is to build compliance to both obligations. And in those cases, it is not always possible to roll out something that is specifically there in Europe, across the world. I’ll go back to the point Renato made about having an explicit contact point. This is something that is quite mature within the context of European regulations. In Indian law, especially the IT law, there is a requirement to have three people employed by the global corporation and stationed in the India office. And these three people have to be available both to users, at least one of them has to be, and then the others have to be available to government stakeholders. And this is something, as Renato pointed out, is a obligation that is user empowering. That is definitely the case. But unfortunately, the obligation also comes with personal criminal liability. And the additional complexity in the Indian law makes it much harder for us to automatically sort of homogenize our compliance approach. And therefore, compliance in one jurisdiction often will look different from compliance in another jurisdiction.

Moderator:
Thank you. So we are short in time. So I will ask for our speakers from his final remarks, starting from Maria Elza. Maria Elza, please. Maria Elza is left. We have to leave already. It’s true, of course. So Joanne, you can start with your final comments.

Joanne Cunhe:
Thank you. I think that we’ve covered a lot of different things, a lot of different topics within how we should approach platform regulation in the global majority. Our approaches, of course, have to be extremely tailored to our specific context. And the questions that we had just now, like the speakers have said, there aren’t any clear answers. But I would just say that I think that as we’re currently trying to understand how to regulate for our own context and to see what those nuances are that need to reflect, even in global conversations, I think that it will definitely give us a better sense of what we should expect and what we should rally for when we’re thinking about just platform regulation and general governance in larger conversations. Thank you.

Miriam Wimmer:
Okay, next meeting. One minute for your final comments because we’re short of time. Thank you. I’d just like to thank you for the opportunity to take part in this debate. It was really illuminating. I really enjoyed the comments and I look forward to other opportunities. Thank you. Okay, thank you. Zanjou, please, your final comments. Yeah, again, a very quick intervention to say that I agreed with another point that Renato made on transparency being the best antidote for a variety of concerns. And I’d like to point to upgrades to our election ads, to our ads library, which I think is important for civil society and academics to hold us to account.

Renata Ávila:
Thank you. Renato. And last words, South-South cooperation. We have a big imbalance of power here and imperfect multi-stakeholder models in many, many countries. So let’s share the good multi-stakeholder practices regarding platforms and let’s learn from each other and connect not only across different multi-stakeholder bodies in different countries, but as communities of people studying the platforms and people holding the platforms accountable. And let’s share the good results, but also let’s share when we are not successful, the results that that led, like when there’s democratic deficit and a problem is caused locally, we need to document this well. So it’s not repeated elsewhere. And so the big platforms that are like serving many users in many countries can correct promptly with the pressure of more than one jurisdiction.

Moderator:
Thank you, Renato. So thank you for everybody that participate in this fruitful debate. I hope that the debate is, you continue inside IGF and other areas. And I will need to say that CGI-BR is working very hard on that issue, mainly in Brazil with our consultation and next recommendations. And I will keep it informed about the results of our consultation in Brazil. Thank you very much. So we’ll close this session. Thank you.

Audience

Speech speed

176 words per minute

Speech length

277 words

Speech time

94 secs

Joanne Cunhe

Speech speed

140 words per minute

Speech length

1676 words

Speech time

717 secs

Marielza Oliveira

Speech speed

154 words per minute

Speech length

1317 words

Speech time

511 secs

Miriam Wimmer

Speech speed

183 words per minute

Speech length

2676 words

Speech time

878 secs

Moderator

Speech speed

121 words per minute

Speech length

1669 words

Speech time

827 secs

Online Moderator

Speech speed

94 words per minute

Speech length

89 words

Speech time

57 secs

Renata Ávila

Speech speed

146 words per minute

Speech length

2506 words

Speech time

1027 secs

Sunil Abraham

Speech speed

126 words per minute

Speech length

1565 words

Speech time

746 secs

Non-regulatory approaches to the digital public debate | IGF 2023 Open Forum #139

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Juan Carlos Lara

The discussions revolve around the challenges posed by online violence, discrimination, and disinformation in the digital public debate. These harmful effects have far-reaching impacts, particularly against marginalised and vulnerable communities and groups. The failure of both private tech companies and states to fully comply with their human rights obligations has worsened these challenges.

Regulatory proposals have emerged globally in response to these issues in the digital public sphere. These proposals aim to address concerns such as competition, data protection, interoperability, transparency, and due diligence. Efforts by international organisations to provide guidelines and regional blocs reacting with their own concerns have contributed to this regulatory landscape.

While regulation is necessary, it is crucial that it does not infringe upon the principles of freedom of expression and privacy. The question of how to strike a balance between regulation and these fundamental rights remains a point of debate. It is important to consider the potential fragmentation of the internet and the lack of regulatory debates in many regions of the majority world.

Soft law principles, as well as the application of international human rights laws, play a crucial role in guiding the behaviour of companies in the digital sphere. They have provided valuable guidance for alternative frameworks. However, the effectiveness of these principles and laws is a matter of discussion.

In conclusion, the discussions highlight the urgent need to address the challenges posed by online violence, discrimination, and disinformation. While regulatory proposals have emerged globally, it is essential to ensure that the regulation strikes a balance between protecting human rights, such as freedom of expression and privacy, and addressing the harmful effects of the digital public sphere. Soft law principles and international human rights laws provide valuable guidance for company behaviour, but ongoing discussions are needed to determine their effectiveness. Overall, collaborative efforts between governments, tech companies, and civil society are essential to achieve a digital space that upholds human rights and promotes a more inclusive and equitable society.

Chantal Duris

Chantal Duris stressed the importance of adopting both regulatory and non-regulatory approaches to address challenges related to social media platforms. She expressed concern about legislations that primarily hold platforms accountable for user speech, rather than addressing the underlying business models. Duris highlighted the potential dangers of such approaches, as they can impact freedom of expression. She advocated for platforms to operate based on the UN Guiding Principles, regardless of regulatory status, emphasizing the need to respect human rights. Duris also emphasized the importance of addressing the root causes of issues like disinformation and hate speech, both through regulating business models and exploring solutions outside the digital space. She supported the decentralization of social media platforms to empower users and enhance freedom of expression. Duris expressed concern about the limitations of automated content moderation tools and suggested the need for more human reviewers with language expertise. She discussed the trend of strategic litigation against platforms, highlighting that it could hold them accountable for failures to respect human rights. Duris recognized the challenge of keeping pace with evolving technology and regulatory initiatives, but argued that both platforms and regulators should take responsibility for upholding human rights. She also noted the growing recognition of civil society’s role in the digital space and the increasing consultations and engagements sought by platforms and regulators. Overall, Duris highlighted the need for a multi-faceted approach, incorporating regulatory measures, adherence to UN Guiding Principles, addressing root causes, decentralization, improving content moderation, and recognizing the role of civil society, with platforms and regulators sharing responsibility for upholding human rights.

Ana Cristina Ruelas

Summary:

Addressing harmful content online requires a multidimensional approach that takes into account linguistic nuances, cultural context, and the protection of freedom of expression. This is highlighted by the need to consider the complexities of different languages and crisis situations when moderating content. Companies must align their actions with the UN guiding principles to ensure their policies prioritise transparency, accountability, and human rights.

Education and community engagement play integral roles in tackling harmful content. Media and information literacy programmes empower users to navigate online spaces responsibly, while fostering a sense of shared responsibility in maintaining a safer online environment. Furthermore, a synergistic effort is necessary, combining policy advice, regulation, and the involvement of multiple stakeholders. This involves a multi-stakeholder process that includes the development, implementation, and evaluation of regulations.

Collaboration between regulators and civil society is vital to effective enforcement. Creating conversations between these groups can help reduce tensions and enhance the efficacy of regulations. Regulators should not feel abandoned after legislation is passed; ongoing enforcement and operation of laws must be a key focus.

To achieve a balanced and collective approach in dealing with companies, stakeholders from different regions are coming together. For example, the African Union is taking steps to address companies with a united front. This collective approach allows for better negotiation and more equitable outcomes.

It is important to emphasise a balanced, human rights-based approach when dealing with companies. Among the 40 countries analysed, some believe that this approach is the correct path forward. By prioritising the principles of human rights, such as freedom of expression and inclusive stakeholder participation, governments can create a regulatory framework that safeguards individuals while promoting peace, justice, and strong institutions.

In conclusion, tackling harmful content online requires a comprehensive and nuanced strategy. Such an approach considers linguistic nuances, cultural context, and the protection of freedom of expression. It involves aligning company actions with UN guiding principles, prioritising education and community engagement, and establishing effective regulatory processes that involve collaboration between regulators and civil society. With these measures in place, a safer online environment can be achieved without compromising individual rights and the pursuit of global goals.

Pedro Vaca

The current dynamics of freedom of expression on the internet are concerning, as there is a deterioration of public debate. This raises the need to ensure that processes, criteria, and mechanisms for internet content governance are compatible with democratic and human rights standards. Moreover, limited access to the internet, including connectivity and digital literacy, poses a challenge in enhancing civic skills online.

Recognising the importance of addressing these issues, digital media and information literacy programmes should be integrated into education efforts. By equipping individuals with the necessary skills to navigate the digital landscape, they can critically evaluate information, participate in online discussions, and make informed decisions.

State actors have a responsibility to avoid using public resources to finance content that spreads illicit and violent materials. They should instead promote human rights, fostering a safer and more inclusive online environment. In addition, internet intermediaries bear the responsibility of respecting the human rights of users. This entails ensuring the protection of user privacy, freedom of expression, and access to information.

Managing the challenges in digital public debate requires a multidimensional approach. Critical digital literacy is vital in empowering individuals to engage in meaningful discourse, while the promotion of journalism supports a free and informed press. Internet intermediaries must also play a role in upholding human rights standards and fostering a healthy online debate.

Upon further analysis, it is evident that there is a lack of capacity and knowledge among member states regarding internet regulation. This poses a significant challenge in effectively addressing issues related to content governance and user rights. Efforts should be made to enhance understanding and collaboration among countries to develop effective and inclusive policies.

Shifting the focus towards the role of public servants and political leaders presents an opportunity to reduce discrimination and inequality. By implementing stronger regulation, especially for political leaders, their limited freedom of expression compared to ordinary citizens can be addressed. Adhering to inter-American and international standards can serve as a guideline for ensuring accountability and promoting a fair and inclusive public sphere.

Overall, this extended summary highlights the importance of protecting freedom of expression online, promoting digital literacy, and holding both state actors and internet intermediaries accountable. It also emphasizes the need for increased collaboration and knowledge-sharing among member states to effectively address the challenges in the digital realm.

Ramiro Alvarez Ugarte

The global discussion on the regulation of online platforms is gaining momentum, with diverse viewpoints and arguments emerging. The Digital Services Act (DSA) implemented in Europe is being viewed as a potential model for global regulation. Bills resembling the DSA have been presented in Latin American congresses. Additionally, several states in the US have passed legislation imposing obligations on platforms.

Legal challenges concerning companies’ compliance with human rights standards and the First Amendment are being debated. These challenges can have both positive and negative implications for holding companies accountable. For instance, companies have faced litigation in the US for alleged violations of the First Amendment.

In addition to regulatory measures, there is recognition of the potential of non-regulatory initiatives, such as counter-speech and literacy programs, in addressing the challenges posed by online platforms. These initiatives aim to empower individuals to discern between fake and real information and combat disinformation. Successful implementation of counter-speech initiatives has been observed during Latin American elections.

Nevertheless, concerns exist about the potential negative consequences of well-intentioned legislation on online platforms. It is argued that legislation, even if well-designed, may have unintended harmful effects in countries with insufficient institutional infrastructure.

The tension between decentralization and the need for regulatory controls is another point of contention. A fully decentralized internet, while offering freedom of choice, may facilitate the spread of discriminatory content. Balancing the desire for increased controls to prevent harmful speech with the concept of decentralization is a challenge.

Polarization further complicates the discussion on online platform regulation. Deep polarization hampers progress in implementing regulatory or non-regulatory measures. However, it also presents an opportunity to rebuild the public sphere and promote civic discourse, which is essential for overcoming polarization.

In conclusion, the global conversation on regulating online platforms is complex and multifaceted. The potential of the DSA as a global regulatory model, legal challenges against companies, non-regulatory measures like counter-speech and literacy programs, concerns about the unintended consequences of legislation, the tension between decentralization and regulatory controls, and the challenge of polarization all contribute to this ongoing discourse. Rebuilding the public sphere and fostering civic discourse are seen as positive steps towards addressing these challenges.

Session transcript

Juan Carlos Lara:
The mic is open, guys. What? The mic is open. It’s a hot mic. I think it is time to start. So it is now the moment in which we begin this panel, this session right here. Welcome everyone who is attending this in the final day of the IGF 2023. This is open forum number 139, non-regulatory approaches to the digital public debates. Are we going to speak Spanish? OK, cool. So welcome to this session. This is the final day of this year’s IGF. It is a pleasure to be with you all. First of all, I want to thank the organizers of this here event, representing the Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights of the Organization of American States. Thanks also to the representatives of Sweden and the European Court of Human Rights that have supported the proposal for this session. And also to the Foundation for the Freedom of the Press in Colombia and the Center for Freedom of Expression and Access to Information at the University of Palermo in Argentina. Second of all, I will introduce myself. My name is Juan Carlos Lara. I work for Derechos Digitales, a civil society organization working on the intersection of human rights and digital technologies in Latin America. I am coming from the city of Santiago in Chile, and my colleagues are scattered throughout the Latin American region. Our concern as an organization is how digital technologies can be used for the exercise of human rights, as well as they can be a threat to human rights when they are regulated or misused by actors both private and public. Finally, I’m going to briefly introduce the panelists. Just by name, they will be introducing themselves when it’s time for their own interventions. We are accompanied at this hour online by Mr. Pedro Vaca, the Special Rapporteur for Freedom of Expression of the Inter-American Commission of Human Rights in the Organization of American States. Here on site, we have Ana Cristina Reyes, Senior Program Specialist at the Freedom of Expression and Safety of Journalists section in UNESCO, the United Nations Educational, Scientific, and Cultural Organization. By Chantal Duris, Legal Officer at Article 19, the International Human Rights Organization working to protect and promote the rights of freedom of expression. And by Ramiro Alvarez-Ugarte, Deputy Director at the Center for Studies on Freedom of Expression and Access to Information at the University of Palermo, Argentina. Thank you all once again for attending this, and thank you to the panelists who will be speaking in turn in a few minutes. The rules of this panel are as follows. We will begin with a brief overview of the situation which has motivated this discussion here on what the digital public debate landscape and the challenges to human rights are with regards to online expression. After that, each speaker will have 10 minutes for their interventions. After that, if time allows, we will have a second round of reactions and participation, hopefully, for audience interventions mediated by the moderators here on site and also online. The guiding question that will open this discussion is on the possibilities of non-regulatory approaches, whether they can succeed and the challenges they present. But to introduce the subject, a few words from the moderator here, because we understand that in the intricate terrain of the digital public debates, we have faced for a long time a series of challenges to human rights that have been compounded, that have been reinforced, that have been worsened. In some cases, by events around the world. And the failure of both private tech companies and states to fully comply with their human rights obligations has had profound consequences affecting democratic institutions, human rights, and the rule of law. And with the backgrounds of global and local crises in terms of war, disease, authoritarian rule, and human rights abuses that happen both offline and online, we are faced with challenges to human rights that oftentimes are addressed or attempted to be addressed through regulatory response, but because of the presence and the importance of private actors. This always entails also an interaction with companies that often have more power or more resources than many states. Over time, we have witnessed the far-reaching impact of online violence, discrimination, and disinformation in the digital public debate, issues that have cast shadows over the virtual landscape, leading to harm, especially against marginalized and vulnerable communities and groups. What was once a platform promising diverse voices and perspective has seen troubling developments, hostile communicative environments, particularly for traditionally discriminated groups. Furthermore, the discourse has become polarized, distorting the conversations around essential matters and eroding trust in authoritative sources, such as academia, traditional media sometimes, and also public health authorities. To address these challenges, some regulatory proposals have come to the forefront at a global scale. We have seen that there are efforts by international organizations to provide guidelines, to provide guidance for regulatory response. We have seen that regional blocs have also reacted with their own concerns, but many of these intricate systems have aimed to tackle various diverse, different, but interconnected issues, including competition, data protection, interoperability, transparency, and due diligence in the digital public sphere. And while these efforts are critical for responsible behavior online and for protecting human rights, they also introduce complex questions and concerns that demand careful consideration about the balance of rights, about the roles of states, about jurisdictional issues, and the enforceability of the provisions that are created. One of the pivotal questions that emerges is related to the fragmentation of the internet. And while regulation is essential for safeguarding human rights, it is vital that these regulations do not inadvertently infringe upon the principles of freedom of expression, of privacy, and the rest of the human rights. So striking a delicate balance in the digital world is a formidable challenge. Notably, in many regions, regulatory debates have been in their infancy or have been completely absent, especially in many regions in the majority world. And in this context, soft law principles, the application of international human rights laws, have played a crucial role in guiding the behavior of companies that mediate online communications. These principles have provided valuable guidance for alternative frameworks, but their effectiveness is a matter of discussion and debate. So in response to this debate, we are going to speak this morning here about what these challenges are. Since we have seen the advance of a global trend to regulate platforms and the internet in general as a path to address the growing threats of human rights, what are the limitations of these proposals? If they have limited effects, in some cases can present these tensions with the balance of human rights. What other policies, what other institutional and legal frameworks have been implemented globally or can be implemented globally or regionally to propel freedom of expression online and its diverse, equal, fair, non-discriminatory, and democratic online public debates? The first word is going to be to Mr. Pedro Vaca, the Special Rapporteur for Freedom of Expression of the Inter-American Commission of Human Rights. So please, Pedro, go ahead. Thank you.

Pedro Vaca:
Good morning there. I hope you’re having a great IGF this year. Thank you very much. Firstly, I would like to highlight that in the Americas, we identified that the current dynamics of freedom of expression on the internet are characterized by at least three aspects. The first one is the deterioration of the public debate. The second is the need to make processes, criterias, and mechanisms for internet content governance compatible with democratic and human rights standards. And third, the lack of access, including connectivity and digital literacy to enhance civic skills online. And this is closely related to dynamics of violence, disinformation, inequalities, and the opportunities of participation in the public debate, and the viralization of extremist content. We understand at the rapporteurship that diverse and reliable information and free, independent, and diverse media are affecting disinformation, violence, and human rights violations, and that this requires multidimensional and multistakeholder responses that are well-grounded in the full range of human rights. As people worldwide increasingly rely on the internet to connect, learn, and consume news, it is imperative to develop connectivity, and access to the internet is an indispensable enabler of a broad range of human rights, including access to information. Interoperable, reliable, and secure internet for all, facilitated individuals’ enjoyment of their rights, including freedom of expression, opinion, and peaceful assembly is only possible if we have more people accessing and sharing information online. Additionally, in the informational scenario of media and digital communication, citizens and consumers should be given new tools to help them assess the origin and likely veracity of news stories they read online. Since the potential to access and spread information in this environment is relatively easy, and malicious actors benefit from it to manipulate the public debate. In this sense, critical digital literacy aims to empower users to consume content critically, as a prerequisite for online engagement by identifying issues of bias, prejudice, misrepresentation. Critical digital literacy, however, should also be about understanding the position of digital media technologies in society. This goes beyond understanding digital media content to include knowledge of the wider socioeconomic structures within which digital technologies are embedded. So here we have a few questions. How are social media platforms funded? Or for instance, what is the role of advertisement? To what extent is content free or regulated? Given the importance for the exercise of rights in the digital age, digital media and information literacy programs should be considered an integral part of education efforts. The promotion of digital media and information literacy must form part of a broader commitment by states to respect, protect, and fulfill human rights and by business entities. Likewise, initiatives to promote journalism are key in facing informational manipulation and distortion which requires states and private actors to promote the diversity of digital and non-digital media. On the other hand, the role of public officials in the public debate is highlighted. It is recalled that state actors must preserve the balance and conditions of the exercise of the right of access to information and freedom of expression. Therefore, such actors should not use public resources to finance content on sites, applications, or platforms that spread illicit and violent content and should not promote or encourage stigmatization and must observe obligations to promote human rights which includes promoting the protection of users against online violence. The state has a positive role in creating and enabling environment for freedom of expression and equality while recognizing that this brings potential for abuse. In this sense, in the Americas, we have a recent example in Colombia of a decision by the Constitutional Court that urged political parties to adopt guidelines in their code of ethics to sanction acts or incitement to online violence. In this paradigmatic decision, the court recalled the obligation of the state to educate about the seriousness of online violence and gender online violence and to implement measures to prevent, investigate, punish, and repel it. And also, the court insisted that the political actors, parties, and movements, due to their importance in the democratic regime, are obliged to promote, respect, and defend human rights as a duty that must be reflected in their actions and in their attitudes. Additionally, the court ruled that the state should adopt the necessary measures to establish a training plan for members and affiliates of political parties and movements on gender perspective and online violence against women in response. Considering that unloveful and violent narratives are propelled by state actors on the internet through paid actors should follow specific criteria in the ad market. Any paid contracting for content by state actors or candidates must report through active transparency on the government or political party portals the data regarding the value of the contract, the contracted company, and the form of contracting, the content resource distribution mechanisms, the audience segmentation criteria, and the number of exhibition. On the other hand, to make business activity compatible with human rights possible, the office of the special rapporteur reiterates that internet intermediaries are responsible of respecting the human rights of users. In this sense, they should first refrain from infringing human rights and address negative consequences on such rights in which they have some participation, which implies taking appropriate measures to prevent, mitigate, and where appropriate, remedy them. Second, try to prevent or mitigate negative consequences on human rights directly related to operations, products, or services provided by their business relationship, even when they have not contributed to generating them. Third, to adopt a public commitment at the highest level regarding respect for human rights of users, and that is duly reflected in operational policies and procedures. And fourth, carry out due diligence activities that identify and explain the actual and potential impacts of their activities on human rights, which is called also impact assessments. In particular, by periodically carrying out analysis of the risk and effects of their operations. In conclusion, to wrap up, the challenges facing the digital public debate require a multidimensional approach. Soft law, as was stated before, education, self-regulation, and legal mechanisms can together create a framework to mitigate harms we face online. Let us strive for a digital space where freedom of expression and the protection of human rights are promoted, fostering a society that values inclusivity, diversity, and respect for all.

Juan Carlos Lara:
Thank you very much. Thank you very much, Mr. Pedrovaca. Thank you for those remarks. And thank you for also starting this conversation addressing the need for a multidimensional approach. This is not necessarily a discussion of regulatory or non-regulatory measures, but apparently of different types of measures at the same time. And we will now listen to the rest of our panelists, beginning, of course, with our second onsite participant here, Mrs. Ana Cristina Ruelas, Senior Program Specialist at the Freedom of Expression and Safety of Journalists section in UNESCO. Please, Ana Cristina, you have 10 minutes. Thank you.

Ana Cristina Ruelas:
Thank you very much. It’s an honor to share this panel with you, Pedro. Good to see you. So as Pedro said, from UNESCO, we have a holistic approach to try to deal and understand with this phenomenon. UNESCO tries to foster public debate through education measures that I will not speak quite a lot about, because this is not my area of expertise. But there’s a lot of work done with teachers, with educators, to target potential harmful content and harmful content online. There’s a specific work that is being done to develop resilience in different communities, primarily in four countries, Bosnia and Herzegovina, Indonesia, Colombia, and Kenya, through the Social Media for Peace Project, which is founded by the European Union, and aims to create media and information literacy measures, but also to develop a way of understanding how content moderation is happening in these different countries, and what are the different issues and context-related matters that allow this harmful content to be spread. There’s another action that is happening that relates to capacity building on different stakeholders, duty barriers, such as judges, parliamentarians, regulators, in order to understand that when dealing with potential harmful content, there’s a name to safeguard freedom of expression, access to information, and diverse cultural content. And there’s work done, also, through the cultural sector, in order to understand the impacts of harmful content in artistic freedoms and cultural expressions, such as indigenous expressions. And the last thing, which I think is also important, is that we also have another action that is related to policy advice and guiding member states in the process of acknowledging that governance of digital platforms requires, as Pedro has mentioned, to safeguard freedom. of expression, access to information, and diverse cultural content, while balancing and while addressing the phenomenon of disinformation, hate speech, conspiracy, charity, and propaganda. So in this session, I will focus in two main and specific projects that UNESCO is being putting forward lately. And I will start with the Social Media for Peace Project, which is one of the projects that, as I said, started in four different countries and allow us to understand what is happening with content moderation and how is it affecting different communities, and also how a non-regulatory approach can be successful while it’s holistic with other different type of solutions. So the first thing that we learn within the Social Media for Peace Project is that context matters. This means that when it comes to content moderation, language cannot be just left aside. There’s specific languages in different regions that are important to understand in order to address content moderation issues. And this is not happening in many countries, or mainly in the countries that we’re working on. That specifically are also countries that are in crisis, or that come from crisis. The second thing that it is important that we found is that despite acknowledging the crisis, despite of the lack of knowledge and context and nuances that the platforms should understand, and that the problems that hateful content can create in an offline world, there’s a problem of not considering these countries as a priority, and then not providing enough funding for the development of content moderation measures. So companies have specific priorities to those countries that have a global impact, or that represent a market share that are important. And in those countries that this is not happening, they are not putting sufficient budget to them. And then this is increasing and creating more problems. The Social Media for Peace project also understood that when dealing with these problems, the most important thing to bear in mind is to have the capacity to dialogue between the different stakeholders. Acknowledging that in conflict zones, there are many issues that should be, like that in the offline world are happening, that have to be considered in the online world. So that’s why due diligence from the platforms is very important. Understanding the context, having the possibility to develop risk assessment and identify the specific mitigation measures that they have to put in place in order to reduce the specific risk based on the context is very, very important. But while doing this work, and I want to say there was two main approaches. The first one is the fate on the companies to turn their economic interest on how content moderation was doing through the public interest of making people know and reducing the impact of this content that many times it’s also a harvest through advertising as it’s already been mentioned. So that’s the first question. Are we keeping the fate on changing or shifting the economic interest to the public interest from the companies? Many people still believe in these countries that this can be one of the approaches to push for companies to increase their budgets in order to do better content moderation and then have a safer space. Then there’s other approach, which mainly come from the states that Pedro has already comment, which is try to reduce this phenomenon with bad regulation, with regulation that does not safeguard freedom of expression that criminalizes the user and does not touch the companies that considers that the only and solely responsible for harmful content is specifically the user. And that is another approach. And then UNESCO, after the work that is being done through the social media piece, they started saying, okay, as we are not acknowledging that these are the two different approaches, what we need to also is to start a debate that allow us to understand if it’s possible to balance freedom of expression, access to information and the access to diverse cultural content with while dealing with potential harmful content such as disinformation, hate speech and conspiracy theories. And while doing this debate, UNESCO started a consultation that led to more than 10,000 comment that came from the engagement of people from around 134 countries. And what we learned is that when governance systems are transparent, have check and balances put in place, they align content moderation and creation to the UN guiding principles of human rights. When they are accessible and inclusive to that diverse expertise and what they actually take bearing in mind the promotion of cultural content, then it can be a game changer. So that’s why UNESCO started developing these guidelines for the governance of digital platforms that on the one hand, recognize the state responsibilities on enabling a freedom of expression environment that such as Pedro has mentioned had a specific requirements for the governments to commit not only to freedom of expression online, but also to all of their duties in respecting and promoting freedom of expression offline. And the second thing is that UNESCO acknowledged that creating a governance system requires the acknowledgement that any regulatory measures that has to be coherent and comprehensive with the different kinds of regulatory arrangements should be through a multi-stakeholder approach. This means that there’s no only statutory regulation that depends on state and companies, but there should be a participation, an active participation of other stakeholders in the whole of the regulatory process, meaning the development of the regulation, the implementation and the evaluation of the regulation. Then the third thing that the guidelines state is that companies have to comply with five key principles. One, due diligence, which specifically state that companies have to develop different human rights risk assessments when they are developing new operations, when they are enhancing new operations, create new ownerships, develop new products. They have to do it prior an electoral cycle. This is very important considering, for instance, that 2024 is a super electoral year and at least three, four parts of the population that is able to vote will come to vote on 2024. The third is that a company should develop a human rights assessment when it comes to crisis, emergencies, and armed conflicts. And the fourth is that they have to understand the different risks that the companies or the content that poses within the company’s post to a specific communities, such as journalists, such as environmental defenders, such as artists, or other vulnerable and marginalized communities. The second principle is transparency. I don’t have to go through very deep into it. The third is accountability. The fourth is user empowerment, which means that within the governance system, there should be specific programs that are developed for media and information literacy. And the fifth is the alignment of all the actions to the UN guiding principles. So this is a work that so far has been done. We definitely believe, as Pedro said, and we state that this is an holistic approach and that non-action should be only and one only because if they don’t come together with many other actions that relate to, yes, education, to yes, creation of communities, yes, policy advice and regulation, then these different phenomenons will not be targeted. Thank you.

Juan Carlos Lara:
Thank you very much, Ana Cristina, for that extremely informative intervention with all of the initiatives that UNESCO is carrying out, including trying to provide guidance for regulation for governments in a manner that has included many rounds of consultations and a broad discussion, as you mentioned, with thousands of comments from the world over, which, of course, as you have been mentioning, also enriches the learning inside organizations like UNESCO itself in how to address many of these issues from the perspective of freedom of expression, access to information, and access to diverse cultural contents, which I think is a key factor in all of this and sometimes not necessarily addressed explicitly. So thank you very much for that. Now, Chantal, can you please tell us about your own view about these subjects? Can you hear me? Okay. Thank you very much.

Chantal Duris:
I will try not to be repeating too many points that have been made by the first two interveners, which are obviously excellent and all extremely relevant. For example, that we need to look at both, at the whole toolbox, right? We need regulatory and non-regulatory approaches. Perhaps just very briefly, I think this discussion is very important because we do agree that many of the proposals that we’ve seen or legislation that has been adopted recently that was seeking to regulate platforms has indeed, there is indeed a danger that these will do more harm than good because they talk a lot about holding platforms accountable, but at the same time, very often what they do is not necessarily focus on the business model of the platforms, on the data tracking, on the advertising model, but almost they ask the platforms to exert more control over, in fact, user speech. So the focus goes from the platforms on systems to the speech of users. And it is critical that any regulatory framework that has this strong impact on freedom of expression, that it is seriously grounded in it, that it is evidence-based and of course, grounded in the principle of legality, legitimacy, necessity and proportionality as Article 19 of the ICCPR requires. And this is also why working more or less globally, it depends also on the jurisdiction, what sort of solutions we think will be appropriate. With many governments, we would not advocate although in principle, we think sound regulatory frameworks should be in place. With many governments, we won’t start to advocate for passing legislation that will control platforms because we do fear, of course, that it will be not a regulatory proposal that will be respectful of freedom of expression, but give the government more options to control online speech. And also Article 19 has long advocated that it is extremely important to take this competition law angle as well because there are very few dominant players in this field. They are gatekeepers of these markets and they are also really gatekeepers of our freedom of expression online. And we do strongly believe that decentralization can per se have a positive effect on freedom of expression, more healthy competition, more empowerment for users. For example, if a user thinks, I do not want to be on a certain platform because I do not think that they respect privacy enough. This is important for me. They should be able to leave that platform and still be, for example, connected to the contacts and families that wish to remain on the platform. As has been mentioned, the UN Guiding Principles can be a very important tool, of course are an essential tool that we advocate for platforms to take into considerations all over the world, really. So whether we have a good regulation in place, a bad regulation in place, or no regulation in place at all, that should always be the basic benchmark against which they should operate. And a lot has been said about them, so I won’t go into detail. Also, because we’re also talking about risks of the different approaches, we think if we take the approach that enabling responses are also at the center of this discussion, then we think that the risks to freedom of expression are much more limited. And this is also linked to another observation we make. Often we find that the discussions seem to say that the social media platforms are the cause of the problems, and we do not deny that they have exacerbated certain societal tensions and increased polarization. There is no question about it, and there is enough evidence that this is happening. At the same time, we do think that this is essential to look at the root causes, for example, of disinformation, of hate speech, of online gender-based violence. And this may, again, include certain regulation of the platform’s business model, but it also needs to look at very different areas outside the specific digital space. So, for example, Article 19 has published now, a couple of years ago, a toolkit when it comes to hate speech, where we detail really what those different approaches need to look like, where we also, again, need to look at regulatory and non-regulatory responses, such as anti-discrimination legislation. Public officials, as Pedro mentioned, should not themselves engage in stigmatizing discourse or counter such discourse when they encounter it. There needs to be, they need to receive, public officials should receive equality training, independent and diverse media environment. All these aspects are obviously key to ensure that we have, say, offline, so to speak, an environment that is also inclusive, that is not gonna translate into then even more extreme speech online. And, of course, civic space, a strong civic space, strong civil society initiatives are also a key component of that. And also to mention, to follow up on what Ana Cristina said, so Article 19 is a partner of UNESCO when it comes to the Social Media for Peace project, and there have been a number of research report, as Ana Cristina alluded to, that have really found also the failings of the platforms, again, taking into account sufficiently the contextual elements. It starts from human rights teams that are not in place for many countries, so civil society in many countries, they don’t have anyone to call at META, for example, if they say there’s a video that needs to be taken down, or we see there’s an election coming, we see that there’s a crisis developing offline and online, there’s not really anyone who they might necessarily be able to talk to who will be responsive. Obviously, a very important additional problematic element is the use of automated content moderation tools as well, because they exacerbate why we recognize that obviously content moderation cannot happen only through human reviewers. It’s also true that many of these tools, they are not sophisticated enough and might never be to really make a proper assessment of some very complex categories of speech. Even for a court, it can be very complex to make a judgment on, you know, was there really hate speech? Was there the intention to incite hatred? Was there disinformation? Was there an intent to publish false information and disseminate it? Was there an intent to cause harm? So, obviously, doing this moderation at scale can present very serious challenges and we always call for more human reviewers that are native in the languages that they moderate. More local civil society organizations need to have direct access, meaningful access to the platforms because we also know that there have been these trusty partner programs which have not always been very satisfactory, to say mildly, and civil society has often found that it’s a bit of a waste of time and the waste of their resources and the impact is limited. Perhaps because I know we are far advanced in time, I wanna make a final reflection. I think an interesting trend we are seeing now is also, which is a non-regulatory trend, but also based on regulation, is a strategic litigation that we see increasingly brought against online platforms. So very prominent examples have recently been the US Supreme Court cases where victims, where families of victims of terrorist attacks in Turkey and in France have filed suits against Twitter and Google, for example, saying that their systems have failed in a way where they have enabled terrorist content to spread online and have also sort of aided and abetted these terrorist organizations. We’ve also had other litigations happening in Kenya over the violence, the violent content that was spread in Ethiopia that was moderated from Kenya and also over the failings in Myanmar, strategic litigation has been brought. That in itself, from our perspective, has some challenges because from a freedom of expression perspective, organizations have always said it is essential that platforms do remain largely immune from liability for the content that they host. But at the same time, of course, there needs to be platform accountability and there needs to be remedies if they infringe on the human rights of the actors in the respective countries or affected communities in the respective countries. So here as well, it will depend on how this litigation is brought. We do not wanna see a court saying, after all, you need to be held liable for hosting terrorist content because it has led to a terrorist attack. At the same time, it can be very interesting if we start seeing more litigation that focuses on remedies for failures to conduct these human rights impact assessments to take human rights to diligence measures and to do the mitigation measures properly. So I do think that is a trend that we see that has a lot of publicity. So there’s a lot of bad reputational aspects linked for the platforms and that could be also a good pressure tool for them to essentially get their act together as well.

Juan Carlos Lara:
Thank you. Thank you very much, Chantal, also for offering so many different pathways towards what we expect to see, but they’re so difficult to achieve, which is accountability from the platforms that speaks to the role that they have in exacerbating social problems even though they might not be creating them according to some discussion and some views. So now, Ramiro, your turn. So tell us what policies, institutional, legal. frameworks have been implemented or can be implemented beyond just the regulatory ones to address the problems that we have with online speech.

Ramiro Alvarez Ugarte:
Thank you very much. Should I introduce myself? Yeah. I’m Ramiro Alvarez-Hugarte. I’m the Deputy Director of CELE, a research center based in Buenos Aires. I don’t want to be too repetitive of things that have already been said. So let me just offer you, I think, a diagnosis that we have at CELE in terms of where we are and also to highlight a few tensions that I think underlie our discussion and have not yet been resolved. It seems like we’re in an interregnum. The old doesn’t die yet and the new is not born yet. So we are at that moment in which we are sort of in between the old and the new. And that’s always interesting times to be and it’s also challenging. I think we are clearly moving towards a regulatory moment. So in a way, the question that has been posed in this panel, I think it’s more or less intention with the trend of where the world is going. I agree with everything you just said and I agree that regulatory and non-regulatory measures are important and they should take place at the same time. But I think we are moving towards a regulatory moment. Of course, the DSA in Europe is obviously what will most likely be a model that will expand across the globe. We have already seen bills presented in congresses in Latin America. They haven’t been adopted yet but legislators in other countries look at the DSA and they copy language and they copy some of their provisions and that is a process in and of itself full of challenges. We have also seen calls to revisit Section 230 in the United States because of congress and its gridlock. It’s difficult to imagine that a comprehensive review of Section 230 will happen anytime soon. But we have seen state-level legislation that has been passed imposing on platforms obligations. We have already seen strategic litigation against companies but not in the direction that you mentioned, in the opposite direction. Like, for instance, the Joe Boning cases in which they basically say that the kind of relationship that the federal government has established with companies in the US violates the First Amendment. So in a way, litigation cuts both ways. So it could be a litigation that questions companies for failing to stand up to their human rights standards but it could be also litigation against companies for violating the First Amendment in the case of the United States. So I think that’s where we’re going. It will be interesting to see how we get there. Now in terms of alternatives, of course, the Inter-American Commission has supported alternatives for a long time, non-regulatory approaches. I was part of the 2019 process of discussing the guidelines to combat disinformation in the electoral context. And the main outcome of that was just to support non-regulatory measures. So I’m not going to repeat what you guys just said, but literacy, of course, it’s incredibly important. I would like to highlight, though, that literacy initiatives are, in a way, a bet on an old principle that it was very cherished in the human rights and freedom of expression field, which is that, to an extent, it is our responsibility as democratic citizens to figure out what’s fake from what’s not. So the internet, of course, makes it more difficult to exercise that responsibility. But in a way, I would highlight and underscore that those kinds of initiatives are a bet on that old principle. We haven’t yet renounced it. And, of course, all kinds of measures to promote counter-speech are obviously very easy. They’re not threatening from a human rights point of view, and they’re fairly easy to implement and, apparently, they’re quite successful, especially what I’ve seen most successfully deployed is counter-speech to combat disinformation in the context of elections in Latin America. But again, calls for regulation has been happening. Observacom in Latin America has been very strongly supporting the kind of regulation that on paper looks very good and looks respectful of human rights standards. The same with the UNESCO guidelines. Of course, the risk that is involved in these initiatives is something that Chantal already mentioned, the risk that even good legislation on paper could do more harm than good. And I think this has to do with, in many countries, sort of a lack of an institutional infrastructure necessary to adopt these kinds of regulations. That obviously is a concern for activists, but as I said before, I think we’re moving in that direction, and we’ll have to deal with that as the time comes. But I’m pretty sure that in the next couple of years, we will see legislation being passed outside of the European Union, and we will have challenges in that sense. Now I would like to highlight a couple of underlying tensions in order to close my remarks. So for instance, we have been discussing the importance of decentralization. I also would agree with Chantal about the importance of antitrust legislation, which for practical reasons, of course, will happen where corporations are incorporated or in places where they have important marketplace presence, and where they have the kind of institutional infrastructure necessary to move forward with this process. There is ongoing litigation in the United States against Google. There is, at the same time, investigations in the European Union. It is hard to imagine that, for instance, a Latin American country could move in that direction, but I think that’s important. Now it seems to me that this is in tension with the, I would say, framing of the DSA, or the framing of the regulations that are being proposed, because to an extent, those kinds of regulations depend on a few powerful intermediaries. So if we would, let’s say, break them all apart and have an Internet that is extremely decentralized as it was towards the end of the 1990s and beginning of the 2000s, I don’t know how that would be compatible with increasing control, even in a way that is respectful of human rights. Because if we have a truly decentralized web in which people get to choose, a lot of people will choose hateful content. A lot of people will choose and engage in discriminatory content. If it is truly decentralized, there will be no way of controlling that. So I think that’s an underlying tension that, to an extent, speaks about, I think, a really deep and profound disagreement in the field of human rights, in terms of what kind of future are we imagining as desirable. And I mean, this is something that I think is there, that it’s underlying. And I think we don’t discuss it as openly as we should. Are we willing to support freedom of expression in the form that we have affirmed it through the 20th century, where we informally relied on gatekeepers to sort of keep check on that? Are we embracing the decentralized promise of the Internet of late 1990s? And that means a lot of speech that is really problematic. I don’t know if it’s harmful. I think there is still a lot to figure out in terms of evidence. A lot of speech that is called harmful, we just don’t have enough evidence to support that it is actually that harmful. But I think that underlying tension is there, and that we should keep it in mind, and that we should discuss it more openly. Thank you. Thank you, Ramiro, for your sobering remarks, and also for highlighting what’s one of the trends that we see towards regulation, even though we can discuss other forms of addressing

Juan Carlos Lara:
some of these challenges. So I want to first check whether we have hands in the room that would like to pose any questions. So otherwise, we would start to be closing this panel, since time is about to run out. But before we do that, I would like to pose the question myself. So if I see no hands, it would be to the panel itself, beginning with Pedro. I don’t know if you are there, but it will be a rapid round of one challenge and one opportunity we have if there is a future in which we will see regulation that will come. One challenge and one opportunity that we may find in non-regulatory approaches that can be taken today as soon as possible among non-governmental actors in order to provide for the internet that we all want, and for the platform responsibility with human rights that we would expect. We will go in the same sense that this panel began, with up to two minutes. Please, Pedro, you go first. Thank you, Juan Carlos.

Pedro Vaca:
And let me just thank the whole panel for this amazing conversation, a lot of questions. The challenge that we have faced is the lack of capacity in a lot of member states. We cover the Americas, we monitor 30 countries, and at this moment, October 2023, we do not have enough capacities, even knowledge, among member states to be part of the conversation. So, I think we have to develop contact points at the foreign affairs ministers in as many countries as possible, because we only have powerful countries with the capacity, then we do not have enough representation to deal with the challenges. And then the opportunity, I think, and that’s why I highlighted the Constitutional Court of Colombia. I think the opportunity is we can put all our efforts in the user and the consequences for the user, or we can also prioritize the role of public servants and political leaders. I mean, if you have xenophobia or racism in a society, you have a problem, but if you have political leaders that incentivize xenophobia and discrimination, you have a bigger problem. And that’s why I think that if we consider public servants as points of reference of society, probably they should, and democracies should and could frame in a better way what is allowed and what is not allowed at that level of representation. I mean, the frame of freedom of expression of people that wants to become, wants to govern, wants to participate in the political sphere is limited if you compare it with ordinary citizens. And in that specific opportunity, we have a lot of inter-American and international standards. So, it’s something that is not even soft law. You have ruling at the inter-American court to support that.

Juan Carlos Lara:
Thank you, Pedro. I’ll ask also to the rest of the panelists, first Cristina and then Justina, please. One challenge, one opportunity.

Ana Cristina Ruelas:
That the discussions focus a lot on how legislation will look like and not how the second stage of the process would feel. So, I’ve been saying this in the different panels that I have participated in the IEF. It’s like, many regulators have said, you know, once legislation is passed, no one cares about it and they leave us alone. And as Ramiro mentioned, there’s many regulatory authorities that do not know how to deal with this issue and that are not used to talk with civil society. So, we need to break that tension and to be able to create conversation among them. So, that will be another opportunity. And an opportunity also is that since companies are based in the same country, what we see is that countries, stakeholders in different countries, in different regions, for instance in Africa and the African Union are coming together because they say, okay, companies don’t care about one of our countries per se. You know, they don’t have a specific interest in X country. But what they do care is of us together. So, they are getting together with civil society, with electoral management bodies, with the African Union. They are coming together with the different stakeholders to go before the companies and say, this is what we need and this is how we want it. That said, that creates a great opportunity because between 40 countries, you have countries that actually believe that a human rights-based approach is the way to go through and there are other countries that do not believe so. But there’s a balancing process and that is, for me, a great opportunity. Thank you very much. Chantal?

Juan Carlos Lara:
Thank you.

Chantal Duris:
I think in terms of challenges, I will mention this is a challenge, generally speaking. I mean, society tends to move slow, regulators tend to move slowly, technology doesn’t. And we are seeing this trend now again where they are trying to catch up. There are a lot of initiatives. There are a lot in the European Union itself, for example. There are a lot. There’s the AI Act, the Digital Markets Act, the Digital Service Act, the Political Advertising Regulation. And there is a challenge also for civil society active in this field already to be able to catch up with everything and cover everything. And not to mention, also, there are a lot of civil society actors that are very much impacted by what’s happening in the digital space but are not necessarily experts in it. They’re not experts in content moderation, they’re experts in, for example, women’s rights. And those are quite technical subjects, so it requires a lot of expertise. So I think this is one of the main challenges, the expertise that it requires and the capacity that it requires. I think the opportunities, we do feel that there is more recognition from, say, some of the platforms, some of the regulators that many of the issues they are dealing with, civil society are experts in it as well. They seek more, there are more consultation processes. To what extent the opinions of civil society are taken into account is another point. But we do feel there is more, again, appetite from platforms and regulators to have us engaged. But at the same time, we don’t want this in a way where they just outsource their own responsibility and say, we don’t need to deal with the human rights aspect, civil society do the work for us. Perfect.

Juan Carlos Lara:
Thank you very much, Chantal. Ramiro, you have the last word. Very quickly.

Ramiro Alvarez Ugarte:
I would say the following. I think one of the biggest challenges is that to move forward in regulation or non-regulatory measures, we have to do it generally in a context of deep polarization, and that is always very difficult. But at the same time, I think that context offers an opportunity, because I think that in most democracies around the world, there is a need to rebuild the public sphere and civic discourse. There is a need to start talking to each other in a way that is respectful. And even though that is difficult precisely because of polarization, that underlying need is still an opportunity, and we should take advantage of it. Thank you very much.

Juan Carlos Lara:
And with that, our time is up. Thank you very much to my fantastic panelists and everyone who has attended this session, and have a nice rest of your IGF. Take care, everyone. Bye-bye. Thank you.

Chantal Duris

Speech speed

177 words per minute

Speech length

2064 words

Speech time

699 secs

Ana Cristina Ruelas

Speech speed

150 words per minute

Speech length

2028 words

Speech time

809 secs

Juan Carlos Lara

Speech speed

163 words per minute

Speech length

2061 words

Speech time

758 secs

Pedro Vaca

Speech speed

136 words per minute

Speech length

1533 words

Speech time

675 secs

Ramiro Alvarez Ugarte

Speech speed

156 words per minute

Speech length

1528 words

Speech time

588 secs

Networking Session #50 AI and Environment: Sustainable Development | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Yoshiki YAMAGATA

Professor Yamagata is at the forefront of designing urban systems to enhance resilience in the face of climate change. His team harnesses the power of the Internet of Things (IoT), big data, and artificial intelligence (AI) technologies to achieve this goal. They have focused their research on studying the Tokyo city center and its surrounding areas.

Using IoT, big data, and AI technologies, Professor Yamagata’s team aims to comprehensively understand urban emissions and develop sustainable strategies for policymakers and building owners. They employ machine learning techniques to estimate dynamic carbon mapping and portray emissions resulting from various urban activities. This approach utilizes abundant sources of data such as occupancy information, people’s mobility patterns within buildings, sensor data, and transport measurements.

Professor Yamagata emphasizes the significance of being prepared and implementing preventive measures to mitigate the risks posed by heatwaves. By combining hazard maps with precise location information of workers, the team can accurately assess exposure levels to heatwave risks. In areas identified as high-risk, they can deploy sufficient ambulances in advance to potentially save lives of those vulnerable to heat-related illnesses.

Another crucial aspect of Professor Yamagata’s work is his belief in enhancing walkability in cities to promote the health and well-being of citizens. By utilizing big data and AI, his team can analyze walking behavior in cities, identifying ways to improve the flow of people and enhance the overall health and well-being of urban residents.

The team also recognizes the importance of visualizations as a tool to aid in understanding sustainable urban systems. These visualizations are being developed collaboratively, involving stakeholders such as policymakers. Policymakers are particularly keen to see policy options directly in these visualizations, requiring granular details regarding different options such as energy management, urban planning, and digitalization. Therefore, involving policymakers in the application of AI technologies is crucial to address their specific needs.

Additionally, involving policymakers in the use of AI is a key research question for Professor Yamagata’s team. Understanding the benefits that systems can provide to users is another important consideration. If users cannot perceive the advantages, privacy concerns may arise. Therefore, it is crucial to ensure that users clearly see and appreciate the benefits of these systems.

In summary, Professor Yamagata’s work focuses on designing urban systems that are resilient to climate change. Utilizing IoT, big data, and AI technologies, his team conducts research on understanding urban emissions, developing strategies for policymakers and building owners, addressing heatwave risks, promoting walkability, and visualizing sustainable urban systems. The involvement of stakeholders, including policymakers, is necessary for successful implementation, and it is important to ensure that users perceive the benefits of these systems without privacy concerns.

Audience

During the discussion, participants noted issues with the plug unexpectedly turning off, causing confusion. This raised concerns as the device should not turn off without the plug, creating uncertainty about its status and available positions.

Importantly, the value of having a teacher physically present in the classroom was discussed. The presence of a teacher enhances the learning experience and promotes better interaction with students, emphasizing the importance of in-person teaching alongside online platforms.

Previous online meetings and events, including a webinar on blockchain, were also mentioned. Participants recalled attending various events organized by the Council but noted their absence from a specific event. These events provide opportunities for knowledge exchange and networking.

Additionally, it was noted that one of the panelists was removed from the discussion. The inclusion of a video sent by a participant indicated the sharing of multimedia content during the conversation.

In conclusion, the discussion focused on technical issues with the plug, the significance of face-to-face teaching, previous online events, and the incorporation of multimedia content. Gratitude and appreciation were expressed at the conclusion of the discussion.

Peter CLUTTON BROCK

AI and data science have demonstrated their potential to be key enablers in the global transition to achieving net zero emissions. Several notable examples highlight the positive impact of AI in various areas related to climate action.

One such example is DeepMind’s collaboration with Google, where AI was employed to significantly increase the energy efficiency of Google’s data centres. Through AI techniques, DeepMind managed to enhance the energy efficiency of these facilities by an impressive 30-40%. This advancement is significant as data centres are known to consume vast amounts of energy, and optimizing their efficiency can lead to substantial reductions in greenhouse gas emissions.

Another remarkable application of AI can be seen through the efforts of the Climate Trace Coalition. By utilising AI and satellite imagery, they were able to enhance the accuracy of global emissions inventories. This improvement is crucial in our collective efforts to effectively monitor and manage greenhouse gas emissions, enabling better decision-making and targeted interventions.

Furthermore, Unisat’s Flood AI tool has contributed to improving disaster response in Asia and Africa. By leveraging AI, this tool has enhanced the ability to predict and respond to floods, ultimately aiding in mitigating the devastating impacts of such natural disasters. This application of AI demonstrates its potential to assist in building resilience and safeguarding vulnerable communities against the effects of climate change.

Despite the promising opportunities AI and data science offer, there are challenges that need to be addressed for their wider application. The two main frustrations hindering progress are data discovery and data access. The process of discovering relevant data and accessing it efficiently can be cumbersome and time-consuming, impeding the adoption and effectiveness of AI and data science solutions.

To overcome these frustrations, several strategies are proposed. Firstly, the development of improved data discovery tools is crucial for facilitating easier access to relevant datasets. Additionally, better regulation is needed to ensure that data is appropriately shared, while still protecting privacy and maintaining security. Furthermore, the establishment of commercial data markets, coupled with financial incentives, can encourage companies to share their data, unleashing its potential for AI-driven solutions.

The Centre for AI and Climate is actively working towards developing an intelligent data catalogue specifically tailored for climate action. Their efforts align with the need for a more organised approach to data discovery and accessibility, providing a consolidated platform for researchers, policymakers, and organisations to access and utilise relevant climate data.

In addition to supporting climate action, AI is expected to play a significant role in digitally managed energy systems. It has the potential to optimise investment decisions for asset developers, ensuring efficient allocation of resources towards sustainable energy infrastructure. Moreover, electricity networks can leverage AI to make informed decisions regarding which energy sources can connect to the grid and what upgrades are necessary, thus improving the overall efficiency and reliability of energy systems.

However, it is essential to maintain a balance between automation and democratic input in these digitally managed systems. While the increased use of AI may lead to a more automated electricity system, human control and democratic participation remain crucial for accountability and fairness. By involving stakeholders and ensuring democratic input, it becomes feasible to limit the level of automation and prevent potential negative consequences.

In summary, AI and data science have demonstrated the potential to significantly advance efforts towards achieving net zero emissions. Various examples showcase the positive impact of AI, from enhancing energy efficiency in data centres to improving disaster response and enhancing the accuracy of emissions inventories. However, addressing challenges related to data discovery and data access is crucial to unlocking the full potential of AI. With improved regulation, commercial data markets, and the development of intelligent data catalog solutions, AI can be effectively utilised in climate action and digitally managed energy systems.

Jerry SHEEHAN

AI systems have the potential to enable sustainability and transform climate modeling, according to one of the speakers. They argue that tools like carbon-aware computing can shift compute tasks to data centres with higher availability of carbon-free energy. Additionally, they highlight the Climate Trace project, which harnesses AI to track greenhouse gas emissions. These examples demonstrate how AI can contribute to addressing environmental issues and promoting sustainability.

However, another speaker raises concerns about the increasing computing needs of AI systems and their potential environmental impacts. They explain that direct environmental impacts result from AI compute, along with the resource’s life cycle. Furthermore, they point out that indirect impacts may arise from AI applications, which can lead to unsustainable consumption patterns. This argument suggests that as AI becomes more prevalent, it could exacerbate environmental challenges.

In response to the potential environmental impacts of AI, another speaker emphasises the need for common measurement standards and expanded data collection. They argue that without comprehensive data and consistent measurement frameworks, it is difficult to track and analyse the environmental impact of AI effectively. This highlights the importance of developing robust methods to assess the environmental implications of AI technologies.

The role of international organisations, such as the OECD, is highlighted by one speaker in facilitating cooperation on AI and climate change. They argue that these organisations serve as the connective tissue that brings countries together to tackle complex issues that transcend borders. By fostering collaboration and knowledge-sharing, international organisations can play a critical role in addressing the global challenges posed by AI and climate change.

AI’s potential contributions to various sectors, including the environment, agriculture, and healthcare, are recognised by one of the speakers. They explain that AI is a general-purpose technology with broad applications, and its diffusion is increasing across different countries in various sectors. This highlights the versatility and potential positive impact of AI on multiple industries.

The concerns regarding the negative impacts and risks of AI are acknowledged, but there is a belief that breakthroughs enabled by AI can help save the planet. Despite the potential drawbacks, the positive practical applications of AI are highlighted by one speaker. They suggest that while it is important to address the environmental impacts and risks of AI, it should not overshadow the potential benefits it can offer in addressing global challenges.

To address the challenges associated with measuring and understanding the environmental impacts of AI, one speaker proposes the establishment of measurement frameworks. They argue that as AI scales up and is applied on a larger scale, it becomes crucial to have standardised methods to assess and evaluate its effects accurately. This suggests a proactive approach to addressing potential negative impacts through robust measurement practices.

Adhering to the principles-based approach of the OECD is advocated by one of the speakers as a way to responsibly implement AI. They emphasize principles such as transparency, engagement, and a human-centred approach to ensure that AI technologies are developed and deployed ethically and in alignment with societal values. This underscores the importance of ensuring the responsible and accountable use of AI.

Finally, the importance of public involvement and understanding of the benefits and risks of AI is highlighted in the policy-making and system development process. One speaker advocates for the integration of public input and transparent parameters into AI-related decisions. This suggests that inclusive and participatory approaches can help address concerns and build trust in AI technologies.

In conclusion, the different perspectives presented in the summary demonstrate the complex relationship between AI and the environment. While AI systems have the potential to enable sustainability and contribute to various sectors, concerns about their environmental impacts and risks should be addressed. Common measurement standards, international cooperation, and responsible implementation are crucial in harnessing the potential of AI to address global challenges such as climate change. Public involvement and understanding are also important in shaping AI policies and systems.

Patrick

The workshop focused on the relationship between artificial intelligence (AI) and the environment, with speakers highlighting various aspects and potential benefits. One key point discussed was the use of AI in preserving healthy ecosystems. Efficient energy management was identified as an area where AI-based systems have been successfully implemented, citing the example of Switzerland using AI to manage the capacity of public transport and discourage overloading. Real-time data on energy production and consumption was also mentioned as a crucial tool for dealing with the effects of climate change and managing energy resources more efficiently. This application of AI in energy management was seen as a way to improve environments.

Another important aspect was the responsible use of AI to serve its purpose in preserving the environment. The speakers emphasized the need to ensure that AI tools are used in line with their intended purpose and argued that AI should be applied responsibly to help preserve healthy ecosystems. This sentiment was supported by the idea that every human right ultimately depends on a healthy biosphere, and AI could be a helpful tool in achieving this goal.

The workshop also emphasized the significance of international cooperation and the sharing of best practices for achieving environmental sustainability. The speakers stressed the importance of collaboration and the need to share knowledge and expertise on AI’s impact on the environment. For instance, the Council of Europe was mentioned as working with international organizations like the OECD to study the impact of AI in sustainable urban systems. The speakers highlighted the importance of data analysis to track and analyze the environmental impact of AI, as well as the need for common measurement standards to ensure comparability.

Furthermore, the speakers acknowledged the potential benefits of AI in supporting the green transition and addressing climate change. They mentioned that AI can be applied to research across numerous disciplines, aiding the transition to a greener world. Examples were given of AI being used in fields like environmental impact, transportation, and material science. The positive sentiment towards AI’s potential in supporting the green transition was evident throughout the discussion.

In conclusion, the workshop provided valuable insights into the connection between AI and the environment. The responsible use of AI to preserve healthy ecosystems, the importance of international cooperation, and the potential benefits of AI in supporting the green transition were all key takeaways. The speakers expressed a positive sentiment towards the potential of AI in addressing climate change and achieving environmental sustainability.

David ERAY

Artificial Intelligence (AI) technologies have the potential to significantly contribute to creating greener cities and regions by optimizing energy usage, handling power fluctuations, improving energy storage, and predicting energy demand. By analyzing complex and multifaceted datasets, including real-time data on energy consumption, water use, and weather, AI systems can make energy consumption more efficient and reduce unnecessary wastage. This can lead to substantial energy savings and a reduction in carbon footprint.

Local and regional elected representatives play a crucial role in environmental governance. Recognizing the link between the fundamental right to the environment and good governance at the local and regional levels, the Congress emphasized the importance of considering the environmental issue in their decision-making processes. The Congress is working on raising awareness among elected representatives by sharing good practices regarding the environment and AI through handbooks and guidance for smart cities and regions. This highlights the vital role that local and regional governance plays in addressing environmental concerns.

In the realm of public transportation, incentive-based systems can prove effective in managing capacity and reducing the need for extra transport capacity and investments. Such systems often offer different prices for train or bus tickets depending on the transport capacity, thereby encouraging people to choose less crowded public transport options. The implementation of AI-based systems has been observed to increase the modal shift from road to public transport, promoting more sustainable and efficient transportation practices.

The Swiss Energy Park is a unique initiative that encompasses three types of energy production: hydraulic power, solar panels, and wind crafts. By analyzing the consumption and production of energy in the region, the Swiss Energy Park allows for a comprehensive understanding of energy needs and facilitates targeted efforts in energy conservation. It is noteworthy that climate change can significantly impact energy production, as seen in instances where insufficient water for hydraulic power resulted from a lack of rainfall. This demonstrates the interplay between environmental factors and energy production, highlighting the importance of sustainable energy solutions.

Furthermore, AI has the potential to contribute significantly to combating environmental issues and reducing carbon footprint. It plays a vital role in managing public transport, leading to a decrease in carbon emissions. Additionally, AI technologies assist in managing resources in energy parks, allowing for better mitigation of the effects of climate change. These AI-driven solutions have the potential to revolutionize environmental conservation efforts and promote sustainable development.

However, the implementation of AI in policymaking comes with challenges, particularly in terms of privacy protection and data security. Deploying smart grid systems that manage energy consumption requires access to personal routines, raising concerns about the transparency of personal information if the system is hacked. Protecting privacy and preventing data breaches are essential considerations when integrating AI technologies into policymaking processes.

Overall, AI technologies present tremendous opportunities for creating greener and more sustainable cities and regions. By optimizing energy usage, managing public transport, and analyzing environmental data, AI has the potential to significantly reduce carbon footprint, enhance energy efficiency, and promote sustainable development. However, it is crucial to balance the use of AI with care, ensuring responsible energy consumption and safeguarding privacy. The involvement of local and regional elected representatives is pivotal for effective environmental governance and the successful integration of AI solutions in addressing environmental challenges.

Session transcript

Patrick:
It appears that we have a little technical difficulty, but we’ll solve that very soon so we can get started. We will also be showing some slides, at least some of the speakers, otherwise, Fadim, we can maybe change the order of speakers if they are not available right now. So this workshop is about AI and environment and the connection between the two. So it’s my great pleasure here to be in Kyoto, first of all, I also have some colleagues here and some friends from different parts, we also have a number of people that are following online even though right now we’ve combined everything, we are in presence, we are online, nice to see familiar faces and less familiar faces in the room, nice and friendly faces, I’m sure that we also have nice and friendly faces online. So thank you for coming to this workshop, the Council of Europe obviously has had a very special interest in both artificial intelligence and environment for a number of years, and we’ve developed a number of both treaties, but also partial agreements around environment, we’re currently working on a new treaty on artificial intelligence. Both these things were put to the forefront in our Summit of Heads of State and Government in Reykjavik, where the Heads of State and Government also requested that we pay particular attention to that and devise new tools. in this field. Council of Europe works in that, not only with a specific committee on artificial intelligence, but has a number of services that are looking directly into artificial intelligence. As we also know, every human right ultimately depends on a healthy biosphere. Without healthy functioning ecosystems, there would be no clean air to breathe, no safe water to drink, or nutritious food to eat. We need to create that and preserve that. Of course, the artificial intelligence may be a helpful tool in this respect, but we also have to ensure that this helpful tool serves its purpose. That’s why we’ve put together a panel of people that are on the one side scientists and researchers, but also decision makers that have to take on a daily basis the decision to whether or not imply and apply certain methodologies or not. Our very special keynote speaker today is someone who has been involved in the work of what we call the CAHI, the Ad-Hoc Committee on Artificial Intelligence, but also on the Committee of Artificial Intelligence on the Regulation of Artificial Intelligence for some time. He’s a minister, a minister for the environment of the Canton of Jura in Switzerland, and he’s also the spokesperson on digitalization and artificial intelligence of the Congress of Local and Regional Authorities of the Council of Europe. So I’d like to welcome Mr. David Herré. He is uniquely placed in this respect to share his experience as both an active policymaker domestically at the canton of Jura and at the European level and someone who has first-hand experience of actually working with those topics daily and locally as a minister for the environment. Without any further ado, I would like to give the floor to Mr. Aire, who will speak from Switzerland. He had some urgent business, unfortunately, in his government today, otherwise he would have preferred to be with us here in Kyoto, I’m quite sure. Mr. Aire, if you’re there, the floor is yours.

David ERAY:
Yes, I’m there. Thank you so much. I’m here in Switzerland. It’s still the end of the night, so I should say good morning from here and I’m sure you are already in the afternoon. So it’s a pleasure for me to address this session and I’m really grateful in the name of the Congress to be able to share our thoughts. So as you said, I’m Speaker of the Congress for Artificial Intelligence and Numerization. The Congress has a number of 46 state members and this is really a huge organization and we try to have a focus on these thematics that are really important at the moment. As you said, in my country, I am Minister of Environment in the canton of Jura. Switzerland has 26 states and Jura is one of the 26 states. You may know some of the states which are well known, like Zurich, Geneva, Bern, etc. As a politician, a grassroots player in my country, and as a representative of the Congress, I want to share my vision. on this very relevant connection between AI and environment. In October, 2022, so one year ago, the Congress highlighted that the fundamental rights to environment is intrinsically linked to local and regional good governance. Indeed, there cannot be good governance exercised by local and regional authorities without taking into account the environmental issue. So the Congress explored how we can move toward a greener reading of the European Charter of Local Self-Government. We adopted a recommendation, and this is a proposition to have additional protocol to the Charter on this matter. We have several other proposals of international standards on environmental matters within the Council of Europe, including a possible protocol to the European Convention on Human Rights. Whatever option is eventually chosen by the Committee of Ministers, the role of local and regional elected representatives in environmental matters is key. Both the environment and artificial intelligence are high on the agenda of the Congress. The Congress works on raising awareness of elected representative by sharing good practices with respect to the environment and artificial intelligence through practical handbooks and guidance for smart cities and regions. Our communities can become better, can become better places to live if we maximize the use. of AI for the public good. Indeed, AI technologies can be game changers, optimizing the use of energy, handling power fluctuations, improving energy storage, and forecasting energy demand can all help to make energy consumption more sober. AI enables us to analyze complex, multi-faceted data sets, inclusive real-time data on energy consumption, water use, and weather. I want maybe, I want to share my experience in the continent of Jura in Switzerland. We do have several examples. So I want to share a PowerPoint. I don’t know if you can show it on the screen for the participants, if you have it available. This is just two, three slides. I can illustrate my, I’m sure we do.

Patrick:
We’ll immediately try to put that on screen.

David ERAY:
So, because it’s also, it’s always good to talk, but it’s also good if I can show you some examples. So in my country, we do have several examples on energy use, energy management, and also public transportation management. And in that topic, I don’t see anything on the screen, but I think it should come.

Patrick:
As long as there’s nothing on the screen, I would invite you to continue for the time being. We’re trying to resolve this technically, but please go ahead.

David ERAY:
Okay, so in the public transportation, we have… We have implemented something to be able to manage the capacity of transportation and the need to be transported by the people. And how do we do that? We do have something that we could call an incentive. So whenever you need to buy a train ticket or a bus ticket in Switzerland, the system will propose you several different prices, depending on the capacity available in the public transport that is foreseen. So I wanted to show you an example. If I want to go to Zurich next week, and let’s say I have a meeting at 12 noon in Zurich, and the system will propose me several possibilities, including one with a discounted price at 12 francs instead of 22 francs. And this is a way to move the people, not in the train and bus that are already supposed to be full, but the one that has capacity. And this brings three effects. First of all, we have a better use of public transport. So we use the capacity and we don’t overload when it’s already full. Second effect, this can reduce the need of extra transport capacity. So this can reduce the investment that we, the states like Jura, like Bern, like Zurich, need to invest in our transport material. And the third effect. is also important. This has increased the modal shift from road to public transport. So three effects with a system based on AI and also based on the tools that we have online. The second example I wanted to share is what we have in my region called the Swiss Energy Park. So in my region, we have an energy park that includes three kinds of production, hydraulic power on the river, solar panels in a big solar plant, and wind crafts on the mountain. And in this park, we can analyze online the consumption of the region and the production of the region. And we see immediately that whenever we have wind, water, and sun, this is quite cool because we have enough energy. And during the period like now in Switzerland where we have sun, no wind, not enough water in the river, then we need to import energy from outside the region. And this is something that I wanted to show you on the slide that are not coming, but this is okay. I’m sure you can share the slide later. And on the analysis, okay, this is coming. So we go directly to the seventh slide. Okay, this is the next one, because I don’t want to repeat what I said. Okay, this one. On this slide, we can see on a yearly basis, the black line is the need, the consumption of the region. The green one is the wind crafts production. so we can see that like in December 2022 or February we had quite a lot of wind, enough wind to our consumption. The blue is the water production, so we see that the period from August to now we are having not enough rain in the region, so not enough water in the river, so almost no production. And the sun is also an energy that we have especially in summer and that is not present in winter. So this is interesting to see first of all the management of the energy in the region, also the effect of climate change because we see that when we have not enough water like now due to the climate change we are in trouble and we can also see that the wind is a really high energy possibility or potential, but this is not predictable so we cannot be sure. So this is just two examples that I wanted to show and maybe you can come back to the third slide just to show quickly, okay I come back to this public transportation. On the left you can see that this application you can just select, oh I want to go to Zurich on Friday November 10th for a meeting at 12 noon. In the middle you can see the possibilities offered, so the one that is arriving at 12 is 14 Swiss francs and if you want to be like in the previous proposition it’s 19 Swiss francs and if you want to be at Zurich at 11.26 you pay the full price 22. So this system is a way to as I said to use with the best efficient way the public transport capacity that we have in Switzerland. So this is what I wanted to show you, and I think this is good to make the link between AI and environment, energy and carbon footprint. We see that we have potential, and I think there are a lot, still a lot to do in this topic. Thank you.

Patrick:
Thank you so much, Mr. Aré. I think energy management, the use of real-time data, it’s incredibly important, and sometimes it may be better to go to Zurich a little bit earlier or a little bit later and have a free lunch in Zurich to have it compensated by your train ticket, basically. So thank you very much for this very local experience and how AI can really help in making sure that our environment is also getting better of it. Now let me introduce you to the work of our first panellist, because Mr. Aré was our keynote speaker. Our first panellist is Professor Yamagata from the Ayo University, who is all about developing a new urban system design framework that integrates architecture, transportation and human behaviour in cities. Professor, if you don’t mind telling us about your work on AI and sustainable urban systems, that would be very interesting for this audience, I’m quite sure. What are the main challenges and how did you deal with them? Could you tell us more about this? Thank you.

Yoshiki YAMAGATA:
Thank you very much, Chairman. It is my great pleasure to be able to talk at this session about our recent studies. I’m Yoshiki Yamagata, I’m talking from Teio University, Yokohama. So at my laboratory, we are studying urban systems design for achieving climate resilience. cities. So climate resiliency is two meanings. One is the response to the climate change, because we are experiencing a lot of climate change impacts already, like heat waves and floodings. Another climate change measure is, of course, the carbon neutral, the carbonization of the cities. This is also urgent to meet the target of the Paris Agreement. So for that purpose, we are introducing a lot of IoT, big data, and AI techniques to achieve this goal. So let me explain one example of my studies at the city center, Tokyo. Maybe you have seen this sky tree at the city center, Tokyo. This is a tourist tower in Japan, and we are analyzing this area using big data. So one big data we are analyzing is the occupancy of the offices and shops and restaurants, et cetera, using big data. And the second big data case is this mobile phone mobility information. We are deleting all the privacy information and using the trajectory of the people moving inside the city. So we are using the machine learning technique, which is an AI technology, to detect the transport mode. So by looking at the trajectory, AI can judge. if this is a car, or a train, or walking behavior, transport mode. We’re still working, still studying to improve the accuracy of the classification, but the walking behavior is really challenging for us. So, by combining these building and transport information using GIS information, like total floor area height, and load release node, and big data like occupancy information, and people’s mobility information in the buildings and in the load networks, in combination with the sensor data like smart meter measurement data and statistics, and also the actual transport measurements at the load network, we could estimate the dynamic carbon mapping, which visualizes carbon emissions from the urban activities. This red color means that emissions from the building energy use. Blue color means indicating the emission from the load car traffic, from the engine car. So, from this diagram, we can easily, intuitively understand where the carbon dioxide is emitting, and who is responsible for these emissions. So, it is really important to understand visually, intuitively, for the policy maker, as well as citizens, and in many cases, building owners. other business people in the cities understand what is the goal of carbon emission deductions. So this kind of information can also be used for detecting the heat wave risks by combining heat hazard maps. Remote sensing data can be available for this purpose and we can use this worker’s location information as a heat exposure to the risks of a heat hazard. For instance, if an older person suffering some diseases is walking in the street in a very high temperature location more than one hour, so there is a huge chance that this person gets heat stroke. So if these kind of people are staying in the same place for say 1,000 people, then maybe there is a high chance the ambulance will be called soon. So in advance we can prepare ambulance and send enough number of ambulance to the high risk area to save the lives of people who are suffering the heat strokes. So at the same time we can also do analyze the comfort of people. Actually walking behavior inside the cities is really important health improving well-being experience inside the cities. So by knowing how to improve people’s walkability inside the city is really important indicators for their people’s health and improving the well-being of the citizens. So there are some new technologies available for this purpose. And the big data and AI for using this people’s flow there is really a huge potential. This is ongoing studies I’m conducting with researchers of ETH, Zurich, Switzerland. And so we have a exchange program between KU University and ETH. So I’m very much looking forward to collaborate with policy makers and the researchers in Switzerland in the near future. Thank you for your attention.

Patrick:
Thank you very much, Professor Yamagata. I think that’s really exciting to look at this information, how climate resilient cities and decarbonization can impact or hopefully not impact further climate change. I think I would only give you one suggestion before you prepare the ambulances to prepare for heat strokes. Maybe more importantly that we foresee some other activity that prevents heat strokes to take place. Our next panelist is Peter Clottenbrock from the UK Center for AI and Climate. He works in in-depth on issues of creating data marketplace in relation to transition to net zero, and more specifically changing requirements for data for improved grid management. This may sound strange to you, so we will let Mr. Peter Clottenbrock explain what is meant by all of this. Peter, floor is yours. Peter, are you there? because we see your slide, but we can’t hear you.

Peter CLUTTON BROCK:
Okay. Is that working now? Can you hear me?

Patrick:
That’s perfect, Peter. Thank you.

Peter CLUTTON BROCK:
Great. Thank you very much for the chance to speak today. It’s great to be here, if not in person, then in spirit. I’m going to talk for about nine minutes today about what some of the opportunities are to apply AI and data science to support the transition to net zero, as well as what we can do to help free up some of the data required to do so. I have to move the screen along. Excuse me. There we go. A little bit about us before I dive in. The Center for AI and Climate is one of the leading organizations focused on advancing the application of data science and AI to accelerate action on climate change. We do this in two main ways. The first is thought leadership. We look to inform the debate about what the main opportunities are to apply data science and AI to accelerate the transition to net zero, as well as what some of the bottlenecks and barriers are that are holding back that adoption. Secondly, we look to dive into some of those bottlenecks and barriers and help develop the digital architecture and infrastructure necessary to do so. Perhaps it’s useful to start with a little bit of a framework to think about what kinds of problem AI is good for helping to address. Because there are obviously many challenges in the transition to net zero, some of them AI can potentially help with, others, it’s not the best tool to be used. We need to make sure it’s being used in the right ways for the right kinds of problem. Here, I’ve just summarized four of the types of problem where AI is particularly good at supporting the addressing of challenges. The first is system optimization. This often uses a tool called reinforcement learning, where you effectively inform the AI agent about a particular system that you’re looking to optimize. You give it data on the controls that can use to change that system and the environment that affects the system. And then it will effectively optimize using those controls the optimal outcome for that system. And this could apply for a whole system. So for example, the energy system, but also parts of that system. So a particular battery asset within that system could be optimized using reinforcement learning, for example. A particular subset of this is around accelerated experimentation. So we can deploy AI to support faster accelerated experimentation for new battery designs and new battery chemistries, for example, but also potentially for new ways of making steel, which we need new forms of experimentation for. Thirdly, prediction and forecasting. So a lot of the data that we need that we use in sectors relevant to climate change uses something called time series data, which tracks different variables over time. And here, if we’ve got enough historical data on that particular variable, we can find the patterns using AI in that data and predict forward much more accurately using AI than we could with previous techniques. And fourthly, classification. So this is useful if, for example, we have map image imagery or satellite imagery, and we want to be able to classify areas on rooftops that we could deploy solar panels on, or grid infrastructure, or whatever it is. We can deploy AI to help classify different data within that image. AI is not something that’s theoretical at this stage. It’s already being deployed, as David set out in his examples in Switzerland. But there are many others that we’re seeing bubble up throughout the community that are really exciting. So I’ve pulled out three that we think are interesting here. So the Climate Trace Coalition uses AI and satellite imagery to improve the accuracy and transparency of global emissions inventories. Secondly, Unisat’s Flood AI tool enables high-frequency flood reports that have improved. disaster response already in Asia and Africa. And thirdly, DeepMind have used their AI to increase the energy efficiency of Google’s data centers by between 30 to 40 percent, and that’s focused on improving the efficiency of their cooling systems. So that’s just using software they’re able to achieve really significant increases in energy efficiency. It’s worth saying that despite the fact that there are a lot of examples already deployed applying AI to climate action, we still think the potential for further applications is huge. And we think actually it’s probably some of the most important ones that we’re likely to see have yet to be developed. So it’s still a wide open field, and we’re just seeing the tip of the iceberg when it comes to the potential application. So what do we need to do to enable further application adoption of this technology? Well, probably the main barrier and bottleneck that comes up when you talk to the data scientists working in this field is around data. And in particular, two types of data frustration come up in conversations with data scientists, and these are data discovery and data access. So just to be clear on what I mean by data discovery, I’m talking about the process of locating and identifying already open data sets. So for example, an innovator might be searching for solar irradiance patterns in Africa. It might just take them a long time to find this data, despite it being already openly available. And by data access, I’m talking about the process of gaining access to commercial data that is currently not available openly on the internet. So for example, this might be accessing data on EV charging assets from a commercial charging asset operator where they’re not currently opening up their data. So then the question comes of what can we actually do to enable, to address these two challenges that I’ve focused on. And here we see three key opportunities. So the first is better data discovery tools. So ultimately what we think is needed here is better ways of organizing and helping signpost people to data that already exists. So here we think there’s a need for a well-organized and intelligent data catalog focused on climate action. This is actually something that the Center for AI and Climate is already working on developing to really help users and signpost users to where there is data for a particular type. And the organization of that is really key. If there are any country representatives who want to get in touch about that and find out how that could help support data cataloging in your country, please do let me know. Secondly, we see an increasing need for better regulation to open up data, especially in monopolistic sectors. So when it comes to climate action, a lot of the sectors that we care about most often have natural monopolies, whether it’s the electricity sector or the transport sector. We’re often dealing with areas where you have sectors that companies that have a monopoly over particular areas, whether it’s electricity networks, such as distribution and transmission networks or transport networks. And we see a real need to focus on requiring some of these monopolies to open up their data and in particular for commercial licensing. And that last piece is really key. So we want to be able to enable innovators to build products and services on top of data that’s opened up by these types of companies. So making sure it’s available on a commercial license is actually really important. And thirdly, commercial data markets. So to complement the open data piece, we actually see there being a real need to create the financial incentives for commercial companies to share more of their data, in particular in the sectors that we care about, again, when it comes to climate change. And the way you create those kinds of financial incentives is to effectively create a market. for that data. And again, this is something that we’re working on directly. So what I’ve talked through hopefully is a combination of things. So I’ve highlighted a framework by which we can think about the opportunities and the problem types that AI is good for addressing. I’ve talked about some of the case studies about how it’s already being applied and deployed in the world. I’ve highlighted some of the key bottlenecks in particular around data that we need to address if we want to see further and faster adoption of these technologies. And I’ve set out what we think is some of the key ways of addressing those bottlenecks to address these challenges. So with that, I’ll close and say thank you very much again for the opportunity to speak today. And I’ll look forward to addressing any questions. Thank you very much.

Patrick:
Thank you so much, Peter. Very, very interesting also how you highlighted the central role actually that data play in artificial intelligence, the system optimization, if you have access to those data. But you also pointed at high energy costs for storing those data and also deploying artificial intelligence on them. So our last speaker is going to be, we’re extremely lucky to have the new director of the Directorate of Science, Technology and Innovation of the OECD, Mr. Jerry Sheehan, who will present the OECD’s work and activities in the field of AI and environment. And in particular, the excellent report on measuring the environmental effects of AI computing and applications published at the end of last year. So clearly, Jerry, OECD has a key role to play. Over to you to tell us about your work.

Jerry SHEEHAN:
All right, thank you very much, Patrick. I’m delighted to be able to join you, even though it can only be virtually today, as much as I’d prefer to be there in person with you. Let me just say, I do have some slides. I don’t know if they can be presented here. I don’t seem to be able to pull them up and share my screen myself. But let me go ahead just to keep us on time and tell you a little bit about the work that we’ve been doing. Very good, thank you. So just to say that accelerating the green transition has been a major theme, continues to be a major theme of our work here in OECD’s Directorate for Science, Technology and Innovation. Among other areas, we have focused on issues of decarbonization of industrial activity, including in some more traditional fields like shipbuilding and steel. We recently released a report as well on AI and science that I’d call to your attention as it highlights a number of ways in which AI can be applied to research across a broad range of disciplines, many of which can inform and accelerate our green transition, including through a number of areas that were just described, through improved modeling, through improved data access and ability, and in fields ranging from environmental impact to transportation to material science. All of which can help us make our world a bit greener. We have been doing work on AI since at least 2017 and including in that have looked specifically at the relationship between AI and the environment. So as noted last year at the COP 27 last November, we launched a report that was asking about the environmental footprint of artificial intelligence. We heard a word about this in terms of some of the large data sets that we’ve been working on. So we have a number of data sets that must be used to inform AI. And I’m happy to share with you today some findings of this work. And actually the slide we have here is just the right one. So for us, we’ve been focused on the notion of the twin transitions, the green transition and the digital transition and looking at ways that digital technologies can be better leveraged for environmental sustainability in the future. As you’ve heard from other panelists already, this is happening in many ways. AI applications can enable sustainability. For example, AI is transforming climate modeling by creating digital twins. The destination earth, for example, is creating a digital twin planet of the earth powered by Europe’s high-performance computing centers and its AI capacity. The climate trace project is harnessing AI to track human-related greenhouse gas emissions with unprecedented detail and speed. DeepMind are using AI to make data centers more efficient by applying reinforcement learning algorithms to reduce their energy use. One example is carbon-aware computing where AI shifts compute tasks to data centers and areas with more availability of carbon-free energy. Let’s go to the next slide, please, just to say that we know compute is on the rise. And as we see computational needs of AI systems going, there are climate impacts as well. We often perceive AI as some sort of an abstract, non-tangible technical system, right, that we interact with through our screens. As noted, it’s enabled by physical infrastructure and hardware together with software that are collectively known as AI compute. And in the last decade or more, as you can see on this slide here, the computing needs of AI systems have grown dramatically, entering what some call the large-scale era of compute. This is no doubt motivated by the increasing capabilities of large and more compute-intensive AI systems, and of course, the rise of deep learning and large language models. Tools like Chatbot. are becoming more widely used, and the computing needs for inferencing of AI systems, contrast to the training of AI systems, is also becoming more relevant. Let’s go to the next slide, please. So why is this problematic? Well, simply put, as AI systems get bigger, not only can they help us address AI challenges, but they need and use more computing resources, which in turn consumes more energy, natural resources, and they produce increasing CO2 emissions. Although some researchers have produced numbers for AI’s environmental impacts at an AI model level, I think an example being for Bloom and for GPT-3, we don’t really know how severe this problem is at a national, let alone at a global level, especially then in comparison to other sectors that contribute to CO2 emissions. That’s because AI-specific measures are still scarce, and those that we do have tend to overestimate AI’s negative impact. So to help fill this measurement gap, we’ve conducted a stock taking report and developed a framework to help better quantify AI’s environmental impacts. Let’s go to the next slide. I’ll tell you a little bit about the analytical framework that we’ve used. The framework builds on work that has been done already by researchers on the direct and indirect environmental impacts of AI. The direct environmental impacts are defined as those that result from AI compute along with the resource’s life cycle, which includes the production, the transport, and operations of this compute, as well as its end of life. There are various environmental impacts, as you can imagine, along this life cycle, everything from critical minerals extraction to transportation, water consumption, carbon emissions, recycling, and waste disposal. For direct impacts, it’s important to note that operations, that is the actual running and operating of servers in a data center being used to train an AI model, for example, are a major source of environmental impact. The majority of resources and existing indicators are in just this area. We should also note though, that direct impacts can also be positive. For example, the heat from data centers is being repurposed, but these cases are still probably too rare. When it comes to some of the indirect impacts, that is from the applications of AI, we found many, many positive examples as well as some that were more negative. So on the positive side, we know that there are sectoral applications. We’ve heard some of those today already, such as AI for energy grid efficiency. There’s climate mitigation and adaption approaches, such as AI for flood prediction and AI for environmental modeling, such as the example of creating a digital twin of the earth. On the negative side, these AI applications also increase consumption patterns in ways that may or may not be sustainable. So let me go to the next slide, please. And I can share with you some of the key findings of our work here. So using this, we identified really five key findings that I wanna share with you just briefly this morning or this afternoon for an evening, for those of you who are joining from other parts of the world. So the first is that common measurement standards are needed to track and analyze environmental impact. And this should allow for greater data comparability between and among countries. Second, we find that data collection on environmental impacts of AI compute could be expanded, should be expanded in a number of ways. Third, AI specific measurements are sometimes difficult to differentiate from general purpose compute. We see this, for instance, in data center usage, where estimates of the percentage of data centers for use as AI compute. is not clear across countries. Maybe not even always as clear within individual data centers. Fourth, we need more data collection on different types of environmental impacts, such as carbon use, water and other natural resource use, and supply chain impacts. All of these are needed. Fifth and finally, we think international efforts, including sharing best practices on AI compute towards environmental equity and transparency, are vital. Let me go to the next slide, please. So just to note that the framework that we developed over the past few years coincides with the emergence of generative AI. Of course, the big question now is whether the arrival and the proliferation of generative AI would change our analysis. We’ve already seen exciting new applications of generative AI for climate action, such as chat climate. We also see considerable interest from countries. In a recent OECD stock taking that we did for the G7, for example, five out of the seven G countries responded that climate action is among their top five opportunities for generative AI. On the other hand, there are questions about the direct environmental impacts of the large scale use of generative AI. For example, on water, it was already reported that Microsoft’s water usage has significantly increased last year, largely due to investments in their operations of generative AI. Of course, I tried asking chat GPT if it knew how much energy it took to run this particular question. But as you see here, coming up with specific numbers is challenging and there’s considerable work still to be done. So organizations like mine here at the OECD, through our OECD compute expert group, for example, are continuing this important analysis, engaged with experts and partners from various stakeholder groups and from around the world. And we hope to be able to come back to you in the future with even more refined results of our analysis. So for now, I’m going to go to the last slide. And thank you for your attention. This is where you can find our report. And again, we’ll have more findings coming out of our OECD Compute Expert Group in coming months and years that we look forward to sharing with you. Thank you very much for your attention today. And I look forward to joining in the panel discussion.

Patrick:
Thank you so much, Jerry. It’s really a question of checks and balances, knowing how much energy is needed to generate the artificial intelligence on the one side, and how much is it going to help us to diminish the, let’s say, the carbon footprint on our development. I already have a number of questions here that come from the online. And one question, Jerry, is actually directed to you. We know that everyone looks at OECD with regards to defining AI as such. I’m not going to ask you that now. But the question here is, what do you see as the role of international organizations, such as the OECD, in working with artificial intelligence?

Jerry SHEEHAN:
Yeah, thank you for that question. I think the international organizations like OECD have a critical role to play here in simple terms. We’re the connective tissue that helps bring countries together to solve collective problems, including around the green transition and digital transitions and relationships between them. I think this is especially critical when the stakes are high and when it involves complex issues that cross borders. And this is particularly relevant for climate change and AI, given that it’s a general purpose technology that can be applied to many different sectors. We’ve been focusing on environment here today, but we know that AI diffusion is ramping up in almost every sector of our economies, from agriculture to health care. and again in all countries at different speeds. Applications like ChatGPT have made AI tangible and usable to the average person. So I think we at OECD and others remain hopeful that the breakthroughs that can be enabled by AI can help us save the planet, right? So these are the benefits and we’ve seen a lot of those in the panel today. We’ve also been attentive to some of the negative impacts, environmental impacts and some of the risks among those, including effects on labor markets and so forth. As noted, these aren’t well enough understood yet. They’re difficult to measure, especially as AI gets scaled up and is applied on a bigger scale. And that’s where I think OECD and other international organizations have a critical role to play because we can help put in place measurement frameworks that can apply across all of these countries.

Patrick:
Thank you.

Jerry SHEEHAN:
And just on a final note as I see you’re getting the microphone going, just to say of course now-

Patrick:
Yes, I’m trying to get it going. Thank you so much for your input. I think indeed, and as Council of Europe, obviously we work very closely also with the OECD and other international organizations around artificial intelligence and the impact of artificial intelligence. I already have a question. I have another question for Professor Yamagata because he showed us quite a number of visualization research, the use of AI in the sphere of sustainable urban systems. But Professor Yamagata, the question is also how can these systems be used in policymaking and do policymakers make use of them? I’m sure that Mr. Ere will be very interested in your response on that.

Yoshiki YAMAGATA:
Thank you very much for- There are interesting questions and that is very vitally important questions. At the moment, we are studying these visualizations using big data and AI for the stakeholders of the area. Of course, this includes the policy makers, but usually the policy makers need to see directly the policy options in this visualization, rather than the low carbon emissions. Of course, carbon emission is a final parameter to reduce, but perhaps the policy makers need to understand more closely the details of the different policy options, like energy management options, or urban planning options, or digitalization options, which also could have a positive and negative impact. This is a really important research question, how to involve policy makers into the use of AI.

Patrick:
Thank you. Let’s ask the policy maker, Mr. Herre, what would be, for implementing artificial intelligence in local and regional authorities, what do you see as the biggest challenge in implementation of artificial intelligence in day-to-day policy making?

David ERAY:
Thank you for the question. I think the biggest challenge is linked to the privacy, the protection of the privacy. What I showed before regarding this energy production and consumption, we try to implement what we call the smart grid, and that needs to implement in every house, in every apartment, a system that can manage. the need of energy. Let’s say you come back home at night, you want to load your electric car, so the system, like the big brother, should know that you are home, that the next day it’s like 7 a.m. you want to leave to go to Geneva, so you need the full load and then the system should manage in the best way to load your car linked to the production capacity and production perspective that we have during the night. So if you imagine that the system would be hacked, then that means that the entire life of the people could be transparent and given to the hackers. And this is maybe a big issue that we would have in terms of data protection and privacy respect.

Patrick:
We don’t hear you. Microphones are AI steered, so that basically means my simple intelligence doesn’t manage to get it going at the right time. Now, Peter, I have, if you’re still there because I don’t see you on the screen, but Peter, there’s a bit of a stargazing question for you. That is, what do you think a digitally managed energy system will look like in 20 years time? Give us a bit of glass ball staring.

Peter CLUTTON BROCK:
It’s a really good question and I’m not sure I’m going to be able to do the question justice, but I think effectively what we see is that AI will flow into a lot of the decision-making processes throughout the energy system. So just starting at the bottom, when you’re looking at when an asset developer might be looking to develop a solar farm or battery asset. AI will flow into optimizing those investment decisions. And then for the networks themselves, the electricity networks, they’re making decisions around what can connect to the networks, as well as what upgrades they’ll need to make to the networks. Again, all of those decisions will be optimized using AI. And so increasingly, increasingly, I think we’ll move to a system where electricity systems are effectively automated and the human capacity in them is more to check, to make sure that the AIs are working in the right way and the way that we want. But increasingly, we’ll see those humans in the loop starting to come out of the loop as the trust from the AI system is built. So ultimately, I think we will be heading towards a pretty much completely automated electricity system, albeit one where there is good democratic input, which may be perhaps the limiting factor on some of these automation features.

Patrick:
Thank you. I did put you a little bit on the spot there, but I will alleviate a little bit the burden of you and ask the same question because we have two minutes left. So I’ll ask the same questions. How do we see those checks and balances between the use of artificial intelligence? How do we make sure that the benefits outweigh the risks in the use of artificial intelligence in the coming years? And since we have four speakers, you have 30 seconds to reply to that. Shall I start with David?

David ERAY:
Yes, exactly. I think the check and balance, we need to be careful with the energy consumption of the AI hardware and the balance of the data. The benefit in terms of environment and energy saving, thanks to AI. So this is where I see a big challenge for us. 20 seconds.

Patrick:
Hello, thank you. Professor Yamagata, 30 seconds to check some balances for the future.

Yoshiki YAMAGATA:
Yeah, thank you very much. Actually, it is very important to see the benefit, understand the benefit of the users of the system. If the user enjoys the benefit, I think they understand why this system is actually useful for the community. If they don’t understand this is just a scary privacy problem, that’s my point.

Patrick:
Thank you. Jerry?

Jerry SHEEHAN:
So yeah, I would say that the way to do this is to ensure we’ve got a principle-based approach to AI, whether it’s applied in energy grid, whether it’s applied in transportation or others, that adheres to what I would say are the OECD principles around AI, which include issues of transparency, engagement. It’s a human-centered approach, which I think is what we were just hearing about engagement of the public and understanding the benefits, the risks, and having the opportunity for transparency into the policymaking process and the system development. I will just note that we at OECD are in the process of reviewing the 2019 AI recommendation with a view toward its revision, and this is happening at the time when generative AI is raising a number of new questions. So we hope to have something more to report on that in 2024.

Patrick:
Thank you so much. Thank you. With this, we are right in time to have finalized the discussion. Sorry, Peter, I haven’t given you back the floor a second time on this very difficult question. Thank you for the audience online and here in the room to have followed this session with so much interest. Thank you for the many questions that we received and thank you for your active participation. Thank you so much. Bye.

Audience:
Normally, it should not turn off if you do not have the plug, but here it is on. It turns off and then you have to turn it on and then there are several positions that are not clear. OK, because already last night, exactly. And so you turn it on, but you do not know if it is on. But it was perfect. Alone on a panel. Alone on a panel, it was perfect. Yes. Yes. Yes. Yes. Yes. Yes. Yes. Yes. Normally, it is on. Normally, it is on. Hello. Hello, nice to meet you. How are you? The teacher should normally be in person. Hello. Hello. Hello. Hello. We met online. I wrote about the blockchain. We had the launch in the webinar. You were there. I will just introduce myself. I attended many events of the Council. I haven’t been there. No worries, go ahead. No worries, go ahead. No worries, go ahead. I removed one of the panelists. You sent a video. Yes. You sent a video. No, no. No, no. Yes. That’s what I did. You sent a video. Yes. That would be complete. . . . . . . . . th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th th Thank you.

Audience

Speech speed

61 words per minute

Speech length

367 words

Speech time

364 secs

David ERAY

Speech speed

139 words per minute

Speech length

1748 words

Speech time

755 secs

Jerry SHEEHAN

Speech speed

181 words per minute

Speech length

2301 words

Speech time

762 secs

Patrick

Speech speed

140 words per minute

Speech length

1810 words

Speech time

777 secs

Peter CLUTTON BROCK

Speech speed

191 words per minute

Speech length

1887 words

Speech time

592 secs

Yoshiki YAMAGATA

Speech speed

115 words per minute

Speech length

974 words

Speech time

510 secs

Multistakeholder Model – Driver for Global Services and SDGs | IGF 2023 Open Forum #89

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The multi-stakeholder model of ICANN has successfully built trust among users, as demonstrated by Varun Dhanapala from the government of Sri Lanka who shared his positive experience after attending an orientation session in Kathmandu. This highlights the effectiveness of the model in fostering user confidence.

Furthermore, the collaboration between the International Governance Forum (IGF) and ICANN is not competitive but complementary. This was evident during an event hosted by the Sri Lanka Mission in New York, which shed light on ICANN’s mission and work. The partnership between IGF and ICANN is crucial for effective internet governance.

ICANN places significant importance on active participation, even during the pandemic. They have supported and promoted participation in their meetings, demonstrating their commitment to inclusivity and ensuring all stakeholders have a voice in shaping internet policies.

Diversity within ICANN is also emphasised, with a need for representation from various age groups, languages, and backgrounds. This diversity brings different perspectives to the decision-making process and ensures policies cater to the needs of a wide range of users.

ICANN’s role in coordinating the technical aspects of the internet, specifically the domain name system (DNS), is crucial for maintaining stability and security. The reliable nature of ICANN is highlighted by VeriSign’s 26 years of uninterrupted uptime for ComNet and root servers. This underscores the significance of ICANN’s multi-stakeholder community in supporting technical coordination.

The internet’s expansive outreach and untapped potential should be fully harnessed to achieve SDG 9: Industry, Innovation, and Infrastructure. The internet has immense capabilities that can drive innovation and create opportunities for social and economic development.

ICANN recognises the influence of different stakeholders, such as governments, civil society, and the business community. Each stakeholder group has unique contributions to make, and their influence is acknowledged within the ICANN framework. This balanced and inclusive approach ensures comprehensive policy development.

However, ICANN faces several challenges that need to be addressed. The role and influence of ICANN will be assessed by the General Assembly in less than two years, emphasising the need for periodic evaluation and reassessment of its effectiveness. Additionally, ICANN needs to streamline decision-making processes to respond effectively to evolving internet governance issues.

While ICANN is acknowledged as being effective, there is an emphasis on the need for continuous improvement. This highlights ICANN’s ability to adapt and embrace change. Experts with specific areas of expertise are considered valuable contributors to ICANN’s work, even without full-time commitment.

Consensus building within the multi-stakeholder community is viewed as crucial for ICANN’s mission. However, it needs to be carefully approached to ensure predictability and the secure, stable, and resilient operation of networks. This will safeguard the unity of the internet and prevent fragmentation.

Critically, ICANN’s governance has faced scrutiny for its limited interaction with other significant processes, such as the CA browser forum, the Financial Stability Board, and the Decentralised Identity Foundation. There is a call for ICANN to broaden in-house consultation and recognise stakeholders beyond just domain holders, for a more inclusive and comprehensive approach to governance.

In conclusion, ICANN plays a critical role in internet governance and coordination, ensuring the stability and security of the DNS. The multi-stakeholder model of ICANN has successfully built trust among users, and collaboration with organisations like the IGF is seen as essential. Active participation, diversity, and consensus building are key, while continuous improvements and addressing gaps are necessary. Overall, ICANN has the potential to evolve, adapt to change, and effectively shape internet policies through the involvement of various stakeholders.

Veni Markovski

The analysis highlights several key points. Firstly, it emphasises the importance of multistakeholder participation in technology development. It underscores that technologies are not created in isolation but are intended to serve a purpose and engage multiple stakeholders. The analysis suggests that no party works in isolation and the implementation of technology should be in line with prevailing laws and policies. It also highlights that technology stimulates the economy. These supporting facts indicate the positive impact of technology on society.

The second point raised in the analysis is the need for ICANN (Internet Corporation for Assigned Names and Numbers) to improve its engagement with governments. The analysis argues that commitments made by governments to participate in ICANN should be followed by action. This signifies the importance of effective government involvement in shaping internet governance policies. The analysis includes evidence such as Rwanda hosting a high-level governmental meeting and increased government commitment to participate more actively in ICANN. The sentiment towards this point is positive, suggesting a belief in the potential benefits of closer collaboration between ICANN and governments.

The third point highlighted in the analysis is the potential impact of upcoming international processes on ICANN’s work. It mentions that international processes related to ICANN’s mission are taking place at the United Nations (UN), International Telecommunication Union (ITU), and potentially at the European Union (EU) level. It implies that these processes may influence ICANN’s role in maintaining and allocating internet resources. While it states a neutral sentiment, it underscores the need for ICANN’s active involvement in these global processes.

Furthermore, the analysis discusses the untapped potential of the Internet Governance Forum (IGF) to provide recommendations on new technologies such as Artificial Intelligence (AI). It suggests that the IGF, as established by the World Summit on the Information Society (WSIS) Tunis agenda, could play a more significant role in shaping discussions and offering recommendations on emerging technologies. It recommends using the WSIS plus 20 to improve the IGF and increase its contributions. The analysis presents a positive sentiment towards this point.

Overall, the analysis highlights the importance of multistakeholder participation in technology development, the need for ICANN to engage more with governments, the potential impact of international processes on ICANN, and the untapped potential of the IGF. These points reinforce the significance of collaboration, effective governance, and active involvement in shaping technology policies and the future of the internet.

Edmon Chung

ICANN, the Internet Corporation for Assigned Names and Numbers, is widely recognised as a successful multi-stakeholder model for internet governance. It has demonstrated resilience and adaptability through three updates in the past two decades, signifying its commitment to evolving with the changing landscape of the internet. Furthermore, ICANN has incorporated safeguards to protect the system from attempts to extinguish it, highlighting its dedication to ensuring the continuity and stability of the internet.

One of the key strengths of the multi-stakeholder model employed by ICANN is its bottom-up agenda setting approach and consensus-based decision-making. By involving a diverse range of stakeholders, including governments, civil society, and the private sector, ICANN fosters an inclusive dialogue that allows for the consideration of various perspectives and interests. This approach is crucial as it helps to generate broad consensus and ensures that decisions reflect the needs and aspirations of different stakeholders.

The importance of rough consensus is also stressed in the multi-stakeholder model. While achieving full agreement on every aspect may not always be possible, the concept of rough consensus allows for agreement on enough points to continue working together towards a common goal. This principle helps to maintain a single, unfragmented internet and promotes the collective efforts of stakeholders in addressing the challenges and opportunities presented by the digital landscape.

The multi-stakeholder model of internet governance goes beyond addressing technical aspects; it also encompasses broader issues such as sustainability, the environment, and digital inclusion. The model provides a platform for discussions on these topics, enabling stakeholders to work together towards achieving goals such as reduced inequality and industry innovation, as outlined in the United Nations’ Sustainable Development Goals.

The Internet Governance Forum (IGF) is another entity that can benefit from the multi-stakeholder model. By embracing this approach, the IGF can facilitate discussions on internet governance within the context of sustainability and the environment. This not only increases awareness and understanding of these critical issues but also ensures that their consideration becomes an integral part of national and regional IGF discussions.

In conclusion, the multi-stakeholder model adopted by ICANN has proven to be successful in governing the internet. Its bottom-up agenda setting, consensus-based decision-making, and commitment to evolution and adaptability make it a resilient and inclusive approach. The model not only addresses technical aspects but also allows for conversations around sustainability, the environment, and digital inclusion. Both ICANN and the IGF can continue to improve and develop protection mechanisms while emphasising the importance of rough consensus and maintaining a single, unfragmented internet.

Leon Sanchez

The multi-stakeholder model plays a crucial role not only in Internet governance but also in other realms of society. It ensures that all stakeholders have a seat at the table and a say in decision-making processes. This model operates in a horizontal structure where every stakeholder’s voice is heard and considered. The positive sentiment towards the multi-stakeholder model reflects its effectiveness and importance in achieving SDG 16 (Peace, Justice and Strong Institutions).

While the multi-stakeholder model is widely endorsed, it is acknowledged that it is not perfect and has room for improvement. This neutral sentiment suggests that there are areas where the model could be enhanced. However, the overall consensus is that the multi-stakeholder model should be upheld and fostered for future generations. Its positive impact on various aspects, such as making the Internet work and ensuring connectivity during the pandemic, further solidifies the argument for its continued support.

During the pandemic, the multi-stakeholder model proved successful in facilitating online education for students who had connectivity. It also led to the implementation of electronic filing and litigation, ensuring the continuity of the justice system. These examples highlight the adaptability and effectiveness of the multi-stakeholder model, particularly in times of crisis like the COVID-19 pandemic. This positive sentiment towards the model demonstrates its capacity to address challenges and find innovative solutions.

Contrary to the positive sentiment towards the multi-stakeholder model, there is a negative sentiment towards the idea of legislating the internet. It is argued that existing regulations and conduct in the physical world are sufficient to govern the digital world. This sentiment suggests a preference for self-regulation within the multi-stakeholder model rather than imposing stricter legislative measures.

Furthermore, the importance of connecting the next set of users and expanding access to the internet is highlighted as an argument in support of the multi-stakeholder model. SDG 10 (Reduced Inequalities) emphasises the need to bridge the digital divide and ensure equal access to information and resources. The multi-stakeholder model can play a vital role in addressing this issue and promoting inclusivity.

One noteworthy observation is the potential for the multi-stakeholder model to transform representative democracy into a participative one. Utilising this model could enable greater citizen engagement and involvement in decision-making processes, aligning with SDG 16.

In conclusion, the multi-stakeholder model is essential for Internet governance and various aspects of society. While it has room for improvement, its positive impact during the pandemic and the need to address connectivity and digital inequality make a strong case for upholding and fostering this model. The negative sentiment towards legislating the internet highlights the preference for self-regulation within the multi-stakeholder model. By turning representative democracy into a participative one, the multi-stakeholder model has the potential to create a more inclusive and equitable society.

Danko Jevtovic

The success of the internet can be attributed to its foundation on open standards and a user-centred approach. The technical community plays a crucial role in this success through their open, liberal, and voluntary approach. This means that the internet’s technical layer is based on standards that are open and accessible to everyone. The acceptance of voluntarily defined addresses of the root server system has also contributed to the success of the internet. Additionally, the power of the network itself, which attracts users, has played a significant role.

The multi-stakeholder model, which involves various stakeholders such as governments, academia, civil society, and businesses, has proven to be an effective framework for governing the internet. Each stakeholder group has an important role to play, contributing to the development and advancement of the internet.

Celebrating its 25th anniversary, the Internet Corporation for Assigned Names and Numbers (ICANN) has played a pivotal role in the success of the internet. ICANN’s contributions are recognized, and their role in the internet’s evolution is celebrated. Furthermore, ICANN has actively engaged in ensuring that the technical consequences of potential legislation are thoroughly explained to all stakeholders.

It is important to understand the consequences of potential legislative processes and initiatives related to the internet. There are ongoing discussions and initiatives happening in various fora, and it is crucial to assess and comprehend the implications of these actions.

The desire for the internet to continue evolving is emphasized in order to meet the changing needs of both individuals and businesses. This reflects the dynamic nature of the internet and the importance of keeping up with advancements in technology, innovation, and infrastructure.

The Internet Governance Forum (IGF) meetings, according to Danko Jevtovic, have been successful and continue to improve each year. Jevtovic, who has been a member of the Multistakeholder Advisory Group (MAG), has participated in various IGF meetings. He praises the current IGF meeting hosted by Japan and believes that the IGF serves as leverage to create a better internet and work towards achieving the Sustainable Development Goals (SDGs).

Notably, Jevtovic does not see the need to change or create something parallel to the IGF. He emphasizes the importance of utilizing the IGF platform to improve the internet and address the SDGs effectively.

In conclusion, the success of the internet lies in its foundation of open standards and a user-centred approach. The technical community’s open and voluntary approach, the multi-stakeholder model, and ICANN’s contributions have been instrumental in the internet’s success. Understanding the consequences of legislative processes and initiatives related to the internet is important. The desire for the evolution of the internet to meet the needs of individuals and businesses is crucial. The IGF meetings have been viewed as successful and improving each year, providing a platform to work towards a better internet and achieve the SDGs.

Vera Major

The analysis reveals several noteworthy points about ICANN. Firstly, ICANN is commended for its commitment to gender diversity within its organisation. Notably, there are two women in prominent leadership positions – the board chair and the interim CEO. This showcases ICANN’s dedication to promoting gender equality and increasing the representation of women in key decision-making roles. It is an encouraging sign of progress and a step towards creating a more inclusive and diverse environment within the field of internet governance.

Secondly, ICANN demonstrates a commendable level of transparency by making the letters it receives available to the public. This includes letters from governments, the military, and intelligence agencies, providing insights into internet traffic and policy choices. By publishing these letters and providing a link for access, ICANN promotes openness and enables stakeholders to have a deeper understanding of the considerations and decisions shaping internet governance.

Furthermore, the analysis highlights the recognition of Sustainable Development Goal (SDG) 9.1 within the context of internet infrastructure. SDG 9.1 focuses on developing quality, reliable, sustainable, and resilient infrastructure, with an emphasis on regional and transborder infrastructure that supports economic development and human well-being. This demonstrates that ICANN acknowledges the importance of internet infrastructure as a crucial component of achieving sustainable development goals. By aligning with SDG 9.1, ICANN contributes to the global effort to provide affordable and equitable access to the internet for all individuals, regardless of their geographical location or socio-economic background.

Overall, the analysis underscores ICANN’s positive strides in gender diversity, applauds its transparency through the publication of received letters, and acknowledges its alignment with SDG 9.1. These findings showcase ICANN’s commitment to inclusivity, accountability, and sustainable development. It is encouraging to see such initiatives within the realm of internet governance, as they contribute to a more equitable and accessible digital landscape for the benefit of all individuals and communities worldwide.

Tripti Sinha

ICANN, the Internet Corporation for Assigned Names and Numbers, is a non-profit organization that coordinates the Internet’s unique identifier systems. These systems, which include domain names, IP addresses, and protocol parameters, are crucial for the proper functioning of the Internet. ICANN ensures that these identifiers are managed effectively.

At the heart of ICANN’s work lies the multi-stakeholder model, which shapes policies and manages unique identifiers. This model involves the participation of various stakeholders, such as governments, businesses, civil society, and technical experts. The multi-stakeholder approach ensures inclusive and democratic decision-making, which is essential for the continued success of the global Internet.

The Internet operates on a set of protocols and standards that enable connectivity. Thousands of people from around the world collaborate to maintain and improve these systems. ICANN’s governmental advisory committee, with its member governments and observer organizations, exemplifies the global collaboration required for Internet governance.

Discussions on the multi-stakeholder model explore ways to align it with sustainable development goals (SDGs). The model promotes inclusivity, innovation, and engagement to support the digital economy. It has proven effective in ensuring the Internet’s stability over the years, despite the increasing number of users and traffic.

Participants in these discussions highlight the importance of looking beyond existing systems for solutions and proactively driving change. They emphasize the need to involve a wider range of stakeholders and promote diverse perspectives in Internet governance.

While the multi-stakeholder model is widely appreciated, it is cautioned that deviating from democratic principles toward multilateralism could have negative consequences. Upholding democratic decision-making is key to preserving the openness and transparency of Internet governance.

In summary, ICANN plays a vital role in coordinating the Internet’s unique identifier systems. The multi-stakeholder model ensures inclusive and democratic decision-making, which is crucial for the successful functioning of the Internet. Collaboration and engagement from stakeholders worldwide are necessary for effective Internet governance. Discussions focus on aligning the model with SDGs, seeking innovative solutions, and promoting stakeholder inclusion.

Sally Costerton

Upon analysis of the provided information, several key points emerge regarding ICANN and its efforts to shape a more inclusive and multilingual internet. Firstly, ICANN is actively working on expanding the Domain Name System (DNS) to accommodate a wider range of languages and scripts. This initiative arises from recognizing that the next billion users coming online belong to communities with languages and scripts divergent from English and ASCII. By supporting more languages and scripts in the DNS, ICANN aspires to foster a more inclusive digital environment.

Secondly, the multi-stakeholder model of internet governance has played a crucial role in allowing the internet and the digital economy to flourish. This model has facilitated a smooth and stable transition from US government oversight to a global community oversight, ensuring the security and stability of the internet. This observation highlights the importance of the collaborative efforts of various stakeholders in shaping the internet’s governance framework.

Moreover, the COVID-19 pandemic showcased the internet’s pivotal role in supporting remote work, education, healthcare, and connectivity. Governments, internet service providers, technology companies, and civil society organizations collaborated to ensure the internet’s smooth functioning during this crisis. The ability of the internet to handle the surge in usage during the pandemic attests to the effectiveness of the multi-stakeholder model in maintaining the internet’s resilience and reliability.

Trust is identified as a critical factor for the functionality of the internet. Trust is built between individuals, structures, organizations, and governments, and it is essential for the secure and reliable operation of the internet. The multi-stakeholder approach, with its emphasis on inclusivity and representation, aims to foster trust among different stakeholders in the internet ecosystem.

The stability, security, and resiliency of the DNS are central to ICANN’s mission. ICANN recognizes that every online interaction is connected to the DNS and is committed to delivering a stable, secure, resilient, and open DNS for the global public interest. This emphasis on DNS underscores the crucial role played by the multi-stakeholder model in maintaining the internet’s resiliency.

The internet is increasingly pivotal in driving and shaping societal change. Its power stems from being a single interoperable system accessible globally. This recognition further highlights the significance of the internet as a catalyst for innovation, infrastructure development, and economic growth.

Meaningful participation in policy creation requires empowered stakeholders armed with the appropriate skills, knowledge, and confidence. ICANN acknowledges the importance of individual skills and domain-specific knowledge to effectively contribute to sustainable policy creation. This observation emphasizes the need for capacity building efforts to equip stakeholders with the necessary tools to participate actively in shaping internet policies.

Additionally, sustainable policy creation should take into account the voices of many to reduce inequalities. ICANN supports the idea that policies should be influenced through the diverse perspectives and experiences of a wide range of stakeholders. Inclusivity in policy development is seen as a means to promote justice, peace, and strong institutions.

The multi-stakeholder approach advocated by ICANN needs to be inclusive and representative. ICANN has carried out extensive work to bring newcomers from diverse backgrounds into the internet ecosystem and has emphasized the importance of raising awareness about the functioning of the internet within the broader community. This drive towards inclusivity recognizes the necessity of ensuring representation and participation from all stakeholders for a fair and equitable internet governance framework.

Capacity building is highlighted as a vital aspect of ICANN’s efforts to empower individuals within the internet ecosystem. These capacity building efforts involve providing personal and professional skills to individuals, involving different languages and groups worldwide. The training covers various aspects, ranging from personal skills and time management to technical areas like infrastructure implementation. Such efforts aim to enhance the knowledge and capabilities of stakeholders, ultimately contributing to a more resilient and inclusive internet.

Expanding internet understanding and increasing participation in policy-making processes are identified as key priorities. ICANN recognizes the necessity of generating interest among individuals to comprehend the workings of the internet and the impact of internet policies on their lives. Capacity building is viewed as a crucial step towards enhancing understanding and involvement in shaping these policies.

The analysis also acknowledges the importance of international and multilateral processes that have relevance to ICANN’s mission. These processes occur at various levels, including the UN, ITU, and the European Union, and their significance is emphasized in the context of the upcoming WSIS plus 20 process. This observation highlights the broader global context in which ICANN operates and the need to engage actively in these processes.

Regarding ICANN’s role in internet governance, Sally Costerton expresses her belief in upholding the multi-stakeholder model that has contributed significantly to the internet’s success. The upcoming ICANN AGM in Hamburg is expected to extensively discuss this model, emphasizing its critical importance. Sally Costerton also recognizes the vital role played by the Governmental Advisory Committee (GAC) in facilitating understanding and fostering dialogue between members and their respective governments.

The analysis concludes by extending appreciation for participants’ commitment and passion during ICANN meetings, which indicates a collective determination to address critical issues within the internet ecosystem. Furthermore, ICANN’s emphasis on continuous discussion, communication, and issue-raising reflects its commitment to engaging with stakeholders and maintaining transparency in its processes.

Overall, this comprehensive analysis highlights ICANN’s dedication to an inclusive and multilingual internet, the significance of the multi-stakeholder model in internet governance, and the resilience of the internet during the COVID-19 pandemic. It underscores the importance of trust, capacity building, and broad participation in policy creation to ensure a sustainable and equitable internet ecosystem. The analysis also acknowledges the global context in which ICANN operates and the importance of international and multilateral processes.

Session transcript

Veni Markovski:
If we start on time, we might be the first session starting on time, so I wonder whether we should give a couple of minutes to the people. But we have somebody who is waiting online to speak, actually, Sally Costerton. So it’s a little bit, I think it’s 1.45 where she is, which is a little bit cool. So I think without a further delay, thank you, everyone, for coming to the ICANN Open Forum, the multi-stakeholder model driver for global services and sustainable development goals. My name is Veni Markowski. I’m head of government engagement for ICANN. And we have several speakers with us, but the room is, as you can see, we are very open and close to each other. So if you have any questions, also online moderation is provided by my colleague, Vera. If there is any questions, you can just bring me Vera, and we will introduce the questions. So we will give the floor to Tripti Sinha, our chair of the board. We have also later on Danko, Leon, and Edmond speaking, and we have Sally Costerton, our president and CEO, who will be joining us online. So Tripti, I think with that, we can take the floor for your welcoming remarks.

Tripti Sinha:
Thank you, Veni. Good morning, everyone, and welcome to ICANN’s Open Forum. It is a privilege to join you today to explore the role of the multi-stakeholder governance model in shaping the Internet ecosystem over the last 25 years. The Internet, as you know, is a dynamic and ever-evolving landscape, has woven itself into the fabric of our lives, it connects people, transcending borders and cultures, and bestows us with a wealth of knowledge and communication. But what is often hidden behind this seamless connectivity is the participation of thousands of stakeholders who work together to maintain a stable and reliable Internet. So in today’s discussion, we will delve into the multi-stakeholder model of governance and Internet governance and how it has played a pivotal role in creating our digital economy while contributing to the realization of the UN Sustainable Development Goals, the SDGs. So this model lies at the heart of everything ICANN does, shaping policy, implementing changes, and managing the unique identifiers that maintain the Internet’s stability and interoperability. ICANN, or the Internet Corporation for Assigned Names and Numbers, is a non-profit organization that coordinates these identifiers. Every time you go online, regardless of the device you are using, the network you are connected to, or where you are in the world, you interact with the Internet’s unique identifier systems that are coordinated and managed by ICANN. For example, when you type a domain name such as ICANN.org into your browser, ICANN ensures, in coordination with many others, that you end up at the correct website. We make that happen at a technical level. ICANN also coordinates policy development around the technical aspects of the Internet. These policies are developed by a multi-stakeholder community, a rich tapestry of representatives from the private sector, governments, the technical community, civil society, and even individual Internet users. Together, this community is committed to serving the best interests of the public, not only the billions of users online currently, but those who are waiting to connect. Today, nearly every device that’s connected to the Internet runs on the same set of protocols and standards and uses the same identifier systems. By using this shared voluntary system, they are all able to communicate with each other, creating a vast interconnected network. At ICANN, we take seriously our responsibility to inform and collaborate with policymakers to ensure that their efforts to protect their communities do not unintentionally damage the Internet’s functionality. Furthermore, governments and intergovernmental organizations are encouraged to participate in ICANN’s multi-stakeholder policy development process. Our governmental advisory committee, which advises the ICANN board on public policy issues, currently has 182 member governments and 38 observer organizations. The Internet, as you know, knows no political or geographic boundaries. Keeping the Internet running is a worldwide effort involving thousands of people with a shared goal, to connect. As we delve into the workings of this multi-stakeholder model of intergovernance, it is essential to recognize that this approach is one of the most inclusive and democratic forms of decision-making ever devised. This approach produces strong results because everyone has a stake in the outcome. The multi-stakeholder model has allowed the Internet and the digital economy to flourish. It has allowed the Internet to function without fail for nearly 40 years, even as the number of users and traffic has exploded. This bottom-up inclusive model is not just an idea, it’s a reality. So thank you for being part of this important conversation. Let us work together to further understand, appreciate, and contribute to the continued success of the multi-stakeholder model in ensuring a stable, reliable, and unified global Internet that benefits everyone. Now I will turn it over to Sally Costerton, ICANN’s interim president and CEO, to share how the multi-stakeholder model and ICANN community is creating a more inclusive Internet. Sally, over to you.

Sally Costerton:
Thank you, Tripti. Can you hear me? Yes. Good start. Thank you very much. Thank you, and once again, welcome everyone to ICANN’s open forum. And whether you are participating here in person or online, I look forward to engaging in this discussion with you. Since the start of the COVID-19 pandemic, ICANN has worked hard to ensure equitable participation in our meetings for both in-person and remote attendees. We continue to apply those lessons learned to ensure effective engagement with all our stakeholders on this hybrid model. Building on what Tripti said, I’d like to take a few minutes to delve a bit deeper into a couple of examples that demonstrate the power of the multi-stakeholder model that Tripti described in shaping our digital world. A pivotal moment that showcased the model’s effectiveness took place seven years ago this month. In October 2016, oversight of the coordination and management of the Domain Name System, or DNS, was handed from the US government to the global Internet community. It was a profound exercise in trust, collaboration, and consensus-driven decision-making. Through countless hours of dialogue and negotiation, stakeholders from all corners of the globe came together to ensure that the transition would be smooth and that the Internet’s stability and security would remain uncompromised. This transition established ICANN as an independent, global organization accountable to the world that exemplifies how collective efforts and shared responsibility drive positive change. In the seven years since, the global community has demonstrated that the IANA stewardship transition was a resounding success, a testament to the multi-stakeholder model’s ability to work in the best interests of the global Internet community. It showed that when diverse voices collaborate with a common goal, we can achieve remarkable outcomes. More recently, the world was struck by the COVID-19 pandemic. The unprecedented crisis tested the Internet in ways we could never have imagined. Overnight, the world turned to the Internet for everything, for remote work, education, healthcare, and staying connected with loved ones. The Internet’s ability to scale up and provide essential connectivity during this crisis in a sustainable way was nothing short of remarkable. But what’s even more noteworthy is how the multi-stakeholder model played a crucial role in ensuring that the Internet continued to function seamlessly. Governments, Internet service providers, technology companies, and civil society organizations joined forces to keep the digital infrastructure running smoothly. They worked together to address challenges such as increased bandwidth and ever-changing cybersecurity threats. Now, the multi-stakeholder ICANN community has turned its focus to creating a more multilingual, inclusive Internet. Everyone, regardless of their background, culture, language, or location, should be able to make full use of the Internet, and ICANN is working to expand the DNS to support more languages and scripts. Many of the current users and most of the next billion users coming online are already part of communities that speak and write in languages other than English, and scripts other than ASCII. True, local, and global meaningful access to the Internet can only be accomplished when all Internet-enabled applications, devices, and systems work with and accept all valid domain names and email addresses. As we work towards true digital inclusivity, let us remember that all the multi-stakeholder community, let us remember all that the multi-stakeholder community has achieved so far. Let us continue to embrace the multi-stakeholder model as a guiding principle in Internet governance, ensuring that the Internet remains a powerful force for good in the world. Thank you for your attention, and I look forward to hearing your thoughts on these important topics.

Leon Sanchez:
Thank you, Sally, and thanks for being with us given the late, or early, rather, time. But feel free, by the way, if you have any comments when we start the conversation to raise your hand or just unmute yourself, because I understand you’re a co-host, and you can do that and intervene in our conversation. So I’m going to open the conversation with a couple of guiding questions, but guys who are here, feel free to, again, raise your hand. There are microphones enough in the room, and I see familiar faces here, so you can comment or ask questions. So we would love to hear also your contributions on the topic that we’re discussing. So I spend most of my time at the United Nations, so the topic of the SDGs is near and dear to my heart. So I would rather start the conversation with questions to you guys, the panelists, as to what are the tenets of the multi-stakeholder approach to Internet governance, and why are they important to ensuring this open, secure, and interoperable and resilient Internet ecosystem that we have. So who would want to take the first? Leon? Thank you very much, Veni, and thank you, everyone, for being here with us. I think that, as Veni was trying to say, we’re trying to make this more as a dialogue rather than a monologue, so feel free to chime in and raise your hand and just contribute to the conversation at any time. So I think the multi-stakeholder model is essential to not only Internet governance, but I see it, you know, penetrating other real dreams of society nowadays, and I think it’s important because it’s the place where everyone has a seat at the table, and not only a seat at the table, but a seat on a horizontal structure, right? Rather than a top-down model, at least how it works in ICANN, it’s on a bottom-up fashion, right? So it guarantees and it ensures that every stakeholder and every interest group has a saying and is able to raise their voice, and that voice is taken into account. Now, we often confuse being taken into account with producing the outcome that we wished that it was produced by raising our voices. Now, that’s definitely not how any model works, I guess, and the multi-stakeholder model is not an exception, but I think what’s important is that everyone is heard, everyone is, again, sat at the table and able to voice their thoughts, voice their interests, and if, you know, the arguments are so that your point of view prevails, then that is one of the wonderful things that the multi-stakeholder model has, that by consensus, it fosters this interaction between stakeholders that sometimes may have, you know, very opposite points of views and very opposite interests. Nevertheless, through dialogue and through these conversations, we find common grounds that enable us to take action and to build policies and agreements that make the Internet make the Internet work how it works, and I think it’s one of the principles that we should continue to uphold and foster for the next generations to learn and to improve, because, of course, the model is not perfect, right? So we have room for improvement within the model, and I think that’s essential to what we do here in the IGF, right? To try to find those areas of opportunity in which we can improve the multi-stakeholder model and try to then go back to our communities and implement those improvements to strengthen and to make the multi-stakeholder model more efficient. So that’s my initial contribution, Veni, and of course, happy to hear other thoughts.

Veni Markovski:
Thanks, Leon. What you’re saying about the fact that everyone is heard is very important, because indeed, in the multi-stakeholder universe and at the IGF, it is the case. And at ICANN, it is the case. Anybody can come to the microphone, take the floor, and speak equally with the others. Danko, did you want to say something? Yes.

Danko Jevtovic:
Thank you, Veni. Danko Jevtovic, for the record. So I think Leon has very nicely outlined how the model works. But also, we are here at the IGF to celebrate the success of the internet in contributing to the humanity, to the strategic sustainable development goals that UN has. And in discussing that, we are looking also at the model, but I was originally a techie, so I would like to comment a bit why the technical community is an important part of the multi-stakeholder model. So we heard in the Tripti’s and Sally’s opening remarks some very important words, like interoperability, voluntarily system, open standards, and everything. So I would like us to remind us that basically, the technical layer of the internet is on top of the world telecommunication network that now often includes mobile telephony. But this technical layer is actually based on those very open standards and on accepting what is defining the internet, and this is IPv4, IPv6, and the DNS system. And the key to the success of the system is acceptance of this voluntarily defined addresses of the root server system that actually everyone wants to use, because everyone is there. So the power of the network that is happening is something that is attracting the users. So the importance of the internet in today’s life is not coming from some sort of the top down approach, but is coming from the interest of the end users to actually use this network we have. So I think this shows that the open, liberal, and voluntarily approach taken on by the technical community is very much contributing to the success of the whole model. So this is, in my opinion, one of the reasons why the model works. And now, of course, in a multi-stakeholder way, there is a very important role of the other stakeholders, governments caring for the public interest in their democratic processes, academia, obviously civil society, and businesses. So we are all together in this. And in exercising the model, as Leon commented, it works. And we celebrate that here. Thank you.

Veni Markovski:
I’d just like to add something to what Danko and Leon just said. Fundamentally, when technology is created, invented, and developed, it’s not done in isolation, and it’s done for a reason. They’re created for enablement. There’s a form and function to it. So you essentially have to bring multiple stakeholders to the table, because no one is working here in isolation, because then it serves no purpose. So typically, technologies, a technical community develops a technology to enable a user community. And then around that, you need to wrap policies and so forth, so that they implement it fairly and abide by prevailing laws, so on and so forth. And technology in this, in the internet in particular, of course, or stimulates the economy and businesses and so forth. So we’re all sort of interconnected. So let’s not forget the fundamental premise that we don’t work in isolation. Thanks, Edmund, you?

Edmon Chung:
Yeah, Edmund Chung here. Happy to add a little bit of my perspective. I think we, I love ICANN and it’s very important. I grew up with ICANN and almost, but we’re here to celebrate the multi-stakeholder model, not just ICANN. ICANN is one of many successful multi-stakeholder models for internet governance. IGF here, the multi-stakeholder advisory group, the MAG of the IGF is another one. The IETF, the RIRs use a different model, but they are all successful multi-stakeholder models for the global internet and that’s what makes it work. I think that’s a very important part. It’s the global internet governance ecosystem that really makes it tick because I think both Tripti and Sally earlier mentioned that really every click on the internet touches the DNS, touches the IP addressing system. And ICANN plays a role in maintaining the unique identifiers, but these identifiers are developed by the IETF and also maintained by the RIRs as well. So it is working together that makes the global internet governance system work. So another thing that Sally touched on that I want to add is the internationalized domain names. Those who know me know that that’s a topic of passion, but that has been a topic of passion for 25 years. 25 years ago, I went to ICANN and the door was completely open and allowed me to really pick up the mic and start speaking. That goes to Leon’s point about being able to raise a voice for issues that are important because I do believe that a fully multilingual internet is important and it is a foundation towards digital inclusion, which brings me to the topic of SDGs. It’s the sustainable development goals that is really important because the internet is supporting the achievement of these goals. And if you look at IDNs, for example, the acceptance, the universal acceptance of the multilingual internet needs other stakeholders as well. And it’s not just ICANN, it’s not even just the internet governance ecosystem. We need other stakeholders, more of the governments, more of the academia, more of the industry and civil society around the world to make this work. So that’s, in my mind, what is, I guess, beautiful about this model. And Veni started the open the discussion by saying, what are some of the tenants? Well, multi-stakeholder model is of course one of them, but there is a little bit more. So what is in the multi-stakeholder model is also important. And I will highlight two of them that I believe are important tenants. One is the bottom-up agenda setting. And that’s what ICANN is about. That’s what 25 years ago, I was able to step into the mic and speak about. That’s what the IGF here embraces as well, the bottom-up agenda setting, the setting of the program, the MAG and the workshops. The other thing is consensus-based and rough consensus. And I think both Leon and Danko mentioned it, not necessarily always be on the side of the consensus. Sometimes you would be on the side of the rough, right? For the first 10 years, I have been struggling to get anyone being interested in IDNs to, and it took time for the technology and policy to develop. So those two things are equally important, I think, as part of the ecosystem. So finally, I wanted to touch on one thing. There are attacks to the system. There are attacks to ICANN. There are attacks to this global multi-stakeholder model. And for example, that it’s being slow, that things aren’t getting done. I think it’s the time it takes to develop a global consensus on global policies need time. But that doesn’t mean there are not situations that almost borders into filibustering, right? I mean, there are those cases. And that’s why we need to continue to improve. Is it fully democratic? No. Is it a representative democracy? No, but it is a more deliberative, it is a more liquid kind of democracy, but we need to continuously improve it. And at the GNSO at ICANN, you would hear that the policy development process is now in 3.0. What that means is that over the last 20 years, it has been updated three times. So that’s an important part of the multi-stakeholder model, I think. And finally, I wanna say, I very much believe that a noisy ICANN is a healthy ICANN. That being said, for those who wanna challenge this system, and those who wanna really sort of extinguish this open and bottom-up and consensus-based approach, we also need to develop protection mechanisms. And I think ICANN has developed some of it. And then I think also the internet institutions are going through processes to improve that because we need these inoculation mechanisms to built into it so that the multi-stakeholder model can continue to thrive. So for those who wanna come and challenge, I would challenge them to come and participate and change ICANN, but also be warned that we do have these inoculation mechanisms to repel those who intend to kill the multi-stakeholder bottom-up consensus-based mode. So I guess this is what defines ICANN in my mind and why I love it. And the internet governance ecosystem has really proven its resilience and value to humanity. And let’s build it better.

Veni Markovski:
Thanks a lot. I don’t know, Sally, if you wanna step in a little bit on that question. Or if not, I have another one for you, but let me see what you think.

Sally Costerton:
I’m happy to take a slide. Why don’t we go to the next question, Veni?

Veni Markovski:
Unless anybody, I mean, I think there is- There is a question in the room, yeah. Yeah. Can you just introduce yourself?

Audience:
Yes, I’m Varun Dhanapala from Sri Lanka, government of Sri Lanka and one time GAC alternate member. So just to add to the colleagues, so there are a lot of argument of this model, multi-stakeholder model and all these things. I was actually new to this ICANN business. My colleague, Jayanthi Fernandes, introduced me and then I went for an orientation session in Kathmandu. Then only I realized what it really is and engaged with various stakeholders, but we could build trust in this model. So there is a, I attended a couple of sessions, one of the AGMs in maybe Montreal or Barcelona, and there are some arguments with the, it is, whether there’s a competition between IGF and ICANN and I see it rather complimentary rather than competitive. So there’s a lot of give and take things from the state-driven or multi-stakeholder approaches. So that’s what I got and I also, having a diplomat head in New York, I think we could host one of the events for ICANN by the Srilanka Mission in New York for various nation states to give some light on what ICANN is doing in New York. So I think when it compares to many of the infrastructure, I think the internet has a wide outreach and there should be a real strength should be harnessed through all these aspects. That’s my comment and to this, thank you.

Veni Markovski:
Thank you, thank you very much. So Sally, we were talking, I think Edmund mentioned about the resiliency and how the service is working. So the DNS is providing uninterrupted service, which illustrates its reliability and connectivity. How do you think the multi-stakeholder model contributed to maintaining this level of resiliency and do you think this model actually is the one that helped the internet?

Sally Costerton:
Thank you, Veni. Yes, it’s the short answer to the last part of your question, but let me explain a little bit about how I think that works. Well, I think probably everybody maybe in this room understands this, but I think it merits repeating that it is not, part of ICANN’s mission is to serve the global public interest. So the maintenance, the mandate to ensure the stable, secure and resilient and open DNS is how we do that. That is what we are actually delivering to the internet users of the world. And you may say, well, why does that matter? Well, because every time any of us go online, whatever device you’re using, you are, or whatever type of network you’re connected to or wherever you are in the world, you’re going to touch something that originates from ICANN. And those, that takes the form of the unique identifiers, particularly the domain name system identifiers that enable internet users to connect to each other. So at its simplest level, if you type ICANN.org into a browser, that system ensures that you can end up on the right site. And we make that work at a technical level. Do it in coordination with partners in the technical ecosystem. And we are not a political organization. So it’s ICANN, it is different groups of people from many diverse communities, as many people have referred to already this morning, performing that mission and doing it using a bottom-up process that uses consensus. And what that means is that when those policies are developed, in my experience at ICANN, and I think what ICANN has brought to the world in the last 25 years, it’s our 25th anniversary this year, is sustainable policy, because it has the hands of so many people over it. The consensus model is so critical because diversity means people come to ICANN and they don’t agree. They may come from many different points of view in the analog world where they have different ways of doing things. But when it comes to making the policy and coordinating the identifiers that deliver this service to the world, this critical service that keeps the internet open and functioning and always on, they do it using this consensus model where you come into ICANN and you agree that you will find a way forward. And that policy will then have the stamp of agreement and approval and support of the multi-stakeholder community that represents the world’s internet users at ICANN. And that is, as Edmund said, and he’s right, sometimes that process can take quite a long time because to get people to agree to something that is sustainable and that works and that contribute to that critical infrastructure that we all rely on so much, you can’t rush these things. They have to work. They have to work technically. They have to work between all the stakeholders that use the internet. And in order to do that, we have to do one thing that maybe we haven’t talked about yet today, but I want to stress it. And that is we have to build trust. And trust is built between people one-to-one, structures as in organizational groups, organizations, countries, governments, and we all have to have that climate of trust, that ability to trust that we are going to do what we said we were going to do every second, every minute, every day, every hour, wherever internet users are in the world. And the power of the internet comes from the fact that it is a single interoperable system, which is accessible globally and locally, such that one of the, that great strength that we deliver that all the time can sometimes be challenging because it looks so easy. It can look to people that don’t understand how it works like it’s just there. It’s just always there, if you like, like a magic trick. But the reality is, as Tripti said in her comments, that to focus on that part of the internet, we need an internet governance structure as we work so closely today with the MAG, with the IGF, and have done for the whole of the period of this, the time ICANN has been in existence, that the aspects of internet governance around content are outside our remit. So when we are working, we’re working around our mission, around the governance of specifically the technical infrastructure layer. And I think that level of focus has been an incredibly important part of ICANN’s ability to deliver that success over the years. And finally, I just wanted to say, is that one of the key elements we have to focus on, I’ve been very involved in this in the time I’ve been working in this organisation, we have to keep bringing new people in. The world is changing constantly. The internet is an increasingly important element and aspect and driver of that change. And in order to do that, we have to keep widening the net. We need to keep bringing new users into ICANN and we need to help them with capacity development tools and with working together to come into our world, like Edmund said, to show up, to feel welcome, to feel heard, to feel equal, to feel empowered, so that they can make meaningful contributions. I hope that’s helpful, thank you.

Veni Markovski:
Very helpful, thank you. Before I see a couple of hands, but I would just want to make sure if somebody, okay. Oh, Edmund wants, oh, you know. Okay, go ahead. Sebastian was first, I think. Can you? Yeah, there is a microphone, but it’s okay. All right, Steve Del Bianco. I know, you can argue who’s gonna first.

Audience:
Steve Del Bianco with NetChoice and we’re halfway through this session and thus far, the audience to which the message has been directed would be an audience of multi-stakeholders who don’t really have any experience with ICANN. To that extent, it’s been normative and aspirational about how welcome it is to have a voice. I think that’s appropriate to bring people into the consideration of trying ICANN. But at least half of our audience at an IGF, the 17th IGF, are people that have been working within ICANN, as I have for 20 years. That audience realizes that having a voice is not the same thing as having influence and that audience might regard with some skepticism the ease with which having a voice affects outcomes. We actually have a good story to tell to that second audience and our story would be how business, government, and civil society, if they do more than exercise a voice, if they actually show up and participate, can, over time, begin to nudge and change policies. Giving examples that will affect the thinking that will go on, the debate that will go on in governments and the General Assembly over the next two years. Some of that audience experienced ICANN and we need to remind them how governments, through the GAC, have a special form of influence at ICANN, how we engineered their special role through the transition, how governments affect the way ICANN moves and has a huge role on the new GTLD program. Civil society, we want to remind them that, over a decade of effort, they did have a significant nudge to the way ICANN handled the publication of WHOIS data and the new policies that emerged. And in the business community, the story’s a little more muddled because different parts of the business community see ICANN in a different way, but all are able to influence policy, but only by participation. I really believe you oversell, you oversell the value of, oh, having a voice, giving a speech and an open microphone. That is not gonna ring true and effective for the audience that will decide the degree to which governments will accommodate ICANN’s role in the vote the General Assembly takes in less than two years.

Veni Markovski:
For those of you who are wondering what this vote is in two years, it’s the WSIS + 20 review, which many sessions here actually have touched on that. Sebastien.

Audience:
Thank you, Sebastien Bachelet. I wanted to come back to a few positive things and maybe some less positive on ICANN. The first is that, yes, it is participation. And ICANN, it’s the only multi-stakeholder organization who support participation to some of the people. I will just take one example. During the pandemic, ICANN decide to help people to be connected to internet, to participate to the meeting of ICANN in some country where it was expensive, difficult. And there are many other example. I don’t want to take too long on that. But for me, it’s one of the, it’s the only organization doing that. Therefore, it’s important to put that on the table. I understand that you are happy with what ICANN is doing and we are happy with ICANN is doing. But I feel that we can bring new people, yes, but we need also old timers. And we can’t say one and not the other. The diversity, it’s important. We can’t have people just, okay, let’s have everybody out and put new people. We need to have this diversity. It’s also important to continue to have more and more diversity of age, diversity of language. background and so on and so forth. But I really feel that after 25 years and more than 20 years of the last real reorganization of ICANN, it’s time to sit down and to think about that. Yes, I know there are pressing issues, the next round of TLD, the RDAP, and so on and so forth. But if we don’t sit down now, and I say now, to discuss how we can evolve ICANN to shorten the time of decision. Yes, we need time, but maybe we don’t need so much time. With 20 years, more than 20 years, we are discussing on that. It’s sometime we need to find a way to have decision-making, maybe a little bit different. Yes, maybe it will be to take out to some bodies as a final decision, maybe to rebalance the things was done after the transition. But if we don’t sit down now, I am in trouble for the future. Yes, we are doing very good things and yes, ICANN is essential and the way ICANN is working, it’s very good, but we can do better. Thank you.

Veni Markovski:
To address Steve’s comments, you’re absolutely right that it’s one thing to say we are open and you know anyone, but participation is important. One thing that is coming out of literally like last week, I’m already lost what days today, is that with regards to the governments, because I know you understand the importance of them participating and ICANN not just having a voice, is that there will be a high-level governmental meeting in Rwanda on June 9th and we’ve already used the opportunity in our meetings here with government officials to invite them and there are some commitments already of people to come. There will be some official announcement coming at the ICANN Hamburg meeting, but it’s important we, as important as the other stakeholders participation or maybe even more so because of the process that you mentioned, the WSIS plus 20 and the global digital compact which is next year. So we are not happy about the fact that you know it’s an open microphone, anybody can speak, but we are happy that governments actually we see now with Rwanda taking the lead you know to host a high-level governmental meeting and with the conversations that we had here and the bilaterals that I have in the last several months, there is I would say a new commitment by governments to participate, not just to have the right to participate. So we’ll see, you know, they make promises, we’ll see how it goes. Do we have any questions online or no? I don’t know if Sally, do you have any comments?

Sally Costerton:
Thank you, Veni. I absolutely agree with what Steve and Sebastian said about participation, meaningful participation and that requires an empowered stakeholders who are equipped with the right skills and knowledge and confidence. So people skills, individual skills, as well as the knowledge that they need, the subject matter, the knowledge that they need to contribute. Because Steve’s right, the purpose is to is to create sustainable policy and to have an influence on that and to make sure that it’s done through the voices of many and not few.

Veni Markovski:
Thanks, Sally. We have several comments here, go ahead.

Audience:
Thanks, Jonathan Zuck from the Innovators Network Foundation and I’m currently serve as the chair of the ALAC which is the part of the ICANN community that’s endeavors to represent the interests of individual and users. But just speaking for myself and talking about ICANN generally, it’s interesting what Sebastian said that it’s we’re good but we could be better and I suspect that no matter what we do that will always be the description of ICANN, right? There’s that song Imperfectly Perfect or Perfectly Imperfect or whatever it is, right? And that’s going to be the answer. When Jordan Carter yesterday asked people to raise their hand if you thought that internet governance is perfect, I raised my hand. He didn’t see me thankfully because it might have led to an extended conversation. But the truth of the matter is it doesn’t mean that if you have a perfect system you’re gonna have perfect outcomes or anything like that. It just means that you have a system that has the capacity to evolve, that has the capacity to deal with change, etc. But one thing I feel like I’ve been talking about for about 20 years that I think is a challenge much more so than the structure of ICANN is the ability to involve people periodically. Because there’s a lot of people out there in this sort of internet community that have specific areas of expertise but don’t have a general interest in devoting their life to the work of ICANN. And we sort of create this binary that says okay you can come participate and as Steve says you can influence things and Steve’s managed to influence things with just participating for 20 years in the ICANN process. And I think we really need to find a way to, and this was part of the GNSS efforts with PDP 3.0, was to find a way to make the efforts more granular so that I’m asking smaller questions, I’m packaging them in a way that people that have domain expertise can participate for the duration of that small conversation and go back to their regular life. I mean we don’t want twice as many people twice as many lifers that ICANN but we want more voices when they matter, when they count them, when that expertise can be brought to bear. And I think that’s something we should really focus on doing is helping people with periodic participation.

Veni Markovski:
Thanks. I think I mean Tripti has a short comment.

Tripti Sinha:
Thank you Jonathan. Thank you Sebastian. I just want to remind everyone this is not an ICANN meeting. And point well taken but this is about the multi-stakeholder model and how we can sharpen that model and contribute towards the United Nations SDGs and so forth. So just just to remind everyone this is not an ICANN meeting. But I’m glad that you’re interchanging it, getting it confused with IGF and multi-stakeholder when I can. So at least it means that we’re on solid footing when it comes to multi-stakeholderism.

Veni Markovski:
Thanks Tripti. I think Edmund, who is first?

Edmon Chung:
I can quickly because I think it adds on to what Tripti said. I think this is a great demonstration of a noisy ICANN that I think is a healthy ICANN. So but I did want to highlight one of the things that that yes I agree the the evolution of the system is is very important. But one of the things that that I want to highlight on the resilience it is this thing this type of argument that that that supports the resilience of the governance system as well and not counting on full agreement. That’s the the beauty of rough consensus right. But in terms of rough consensus there’s also a nugget where we do agree. We agree enough to continue to work together and not go out and do something else. And I think that is equally important. And that is the the nugget of rough consensus that that’s what maintains one Internet unfragmented. And then you know that’s that’s the one thing that I wanted to add.

Audience:
My name is Werner Staub. I’m also part of the that half of the audience was attending ICANN meetings on a regular basis. And in the context of how we should organize the multi-stakeholder process of ICANN we can actually look back on 20 years of experience and see successes and one enormous failure. And that only enormous failure is the fact that if you have this pyramid of multi-stakeholderism focusing on the top which is the ICANN itself you know the it’s it’s it’s a it’s governance structure. It fails to interact with other processes that produce useful things that would actually be very much necessary for the community that ICANN is supposed to serve. And that community is not the domain holders. That community is the end-users of the Internet. And specifically I can give a couple of examples of other processes. They’re even kind of not so represented in in this IGF but they’re really key to it. One of them is the CA browser forum. We have a lack of interaction with that organization which is critical form for most of stuff that affect the users of what ICANN ultimately outputs. Secondly we have the Financial Stability Board which finally took action about identification of legal entities worldwide. And compare that to ICANN’s conclusion that it was unable to distinguish effectively between organizations and natural persons. It beggars belief that we had that result simply because we’re looking for a solution inside of this pyramid when actually the solution comes from from somewhere else. And finally we’ve got another example which is the identity forums. You know there’s a number of there’s a number of initiatives around there. The Decentralized Identity Foundation and so on. All these they would need some interaction but we cannot organize this you know with just some of the stakeholders going there. It needs some interaction from the top of ICANN as well.

Tripti Sinha:
Thank you. Once again just a response to your comment. You’re talking using ICANN as an example again but the takeaway from your comment is that regardless of what multi-stakeholder model we use whether it’s for the IGF or for ICANN or any other body let’s make sure that we look outside of our own system. And good point well taken. Thank you.

Veni Markovski:
Thanks and I want to bring Sally again in the conversation because I think it goes along with a couple of the comments we heard. So my next question is how to ensure that this multi-stakeholder approach is inclusive yet representative which is something that Steve was talking about. And particularly of underrepresented groups and regions. And what is the role of capacity building that and how ICANN is engaging to expand and to bring new people. So Sally do you want to comment on that?

Sally Costerton:
Thanks Vani yes it’s essential and right as Tripti said not just within ICANN but right across the internet ecosystem. We’ve done we’ve done an enormous amount of work on bringing in newcomers of all different types to ICANN. Now what we discovered early on and I’ve been at ICANN about 11 years and during this time we discover this and I know that our colleagues at ISOC and in the RIRs and right across the the system have the same challenge that as Jonathan said the first thing is you have to get people interested and that means they have to understand why what ICANN does affects them. People will not give up hundreds and hundreds of hours of their own time it’s a volunteer community to do something however important we may think it is until they understand why they think it’s important and so that’s that’s a critical hurdle that we have to get over as an internet community not just as an ICANN community and one of the things we have to do to do that effectively and this is an incredibly important part of the role of the IGF I think is to raise awareness of how the internet actually works and how those how people can come and be part of that and why it matters and how they have an influence and as those 10 years have gone on there’s been more of more interest in how the internet works more and more interest in the internet itself which is no surprise as the as the users have grown so very much and that means then we also have to bring people in and say here’s what you need to know. Now some people come to ICANN and they know they very well understand the content but they may not have the personal skills they may come from environments where they’re not trained in in the personal confidence skills and the time management skills chairing meetings participating in meetings and so forth and the work of drafting and editing that goes into policymaking which we see is again not just not just ICANN but in in many other groups that are involved in this. So the capacity building with what we mean is in what usually my definition of that would be is we’re giving people a skill so that they they can stand on their own feet this is sort of the teach a man to fish and teach a woman to fish idea. So much of the capacity building we do at ICANN around the world is about working with groups of people to help them to learn for themselves the skills that they need to to use that energy and ambition and excitement that they have to be part of it in a meaningful way so they are not just talking in a microphone they are participating in that policymaking process and we have to do that in multiple languages we have to do it in multiple time zones and we have to do it with different groups of of participants. So what students might need engineering students might need for example in Asia Pacific might be very different from what a new GAC rep might need in Latin America in South America or Latin America. So we do lots of different kinds of training capacity building we create a lot of different content in lots of different languages and some of that capacity building is very hands-on particularly our technical training so when people are putting in new infrastructure in their or in their countries and their organizations we do everything we can to help them to do that to make sure that they understand how to how to put in those things like DNSSEC for example security for the DNS they can do it effectively and they have the confidence to do that moving forward.

Veni Markovski:
Thanks Sally. Keith you’re giving priority here.

Audience:
Thank you very much Benny and hi everybody Keith Drezik I work for Verisign we are the registry operator for the .com and .net top level domains we operate two of the internet’s root servers and we perform the root zone maintainer function under a contract with ICANN and that may be well-known information to some of the ICANNers in the room but I’m sort of introducing myself in the company for those who may be following online maybe watching the recording later. I want to take this conversation perhaps up a level and back to the focus on ICANN’s role and the ICANN multi-stakeholder community’s role in supporting its mission its technical coordination mission of the IANA functions as well as in support of the SDGs. I think as we talk about this session the way that it was teed up it’s you know important to note that ICANN has a very important role at the technical layer of the internet it has a very important multi-stakeholder engagement in support of the development of policies that impact that technical layer of the internet and as we look at the SDGs there is no one that is you know a direct recipient of something specific that is coming out of the ICANN process but fundamentally what ICANN does in the coordination of that technical layer and of the IANA functions specifically the coordination of domain names of IP addresses of protocols that come out of the IETF is that what we do in a predictable manner in the ICANN space it creates stability security and resiliency for that technical layer that enables everything else to function in a predictable way for the work that needs to take place to deliver on the sustainable development goals it all in our interconnected world relies on the predictable stable secure and resilient operation of the DNS and I think it’s just critical to recognize that ICANN’s mission is a narrow one by necessity ICANN does it very well VeriSign has been delivering on our DNS uptime for ComNet and the root servers that we operate 100% uptime for more than 26 years we are able to do that because of the policies and because of the predictability that exists in the ICANN space in the management of the IANA functions our ability to do that and other registries and registrars and the service operators the RIRs we are able to do that because of that predictable nature but it’s really important to note that policies do need to evolve policies do need to change attackers and the DNS are getting more sophisticated they are evolving we need to evolve our policies accordingly and there’s probably another you know a range of more options or examples that I could provide but just you know to summarize I think we have very good engagement in a multi-stakeholder way in the ICANN space multi-stakeholder consensus building is really about compromise at the end of the day consensus building bottom-up consensus building in a multi-stakeholder fashion is about compromising but it needs to be done carefully and it needs to be done to ensure that predictability and secure stable resilient operation of the networks so thank you very much thank

Veni Markovski:
thank you this could have been actually a good fine final statement for the meeting but we still have time and we have people who raise their hands so first is Danko and then Leon.

Danko Jevtovic:
Thank you Kit I think this is a great introduction also to what I wanted to say so by celebrating the success of internet here probably we are talking so much about ICANN also because well it’s a lot of ICANNers here but we also celebrating 25th anniversary of ICANN here and I think this is part of this success story for the whole internet so inside the ICANN ecosystem we are coordinating those policies that enable this mid-layer to function and to be the fundamental for all the services and content and everything why the users are there but also as Sally explained it is our role to engage and to explain the technical consequences of possible legislations that are coming so in these discussions in the IJF I think it’s very important to contribute because we are walking towards the global digital compact we are walking towards voices plus 20 and for all of this technical community and things that are coordinating in the multi-stakeholder model well through ICANN and through IJF and for others it is important that we understand what are the consequences of the possible legislative processes and initiatives that are happening also in other fora and give our best help, assessment, expertise to be able to, for this great internet to continue for next 25 and 25 and more years after that and obviously to evolve to serve the needs of the end users of the people of the world and businesses. So I think this is very good explanation how actually the things work and it will continue to be helpful.

Leon Sanchez:
Thank you, so just as Tripti was reminding us that this is not an ICANN meeting, I’d like to take the conversation a little bit out of the realm of the ICANN world and try to remind us how the multi-stakeholder model and what it produces actually has been successful in forwarding or going forward or progressing at least two of the SDGs and I’m gonna center in SDG four, which is quality education and SDG 16, which deals with justice. And we could see the results of how the multi-stakeholder model delivered in progressing these two goals during the pandemic, right? If it hadn’t been because of the products made out of the multi-stakeholder model, children, a lot of children around the world, of course, didn’t have the benefits of having continuous education during the pandemic, that’s for sure. But those who were able to connect, those who had connectivity, they were able to continue having their lessons taught. They were able to continue learning. And that is another challenge of the multi-stakeholder model to connect the next users that are still not connected. So this is an effort that only, at least in my mind, only through a multi-stakeholder model of doing things we will be able to achieve. And that will also progress on the SDGs of equity, et cetera, et cetera. And in terms of justice, I know that this might not apply to all legal systems around the world, but at least I can tell you my experience being a practicing lawyer in Mexico. We’ve had legislation that established electronic means for filing, for litigating, et cetera, et cetera, for years, but they were never implemented because we didn’t need them. And as soon as the pandemic hit us, then all of a sudden, the courts, the different offices in government, et cetera, et cetera, they implemented this legislation that had been dormant for years. And we were able to continue litigating, we were able to continue filing all types of affairs in front of government offices because of the products that we produce in this multi-stakeholder model. Not only within ICANN, again, but through the different allies and the different bodies that conform the wider internet community, right? And I think one lesson that I’d like to convey or to share with those decision makers that might be listening to us is that we don’t necessarily need to legislate the internet because it’s already regulated, right? Because regulation regulates conduct. It doesn’t regulate means or media. So whatever we do in our physical world, it already has an equivalent conduct in the digital world. So whatever legislation we apply to the physical world, we can port that to the digital world. Of course, there might be gaps that we need to look at, but we should look at them in a very careful way and by all means through a multi-stakeholder approach because that will ensure that whatever legislation is crafted will take into account the interests of those who will be affected by those legislation. So again, I think porting this multi-stakeholder model not only to the internet community, but to a larger model like the representative democracy and turning it into a participative democracy would be the ultimate goal for us to prove that the multi-stakeholder model is fruitful.

Veni Markovski:
Thanks. I understand there are some comments online if you wanna read them.

Vera Major:
Yes, thank you, Benny. Can you hear me? Thank you. Thank you, Benny. You can hear me now? So we have several comments in the chat as well as a question which was answered in the chat, but I’d still like to read it out loud. First, Desiree Milosevic commented, I would like to highlight a really great recent development that ICANN in terms of diversity. ICANN has two women at the helm of ICANN, the board chair and the interim CEO, and many diverse members of the ICANN community in leadership positions. And there was a question for Morgan Rockwell. Is there a transparent report on how governments and military and intelligence agencies requested ICANN to interfere in internet traffic, IP designation, or any policy choices? Which was answered by Mikaela Nalon in the chat. ICANN published the letters they received and he provided the link. So for anyone who would like to see the link, please go to the Zoom room in the chat. And finally, an observation from our board member, Edmund Chung. There’s one specific SDG 9.1, develop quality, reliable and sustainable and resilient infrastructure, including regional and transborder infrastructure to support economic development and human wellbeing with a focus on affordable and equitable access for all. That was it for now. Thank you, Vera.

Veni Markovski:
That kind of brings me, the whole conversation here brings me to the point where I wanna skip a couple of the questions I was having in mind, but go to the point that Steve was mentioning, which is the WSIS plus 20 process and to see like in the next two, three years, there are international and multilateral processes that have relation to ICANN’s mission. They’re happening at the UN, they’re happening at the ITU, they may be happening at the European Union level with different elections next year and different legislation. So I was wondering whether maybe Sally, cause we don’t see you here in the room, but I can see a little screen with you there. So on our screen, so maybe do you wanna take it from here and say how do you see ICANN’s role in the next couple of years vis-a-vis those international intergovernmental processes that in some of them we cannot participate because they are closed only for governments and in others, they try to open them with stakeholder consultations and stuff like that and to give you some background, two days ago, I believe, in one of the sessions, Jordan Carton, I believe, from Aude said that it’s enough with the consultation, we want to be involved. Now, within the UN General Assembly, we cannot be involved because the rules of procedure do not allow us to be involved. So consultation is the only path forward. But what do you think about these processes internationally and intergovernmentally that might impact ICANN’s mission?

Sally Costerton:
Thank you, Vani. Yes, it’s an extremely critical topic for ICANN, for the world and for the internet as we go forward into the next two years of this discussion. And we are about to have our next meeting in Hamburg, our AGM, which will take place in a couple of weeks’ time. And I know from having seen the agenda and talked to many stakeholders that we will have a lot of discussion on this exact question you’re asking. So first thing I would say is I think we need to raise awareness of the importance of the discussion inside ICANN. Although it is not an ICANN process, clearly it is a process in which ICANN is very affected and is very involved. So everybody that participates in ICANN needs to increase their awareness about what it is, why it matters, and what they can do and what we should do as organisations and groups to contribute. It revolves around two fundamental objectives in terms of our position, I would say. The primary objective is to uphold the multi-stakeholder model of internet governance that was created 20 years ago. And I think there is a, certainly my view and I think view of many people in our ecosystem is that it is that multi-stakeholder model, that process that we’ve discussed a lot this morning in this discussion, that has delivered the success that we see from the internet today, which Leon referred to, many other people referred to the outcomes of what the internet has been able to deliver. And that extraordinary model at the centre of it that was created 20 years ago has been probably one of the most important contributory factors that has enabled that to happen. So maintaining that as we move forward with a dramatically expanded internet, as I said earlier on, and Edwin referred to, of many new participants with different languages, different scripts, different needs to participate, but that model needs to stay at the heart of it. And to achieve that objective, we’re dedicated to raising awareness, not just inside ICANN, but also awareness amongst the member states that will discuss this and all stakeholders by whom they will, who will also participate and who will also be very influential onto the member states that are involved. And that means sharing our knowledge about how the internet works, the consequences of unintended regulation, some of the topics we’ve talked about this morning, and going back to something Steve referred to earlier, and we have an ex-GAC member in the room, I know, the role of the Government Advisory Committee is such an important part of the way that ICANN works. It’s a very unusual setup. It’s extraordinarily, I think, an important part of ICANN and making sure that the GAC, the individual GAC members and the GAC as a group are fully equipped and fully empowered to participate in that discussion and that we’re maximizing the knowledge and access and relationships that the GAC members who come to ICANN have within their own countries and their own governments to increase the understanding of those governments in these critical issues. The final thing I’d say is that we have two specific, two aims for WSIS-20. Firstly, to increase the awareness of the global digital cooperation, the GDC, as you mentioned already, Bernie, and the review. And the second is, as I said earlier, to draw attention to the key issues within that review that have the potential to adversely affect the internet and adversely affect ICANN’s ability to deliver its mission in the successful way that it has for the last 25 years. Thank you.

Veni Markovski:
Thank you, Sally. Actually, on that point, with regards to the WSIS-20 process, this is one of the, ICANN is, I don’t know if it’s the only organization, but ICANN definitely has a goal, a CEO goal, which is about WSIS-20. So that shows commitment of not only the CEO, but the whole organization to make sure that we pay attention, we raise awareness, we continue to provide technical, neutral information to governments around the world and in the United Nations so that when they go to those negotiations behind closed doors, we hope there will be enough knowledge there so that they are not going to propose stuff which we have seen in the previous discussions, especially in the WSIS plus 10 negotiations in 2015. So a lot of work needs to be done, but we are hopeful that we’ll continue to do that. Any comments on that question in the room online? No. So another question, though, related for me, then related to this one would be for you guys, the panelists, is to, do you think there is, I mean, we all hear about the proposals for creating a multilateral forum. There have been conversation here in the hallways whether this multilateral forum that it might be or may not be, we don’t know, created next year at the Summit of the Future and the Global Digital Compact, whether this could mean a replacement for the Internet Governance Forum. On the other hand, we are at the IGF and we heard a lot of public statements in support of the IGF, but I’m just wondering whether any of you want to, like, maybe even make a guess or, and then we can remind you two years in the, Steve is nodding no, want to make, doesn’t want to make a guess. Take bets. We can make a little bet, you know, a glass of water or something like that. Any, Edmund? Happy to add Edmund here.

Edmon Chung:
Well, I guess this is topic that is one of the hot topics around, you know, as I was discussing with different people here at the IGF. I think generally, at least, maybe it’s because I’m from ICANN and from, you know, supporting the IGF as it is, but I generally, I think here that the community thinks that the IGF needs improvement, but the IGF and the multi-stakeholder model that it takes actually can work better and will work better based on a multi-stakeholder model that can actually bring in different stakeholders. Yesterday, I was at the main session on sustainability and environment, and there’s a clear need to bring in other stakeholders, which is the, I guess, the benefit of a multi-stakeholder model versus another type of model, because then the changes might be much harder to make, and from that main session, at least my conclusion is that we need to take the discussion about internet governance in relation to sustainability and environment to the national and regional IGFs, which then comes back and inform the global IGF, and again, that is the multi-stakeholder model in action and how it would work that I think the IGF can build on and is kind of the right model to build further, and that’s sort of what I’m, at least I’m hearing from the community as well.

Veni Markovski:
Thank you. Yes, Danko.

Danko Jevtovic:
Thank you, Veni. Danko speaking. So I am happy to take possible criticism that I’m too optimistic here, but today is the last day of this great IGF meeting, and I was a MAG member from 2017 until 20, so Paris IGF, Berlin IGF, and a virtual Katowice IGF, and I think, first, I’m very grateful for the Japanese hosts here, but I think this is great IGF. It is getting better every year, and I think this is actually proof not only of the success of internet, as I’m often saying, obviously, but the proof that IGF is functioning and it is getting better, so I don’t, I see those discussions. I see that UN, as the organization of member states, has a certain point of view, but I don’t really see the need to change IGF or to create something in parallel to it. We need to discuss, we need to evolve, and we want to strengthen the IGF, but I think this year is a great success, and I think it should be celebrated by striving to get better and better IGFs, but using that as leverage to create a better internet and to work on the sustainable development goals.

Veni Markovski:
Thank you, Danko. Keith? Okay, thank you, Veni.

Audience:
Thanks again, Keith Drezic, Verisign. So I’d like to build on a bit of what Danko and Edmund just said. I think as it relates to IGF, Verisign has been a longtime supporter of the IGF as a multi-stakeholder engagement, very important. So I think we think of multi-stakeholder internet governance, there’s a macro level, right? There’s the big picture where it’s very, very important that stakeholders have a voice in the development of policy and the development of governance structures. I think that’s all very important and critical at a macro level, but I think in order to have the internet governance forum be relevant and to encourage participation, and as Danko said, we’ve seen tremendous participation here at this IGF meeting in Kyoto. Which I think is really positive, but I think it’s important to be able to identify specific issues and specific topics that need focus and that need contribution and need dialogue and discussion. And I’ll give you an example just from this week. So the Dynamic Coalition on DNS Issues was originally established, I wanna say six years ago, and its focus at that time time was on universal acceptance issues, IDNs, but universal acceptance in general. And then during the pandemic, it sort of went dormant. And obviously, there were the challenges of lack of in-person participation, but the group sort of went quiet. And we have just re-energized, re-established the dynamic coalition on DNS issues. It’s now we were able to get a dynamic coalition session here in Kyoto. And that session was focused on the governance gaps as it relates to the DNS. As was noted earlier, ICANN’s role is very limited. It’s limited to primarily a technical function. And it is clearly not in the content arena. ICANN’s bylaws prohibit it from, it and its contracted parties and others in various ways, from engaging in content moderation. So one of the governance gaps that we’ve identified is, how do we have policy development in a multi-stakeholder way, or even just dialogue and development of best practices in a multi-stakeholder way on content-related harms or content-related matters? So we’re starting this dialogue in a parallel multi-stakeholder track outside of ICANN, but within the Internet Governance Forum context. This is just beginning. I think there’s an opportunity there for a range of views and voices to contribute to that effort. And so, again, just to summarize, macro internet governance, multi-stakeholder IGF, really important. But I think the micro issues, where you get into the more concrete details, will generate more participation, more contribution, and more engagement. Thanks.

Veni Markovski:
Thank you. I want to use the fact to have somebody from DESA here and talk a little one comment on the IGF, which is, I mean, we hear a lot about the IGF. It’s a place for discussion. Actually, the WSIS Tunis agenda in the paragraphs that establish and define what the IGF is, it actually says that the IGF can provide recommendations on new technologies. So I mean, I was thinking, listening to the Secretary General in the last few months, he says, you know, we need an AI agency because AI is dangerous, et cetera, et cetera. This is one thing that the IGF could also do. There could be sessions on AI, and there could be some recommendations expressed by the IGF. So there are still unused opportunities of the IGF. I think that we need to go back and reread the document, the WSIS Tunis agenda, and the WSIS plus 10 outcome document, and maybe provide some feedback to the governments in New York and our national governments, and tell them, hey, you can use the WSIS plus 20, actually, to improve the IGF. And also, you can urge the IGF to be more contributing into what we are going to say. You don’t need to comment, but you have the microphone.

Audience:
Yeah, I do. This is why I’m walking around. But I think you have pretty much stated the lines. But I just like to also, on the record, if you revisit the Secretary General message for the opening of this IGF, I think that that is actually a very big compliment about the IGF, how it has demonstrated in the past 18 years of the multi-stakeholder model. The question in context is actually whether there need to be a separate body on AI. So right now, the approach of the Secretary General is of a high-level advisory body. That doesn’t mean that there will be a high AI advisory board, high-level advisory to him in giving recommendation. So that does not stop IGF from giving recommendations. As a matter of fact, one of the themes of this IGF is on AI. So hopefully, we will have significant or good enough key messages that will talk about what will be the AI, the trends, or the recommendations, or anything for that matter. But having said that, I think for the remaining two years, this is still a possibility for the IGF to reinvent, because that will also be demonstrative of how, during the review in 2025, about what specific impact. And that also relate to the future mandate of the IGF. So I think it’s important. And as a staff member, I heard many compliments about the relevance of IGF. And there’s no need for other bodies. But I think it’s also within this room or this hall, we do have to look at what are the views of those who still think that what are the sort of gaps that IGF ends up being able to fulfill. Thank you.

Veni Markovski:
Back to you. Thank you. I don’t know if there are any comments. There are no comments online, right? OK. So Sally, we are coming to the end of this session. I wanted to see whether you have some final remarks of what you’re thinking about this whole discussion that we had here. And then I’ll pass it to Tripti for the final comments.

Sally Costerton:
Thank you, Veni. I want to thank everybody for coming together at the IGF meeting, for having the energy, the focus, the commitment, and the passion to continue to focus on these critical issues that we’ve been discussing today, and for helping ICANN to continue to raise awareness of the issues that are so, so important as we move through these next couple of years, which we’ve been discussing, particularly in the second half of this meeting. So any of you who are coming to Hamburg to our meeting, I look forward to seeing you there, either online or in person. And in meanwhile, if there is anything that anybody would like to raise with us, with ICANN, that we’ve discussed this morning, we have plenty of ways of communicating with us. Please do that. And thank you very much for your participation this morning. It’s been really a very, very important discussion.

Veni Markovski:
And thank you, Sally, for staying with us. We understand what the time is in the UK. So thanks a lot. Really appreciate it. Tripti?

Tripti Sinha:
Thank you, Veni. So thank you, everyone, for the discussion. I was reflecting on how I’ve come to realize that ICANN is a synonym for the MSM and the IGF. And so I take that as a compliment that our discussion kept going back to ICANN. I think it’s being used more as a model of a multi-stakeholder model that it’s functioning. And as you pointed out, Steve, you’re saying let’s not just have a voice. Let’s be influencers. Let’s move the needle on issues. And we need greater engagement, just more proaction in how we actually bring about change and effect change. And as you were saying, the other gentleman, that oftentimes the solution may exist outside of the perimeters of whatever system we’re working within. And you’re absolutely right. And to me, the gap that that addresses is that perhaps we don’t have everyone inside. We need more stakeholders. And we should go seek them. And that is a point that has come up in the discussions this week here, which is what’s missing at the IGF? And who’s not at the table? And let’s bring them in. And I think that applies to any model of MSM. And no model is perfect, Jonathan. But I think we’re doing quite well. And if I could end on one note, which is in many ways, this is a democracy, if you will, at a high level of abstraction. This is a democracy where you’re trying to bring everyone’s voice and influence to play. And if we’re moving towards multilateralism, I’d say, sadly, the old truism that democracy dies in darkness, that is indeed what will happen. You take away some very important players and you begin to create instability and destabilize the system. So on that note, I’d like to say, let’s just chisel away and make this a better system. Thank you.

Veni Markovski:
Thank you, Tripti. Thank you, everyone, for coming. I understand it’s the last day. So it’s kind of a, you know, you’re looking forward to leave the venue. But thanks again for coming. And thank you, Vera, for the online support. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thanks. Thank you. Thank you. Thank you. Okay. You too, let’s see next. Oh, that’s okay. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.

Audience

Speech speed

168 words per minute

Speech length

3316 words

Speech time

1187 secs

Danko Jevtovic

Speech speed

149 words per minute

Speech length

904 words

Speech time

364 secs

Edmon Chung

Speech speed

158 words per minute

Speech length

1462 words

Speech time

554 secs

Leon Sanchez

Speech speed

145 words per minute

Speech length

1302 words

Speech time

539 secs

Sally Costerton

Speech speed

180 words per minute

Speech length

3364 words

Speech time

1123 secs

Tripti Sinha

Speech speed

170 words per minute

Speech length

1220 words

Speech time

430 secs

Veni Markovski

Speech speed

175 words per minute

Speech length

2439 words

Speech time

835 secs

Vera Major

Speech speed

182 words per minute

Speech length

254 words

Speech time

84 secs

Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Christian von Essen

The implementation of AI language understanding has yielded promising results in reducing the presence of inappropriate sexual content in search results. It was reported in 2022 that there had been a 30% decrease in such content from the previous year, thanks to the application of AI algorithms. This positive development has continued in subsequent years, with ongoing efforts to further decrease the presence of harmful content.

Addressing bias in AI is a crucial aspect of promoting equality, and specific measures have been taken to ensure that training data includes protected minority groups. To counteract bias, training data now includes groups such as “Caucasian girls,” “Asian girls,” and “Irish girls.” Additionally, patterns across different groups are utilized to automatically expand the scope from one group to another, effectively reducing biases in AI systems.

Success in mitigating bias is measured by comparing the performance of classifiers across different data slices, including LGBTQ, gender, and race. The goal is to ensure that the probability of predicting inappropriate content remains consistent across all data slices, regardless of individual characteristics.

The inclusion of corrective training data and the application of additional methods have led to significant improvements in the equality of quality across different data slices. These improvements are evident when comparing models to baseline models. Furthermore, the introduction of more methods and data further enhances these gains.

Counterfactual fairness in AI involves making sure that the outcome of a classifier doesn’t significantly change when certain terms related to marginalized minority groups are modified. For example, if a search query includes the term “black woman video,” the classifier should predict a similar outcome if the term is replaced with “black man video” or “white woman video.” This approach ensures fairness across all user groups, regardless of their background or identity.

Ablation, which is also a part of counterfactual fairness, focuses on maintaining fairness even when specific terms are removed from a query. The output of classifiers should not change significantly, whether the query includes terms like “black woman video,” “black woman dress,” or simply “woman dress.” This helps ensure fairness in AI systems by reducing the impact of specific terms or keywords.

Fairness in AI systems should not be limited to gender and race-related terms. The behavior of classifiers and systems should remain consistent across all data slices, including categories such as LGBTQ queries. This comprehensive approach ensures fairness for all users, irrespective of their identities or preferences.

Counterfactual fairness is considered a necessary initial step in augmenting training data and creating fair classifiers. By ensuring that classifiers’ predictions remain consistent across different query modifications or term replacements related to marginalized minority groups, AI systems can strive for fairness and inclusivity.

While the initial focus of language models like BERT was on creating credible and useful models, efforts to address bias and fine-tune these models were incorporated later. It was vital to establish the credibility and usefulness of such models before incorporating bias correction techniques.

As AI models continue to grow in size, selecting appropriate training data becomes increasingly challenging. This recognition highlights the need for meticulous data selection and representation to ensure the accuracy and fairness of AI systems.

Ensuring the representativeness of training data is seen as a priority before fine-tuning the models. By incorporating representative data from diverse sources and groups, AI systems can better account for the various perspectives and experiences of users.

The distinction between fine-tuning and the initial training step is becoming more blurred, making it difficult to identify where one ends and the other begins. This intermingling of steps in the training process further emphasizes the complexity and nuances involved in effectively training AI models.

In conclusion, the use of AI language understanding has made significant progress in reducing inappropriate sexual content in search results. Efforts to address bias and promote equality through the inclusion of training data for protected minority groups, comparing classifier performance across different data slices, and ensuring counterfactual fairness have proven successful. However, it is essential to extend fairness beyond gender and race to encompass other categories such as LGBTQ queries. The ongoing efforts to improve the credibility, bias correction, and selection of training data highlight the commitment to creating fair and inclusive AI systems.

Emma Gibson – audience

The Equal Rights Trust has recently launched a set of equality by design principles, which has received support from Emma Gibson. Emma, a strong advocate for gender equality and reduced inequalities, believes in the importance of incorporating these principles at all stages of digital technology development. Her endorsement highlights the significance of considering inclusivity and fairness during the design and implementation of digital systems.

Emma also emphasizes the need for independent audits to prevent digital systems from perpetuating existing biases. She emphasizes the importance of ensuring that these systems do not perpetuate discriminatory practices and instead promote fairness and justice. Conducting regular audits allows for the identification and effective addressing of any biases or discriminatory patterns within these digital systems.

The alignment between these principles and audits with the Sustainable Development Goals (SDGs) further reinforces their importance. Specifically, they contribute to SDG 5 on Gender Equality, SDG 10 on Reduced Inequalities, and SDG 16 on Peace, Justice, and Strong Institutions. By integrating these principles and performing regular audits, we can strive towards bridging the digital divide, reducing inequalities, and fostering a more inclusive and just society.

In conclusion, the equality by design principles introduced by the Equal Rights Trust, with support from Emma Gibson, offer valuable guidance for digital technology development. Emma’s advocacy for independent audits underscores the necessity of bias-free systems. By embracing these principles and conducting regular audits, we can work towards creating a more inclusive, equal, and just digital landscape.

Audience

The discussions surrounding gender inclusivity in AI highlight several concerns. One prominent issue is the presence of biased outputs, which are often identified after the fact and require corrections or fine-tuning. This reactive approach implies that more proactive measures are needed to address these biases. Furthermore, the training data used for AI might perpetuate gender gaps, as there is a lack of transparency regarding the percentage of women-authored data used. This opacity poses a challenge in accurately assessing the gender inclusivity of AI models.

Another factor contributing to gender gaps in AI is the digital divide between the Global North and the Global South. It has been observed that most online users in the Global South are male, which suggests a lack of diverse representation in the training data. This further widens the gender gap within AI systems.

To promote gender inclusivity, there is a growing consensus that greater diversity in training data is necessary. While post-output fine-tuning is important, it is equally essential to ensure the diversity of inputs. This can be achieved by using more representative training data that includes contributions from a wide range of demographics.

There are also concerns about the interaction between AI and gender inclusivity, particularly with regards to surveillance. The use of AI in surveillance systems raises questions about privacy, biases, and potential infringements on individuals’ rights. This highlights the need for careful consideration of the impact of AI systems on gender equality, as they can unintentionally reinforce existing power dynamics.

In terms of governance, there is a debate about the value of non-binding principles in regulating AI. Many international processes have attempted to set out guidelines for AI governance, but few are binding. This lack of consistency and overlapping initiatives raises doubts about the effectiveness of these non-binding principles.

On the other hand, there is growing support for the implementation of independent audit mechanisms to assess AI outcomes. An independent audit would allow for the examination of actions taken by companies like Google to determine whether they are producing the desired outcomes. This mechanism would provide a more objective assessment of the impact of AI and help hold companies accountable.

Investing in developing independent audit mechanisms for AI is seen as a more beneficial approach than engaging in non-binding conversations or relying solely on voluntary principles. This suggests that tangible actions and oversight are needed to ensure that AI systems operate in an ethical and inclusive manner.

The representation of women in the tech field remains extremely low. Factors such as language barriers and a lack of representation in visual search results contribute to this underrepresentation. To address this, there needs to be a greater focus on upskilling, reskilling, and the introduction of the female voice in AI. This includes encouraging more girls to pursue technology-related studies and creating opportunities for women to engage with AI-based technologies.

Overall, while there are challenges and concerns surrounding gender inclusivity in AI, there is also recognition of the positive vision and opportunities that AI adoption can provide for female workers. By addressing these issues and actively working towards gender equality, AI has the potential to become a powerful tool for promoting a more inclusive and diverse society.

Emma Higham

Google is leveraging the power of Artificial Intelligence (AI) to enhance the safety and inclusivity of their search system. Emma Higham, a product manager at Google, works closely with the SafeSearch engineering team to achieve this goal. By employing AI technology, they can test and refine their systems, ensuring a safer and more inclusive user experience.

Google’s mission is to organize the world’s information and make it universally accessible and helpful. Emma Higham highlights this commitment, emphasizing Google’s dedication to ensuring information is available to all. AI technology plays a vital role in this mission, facilitating efficient pattern matching at scale and addressing inclusion issues effectively.

Google’s approach prioritizes providing search results that do not shock or offend users with explicit or graphic content unrelated to their search. Emma Higham mentions that this principle is one of their guidelines, reflecting Google’s commitment to user safety and a positive search experience.

Guidelines are crucial for assessing search result quality and improving user satisfaction. Google has comprehensive guidelines for raters, aiming to enhance search result quality. These guidelines include the principle of avoiding shocking or offending users with unsought explicit content. Adhering to these guidelines ensures search results that meet user needs and expectations.

Addressing biases in AI systems is another important aspect for Google. Emma Higham acknowledges that AI algorithms can reflect biases present in training data. To promote fairness, Google systematically tests the fairness of their AI systems across diverse user groups. This commitment to accountability ensures equitable search results and user experiences for everyone.

Google actively collaborates with NGOs worldwide to enhance safety and handle crisis situations effectively. Their powerful AI system, MUM, enables more efficient handling of personal crisis searches. With operability in 75 languages and partnerships with NGOs, Google aims to improve user safety on a global scale.

In the development process of AI technology, Google follows a cyclical approach. It involves creating the technology initially, followed by fine-tuning and continuous improvement. If the technology does not meet the desired standards, it goes back to the first step, allowing Google to iterate and refine their AI systems.

Safety and inclusivity are essential considerations in the design of AI technology. Emma Higham emphasizes the importance of proactive design to ensure new technologies are developed with safety and inclusivity in mind. By incorporating these principles from the beginning, Google aims to create products that are accessible to all users.

AI has also made significant strides in language and concept understanding. Emma Higham highlights improvements in Google Translate, where AI technology has enhanced gender inclusion by allowing users to set their preferred form factor. This eliminates the need for default assumptions about a user’s gender and promotes inclusivity in language translation.

User feedback is paramount in improving systems and meeting high standards. Emma Higham provides an example of how user feedback led to improvements in the Google Search engine during the Women’s World Cup. Holding themselves accountable to user feedback drives Google to deliver better services and ensure their products consistently meet user expectations.

In conclusion, Google’s use of AI technology is instrumental in creating a safe and inclusive search system. Through collaboration with the SafeSearch engineering team, Google ensures continuous testing and improvement of their systems. Guided by their mission to organize information and make it universally accessible, AI aids pattern matching at scale and tackles complex mathematical problems. Google’s commitment to avoiding explicit content, addressing biases, and incorporating user feedback strengthens their efforts towards a safer and more inclusive search experience. Additionally, their partnership with NGOs and the development of MUM showcases their dedication to improving safety and handling crisis situations effectively. By embracing proactive design and incorporating user preferences, AI technology expands inclusivity in products such as Google Translate.

Bobina Zulfa

A recent analysis of different viewpoints on AI technologies has revealed several key themes. One prominent concern raised by some is the need to understand the concept of “benefit” in relation to different communities. The argument is that as AI technologies evolve and are adopted across various communities, it is vital to discern what “benefit” means for each community. This is crucial because technologies may produce unexpected outcomes and may potentially harm rather than help in certain instances. This negative sentiment stems from the recognition that the impact of AI technologies is not uniform and cannot be assumed to be universally advantageous.

On the other hand, there is a call to promote emancipatory and liberatory AI, which is seen as a positive development. The proponents of this argument are interested in moving towards greater agency, freedom, non-discrimination, and equality in AI technologies. The emphasis is on AI technologies being relevant to communities’ needs and realities, ensuring that they support the ideals of non-discrimination and equality. This perspective acknowledges the importance of considering the socio-cultural context in which AI technologies are deployed and the need to design and implement them in a way that reflects the values and goals of diverse communities.

Another critical view that emerged from the analysis is the need to move away from techno-chauvinism and solutionism. Techno-chauvinism refers to the belief that any and every technology is inherently good, while techno-solutionism often overlooks the potential negative impacts of technologies. The argument against these views is that it is crucial to recognize that not all technologies are beneficial for everyone and that some technologies may not be relevant to communities’ needs. It is essential to critically evaluate the potential harms and benefits of AI technologies and avoid assuming their inherent goodness.

The analysis also highlighted concerns regarding data cleaning work and labour. It is important to acknowledge and support the people who perform this cleaning work, as their labour has implications for their quality of life. This perspective aligns with the goal of SDG 8: Decent Work and Economic Growth, which emphasizes promoting decent work conditions and ensuring fair treatment of workers involved in data cleaning processes.

Furthermore, the analysis identified issues with consent in Femtech apps. Femtech refers to technology aimed at improving women’s health and well-being. The concerns raised encompass confusing terms and conditions and possible data sharing with third parties. The lack of meaningful consent regimes in Femtech apps can have significant implications for gender inequality. This observation underscores the need for robust privacy measures and clear and transparent consent processes in Femtech applications.

The analysis also noted the importance of considering potential issues and impacts of AI technologies from the early stages of development. Taking a proactive approach, rather than a reactive one, can help address and mitigate any potential negative consequences. By anticipating and addressing these issues, the development and implementation of AI technologies can be more socially responsible and in line with the ideals of sustainable development.

Skepticism was expressed towards the idea of using small data sets to detect bias. The argument is that limited data sets may not represent a significant portion of the global majority. If the data used in AI algorithms is not representative, it could lead to biased outcomes in the end products. This skepticism highlights the need to ensure diverse and inclusive data sets that reflect the diversity of communities and avoid reinforcing existing biases.

Finally, the analysis highlighted initiatives such as OECD’s principles that could help address the potential issues surrounding AI technologies. These principles stimulate critical thinking about the potential social, economic, and ethical impacts of AI technologies from the outset. Several organizations are actively promoting these principles, indicating a positive and proactive approach towards ensuring responsible and trustworthy AI development and deployment.

In conclusion, the analysis of different viewpoints on AI technologies revealed a range of concerns and perspectives. It is important to understand the notion of benefit for different communities and recognize that technologies may have unintended harmful consequences. However, there is also a call for the promotion of emancipatory and liberatory AI that is relevant to communities’ needs, supports non-discrimination and equality. Critical views on techno-chauvinism and solutionism emphasized the need to move away from assuming the inherent goodness of all technologies. Additional concerns included issues with data cleaning work and labour, consent in Femtech apps, potential issues and impacts from the start of AI technology development, skepticism towards using small data sets to detect bias, and the importance of initiatives like OECD’s principles. This analysis provides valuable insights into the complex landscape of AI technologies and highlights the need for responsible and ethical decision making throughout their development and deployment.

Jim Prendergast

Dr. Luciana Bonatti, a representative from the National University of Cordoba in Argentina, was unable to present due to an outbreak of wildfires in the area. The severity of the situation forced her and her family to evacuate their home, resulting in her unavoidable absence.

The wildfires that plagued the region prompted Dr. Bonatti’s evacuation, highlighting the immediate danger posed by the natural disaster. The outbreak of wildfires is a significant concern, not only for Dr. Bonatti, but also for the affected community as a whole. The intensity of the situation can be inferred from the negative sentiment expressed in the summary.

Jim Prendergast, perhaps an attendee or colleague, demonstrated empathy and solidarity towards Dr. Bonatti during this challenging time. Acknowledging her circumstances, Prendergast expressed sympathy and conveyed his well wishes, hoping for a positive resolution for Dr. Bonatti and her family. His positive sentiment demonstrates support and concern for her well-being.

It is worth noting the related Sustainable Development Goals (SDGs) mentioned in the summary. The wildfire outbreak in Argentina aligns with SDG 13: Climate Action, as efforts are necessary to address and mitigate the impacts of climate change-induced disasters like wildfires. Additionally, the mention of SDG 3: Good Health and Well-being and SDG 11: Sustainable Cities and Communities in relation to Jim Prendergast’s stance signifies the broader implications of the situation on public health and urban resilience.

In conclusion, Dr. Luciana Bonatti’s absence from the presentation was a result of the wildfire outbreak in Argentina, which compelled her and her family to evacuate. This unfortunate circumstance received empathetic support from Jim Prendergast, who expressed sympathy and wished for a positive outcome. The summary highlights the implications of the natural disaster in the context of climate action and sustainable development goals.

Lucia Russo

The Organisation for Economic Cooperation and Development (OECD) has developed a set of principles aimed at guiding responsible and innovative artificial intelligence (AI) development. These principles promote gender equality and are based on human-centered values and fairness, with a focus on inclusive growth and sustainable development. Currently, 46 countries have adhered to these principles.

To implement these principles, countries have taken various policy initiatives. For example, the United States has established a program to improve data quality for AI and increase the representation of underrepresented communities in the AI industry. Similarly, the Alan Turing Institute in the United Kingdom has launched a program to increase women’s participation in AI and examine gender gaps in AI design. The Netherlands and Finland have also worked on developing guidelines for non-discriminatory AI systems in the public sector. These policy efforts demonstrate a commitment to aligning national strategies with the OECD AI principles.

The OECD AI Policy Observatory serves as a platform for sharing tools and resources related to reliable AI. This platform allows organizations worldwide to submit their AI tools for use by others. It also includes a searchable database of tools aimed at various objectives, including reducing bias and discrimination. By facilitating the sharing of best practices and tools, the Observatory promotes the development of AI in line with the OECD principles.

In addition to the policy-focused initiatives, the OECD has published papers on generative AI and big trends in AI analysis. These papers provide analysis on AI models, their evolution, policy implications, safety measures, and the G7 Hiroshima process involving generative AI. While the OECD focuses on analyzing major trends in AI, it is not primarily focused on providing specific tools or resources.

There is an acknowledgement of the need for more alignment and coordination in the field of AI regulation. Efforts are being made to bring stakeholders together and promote coordination. For instance, the United Kingdom is promoting a safety summit to address AI risks, and the United Nations is advancing work in this area. The existence of ongoing discussions and developments demonstrates that the approach to AI regulation is still in the experimental phase.

The representation of women in the AI industry is a significant concern. Statistics show a low representation of women in the industry, with more than twice as many young men as women capable of programming in OECD countries. Only 1 in 4 researchers publishing on AI worldwide are women, and female professionals with AI skills represent less than 2% of workers in most countries. To address this issue, policies encouraging women’s involvement in science, technology, engineering, and mathematics (STEM) fields are important. Role models, early exposure to coding, and scholarships are mentioned as ways to increase women’s participation in these areas.

Furthermore, there is a need to promote and invest in the development of large language models in languages other than English. This would contribute to achieving Sustainable Development Goals related to industry, innovation, infrastructure, and reduced inequalities.

Overall, the OECD’s principles and initiatives provide a framework for responsible and inclusive AI development. However, there is a need for greater coordination, alignment, and regulation in the field. Efforts to increase women’s representation in the AI industry and promote diversity in language models are essential for a more equitable and sustainable AI ecosystem.

Jenna Manhau Fung

The analysis of the speeches reveals several significant findings. Firstly, it highlights that AI can eliminate unintentional human bias and bring more impartiality. This is valuable as it ensures fair decision-making processes and reduces discrimination that may arise from human biases. Leveraging AI technology can enable organizations to improve their practices and achieve greater objectivity.

Another important point emphasized in the analysis is the significance of involving users and technical experts in the policymaking process, particularly in relation to complex technologies like AI. By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leading to the creation of more comprehensive and effective policies. This ensures that policies address the diverse needs and concerns of different stakeholders and promote equality and inclusivity.

Moreover, the analysis underscores the importance of international standards in the context of AI and related industries. International standards can assist countries in modernizing their legal frameworks and guiding industries in a way that aligns with ethical considerations and societal needs. These standards promote consistency and harmonization across different regions and facilitate the adoption of AI technologies in an accountable and inclusive manner.

In addition to these main points, the analysis highlights the need for an inclusion mechanism for small-scale writers. It argues that such a mechanism is essential to address situations where the content of these writers does not appear in search engine results due to certain policies. This observation is supported by a personal experience shared by one of the speakers, who explained that her newsletter did not appear in Google search results because of existing policies. Creating an inclusion mechanism would ensure fair visibility and opportunities for small-scale writers, promoting diversity and reducing inequality in the digital domain.

Overall, the analysis emphasizes the transformative potential of AI in eliminating biases and promoting neutrality. It underscores the importance of involving users and technical experts in policymaking, the significance of international standards, and the need for an inclusion mechanism for small-scale writers. These insights reflect the importance of considering diverse perspectives, fostering inclusivity, and striving for fairness and equality in the development and implementation of AI technologies.

Moderator – Charles Bradley

Charles Bradley is hosting a session that aims to explore the potential of artificial intelligence (AI) in promoting gender inclusivity. The session features a panel of experienced speakers who will challenge existing beliefs and encourage participants to adopt new perspectives. This indicates a positive sentiment towards leveraging AI as a tool for good.

Bradley encourages the panelists to engage with each other’s presentations and find connections between their work. By fostering collaboration, he believes that the session can achieve something interesting. This highlights the importance of collaborative efforts in advancing gender inclusivity through AI. The related sustainable development goals (SDGs) identified for this topic are SDG 5: Gender Equality and SDG 17: Partnerships for the Goals.

Specific mention is made of Jenna Manhau Fung’s experiences in youth engagement in AI and policy-making, as well as her expertise in dealing with Google’s search policies. This recognition indicates neutral sentiment towards the acknowledgement of Fung’s insights and experiences. The related SDGs for this discussion are SDG 4: Quality Education and SDG 9: Industry, Innovation and Infrastructure.

Furthermore, Bradley invites audience members to contribute to the discussion and asks for questions, fostering an open dialogue. This reflects a positive sentiment towards creating an interactive and engaging session.

Another topic of interest for Bradley is Google’s approach to counterfactual fairness, which is met with a neutral sentiment. This indicates that Bradley is curious about Google’s methods of achieving fairness within AI systems. The related SDG for this topic is SDG 9: Industry, Innovation and Infrastructure.

The discussion on biases in AI systems highlights the need for trust and the measurement of bias. Google’s efforts in measuring and reducing biases are acknowledged, signaling neutral sentiment towards their work in this area. The related SDG for this topic is SDG 9: Industry, Innovation and Infrastructure.

Bradley believes that the work on principles will set the stage for upcoming regulation, indicating a positive sentiment towards the importance of establishing regulations for AI. The enforceable output of regulation is seen as more effective than principles alone. The related SDG for this topic is SDG 9: Industry, Innovation, and Infrastructure.

The session also explores the positive aspects of generative AI in the fields of coding and learning. It is suggested that generative AI can speed up the coding process and serve as a tool for individuals to learn coding quickly. This perspective is met with a positive sentiment and highlights the potential of AI in advancing coding and learning. The related SDGs for this topic are SDG 4: Quality Education and SDG 9: Industry, Innovation, and Infrastructure.

Moreover, Bradley emphasizes the importance of investing in AI training in languages other than English, implying a neutral sentiment towards the necessity of language diversity in AI. This recognizes the need to expand AI capabilities beyond the English language. The related SDG for this topic is SDG 9: Industry, Innovation, and Infrastructure.

Lastly, the role of role models in encouraging more young women to enter the fields of science and coding is discussed with a positive sentiment. Policies and actions to motivate women in science are emphasized, highlighting the importance of representation in these fields. The related SDGs for this topic are SDG 4: Quality Education and SDG 5: Gender Equality.

In conclusion, Charles Bradley’s session focuses on exploring the potential of AI in promoting gender inclusivity. The session aims to challenge existing beliefs, foster learning new perspectives, and encourage collaboration among panelists. It covers a range of topics, including youth engagement in AI, counterfactual fairness, measuring biases, guiding principles, generative AI in coding and learning, investing in language diversity, and the importance of role models. The session promotes open dialogue and aims to set the stage for future AI regulation.

Session transcript

Moderator – Charles Bradley:
Hi, everybody. This is the session after lunch, and we’re quite far away from lunch physically, so we’re just waiting for a few more people to walk into the room, so we’ll wait another minute Well, hi, everybody. My name’s Charles Bradley. I work at ADAPT, a tech policy and human rights consultancy based in London. I’m very excited to be here on the last day of this IGF. We’re in a very large room, and we would encourage people who are here to come to the table as we’ll try and ensure we have a good conversation in a bit. The more we can see your lovely faces, the more we can engage with you, and the more interesting this is going to be. I think this is my ninth IGF. It’s been a fun one, and there have been an inordinate number of discussions about AI, and this is another one. We have a great panel with lots of experience and a range of expertise on the topic that we’re going to talk about, and I’m going to try and make this as sort of focused and as practical as possible. There have been lots of conversations floating at all different levels of feet, and we really want to make sure that we leave this having learned something or having something that we believe being challenged. So that’s our sort of task today. So we actually leave the room with something new or thinking about something that we haven’t thought about in the same way before. The session is titled Leveraging AI to Support Gender Inclusivity, and there are obviously many, many routes that this could take. We really want to focus the session on leveraging AI as a tool for good. So how can AI actually be used to solve some of these problems? We’re going to sort of kick off, as nearly every session at the IGF does, with a round of sort of presentations and opening remarks from the panel. Rather than me go through a very long introduction of their names, organizations, which you will immediately forget, I will ask them to introduce themselves as they speak, and then we’re going to have plenty of time today for a discussion, both across the panelists and within the room. So I’d like us to leave the room knowing something new or having an existing sort of belief or something sort of being challenged. The other challenge I pose to you, which is unique, is that we actually engage with what people are saying in the room. So we’d like our speakers to think about what the other speakers have said and try to connect their work to their peers, and also for when we’re asking questions, to really engage with what’s already been said. I think that will really help us try and get to something interesting for today. So with that, I will pass to our first speakers, Christian and Emma from Google, who are joining us virtually. Christian and Emma, over to you.

Emma Higham:
Hi there, Charles. Can you see and hear me okay? Yes, we can. Fantastic. Thanks all so much for having us. My name is Emma Hyam, and I’m here from Google, where I work with the SafeSearch engineering team as a product manager. I’m here with my colleague Christian von Essen, who’s a lead engineer on the team, and we want to talk about one of the ways that Google is using AI to make search safer, but also more inclusive. This sometimes poses unique challenges, which we can dive into in a second. But in general, we’re really excited about the technology and the way that it is actually enabling us to test our systems and provide a more inclusive system, a more inclusive experience in a way that we can validate and return back to users. Now, Christian, I’ll pass to you to introduce yourself and then we can kick off with a few slides. I’ll just get them up.

Christian von Essen:
Sure, but you did a good job introducing me already, I think. So, hi, my name is Christian. I work for Google as a tech lead and a manager. I’ve been doing this for close to 10 years now, and the kind of work that we’re going to present here has been one of the biggest breakthroughs that we had in the last 10 years that I’ve been doing this.

Emma Higham:
Awesome. Well, if you don’t mind guys we’ll just spend a few minutes sharing a few slides because I think this will make it more tangible and then we’re looking forward to the discussion. So, I’ll start by saying that you know everything we do at Google goes back to our mission, organizing the world’s mission, organizing the world’s information to make it universally accessible and useful. And one of the things about the world’s information is it’s a lot of information, and the information needs that we see are also at a huge scale. And they’re dynamic, people come to us with new kinds of questions every day. In fact, 15% of all searches are new, daily. That means that we need systems that are also dynamic. We have hundreds of billions of web pages, 15% of queries are new every day. The question that Christian and I really ask ourselves in our job is how can we do content moderation, how can we offer safe systems which we designed to be inclusive. And how can we do it at scale. We want to do that while still returning useful search results, ones that answer your questions. So it’s a dynamic challenge, and what we find is with these kind of scaled dynamic problems pattern matching is really helpful. And one thing that I found as I’ve deep dived on AI is AI is really pattern matching at scale. It’s using computers to do pattern matching in a way that we perhaps weren’t able to do before. It’s a way to understand patterns that help us do math, but also that help us understand sometimes inclusion problems. So, I’ll start by just kind of one of the fundamental principles that guide our work here, and then I’m going to pass to Christian to share some of the tangible ways that we have tried to improve on this approach. So the first thing I’ll share is that one of our principles in search is that we never want to shock or offend people with explicit or graphic content when that’s not what they’re looking for. You know, this is part of the fundamental thing of helping you find quality and relevant information. And people often ask us, how do your algorithms work? Like, how should we understand what you think of as quality? And something that kind of, I was really impressed by as I started working with the search teams, is they actually publish 100, and I think it’s 160 pages now, of guidelines to raters that we use to help us understand the quality of results. And it’s in these guidelines that you see this principle codified. The principle that we never want to shock or offend you with explicit or graphic content when it’s not what you’re looking for. And the way we do that is really by understanding the intent behind your query. Understanding the intent behind your query requires language understanding. Now, in the most sort of brute force way, this would be, you type in a query, I’m sitting here in Mountain View, California, you type in a query Mountain View, and we understand that Mountain View doesn’t actually just mean a view from the top of a mountain. It means a place, because we have an understanding that Mountain View refers to a place. And we know that because it matches a bunch of web documents about the place. What we’re seeing with natural language processing is that this is getting a lot smarter. Our ability to do pattern matching goes far beyond just understanding that Mountain View is a place. And that’s making us much more effective at understanding when you were seeking out something that may have been a little racy, versus when you had more innocent interpretation of the query. But many of you may be wondering, why was there ever a problem with encountering the shocking racy content in the first place? So I’m going to hand over to Christian to shed a bit more light on that.

Christian von Essen:
Thank you. So, in particular in the past but still nowadays to a large extent at the core search algorithms like what Google is really work by finding documents that have the same words that appear in your query. And so, these results really are a reflection of what the internet has to offer for these particular words. And for a query, say like amateur, the vast majority of these documents on the internet is pornographic, right? Amateur porn is a very popular thing. But amateur doesn’t really necessarily have porn intent, right? The user might be looking for something else and might be very surprised to be confronted with pornography. To counteract this effect, we have that requires special subsystems. And these subsystems always had also to focus on queries that touch on identity terms, right? So that they are not unevenly affected by shocking content. Can we move to the next slide? Yes. Thank you. In 2022, we shared that we reduced unnecessary sexual results by 30% in the previous year. And we used AI language understanding, natural language understanding to achieve this huge reduction. And we’ve seen a similar improvement in the following year. And we’re still working on reducing the bad content further. Now, how can we use AI language understanding like BERT to do so? Let’s go to the next slide. You might say that it’s as simple as just trying to classify to predict when sexual content is okay, right? Yeah, but as we all here know, that’s why we’re here, AI comes with its own challenges. In particular, an AI can suffer from biases that would limit the usefulness, right? If AI thinks amateur means porn, then it doesn’t help us. So how do we address the bias in AI? Can we move to the next slide? Thank you. We specifically include training data, in this case, for protected minority groups. For example, Caucasian girls, Asian girls, Irish girls. And as you can see here, many patterns that we see as problematic are the same across groups. Black girl videos, white girl videos, something like that. And then when generating this training data for protected minority groups, we make use of these patterns to expand from one group to another automatically. And we can exploit the same kind of patterns, not only to address issues in biases of AI, but also in the biases of human races that generate this training data. Can we go to the next slide? Now, we have this wonderful approach, but does it actually work? To know that, we need to measure. To measure if they actually are successful in mitigating the biases, we see how our classifiers do as we compare across different slices. So are we, for example, as good or as bad as in a random other slice or in the whole slice? When we look at just queries touching on LGBTQ or touching on gender, touching on race. So a bit more formal, the probability of predicting this sort of porn could be the same for any slice of data, no matter what the slice is, given the same labels, given that it actually looks for porn or doesn’t. And compared to the baseline models that we had earlier or that we have without this corrective training data, we do see significant gains in equity, in being the same quality for in-slices and out-slices. And then as we added more methods and more data, we saw even further gains. And that’s this part. And then back to Emma.

Emma Higham:
Yes, I think this is really exciting to me because I think we often worry about, is the system working fairly for all user groups? Is the system working fairly and really representing the world in the way that it is fair to all user groups? What we’ve found here is that there’s a way to actually test that. And does that mean that every single system, when first naively built, is going to be fair? No, because it’s going to reflect biases and training data because it’s going to reflect the biases of people that may make it. That’s kind of true of any institution or system that we build. So that’s one way to hold our systems accountable. And what I’ve been really excited about with AI is both the power of the natural language processing that we’re seeing, the ability to understand users at scale across a wide range of locales, and understand the nuances of what they’re saying, while also holding that system accountable to making sure that it’s working fairly across all of these different groups. And I wanted to share that because what we’re also seeing is that similar to BERT, which is one form of natural language processing, we are also able to apply MUM, another very powerful system, to making our search results safer. So a critical example that’s really close to my heart is how we’ve applied MUM to improve personal crisis searches. We see queries like how to get help in search, queries, unfortunately, like, I want to kill myself. These are queries, which show the severity of a moment that a user is in. And they are not always written in naive terms, they’re not always written in a way that is easy for us to understand. With natural language processing, we’re able to translate the queries and say, this looks like a user may be in a moment of crisis, which makes us more able to return relevant results and return helpful resources. And, you know, for some of the severe queries I just mentioned, we really focus on partnering with NGOs around the world to provide helpful resources. And what we’re particularly excited about with MUM is that we’re able to be really effective across languages. There’s 75 locales where MUM is trained and operating highly effectively and that was the kind of power we were able to bring to the problem of personal crisis searches, leading to major improvements last year. So that’s it for today. We’re really excited to talk more about AI and how we’ve seen it work, not just be effective to the problem of being more inclusive across genders, but also to making systems safer at scale.

Moderator – Charles Bradley:
Thank you, Emma and Christian. I think it’s really useful to set us up with that. We needed to learn something new from today. I’ve got BERT, MUM, pattern matching, slices, lots of things that I have questions about. And I’m sure people want to dig into which we’ll get into in a bit, but that’s really, really sort of set the scene in very practical ways that this technology or technologies can be used for gender inclusivity. I’m going to come to Babina next from policy. So Babina, please introduce yourself and the floor is yours. Thank you.

Bobina Zulfa:
Sure. Good morning. Can you hear me? Yes, thanks. Perfect. Morning. It’s morning where I am. I understand it’s afternoon over there. A pleasure to be a part of this discussion. My name is Babina Zulfa and I’m a data digital rights researcher with Policy. So Policy is a feminist collective of researchers, academics, designers, etc. We work at the intersection of data, tech and society. So a lot of our work is socio-technical in a sense. So we are Pan-African and so a lot of our work just looks at how technologies are being adopted across the continent and how that is impacting communities in just different ways for the better or for worse. And we do that, especially through our research. We document that and come up with recommendations, particularly for government, but also now for other groups, civil society and technologists as well. I took this session over from Nima, who is our outgoing ED. She wasn’t able to be part of this, but it’s a pleasure to just be able to jump in and take this on. You did talk about tying in with the previous speakers and it’s interesting because I was thinking around, I guess I’ll jump into that in a bit. But I just did want to say that from the work we’ve been doing, we have a three-part report called Women in AI. So we’ve been looking at the intersection of gender and AI for the past maybe three years. And we’ve documented that and just looked at how these technologies are being used by African women who are in many ways, much less involved in terms of access, in terms of usage, meaningful usage, where there are limitations in terms of language, in terms of literacy, etc. But just recently, actually yesterday, we have a new handbook that just published. We’ve been doing the work with IDRC and this is sort of putting across draft principles to guide policymakers in thinking about how to govern these systems, but not just policymakers, civil society and technologists as well as they’re developing these systems. So just of the background, I think for my sharing today, I just wanted to point out that a lot of our work has been in a sense critical because we’re feminists and so we use the Afrofeminist lens to analyze this intersection that I’ve been talking about. And I’ll just start from a point of, I think something that for us we’ve been, especially with the last piece of work, the handbook that I’ve been talking about, is we’ve been broadly questioning the notion of as technologies are being developed and adopted across the continent. I’m noting this very much within our work, which is on the African continent, but I am. open to open this up, is that the notion of benefit, right, that these technologies are benefiting people in such and such a way. I think that’s a very broad term and our work has been working to, you know, sort of demystify that or just make that very clear what does benefit mean for different communities as maybe a model is being, there is satellite models we’re seeing that are being brought about to just maybe look at how much communities are getting electrified. What does that do for the communities as they’re maybe getting, you know, more surveilled and then losing their privacy. So we’ve just been working to understand that notion of benefit, what does benefit mean indeed. And so from that, we’ve been moving to a point of, you know, I think we did, we’ve seen that a lot of the research that’s being done around, you know, understanding ethics and responsibility when it comes to development and adoption of AI is the notion of safety and security. But I think we’re trying to move more to a place of emancipatory and liberatory AI. How do these technologies bring just more agency, more freedom, more non-discrimination, more equality for the people who these technologies are being, you know, created for or as governments are bringing them down onto the people for, you know, public benefit or private sector using them for whatever reasons. And so I’ll just say then that, you know, a number of things, I’ll just again, I think maybe quickly tie in with what Emma and Christian were sharing, which was something that I think I’d wanted to talk about a little, very interesting to hear about, for example, the MUM model and the crisis, you know, touches, that’s really, really interesting to hear about when you’re talking about the, you know, trying to shelter the users from the, say, explicit or graphic information. That’s something I think first we’ve been exploring on the other end, and just the, the, the, the, the broader trying to bring to question, how does that happen as you’re trying to clean up those data sets. So visibilizing of the workers who are behind doing that work. So I’ve been very interested in hearing that from both of you, Emma and Christian, because we’ve been talking so much about that, you know, in the broader, you know, data, just data justice and data exploitation conversation. Because we do know that these models, well, are, you know, of course, advancing greatly and are able to, in, you know, many ways, do sort of self cleaning. But there is again, you know, human labor that is doing that, that cleaning. And so what does that mean for the people that are doing that work? Is it what are the, you know, what’s their quality of life from doing that work? So that’s, that’s one of the things I just want to quickly tie that in with that with the, you know, bias and just trying to debias the systems. And then, just broadly, I think we’ve been looking at as our societies are increasingly data buying. And so part of that is, you know, intelligence systems are being taken up in different, you know, parts of our societies. We’ve been looking at, for example, femtech, which is, I think, something that’s becoming popular, especially here on the continent, where, for example, women haven’t typically had easy access to medical services. And now, there are these, you know, these, these are femtech apps that you could use, whether they’re menstrual health apps, or pregnancy apps. And now we’ve read work, for example, I think Mozilla has done a lot of work on this, showing that, you know, there is the consent, the regimes are faulty, or then they’re not very meaningful in the sense that the terms and conditions that are offered in there are sometimes just the legalese is too much for the people to understand, or they’re confusing, or they do live on certain notions where maybe your data will be shared with a third party. So these are just a number of issues that we’re exploring in our work as well. Meaningful consent, etc. We’re looking at also techno-chauvinism, as a lot of these technologies are being brought up. This is, I think, from Meredith Brewster’s work, we were looking at, you know, again, going back from where I started, which is, you know, the notion of benefits. Sometimes technologies are brought onto communities, and they do not do more good, and they do more harm. And so we’re questioning the notion of this notion of any and every technology is for the good. And so the moving away from the idea of techno-solutionism, and, you know, moving to a place of, you know, getting solutions on board that actually are relevant to communities needs and their realities, etc. So I think for us in our broader conversation, we find that we engage a lot with the conversation of power symmetries. Again, there is the developer, there is the end user. And along that, especially for the end user, how do these technologies, you know, impact their lives for better or for worse. And we look at that, and we find that usually it’s not uni-dimensional, usually it’s intersectional in a way, you know, you find if it is harm, it’s happening at a very intersectional level, at different levels. And so just to wrap up my submission, I just want to say for us, we’re very much interested in moving towards a place of, you know, realising AI technologies that are more, you know, liberatory and emancipatory to the communities that these technologies are being brought to. Thank you.

Moderator – Charles Bradley:
Thank you very much. Yeah, and really sort of helped paint a picture of the wide variety of ways that these technology can, you know, can be very beneficial and really improve on some of these values that we’ve been talking about. Jim, I’m going to pass you.

Jim Prendergast:
Testing. Oh, there we go. Sorry about that. So Charles, I just wanted to point out that we were supposed to have another academic present, Dr. Luciana Bonatti from the National University of Cordoba in Argentina. I guess being on the other side of the world, sometimes you miss news, but apparently there’s an outbreak of wildfires in that part of Argentina, and she and her family had to evacuate. So if she watches this down the road, we just want to let you know, we’re thinking of you, and we hope everything works out for you. And we look forward to working with you in the future.

Moderator – Charles Bradley:
Thanks, Jim. We’re going to go to Lucia at the OECD next. Over to you.

Lucia Russo:
Hello, good morning. Good afternoon. Thanks for the invitation for this very interesting panel. My name is Lucia Russo. I’m from the OECD, the Artificial Intelligence Unit, and I will talk a little bit about the OECD AI principles and the way they, excuse me, they promote gender equality in AI. So just as a bit of a background, what are the OECD AI principles? The OECD principles are a set of principles, an intergovernmental standard on artificial intelligence that were adopted in 2019, and were developed through a multi-stakeholder process that we involved over 50 experts with the objective of coming up with principles that would be a common guideline for countries and AI actors in developing transport AI and to steer technology in an innovative way, but also in a responsible way. These principles were also endorsed later on by the G20, and so we are today over, today 46 countries have adhered to these principles. These are principles that are not binding in nature, but still they represent a commitment from countries that adhere to them to steer technology in a way that is embedding those principles, and they are ten principles which are organized into five value-based principles and five recommendations to policy makers. So, in terms of the value-based, these are they call for promoting AI, which is aimed at inclusive growth, sustainable development and well-being, that embed human-centered values and fairness, AI that is transparent and explainable, safe, secure, and robust, and they call for actors to be accountable throughout the AI life cycle. And then, the five recommendations, which are policy makers, they call for promoting AI, which is aimed at inclusive growth, and safe, secure, and robust, and they call for actors to be accountable throughout the AI life cycle. And then, the five recommendations to government concern policy recommendations around investing in AI, research and development, fostering a digital ecosystem for AI, shaping and enabling policy environment, building human capacity and preparing for labor market transformation, and then, the six principles, which are the five principles, they are, first, they are touch, obviously, on gender equality, but in particular, the first and the second call on stakeholders to proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people on the planet, and in advancing inclusion of underrepresented people. And then, the second principle calls on AI actors to respect the rule of law, human rights, democratic values, and including non-discrimination and equality, diversity, and fairness. So, I would point to these two as perhaps the most relevant in this conversation, and then, obviously, these are very high-level guidelines for countries. So, what have we been doing and what are countries doing to implement those principles? So, since 2019, we have been working at the OECD to help countries implement in practical ways these principles, and we have been monitoring through the OECD AI Policy Observatory policies that countries have been putting in place to meet, to address all of these principles. So, here, obviously, I won’t be exhaustive. I wanted just to point to a few examples of policies that have been adopted, implemented in countries. For instance, in the United States, when we talk about, well, we know that to make AI more inclusive and also to reduce bias and increase fairness, one important aspect that was discussed by Google is about data quality. And so, in the United States, an example is the Artificial Intelligence Machine Learning Consortium to advance health equity and researcher diversity that basically is a program that aims to make electronic health record data more representative so that training data is of higher quality, but also to increase the participation and representation of researchers from underrepresented communities in AI and machine learning, so that basically algorithm bias is ensured by including data from different genders, ethnicities, and backgrounds, but also by a more diverse representation in AI development. And, again, in another example, fostering inclusivity and equity in AI development is a program in the UK promoted by Alan Turing Institute, which is Women in AI and Data Science. And here, there are three pillars to this program. First one, map the participation of women in data science and AI in the UK, but also globally, with the ultimate objective of increasing women participation in these fields. Second, examine diversity and inclusion in online and physical workplace. And last, exploring how gender gap affects scientific knowledge and technology, technological innovation, and then promoting gender inclusive AI design. So, these are two examples. And then last two points I would make, there are also other approaches taken by countries, for instance, in the Netherlands and Finland, there have been attempts to build guidelines and assessment frameworks for non-discriminatory AI systems that basically help identify and manage risk of discrimination, especially in public sector AI systems. And so, these are guidelines for especially public servants when they use or procure AI systems. And the last point is, last year, we launched a catalog of tools still on the same platform, the OECD AI Policy Observatory, and this is really a platform that is intended to share tools for trustworthy AI, and basically institutions around the globe can submit tools so that other organizations can use them in their work. And just having a quick check, so it’s a searchable database where you can search for objectives that these tools are aimed at achieving, and for instance, looking at reducing bias and discrimination and ensure fairness. We have over 100 tools, and for instance, one that I was checking yesterday came up is at Google, the People Plus AI Research multidisciplinary team that explores the human side of AI. So, this is one example. Other example is, for instance, a tool which is called CounterGen, which is a framework for auditing and reducing bias in NLP, and basically, it generates counterfactual data sets. So, comparing the output of a model between cases where the input is a member of protected category and two cases where it’s not. So, these are just examples. One can search and browse for more. So, I wanted to give a bit of an overview of things that exist, but obviously, this is all illustrative, and I look forward to questions and discussion. Thank you.

Moderator – Charles Bradley:
Thank you so much, Lucia. There’s so much in what you said. I’m sort of trying to scrabble around your website to find all the amazing resources that you shared. So, maybe we can pick back up some of those points because they also tie back into some of the key ones earlier around data, like proving that we know what is happening, baselining, and trying to improve outcomes, and it feels like that might be something that we sort of want to dig into a bit more as we get into this discussion. I’m going to pass to our last speaker, Jenna.

Jenna Manhau Fung:
Thank you. Thank you for having me on this panel today. My name is Jenna Fung. I am the program coordinator of the Asia Pacific Youth Internet Governance Forum. As I share my thoughts, I will perhaps change my head a little bit, but to start with, I probably will refer most of the outcomes from our regional output as well as some response to all the information we just got. I came from a background that’s totally not technical. I don’t have research background either, and so this is a really fruitful sharing earlier to me, and I actually was assigned to give reactions, and so I was paying so much attention, and it actually made me thought of a few points, but I will share it at the end of my speech because I probably want to point out a few things that the Asia Pacific Youth actually talked about. While most of the people, and I think we have had enough session at IGF that talks a lot, how we concern about the impact and risk with AI and the implication of it, but maybe because with the youth, because of our lack of experience and expectation and knowledge, we are quite positive. That’s my observation by working closely with the youth, but of course, with that group of youth, it’s just Asia Pacific voice. We know that there’s like majorities of the online populations are formed by young people, but we don’t really get to invite all of them to our conference, so this is still just a representation of voice, but what we see is that when we erase those knowledge and baggage or things that adult would usually carry, the younger generations are quite positive, and the reality is we must implement these things to our everyday life because I personally see it that way as well, and I think with the technology, especially after Christian’s and Emma’s sharing, I really think that AI can eliminate human bias, which is something we unconsciously act out and we don’t know, and so I am positive about that, just like, just name some examples. I’m an Asian, and sometimes just picking up, we might use marginalised group or minority to describe certain groups of people, but with that, that means we unconsciously subscribe to certain ideas, that’s why we have that kind of concept, right? But earlier with Lucia, and use a different adjective to use, I think Lucia actually used underrepresented group instead, which is rather neutral. We did not intentionally do that, but we would have this kind of bias sometimes, and I do think technologies can help us with that, and, of course, because we will have policy in place where we, I believe everyone who is in this room will subscribe to the ideas of having a multi-stakeholder approach, my assumption, to form these policies, and if those policies are in place, I believe we can proactively eliminate this kind of bias that we don’t intentionally send out. And so, just bringing in some ideas from the youth forum that we had, I think it’s really important to get the users, the consumer, to co-design all these policies, and also have like technical community to be involved in policy-making as well, because they have the knowledge about the technologies, but not all of them, like currently might be included in all levels of policy-making, and so if we have them participate more into this process of making policy for like such complex technologies like AI, I think that will be really important as well. And I believe international standards are really important, because that’s how different countries can modernize their legal framework, and so that they can cater the needs of their own nations, and it will also help different industries to follow and to handle their space, because, for example, what I see is that Big Tech is running most of the service platform where I live on. I am a Gen Z. So these are privately owned public space, which govern and regulated by private sector. And I think international standards is really important because that will provide a comprehensive guideline for that, which is human centric, as another speaker mentioned. And before I wrapped up, I want to take this opportunity to probably bring out just something really personal. I hope that I am not appear to being too rude. Other than my usual work with the youth, I am a writer. And I have a Substack newsletter. But I am a really small scale writer. And so I don’t really get the money to pay and get my own domain. And so my newsletter is actually not really appearing on Google search result because of the policies between probably, well, I don’t really have the knowledge. But I assume it’s like the policy between Google and Substack. And I think there might be something to do with Substack. They changed policy at some point, which my newsletter is not showing on Google anymore. And so that’s just one personal example that I want to throw it here. Because Google, it’s one of the biggest search engines that adopt by most people in this world. And I just wonder, if we are talking about inclusivity, how can we or how can enterprise put a mechanism in place to ensure small scales writer, for example, in my case, to be included as well. But yeah, thank you so much.

Moderator – Charles Bradley:
Thank you very much. Yes, really good points raised from the, obviously, the conversations you’ve been having with youth community before and before. And a very specific question at the end that we might want to take offline to someone that might know the answer to that one. We’re going to start getting people involved and have a proper conversation. So if there are things to come along, please do put it in the chat if you’re online or raise your hand. I wanted to first come back to Emma and Christian. And I’m definitely going to come to people who have good questions. And also, there was a point from Lucio’s talked about the sort of counterfactual fairness at Google. And I wanted to see whether Emma or Christian, you could share a bit more about your experience of that, if you can answer that.

Christian von Essen:
Yeah. I’m happy to talk about that. We’ve had a similar approach. And I have this slide with these. We see similar patterns, right? This replacement there is exactly the counterfactual similarity that we are trying to get here. This has been central and super useful to us. What also is helpful is ablation of certain terms. Sorry, yes?

Moderator – Charles Bradley:
I was going to ask, could you give us a 10-second definition of what that means for people who might not know what counterfactual fairness means in that context?

Christian von Essen:
Yes, of course. So the idea is when you take a user’s query, for example, and it has a marginalized minority group in there, like, I don’t know, black woman video, then the likelihood that a classifier predicts something about this person for this query should be the same for black woman video as for the counterfactual query, where you replace black woman video with black man video, or white woman video. If you replace these terms, the output of the classifier should not change significantly. The other part, then, is ablation. It shouldn’t matter much whether you talk about black woman video, black woman dress, or just woman dress. That is also essential to what we’ve been doing here. But if you do this counterfactual fairness, you’re still sticking, in a certain sense, to a slice of the data. We are still sticking to gender terms, race terms. Also, outside of these slices, these particular slices, the behavior of these methods of these classifiers and systems should be the same. Doesn’t matter if we’re talking about genders or LGBTQ queries. The quality of the classifiers between these slices needs to be the same as well. That’s the metric part that we had. So counterfactual is great, ablation is great, and then we go beyond that. But it’s a fantastic first step to augment your training data to get the classifiers to say the right things and be fair.

Moderator – Charles Bradley:
And then, sorry, Emma.

Emma Higham:
Yeah, I was just going to say, I think a lot of this is about asking your system questions and seeing how it performs. And what you really want is to be able to ask the question of black woman hairstyles, white woman hairstyles, see are we getting results that we consider to be equivalent, see what happens if we type in the query hairstyles. There will always be some disparity because these systems are operating at mass scale. But we aim to have a way to hold the system accountable and reduce any disparity that we see. And I think I did hear a question earlier on. I think it was from Zulfa around data justice. I think one thing that I’ve been impressed by here is that these systems are able to learn from patterns such that sometimes you can have a relatively small amount of data to start to interrogate the system. And you can see that a system is not behaving well with just a few examples. You don’t have to find every potential item in a large set of potential identity groups in order to interrogate the system. You just need a few to say, is this system behaving wrong? And that already helps. So this idea of small data being enough to interrogate the system has been very powerful.

Moderator – Charles Bradley:
Are there any questions on this point particularly? So we can carry on this thought. Yes, please.

Audience:
Thank you very much for all the sharing. It’s really interesting. So I have a bit of a specific question. So it’s on leveraging AI to reach a goal of gender inclusivity. But to what extent is this corrections you’re talking about that are happening after? So in terms of fine tuning rather than beforehand, which is when you’re feeding in the training data. Because I think there was a recently published article about a study from the University of Pittsburgh about how there is no clear data, no clear percentage of how much of the training data being used to train these LLM, how much of it is women-authored data. And so it perpetuates the gender gap. Because, OK, when you’re looking at the digital divide between Global North and Global South, then if you look closely at those online in the Global South, more likely they’re going to be male users online. And then so I just want thoughts on what do you feel about this particular problem that is it more fine tuning that’s happening after you’re finding these bias outputs? Or how much percentage of effort is going into looking at using more diverse training data?

Moderator – Charles Bradley:
Thank you very much. I think that’s for Google.

Christian von Essen:
Yeah, so in the beginning a few years ago, when we started with BERT and the language models became bigger, and the first step was to create models that are credible and useful at all, it was more of a fine tuning step later to address and correct these biases. But as we’re getting more into even larger models where training data selection now becomes a more challenging problem, and where also these kinds of concerns have spread more through the community and get more scrutiny not only outside, but also from communities inside Google, this gets more and more into the first step of training. So before fine tuning is happening, correcting the first step of data and making sure that that is representative gets ever more into that first step as well. And fine tuning and first step also get ever more mixed and intermingled. So that the question as such becomes very tricky to answer. Where does the first step end and fine tuning get started as we’re talking about mixtures of training?

Emma Higham:
Yeah, I mean, I would just plus one. I think these things are increasingly very, very intermingled. But what you do see is what an amazing technology. So let’s see what this technology can do. As we’re applying this new technology, how could we design it in a safe way? How could we design it in a way that it’s inclusive? You look at that first version of the technology. And then the first thing you do before you think about bringing it to market is you interrogate it. You do the fine tuning based on those tests. And then if it didn’t work well, you go back to the first step again. So this is really cyclical. And there are many layers at which we can hold our systems accountable. Often, you have foundational models that you’re using for lots of different use cases. And you want to make sure those are working well, as well as specific use cases, seeing how it’s behaving in context and making sure that in context it’s working well for users for a specific product experience. Great question.

Moderator – Charles Bradley:
Yeah, very good. While we’re still on the point, Lucia, is there advice, tools, resources on this particular point on the OECD that we should be looking at?

Lucia Russo:
Well, not on this specific. No, we are more on analyzing the big trends. So just to mention that we have two papers on generative AI, one that is really analyzing some preliminary considerations around these aspects that have been discussed, like what are these models, how they are evolving, what kind of policy implications they have around safety, for instance, and what kind of measures are developers implementing. So this is one paper. And then there is another paper that we did to support this G7 Hiroshima process around generative AI. And basically, there is an analysis of what countries, based on a questionnaire to G7 members, on what countries feel are the main risks around generative AI but also the main opportunities, and so what kind of actions internationally can be undertaken. So this is more in terms of, again, on policy responses. This is the contribution from the OECD, very much enjoying the conversation to understand better at which point you can intervene. This is very enlightening for us as well.

Moderator – Charles Bradley:
Great, yeah, absolutely. And obviously, the role that you’re playing on the bigger picture of this conversation, it’s sort of critical to get into the real weeds here, because the devil is really in the detail, isn’t it? We have a question online, and then.

Audience:
Yeah, thanks, Charles. So it’s from Samridhi Kumar. It’s a bit of a comment and a question. I think I still remain a tad bit skeptical about how AI and gender inclusivity may interact, especially when AI may present itself as a popular tool for surveilling people based on gender. What are the possible solutions for this dilemma?

Moderator – Charles Bradley:
Babina, what do you think? How could it be a solution to this dilemma? What are the solutions here?

Bobina Zulfa:
I think it’s a lot of things that the panel is trying to speak directly to here. But I think I share the sentiment with the person who asked the question that I’m also very skeptical as to how realistic some of these things are or how feasible they are. So for example, the persons from Google have been sharing how from the previous question, when the person asked about the training data sets as opposed to fine tuning. And then, Emma, I think you did share your optimism that we do have a good tool. And we can, before we send it out to the market, get it in a much better place before we send it out to the communities. But I think that, in a sense, again, is the, I mean, I think for me, and this may tie in with Lucia’s work as well with some of the work we’re doing on the regulatory arm of things, is it comes into balance out competing interests. Because there is Google, which is developing these technologies. And there is a number of interests that they have from the information sharing to being a profit-making company as well, to the communities or the persons that these technologies are being pushed out to the end user, who these technologies could pose real-life impacts on. And so I think, for me, it’s just, I think we just need to be very intentional about thinking about these things from the get-go. And I think it’s a lot of what we are reiterating here, everyone else of us. It could be with our OECD’s principles, thinking about these things from the get-go as we’re getting into development, even from the ideation stage. And then I think then we think about how more intersectionally factoring in these factors, as opposed to waiting to, oh, we push this out. And then, oh, we can now try to, we’re putting fires out, in a sense. And so, for example, the data issue, I am very skeptical as well when you mention that small data sets. I do totally agree these technologies have immensely evolved, and they’re able to use just small data sets to just do so many of these things you’ve been talking about, like looking out for bias, et cetera. But then again, that speaks to, again, someone mentioned this, if we do have limited data sets that are not representative of a big part of maybe the global majority, how do we expect, realistically, that not to reflect in the products that are pushed out at the end? And so I think it’s a lot of the caution, or the skepticism has been expressed through a lot of scholars’ work over the last year or two. And I think a number of principles, like the OECD is doing, UNESCO, et cetera, the work we’re doing, and so many other organizations, civil society, et cetera, are saying, are factoring these things from the get-go, and think about these things from the get-go. And then that could counter the skepticism, because then we’re sure that we are pushing out products that are safe and are going to actually be of benefit to the person that these products are being pushed out to.

Moderator – Charles Bradley:
Thank you. Yeah, we gave you the really hard question, so thank you for giving us such an eloquent answer to it. We have another question in the room. Andrew.

Audience:
Thanks, Anne. Thanks, Google, for the opening presentation, which is kind of interesting to get a bit more into the weeds about how you actually are trying to manage these problems. And I guess my question is a bit about the value of non-binding principles. There are currently about over 40 international processes setting out how to govern AI. A couple are binding, European. There’s a cluster of UN ones which may go nowhere. And there’s 25-plus voluntary non-binding initiatives being developed by a variety of industry and other types of bodies. And I just queried the value of endlessly producing high-level sets of principles, which don’t overlap or aren’t consistent, but may all offer slightly different variations. And it strikes me what was interesting about the Google presentation is what would be of real value to the wider public would be something that I think doesn’t yet exist, which is a mechanism to independently audit what you’re doing to assess whether the steps you’re taking at the engineering level are actually producing the outcomes that you want to be desirable. And if they do, you get some kind of kite mark or some recognition that what you’re doing with AI is actually fulfilling those wider social goals. And it strikes me that that would be, given the time, money, and effort that goes into things like the IGF, which is a whole series of fairly non-binding conversations or these voluntary principles, whether investing some of that time and money in developing those independent audit mechanisms might be a more useful use of the planet’s resources in terms of getting at what we want to get at.

Moderator – Charles Bradley:
I think I’ll let the OECD respond first. Lucia.

Lucia Russo:
OK. Well, thank you. So I’ve never done an analysis of all of the principles that exist, so I don’t know to which extent it’s fair to say that they don’t overlap, because I would assume that there is a large overlapping among these principles. And for instance, if one takes, well, recently the UK came up with their approach to AI regulation, and that is based on, again, high-level principles, cross-sectoral principles. And they do overlap to a large extent, or even almost all of the principles overlap with the OECD principles. The same, I don’t know, the NIST management framework in the US, it’s really closely linked to the work of the OECD. We did a classification framework for AI systems, but basically what it says is that not all AI systems are equal. They don’t have the same risks. They don’t have the same impact on the different contexts they work in. So there needs to be this risk-based approach, which is something that is becoming the approach taken in most jurisdictions. Even the UAI Act takes a risk-based approach by classifying risks of AI systems and having provisions related to the different systems based on the risk category they fall in. So I’m just saying I understand the concern of having a plethora of principles. I don’t think there is a hierarchy of principles, yet there are, I think, some principles that are being implemented more uniformly across countries and with some variation, of course. And I have not done this exercise, but perhaps try and check where they overlap, because I’m sure there is a lot that has to do, again, with fairness, with transparency, with accountability, and safety, security. So I understand, and I think it’s a fair concern to say that everyone is doing its own principles. Perhaps there is also, I mean, this is also a very new field. Everything is in the making, so even regulation. is experimenting really and trying to understand what’s the best approach. So I would say, yeah, perhaps there needs to be some more alignment and there are attempts lately to have more international coordination. As I was saying, the G7 is one. The UK is promoting this safety summit at beginning of November. The UN is also advancing work. So I think there are activities to come together and have more coordination on that. And I think the mechanism of auditing the systems that I agree there is not such a thing yet. When it comes, certainly with standards and with the UIAC, there will be a check on the system. So it won’t be perhaps the same thing that it was proposed. But I think there is a lot that is in the making. So all of this is just being developed right now. So I don’t have a full answer. I’m sorry, it’s a very difficult question. But I just want to say that there are a lot of discussions and there is a lot of commonalities, despite the fact that there seems to be a lot of lack of convergence. And yeah, that’s what I wanted to say.

Moderator – Charles Bradley:
Thank you. Yeah, and I think the sort of the principles have started to sort of, the principles work that’s been going on for a while now has started to sort of give us the train tracks for the regulation that is coming. And that has a lot more, obviously a lot more teeth to it. And that might sort of get to some of the points that Andrew was raising. Does anyone else want to come on this point before I ask another question from the panel? No. I suppose one of the things that comes, any more questions in the room whilst we’re before? One of the things this sort of gets to is trust in measurement. Like Google have, and Christian have given us this great presentation around what you’re already doing to measure bias and reduce sort of certain biases in that, in your work, and also how you’ve been able to reduce shocking offensive content through some of the technologies that you’ve used. But we’ve also heard the sort of the flip of that, which is sort of Google marking its own sort of homework and showing that how you’re measuring against your own known biases and how you’re improving your own system against your own sort of measurements. So I think it’s sort of, some of this is really about how do we build trust in that measurement and in that system. And I wonder whether any of the panelists had some thoughts on that. So if we’re going to use AI and we believe in the potential of AI to increase gender inclusivity, how do we know that it’s actually doing that? And do we trust that? And how might we trust that more? Any sort of reflections or thoughts on that from the panel? Or anyone in the room? Thank you.

Audience:
Can I just repeat my plea for independent audit process? I mean, the only way you know is, if you don’t trust the company to mark its own homework, someone else has to mark the homework. And I think my point is, going back to the OECD, I’m not saying there aren’t, I think there is agreement, fairness, inclusivity, there’s a set of things we already know we want AI to do or need to do. What we don’t have is any method of assessing whether any of the applications are actually doing it. And that’s where I’m saying time and investment needs to go within the wider community rather than in doing yet more sets of principles. So I think the independent audit is the key thing. And I have no reason to distrust what Google are doing. You know, on the basis of what I’ve heard today, it sounds perfectly credible, perfectly sensible, and they’re trying to work with the limitations of data, et cetera. But obviously for the rest of the wider public, it needs to be audited in some way to satisfy us that gender equality is being promoted through these kinds of systems. And surely that is where the conversation should be and where the investment should be and not on high level principles and the endless discussion of high level principles, which has gone on in IGFs from year after year for like 20 years. Thank you, sorry.

Moderator – Charles Bradley:
Yes.

Emma Gibson – audience:
Hi, I’m Emma Gibson from the Alliance for Universal Digital Rights or AUDRI for short. And I definitely agree with the gentleman who’s talking about independent audits. But unfortunately, I also want to introduce another set of principles that we launched this week. It’s the Principles for a Feminist Global Digital Compact. And it’s 10 principles. And one of them is around adopting equality by design principles and a human rights based approach throughout all phases of digital technology development. And the Equal Rights Trust last week launched some equality by design principles themselves. And really that is including things like gender rights, impact assessments, incorporating them into the development of algorithmic decision making systems or digital systems prior to deployment. So whatever you call them, there absolutely is appetite for that kind of thing. And they do need to be independent to make sure that we’re not amplifying and perpetuating existing biases.

Moderator – Charles Bradley:
Thank you. I think we should come back to some of the challenges that this technology might also be able to help with. So we were trying to get the session to focus on ways in which AI can solve some of these problems. And I wonder whether there are like particular challenges that the panel people in the room think that we should be spending our time, effort, money on, that we can actually sort of promote gender inclusivity and inequality. Like what should we be focusing on and how might AI help us do that? Or other examples of things that are already sort of underway very practically.

Lucia Russo:
Maybe I’ll go first. And here, I’m not going to talk about the technical tools. I would go more broadly, I think, around, again, what kind of policy actions can be put in place to increase gender equality in AI. One, when we talk about gender equality, when we look at data on AI, on women representation in AI, the landscape is still very much not very positive for women. So we know that in OECD countries, more than twice as many young men than women can program, which is essential for AI development. So there is already this discrepancy. Then in terms of AI researchers, only one in four researchers publishing on AI worldwide is a woman. So there is, again, not fair representation in AI research. And when we look at developers, again, this share, it’s even lower. From a 2022 survey of Stack Overflow users, only 4% of respondents were female. And LinkedIn data suggests that female professional with AI skills represent less than 2% of workers in most countries. So I would say that there are still policies, basic policies that concern really development of AI-specific skills for women that are essential. As we said at the beginning, I mean, one key aspect is to increase women representation in design of the systems, in research of the systems. So this is a key policy that countries should look at. And there are countries, obviously, already doing that by promoting, by promoting scholarships or even programs at universities in Germany, for instance, sorry, providing funding to women-led research teams in AI. So I would say there is some policies that countries can certainly do, that is really to address one key gender gap, which is the one of representation of women in AI research. So that’s what, yeah, what I would suggest to increase gender, this is, to increase gender, reduce gender gaps, this is essential.

Moderator – Charles Bradley:
Thank you very much. Emma, I think I want to come to you actually, if that’s okay. And because obviously you’ve shared us a little bit about how you’re using AI for safe search and the sort of ranking. I wonder whether you had any more like specific examples that you could talk to and how like inclusion is being used in those products as well.

Emma Higham:
Yeah, absolutely. I mean, one of the things I’m really excited about is how AI is improving our ability to do language understanding and to understand concepts at scale. One area that I’ve seen this have significant impact is a product I used to work on, Google Translate, where products like Google Translate, Google Search, we are actually all able to test them. We use them, many of us, every day. And we find when they don’t work well for us, and we hear that from users. One thing that we heard in the past was during Women’s World Cup, women would be typing in queries like France versus Brazil, and you’d find that it would take you to the men’s football team. Typing in the England team, you see the men’s team. That’s something that we heard from users That’s something that we heard from users. We heard scrutinized, and we looked to solve. Actually, it was a non-trivial problem to solve as we had to build the right partnerships. But this year, we were pleased to see that we were able to address that. The Women’s World Cup, you could get easy and accessible results about women’s football in just the same form factor that you could for men’s. That’s a great example of how users held us accountable and were able to improve our systems. In the same way for Google Translate, we’ve seen that there were some cases where translations were, in the past, not fully inclusive. This can be because language is very complex around the way that we think about gender in different languages. It’s not always easy for a computer to translate that well. But as we have seen AI get better at pattern matching, and our systems, our internal accountability, our internal ability to test these systems at scale, we have seen that Google Translate has got significantly better in this regard. And we’ve been able to test and validate that Translate is working across a wide range of languages in a way that we think is really effective for understanding gender in different ways. One specific example about a recent application of this is I can actually now talk to Translate and tell it in what form factor I want to be speaking. Do I want to be speaking in the formal version? Do I want to translate something so that it is in feminine tense or male tense? And this means we no longer need to default, right? We don’t need to make assumptions around, were you talking about a male audience or a female audience? We can set that in the tool. And this is the kind of thing that’s now possible and newly possible because of this technology. I hope that made sense. But I think the thing I’m excited about here is you’ve all been holding us accountable for many years. That’s one of the great things about working at Google is users hold us to a high standard. And I’m excited about AI as a tool that helps us meet that high standard better.

Moderator – Charles Bradley:
Thank you. Yeah, and it made a lot of sense. And it’s just really good to hear these very practical but very large impact shifts that are really starting to dig into the question here. And it’s things that impact people on a day-to-day basis as well, which I think is really good. And Google’s been particularly good at solving for day-to-day problems. It’s built quite a large business out of it. Everyone here who doesn’t speak Japanese has probably used Google Translate or Lens or something to navigate the street signs or the menus this week. I definitely have. Any final questions or thoughts from the room? Yes, please. Yeah, come and take the mic. Thank you.

Audience:
Hello. Yes, my name is Natalia. I’m working in the field of education. And what Luchiy just mentioned really resonates with me. I work in Cambodia for the past eight years, and I’m the founder of the first female coding club. And the representation of women in the field of technology is extremely low. It’s even lower than Luchiy has mentioned. And if you type in Google search Asian programmer out of 20 images, you will see maybe one or two Asian faces as programmers. But at the same time, like AI adoption and the growth is giving me a lot of positive vision because I do believe that actually, especially generative AI tools may bring a lot of opportunities for female workers in the field. As we know, most of the girls would choose a social or humanitarian subject. And this is where generative AI can be really a great field for the development and application of these interests in human science and social science interests in human science and social science mixed with technology. However, my question is, how can the policymakers make sure that this component of the broader introduction and engagement of female workers and students would be applied across the world? I work in Cambodia where only 1.2% of girls choose to study technology. This is extremely low. And Khmer language, like in Google, they don’t use, it’s not that very well working yet. So there are many barriers and I really wanna see much more focus on the upskilling, reskilling and introduction of female voice in the field of AI. And I think generative AI is a great pipeline for that. So are there any comments or I would like to hear? Thank you.

Moderator – Charles Bradley:
Thank you. Thank you very much. Lucia, are there any sort of thoughts, work from the OECD on this? You touched on the same sort of deficit earlier.

Lucia Russo:
Yeah. I mean, it’s one point that actually I forgot to make on the positive side is indeed that generative AI can help because you have a coding co-pilots that actually can speed up time to code. And I think it can also be a tool for people to learn much quickly to code. So there may be some, as it was suggested, some opportunities there that are from generative AI. The one thing of course is that the language that is trained, the generative AI large language models are trained on the data needs to be there. So what we see is also a lot of investment in training large language models in languages other than English. And so this is one thing that needs to be also promoted by countries so that these models exist not only for the languages that have the most of the data. And then in terms of policies, again, that is the question, how do you bring more interest for women? And I think one of the motivation that it was mentioned that is key. So we have been in a lot of policies like coding from earlier age, but also, as I said, scholarships, but also role models are quite important to make young girls also identify with type of jobs that they could take on later on. So this is a big question, how you have more women in science. But as I said, there are examples that span this kind of policy actions.

Moderator – Charles Bradley:
Thank you very much. Yeah, and yeah, definitely multifaceted challenge to do that. But I think the point here is that this becomes something that’s in our sort of day-to-day apparatus and therefore people are gonna be more interested in being part of it. So thank you, Natalia, for that comment and question. We’re coming to the end. I wanna sort of wrap up in about 30 seconds or so, but I wanted to just see if any of our panelists had anything burning they wanted to share or respond to before I did that. No. Great, well, huge thank you to our panelists for joining us in a wide variety of time zones and appreciate you staying up or getting up early to do so. I definitely found it a very interesting conversation. We were able to get into some of the practical aspects of this topic. We also touched on the multi-layered and complex nature of this topic as well. And I think that it’s been really good to see that there’s a lot of interest in developing solutions that can solve this problem with more people from a more inclusive way. We’ve had some principles launched in the session. We’ve had some discussions about the value of principles in the session. We’ve had some very practical data and sort of measures sort of shared. So I’ve learned something and thank you for doing that and for being part of this conversation. And with that, I would like to close the session, say thank you again and hope to see you all again soon. Thanks, bye.

Audience

Speech speed

183 words per minute

Speech length

1207 words

Speech time

395 secs

Bobina Zulfa

Speech speed

184 words per minute

Speech length

2153 words

Speech time

702 secs

Christian von Essen

Speech speed

144 words per minute

Speech length

1242 words

Speech time

518 secs

Emma Gibson – audience

Speech speed

170 words per minute

Speech length

175 words

Speech time

62 secs

Emma Higham

Speech speed

186 words per minute

Speech length

2385 words

Speech time

768 secs

Jenna Manhau Fung

Speech speed

147 words per minute

Speech length

1011 words

Speech time

412 secs

Jim Prendergast

Speech speed

248 words per minute

Speech length

118 words

Speech time

29 secs

Lucia Russo

Speech speed

132 words per minute

Speech length

2608 words

Speech time

1188 secs

Moderator – Charles Bradley

Speech speed

168 words per minute

Speech length

2232 words

Speech time

795 secs

Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Emily Bender

The analysis discussed various aspects of language models (LLMs) and artificial intelligence (AI). One key point raised was the limitation of web data scraping for training LLMs. Speakers highlighted that the current data collection for LLMs is often haphazard and lacks consent. They argued that this indiscriminate scraping of web data can violate privacy, copyright, and consent. Sacha Costanza-Chock’s concept of consentful technology, which emphasises meaningful opt-in data collection, was presented as a better alternative.

The speakers also stressed that LLMs are not always reliable sources of information. They pointed out that LLMs reflect biases of the Global North due to data imbalance. This uneven representation can lead to skewed outputs and perpetuate existing inequalities. Therefore, there were concerns about incorporating LLMs into search engines, as it could amplify these biases and hinder the dissemination of objective and diverse information.

Another topic of discussion was the risks associated with synthetic media spills. Speakers highlighted that synthetic media can easily spread to other internet sites, raising concerns about disinformation and misinformation. They recommended that synthetic text should be properly marked and tracked in order to enable detection and ensure accountability.

On the positive side, the analysis explored approaches to detect AI-generated content. Speakers acknowledged that once synthetic text is disseminated, it becomes difficult to detect. However, they expressed optimism that watermarking could serve as a potential solution to track AI-generated content and differentiate it from human-generated content.

In terms of reframing discussions, there was a call to shift the focus from AI to automation. By doing so, a clearer understanding of the societal impact can be achieved, ensuring that potential risks are thoroughly assessed.

Regarding language-related AI models, speakers emphasized the importance of not conflating them and carefully considering their usage in different tasks. This highlights the need for a nuanced approach that takes into account the specific capabilities and limitations of different AI models for various language processing tasks.

The analysis also emphasized the importance of communities having control over their data for cultural preservation. Speakers stressed that languages belong to their respective communities, and they should have the power to determine how their data is used. The ‘no-language-left-behind’ model, which aims to preserve all languages, was criticized as being viewed as a colonialist project that fails to address power imbalances and the profits gained by multinational corporations. It was argued that if profit is to be made from language technology in the Global South, it should be reinvested back into the communities.

In summary, the analysis delved into the complexities and challenges surrounding LLMs and AI. It highlighted the limitations of web scraping for data collection and the associated concerns of privacy, copyright, and consent. The biases in LLMs and the potential risks of incorporating them into search engines were thoroughly discussed. The analysis also examined the risks and detection of synthetic media spills, as well as the need for reframing discussions about AI in terms of automation. The importance of considering language-related AI models in different tasks and the control of data by communities were underscored. Criticisms were made of the ‘no-language-left-behind’ model and the profiting of multinational corporations in the Global North from language technology in the Global South.

Diogo Cortiz da Silva

The use of the web as a data source for training large language models (LLMs) has sparked concerns surrounding user consent, copyright infringement, and privacy. These concerns raise ethical and legal questions about the sources of the data and the permissions granted by users. Furthermore, there are concerns about potential copyright violations when LLMs generate content that closely resembles copyrighted works. Privacy is also a major concern as the web contains vast amounts of personal and sensitive information, and using this data without proper consent raises privacy implications.

In response to these concerns, tech companies such as OpenAI and Google are actively working on developing solutions to provide users with greater control over their content. These companies recognise the need for transparency and user consent and are exploring ways to incorporate user preferences and permissions into their LLM training processes. By giving users more control, these companies aim to address the ethical and legal challenges associated with web data usage.

The incorporation of LLMs into search engines has the potential to significantly impact web traffic and the digital economy. This integration raises policy questions regarding the potential risks and regulatory complexities of using LLMs as chatbot interfaces. As LLMs become more sophisticated, integrating them into search engines could revolutionise the way users interact with online platforms and consume information. However, there are concerns about the accuracy and reliability of LLM-driven search results, as well as the potential for biased or manipulative outcomes.

In addition to these concerns, the association of generative AI with web content presents challenges related to the detection, management, and accountability of sensitive content. Generative AI technologies have the capability to autonomously produce and post web content, raising queries about how to effectively monitor and regulate this content. Detecting and managing sensitive or harmful content is crucial in ensuring the responsible use of generative AI while addressing the potential risks associated with false information, hate speech, or illegal materials. Similarly, holding responsible parties accountable for the content generated by AI systems remains a complex issue.

To address these challenges, technical and governance approaches are being discussed. These approaches aim to strike a balance between innovation and responsible use of AI technologies. By implementing robust systems for content detection and moderation, as well as establishing clear accountability frameworks, stakeholders can work towards effectively managing generative AI-driven web content.

In conclusion, the use of the web as a training data source for LLMs has raised concerns regarding user consent, copyright infringement, and privacy. Tech companies are actively working on providing users with more control over their content to address these concerns. The integration of LLMs into search engines has the potential to impact web traffic and the digital economy, leading to policy questions about potential risks and regulatory complexities. The association of generative AI with web content raises queries about detecting sensitive content and ensuring accountability. Technical and governance approaches are being explored to navigate these challenges and foster responsible and ethical practices in the use of LLMs and generative AI technologies.

Audience

The discussion revolved around various topics related to the effects of generative AI and LLM (Large Language Models) development. Julius Endert from Deutsche Welle Academy is currently researching the impact of generative AI on freedom of speech. This research sheds light on the potential consequences of AI on individuals’ ability to express themselves.

The regulation of LLM development was also discussed during the session. The representative from META suggested that regulation should focus on the outcome of LLM development, rather than the process itself. This raises the question of how to strike the right balance between regulating the technology and ensuring positive outcomes.

The control of platforms and social media was another aspect of the discussion. It was noted that a few businesses have significant control over these platforms and the development of LLMs. This concentration of power raises concerns about competition and potential limitations on innovation.

The role of the state and openness in regulating LLMs was a topic of inquiry. The participants examined the role that the state should play in regulating LLM development and how to promote openness in this process. However, there was no clear consensus on this issue, highlighting the complexity of governing emerging technologies.

The discussion also explored the neutrality of technology, recognizing that different people have different values and use contexts for technology. It was acknowledged that technology is not inherently neutral, and its use and creation context vary among individuals and values.

Transparency in content creation by large language models was another area of concern. Unlike web page content and search engines, large language models lack clear mechanisms for finding and controlling content. This lack of transparency raises questions about the responsibility for the content created by these models and how stakeholders should be considered.

The discussion emphasized the need for the alignment of values in language models, with participation from different languages and communities. This inclusive approach recognizes the importance of diverse perspectives and ensures that the values embedded in language models reflect the needs and voices of various groups.

The notion of the internet as a ‘public knowledge infrastructure’ was also brought up, advocating for shaping the governance aspects of the internet to align with this goal. This highlights the need to democratize access to information and knowledge.

Furthermore, the economic aspects of content creation and the internet were given attention. It was noted that these aspects are often overlooked in discussions on internet governance. Participants argued for engaging in discussions about taxing and financing the internet and multimedia, particularly when creating new economic revenue streams for quality content.

These discussions provide valuable insights into the complexities and potential consequences of generative AI and LLM development. They underscore the importance of careful regulation, transparency, inclusivity, and economic considerations to ensure that these technologies are leveraged for the benefit of society. The discussions also highlight the significance of promoting openness and preserving freedom of speech in the digital era.

Dominique Hazaël Massieux

The analysis examines several aspects related to LLMs and web data scraping, content creation, AI technology, search engines, and accountability. It asserts that LLMs and search engines have different impacts when it comes to web data scraping. While web data scraping has been practiced since the early days of the internet, LLMs, being mostly a black box, make it difficult to determine the sources used for training and building answers. This lack of transparency and accountability poses challenges.

Furthermore, the analysis argues for explicit consent from content creators for the use of their content in LLM training. The current robots exclusion protocol is considered insufficient in ensuring content creators’ explicit consent. This stance aligns with SDG 9 – Industry, Innovation, and Infrastructure, suggesting the need to establish a mechanism for obtaining explicit consent to maintain content creators’ control over their materials.

In addition, the analysis proposes that the content used for LLM training should evolve based on regulations and individual rights. This aligns with the principles of SDG 16 – Peace, Justice, and Strong Institutions. It highlights the need for a dynamic approach to permissible content, guided by evolving regulations and the protection of individual rights.

The integration of chatbots into search engines is seen as a UI challenge. Users perceive search engines as reliable sources of information with verifiable provenance. However, the incorporation of chatbots, which may not always provide trustworthy information, raises concerns about the reliability and trustworthiness of the information presented. Striking a balance between reliable search results and chatbot integration is a challenging task.

Making AI-generated content detectable presents significant challenges. The process of watermarking text in a meaningful and resistant manner poses difficulties. Detecting and verifying AI-generated content is complex and has implications for authenticity and trust.

The main issues revolve around accountability and transparency regarding the source of content. The prevalence of fake information and spam existed before LLMs and AI, but these technologies amplify the problem. Addressing accountability and transparency is crucial in combatting the spread of misinformation and promoting reliable information dissemination.

The analysis emphasizes the benefits and drawbacks of open sourcing LLM models. Open sourcing improves transparency, accountability, and research through wider access to models, but the valuable training data that contributes to their effectiveness is not open sourced. Careful consideration is required to balance the advantages and drawbacks of open sourcing LLMs.

Lastly, more transparency is needed in the selection and curation of training data for LLMs. The value of training data is underscored, and discussions on transparency in data sources and curation processes are necessary to ensure the integrity and reliability of LLMs.

In conclusion, the analysis thoroughly examines various dimensions surrounding LLMs and their implications. It explores web data scraping, content creation, AI-generated content, chatbot integration, and accountability/transparency. The arguments presented call for thoughtful measures to ensure ethical and responsible use of LLMs in a constantly evolving digital landscape.

Rafael Evangelista

The analysis provides a comprehensive examination of the current landscape of online content creation and compensation structures. One of the primary concerns highlighted is the financial model that rewards content creators based on the number of views or clicks their content generates. This system often leads to the production of sensationalist and misleading content. The detrimental effects of this model were evident during the 2018 elections in Brazil, where far-right factions used instant messaging platforms to spread and amplify misleading content for profit. This example exemplifies the potential harm caused by the production of low-quality content driven by the pursuit of financial gain.

Another significant aspect discussed is the need to reconsider compensation structures for content creation. The analysis points out that many online platforms profit from journalistic content without adequately compensating the individuals who produce it. This raises concerns about the sustainability and quality of journalism, as content creators may struggle to earn a fair income for their work. The discussion calls for a reevaluation of the compensation models to ensure that content creators, particularly journalists, are appropriately remunerated for their contributions.

On a more positive note, there is an emphasis on acknowledging the collective essence of knowledge production and investing in public digital infrastructures. The analysis argues that resources should be directed towards the development of these infrastructures to support the creation and dissemination of knowledge. The knowledge that underpins large language models (LLMs/IOMs) is portrayed as a collective commons, and it is suggested that efforts should be made to recognize and support this collective nature.

However, there is also criticism towards the improvement of existing copyright frameworks. The distinction between fact, opinion, and entertainment is increasingly blurred, making it challenging to establish universally accepted compensation standards. Instead of bolstering copyright frameworks, the analysis recommends encouraging the creation of high-quality content that benefits the collective.

The analysis also highlights the potential negative impact of automated online media (AOMs), even in free and democratic societies. AOMs can incentivize the production of low-quality content, thereby hindering the quality and accuracy of information available online. To address this issue, the suggestion is made to tax AOM-related companies and utilize the funds to create public incentives for producing high-quality content.

In terms of governance, the analysis suggests that states should invest in developing publicly accessible AI technology. This investment would enable states to train models and maintain servers, therefore ensuring wider access to AI technology and its benefits. Additionally, there is an argument for prioritising state governance over web content functionality, as the web is regarded as something that states should take responsibility for.

The role of economic incentives in shaping the internet and web technology is highlighted, emphasising the influence of capitalist society and the need to please shareholders on internet companies. The analysis suggests viewing the internet and web through the lens of economic incentives to better understand their development and operation.

Finally, the importance of institutions in guiding content production is emphasised. The analysis posits that there is a need to regain belief in institutions that can hold social discussions and establish guidelines for content creation. The Internet Governance Forum (IGF) is specifically mentioned as a platform that can contribute to building new institutions or re-institutionalising the creation of culture and knowledge.

In conclusion, the analysis provides a thorough examination of the current state of online content creation and compensation structures. It highlights concerns regarding the financial model that incentivises low-quality content, calls for reevaluation of compensation structures, advocates for recognising the collective essence of knowledge production, criticises existing copyright frameworks, explores the potential negatives of AOMs, proposes taxation of AOM-related companies for public incentives, stresses the need for state investment in AI technology and governance over web content functionality, emphasises the role of economic incentives in shaping the internet, and highlights the importance of institutions in content creation. These insights provide valuable perspectives on the challenges and opportunities present in the online content landscape.

Vagner Santana

The analysis explored the concept of responsible technology and the potential challenges associated with it. It delved into various aspects of technology and its impact, shedding light on key points.

One major concern raised was the development of Web 3 and its potential to exacerbate issues related to data bias in technology. The analysis highlighted that machine learning models (LLMs) trained on biased data can perpetuate these biases, posing challenges for responsible AI use. Additionally, the lack of transparency in black box models, which conceal the data they contain, was identified as a concern.

The importance of language and context in technology creation was also emphasized. The analysis pointed out that discussions often focus on the context of creation rather than the diverse usage of AI and LLMs, particularly in relation to their potential to replace human professions. It highlighted how language and context significantly influence the worldwide usage and benefits of technology, with local conditions and currency playing a crucial role in determining access and usage of technological platforms.

The analysis advocated for moral responsibility and accountability in AI creation. It expressed concern that LLMs, with their ability to generate vast amounts of content, might be used irresponsibly in the absence of moral responsibility. It argued that technological creators should have a vested interest in their creations to promote accountability for AI-generated content.

There was an emphasis on the need to study technology usage to understand its real impact. The analysis acknowledged that people often repurpose technologies and use them in unexpected ways. It noted that the prevalent culture of “building fast and breaking things” in the technology industry leads to an imbalanced perspective. Thus, comprehensive studies are necessary to assess and comprehend the true consequences of technology.

The analysis highlighted the delicate balance between freedom to innovate and responsible innovation principles. While innovation requires the freedom to experiment, adhering to responsible innovation principles is essential to mitigate potential harm. It pointed out that regulations often emerge as a response to changes and issues stemming from technology.

The analysis acknowledged the non-neutrality of technology, recognizing that different perspectives arise from the lens through which we perceive and discuss it. It emphasized that individuals bring diverse values to the creation and use of technology, underscoring the subjective nature of its impact.

Furthermore, transparency issues were identified regarding web content and LLMs. The analysis noted that creative commons offer control mechanisms for web content, but there is a lack of transparency in large language models. This raised concerns about control mechanisms and participation in aligning these models, suggesting a need for greater transparency in this area.

In conclusion, the analysis emphasized the significance of developing and using technology responsibly to prevent harm and optimize benefits. It examined concerns such as data bias, language bias, transparency issues, and the importance of moral responsibility. The analysis also recognized the varied values individuals bring to technology and the importance of studying its usage. Overall, responsible technology development and usage were advocated as crucial for societal progress.

Yuki Arase

In the discussion, several concerns were raised regarding web data, large language models, chat-based search engines, and information trustworthiness. One major point made was that web data does not accurately represent real people due to the highly skewed nature of content creators. SNS texts from specific groups, such as young people, were found to dominate a significant portion of web data. This unbalanced distribution of content creators leads to biased representations and an overemphasis on particular perspectives. Furthermore, it was noted that biases and hate speech may be more prevalent in web data than in the real world, underscoring the issue of inaccurate representation.

Another concern addressed was the inherent biases and limitations of large language models trained on skewed web data. These models, which are increasingly used in various applications, rely on the information provided during training. As a result, the biases present in the training data are perpetuated by the models, resulting in potentially biased outputs. It was argued that balancing web data to accurately represent people from all around the world is practically impossible, further amplifying biases in language models.

The discussion also touched upon the impact of chat-based search engines on information trustworthiness. It was suggested that these search engines may accelerate the tendency to accept responses as accurate without verifying information from different sources. This raises concerns about the dissemination of inaccurate or unreliable information, as people may place unwarranted trust in the responses generated by these systems.

However, a positive point was made regarding the use of provenance information to enhance information trustworthiness. Provenance information refers to documenting the origin and history of generated text. By linking the generated text to data sources, individuals can verify the reliability of information provided by chatbots or similar systems. This approach can help increase trust in the information and mitigate the tendency to accept responses without verification.

The discussion also highlighted the impact of current large language models primarily catering to major languages, which could exacerbate the digital divide across the world. It was pointed out that training language models requires a substantial amount of text, which is predominantly available in major languages. Consequently, languages with smaller user bases may not have the same level of representation in language models, further marginalising those communities.

Lastly, the discussion mentioned the potential of technical solutions like watermarking to track the source of generated texts, a step towards ensuring accountability for AI-generated content. However, it was noted that the effectiveness of these technical solutions also depends on appropriate policies and governance frameworks that align with their implementation. Without these measures, the full potential of such solutions may not be realised.

In conclusion, the speakers highlighted several concerns related to web data, large language models, chat-based search engines, and information trustworthiness. The skewed nature of web data and biases in language models present challenges in accurately representing real people and avoiding biased outputs. The tendency to accept responses from chat-based search engines as accurate without verification raises concerns about the dissemination of inaccurate information. However, the use of provenance information and technical solutions like watermarking offer potential strategies to enhance information trustworthiness and ensure accountability. Additionally, the digital divide may worsen as current language models primarily cater to major languages, further marginalising communities using less represented languages. Overall, a comprehensive approach involving both technical solutions and policy frameworks is necessary to address these concerns and ensure a more accurate and trustworthy digital landscape.

Ryan Budish

Generative AI technology has the potential to bring about significant positive impacts in various sectors, including businesses, healthcare, public services, and the advancement of the United Nations’ Sustainable Development Goals (SDGs). One notable application of generative AI is its ability to provide high-quality translations for nearly 200 languages, making digital content accessible to billions of people globally. Moreover, generative AI has been used in innovative applications like generative protein design and improving online content moderation. These examples demonstrate the versatility and potential of generative AI in solving complex problems and contributing to scientific breakthroughs.

In terms of regulation, Meta supports a principled, risk-based, technology-neutral approach. Instead of focusing on specific technologies, regulations should prioritize outcomes. This ensures a future-proof regulatory framework that balances innovation and risk mitigation. By adopting an outcome-oriented approach, regulations can adapt to the evolving landscape of AI technologies while safeguarding against potential harms.

Building generative AI tools in a safe and responsible manner is crucial. Rigorous internal privacy reviews are conducted to address privacy concerns and protect personal data. Generative AI models are also trained to minimize the possibility of private information appearing in responses to others. This responsible development approach helps mitigate potential negative consequences.

An open innovation approach can further enhance the safety and effectiveness of AI technologies. Open sourcing AI models allows for the identification and mitigation of potential risks more effectively. It also encourages collaboration between researchers, developers, and businesses, leading to improved model quality and innovative applications. Open source AI models benefit research and development efforts for companies and the wider global community.

Ryan Budish, an advocate for open source and open innovation, believes in the benefits of open sourcing large language models. He argues that public access to these models encourages research, innovation, and prevents a concentration of power within the tech industry. By making models publicly accessible, flaws and issues can be identified and fixed by a diverse range of researchers, improving overall model quality. This collaborative approach fosters an environment of innovation, inclusivity, and prevents monopolies by a few tech companies.

In conclusion, generative AI technology has the potential for positive impacts in multiple industries. It enhances communication, contributes to scientific advancements, and improves online safety. A principled, risk-based, technology-neutral approach to regulation is vital for balancing innovation and risk mitigation. Responsible development and use of generative AI tools, along with open innovation practices, further enhance the safety, quality, and inclusivity of AI technologies.

Session transcript

Diogo Cortiz da Silva:
Good afternoon, good evening for everybody. Thank you to join us in this session about large language models and the impact on the web. And we plan to anticipate some questions. We are now in the last day of IGF, and we had a lot of sessions regarding generative AI. And this session is a little bit different because we will try to focus on some technical aspects and how generative AI, in general sense, could impact the web ecosystem. So when we planned this activity, we designed a structure in three main topics. One about data mining from web content. And we have a policy question for this. I read this. And so we have three main dimensions and three key policy questions that will guide our discussion here. But of course, we can go further on some aspects. And the first dimension is the web as data source for LLMs. And we have the policy questions. What are the limits of scrapping web data to train LLMs? And what measures should be implemented within a governance framework to ensure privacy, prevent copyright infringement, and effectively manage content creator consent? And we prepared these policy questions, I think, that four months ago. And since then, we see some work on this. For example, OpenAI and also Google, they create a way to block data mining. So it’s an approach to give user more control of their content. We have a second dimension that’s what happens if we incorporate generative AI, chatbots, on search engines. And for this dimension, we have the following policy questions. What are the potential risks and governance complexes associated with incorporated large language models into search engines as chatbot interfaces? And how should different regions? And for example, GlobalSoft respond to the impact on web traffic and consequently in digital economy. So if you have search engines replying directly to the query and not giving access or links to the original content. But we have a lot of other technical and ethical questions about this that can go further. And the third dimension is the web as the platform to post content generated by AI. And for this, we have the following policy questions. What are the technical and governance approach to detect AI-generated content posted on the web, restrain the dissemination of sensitive content, and provide means of accountability? And for this workshop, we have an excellent team of speakers from different backgrounds, from different stakeholder groups, and from different regions. We will have Professor Emily Bender from University of Washington that will join us online. We will have Wagner Santana from IBM Research. We will have Yuki Arasai from Osaka Universe that’s here in person. We will have Rian Budish from Meta that will join us online. We will have Dominic Rassel from W3C, the World Wide Web Consortium, that will join us online. And we’ll have Rafael Evangelista from the Brazilian Internet Steering Committee and Professor of University of Campinas that is also here. So I will start. Actually, every speaker will have 10 minutes for initial considerations. And we will start with Professor Emily Bender. Professor Emily Bender, thank you for joining us and accept our invitation. The floor is yours.

Emily Bender:
Thank you so much. Ohayou gozaimasu. I’m joining you from Seattle, where it is the evening. And I have prepared just a few remarks. And I’m hoping I can share my screen for some visual aids partway through. But I’ll try that when I get there. To the first question about the limits of scraping web data to train LLMs, I think it is really unfortunate that we have come around as a global society to a situation where the default seems to be if somebody can grab the data, it’s theirs. That doesn’t have to be the policy standpoint. But we have to take action if we want to change it. And what I would like to see it change to is what Sacha Costanza-Chock calls consentful technology, where the data is collected in a meaningful opt-in way, only with consent of the people contributing the data. And the benefit that will come with that is that such data collection has to be intentional. Right now, the data underlying LLMs is largely collected very haphazardly. The push has been to get the largest possible data set because that leads to more fluent output. That leads to output that can seem to speak to more topics. And so it’s just been, let’s grab everything we can. That hasn’t left room for documenting it so that we know what’s there. And it also hasn’t left resources or room for really building something that is representative of the world we would like to build. It’s also, incidentally, not representative of the world as it is because the internet, as we’ll see with my examples in a moment, doesn’t reflect a neutral viewpoint on the world. Moving on to the second question, what are the potential risks and governance complexities associated with incorporating LLMs into search engines? These are enormous. And it’s really important to understand that a large language model is not an information source. The information that is stored in a large language model is literally just information about the distribution of word forms in text. It’s not information about the world. It’s not information about people’s opinions about the world. It does include reflections of opinions in the form of biases that are expressed via the distribution of word forms in text. Thinking about the implications for the Global South in particular, and starting first with that idea of bias, here’s where I want to try to share my screen. Let’s see if this works. I teach with Zoom all the time, so it should work. Just going to be brave and share the desktop. All right. Do you see a tweet? Hopefully. This is an author advertising a preprint paper. And what they did in this paper was they looked at the ways in which mentions of people and places basically cluster together in very large-scale collections of text. They’re looking at Lama Tzu. And this was presented as though it were a world model, rather than just correlations that entities in the US tend to be mentioned in the same kinds of textual circumstances. What is particularly striking, actually, about this graphic is just how sparse the data is in the Global South. And so we are getting lack of representation and then misrepresentation because we are relying on these data sets that heavily weight the gaze of the Global North. And that’s a big problem. The other thing that I wanted to show you has to do with pollution of the information ecosystem. So as we let these synthetic media machines just spill their synthetic text into the web, it doesn’t stay contained as the output of ChatGPT, but it moves from location to location. I tested this today. It is unfortunately still true. If you put in the Google search query, no country in Africa starts with K, which isn’t even a question, but it’s a search query, out comes this false snippet. While there are 54 recognized countries in Africa, none of them begin with the letter K. And then it nonsensically continues, the closest is Kenya, which starts with a K. And where did this come from? So this is Google search. I’m not even using BARD here, but this is Google search taking a snippet from the first hit for this query, which is this page called Emergent Mind, where some developer has chosen to post the output of ChatGPT. I don’t know who this person is. I don’t know why they chose to post this thing. But somebody decided to give ChatGPT the input, did you know that there is no country in Africa that starts with the letter K? ChatGPT is designed to provide outputs that human raters say this is good. In other words, it’s designed to output a sequence of text that reads as what you want to hear. And so ChatGPT replies, yes, that’s correct, and then continues with that same string that we saw Google pulling up as its snippet for the search result. So there’s two big problems here. One is we have the output of the synthetic media machine that looks like very fluent English, and so it sort of slides in with other kinds of information. And the other is that our information ecosystem, just like a natural ecosystem, really is this interdependent collection of sites. And the synthetic text doesn’t stay quarantined to where it was output. I’ll stop the share there so that I can see my notes when you can’t. I want to move on to point C here. The question is, what are the technical and governance approaches to detect AI-generated content posted on the web, restrain the dissemination of sensitive content, and provide means of accountability? So technically speaking, with the synthetic text that we have now, this cannot be detected after the fact. It has to be marked at the source, and that means watermarking. That is not impossible. There’s really interesting work, for example, published at ICML this year for very clever ideas about how to put in watermarks in synthetic text that would be hard to detect and remove. But honestly, even something that is relatively easy to remove would be an improvement. Because if we have watermarks, then the default use case would contain the watermarks, and we could filter the synthetic text. And just like oil spills in the natural ecosystem, synthetic media spills in the information ecosystem are a situation where less pollution is better. Even if we can’t get rid of all of it, it’s worth designing policies to minimize it. So I really think we need policy action here, and we can’t just pin our hopes on some technological solution that would allow us to detect this stuff after the fact. So I think that is everything I plan to say. I want to make sure there’s time for everyone to speak. I look forward to learning from you all. Thank you.

Diogo Cortiz da Silva:
Thank you, Professor Emily Bender, for your considerations. And now I invite Wagner Santana from IBM Research. Wagner, the floor is yours.

Vagner Santana :
Thank you. I’ll try to share my screen just a second. I’ll need to quit and try again. Sorry. So we’re waiting for Wagner to join us online. So we move to Professor Yuki.

Yuki Arase:
So thank you for inviting me to this exciting panel. So my points are quite largely overlapped with what Emily just said. But for the first point, first question, the limitations of scraping the web data to train large language models is that we should be aware that web data never represents people in the real world. It is highly skewed in many ways due to unbalanced distributions of content creators. For example, now, SNS texts occupy a large portion of web data, which come from mostly a specific group, particularly young groups of people using SNS. And also, like, social biases or even hate speeches can be more significant in the web data than what we really see in the real world. And there are a large amount of automatically-generated content, including noisy or even toxic ones in the web data. So web data can never be balanced to equally represent people in the world. And large language models trained on such data inevitably inherit the same feature or same trend or same characteristics of such web data. So it won’t be perfect, like the correct or trustworthy model as it is. So we should be aware of that. And for the second point, what are the potential risks and governance complications associated with incorporating large language models into search engines? So I think one of the serious concerns is that chat-based search can be too handy for people to use, which may accelerate the tendency to accept a response as correct or trustworthy without looking up different sources of information. So as I just said, the web data does not represent real people. And the web data, sometimes, there is a lot of wrong information. So large language models trained on such data has the same trend. So there’s no doubt. So the search is now our lifeline. And its advancement is really appreciated. But we must ensure the way to access various sources of information so that we can check the information is trustworthy or it’s something we should believe. So for this, I think it’s a good way to address this problem is to have a way to link the generative text to some kind of data sources, which allows us to understand what information these texts are based on. So as our group, we have been working on this kind of problem. So natural language processing can somehow help to identify alignments between generated and text in the real world. Such kind of provenance information gives us a chance to step back and think, wait, is this chatbot response really trustworthy or not? So another concern is that the current large language models cover mostly major languages because they are data-hungry and require a large amount of text for training. So text data of such scale is available only for major languages. And besides the evaluation and benchmark data sets that we are heavily rely on to developing such large language models, also concentrate on major languages. So yeah, so this trend may hinder the expansion of the technology to regional or local languages. which may worsen the digital divide across the world. So we should explore the way to train large language models in a data-efficient way and cover various languages and cultures and so on. So for the third question, what are the technical and governance approaches to detect AI-generated content? I think, yeah, I was about to refer to the same paper just Emily mentioned, like watermarking for the generated text. This is a technical way so that we can track down who generated, for which model generated such text. But as Emily said, this is just a technical solution and we need a policy or governance to really work with such kind of technology, really work in the world. So that’s all from my side, thank you. Thank you.

Diogo Cortiz da Silva:
Thank you, Professor Yuki. And now I invite Ryan Budish from Meta. Thank you, Ryan, to join us. Thank you very much. Before I start, did you want to go back? I see Wagner is back on. I know we skipped him, so I just wanted to make sure that. Ah, okay, Wagner is back, right? No, so, yeah. I’m here. So can you try to, you are going to share your screen, Wagner? Yes.

Vagner Santana :
Okay, so let’s try. Can you see my screen? Okay. So now it’s okay, so. Thanks, I’m sorry for the previous situation. Well, I prepared a few slides just to try to delve into the questions presented by Diogo, but under the lens of this idea of thinking about the context of creations of technology and the context of use of technology. Well, as Diogo mentioned, we’re thinking about scraping web data, privacy, copyright, and also the use of LLMs to search engines, different regions, and how the digital economy may be impacted, and also the whole idea of detecting AI-generated content, dissemination accountability. For the first point, I like to think about how we came here, right? So first we had the web one, then web two, the social web, and then now with blockchain, with the promise of providing more trust. But if we pause here and think about LLMs, now we’re having this data used to train models, and then we have this black box without transparency about the data that is inside, and how is it going to be this web three plus with data? And the concerning thing about is how is it going to be when it started to be retrained on the data that it’s using, right? What are the biases? And this has already been mentioned by other panelists that we have bias, we know that, and how is this going to be amplified by this, the way we have? And we have approaches like the robots TXT file to block, but that shouldn’t be the default, right? Capture anything that you have to block for someone to not use your contents, right? So, and we also can start thinking about machine learning attacks. People can start creating pages just to poison LLMs that are going to be trained on those datasets. So these are some of the aspects that I wanted to bring first. And moving to the second question, well, I often see this discussion about humans and humans substituted, replaced by humans plus AI, humans plus LLMs, right? And again, back in web two, we had just content creators creating content, e-commerce platforms, and then the consumption by people, social, and then conversions, and then this coming back to the platform and part to the creators, right? And nowadays with adding LLMs to this equation, we have this idea of having LLMs with the content creators creating maybe more content. And then there’s this whole promise of increasing productivity. Again, we have platforms, but now the consumption is not only by people. We have also robots consuming that to their own interests, right? Conversion, and then this will come back in some form and distributed. But back to this idea of replacement, I’d like to bring a discussion around. Usually we have this examples of certain categories and the one that I like to explore is about attorneys, for instance. So there’s this whole discussion that attorneys are going to be replaced by attorneys that use LLMs. But if you think about the language that these LLMs are being trained and the laws, and usually, so those platforms are paying US dollars, right? I’m located in New York, TJ Watson Lab, but I’m originally from Brazil and the currency impacts a lot on how you use those platforms, right? So this is one aspect I wanted to bring. And the thing is these discussions usually about replacement are closest to the context of creation of technology, right? People that speak the same or the most used language on those data sets used for training, right? And moving towards the third question, and we already saw some of these aspects. I think that one aspect that I wanted to emphasize here is the idea of the accountability and generated content and the understanding of how technology works, right? It predicts the next word. And if we get this into scale, we have really large content being created, right? And I like to see that and discuss about that being as a one way. And if we try to get, for instance, a reverse prompt in which we give a content and ask for a prompt, you will not get this. It is not trained for that. And it has no means of getting the input back. And there’s this whole idea of understanding the limitations and in Responsible AI, there’s this question around moral entanglement in which we should have technology creators of being morally entangled with the data and the technology they create, right? And I would also expand that to the content because nowadays we’re seeing people using some LLMs in a not better way, not right way. I brought some examples that I saw in a prompt engineering course of ways that people present on how to use large language models and here some quotes on creating blogs and answering social media about things that people don’t know, right? So I think that we should have also this idea of moral entanglement for content that also people create. And the thing is that we have this huge technology that is consistent and predictable and it’s hard to cover all possible outcomes of context. So this idea of techno-solutionism and techno-centrism brought us here, right? I brought really just an outline of the Responsible Inclusive Framework that we proposed in our team. The R&I Framework, it brings some discussions around context of creation versus context of use and this distance, how this distance can be concerning and a notion of stakeholders that goes to self-business up to society. And here, the idea of presenting this picture is that… Okay, no. And well, in context of use, we have creation of technology, prototyping, development, training and deployment. What we have in context of use, sorry, we have users, tasks, equipment and social and physical environments and all the possible variations of those, right? So we have really complex situations. So it’s nearly impossible to predict all possible contexts before and after, even after deployment. Let’s think here, some examples, people riding bikes while using mobile phones, right? And for developers, it’s hard for us to think about tasks and ways of people using mobile phones while riding bikes. And imagine this last one with six mobile phones and riding a bike. And here we can see that it’s the same app, Pokemon Go, but imagine that we are in a context that people are using six different LLMs interacting while riding bikes. It’s impossible to predict all of these possibilities. So why does distance matter? Because we have, the higher the distance, the more impersonal technology is. And that’s what we see nowadays. Technologies created in one region, used in all around the globe, a lived experience for people creating technologies are different from the ones impacted by these technologies, right? And this culture of build fast, break and fix, which is often popular, it influences in this impersonality for technology. And there’s also imbalance in terms of perspectives, considers. And unfortunately, the ones with power to compete, understand and promote changes are very few. So to conclude, without studying how technology is used, we are hindered for the real impacts. And our premises when creating technology are limited in terms of coverage of possible contexts. And we need more ways of covering all these possibilities, diverse teams and all the things that we already know. But there’s one interesting aspect is that people repurpose those technologies. We have been repurposing technology since web one, right? And some people use that in a really good way. So we need to empower those users, but also to prevent harmful aspects. And there’s this whole idea that innovation may need freedom to experiment, right? But also responsible innovation teaches us that we need avoid harms, do good, and implement governance to make sure that these two things are happening at the same time, right? And we see that usually we have regulations reporting to changes. And I think that this is one interesting way of starting and starting a change and responding to the things that we are seeing out there. Thank you.

Diogo Cortiz da Silva:
Thank you, Wagner, to bring your considerations from the industry perspective. And now I invite Ryan from ETA to also share inputs from the industry. Thank you, Ryan. The floor is yours. Great, thank you so much. I’m thrilled to be here.

Ryan Budish :
I’m coming from Boston, Massachusetts, where it is quite late at night. So I’m going to try not to speak too loudly because my kids are sleeping in the room next to me, but let me know if you can’t hear me. So I wanna start by taking a step back. And I think that it’s still, even though it doesn’t feel like it some days, it’s still very much early days for generative AI technologies. And I think what these technologies might look like as they unfold is still a bit fuzzy, but it isn’t hard to imagine some of the huge positive impacts that they could have for businesses, large and small, for healthcare, for the delivery of public services, for advancing the UN Sustainable Development Goals and much more. And I think a lot of people, maybe they think about AI chat bots or some of the really fun generative AI tools, like some of those that Meta announced just a few weeks ago. But before getting into these questions, I just wanted to mention a couple of uses of large language models that we’ve developed that I think highlight some of the tremendous opportunity here. One area is translation, and we’ve published groundbreaking research and shared models for translation, such as our No Language Left Behind model and our Universal Speech Translator models. No Language Left Behind, NLLB, is a first of its kind AI research project that open sources models capable of delivering high quality translations directly between nearly 200 languages. Because high quality language translation tools don’t exist for hundreds of languages, billions of people can’t access digital content or participate fully in online communications and communities on the web in their preferred or native languages. And tools like NLLB can help address some of that. And when comparing the quality of translations to previous AI research, the NLLB 200 models scored an average of 44% higher, and it was even significantly more higher than that for some African and Indian-based languages. And we’re also developing this Universal Speech Translator, where the innovation there is that it can translate from speech in one language to another in real time, which is something that can work even where there is no standard writing system. And that’s really important because when you think about how a lot of language translation models work, particularly speech-based ones, they start with speech, translate it to text, translate the text from one language to another, and then translate that back in, transform that back into speech. And that breaks down if you don’t have a standard writing system in the middle there. And so something like Universal Speech Translator can help address that. And eliminating language barriers could be a profound benefit, making it possible for billions of people to access information online, across the web in their native or preferred languages. And we’ve also made other large language models available to researchers and have seen really tremendous research and innovation there, including like our OPT175B model, which has been used for all kinds of interesting applications, like generative protein design to improving content moderation tools online. And so I think that there is really a potential for immense benefits of these large language models on the web. But at the same time, there’s also undoubtedly risks and problems. And like any technology, an LLM itself is not inherently good or bad, but the critical question is what is it used for? And I think AI technologies and LLMs can drive progress on some of the most pressing challenges that we’re facing today. So when we think about governance, we have to strike a balance between mitigating these potential risks, particularly from high risk applications, while ensuring that we can continue to benefit from innovation and economic growth. And as we’ve heard already a couple of times today, in order to build these large language models and to have these benefits that they’re able to potentially bring, the volume of material required to train them is almost incomprehensible in scale. We’re talking hundreds of millions and sometimes billions of pieces of information is required to train a large language model. And in order to build these groundbreaking tools and have the training data necessary, many companies have to use data from a wide variety of sources, including data publicly available from across the internet. And the sheer scale of these systems is partly why these issues that Diego has teed up, rightly so, is why they’re so important and so complex. So on the first question, the piece that I wanted to dive into, to at least to start with, is about… is about privacy. And I want to talk about some of the ways that we’re trying to develop these technologies in a safe and responsible way with respect to privacy. I think we know we have a responsibility to protect people’s privacy. And we have teams dedicated to this work for everything we build, including our generative AI tools. A few weeks ago, for instance, we announced a bunch of exciting new generative AI products. And privacy was really important for how we develop those features with a variety of important privacy safeguards to protect people’s information and to help them understand how these features work. Our generative AI features go through rigorous internal privacy review process, for example, which helps us ensure that we’re using people’s data responsibly while building better experiences for connection and to help people express themselves online. For publicly available information, for example, we filtered the data set to exclude certain websites that commonly share personal information. And importantly, we didn’t train these models on people’s private posts. And for publicly shared posts on things like Instagram and Facebook, they were a part of the data used to train generative AI tools. And we train our generative AI models to limit the possibility of private information that one person may share while using a generative AI feature from appearing in responses to other people. Now, on the second question, this is something that we think a lot about how we can build these tools so that they can benefit everyone, including people in the global South. And one important way that we’re trying to do this is by making AI technologies more accessible to more people. We’ve been very public about our views on open source, most recently releasing Llama 2 and Code Llama models. And we do this because we believe that the benefits of AI should be for the whole of society, not just for a handful of companies. And we believe that this approach can actually make AI better for everyone. With thousands of open source contributors working to make an AI system better, we can more quickly find and mitigate potential risks in systems and improve the tuning to prevent erroneous outputs. And the more AI-related risks are identified by a broad range of stakeholders, including researchers, academics, policymakers, developers, and other companies, then the more solutions that the AI community, including tech companies, will be able to find for implementing guardrails to make these technologies safer. And an open innovation approach also has economic and competition benefits. I mean, LLMs are extremely expensive to develop and train. And that’s why, increasingly, AI development and major discoveries happen in private companies. But with open source AI, anyone can benefit from the research and development, both within companies, but also across the entire arena, across the entire global community of developers and researchers. And this is something we’ve experienced firsthand in other contexts. Our engineers, for example, developed an open source frameworks that are now industry standards, like React, which is a leading framework for making web and mobile applications, as well as PyTorch, which is now the leading framework for AI. And so now, on to the third question. Meta has learned from a range of experiences, both positive and negative, over the last decade. And we’re using these lessons to build safeguards to our AI products from the beginning, so that people can have safer and ultimately more enjoyable experiences. I think it’s important, when we talk about watermarking, particularly for something like text, that our view is that generative AI doesn’t help bad actors spread content once it’s created. Bad actors can really only spread problematic content, whether AI-generated or not, through known tactics, like fake accounts or scripted behavior. And this means that we can actually continue to detect malicious attempts to spread or amplify AI-generated content, using many of the same behavioral signals that we already rely on. And we know that generative AI can help bad actors create problematic content. So we have teams that are constantly working to get better at identifying and stopping the spread of harmful content. And we’re actually optimistic about using generative AI tools themselves to help us enforce our policies. And this issue is not unique to META. It’s a concern across industry. And that’s why META and many of our industry peers voluntarily joined the White House commitments that include a commitment about watermarking AI content that would otherwise be indistinguishable from reality. But make no mistake, this is a deep and significant technical challenge. And currently, there really aren’t any common standards for identifying and labeling AI-generated content across industry. And we think there should be. And so we’re working with other companies through forums like the Partnership on AI in the hope of developing them. And so what should governance of this technology look like? And I think that we support principled, risk-based, technology-neutral approaches to regulation of AI. We think that measures should not be focused on specific technologies, such as generative AI. Instead, our view is that regulation should be focused on the what, the outcomes that regulation wants to achieve or prevent, rather than the how. We believe that this approach is more future-proof and helps strike a better balance between enabling innovation while continuing to help us minimize the risks. So with that, I’ll stop there. So thank you.

Diogo Cortiz da Silva:
Thank you, Ryan. Now we move to technical community. And I invite Dominique from W3C, the World Wide Web Consortium, to join us. Hi, everyone. Thank you, Diogo, for the invitation.

Dominique Hazaël Massieux:
Just a quick few words about what W3C is and maybe why I’m here. So W3C is a worldwide web consortium and why I’m here. So W3C is one of the leading standard organizations for web technologies. And in particular, in W3C, I’ve been in charge of developing our work on bringing machine learning technologies to web browsers, which has led me to look at the broader impact of AI on web content. So to the three questions that were raised for this panel, the first one around the limits of scraping web data. So I think it’s interesting when you look at that question and you look at what exists today, scraping web data is something that probably started from the very early days of the web that has been a critical component of one of the tools we all rely on, which, of course, are search engines. And so one of the questions I wanted to raise is how do LLMs and search engines differ in terms of scraping web data and why should they be handled differently? And I think one of the clear answers to that has already been alluded to. Search engines today fulfill a role of intermediation between content creators and content consumers, where content creators can expect something back in the form of a link back to the original content. If you look at an LLM, in most cases, and maybe this will change, as others have said, this is a very fast evolving space. But today, an LLM is mostly a black box. You get an answer, but you don’t know the sources from the training of that LLM, and you don’t know exactly which sources were used to build such an answer. And some of it is structural to the technology itself. It’s not just a limitation. Part of what an LLM does is compress all this information they gathered across a whole corpus of data they collected. So given the fact that copyright itself was, at least from my understanding, always building as a trade-off between incentivizing content creation and making sure the content would get widely published and distributed, I think the fact that LLMs today have, to say the least, an unclear story about how they consider the copyright of the content they integrate in their training, I think to me there is here a really fundamental question. Understanding, indeed, whether it’s permissible for LLM to use any kind of available text and data for that training, or whether, as Professor Bender said, this needed a lot more explicit consent from the content creators. And my perspective is that, indeed, the current robots exclusion protocol, which is really about excluding crawlers, not saying anything about what the crawling data should be reused for, is not a sufficient mechanism to ensure the explicit consent of content creators. We need something a lot more robust and a lot more opt-in rather than opt-out from my perspective. I think the question about privacy is also interesting. Again, if you think about the search engine comparison, something that has emerged over the past few years is the so-called right to be forgotten, where, at least in some regions, search engines have been mandated to remove content that is private of nature. And of course, there is also some controversy about the feasibility of this request and the overall impact on the information space. But if you think about that particular question and LLMs, again, is it even feasible today to untrain such a specific part of an LLM that could have been learned over data that would have otherwise been removed from the public information space? I mean, to me, that illustrates some of the really tricky questions. No matter how careful the training might have been, the data might have been created, it assumes that this is a static set of permissible data when, in fact, what is permissible has to evolve over time based on evolution of regulations, based on evolution of individual rights, and so on. So I guess, to me, the answers to what are the limits, they are, to me, pretty large. I think there needs to be a significant rethink of how training should be done. And of course, there is a lot of value in having a lot of text to create some of the really impressive output that LLMs have been able to bring. But that cannot be at the expense of making sure that, in particular, content creators get the incentive to continue to create and publish that content. Because otherwise, at the end of the day, of course, there won’t be anything left for LLM to build on if content creators stop publishing their data, no matter what. In terms of the questions around the complexities of incorporating chatbots into search engines, some of the main points, I think, have already been made. I mean, to me, one of the critical points, again, was made by Professor Bender, mixing something that users have approached as a source of reliable information with checkable provenance with something that is not meant as a tool of necessarily trustable information or checkable information is a really challenging UI question. Typically, probably not a good idea, although there could be protections around it. Sorry, it’s 3am here, so the brain is still a bit waking up. And the fact that these interfaces are really sleek, in a way, makes the problem even more damning. But in terms of the complexity of the governance question, I think we are dealing with the questions we’ve seen emerge again and again. What are the limits that can be put into things that are primarily products and user interface or even user experience considerations? I think we all agree that there has been a lot of value in allowing a lot of innovation, a lot of competition in that space. And so there are limits to what governance, external governance can impose in that space. We are seeing some evolutions in this limits with some of the regulations, for instance, emerging with the Digital Service Act in the EU. But to me, there is something here structural in terms of governance, that is, who should have a say about what gets exposed in a search engine interface. And even if some of this may or may not be a good idea, who is going to be at the table to participate to these conversations? I don’t think it’s a simple question. Again, there’s a trade-off between enabling new ideas, new interfaces, new interactions, and making sure we don’t weaken some of these tools that have become structural, systematic in their importance, I think is something that we are going to be facing for the years to come. But again, in terms of one of the impact that I think we need to keep repeating in the importance of the web ecosystem, the fact that today LLMs don’t generate backlinks, they generate digested, compressed content, is something that further goes against the grain of the role of search engines, not only social role, but also economical role of search engines, which, again, typically have operated with the notion that they serve as this intermediary between content creators and content consumers. Finally, on the third question around the approaches to making AI-generated content detectable, there is definitely a challenging technical question. How do you watermark text in a way that is meaningfully detectable and resist to changes? And the latter, I think, points maybe to the more structural issue to me in that space is that some content that gets released and published is purely AI-generated, and LLMs allow to provide scale and possibly, unfortunately, some level of trustworthiness in the sense that they provide very sleek outcome. But increasingly, my guess at least is that LLMs will be used not just as pure generators, but as authoring tools, something that help people create content, not just create content that gets released as is. And so when you get into that mode, it’s no longer a binary, yes, this was created by AI versus this was created by human. I expect a lot of content that we will see in the years to come will be hybrid content with AI having either provided a fast version, having provided corrections to existing content, or even a more iterative process between human and AI-generated content. And how do you… such a content, even without thinking of what or marking what kind of metadata could be used to reflect this I think is, to say the least, challenging. Of course, the need to mark at least purely generated AI content I think remains important and worthy of addressing itself. And I would say it’s probably even worth addressing for LLM trainers themselves. If you’re training your LLM on generated content, you’re going to create likely a lot of drift in the quality of the training over time so there is value in being able to either exclude or at least treat differently such content. But at the end of the day, I think the real question that this particular trend of AI generated content is bringing even more strongly to the surface is one of indeed accountability and transparency about the source of content. So, for example, fake information, fake news haven’t waited for LLMs to emerge, the content for spamming, the farms for spamming content haven’t waited for LLMs either. LLMs are very likely going to bring a different scale to the issue and so that doesn’t address the problem, but to me I think it’s really important we address the broad issue as the issue about how do we get as a society to managing this different level of quality of content, the notion of who is responsible for content that gets published, and that we take into account the impact that LLM brings to the scale of that issue, but I doubt that focusing specifically on LLM or AI generated content is the right framework for the discussion. I think the real critical gap I’m seeing in terms of governance here is one that I think this very panel is trying to address. I think we have a lot more structured conversations between technologists, between research, between regulatory bodies in structuring this space. So far, it’s a lot, it’s way too much siloed conversations among our own small communities, having places, having opportunities more than a panel, really day long conversations about how do we, with our various stakeholders, with our various perspectives on the problem space, come to a set of, if not solutions, at least directions, at least places for experimentation that cross these barriers across technology and regulation, I think is really the critical piece because until these silos remain, then the gaps between these conversations are the places where the things we don’t want to appear are going to thrive. Thank you.

Diogo Cortiz da Silva:
Okay. Thank you, Dominique, for your contribution. And now I move to Rafael from the Brazilian Internet Steering Committee and also professor at the University of Campinas in Sao Paulo, Brazil. Thank you, Rafael.

Rafael Evangelista:
Thank you, Diogo. Firstly, I would like to thank you for the invitation and congratulate the organizers for the quality of the questions presented in this panel. However, I must say I won’t be able to address the complexity of all the issues mentioned in the activity description. One pressing concern I would like to address is the proliferation of low-quality content on the Internet, and the root of this issue, in my opinion, is the financial model that underpins much of the web’s content creation. The digital advertising ecosystem, which rewards content creators based on the number of views or clicks, has inadvertently incentivized the production of sensationalist or even misleading content. This is particularly evident in Brazil, where such content has not only misled the public, but also has posed significant threats to the democratic process. A case in point is the 2018 elections, during which certain far-right factions adeptly utilized instant messaging groups to disseminate and amplify online content. This content was then monetized either directly through the platforms or indirectly via digital advertising. And something similar happened in the context of the 2016 US elections, where the actions of Macedonian groups seeking economic gains are well documented. From the perspective of the developing nations or the so-called global north, these practices might seem distant or even improbable. However, the reality in the global south, characterized by stark economic disparities and significant currency fluctuations, paints a different picture. There, many individuals, including young professionals, find themselves resorting to producing subpar or misleading content as a viable means of income. This trend isn’t limited to mainstream platforms. Even alternative media outlets, which traditionally championed unbiased and independent reporting, are succumbing to the allure of increased clicks and the subsequent revenue. The overall quality of content produced in Portuguese, speaking of the case of Brazil, has dropped considerably due to the perverse economic incentives for web publishing. The advent of large language models further complicates this landscape. There is a growing concern that LLMs might exacerbate and spread low-quality information. To counteract this, we must re-evaluate and overhaul the existing compensation structures governing web content production. The current business models, especially those of major big tech platforms, have inadvertently skewed the balance, often to the detriment of genuine, high-quality cultural and informational content. In my capacity as a board member of CGI.PR, we have dedicated time and effort to discuss potential legislative actions to curb that scenario. Our primary aim is to find ways to reallocate the enormous wealth accumulated by major technology corporations to fund better quality content. We believe that these resources can be instrumental in promoting and sustaining high-quality, diverse and inclusive journalism, which is crucial for a well-informed society. Our team is not just looking for short-term solutions. Instead, we are determined to craft a strategy that can overcome the prevailing marketing incentives, which, more often than not, tend to favor quantity over quality. A substantial part of our discussions focus on how journalists and content curators can be fairly compensated for their work. Many suggestions on the table are rooted in copyright claims. The core argument here is that many online platforms are reaping significant profits from journalistic content without providing just compensations to those who produce it, which is similar to what is happening with the LLMs. Interestingly, this debate parallels the discussions about the training of artificial intelligence systems, especially when it comes to the use of vast amounts of data without proper acknowledgment of or compensation. While I personally find these arguments compelling and worth considering, the field of journalism introduces its own set of complexities. One of the most pressing issues is defining the boundaries of what truly qualifies as journalistic content and what not. The blurred lines between opinion, fact, and entertainment content make it a daunting task to set universally accepted compensation standards. I believe that the solution isn’t merely to bolster existing copyright frameworks. Instead, we should focus on cultivating an environment that encourages the creation of high-quality content that benefits the collective. In the realm of journalism, this could manifest as public funds sourced from tech giants, but managed transparently and democratically, dedicated to promoting quality journalism. Implementing such mechanisms won’t be without its challenges, especially when it comes to defining quality journalism and safeguarding it from undue external influences. The challenges posed by IOMs are analogous. Take, for example, Cielo, a digital library that offers open access to scientific journals. Initially a Brazilian initiative, it now boasts participation from 16 countries, predominantly Portuguese and Spanish speakers. With over 1,200 open access journals, it’s a treasure trove of information readily available to IOMs for training purposes. This represents a significant public investment from the global south, which is now being harnessed to train technologies predominantly controlled by a select few corporations. In my view, the answer is not to restrict access to such invaluable resources, nor is it feasible to compensate every individual author of these scientific papers directly. Many of these authors are already compensated by their academic institutions to produce publicly accessible knowledge. It’s essential to recognize that while IOMs might be the brainchild of major corporations, the knowledge that fuels them is derived from a collective commons. Thus, our governance solutions should pivot away from individualistic compensation models. Instead, we should champion initiatives that acknowledge the collective essence of knowledge production and channel resources towards bolstering public digital infrastructures. In the sense, IOMs are used as public digital infrastructures. Along with these public digital infrastructures, we need to establish governance and financing mechanisms that ensure the fulfillment of public and democratic interests. It seems clear that the technological and financial difference between companies from the global north and the global south creates a situation where only states have a realistic capacity to compete. The web, with its open and collaborative nature, was an infrastructure that excited everyone at the beginning of the 21st century due to the possibilities of producing free and accessible cultural commons. However, social media platforms soon emerged with their walled gardens, blocking content interoperability and privately appropriating collective production. IOMs represent a new chapter in this challenge. They appropriate not only the expressed content, but also the ways we express ourselves, the form used to express ourselves. While IOMs undoubtedly bring benefits and have many uses, leading to their rapid adoption when used in the context of weakly regulated advertising and surveillance markets formed by distorted economic incentives, they become tools for further production of low-quality content. Thank you.

Diogo Cortiz da Silva:
Thank you, Rafael. Thank you all the speakers for initial remarks. We have different inputs from different stakeholders, and now I open the discussion. So, I invite the audience, both in person and online audience, to ask questions. And also, I invite the speakers to comment on the content discussed here. So, we have two questions, three questions here, four questions on site. So, I think that we can run the mics. Yeah, it’s better than go there, I think. No, I think that we can run this mic here.

Audience:
Yeah, my name is Julius Endert from Deutsche Welle Academy, German public broadcaster. So, I would like to connect what you said. So, we are also trying to find out how the effects of especially generative AI on the freedom of expression is. So, will it be a tool for allowing more people to express them freely, or will it maybe, on the other hand, be the opposite and that we see new limitations, especially in unfree media systems and surroundings and authoritarian regimes, or the effect also on the public discourse? That is my question. So, what is the effect on the freedom of speech and the public discourse, to make it shorter? Okay, so I think that you can reply now. Yeah.

Rafael Evangelista:
As I was trying to say, I don’t think that, of course, authoritarian countries represent a different challenge, more like a specific context. But I think that what I was trying to express is that AOMs will not only be used in this context, but even in the democratic context, in free countries, we have this bunch of incentives for the production of low-quality content. And I think the AOMs will be used for that. And the thing that I think could be useful to combat that, to try to avoid those things, is to understand that we have to tax the companies and to use that funds to create public incentives, to produce content that is of quality and regulated or governed by public institutions that can be democratic. I think that’s it.

Diogo Cortiz da Silva:
Thank you. So we move to the second question. It was over there. You can go to the mic there, I think.

Audience:
Hello, my name is Teo from the University of Brasilia. I’m starting my question with the point that the representative from META just brought up, which is the idea that you don’t regulate the form or the process, but you regulate the product, the outcome. And I’m wondering, we’re talking about this in the context of very few businesses, just as platforms and social media are controlled by the same few businesses that control the development of LLMs, and not even states can compete with the development and the pace of LLM development. What is the role then, what would be realistic roles for the state and for the role of openness in this scenario, considering that also openness is co-opted by the same platforms to develop their models? I wonder what your views are on the state and the openness models.

Diogo Cortiz da Silva:
Okay, so I think for these questions I invite… Ryan, to reply, and then I open to all the speakers to comment, okay? Ryan, are you there? Yeah, yeah.

Ryan Budish :
Yeah, so, I mean, thank you for this question. I mean, I think that it’s important to, you know, in some ways, I would push back a little on the framing of the question, because I think the, when you look at the companies that are developing these large language models, they are actually quite different, and have rather different business models and incentives, and, you know, and so I can, you know, speak for Meta and our view, and, you know, as I said in my prepared remarks, you know, that we believe very strongly in open source and open innovation, and that’s actually something that we believe not only will help improve the quality of the models, and improve the safety of the models, but will also help ensure that this isn’t just a domain of a handful of tech companies. You know, when you think about how difficult and expensive it is to train the models, you know, if the only options that are available, if you’re a small business or a researcher that wants to use a large language model, and the only options that are out there are proprietary models that you have to pay for, then you end up with a situation where there’s potentially a race for the bottom, where people choose, you know, cheaper, low quality models, or maybe they try to build their own models, and, you know, and maybe, you know, there’s a lot of challenges there as well, and so one of the things that we think about since is that by open sourcing many models and making them available, that we’re actually able to help support a lot of good research and good innovation in businesses by making it possible for people to have access to many high quality models, and so for us, I think it’s not about gating access to these models. It’s actually about how do we enable more people to take advantage of these models, and then to be able to make them better and build on them and innovate, and when researchers find flaws or issues with the models, and then those can then be fixed, pulled back into the models, and then those fixes can be shared by everyone who’s building on top of those models, so anyway, those are some of my thoughts on the openness piece of it.

Diogo Cortiz da Silva:
I open to all the speakers if you want to follow up on this question. Emily Bender, yeah, you can go, Emily, please. Actually, I have a comment on a different topic,

Emily Bender:
so I’ll wait. Yeah, I was going to comment on this topic, so if I may then.

Dominique Hazaël Massieux:
Yeah, so just reacting on what Ryan was sharing about open source as a potential solution in that space, so first, absolutely, the more open source we get on these models, I think the better in terms of transparency, accountability, research improvements, and distributions, indeed, of the benefits of LLMs, but I think there is a critical aspect of LLMs that makes open source a bit of a mixed story. You get open source access to the code that is, or to the models that are generated by the training, but you don’t get access, you don’t get open source access to the training data, which are clearly where the gist of the value of these models are. So, really, it’s only half open in that sense, and given all the stakes there are in terms of selection, curation of the data, the fact that, I mean, for understandable reasons, those training data are not part of the opening makes it, I think, an imperfect answer to the question of openness, and there are discussions that I think need to be had about transparency around training data sources and the curation process that has accompanied these sources, but until we are having this conversation, I don’t think that open sourcing’s resulting model is a sufficient

Diogo Cortiz da Silva:
answer to this desire of openness. Thank you, Dominique. Emile, you?

Emily Bender:
Yeah, so on a slightly different topic, I want to say that all of these discussions become clearer if we stop using the phrase artificial intelligence or AI, because it’s not well-defined. We should talk in terms of automation and then talk about what’s being automated, and as we talk about language models in particular, it is, I think, unhelpful to conflate things like the use of language models as a component in automatic transcription or automatic translation systems, and their use to generate synthetic media. Those are different tasks. They do happen to rely on the same trained models, but they’re being used very differently, and so from a governance perspective, I think it’s important to keep it straight. While I’m talking, I want to call out the fact that the no-language-left-behind model for MEDA is a very colonialist project. I believe that languages belong to their communities, and that means communities should have control over what happens to data in their language. They should have control over what kind of technology is built, and if there’s profit to be made from building that technology, it should be fed back into those communities. I think this is an extremely important point for people from the Global South. It is not right for multinational corporations in the Global North to be profiting off of language technology from Global South communities. Thank you. Thank you, Emily. Rafael, do you want

Rafael Evangelista:
to comment something? Just to add to the question made by Teo, I think that you said that states don’t have the conditions to compete with those. I think if they are really invested in creating something that can be used by the public, it can be like the word open source has been used here, and it’s really hard to define what it means, because it can be like it can use a license that is really free or can use a license that just, okay. But the point is, my point is, I think if the states recognize that the web is something that they should care for, and if these tools to produce content is something that should be really accessible and controlled by the states or the communities or the public, they can invest and not only train models, but have servers and have, because there’s a lot of costs, and I think it’s not really realistic to think of Global South companies trying to do that, but the states, or at least the bigger states of the Global South, we can think of the BRICS countries, etc.

Diogo Cortiz da Silva:
Okay, Wagner, you want to comment on this? Then I move to you.

Vagner Santana :
Yeah, I have quick comments around the idea of technology as being not good or bad. I think that that starts the discussion around a neutrality of technology, and I think that that connects a little bit of the discussion tried to bring on the context of creation and use, and how this is different, different people, different values, and that is not true, right? It’s not neutral, at least the lens that we apply to this discussion, and it’s interesting how we’re discussing about the content of web pages, and if we connect really simply with different contents, like a media or code, we need to express, or we have mechanisms for control, like creative commons, how to use, how to use, how to redistribute, and for a large language models, this was just take for granted and for gathering data, right? And when we compared and contrast with search engines, we discussed, we had a link back, we had ways of finding content, now it’s for generating content, and the content creators, we don’t have transparency on that, and how the stakeholders related to this very content that’s being created are being considered, right? So just wanted to discuss, and to the idea of languages, I totally agree with Professor Bender, and there’s the whole discussion on value alignment, who is aligning those models, right? If we’re talking about different languages, different communities, are they participating on these alignments, right? So, yeah, thanks. Thank you, Wagner. We have one comment here, please.

Audience:
Thank you very much. My name is Peter Puck, I’m the chairman of the World Summit Awards, and we have started in 2003 to look at and show in which way ICTs are used for creation of quality content, and over the last 20 years as part of the WSIS process, the World Summit Awards have created a library of about 12,500 examples of high quality or higher quality content projects, products, and initiatives, and about 1,600 winners on that level. I want to first start and congratulate the organizers of this session, because I think this has been one of the most substantial sessions of IGF this year, and I want to stress that very much. It has been exceptionally good, and I want to also make sure that you see that the value you give to the different kind of aspects, having somebody from Meta, having somebody from the World Wide Web Consortium, having different views, and also from academia and technology community, is really valuable. I want to stress a number of points and then come to a question. One of the points is I really appreciate Emily Bender’s point on looking at the colonialism with technology, especially when we look at the effects the platform intermediaries have had on the internet, and I want to just reiterate what I said in some other contexts here at the IGF. It is that we have actually, with the platform intermediaries, replaced through the internet and cannibalized the editorial intermediaries, and that is something really very, very key to the question also of the large language models which are creating new intermediary structures, and that I think is very important. The other thing is that I thought that Wagner’s insistence and bringing up this issue of the studying technology in the context of use and how people repurpose technology in multiple ways, I think that’s a very valuable, interesting, let’s say culturalist attitude towards the technology, but then the question is does he have actually examples of how large language models can actually do that and how they structure and so on, and I think there’s a lot of interesting aspects in this. My main point would, however, be on the issue of the question of we are looking at the web as a public information infrastructure, and that is something which is only, let’s say, part of the picture which is underlying the governance imperative to the internet. I would think that the governance imperative and the goals and aspects should go towards a public knowledge infrastructure, and that relates very much to the question of how to finance it, and when we come to this model of the journalism, the model of the journalism is actually a model of creation of, let’s say, having these two markets of having advertising and subscription, and now we need to go into looking at what are the economics actually of this new public knowledge infrastructure, and one of the criticisms which I have of IGF conversations and sessions is that the economic side is, let’s say, very much or largely ignored, and I want to thank very much here, Raphael, for bringing up this issue of the economics of content creation and how we do this, and I would be happy to engage at other fora on the issue of how to tax this and how to work this. I think Deutsche Welle is a very good example of how one can say that there is somebody who is really moving into the multimedia space in a very interesting way, combining public broadcasting model together with a creation of many different kind of knowledges, but my question would be in which way can we continue within the IGF this kind of conversation regarding, for instance, creating new economic revenue streams for quality content as part of a governance imperative for the internet. I hope that this has been a very clear question. Thank you very much for giving me this space. Thank you for your comments and your question, and Raphael, do you want to start?

Rafael Evangelista:
Yeah, thank you for your comments, and really insightful, and I think that we have to recognize that the internet doesn’t live by itself in a separate realm or something like that, it’s like we live in a capitalistic society, and this just drives the companies, and they can say they can have ethical worries and guidelines, but we know that at the end of the day, the thing that is most important is to please the shareholders, etc., and I think we have to look at the internet and the web with the lens of what are the economic incentives that are playing for content or for the development of technology, how this drives, so we have to, I think it’s important for, I think IGF can be part of that, to build new institutions or re-institutionalize the creation of culture, of knowledge, etc., like regain the belief in institutions that can socially discuss some guidelines for this kind of production, and to put much of our resources on this kind of institutions. Thank you. Any other speaker want to comment on this? Because we are running out of time, so we

Diogo Cortiz da Silva:
do not have time for more questions, so I would like to thank you, all the speakers and the audience, to join us today. We are in the beginning of a new era, and we are raising new questions, and I’m sure next IGF you will be here again discussing maybe the same topics, but with more information, and of course asking different new questions. So thank you all, and the session is closed. Thank you.

Audience

Speech speed

153 words per minute

Speech length

1004 words

Speech time

395 secs

Diogo Cortiz da Silva

Speech speed

134 words per minute

Speech length

1149 words

Speech time

516 secs

Dominique Hazaël Massieux

Speech speed

133 words per minute

Speech length

2253 words

Speech time

1016 secs

Emily Bender

Speech speed

178 words per minute

Speech length

1587 words

Speech time

536 secs

Rafael Evangelista

Speech speed

124 words per minute

Speech length

1690 words

Speech time

818 secs

Ryan Budish

Speech speed

162 words per minute

Speech length

2359 words

Speech time

876 secs

Vagner Santana

Speech speed

160 words per minute

Speech length

1878 words

Speech time

703 secs

Yuki Arase

Speech speed

146 words per minute

Speech length

735 words

Speech time

303 secs

Multilingual Internet: a Key Catalyst for Access & Inclusion | IGF 2023 Town Hall #75

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis consists of multiple arguments and stances on different topics. One argument is presented by Elisa Hever from the Dutch government, who raises concerns about the lack of significant progress in International Domain Names (IDNs). Despite the constant reiteration of their importance, Elisa questions why there has been little development in this area. It is mentioned that resolutions pertaining to IDNs have already been introduced by the International Telecommunication Union (ITU) for quite some time. Elisa suggests that governments and the business community should play a more active role in driving progress in the field of IDNs.

Another topic discussed is the role of language-based data in Artificial Intelligence (AI). It is asserted that AI heavily relies on substantial language-based data to learn and function effectively. This reliance on language-based data makes it difficult for AI to be applied to lesser used or minority languages if the necessary data is unavailable. The argument suggests that as AI requires a large amount of language-based data, its growth and application may be limited for lesser used languages.

One observation made is the lack of support for indigenous languages in global digital platforms. It is highlighted that Indonesia has over 700 indigenous languages, with over 30 languages using non-Latin scripts. The speaker’s attempts to register an International Domain Name (IDN) with ICANN for the Japanese and Balinese languages in Indonesia were denied based on specific requirements not being met. This lack of support for indigenous languages raises questions about the inclusivity and support provided by ICANN. It is further critiqued that ICANN’s denial of the IDN application was based on the languages not being used as official communication or administrative languages. The language requirements for IDNs are seen as a ‘chicken and egg’ problem, where support for the languages is limited due to their lack of recognition, and their lack of recognition is partly attributed to the lack of support.

Efforts are being made to address the need for digital tools to support indigenous languages and cultures. Collaboration with ICANN and other entities is being sought to develop a label generation role for these languages. By providing access to International Domain Names, it is believed that indigenous communities would be able to engage with digital platforms and enhance their cultural presence. However, further details or evidence about ongoing efforts in this area are not mentioned.

Another point discussed is how to encourage the private sector to prioritize language inclusivity when developing technology. Although an argument is provided, no further details or evidence are given to support this point. It remains unclear how or why the private sector should prioritize language inclusivity in technology development.

Lastly, the analysis highlights the challenges faced by the deaf and hard of hearing community in relation to auto-captioning services. It is argued that the community faces censorship when relying on auto-captioning services. An example is given of Lydia Best, who calls for uncensored auto-captioning services. The argument suggests that the deaf and hard of hearing community should be provided with uncensored auto-captioning services to ensure equal access to information.

In conclusion, the analysis presents various arguments and stances on different topics. It raises important questions about the progress and support in the field of IDNs, the limitations of AI in relation to language-based data, the lack of inclusivity for indigenous languages in global digital platforms and ICANN’s approach, the need for digital tools to support indigenous languages and cultures, the encouragement of language inclusivity in technology development by the private sector, and the challenges faced by the deaf and hard of hearing community with auto-captioning services. However, some arguments lack supporting evidence, and further details are required to fully understand the ongoing efforts and potential solutions in these areas.

Edmon Chung

The discussion centers around the importance of establishing a fully multilingual internet to foster digital inclusion and promote language justice. Presently, while there are over 6,500 languages spoken globally, approximately 60% of internet content is in English. This poses a challenge for the next billion internet users who do not have English as their first language. Therefore, it is crucial to develop a more inclusive internet that caters to the linguistic diversity of its users.

A fundamental aspect of achieving language justice is ensuring universal acceptance of internationalized domain names (IDNs) and email addresses. However, several obstacles must be addressed to make this a reality. Currently, only 10% of top-level domains are internationalized out of a total of 1,500 domains. Furthermore, out of the 350 million domain names registered worldwide, only 1% are internationalized. This emphasizes the need to increase adoption and usage of IDNs.

Technical and policy requirements also pose challenges to achieving universal acceptance. It is necessary to have the appropriate technical infrastructure in place to support IDNs and email addresses in different languages. Additionally, policy interventions are needed to ensure that stakeholders recognize and prioritize the importance of language justice.

Demand and support are also significant factors. Suppliers providing IDN registrations often do not perceive sufficient demand, necessitating government intervention to overcome this issue. Governments can play a vital role by integrating universal acceptance readiness into their tender processes and system upgrades. By making it a requirement, they can incentivize the adoption and support of IDNs and email addresses.

Education is another crucial factor in promoting the use of IDNs and email addresses. Currently, these are often treated as mere add-ons rather than being incorporated into the basic protocol. One suggestion is to teach these elements as part of Networking 101, which would help normalize their use and promote greater inclusivity.

Furthermore, the dominance of English on the internet has implications for artificial intelligence (AI). Currently, 57% of web content is in English, resulting in AI systems predominantly being English-based. This limits the capabilities and inclusivity of AI technologies. By promoting the use of IDNs and email addresses, content and services in different languages can be encouraged, making AI more inclusive and diverse.

The foundational infrastructure of the internet is essential for the development of multilingual content. The Domain Name System (DNS), created in 1983, serves as the backbone for the internet infrastructure. Without a well-developed DNS, it becomes challenging to create and access multilingual content effectively.

The International Corporation for Assigned Names and Numbers (ICANN) has initiated a universal acceptance program to address these issues. This program aims to bring about significant changes and upgrade ICANN’s internal systems to be universal acceptance ready. However, implementing universal acceptance faces challenges due to technical and policy requirements.

Additionally, ICANN is addressing the issue of indigenous languages through ongoing policy development. It is important to revisit the label generation process in light of the international decade of indigenous language. This demonstrates a commitment to inclusivity and the recognition of the importance of preserving and promoting indigenous languages online.

Edmon Chung, an advocate in this field, believes that relying solely on market forces will not be sufficient to support indigenous languages and universal acceptance, as market failure may occur. Therefore, policy intervention is necessary. Intervention could involve providing funds or enforcing requirements in tenders to motivate stakeholders to prioritize language justice and universal acceptance.

In conclusion, establishing a fully multilingual internet is crucial for achieving digital inclusion, language justice, and sustainable development. Universal acceptance of internationalized domain names and email addresses is a key step in this process. However, challenges related to technology, policy, and demand need to be overcome. Education, government support, and enhanced infrastructure are also necessary to promote inclusivity and diversity in internet content and services. The ongoing efforts by ICANN and the recognition of indigenous languages demonstrate a commitment to addressing these issues. Ultimately, policy intervention is crucial to ensure that universal acceptance becomes a priority and facilitates an internet that caters to the linguistic diversity of its users.

Mark Durdin

The analysis highlights several key issues related to internationalised domain names and universal acceptance adoption. It points out that technical issues persist in parsing certain email addresses, as demonstrated by Gmail’s struggle to recognise a Thai email address. This exemplifies the difficulties that users face with internationalised domain names.

Another important point raised is the need to support more languages, especially indigenous ones, in order to improve universal acceptance. It is noted that the Khmer label generation rules currently do not support most indigenous languages of Cambodia. However, there is hope as the software developed by Mark’s team has been able to correct most of the mis-encodings in the Khmer script.

Furthermore, the analysis highlights the crucial role of wide adoption of label generation rules for the uptake of internationalised domain names. It mentions how Mark registered a spoofed KhmerScript.com domain as proof that the rules aren’t widely adopted. It is also mentioned that many Asian scripts have multiple ways of encoding visually identical words, which creates potential for spoofing.

The computing industry is called upon to support recommendations around universal acceptance, and the analysis acknowledges those who have already contributed to this cause. Furthermore, it encourages the rest of the computing industry to start supporting these recommendations.

Engaging with open-source communities and major industry vendors is seen as a key step towards enhancing the accessibility and usage of less dominant languages in digital spaces. The launch of the Digitally Disadvantaged Languages Subcommittee by the Unico Consortium, along with the international decade of indigenous languages, provides an opportunity to raise awareness and collaborate with these communities.

It is also highlighted that the promotion of universal acceptance and internationalised domain names in an accessible format is crucial to raise awareness among software developers. The analysis notes that software developers often perceive ICANN as low-level, resulting in universal acceptance and internationalised domain names being overlooked. Accessible information on these topics is crucial to clarify common questions and better inform developers.

Finally, the analysis suggests evaluating the support level of universal acceptance in prominent internet powerhouses and end-user software. This can help identify gaps in terms of universal acceptance support and facilitate improvements to open-source communities, even without waiting for commercial priorities.

Overall, the analysis emphasises the importance of addressing the technical issues related to internationalised domain names and universal acceptance adoption. It calls for support from the computing industry, engagement with open-source communities, promotion and awareness campaigns, and evaluation of universal acceptance support in prominent platforms. By addressing these issues and implementing the suggested recommendations, it is believed that universal acceptance can be improved, leading to greater inclusivity and accessibility in digital spaces.

Nodumo Dhlamini

The analysis reveals that internationalised domain names (IDNs) have the potential to address several significant issues in Africa, including accessibility, inclusivity, and language preservation. IDNs facilitate accessibility in native language scripts, ensuring that individuals can access the internet in their preferred languages. This breaks the language barrier and allows more people to benefit from the opportunities provided by the internet.

Furthermore, IDNs contribute to inclusivity by enabling the creation and dissemination of local content in various languages. This allows communities to express themselves in their native languages and ensures that their voices are heard online. Additionally, IDNs support cultural and linguistic preservation, helping to safeguard Africa’s rich linguistic heritage.

However, the adoption of IDNs requires certain prerequisites. It is crucial to raise awareness about IDNs among internet users and promote technical improvements to support their implementation. Moreover, user education is essential to ensure the proper use of IDNs and address security risks. This includes educating users about the potential dangers of phishing and domain spoofing and providing them with the necessary tools and knowledge to protect themselves. Robust security measures are also necessary to safeguard users’ data and privacy.

To effectively reach underserved communities, a thoughtful and inclusive approach is crucial. This involves providing digital literacy training to ensure that individuals have the necessary skills to utilise IDNs and actively participate in the digital world. Additionally, efforts should be made to make internet access more affordable and accessible to these communities. Subsidising internet access and exploring offline engagement strategies, such as workshops and campaigns, can play a pivotal role in bridging the digital divide.

Monitoring the impact of IDNs adoption is essential for success. Implementing a feedback mechanism and impact assessment strategy will provide valuable insights into the challenges faced and the progress made. This information can guide future improvements and ensure that IDNs effectively address the needs of African communities.

In conclusion, IDNs can break the language barrier, promote inclusivity, and contribute to language preservation in Africa. However, their successful adoption requires raising awareness, technical advancements, user education, robust security measures, and an inclusive approach that includes digital literacy training and subsidised internet access. Monitoring the impact and gathering feedback will help refine and improve the implementation of IDNs in Africa.

Theresa Swinehart

ICANN, the organisation dedicated to Internationalised Domain Names (IDNs) and Universal Acceptance, has implemented a comprehensive strategy to support the adoption and use of IDNs and promote Universal Acceptance. This strategy involves raising awareness and providing training to various stakeholders, including domain name registries, registrars, developers, and users. Specific teams have been established within ICANN to focus on these efforts and ensure the widespread understanding and acceptance of IDNs.

In addition to its own efforts, ICANN collaborates with other relevant institutions such as the Universal Acceptance Steering Group and UNESCO. These collaborations aim to leverage the expertise and resources of these organizations to further promote IDNs and Universal Acceptance. ICANN recognises that achieving universal acceptance requires a collective effort and believes that partnerships and collaboration are key to realising this goal.

ICANN is also actively engaged in policy work related to domain names. Through its policy development processes, ICANN ensures that the rules and regulations governing domain names are continuously reviewed and updated to align with changing technology and user needs. By actively participating in policy discussions and consultations, ICANN advocates for the interests of all stakeholders and strives to create an inclusive and accessible domain name system.

In its commitment to fostering innovation and inclusivity, ICANN plans to open up another round for the introduction of new top-level domains (TLDs). This initiative will provide an opportunity for all language groups and different regions to register domain names in their local scripts. By enabling the use of local scripts, ICANN aims to encourage linguistic diversity on the internet, allowing people to express their identity and culture through their online presence.

To ensure the success of IDNs and Universal Acceptance, ICANN seeks to raise awareness and generate demand. It acknowledges that successful implementation of Internet Protocol version 6 (IPv6) through government contracts can create awareness among various stakeholders, including users, businesses, and service providers. Additionally, ICANN recognises the significance of local community education in encouraging the generation of local content and raising awareness about the importance of inclusive online platforms.

ICANN also emphasises the need to link the digital world with the preservation of culture and languages at the national level. By recognising the value and importance of cultural heritage, as highlighted by UNESCO and other entities, ICANN acknowledges the need for safeguarding and promoting languages and cultural diversity in the digital age.

Furthermore, ICANN emphasises the importance of creating consumer awareness to generate demand. By engaging with end-users and providing information about the benefits and possibilities of IDNs, ICANN aims to create a conducive environment for the adoption and usage of IDNs.

In the specific context of the Javanese language, ICANN is actively working with the Javanese community to resolve categorisation issues related to Javanese script in Unicode. The team is collaborating with the Javanese community to develop the Javanese script as a recommended identifier within Unicode. ICANN is supportive of ongoing collaboration with the Javanese community, recognising the importance of inclusivity and their expertise in resolving this matter.

In conclusion, ICANN is dedicated to the work around Internationalised Domain Names and Universal Acceptance. Its strategy includes various initiatives such as raising awareness, providing training, collaborating with relevant institutions, and advocating for policies that support inclusive domain name practices. By opening up a new round for the introduction of new top-level domains, advocating for collaboration and partnerships, raising awareness and demand, preserving culture and languages, and supporting community collaboration, ICANN strives to create an inclusive and accessible digital landscape for all.

Marielza Oliveira

Multilingualism and universal access to the internet are crucial for achieving digital inclusion and reducing the global digital divide. Astonishingly, around 37% of the world’s population, equivalent to approximately 2.7 billion people, currently lack internet access. This staggering figure highlights the urgent need to address this issue and ensure equal opportunities for all to participate in the digital realm.

One of the main obstacles to achieving digital inclusion is the lack of linguistic diversity in cyberspace. This problem disproportionately affects indigenous and underserved communities, who face difficulties in accessing digital services due to the absence of their languages online. Recognizing this challenge, UNESCO and the Internet Corporation for Assigned Names and Numbers (ICANN) are working collaboratively to promote multilingualism in cyberspace and develop a universal acceptance tool. This tool aims to facilitate access to online resources for individuals, irrespective of their native language, thereby promoting universal inclusion.

The impact of linguistic diversity in cyberspace cannot be overstated. Addressing the lack of multilingualism is not only vital for digital inclusion but also holds the potential for significant societal progress. The internet is globally recognized as a powerful tool for positive transformation. However, for the 37% of the world’s population who remain disconnected, this potential remains untapped.

The overall sentiment towards promoting multilingualism and universal access to the internet is positive. It is crucial to prioritize indigenous and underserved communities in the provision of digital services. By bridging the linguistic gap and ensuring equal internet access for all, we can make substantial strides towards achieving SDG 9 (Industry, Innovation, and Infrastructure) and SDG 10 (Reduced Inequalities).

In conclusion, promoting multilingualism and universal access to the internet is essential for achieving digital inclusion and reducing the digital divide. The efforts of organizations such as UNESCO and ICANN to address the lack of linguistic diversity in cyberspace are commendable. By developing a universal acceptance tool and focusing on underserved communities, we can unlock the vast potential of the internet for positive transformation and uplift the billions of individuals currently left behind.

Moderator

The speakers in the discussion emphasized the importance of a fully multilingual internet for digital inclusion and language justice. They highlighted the fact that there are over 6,500 languages worldwide, with over 2,000 in Asia alone, and yet almost 60% of the internet’s content is still in English. This creates a significant language barrier for the next billion internet users who do not have English as their first language. Therefore, a fully multilingual internet is seen as the foundation for achieving digital inclusion and language justice.

To ensure a multilingual internet, the speakers argued for the need for internationalised domain names and email addresses. They mentioned that domain names and email addresses are the starting points for people utilising the internet, and without support for different languages, the multilingual internet is incomplete. Currently, only 10% of top-level domains on the internet use languages other than the alphanumeric A to Z. Therefore, internationalised domain names and email addresses are seen as essential for achieving language justice.

The implementation of a multilingual internet requires both policy intervention and a multi-stakeholder approach. The speakers highlighted that governments should demand in their tenders for IT systems that the systems be IDN email-ready. Additionally, schools and universities should include internationalised domains and email addresses as basic protocols. This implies that policy intervention is necessary to drive the adoption of multilingual internet practices.

The speakers also recognised the potential of the internet as a tool for positive transformation and societal progress. They suggested that advocating for multilingualism and universal inclusion is necessary to harness this potential and ensure that no communities are left behind. However, they also pointed out that a significant portion of the world’s population, estimated to be around 2.7 billion people, are still not taking advantage of the internet’s transformative power. This creates a barrier between these communities and the vast pool of digital knowledge available.

UNESCO and the Internet Corporation for Assigned Names and Numbers (ICANN) were mentioned as organisations working together to enhance digital inclusion and multilingualism in cyberspace. Their partnership aims to bridge the language gap and ensure that internet access and content is available in multiple languages.

The discussion also explored the computing industry’s role in supporting universal acceptance. It was suggested that the industry should support the recommendations made for universal acceptance, which would contribute to greater inclusivity in terms of language diversity on the internet.

In addition to language barriers, AI bias was identified as another challenge. Due to the dominance of English in online content, AI systems tend to favour English and may have biases against other languages. This highlights the need to address language inclusivity concerns in AI development.

The open-source community was recognised as a potential ally in improving language inclusivity. Collaboration with the open-source community could raise awareness and drive the adoption of multilingual practices in software development.

The involvement of the Unico Consortium in engaging industry partners was seen as an important step in addressing the issue of digitally disadvantaged languages. The consortium, made up of major industry vendors, is working to promote language inclusivity and support underserved communities.

The discussion also touched on the low level of awareness among software developers regarding universal acceptance and internationalised domain names. It was noted that accessible documentation and guidelines are lacking in this area, and there is a need for clear and accessible documentation to engage the open-source community.

The speakers highlighted the importance of developing local digital content and awareness in the digital world. Governments and businesses were urged to take initiatives and promote local digital content, as the lack of local language content can hinder engagement with the online world. Creating consumer awareness and linking the preservation of culture and languages to the digital world were seen as ways to generate demand for local content.

Overall, the speakers urged a movement towards a multilingual internet that promotes digital inclusion and language justice. They emphasised the need for policy intervention, a multi-stakeholder approach, and collaboration with various stakeholders, including governments, industry, and the open-source community, to achieve this vision. By addressing language barriers, promoting universal acceptance, and developing local digital content, the internet can become a truly inclusive and transformative tool for all.

Session transcript

Moderator:
So, my name is Jarong, I work for ICANN, I’m the head of the Asia-Pacific Office, I’m based in Singapore, and today’s session is on multilingual internet, a key catalyst for access and inclusion. So, with me, we have two on-site speakers and three speakers speaking remotely, so I’ll introduce them. So, on my right is Mr Edmund Chung, he’s our board member from ICANN and also CEO of DotAsia. And on my left is Ms Teresa Swinehart, our Senior Vice President for Global Domains and Strategy from ICANN. And joining remotely, we have Dr Marielle Oliveira, she’s the Director for the Division for Digital Inclusion Policies and Transformation Communications and Information Sector from UNESCO. And also Ms Nodumo Vlamini, the Director for ICT Services and Knowledge Management, the Association of African Universities. And also, last but not least, Mr Mark Durden, he’s the Key Project Manager from SIL International. Thank you so much for taking the time to join me, my esteemed speakers, and also our participants for today. Now, let’s dive in, we only have an hour, with a pretty interesting topic. So, can I invite Edmund to first help us frame the issue of multilingual internet. Edmund, can you share about language, do you think it is a barrier to access? And at a high level, talk about what are some gaps or problems, particularly pertaining to access, such as internationalised domain names, which is using domain names in local scripts, as well as the related issues of adopting internationalised domain names. And a broader question to this, perhaps, is at a high level, do you think this is a policy problem, a technical problem, or a socio-economic problem, or all of them? Over to you, please, Edmund.

Edmon Chung:
Thank you, Jarung. Anything we discuss here is all of them, right, at IGF. But I guess, as Jarung mentioned, to ask me to start by framing the question, it’s really, I think the title today is really important. And I will start by saying that a fully multilingual internet really is the foundation towards digital inclusion. And that’s, you know, I think that’s very important. And if you look at the world, there are over 6,500 languages, you know, around the world. Many of them, over 2,000 of them actually here in Asia, where we are. And yet, today, on the web, almost 60% of the internet’s content is still in English. So really, a multilingual internet, I believe, is essential for digital inclusion, because the next billion who’s coming online do not have English as their first language. And that’s, you know, that’s the issue we have. That’s the topic we’re talking about. And the kind of the doorway to access information, and also one of the key kind of starting point for people utilizing the internet, is our domain names and email addresses. So really, having internationalized domain names and internationalized email addresses is a foundation for development of content and services in local languages. So in that, you know, taken in that context and in the digital inclusion context, we really see that, we can really see that universal acceptance of internationalized domain names and email addresses is really about language justice. It’s about marginalized language communities impacted by language barriers. And here we’re talking about people, you know, accessing the internet, and also as a beginning of accessing information on the internet in their local language. Again, you know, domain names and email addresses is maybe a very small part of it, but without which the multilingual internet is not complete. And speaking of language justice, I think the next speaker will talk more about it, but I think, put in a perspective of the UN, we are in the international decade of indigenous languages. And that’s an important way to frame this question as well. And I’m seeing in the audience, my friend Yudo there, that talked to me about having the Java, Javanese language to be expressed on the internet and in internationalized domain names. These are some of the things, because I think this is what we’re talking about. And the ICANN community, I think we have been working very hard for many years, working through these technical standards, the linguistic and script policies to ensure secure and stable introduction of internationalized domain names on the DNS. But can we do more? The answer is, of course, yes. We do need to do more, and yes, we can do more. Today, there are about 1,500 top-level domains on the internet, and only 10% is actually using other languages other than the alphanumeric A to Z 0 to 9. Actually, no 0 to 9, sorry about that. Top-level domains don’t include numbers. But the point is, only 10% are internationalized domain names. And out of the 350 million domain names registered worldwide, only 1%, about 1%, is internationalized domain names. So registries and registrars do need to work harder to ensure their systems are fully universal acceptance and IDN-ready. And this is also even for non-IDN-related, so even for registries and registrars not offering internationalized domain name services, it’s important for their systems to be ready for internationalized domain names and email addresses, because your registrant, even if they’re registering an English domain name, could be using an internationalized email address, right? I mean, that’s what we are talking about in terms of universal acceptance. So it is a technical implementation issue, back to Jaron’s question, but it will require policy intervention. I believe governments need to demand in their tenders for IT systems, for example, that systems be IDN email-ready. Schools and universities should include internationalized domain names and email addresses as basic protocols for networking 101, for example. And we need other stakeholders to join in the work, and that’s why we need to talk about it here at the IGF, because we need a movement, and this movement for language justice really starts here in the internet governance community. So I guess I’ll, you know, and finally I wanted to just note that, of course, just the internationalized domain names and email addresses themselves does not solve the multilingual internet issue, of course not. I mean, but it is a foundational component, because without which we cannot realize a fully multilingual internet, and, you know, this will require a multi-stakeholder approach to address the different issues that is beyond ICANN and the immediate community’s reach in many ways. And the key aspect, I believe, is to really get the end users and the community to realize that this is not just a matter of convenience or cool domain names and email addresses, but it is about realizing a sustainable multilingual internet that cares about language justice.

Moderator:
Thank you, Edmund. I really like the key word about language justice, and here at the IGF, I remember there was a side event I attended on the soft launch of a network for social justice and digital resilience. And I think, you know, just tying in one of those themes, really having a multilingual internet being about language justice is also a form of social justice. Now, let’s move on quickly to our next speaker, Dr. Marielsa. So, Marielsa is from UNESCO, and can you share about the background of multilingualism at the UN, and where are we today?

Marielza Oliveira:
Are there any recent multilingualism initiatives by the UN or UNESCO? Over to you, please. Thank you very much. Hello, everyone. Particularly our dear ICANN colleagues, with whom the UNESCO team has been working to advance multilingualism in cyberspace. I’m really happy to join this session today, as this is a very important topic to me. And I apologize that I’ll have to leave soon to another commitment, but let me share some thoughts with you. First, I love that the previous speaker was talking about language justice, because this is really about realizing a human right to freedom of expression and access to information. So, you can’t really share your thoughts or seek information if you cannot do it in your own language. And since its invention, the internet has been acknowledged as a really powerful tool for societal progress, a source of information, a means to exchange products and services, but it has also been recognized to have the ability to empower individuals, particularly granting them, enabling and upholding the rights to access to information, expression, and so on, while simultaneously amplifying the voices of marginalized groups. However, we must accept the current reality is that an estimated 37% of the world population, or close to 2.7 billion people, are still not taking advantage of the internet’s transformative power. And this means that there is a barrier that separates a large part of humanity from the pool of knowledge in the form of digital resources. And as more and more services are going digital, as noted by the UN Secretary General’s recent report, we are faced with the pressing challenge of connecting the next one billion users to really benefit from the digital processes. And that means that we must up the ante in providing digital services to the indigenous and underserved communities that have struggled for a long time with limited access and representation in the digital sphere. And the lack of multilingualism in cyberspace is an essential aspect of achieving, part of the big barrier for achieving digital inclusion. And in 2003, the UNESCO General Conference adopted the recommendation concerning the promotion and use of multilingualism and universal access to cyberspace. It is a landmark provision, and the recommendation provides a framework for the member states to adopt legislation and other measures that are conducive to the promotion of multilingualism in digital ecosystems. And this includes forging new partnerships, facilitating mechanisms for multilingual domain, names and associated tools, content and process. And UNESCO and ICANN have a longstanding partnership on this front, and even this session has been, you know, proposed on our collective understanding that the technology deployed still has to catch up to this progress to allow for digital inclusion in multilingual communities globally. It is our collective responsibility to ensure that not only new technologies have to be innovated, but the communities must also be brought closer to the ongoing digital transformation. And this can be done only if we address the glaring deficiency of linguistic diversity on the Internet. And so, as one of the co-leaders in the implementation of the International Decade of Indigenous Languages that goes from 2022 to 2032, UNESCO is playing a pivotal role in this, in championing the preservation, promotion, and revitalization of indigenous languages worldwide. There is a global action plan which highlights the importance of fostering favorable conditions for digital empowerment, freedom of expression, media development, access to information, and language technology. And this is where ICANN and UNESCO have been working together to bring synergy to their efforts. Collectively, we must really prioritize making Internet platforms and applications accessible to people with diverse linguistic abilities and thus ensure, really, universal participation and inclusion. And here, let me just say that the interplay between languages and universal acceptance is complex and multifaceted. While achieving universal acceptance of languages is unquestionably essential, like the previous speaker mentioned, a foundation, we must also focus on the creation of digital tools, products, and services tailored specifically to the needs of underserved communities and currently unserved communities. And it’s incumbent upon UNESCO and ICANN to create a tool for universal acceptance, one that encompasses the notion of universal inclusion and remedies this deficiency in Internet linguistic diversity. And so, I’d like to say to all our participants today, I urge all of us to maintain a real awareness of the Internet’s immense potential as a tool for positive transformation and for us to work together to unify, innovate, and advocate for multilingualism and universal inclusion. So, I hope that we all will be working together. So, thank you.

Moderator:
Thank you so much, Marielle. So, we’ve heard a couple of technical terms in some ways. So, internationalized domain names, which are domain names in different scripts beyond the English alphabet, and also the term universal acceptance, which is in software and applications to universally accept these domain names Now, let’s move on very quickly to Teresa. Teresa, can you share about ICANN’s work? Just now, Marielle just mentioned UNESCO and ICANN in close collaboration. But can you share more specifically about ICANN, like ICANN’s work in relation to internationalized domain names and universal acceptance?

Theresa Swinehart:
Please. Brilliant. Well, it’s great to be here and to be having this conversation. I think, as Edmund said, we need a movement and we need awareness. With the amount of languages spoken in all the different regions of the world that are not reflected in ASCII character sets or perhaps have a longer variation to the right of the dot, we need to afford the ability for those to be used and play a strong role in this and partner with others. A couple areas, how it’s anchored within the construct of the mission. The IDN work, or internationalized domain name and universal acceptance, is recognized both in the strategic focus and it’s included in our strategic plan. Our strategic plan, and the reason this is important, is developed with community input and really looking towards what the future is. It’s based on the analysis of trends and where the future goes. That’s compiled and then brought to the board and the board looks at what the final strategic plan is but also puts it out for awareness to the community. So it’s really an all-inclusive process as we do this. So the fact that IDNs and universal acceptance are anchored within the strategic plan for the 21 to 25 is an important factor and we anticipate it to be reflected likewise in the next iteration. Importantly, it’s also an element that’s reflected in our interim CEO’s goals very strongly. So again, important to show that we are taking this work that we’re doing seriously but also that this is really what the future is about and that’s important for ICANN’s mission and mandate in serving the public interest. Moving more specifically, with regards to internationalized domain names, there’s a couple areas. More on the operational technical side, the working on tables to make sure that if it’s registered it’s following a certain table and making sure that those tables are compiled. If we look at the policy side, there’s work within the generic name supporting organization with regards to policies for internationalized domain names on the right of the dot and within the country code supporting organization there is further work on what was a policy to enable initial country code top-level domains to be accepted in IDNs. There’s further work on that to ensure that there’s policies for the future around that. So the reason the generic space is different is because it applies to the generic top-level domains rather than the country top-level domains. So there’s quite a bit of policy work there. We also have a team that very specifically is going out to the community and working with them on both the table work, how to create awareness, how to look at this from a technical level, how to ensure trainings around that. So we partner with different groups in different regions around the world. If we look at universal acceptance, we work very well with what’s referred to the UASG, so the Universal Acceptance Steering Group, which is a group that has been active in this space, but also working with UNESCO, working with other organizations around the importance of the ability for platforms and for the ability of email or web addresses to be able to resolve so that you know that they actually go there. And again, we’re working on the technical side of trainings, participating in partnership with others to help awareness about both the issue but also problem-solve on the technical level. On universal acceptance, in addition, last year was the first Universal Acceptance Day in which more than 50 events across 40 countries brought awareness around the importance of universal acceptance. That was attended by about 9,500 people. But if you look at the ripple effect of that in the shared experience, plus the awareness in the media around it, it was a start. We’re looking forward to holding the next Universal Acceptance Day in 2024. And with that, looking forward to partnering with other organizations and platform providers in order to create a movement and create awareness around things. And we don’t do any of this alone. We can’t, we have a limited remit in this, but we are one of the elements to that and to partner with others. And finally, if I can just touch on the next round of the introduction of new top-level domains, so when we open up that opportunity for the introduction of those, looking very much at ensuring that those who wish to register a name in internationalized domain name character sets, so something that is not ASCII, or something that might require ensuring universal acceptance, but in ASCII character sets, we’re doing quite a bit of awareness-building around that in cooperation with the fact that we will be opening another round. And that round could afford the opportunity for all language groups or different regions of the world that currently do not have a presence, so to speak, but would like one, to have that opportunity to do it, but also that technically, it will then also resolve in the system. So those are just a few examples of some key work areas, and look forward to the conversation.

Moderator:
Thank you, Theresa. So let’s hold on to the piece about the UA Day, because I think that it ties in with what, so far, all the speakers are saying, a movement, like a call to action. I feel that’s something we can really work on at this session. So let’s move on to the other speakers. First, let’s go to Mark. So Mark, from your background, you’re more of a technical person, so I’ll ask a more technical question. So what technical issues, in your experience, are you seeing in relation to internationalized domain names and universal acceptance adoption? Mark, over to you, please.

Mark Durdin:
Thank you, Zhirong, for your question, and thank you for the invitation to join this panel. I’m quite excited by the goals of universal acceptance, not least because my organization, SIL International, works primarily with indigenous and ethnic minority communities, around 1,000 different communities around the world. And so this is very important for the communities that we’re working with. It’s essential to their engagement with the online world, with the rest of the world. So I’d like to share a story, first of all, about my experience just this year with internationalized email addresses. So a little earlier this year, I downloaded the 2022 Universal Acceptance Readiness Report in PDF file. And in that report, I read that only 10% of email systems currently meet the needs of universal acceptance. So out of curiosity, I clicked on the Thai email address example in the PDF just to try it out. And if you want to try it, it’s on page 11 of that report. And Gmail popped up its compose window, but it had a completely garbled, like mojibake email address in its to field, which meant that that just wasn’t going to work. So I wasn’t going to give up that quickly. I right clicked on the email address in the PDF and copied it to my clipboard and paste it into Gmail. And no, it still didn’t work. All I got was a string of dots with an at sign in the middle. So finally, I actually selected the text of the email address in the PDF file with my mouse. And I copied that to the clipboard and I was able to paste it into Gmail and try it out. But that still didn’t work because it turns out that the mail host for my personal domain does not yet support the SMTP UTF-8 mail protocol. And so the test email bounced. Now I did eventually get it working by sending it from my Gmail address, but I’m supposed to be some sort of expert in the area. And if I can’t get it working without trying that hard, I think our community of users worldwide are going to have a very poor experience. So we really do have a long way to go. And in some areas, the computer industry moves very quickly, but some of these things seem to take a very long time. And I really don’t think that the community can afford to wait for us. So I’ll just switch tracks just a little bit now and talk about two specific areas that are close to my heart and how they overlap with universal acceptance. And that is online security and input methods. So all the way back in 2016, I came across the label generation rules for the Khmer language of Cambodia. And I was really impressed at the level of detail and effort that had gone into trying to prepare these rules to make safe domain names in the Khmer script. Now I’ve been working with Khmer input methods, and there was obviously so much detail about spoofing attacks that would be possible in Khmer that would have been covered off in those label generation rules. And I’m going to use Khmer for most of the rest of my examples because that’s where I have the most experience. But a lot of these same principles apply to many writing systems across Asia and the world. Now, even for the Khmer script, it’s more than just the Khmer language. This is the international decade of indigenous languages, as Mariel’s pointed out in her chat comment. And we need to be thinking about all the language communities that use a particular script. Now, for Latin script, it’s many, many languages, and it’s fairly well known. But even the Khmer script is used by at least eight different languages in Cambodia today. So as an industry, we need to put much more effort towards supporting those indigenous languages all around the world. And for example, and I’m not criticizing the label generation rules group here, but the Khmer label generation rules do not yet support most of the indigenous languages of Cambodia because the rules that they’ve defined are too constrained to support the ways that those languages are working with the script. It’s a huge space with fuzzy boundaries, so we still need more dedicated effort to expand and support those languages. It was also personally disappointing to me when I found out that label generation rules have not been adopted by many major top-level domain industries, including mostvisibly.com. So to prove this to myself, I registered a spoofed KhmerScript.com domain and tested it out. And yeah, I still got that domain in my collection of useless domains. So wide adoption of those label generation rules is so important for the uptake of internationalized domain names. Many Asian scripts are vastly more complex to type and encode than Latin script, and there are myriad opportunities for spoofing attacks. So I think many of us have seen those alternate script examples like Apple.com written with Cyrillic letters. Mixing scripts is one thing, but in many Asian scripts, we don’t even need to mix the scripts to see these problems. For example, and again in the Khmer script, we’ve identified example words that can be encoded in up to 15 different ways in Unicode, but they look visually identical on all devices. And what’s worse, we found real examples where Khmer users had typed those example words into webpages in every single one of those wrong encodings. And sometimes, the incorrect encodings had more matches than the correct encodings. Now, smart input methods can help with this. So for Khmer, my team have introduced a KhmerEncore keyboard that’s powered by the software that we write that automatically corrects the vast majority of those mis-encodings. This allows us to use the software to correct the mis-encodings. This helps not just with preventing spoofing, but with any text task, searching, sorting, and so on. I’d like to say a big thank you to all of those who have contributed so far to universal acceptance and for all the progress we have seen, and particularly the IDNs that are starting to take root in many places. But I’d also like to encourage the rest of the computing industry to just slow down a little bit and listen to the universal acceptance steering group and start supporting the recommendations that have already been made.

Moderator:
Thank you, Mark. So there are some technical areas which are important, but I think a key takeaway also is the key term of wide adoption. So, you know, for the industry, I think Mark made a very good point. You know, we’re sometimes chasing the next big thing, and then sometimes forgetting the people we’re leaving behind. And I think this is a good segue to go to Nodumo, because coming from an underserved region like Africa, do you think internationalized domain names and its adoption would help break the language barrier for access and or help to preserve languages?

Nodumo Dhlamini:
Nodumo, over to you. Thank you. Yes, thank you very much. Thank you for having me on this panel. Yes, Africa is an underserved region for many reasons that include unequal access to technology, unequal access to the Internet and information resources. Yes, internationalized domain names can assist to break the language barrier for Internet access. For the reasons that the various speakers have also alluded to, mostly because they facilitate accessibility in native language scripts, inclusivity, cultural and linguistic preservation, and also because they could facilitate the creation and dissemination of local content in our various languages. And I think that users would also find it more fulfilling to go to the Internet and use their own languages. I also acknowledge the challenges that have been alluded to by the speakers before me concerning IDNs, especially technical compatibility and that our systems are not yet all ready. The security concerns that have been mentioned by the previous speaker. And I think user education is also a major issue in terms of the proper use of IDNs and also addressing the security risks associated with the international domain names. Of course, there’s a need to create awareness so that we generate the needed demand for uptake of the IDNs and universal acceptance. And I think awareness raising will be a very important aspect of what we need to prioritize so that more people can understand what is possible. And we also need to look at this, not just doing awareness raising, but go a little further to do it as a package or as a broader strategy that will also address technical improvements and also address the issue of tools, user-friendly tools that can be adopted to encourage the adoption of internationalized domain names. And we need to work together. We need to work with industry. We need to collaborate as Internet stakeholders so that we can really generate awareness and also adoption. And the issue of local content is also very pivotal and important for the success of IDNs because people are likely to adopt them if there’s content available in their language. And as the previous speaker mentioned, implementing robust security measures to protect users from phishing and domain speaking is also extremely important. Concerning how we can effectively reach the grassroots and specific language communities which are not yet online, I think this requires a thoughtful and inclusive approach. We need to understand the local context and the needs. We must involve the communities. We must create training materials for language localization in the local languages and also consider launching offline engagements, workshops, and campaigns within the communities. And digital literacy training, for example, in Africa is also very fundamental. If communities are going to participate and adopt these IDNs, they need to be digitally literate. So we must really develop these partnerships and collaborations very carefully and address issues of access to internet, access to devices in underserved communities, provide actually subsidized internet access, and encourage communities to participate by sharing their stories and also creating incentives for communities who are actually involved in getting others online and also involved in ensuring the adoption of the internationalized domain names. And lastly, I think we need to have very good feedback mechanisms, a impact assessment strategy so that we can understand the challenges, the concerns, and also how well we are doing in this endeavor towards the adoption of internationalized domain names. Thank you very much.

Moderator:
Thank you, Nodumo. Some very insightful points, and I think it ties in a lot to what I think this session will be going towards, which is thinking about a movement or an action plan amongst us, and how do we start? And I think Nodumo mentioned a few things, like thinking about addressing the issue holistically, but at the same time, being able to reach the grassroots. Each of us then can play a part because we are part of a community locally. So I thought to seed those couple of thoughts first, and let’s go into the next segment for this session, which is, do we have any questions for the speakers or any comments or thoughts about what we’ve discussed so far? Please, over to the mic. Hi, good afternoon.

Audience:
My name is Elisa Hever, and I’m from the Dutch government, from the Ministry of Economic Affairs, and a MAG member. So universal acceptance, or IDNs, is not something new. It’s been there for quite some time. And, well, I’ve been in this field for three years, so slightly longer than that we know about IDNs. And I’ve been hearing, actually, from basically day one, IDNs is an important topic. It’s a very important topic. It’s important. We need this. It’s important. It’s highly important. We really need this. And it doesn’t seem that there’s really, a big change happening up until now, at least. I do feel that we’re getting somewhere, even though there have been resolutions already in the ITU on this for quite some time. And I’m wondering how you really think that we as governments also should act on ensuring or creating more, well, how should I say this? Sorry, I’m slightly tired after a week of IGF. So what role you see for governments in this process? And which role you see for the technical, no, sorry, for the business community? Because I’m seeing ICANN here, and they are, well, more from the technical community, and also the example given about Google or Gmail and not really being instrumental in this. I wonder which role you see for that sector. Thank you.

Moderator:
Thank you for the question. I think there are two parts to your question, so I will try to dissect it for the speakers. I heard Mariela was going to leave early. Is she still on? She’s left? Because I think one part of the question was, how can governments help in terms of the policy aspect? Mariela would be very good to address that. If that’s the case, can I first ask Edmund, then Teresa?

Edmon Chung:
The question is how much time we have, right? We can talk about this for hours. You talk about three years. Actually, I’ve been working on this for 25 years, and it is improving. What I would like to say, a few things. First, why is it important, why is it not done? Yes, it’s also because of the technical and the policy requirements, so we’re still working through the technology and the policies to make it work. We are very close to completing that, as Teresa will I’m sure add. So now is the time, is really in my view. And I think there is also the issue of the suppliers that provide… IDN registrations, for example, is not seeing the demand. And that is a problem. And why didn’t they see the demand? Well, one of the reasons is because universal acceptance is not ready yet because emails are having still, different platforms are still not supporting it. Web hosting remains a problem. So that is why we need policy intervention. That’s why we need to break this chicken and egg issue. So what governments should do? I think governments should look at ICANN. What ICANN did was a few years ago, and that, in my view, was a significant change. ICANN, for the first 20 years, have been supporting but not really doing it themselves. But since, I think it was 2015, around that time, ICANN had a universal acceptance program and it’s starting to look at its own internal systems and make it ready. And that was a big thing. I think the next step needs to be, registries and registrars need to be completely universal acceptance ready. And if I can wave a wand and the GNSO would do everything, I would ask for registries and registrars to be required to be universal acceptance ready, me being one of the registries as well. And I know it’s very difficult. It’s actually difficult because it is a bit of a long tail. The reason why it’s difficult is because some of the systems that use, every part of the system you use touches on domain names and email addresses. And therefore, the actual change is small, but the long tail thing is, the long tail is pretty long. So governments look to ICANN, the tendering process, all the IT systems that governments use, maybe you can’t ask for them to be completely universal acceptance ready, but what you can ask for is a roadmap, right? You can ask for in your tenders is to ask the question, are you universal acceptance ready? If not, what is the plan to become universal acceptance ready? I mean, that’s the one big thing. Second big thing, I think, which I mentioned earlier as well, is about the education side. The curriculum needs to be updated. When students are taught Networking 101, internationalized domain names and email addresses should be the basic protocol, not an add-on. So I think at least those are two immediate things. And of course, the government systems themselves to become universal acceptance ready. Hopefully that’s useful.

Theresa Swinehart:
I think Edmund really identified some really core actions we’d seen with government contracts in relation to IPv6 that had been successful, or at least created awareness around it and encouraged businesses from that standpoint. I think on the local community side and the education side, encouraging the generation of local content and awareness that one could actually do local content then. So I think there’s quite a bit there through both the economic and the social areas within the governments. And then also with activities in some of the partner organizations. We heard from UNESCO and others around the need for this and the value of that. I think that we often hear at the national level the preservation of culture, the preservation of languages, linking that to the digital world and the opportunities that are there from a government standpoint. I think from a business standpoint as well, there’s always the argument of, is there the demand? Well, if one’s not providing it, one doesn’t know if there’s a demand. And there might be ways to create consumer awareness, to know to ask for it and to know to say, I would like to actually be able to have this resolved in that right way because one can create the mechanism and the technology for it. And as Edmund said, we’re still developing some of the policies and ironing out some of the different areas in the trainings. But there’s the overarching awareness that one actually can have something in one’s own language, just like one can have the ask for clean air or clean water. It’s not a utility from that standpoint, but it is something that is near and dear to every single individual in how they communicate with each other. So I think some of those angles.

Moderator:
Thank you. We have more questions or comments from the floor. Let’s go to the next gentleman. Thank you.

Audience:
Hi, my name is Keisuke Kamimura, professor of linguistics and Japanese at Daito Bunka University in Tokyo. And I attended a workshop on artificial intelligence this morning. And one of the speakers mentioned that artificial intelligence needs large chunk of language-based data. Otherwise, AI does not learn by themselves. So language-based data for lesser used languages or minority languages should be available before making artificial intelligence becoming meaningful for them. Is this kind of issue related to this panel or is it going to be dealt with somewhere else or somebody else?

Edmon Chung:
Well, I think it’s very relevant, actually, when one of the things that I mentioned is that today on the web, 57% of the content is still in English. So even for general AI, it is English-dominant. So the machine would have learned that English is the dominant language as well. So I think it is a matter that is important to address. Although I’m certainly not an expert on that, but I do know that, given that, we know that there is a bias with the AI and in that part, I do participate in the IEEE working group on algorithmic bias. But back to the topic here, I guess the relevance, I believe, is that internationalized domain names and email addresses can spark the creation of content and services in the different languages, whether it’s indigenous languages or different languages. And the reason why I like to use this, remind people of one interesting fact of the internet development, the DNS, the domain name system, was created in 1983. Six years later, the web was invented in 1989. The basic infrastructure needs to be there. The basic foundations of the domain name and email addressing system needs to be there for the content layer to flourish. And I think this is gonna be true for IDNs as well. Thank you.

Moderator:
Can I do a quick segue? I know there is a queue, but I think do a quick segue to Mark. Because when we were preparing for the session, we talked about working with open source community on standards and guidelines. I think ties in back to the question from the professor about availability of data in terms of languages. So I thought, Mark, maybe we can segue a quick one to you about in terms of relevance for AI on the one hand, but really working with the open source community, is there anything we can do to get this topic and going to raise awareness and to get people to adopt?

Mark Durdin:
Yeah, thank you. I think it’s very interesting hearing a lot of this discussion, and I’m really resonating with a lot of what the other panel speakers are saying and Edmund and Ndomo. And one thing I’d like to note is that the Unico Consortium, which kind of works at a slightly different level to ICANN, the Unico Consortium has launched the Digitally Disadvantaged Languages Subcommittee, and that’s a mouthful. But the DDL subcommittee has just been launched, and it’s a real opportunity for us to engage with industry partners because the Unico Consortium is a consortium of major industry vendors working in internationalization. And so there’s a real awareness right now coming out of the international decade of indigenous languages. So let’s make sure that we are engaging at that level there. And that correlates with the perception in my part of the tech space that ICANN is very low level. You sort of deal with all of these nuts and bolts at a level where normal software developers don’t need to think about it. And so I think for that reason, for many software developers, universal acceptance and IDN, it’s not even on their radar, just because it’s like, oh, it’s all low level stuff, somebody else has already dealt with it. So I think some level of promotion in an accessible space is the kind of thing that the W3C has done very well with a lot of their standards for the web. But doing some of that kind of thing with clear accessible FAQs around UA and what needs to be done and the gaps is a really good starting spot because I’ve been doing this for a while, but I haven’t actually found a one-stop shop where I can point people to, which is accessible. There’s lots of very detailed documentation, but nothing that really says, well, this is the problem, here’s your normal questions. So in terms of engaging with the open source community, that’s a really good starting spot. I’d be really good, I remember in the UA report from 2022, that there was a big list of major products and their level of support. It’d be good to do the same sort of thing with some of the low level libraries that really power the internet, things like Curl and OpenSSL, Node and PHP and even WordPress and just have a look at how well do these products support universal acceptance and where are their gaps. And even then going through to the other end of the space, looking at end user software, things like Firefox and Thunderbird. These are places where we have the opportunity to make improvements and submit changes to those communities and support them without needing to wait for commercial priorities necessarily. And that often then drives the commercial vendors to go, oh, we need to actually match that functionality. So it’s just another prong in the strategy.

Moderator:
Thank you, Mark. So, yeah. Appreciate that. Okay, we have a queue, so let’s try to move. So let’s go to the next gentleman, please.

Audience:
My name is Yudo, for the record. I’m from Indonesia. Serve as a board member, one of the board member of .id Registry Pandi and also working as a professor at Faculty of Computer Science, Universitas Indonesia. Indonesia, I’m coming from a country with more than 700 languages. Out of those 700 languages, we have identified more than 30 of them and actually use a non-Latin script. One of the language that spoken by many people in Indonesia is actually Japanese language, spoken by 60 million people in Indonesia. And we also have Balinese language. Everyone knows Bali. And the language is spoken by more than 3 million people in Indonesia, mostly in Bali. So, three years ago, we submit an IDN application, international domain name to ICANN. And the motive or the background of it is simply to serve the underserved community, to preserve the indigenous language and also to give universal access to them. Unfortunately, it was rejected by ICANN. As Teresa mentioned, there are several requirements, technical and also political. Now, I mentioned two of the political requirements mentioned in the document. Number one is that the panel, you know that ICANN simply just give it to the expert panel and then they will review the application. They mentioned that the Japanese language is written today only in Latin-based characters except for scholarly, historical and decorative purposes. And then there are also requirements that mention that the language must be used as an official communication of the relevant public authority and also serve as a language of administration. So, if you look at those two requirements, it’s like chicken and egg. I mean, if the language is actually served as the language of administration as used by the public authority and commonly used, then it is actually not an underserved community in my point of view. So, we think that by providing the IDN for Japanese, Balinese, then the people will have a room to play digitally because for non-digital work, I think UNESCO and my government and also Dutch government, if you mentioned University of Leiden actually has done lots of work for our manuscript. So, to make us happy, we are actually currently intensively communicate with ICANN, with Professor Sarmat and also PTNAN to develop the label generation role for the Japanese, Balinese and Pagan. But with those two requirements, because in Indonesia actually since 1928, we have an oath that we will use Bahasa Indonesia as the national language, but we still have those more than 700 indigenous language. So, we are not like India who actually they put most of the languages in their constitution. So, we just actually support very much when India is actually apply for the IDN. So, as long as we still have the requirements, I don’t think ICANN is actually very serious in the inclusivity, underserved community for the people who actually they don’t speak English or they don’t use common language like Japanese, Korean and also Chinese.

Moderator:
Thank you. Thank you. Before we go to Teresa to address it, because we only have three minutes, let’s not leave anyone behind. Let’s go for the question from on remote as well. Then let’s try to use the remaining time to wrap up. Yeah.

Audience:
So, this is a question from Amit Paria. How would the private sector be motivated to develop tech with language inclusivity as their priority? Okay, this one, would Mark or Emma want to take it? Hold on first, then we get the lady over there, please. Thank you. Thank you, panel. First of all, congratulations to anybody who is working tirelessly to improve access to languages digitally. Both AI and automated speech recognition. But at the same time, a plea. My name is Lydia Best and I am from the European Federation of Hard of Hearing People. And many of you know that I am a deaf person. And many of us rely on sometimes auto-captioning services. And the plea is, can you please make sure that we actually don’t get censors? Because often auto-captioning censors language. What hearing people can hear, we don’t, especially swear words. We also want to know if somebody was swearing. Thank you.

Moderator:
Thank you. All right, we have two minutes, maybe one minute for Teresa to view those question. And then maybe Emma can do a quick one for the remote participant. Thank you.

Theresa Swinehart:
Yes, and we can also follow up offline on this. But my understanding is that, I was just checking with a colleague, that we are working with the Javanese community to develop the LGR. And I believe you’re part of all of that work. And that one of the areas is that the Unicode categorizes the Javanese as a limited use language. But we’ve also requested to start working with a Unicode listed as a recommended script identifier. So hopefully that work will continue to bring progress to this issue, of course. And that will also help us with the process for new applications. So rest assured, work is underway as we’re trying to resolve the different areas. And we look forward to your continuing efforts on that. So thank you.

Edmon Chung:
I guess I will quickly add to that as well. We’ll very much encourage you to bring this up again at the ICANN public forum in Hamburg. But two things quickly, the IDNCC PDP, the policy development process is actually still ongoing and is addressing some of the issues that you’re saying. The second thing is, maybe we should ask ICANN to revisit the label generation process in light of the international decade of indigenous language. And if we embrace that, that would work well. In response to the question on the private sector incentive, I think we’re looking at an issue of market failure and therefore just relying on market forces is not gonna work, especially when you’re talking about indigenous languages and supporting universal acceptance that have a very long tail. So in that sense, then it really requires policy intervention and it’s either motivation as in giving you money to actually get it done, or the other way is a kind of quote unquote penalty or requirement, which is what I mentioned earlier, requirement in tenders. So I think those types of policy intervention would be useful.

Moderator:
Thank you. All right, we’ve run out of time, but this has been very engaging in terms of the discussion and very insightful comments from our speakers on the panel. And to wrap up, I’d like to ask everyone just to give it a think in terms of inclusion for multilingual internet. Let’s say if you are more of a technical person like Mark, if you come across a website that doesn’t accept domain names or emails in other scripts, what can we do to raise awareness about it to let the software developer or the company know that they should fix it? This will help to generate awareness. And also let’s say you are from a academic institution, we have professors here, students here, also Nodumo who shared, how can we include in the curriculum for our students to know about this space so that we can be more inclusive? Even for end users, you know, are there websites or software that you use day to day? And have you thought whether they can accept domain names or emails in other scripts? Let’s think about it. And perhaps the next time we come back, we can share some progress instead of just saying that it’s very important, very important, but perhaps next year when we come back, we can actually share some progress we’ve made together one step at a time. So with that, we’ll close for today’s session. Thank you so much for participating and help me thank my speakers. Thank you.

Audience

Speech speed

156 words per minute

Speech length

1124 words

Speech time

432 secs

Edmon Chung

Speech speed

161 words per minute

Speech length

1974 words

Speech time

735 secs

Marielza Oliveira

Speech speed

208 words per minute

Speech length

873 words

Speech time

252 secs

Mark Durdin

Speech speed

195 words per minute

Speech length

1792 words

Speech time

550 secs

Moderator

Speech speed

181 words per minute

Speech length

1564 words

Speech time

518 secs

Nodumo Dhlamini

Speech speed

110 words per minute

Speech length

665 words

Speech time

364 secs

Theresa Swinehart

Speech speed

208 words per minute

Speech length

1494 words

Speech time

431 secs

Internet Engineering Task Force Open Forum | IGF 2023 Town Hall #32

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Dhruv Dhody

In a series of discussions, Dhruv Dhody and the IAB outreach coordinator emphasise the importance of increasing participation and diversity in the ITF (Internet Technical Foundation). Dhruv Dhody specifically focuses on the need for more participation from India, particularly from multinational corporations and large network operators. His experience in implementing and designing Request for Comments (RFCs) has made him aware of the potential that India holds in contributing to the ITF. With the support of individuals like Suresh, Dhruv and others have been diligently working to encourage and enhance participation from India.

On the other hand, the IAB outreach coordinator discusses the various efforts being made to improve access to the ITF and increase diversity. They highlight the role of education and outreach in achieving these goals. As part of the ITF’s education and outreach team, the coordinator focuses on making it easier and more successful for women and individuals of diverse genders to participate. Their efforts have resulted in positive changes within the ITF since their involvement began around 2010.

The discussions indicate a positive sentiment towards increasing participation and diversity within the ITF. It is evident that both Dhruv Dhody and the IAB outreach coordinator recognise the significance of broadening the participation base and promoting inclusivity within the ITF community. By encouraging multinational corporations, large network operators, and individuals from underrepresented groups to actively engage and contribute their expertise, the ITF can benefit from a diverse range of perspectives and ideas.

Overall, the detailed analyses of Dhruv Dhody and the IAB outreach coordinator shed light on the ongoing efforts to create a more inclusive and diverse ITF. Their observations and insights emphasize the positive changes observed since their involvement began in 2010. These discussions serve as a call to action for increased participation from India and a concerted effort towards improving diversity within the ITF.

Colin Perkins

Colin Perkins, an esteemed member of the University of Glasgow, is highly involved in the Internet Research Task Force (IRTF), where he serves as chair. He has actively contributed to both the IRTF and the Internet Engineering Task Force (IETF) since the 1990s and has successfully led various IETF working groups.

The IRTF plays a crucial role in conducting long-term research, complementing the near-term standards work performed by the IETF. Perkins, as the chair of the IRTF, acts as a bridge between the research community and the standards development community within the IETF, facilitating coordination and collaboration between the two.

Perkins values his role in coordinating research and standards communities, considering it an essential aspect of his work within the IRTF. He believes that such collaboration is pivotal in driving innovation and growth within the industry.

One notable outcome of the collaboration between the IETF and the World Wide Web Consortium (W3C) is the development of the next version of the web transport protocol, known as HTTP3. This significant advancement in web technology was accomplished through joint efforts. Furthermore, the collaboration between the IETF and the W3C has also led to the creation of the WebRTC protocols for facilitating video conferencing.

Throughout his involvement in the community, Perkins has had a positive experience, finding the process to be remarkably straightforward. This observation highlights the effectiveness of the community in fostering a conducive and efficient environment for collaboration and development.

In conclusion, Colin Perkins, a highly regarded member of the University of Glasgow, serves as the chair of the IRTF. His active involvement in the IRTF and IETF, along with his expertise in coordinating research and standards communities, contributes to the advancement of long-term research and the development of standards within the industry. The collaboration between the IETF and the W3C has yielded significant results, such as the HTTP3 protocol and the WebRTC protocols. Perkins’ positive experience in the community further reflects the efficacy of the collaborative process.

Suresh Krishnan

Suresh Krishnan’s work on IPv6 is driven by his goal to bridge the digital divide between developing and developed countries. In the late 90s and early 2000s, it became evident that developing countries, such as India and China, lagged behind developed countries in IP address allocation. This discrepancy posed a significant challenge for these countries in terms of equal access to technology and communication.

IPv6 emerged as a new technology that was seen as a solution to this problem. It provided a much larger number of IP addresses compared to the limited supply of IPv4 addresses. By implementing IPv6, developing countries could access a larger pool of addresses, enabling them to expand their connectivity and reduce the digital divide. Recognizing the potential of IPv6, Krishnan dedicated his efforts to advancing this technology, with the aim of creating a more equitable digital landscape.

Krishnan is actively involved in the IETF community, which is known for its open and supportive approach. The community has made significant progress in promoting inclusivity in participation and collaboration, which plays a crucial role in addressing challenges and finding effective solutions. The IETF facilitates remote participation, allowing individuals who are unable to attend meetings in person to engage and contribute to discussions. Financial waivers are provided to those facing financial constraints, ensuring equal opportunities for participation. The community has also made provisions for childcare at meetings, demonstrating their commitment to supporting young parents and promoting inclusivity.

Krishnan emphasizes the importance of inclusivity in problem-solving through collaboration. Inclusivity ensures that diverse perspectives and ideas are considered, leading to more comprehensive and innovative solutions. His advocacy for inclusivity aligns with the belief that collective intelligence and diverse experiences contribute to more effective problem-solving.

The multi-stakeholder approach, which involves engaging various stakeholders such as governments, civil society organizations, and the private sector, has proven successful in problem-solving. The experience of the IETF community highlights the effectiveness of this approach in leveraging expertise, fostering cooperation, and achieving common goals.

In conclusion, Suresh Krishnan’s work in IPv6 focuses on reducing the digital divide between developing and developed countries. The IETF community promotes a supportive and inclusive environment, encouraging collaboration and inclusivity in problem-solving. The multi-stakeholder approach holds great potential for driving future development through collective efforts and diverse perspectives.

Lars Eggert

The Internet Engineering Task Force (IETF) is highly regarded for its open and inclusive platform that enables individuals to participate and contribute to improving internet protocols. This open participation model does not require a membership fee or any formal sign-up process, making it accessible to anyone interested in contributing to the development of the internet.

One example of the positive experiences individuals have had with the IETF is shared by Lars Eggert, who joined as a PhD student and contributed to the improvement of the TCP protocol. This highlights the opportunity for young researchers to get involved and make a meaningful impact on internet protocols.

Protocols such as IP, DNS, and TCP have been continuously evolving over the years. Despite carrying most of the bytes on the internet, these protocols have undergone significant changes since their inception. It is worth noting that despite sharing the same name, these protocols are vastly different than they were in the past.

The IETF’s unique process of designing technical specifications plays a crucial role in the development and maintenance of the internet. This process, which closely resembles maintaining an aeroplane in flight, has been in place since the inception of the internet. Discussions and developments within the IETF occur in a collaborative manner, allowing for the continuous improvement of internet protocols.

The IETF also shows a strong commitment to enhancing internet security and privacy protections. Two years ago, they published version 1.3 of the Transport Layer Security (TLS) protocol, which added significant security and privacy measures. This effort was intensified following the revelations made by Edward Snowden, which prompted additional work towards strengthening the security of the internet.

A notable development in internet traffic has been the introduction of QUIC (Quick UDP Internet Connections) with HTTP 3 and TLS 1.3. This combination has dramatically transformed the model of internet traffic. In fact, it is estimated that QUIC with HTTP 3 and TLS 1.3 already accounts for close to half of all web traffic. This serves as further evidence of the IETF’s ability to drive significant changes in the internet landscape.

The IETF has also taken steps to address the problem of stalking through devices like AirTags. They have initiated a Birds of a Feather session (DALT) to discuss this issue. Moreover, major device vendors have come together at the IETF to standardise measures and find solutions to prevent stalking incidents.

Overall, the IETF acts as a suitable platform for standardising measures for device tracking. It embraces an open and inclusive approach, allowing everyone to participate and contribute without any membership fees or restrictions. The clear rules established by the IETF ensure that the working process is understood by all participants.

In conclusion, the IETF’s open platform, dedication to evolving internet protocols, unique process of designing technical specifications, commitment to security and privacy, ability to drive change, and efforts to address emerging challenges make it a crucial institution for the development and maintenance of the internet.

Andrew Alston

Andrew Alston, one of the three routing area directors in the Internet Engineering Task Force (IETF), highlights the importance of increased operator participation in the IETF. He firmly believes that operators must actively engage and contribute to ensure that the internet functions in a way that benefits them. Alston acknowledges the critical role that operators play in maintaining and improving internet infrastructure, and their expertise is invaluable in shaping internet standards and protocols.

Additionally, Alston advocates for greater African representation and participation within the IETF. As a representative of Kenya in the IETF and the head of the Research and Development department for Liquid Telecom in Kenya, he emphasizes the significant discrepancy between Africa’s population of 1.2 billion people and its limited representation in the global internet standards body. Alston sees the IETF as a platform to address the unique needs and challenges of the African continent regarding internet protocols and standards. By encouraging increased African involvement, he aims to ensure that the development and governance of the internet are inclusive and responsive to the African perspective.

According to Alston, the IETF welcomes participation from operators, vendors, and governments, making it an open community. He believes that the IETF’s strength lies in its ability to bring together diverse perspectives and cultures, contributing to better decision-making and more robust internet standards. Alston recognizes the importance of a multi-stakeholder model in achieving these goals and acknowledges the IETF’s commitment to diversity.

However, Alston acknowledges that the IETF could do better in terms of diversity and inclusivity. While the organization embraces diversity as a core principle, there is still room for improvement. Alston’s admission reflects an understanding of the ongoing challenges faced by the IETF in ensuring equitable and inclusive representation.

In conclusion, Andrew Alston, as a routing area director in the IETF, advocates for increased operator participation and greater representation from Africa in the internet standards body. He emphasizes the crucial role of operators in shaping the internet and highlights the unique needs of the African continent. Additionally, Alston recognizes the IETF’s commitment to diversity but also acknowledges the need for further improvement in this area. His insights shed light on the importance of inclusivity and diversity in internet governance and the ongoing efforts to achieve these goals within the IETF.

Jane Coffin

The analysis reveals that the Internet Governance Forum (IGF) has limited representation of the technical community, as highlighted by audience comments. Efforts are being made, however, to address this issue. It is predicted that there will be increased participation from the technical community in the future.

Jane Coffin, in agreement with the audience’s observation about the lack of representation, indicates efforts to remedy this. She acknowledges that there was more participation from the technical community in the early years of the IGF. Coffin also points out that the IETF, IEB, and RERTF were present at the session, indicating some level of technical community involvement. She predicts that there will be even more participation in the future.

Furthermore, Coffin emphasizes the need for more valuable input on technical aspects in the IGF discussions. Specifically, she mentions internet exchange points, BGP, and IP addressing as areas where more input could provide valuable contributions to the Multistakeholder Advisory Group (MAG). She advocates for bringing back a past practice of focusing on these technical aspects.

In addition to technical input, Coffin appreciates the potential of the Internet Society (ISOC) and recommends its Japanese chapter for potential workshops. She used to work at ISOC and believes they have strong potential in helping with workshops.

Moreover, Coffin encourages engagement with the Internet Engineering Task Force (IETF) and the Internet Architecture Board (IAB). She expresses gratitude and encourages the audience to stay in touch with these technical bodies, highlighting their importance in the context of networking, digital cooperation, and sustainable development.

Overall, the analysis indicates the need for increased representation of the technical community in the IGF. Coffin’s arguments and recommendations provide valuable insights into how this can be achieved, including the focus on technical aspects and collaboration with relevant technical organizations. It is crucial for the IGF to involve the technical community to ensure comprehensive discussions and effective decision-making on internet governance issues.

Audience

During the event, speakers highlighted several key points. One major concern raised was the lack of diversity in standards bodies, with limited participation from women, civil society organizations, governments, end users, and the tech sector. Only around 10% of participation in the Internet Engineering Task Force (IETF) is from women, indicating a significant problem. This lack of diversity can have negative consequences for both the standards themselves and the broader industry.

On a positive note, it was argued that diversity is crucial for improving organizational culture and the quality of output. A diverse range of perspectives and experiences leads to more innovative and inclusive solutions. The importance of diversity in achieving the Sustainable Development Goal of reducing inequalities (SDG10) was also emphasized.

The existence of unintentional barriers hindering diversity in standards bodies was also discussed. These barriers affect both entry and ongoing participation, making it difficult for certain groups to get involved. Identifying and addressing these barriers is essential for promoting diversity and ensuring equal participation.

There is also a need to extend web standards to rural communities and remote locations, as highlighted by a question from a worker in a rural area of Bangalore, India. The speaker argued that web standards should go beyond urban areas and be accessible to everyone, including those in underserved areas. This aligns with SDG9, which focuses on industry, innovation, and infrastructure.

The positive impact of the IETF was recognized, particularly in the area of privacy. Danko Jevtovic, a member of the ICANN board, commended the IETF’s work on privacy standards. The open and free standards of the IETF, based on Internet Protocol (IP), were also praised as a successful strategy against closed systems.

However, there was concern over the lack of representation of the technical community in the Internet Governance Forum (IGF). It was argued that the technical community should have more involvement in the IGF to ensure balanced representation and better decision-making.

The challenges of transitioning from legacy technology and protocols to newer ones were also discussed. It was pointed out that some government systems, like those in Japan, still use outdated protocols such as FTP. While there is recognition of the need to move away from legacy technology, there are challenges that need to be addressed for a smooth transition.

Finally, the audience expressed the need for longer sessions and workshops to allow for more in-depth discussions and learning. While Jane Coffin’s moderation was appreciated, it was felt that more time was needed to fully explore the topics. Additionally, a preference for on-site work was mentioned, indicating a desire for physical presence and collaboration.

In conclusion, the analysis revealed various challenges and opportunities in the field of standards bodies and internet governance. The lack of diversity, unintentional barriers, the need to extend web standards, and the importance of the technical community’s representation were key concerns. On a positive note, the impact and effectiveness of the IETF’s work, as well as the benefits of diversity in organizational culture and quality of work, were highlighted. The event provided valuable insights and called for actions to promote diversity, address barriers, and ensure wider participation in shaping internet standards and policies.

Mallory Knodel

Mallory Knodel, a professional associated with the Centre for Democracy and Technology, actively participates in various technical communities and organisations. She serves on the Internet Architecture Board (IAB), alongside Dhruv Dhody and Suresh Krishnan, where she chairs a research group on human rights and collaborates with Suresh on a working group. Her work demonstrates a commitment to evolving and promoting ethical practices within the field.

Mallory’s involvement in the Internet Engineering Task Force (IETF) began almost a decade ago when she worked at the Association for Progressive Communications (APC). During her time at APC, Mallory discovered interesting and useful experiences within smaller tech communities, such as independent or NGO-operated community networks and web or email hosting. Recognising the value of these experiences, she strives to incorporate them into larger standards bodies.

In addition to her work with community networks, Mallory has expertise in digital security and journalism. She has conducted training sessions for journalists and activists, equipping them with crucial skills in digital security. Mallory acknowledges the challenges of teaching advanced concepts like PGP encrypted email but believes that by changing the Internet at the IETF level, it is possible to better serve individuals in vulnerable situations.

Furthermore, Mallory recognises the need to extend web standards to rural communities. While the World Wide Web Consortium (W3C) primarily establishes web standards, there is some overlap between the work of W3C and IETF. Mallory’s organization actively promotes diverse web standards, emphasizing the importance of catering to the needs of different communities.

In terms of Internet governance, Mallory sees an opportunity for the W3C to contribute to the Internet Governance Forum (IGF). Unlike the IGF, which primarily focuses on policy matters, the technical communities represented by organizations like the W3C can bridge the gap between policy and technical aspects. Currently, the W3C has limited presence at IGF, but their participation could significantly enhance the forum’s effectiveness.

Additionally, Mallory notes a decline in participation within technical institutions over the years. She agrees with Jane Coffin’s observation regarding the decreasing attendance at sessions held by organizations such as the IETF, IEB, and RERTF compared to a decade ago. Mallory and other members of the technical community are making concerted efforts to restore participation levels to their former heights, demonstrating a shared commitment to fostering a thriving technical landscape.

In conclusion, Mallory Knodel’s contributions and experiences within various technical communities and organizations encompass a wide range of significant areas. From her involvement with the IAB and efforts to incorporate smaller tech experiences into larger standards bodies, to her training of journalists and activists in digital security, and her recognition of the importance of extending web standards to rural communities, Mallory consistently exhibits dedication to promoting ethical practices and inclusivity within the rapidly evolving technological landscape.

Mirja Kühlewind

The Internet Engineering Task Force (IETF) is an influential organisation that drives internet standards. They focus on creating high-quality, industry-wide standards to promote interoperability. The success of the IETF is measured by the voluntary deployment of their protocols.

The IETF’s decision-making process, based on “rough consensus,” ensures that decisions are made by the community. This inclusive approach allows for progress even amidst differing opinions and concerns.

The openness of the IETF is crucial to its impact. They keep barriers low to encourage participation and promote transparency. This fosters collaboration and knowledge exchange.

However, engaging with the IETF can be overwhelming due to the complexity of information and tasks involved. It is a dynamic platform for knowledgeable individuals, but newcomers may find it challenging.

The IETF values diversity to ensure quality standards. They strive for inclusivity, recognising that not everyone has the same resources to participate. The freely accessible standards enable anyone to enhance them.

The IETF actively reaches out to policy stakeholders, explaining their work and establishing dialogue. They recognise the importance of updating old protocols to maintain internet health and security.

In conclusion, the IETF is an influential organisation driving internet standards. Their commitment to high-quality standards, inclusive decision-making, and knowledge sharing make them a dynamic platform. While engaging with the IETF may be challenging, their focus on accessibility and inclusivity ensures the continued development of internet standards.

Session transcript

Mirja Kühlewind:
IETF, the Internet Engineering Task Force, and we would like to talk about the importance of interoperability and the multi-stakeholder model of the IETF. My name is Mirja Kühlewind. I’m the chair of the IAB, the Internet Architecture Board, one of the two leadership groups in the IETF. I will quickly, and this is why you’re seeing some slides in front here, I will quickly go through like a handful of slides and give you some kind of the most important things about how the IETF works, a little bit also what we’re doing, but on a very high level. And then afterwards, we will have a little panel. We brought some of our leadership members here, so Colin, the IRTF chair, and Mallory is also an IAB member, and we have four more leadership members online. Lars Eckert, the IETF chair, who unfortunately couldn’t come here in person, then Suresh, another IAB member, Suresh Krishnan, then Andrew Alston, the routing AD, and Dhruv Dhodi, another IAB member. However, they will also introduce themselves during the panel, and I will just quickly run through the slides, and then I will hand over the discussion to Jane Coffin, who’s moderating the panel, and I hope at the end we have a lot of time for you to ask questions. Okay, so this is also just, you see us here in person, but you know, so you can actually see the names and some of the acronyms I just mentioned, and I will explain these acronyms a little bit in the next couple of slides. Again, I’m the chair of the Internet Architecture Board, Colin, the IRTF chair, and Mallory, an IAB member, and this one is probably more important if you’re not locked into Zoom, and so you also see the faces of the people online. Lars Eckert, the IETF chair in Finland at the moment, I think, early morning. Andrew Alston, our routing AD from Kenya. Dhruv Dhodi is based in India, and Suresh Krishnan from the US. So this is a short slide which, however, has like some of the important messages about how the IETF works. So very important point about the IETF is openness. Everybody can participate, and I think we try to be really as open as possible, so we don’t have membership feeds. Like, everybody can just subscribe to any mailing list and enter the discussion at any point of time. Anybody can come to a meeting. We have good online support for our meeting, and this openness is kind of in the heart not only of our organization, but it’s also something that is reflected on the Internet, because that is really one of the goals of the Internet, and we try to run the organization in the same way. And openness also means not only everybody would be able to participate, because there’s no, like, we try to keep barriers for participation as low as possible. It also means that the things we are producing, the standards, the protocols, are free for use. So there’s no fee for, like, accessing our documents. They are all online, and, like, again, this is like the spirit of the Internet, where you just need to, like, implement a standard, and then you are in the Internet and you can participate, and that’s also, I think, what has driven the success of the Internet. Another thing to notice is really that we are a very technical organization. Our meetings are focused around solving technical problems, sometimes very, very detailed, and what we’re trying to do is to make good judgment on the technical level, and that’s also why we can drive a consensus-driven process, because in a lot of cases you can actually come to compromise on the technical level and move forward. Of course, not everything we discuss in the ITF is only technical. There are implications that we need to be aware of, but our focus is really on doing good technical work that then gets adopted by companies because they get a value out of it and it improves the Internet. And that’s the other point. We are actually measuring our success. Did we do a good job based on, kind of, the voluntary deployment of our protocols? We cannot directly tell anybody to use our protocols or to, like, do a certain thing, but what we’re trying to do is to do good technical work to make the protocols as reliable and secure as we can in order that they actually address the needs people have and they get deployed because they are good and they improve the Internet. This is the slide where I very quickly explain some of the acronyms I already mentioned. So the IETF has actually two leadership groups. One is the Internet Engineering Steering Group and the IETF chair, Lars Ecker, is chairing that group. And this is a group that is looking at the actual standard process. So they approve and review all standard documents, they manage the working groups, the meetings, and their goal is really to make the standards process as good and as productive as possible. And the other leadership group is the Internet Architecture Board. And the Internet Architecture Board has also some kind of admin roles. There are little things we need to care about. But as the name said, there’s also a point about architectural oversight and what that means is that we’re trying to look at the work that’s happening in the IETF at a more higher level. We’re trying to understand are there any gaps that are important in order to follow the principles that the Internet is built on? Is there a discussion that doesn’t have enough attention that we need to drive forward because there’s an issue? And these kind of things. And then at the same time, we also do outreach and liaison management. So we are basically the Office of Foreign Affairs of the IETF, if you want to put it that way. And so we try to talk to other SDOs whenever there’s an overlap. But we also, and that’s also why we’re here, we’re trying to look what happens in the rest of the world. What are the important topics that we need to consider and that may impact the Internet and our standards work? And then I also would like to mention the IRTF. So this is the sister organization of the IETF. It’s the Internet Research Task Force. They have a very similar structure. They also have research groups. They have different processes. But they are in a sense integrated in the IETF that like we meet together in a common meeting. It’s all integrated and it’s very useful for two things. First, they look at the more long-term things. They look at the things that are not ready for standardization yet. They look at the things where we see a trend that we need to keep an eye on. And it’s also good for actually providing more diverse input. So this is a way to get researchers into the IETF and get an exchange between engineers, researchers and all the stakeholders and have a discussion. So I can just say from my own perspective, I’ve started as a researcher in the IETF and I could always provide some more neutral, different kind of input and was well received in the discussion. So I think this is a very strong point about the IETF. To give you a little bit of an idea, you know, like how big the IETF is, just a couple of numbers here and like you can mostly read them yourself. We currently have 130 working groups. This is changing a little bit more or less. Like for example, last year we created eight new working groups. We are closing some. We have some long-standing working groups that are there for many years but take up new work all the time. So there’s a lot of happening in the IETF. We are reaching the mark where we have nearly 10,000 RFCs. RFCs are our standard documents. But also RFC 1 goes back before the internet and before the IETF. So at the moment, or last year, we published nearly 200 new standards documents or documents that went through the standardization process. The participation numbers, you know, actually depends on like kind of how you define participation because there might be people who are only on mailing lists. There might be people who come only to some of the meetings or all of the meetings. People who write documents and whatever. And as we don’t have a membership, we cannot give you like this one number. So depending on how you look at this, it’s a couple of thousand people. There’s a lot of people who engage in discussions and who come to meetings to understand what we’re doing who might not be active authors or active, very active in the discussion. So, you know, it’s the people who probably write the standards are in the range of whatever, two to three thousand, something. I don’t know. You can read the numbers here and make your own conclusions. Okay, this one, this slide is a little bit crowded and I hope you can at least read some of it. But the reason why it’s crowded is because it has like a bunch of acronyms on it and I won’t explain all of them. I might not even be able to explain all of them and you don’t have to map all of them. It’s more to like give you a chance if you know some of the acronyms to figure out, you know, what the IATF, where to map it. But what you can get from this slide is that we are really working on maintaining, extending and developing the core protocols of the Internet. So we don’t do like the lower layer things. So we don’t do kind of any kind of radio or Wi-Fi or radio messaging or whatever or like Ethernet cable standards and so on. And what we also don’t do is like the very up layer, the application layer, the web itself or the things where like the user actually interacts very concretely with an application. But kind of everything in between. So and this is also how we organize ourself. We organized ourself a little bit in these layers to make sure we can we can coordinate correctly. And so we have the application layer where you have, for example, HTTP protocol. That’s what your browser is using. But also the protocols that are used for video conferencing. We have the transport. We have the routing. We have the Internet area. So this is where IPv4 and IPv6 lives and get further developed. But also DNS, for example. A lot of the infrastructure stuff. And then we have two more important areas. We have an operation management area. And that area is also working on protocols to manage the routers on the Internet, the devices on the Internet. But it also provides guidance and best practices about all the other protocols when you deploy them. So that’s why it’s shown here on the side. And then, of course, there’s security. And security is not a layer. Security is a function that you need everywhere. And you need to consider everywhere. So that is a very important part. And the people in security area are very busy. Okay. I have two more slides. This one is also just to give you a grasp about what the IAB is doing. The Internet Architecture Board. And as I said, we’re trying to figure out what are more kind of high layer topics that might impact. And this is just a list of topics that we’ve been discussing over the last whatever one or two years. There are also some references if you want to see some of the outputs. Not all of the discussion actually to documents or actions. But it leads to awareness. And you can see that like some of these topics here do map to discussions you have in this forum. Like fragmentation, censorship, security, of course. And then governance on the slide as well. So we’re trying to engage with these discussions and understand and create awareness about these discussions. But I also would like to note that I think when we discuss this topic in the IETF, it’s a different discussion than here. Because we really try to understand how does it impact our protocols? How does it impact the technology? Or the other way around, how does technology impact these topics? And how does it impact the internet architecture? And this is this is my last slide. But I really wanted to talk about this point a little bit more. Because it’s very important. And it’s openness. I was mentioning this at the beginning. And when I talk about openness, it’s really two things. It’s the openness of our standards. They are they are available at no charge. Which really fosters deployment and adoption of the standards. And it’s really is kind of one of the keys for interoperability. And why the internet has been so successful. Because all you need to do is really to kind of confirm to the standard. Implement it. And then go and connect to the internet. And that’s why we have a global network. Because everybody relies on these standards. And we can then talk to each other and create this one big network that we call the internet. And and and again like the more of our our work gets adopted on the internet, that’s how we measure our success. And for like some of the things we are doing, we actually see very good deployment. Sometimes it’s really hard for us to measure that for us. So this is kind of also where the focus is. And then the other part is really openness about the process. And like feel free to please ask later on any kind of questions you have about both of these things. Because I think like some of the aspects of the of the IETF work actually differently than other organizations. So maybe there’s worth the discussion. So we really don’t have a membership. Everybody can come. We have three meetings a year. Of course you have to pay a fee for the meeting. Because the meeting itself has cost, right? Yeah you get you get we have the rooms and and all the things you need for a meeting. But there’s also ways to support people if they don’t have the capabilities. We we make our whole process is extremely transparent. We not only make the documents our products at the end available for free. But also all the stages in between. Everything on the mailing lists. All the meeting minutes. And we have our own too which actually has an interface where you can get a lot of statistics about what’s happening. And there’s actually a bunch of researchers who do really interesting work. But trying to figure out you know how the dynamics are. And trying to figure out and things about you know driving forces and so on. On a more objective basis. Also something that is a little bit special for the ITF is that the whole decision-making is based on rough consensus. And that means we don’t have any kind of votings. And also we don’t have it’s not like the leadership that is deciding. The role of the leadership is to judge consensus. And decisions are taken by the community. And the way the reason why we have rough consensus is because that means that we can also move forward without even if there are you know if there are still concerns. Which doesn’t mean we are ignoring the concerns. We are we are trying to discuss all the concerns. Take them into account. But then if we if we see that like we have agreement between a good set of people who also want to apply the protocol and move forward. And then at some point we have to move forward and accept this roughness. Fortunately we don’t have a lot of roughness. I mean like for some topics for sure. That’s why we have the process. But in a lot of cases we have very good consensus. Because we can get like agreement on the technical level. And that’s where I want to stop. We have the panel discussion coming up. We will go into like a little bit more into some of the aspects. And I hand over to Jane.

Jane Coffin:
Thank you Miria. My name is Jane Coffin. I’m a co-chair of GAIA which is one of the IRTF research groups. Along with

Mirja Kühlewind:
Curtis Heimerl. I’m gonna do some quick rounds of questions with all of the panelists. So it’s Lars, Andrew, Suresh, Colin, Miria, and Mallory. You each have one to two minutes to tell our guests here and the participants online. How you’ve engaged with the IETF in the past. And what are your current work areas of focus. I’m gonna start with Lars. Go to Andrew. Go to Suresh, Colin, Miria, and Mallory. So get your answers ready. Lars you’re up.

Lars Eggert:
Hi. Good morning. I hope you guys can hear me. Okay. Excellent. Hi. So my name is Lars Eckert and I chair the IETF as Miria said. Greetings from Finland. It is 7.17. I really wish I could have made it to Kyoto. I’m sorry that that wasn’t possible. I hope I’ll see you next year. I thought I’d give you sort of maybe a little bit of a personal story. So to make it a bit more sort of personal about how somebody would start in the IETF. So I was a PhD student and I worked on this thing called TCP, the transmission control protocol. You might have heard of it. It carries most of the bias on the internet still. And so we worked on it. We did research. We came up with an improvement. And the question is how do you actually get that improvement out there onto the internet? And so you know you start looking you know so where does TCP come from? And you quickly sort of Google or at the time you used Lycos I think is what we used. And you quickly come across the IETF and specifically there’s a working group that works on this protocol right. And so you figure out there’s a mailing list and you figure out how to join the mailing list. And then you send an email that says hey you know we have this idea about an improvement to TCP. What does the group think about this? And in our case that change was sort of not uncontested let’s say. But the thing is they’re you know the experts participating typically in these groups. And so through these engagement we actually realized there’s a much better change that has a much broader impact. And it has some avoid some of the downsides. And so we revised our proposal and we discussed it. And eventually somebody said you know you should write this up so we can like you know put this forward towards publication. And then you learn about how you like format a document correctly so it can become an RFC and all of that. And then you also learn how does it get processed through the process that Mia just described. And in the end like there’s an RFC. And then if you’re lucky and the change is actually good implementers will pick it up and it’ll get deployed. So that is sort of an example of some of my work that started to come to the IETF. And as Maria said it is extremely open and it’s just possible for individuals to start participating. We don’t require a membership fee. We don’t even require any sort of formal sign up. So we have no notion of that. It is you know capable individuals that either come as individuals on their own time or obviously a lot of our participants are sponsored in some way by companies or other organizations like universities that sort of donate their time or their employers time to help improve the internet protocols. And there was a slide of me as it is with this hourglass slide that talked about the different areas that we’re working in. I often come across people that sort of think you know the internet architecture has is stale and isn’t changing and you know we need to have this you know complete revamp of how the internet worked technically. And that might be true if you look at the 10,000 foot level because we still have protocols like IP and the DNS and TCP. But these protocols have evolved constantly over the three to four decades of the internet’s existence. And all of them are very very different than they were 10 years ago 20 years ago or even two years ago in some cases. But the acronyms are still the same and they still fulfill more or less the same role in the architecture. And therefore it’s it’s easy to sort of assume just because there’s still a thing like the DNS the domain name system that you know the internet hasn’t really evolved when in reality it’s evolving all the time. So one analogy that I sort of use is we are basically maintaining an airplane in flight. We’re constantly changing everything about this airplane while it’s up in the air. And we take great care that it doesn’t crash. And that’s why it looks like nothing’s ever changing, because the plane just keeps flying and keeps rising. But we’re replacing the engines. We’re replacing the landing gear. We’re replacing the cabin interior. We’re replacing everything about this plane constantly. And this is sort of the power of the IETF, right? If you think about how would you actually do the technical standards for a global commons like the IETF, it needs to be done in a forum that is like the IETF, that is open, where everybody can participate no matter where they’re located or what time zone they’re in or what their background is. If they have the technical competence and the interest and the time to help us out, they can very easily do that. And our executive director, Jay Daly, is often saying that if we didn’t have the IETF, we couldn’t invent it now. But because the IETF was born together with the internet and the way of designing the technical specifications really is unique in the world. And if you think about the IETF, multi-stakeholderism is a very important concept that the IETF has. That actually probably originated with the IETF. This whole concept that we need to have different stakeholders participating in the standards process is something that the IETF had already in the 80s and 90s of last century. Because we had university people. We had the operators. We’ve had the equipment vendors. We’ve had various other constituents that came to the IETF to discuss the technical problems that the internet had to make it grow better. And the IETF has a really unique role and is a really unique organization. And those of you who are in a position to send engineers our way or participate as non-engineer stakeholders, please do. We have a few upcoming meetings that will be in Prague in the Czech Republic in November.

Jane Coffin:
We’re going to be in Brisbane in Australia in March of 2024. If I remember correctly, we’re going to be in Vancouver in the summer of next year, the northern summer, I should say, in July. And my memory is hazy from then on out. I hope we’re going to get some questions. I’ll pass it on to the next person. Thank you very much for your interest. Thank you, Lars. The next step is Andrew. Andrew, one to two minutes on what you’re doing and what we can learn more about what you’re participating in.

Andrew Alston:
Thanks very much. My name’s Andrew Alston. I am one of the three routing area directors, which means I basically handle the routing area and handle the standards coming out of the routing area as my primary responsibility in the IETF. When I first came to the IETF, it’s actually, I think, illustrates a little bit about the multi-stakeholder approach. So I live in Africa. I live here in Kenya. I was originally from South Africa, but I’ve been in Kenya for about 12 years. And as we were developing things, I hit up the R&D department for Liquid Telecom on this side of the world. And as we were developing things, we started to see that, firstly, we were facing certain challenges on the ground with regards to distances and other interesting issues that we find in running networks in Africa, as well as political changes which made us need certain things in the routing landscape that weren’t really catered for. And so that’s what brought me to the IETF originally. And I showed up and have never left. But I came in there as an operator. And the one thing that I would say is that I do believe that operators, we need more operators at the IETF. Because one of the things that I learned as I walked into the IETF is that if you want the internet to work in a way that works for you, you need to have your say. You’ve got to have your voice. And one of the things I spent quite a bit of time doing is trying to promote African participation in the IETF as well, to try and grow the participation from the continent. Because we’re sitting at the moment with 1.2 billion people on the continent, but the voices aren’t participating. And the IETF gives people a place to come and to participate and have their say and make sure that the protocols that we are deploying on the continent are not just a retrofit from everywhere else. It’s a consensus-driven approach where we can make our needs heard. And I think that’s really important. So yeah, I really hope to see a lot more operators, a lot more people at the IETF. And yeah, thanks very much for having me.

Jane Coffin:
Thank you, Andrew. Suresh, you’re up.

Suresh Krishnan:
And so with that. Thank you. Hey, my name is Suresh Krishnan. I’m an IEB member. And my thing is similar to Andrew’s. So I started working on IPv6. So IPv6 was a new technology in the late 90s and early 2000s. And it was part of an inclusivity issue. So if you look at the developed world, most of them had large blocks of IP addresses. And countries like India and China are really behind. We didn’t have that many addresses to go around. And a lot of us started doing work on IPv6. And if you look at Japan, Japan was really leading the stuff. Mori Sensei and Ito Jinsan, they were really ahead of the whole world in doing the IPv6 stuff. So that’s where I started. When I went in, I thought it would be this formidable thing and nobody would talk. And I found the experience very similar to last, that all these people you’ve seen in the standards and in the books and everything, they were all amazing people, totally willing to help out. So it was a really nice experience to come in and come up with your problems and solve them, like collaboratively with other people. So that thread has stuck around. And there’s also been a lot of things that we’ve done on the inclusivity front in the recent times. I think the remote participation is one. We always had good remote participation. It’s really ramped up quite a bit during the pandemic. And we continue it. And Nidia talked a little bit about the waiver. So we don’t want to have financial barriers for people to participate. So if you want to participate remotely and you’re not able to afford it, you can certainly get a waiver for that. We have, for people coming to meetings, we have childcare so young parents can continue doing the work on that. And we’ve done quite a bit to get people around from different constituencies, like academia, like Nidia said, with IRSE and IRTF work and operators who come in as well. And so like we are trying really hard to reach out and we would love to hear from people like about your problems, come work with us collaboratively so we can solve things together because as Lars said, the multi-stakeholder approach has really brought us this far and it’ll take us further going forward. Thank you.

Jane Coffin:
Thank you, Suresh. I’m right on time. Dhruv, you’re up next. One to two minutes.

Dhruv Dhody:
Thank you. I’m speaking here from Bangalore, India. I started participating in ITF as a software developer. So I was a consumer of RFCs for a long time. I have been reading RFCs, implementing them. And during that implementation process, you do realize that, oh, this feature is missing or something better can be done. And, oh, I wouldn’t have done it in this way. This is stupid. Let me come and fix this. So that’s how I got involved with my first document. And I wrote an internet draft, came to the first document, luckily got support. But then kind of did realize that it’s not like many people in my part of the world who are very active with RFCs, who read, who implement, but they never participate. They always thought of it as it’s something somebody else does. And we are the software arm of the company and we are going to go and implement things, but somebody else does the actual standard development. And this, I wanted to break. And with the help of, in fact, people like Suresh and other people, we started working within India, which has almost every MNC, every big vendor and huge operators in India, which managed really big networks. So started working with them and how we can increase participation from this part of the world. And over the years, yes, the participation has increasing. Remote participation has helped a lot, but still there is a long way to go. And yes, the journey is not over. I am also the IAB outreach coordinator. I’m part of the education and outreach team in the ITF as well. So this is very important for me as well. And I’m personally trying to put more and more effort in making ITF more accessible for people. I myself can see a lot of change has happened in the years that I started participating in, which is around 2010. So it’s been a while now. And when new people come in, how we can make it easier for them to participate in ITF has been very important. And with various programs as well, we have been doing it. As a non-binary person myself, making sure that the participation from women and other genders is as easy and as successful at ITF is also very key. And we have been focusing on that as well at ITF. Thank you.

Jane Coffin:
Thank you, Dhruv. That was really important to also note about the inclusion part. Colin, you’re up, one to two minutes.

Colin Perkins:
Okay, thank you, Jane. My name is Colin Perkins. I work at the University of Glasgow in the UK. I’m the current chair of the IRTF, the Internet Research Task Force. As Maria said earlier, the IRTF is a parallel organization to the IETF. And we focus on longer-term research to complement the near-term standards work in the IETF. And we try to act as a bridge between the research community and the standards development community in the IETF. I’ve been involved in the IETF and the IRTF since the mid-1990s. I had a somewhat similar experience to that of Lars when starting, I was doing research. In my case, it was multicast video conferencing rather than TCP. And we brought some of the ideas into the IETF community to try and get them standardized. And they got a surprisingly, for me at the time, a positive welcome. I thought this would be a big and scary thing to do, but it turned out to be surprisingly straightforward. And the ideas got some take-up and I got sucked into the process and have been involved ever since. Since then, I’ve continued to work on transport protocols, both in the research community and in the IETF. I’ve chaired a number of IETF working groups. And for the last five years or so, I’ve been sharing the IRTF, coordinating between the research and the standards communities and looking into the dynamics of the IETF standards process. Thank you.

Jane Coffin:
Excellent. Maria, over to you. How have you engaged with the IETF in the past and what’s your current work area of focus?

Mirja Kühlewind:
So I already talk a lot, so I will try to be very brief and we have enough time later for questions. My story is actually very similar. I also started as a PhD student, even working on TCP. And the one thing I wanna mention is that my first meeting was very overwhelming. There’s so much things going on and there’s a lot of things that you don’t understand from the first minute, but that would be a wrong expectation. But the one thing that I felt at the very first meeting I went to and whenever I engaged with the IETF was that there’s a lot of energy, there’s a lot of things happening and there’s a lot of smart people, there’s a lot of those people still there who kind of invented the internet or has been working on the very early protocols who have a lot of expertise, but also knowledge about the history and this whole spirit of having a network to openly connect people and to exchange freely information that’s still there. It was still there when I started and it is still there. And that is also for me the motivation why I keep engaging with the IETF beyond just my technical work, also taking over leadership positions and why I’m sharing the IAB at the moment. Just to give you a little bit more background, I’m an engineer, I’m working for Ericsson. I’m still working on transport, not TCP anymore, but like a new fancy transport protocol, so we’re actually doing some work there. And that’s a large portion of my time. And then the other portion of my time is sharing the IAB. And again, that is not driving my company’s interest forward, but it’s driving the internet and the IETF forward and I think this is really important.

Jane Coffin:
Thank you. Thank you, Miriam, that’s perfect. And Mallory, over to you.

Mallory Knodel:
Yeah, last one to go to introduce. Yeah, I’m Mallory Nodal. I work at the Center for Democracy and Technology. It’s my day job and I’m, as Miriam mentioned, I’m on the Internet Architecture Board with Dhruv and Suresh. Also, I chair a research group on human rights. So I guess a few other roles. I’ve chaired a working group with Suresh a little bit and I’m a reviewer for the general area. That’s another thing that folks can do at the IETF is you can review other people’s work. You don’t always have to be writing it. Something that I find really valuable and helps me follow what’s going on. My first meeting was almost 10 years ago, surprisingly. It was when I was working at the Association for Progressive Communications and I found it a really interesting place for two reasons. The first was that we at APC at the time were really fostering implementation of technology but not the way big tech does it. We were really tiny, sometimes independent, sometimes NGOs that were either running community networks or web or email hosting. And so I found it really interesting to try to infuse those views and those experiences into the larger standards bodies because I think it is often perceived as dominated by big tech and the problem space is just so wildly different. So I found that interesting and useful. And then the second thing that I found really fascinating and impactful about it and why I’ve really stuck around all these years is for a while at APC and then in previous positions, I was a trainer for digital security for like journalists and activists and people who are really at risk in authoritarian regimes and during protests and things like that. And it’s so hard to teach some of those concepts back then like PGP encrypted email, you could spend three days trying to teach journalists how to use it and they still would not always get it right. And you were worried about their security and whether they understood their threat model and things like that. And you still only trained like a handful of people after a week. And at the IETF, what I thought was really interesting is you could actually maybe start try to change the way the internet works for everyone so that you would have a lot more impact and keep keeping people safe online and keeping them connected because the internet itself would change and meet those needs of the most at risk people. And so anyway, that’s what I find really fascinating about working at the IETF level and yeah, looking for the discussion.

Jane Coffin:
Excellent. There’s a question I’ll ask Lars now and Lars, could you let us know what some of the hot topics are in the IETF that participants might want to learn more about? If you could take two to three minutes on that one so that we can get a couple other questions in and then have open it up for Q&A. Thank you.

Lars Eggert:
Yeah, sure. So there are obviously a lot of things happening at the IETF. We have around 120 different working groups that work on different areas of the internet space and all of them are doing something or most of them are doing something but I wanna like maybe hit on a few points. So one of the current themes that have been happening ever since the Snowden revelations over 10 years ago is that the IETF is really serious about strengthening the security of the internet and the privacy protections that users have. And we’ve done a lot of work there. One of the core protocols in that space is TLS, the transport layer security protocol. And we’ve recently, I think two years ago or something like that published version 1.3 of TLS which has significantly simplified the protocol and has also added to the security and privacy protections that are offered to users. And that is widely deployed now all major browsers, all browsers really implemented all servers and CDNs implemented and TLS 1.3 really has upped the game for online security. So that’s a thing we recently did. TLS 1.3 is also part of the QUIC protocol that you might’ve heard about which is another thing that the IETF has recently shipped. QUIC is not quite replacing TCP but at least providing similar features in terms of data transport for the new version of the HTTP web protocol which is HTTP version 3. And that is also a massive effort that I think by some counts QUIC with HTTP 3 and TLS 1.3 is already close to half of all web traffic within just a year or two or three after initial deployment. So that illustrates that work in the IETF sometimes takes a long time because it’s complicated and we need to get it all correct. Because remember maintaining the plane while it’s flying. So we don’t wanna crash it. But once something is ready and if it solves a need it can get global deployment very, very quickly. And so the internet is dramatically changing because of things the IETF is doing every day. And sometimes the entire model of the internet traffic is changing from within a few months from mostly HTTP 2 with TCP and TLS 1.2 to now QUIC and TLS 1.3. So that is sort of demonstrates the power that the IETF really has in terms of driving change in the internet. I wanna maybe mention one last topic which isn’t quite in part of this sort of core set of internet protocols but it’s very important. So there’s a way in which the IETF starts new work when we don’t have a working group that already fits that proposal which is called a birds of a feather session. had one, the acronym for it is DALT. I must admit, I forget what the expansion is. But never mind. The problem DALT is trying to solve is all of us now have like AirTags and various other Bluetooth trackers, location trackers in our luggage, or our backpacks, or a keychain, or a car, or somewhere else. And stalking through these devices is a huge problem. So AirTags and other devices like that work great, except when somebody slips you one of these things into your purse, or into your car, or somewhere else. And then they can track you. And obviously, that is a very real threat model. We know where people’s personal privacy and bodily harm is at stake. And DALT is an example where the big vendors of these devices have tried to come together and have looked for a forum for where they can all standardize on how can your Google Android phone alert you if someone has slipped you an Apple AirTag? Because there are two different ecosystems in terms of devices. But they need to cooperate around the standard for making sure that your phone, your Android phone alerts you if somebody has slipped an AirTag into your purse, although it’s an Apple device. And the security modeling and the solution space is very complicated. But it’s very, very, very important, given the vast amount of tags that are out there. And this is obviously just the beginning, because the more tags are out, the better the tracking works. And that enables new uses for yet more tags. This is a new work. So there’s not an ITF standard on it yet. It’s not even a working group yet, although it’s very likely that by the meeting in Prague in two or three weeks, we’ll likely start a working group. But it demonstrates that the ITF is sort of a natural home for some of these technical areas that are adjacent to the overall internet. Because these tracking networks become enabled because the internet exists. And so organizations that look for a home that has open participation, where it’s free to use the standards. Because we want everybody to be able to integrate this into their tracking networks. So we don’t want to have a solution that requires somebody to pay revenues or pay membership fees so they can participate in the setting of the standard or deploying it. And so they have chosen the ITF as a home, because we have clear rules about how we do our work. Everybody understands them. Everybody understands you just participate individually. There’s no membership fees. There’s no restrictions on the use of the outcome technology. And we’re hoping that that will get deployment very widely as well once the technical work is done.

Jane Coffin:
Thank you very much, Lars. We’re going to open it up for some Q&A in the room and online. Does anyone have any questions for any of our panelists about the IETF or the IRTF? And Dhruv is our online moderator. And Dhruv, I don’t see anything online. Do you? I might have missed something.

Dhruv Dhody:
No, none so far.

Jane Coffin:
OK, we have a question here in the room. Please, and if you could keep the question short so that we can give you a short answer.

Dhruv Dhody:
Sure.

Audience:
Yeah, hopefully this is on. Andrew Campling, I run a public policy, public affairs consultancy. And I’m an ITF enthusiast. We sort of touched on but haven’t really expanded on diversity in standards bodies. When we consider diversity on whatever axis you like, whether it’s geographic, ethnicity, age, or gender, it’s a big problem. So for example, the IETF is about 10% female participation to give one example. It’s not a multi-stakeholder process. So there’s very limited involvement of CSOs and those that do engage with relatively narrow perspective or represent relatively narrow points of view. Governments and their agencies are largely not involved. And equally, end users are not present. And as I think Andrew mentioned, the tech sector representation is pretty narrow. So we don’t have many network operators, for example. If we accept that diversity improves the culture of an organization and the quality of its output, what are the unintentional barriers to both entry and to ongoing participation that affect that diversity? And how can we fix them so that we get much better diversity of thought and therefore better standards?

Andrew Alston:
So Andrew, I’m going to ask you to answer that question. And could you do that in about a minute to two minutes? Yeah, sure. Andrew, the diversity question is always an interesting one for me. And we’ve had some quite extensive debates about this at the IETF. I think you’ve got to look at it as, what does diversity mean in the context of the IETF? Because I think that it goes so much deeper than when you start looking at what I would consider your standard diversity metrics of gender, race, et cetera. It comes down to, what is the diversity of technical thought and bringing that into the IETF? For example, I think that sitting here in Africa, I bring an African perspective, which is diverse. And I think that to say that the IETF is also not a multi-stakeholder engagement model, I think that that’s actually fundamentally inaccurate. Because there is a lot of multi-stakeholder engagement. You have the operators. You have the vendors. And the participation is open for anybody to come and participate. Be you an operator, be you a vendor, be you a government. I know that I’ve done a lot of presentations to various government entities saying, we need more involvement from Africa. It’s about encouraging people to come. But I would definitely say that the IETF is a multi-stakeholder organization. And we welcome that participation and actively encourage it. But I think on the diversity question, as I said, it comes down to, how do you define the diversity? And for me, that diversity is about bringing cultures. It’s about bringing different perspectives, different views from different segments of the industry, et cetera. And in that sense, I actually think the IETF is, it has a lot of diversity in that sense. We could do better. But I do think that it is there. So I hope that helps.

Jane Coffin:
Thank you, Andrew. We’re going to turn it over to Miria. But I would also want to just put out there, for those of you that may not know this, there is a policymakers group where ISOC funds that. And you’ve brought people from all over the world, from the governmental sector, from parliamentarians and others. It’s a quiet group. They meet with different people from the IETF and the IAB and the IRTF. And I would just say that there’s probably more going on on a multi-stakeholder level than some people would know. Miria, over to you.

Mirja Kühlewind:
Yes, I would like to add quickly a few points. And one point is that we totally understand actually engaging actively in the IETF needs resources, right? And the IETF also depends on these resources. We don’t have staff members who are working on the standards. Actually, the participants that voluntarily come to the IETF and do all the work. So if you want to engage on that level, that is a big commitment. And we totally understand that not even in the private sector everybody can afford that. But on the other hand, it’s important to have a certain diversity in order to ensure quality of our standards and then make sure everybody, even those people who didn’t have the resources to participate in the creation of the standards can freely use the standards and can engage if they want to enhance the standards. And this is a really important point where it’s not only about bringing people and taking the pen up, but reaching out and making sure people are aware about what we’re doing. And that’s something we try to do a lot more, including with policy stakeholders where we try to reach out and have a dialogue and try to explain what we’re doing, how it works, where touch points are, also bridging this information back into the IETF. So there are challenges in active participation. But I think to have a dialogue and to understand the requirements, we also need other ways to do that.

Jane Coffin:
Thank you, Mirja. And thank you for your question. It was an important question. Someone else in the room who has a question, please. OK, well, Dinesh, please.

Audience:
My name is Dinesh. I’m from Bangalore, India and working in a rural area. So my question is a little bit of segue, but I am starting with Maria. Maria, right? No, no, her. OK. Yeah, Mallory. I’m sorry, I’m sorry. So what you said, like you’re coming from APC background and then you’re coming from web background and then you’re doing IETF standards. My question for the whole conference has been in almost everywhere I’ve been, why is not anybody working on web standards, extending it to the communities out there? Is the internet done? You know, when it comes to web protocols, web standards, and all that, we need to push it.

Mallory Knodel:
That’s what I’m kind of trying to say. Absolutely. Well, the World Wide Web Consortium is largely responsible for the web standards, right? W3C. It would be great to have the W3C at the IGF more, I think. As someone who also engages there, my organization is really invested in the web standards, a variety of different standards that aren’t just about the web and all the other ways that it faces the users more. I think, yeah, it would be great to have them more. So it’s not necessarily a question for the IETF. There is some degree of overlap, right, between what happens there and we also have an established, IETF has an established liaison, or no, do we have an established liaison? We do, we have an established liaison relationship with the W3C. That’s what I was worried about. We had a recent conversation in which that has confused me. But no, so that’s important and that happens already. But in fact, yeah, it would be great to, in the IGF, which I think is mostly seen as a policy space, to actually have this bridging role where the technical community comes as well. So while this is our first, the IETF’s first time doing an open forum, we know ICANN has done one for a while, we know ISOC has been involved, maybe we can convince W3C to come next time as well.

Colin Perkins:
And just to follow up very briefly on that, Lars mentioned HTTP3, which is the next version of the web transport protocol, which was developed in the IETF very recently. And in collaboration with the W3C, we also did the WebRTC protocols for video conferencing, which I think we’ve all put a lot of effort into over the last few years with the pandemic. Yeah, as a follow-up on that, John, from the Yale Foundation, the main difference would be the membership system that they have in W3C. So participation is quite more difficult unless you can actually afford the fees. So there has to be a more reflection on how to proceed with that, thanks.

Jane Coffin:
Thank you, over to you.

Audience:
Thank you, I actually have a comment, not a question, but my name is Danko Jevtovic from ICANN board, and I would first like to forward the best from our chair Tripti Sinha, but she’s an bilateral, so couldn’t be here, and thank the AIB for sending Harald to us. I think he says always his role is to protect us from breaking the internet, so he’s very good at that, and thank you. But most of all, thank you for the standards you’re making and your work on privacy. This is the key underpinning of the technical layer that we are all working on together. I still remember my reading of the first RFC that was SMTP protocol when we were trying to connect in Serbia BBS system to internet to exchange our emails and writing code to read it. And for me personally, it was a shock how easy it was to read that document and how enabling that was to help in the internet. So ICANN obviously supports IETF and your specific multi-stakeholder model, and we often looking at the whole ecosystem of standard organizations. Sometimes there is discussion also about standards that are developed inside different organizations, but for me, it’s the clear that IETF is the key for the technical layer of the internet, and the strength is the openness and free standards that are based on IP and voluntarily accepted. So this is kind of the reason for the win of open networks against closed systems. And we often hear even in the IGF and other fora about ideas for changing the basic protocols in the kind of old ITU style way of thinking. But I think given this tremendous success of the internet, it’s clear that the way how IETF is doing is the way forward, and we are grateful for that. Keep on doing it, and we support you, thank you.

Jane Coffin:
Thank you, and thank you very much for what you do on the ICANN board. Is there anyone else? Okay, lovely, we’ve got about two more questions in the, we’ve got five minutes. So if you have a minute to ask it, we’ll get a minute to answer it and see what we can do.

Audience:
This is Ignacio Castro from Queen Mary University of London, and I chair a research group at the IETF. I’ve heard quite a few times here that certain groups are not represented in the technical community, but to be frank, I have been quite surprised to see how little representation of the technical community is in the IGF, and I’m wondering what would be the way to bridge that gap, because it looks like both communities are seeing exactly the same problem on the other side.

Jane Coffin:
I’ll just quickly say, I couldn’t agree with you more. There was more participation in the beginning 10 to 15 years, and it’s been, and this is one of the reasons the IETF, IEB, RERTF is here today and has this great session, and I think you’ll see more in the future.

Mallory Knodel:
No, I just wanted to say that, that I think there’s a recognition of that, and it’s certainly something that everyone I feel like I’ve spoken to today and this week from the technical community, even those that aren’t here, RERs and others, are aware of and think there’s a concerted effort now to shift that back to where it used to be, like Jane said.

Mirja Kühlewind:
I do want to acknowledge the point you make about challenges, because this forum is also very broad. I found it very interesting to be here. I learned a lot, also, just for me personally, caring about the internet as a citizen, but identifying the parts of the discussion where we can provide valuable input is challenging for us.

Jane Coffin:
And that’s a really good point, Mirja. We may want to see if we can talk to the MAG or make more input on a technical track, right? Which, because we used to come and do internet exchange points, BGP, a little bit of IP addressing, so maybe we bring that back. We have room for one more question if you ask it quickly. I don’t see any, oh, Bravo, go ahead.

Audience:
My name is Makoto Nakamura from the local government of Nara City, Japan. Now I fight against legacy technology or legacy people, and today’s government system in Japan often use FTP, still use FTP, or old protocols. Would you have any idea to quit the old protocol or legacy protocol into the trash? I know that the RFC backed obsolete, I know, but it’s a replacement of the new protocol or almost all of six cases. So how would you step up or move to new technology and push from the IDF? This is my first questions.

Mirja Kühlewind:
So, I mean, gladly, even so, these old protocols are still there, the internet doesn’t break, so that’s part of the architecture, and that’s the good news. I think a lot of these protocols, like there has been a lot of focus on security, for example, and sometimes that’s, unfortunately, a harder selling point than performance. If you have a protocol which gives you direct benefits that shows your investment on a short-term payback, then it’s easy to convince people. If you have to update your new protocol and you have to invest money, manpower, knowledge, and you don’t get a direct payback, that’s a challenge, but I think we need to just go and explain the importance of updating these protocols and the impact on the long-term to keep the internet healthy and to protect your own services you’re providing by getting a more reliable, more secure, and a better network. So I think it’s an education task for us all, and I understand the challenge.

Jane Coffin:
There’s also, just another shout-out to ISOC, I used to work there, I don’t work there now, just a disclaimer, but there’s a really strong Japanese chapter, and they may be able to do some workshops with you. They’re really great. I think we’re at time, and I think it’s time to just say thank you to everybody for participating, for everyone in the room, everyone online, thank you, Drew, for being the online moderator, and everyone else here, so give yourselves a round of applause, thank you very much, and stay in touch with the IETF, IAB.

Audience:
Thank you, Jane, that was crazy. I was thinking, how about we get a two-hour session into 40 minutes? I feel like it’s time to get into a recession. I don’t think we have time for a recession. No, for the open forum, there was only one hour with the only options. Yeah, maybe. Yeah, probably, yeah. Maybe, yeah. I think it was good, but it was scary. It’d be nice if we could. Virtual on-site, working really fast. It’s really good. It’s really small. It’s regular. It’s good to go to the on-site.

Andrew Alston

Speech speed

154 words per minute

Speech length

797 words

Speech time

311 secs

Audience

Speech speed

163 words per minute

Speech length

1022 words

Speech time

377 secs

Colin Perkins

Speech speed

192 words per minute

Speech length

413 words

Speech time

129 secs

Dhruv Dhody

Speech speed

198 words per minute

Speech length

472 words

Speech time

143 secs

Jane Coffin

Speech speed

237 words per minute

Speech length

859 words

Speech time

218 secs

Lars Eggert

Speech speed

186 words per minute

Speech length

2207 words

Speech time

713 secs

Mallory Knodel

Speech speed

197 words per minute

Speech length

853 words

Speech time

260 secs

Mirja Kühlewind

Speech speed

192 words per minute

Speech length

3964 words

Speech time

1242 secs

Suresh Krishnan

Speech speed

210 words per minute

Speech length

462 words

Speech time

132 secs