DC-OER The Transformative Role of OER in Digital Inclusion | IGF 2023

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Dudley Stephen Wyber

Libraries and librarians have a significant role to play in the realm of Open Educational Resources (OER). They serve as catalysts for the discovery, awareness, and curation of OER while helping to overcome biased views about their value. Libraries actively update their roles by connecting individuals who need knowledge with available resources, thus raising awareness of the potential benefits that OER can offer.

Librarians, in particular, contribute to the curation of OER by evaluating these resources in line with the needs of faculty and other stakeholders. They bridge the gap between various resources and users, identifying any gaps or deficiencies in the existing OER portfolio. Librarians assist in ensuring that faculty and stakeholders have access to a comprehensive collection of OER.

It is important to note that the OER landscape is currently dominated by a few regions of the world. This geographic imbalance highlights the need for greater collaboration and dissemination of OER from a global and inclusive perspective. Librarians can empower stakeholders to create and share their own OER, contributing to a more diverse and inclusive OER ecosystem.

Librarians’ involvement extends beyond curation and dissemination. They provide guidance on usage rights and assist stakeholders in navigating complex legal frameworks surrounding copyright. Librarians can advocate for better regulatory frameworks that include robust educational exceptions in copyright laws, ensuring that OER are not only accessible but also legally protected and supported.

Dudley Stephen Wyber emphasizes the importance of adopting a recurring circular learning approach in education. This model advocates for active learning and participation, encouraging individuals to learn, explore, contribute, and continuously improve. Wyber also underscores the active involvement of teaching professionals and librarians in facilitating the use of online resources. According to Wyber, simply making educational content available online is insufficient; active facilitation and support are necessary to foster uptake and utilization.

Librarians should feel confident and responsible for guiding faculty and students to make the most of OER. By providing support and assistance, librarians enhance the educational experience and help individuals maximize the benefits offered by OER.

Additionally, there is a suggestion to apply the interoperability logic used to achieve compatibility between Open Access (OA) repositories to OER repositories. The work done by organizations such as COAR in Canada serves as a reference in this regard. Interoperability between repositories would enable seamless sharing and integration of OER, contributing to the growth and effectiveness of the OER ecosystem.
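
The repository interoperability described above is commonly built on harvesting protocols such as OAI-PMH, which many Open Access repositories already expose. As a purely illustrative sketch (the identifiers and record below are invented, not taken from any real repository), the following parses titles out of a simplified OAI-PMH ListRecords response:

```python
# Illustrative sketch: extracting metadata from an OAI-PMH response, the
# harvesting protocol that underpins much Open Access repository
# interoperability. The sample record is hypothetical.
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def parse_records(xml_text):
    """Extract (identifier, title) pairs from an OAI-PMH ListRecords response."""
    records = []
    for record in ET.fromstring(xml_text).iter(OAI + "record"):
        identifier = record.find(OAI + "header").findtext(OAI + "identifier")
        title = next((t.text for t in record.iter(DC + "title")), None)
        records.append((identifier, title))
    return records

SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:example.org:oer-101</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Introductory Statistics (open textbook)</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

print(parse_records(SAMPLE))
# → [('oai:example.org:oer-101', 'Introductory Statistics (open textbook)')]
```

Because OER repositories could expose the same endpoints, an aggregator could harvest them exactly as OA aggregators do today, which is the core of the interoperability argument.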

Finally, it is essential to strive for equity and parity between OER and Open Access. OER should be brought to the same level of recognition and value as Open Access, creating a system where both types of resources are equally supported and encouraged. This would foster a more open and inclusive education system, benefiting learners and educators worldwide.

In conclusion, libraries and librarians play a multifaceted role in the realm of OER. They contribute through the discovery, awareness, and curation of OER, bridging the gaps between available resources and users. Additionally, librarians guide stakeholders on usage rights, support them in creating their own OER, and advocate for favorable legislative and regulatory frameworks. Their involvement, combined with the adoption of recurring circular learning approaches and the pursuit of interoperability and equity, is vital in realizing the full potential of OER in facilitating quality education for all.

Tawfik Jelassi

Open Educational Resources (OER) play a pivotal role in increasing access to quality education worldwide. In 2019, UNESCO adopted the recommendation on OER, a UN normative instrument to support inclusive access to digital learning platforms. This highlights the significance and recognition of OER in the educational landscape.

The recommendation by UNESCO advocates for the use of openly licensed digital education tools that can be accessed through the Internet. By embracing OER, educational institutions and learners can benefit from a wide range of freely available, adaptable, and shareable educational materials. This promotes inclusivity and equal opportunities for learners globally.

UNESCO’s emphasis on OER aligns with the Sustainable Development Goals (SDGs), particularly SDG 4: Quality Education. OER contributes to the achievement of multiple SDGs, including quality education, access to information and ICT, gender equality, and global partnerships. The adoption and implementation of OER can help bridge educational gaps, address gender disparities, and foster collaboration among nations.

Moreover, OER is part of the broader concept of digital public goods. These digital resources, including OER, drive sustainable models of education, knowledge sharing, and innovation. The 2019 OER recommendation highlights the importance of international collaboration for content, capacity, and infrastructure development, aligning with the Global Digital Compact principles. These principles promote an inclusive, open, secure, and shared Internet, enabling widespread access to knowledge and educational resources.

In addition to the global significance of OER, there is a recognition that the internet should be used as a force for good. UNESCO envisions a digital ecosystem where the internet serves as a powerful tool for learning, advancing human rights, and sustainable development. The internet has the potential to facilitate access to information, promote freedom of expression, and provide opportunities for lifelong learning.

To guide the responsible and inclusive development and use of the internet, UNESCO established the OER Dynamic Coalition, which brings together stakeholders from various sectors to build shared values and principles. The coalition aims to ensure that the internet is harnessed as a tool for education while also promoting peace, justice, strong institutions, and partnerships.

In conclusion, the adoption and promotion of Open Educational Resources are vital for enhancing access to quality education worldwide. The UNESCO recommendation on OER highlights the importance of openly licensed digital education tools accessible through the Internet. By embracing OER, stakeholders can contribute to the achievement of the SDGs, drive sustainable models of education and innovation, and utilize the internet as a powerful tool for learning while advancing human rights and sustainable development. The establishment of the OER Dynamic Coalition further showcases the commitment to shaping the future of education inclusively and responsibly.

Audience

During the discussion, the speakers exhibited curiosity and a desire to understand the best practices related to decentralised repositories and open technologies. The conversation extensively explored various aspects of the implementation and functioning of these concepts.

Both speakers maintained a neutral stance throughout the discussion, refraining from taking a definitive position. However, they did not provide any specific supporting facts or evidence, leaving the conversation open-ended.

The Sustainable Development Goals and their connection to decentralised repositories and open technologies were not mentioned during the dialogue. This suggests that the primary focus of the conversation was to explore the concepts themselves rather than their potential impact on sustainable development.

The main takeaway from the discussion was the speakers’ curiosity about best practices in decentralised repositories and open technologies. The absence of supporting evidence or detailed arguments suggests that this was an introductory exploration or a starting point for further research; no additional noteworthy observations or insights were identified.

Overall, the conversation revolved around the speakers’ neutral interest in decentralised repositories and open technologies, without delving into specific examples, cases, or implications.

Neil Butcher

The analysis examines various arguments and stances regarding education policies and their impact on sustainability, intellectual property, digital accessibility, procurement processes, and the quality of teaching materials. These arguments provide insights into the importance of effective policy implementation and its influence on achieving sustainable development goals.

A key point highlighted is the need for policies to enable government agencies to use open licences. Without such provisions, it is unlikely that open licences will be effectively utilised. Another crucial aspect is the inclusion of accessibility considerations in procurement processes. The analysis argues that accessibility should not be overlooked during contract execution, as it may compromise the educational experience for individuals with disabilities.

The quality of accessible teaching and learning materials is also a prominent focus. The analysis suggests that an excessive emphasis on quantity and accessibility could overlook the importance of quality. Instead, curated collections of resources that promote high-quality teaching and learning experiences are proposed.

The government’s responsibility in ensuring accessible and supportive public education systems for all is emphasized. The analysis states that the government plays a crucial role in providing accessible and supportive education, regardless of individuals’ backgrounds or abilities. Additionally, the monetization of the education space by the private sector is critiqued, with an argument for prioritising the quality of teaching and learning experiences over financial gains.

Investment strategies in education are highlighted as a means to prioritize the quality of teaching and learning experiences for everyone. Adequate investment in education is seen as essential in providing a conducive learning environment and promoting positive outcomes for all learners.

Open Educational Resources (OER) are also scrutinized, with a warning against compromising the quality of learning experiences while expanding access. If OER does not ensure high-quality learning experiences, it may be detrimental to education.

Furthermore, the analysis emphasizes the importance of community representation in improvement processes within education. Representatives from the target communities of learners should lead improvement efforts, ensuring that the education system meets their specific needs and addresses inequalities.

In conclusion, the analysis presents various perspectives on education policies and their implications for sustainability, intellectual property, digital accessibility, procurement processes, and the quality of teaching materials. Key takeaways include the importance of effective policy implementation, the need for open licences and accessibility considerations, the role of the government in providing accessible public education, critiquing the monetization by the private sector, the significance of investment strategies for quality education, the impact of OER on learning experiences, and the importance of community representation in improvement processes within education.

Tel Amiel

Open Educational Resources (OER) projects require sustainable funding to ensure their development and continued existence. This funding can be obtained through partnerships and donations from foundations. However, the success of sustainable funding models, such as open procurement, may vary in different contexts.

The practices surrounding OER and community engagement are essential factors for their success. Without active community involvement, the implementation of OER loses its meaning. It is crucial to foster collaboration and engagement within the educational community to maximize the benefits of OER.

Policies alone are insufficient to guarantee the effective implementation of OER initiatives. They need to be actively monitored by a diverse set of stakeholders. Involving various individuals and organizations from different sectors ensures that the implementation remains aligned with the goals and objectives of OER. Additionally, OER should be seen as an evolving concept that requires ongoing monitoring and adaptation to meet changing educational needs.

OER possesses unique qualities that make it a real public good, particularly in multi-stakeholder processes. Its adaptability, remixability, and reusability enable the inclusion of diverse cultural groups and cater to different educational requirements. Engaging with these resources in a pedagogical context enhances their value as a public good.

The potential of OER is currently understated, especially in interconnected, multilateral contexts. There is a need for further exploration and utilization of OER to maximize their impact. OER’s ability to share, revise, remix, and reuse content makes it a valuable resource that can enhance education on a global scale.

Successful implementation of OER requires the allocation of serious responsibilities and the active involvement of individuals. Without meaningful participation and responsibility, OER initiatives may stagnate and fail to realize their objectives. Therefore, it is crucial to involve people at all stages of the implementation process to ensure the effective utilization of OER.

In conclusion, sustainable funding is crucial for the success of OER initiatives, and partnerships and donations from foundations can provide the necessary financial support. Open procurement models are advocated by governments for sustainable funding, but their effectiveness may vary depending on the context. Community engagement, active monitoring by stakeholders, and recognizing the unique qualities of OER as a public good are vital for their successful implementation. Further exploration and utilization of OER are needed, especially in interconnected, multilateral contexts. Meaningful implementation of OER requires the involvement and allocation of responsibilities to individuals. Without active participation, OER risks becoming stagnant legislation with limited progress.

Moderator – Michel Kenmoe

Various stakeholders engaged in discussions about the importance of Open Educational Resources (OER) and the challenges associated with their adoption. It was universally agreed that raising awareness among decision makers is crucial for OER adoption. Decision makers play a significant role in implementing and supporting OER initiatives. Developing OER strategies helps raise awareness and garner support from stakeholders.

The involvement of middle to top-level management was seen as vital for the successful implementation of OER. Without their support, gaining buy-in and implementing the recommendations for OER adoption would be difficult. This highlights the importance of securing support from influential individuals within educational institutions and policymaking bodies.

One major challenge in realizing OER strategies is concerns over funding. Governments are particularly concerned about finding adequate resources to support OER implementation. One suggested solution is for governments to ensure that part of the budget for OER production is supported by donors. This approach would alleviate the financial burden on governments and facilitate the production of open educational resources.

Designing OER strategies requires a collective effort involving multiple stakeholders. It was observed that five countries successfully developed their OER strategies through such collective efforts. This highlights the importance of engaging all relevant stakeholders, including educators, policymakers, and educational institutions, in developing and implementing OER strategies.

An important observation from the discussions is that many West African countries lack a dedicated budget for educational resource production. This poses a significant challenge to implementing OER strategies. The absence of a budget specifically allocated to educational resource production hinders the development and dissemination of OER. Therefore, it is imperative to raise awareness about the importance of investing in educational resource production and secure adequate funding to support OER initiatives.

In conclusion, the discussions on OER emphasized the need for raising awareness among decision makers, securing buy-in from middle and top-level management, addressing funding concerns, fostering collective efforts involving multiple stakeholders, and promoting investment in educational resource production. These insights are crucial for the successful adoption and implementation of OER, contributing to the goal of quality education (SDG 4) and partnerships for sustainable development (SDG 17).

Patrick Paul Walsh

The stakeholders involved in the discussion, including government, academia, the private sector, and intergovernmental systems, agree that engagement is crucial for a comprehensive partnership. They recognize the need to work with UNESCO, SDSN, and a joint committee to implement the UNESCO OER Recommendation. Additionally, there is a partnership agreement in place to manage an open education resource overlay platform, repository, or journal.

To ensure the quality of submitted courses, a rigorous quality assurance process has been established. Courses are evaluated not only for their academic and scientific content but also for compliance with UN policies and legal frameworks. The objective is to provide a community of practice with guidelines and playbooks on ensuring quality in submitted courses.

Various educational technologies are being used to manage and organize the courses. This includes open journal systems, copyright licensing management, and other tech tools. The effective utilization of these technologies is considered essential for managing the courses.
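
The "copyright licensing management" mentioned above typically encodes rules such as Creative Commons remix compatibility. The sketch below is a rough, simplified approximation of the CC compatibility chart, not a legal tool; the function name and exact rule set are illustrative assumptions:

```python
# Illustrative sketch only: a simplified approximation of Creative Commons
# remix compatibility. The official CC compatibility chart has more nuance
# than these three rules; licence names use the standard CC abbreviations.
def can_remix(licence_a: str, licence_b: str) -> bool:
    """Can works under these two CC licences be combined into one remix?"""
    a, b = set(licence_a.split("-")), set(licence_b.split("-"))
    if "ND" in a or "ND" in b:
        return False   # NoDerivatives forbids remixing entirely
    if "SA" in a and "SA" in b:
        return a == b  # two ShareAlike works must carry the same licence
    if "SA" in a or "SA" in b:
        sa, other = (a, b) if "SA" in a else (b, a)
        return other <= sa  # the other work's conditions must fit the SA licence
    return True        # remaining combinations remix under the stricter terms

print(can_remix("CC-BY", "CC-BY-NC"))     # True
print(can_remix("CC-BY-SA", "CC-BY-NC"))  # False
print(can_remix("CC-BY", "CC-BY-ND"))     # False
```

A check along these lines is the kind of safeguard an open journal system might run before accepting a remixed course into a repository.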

Community engagement is emphasized as a crucial aspect of the project. Collaborating with various user groups such as governments, corporates, academics, and schools is necessary to develop the required metadata and effectively manage the archives. This collaboration is referred to as “diamond engagement” and is seen as essential for the system to work effectively.

The freedom to create and contribute to a global knowledge commons is a fundamental principle. The open education resource recommendation supports the creation and contribution of educational content to the global knowledge commons. The content should be easily accessible, and everyone should have the opportunity to contribute freely.

The project also places importance on accessibility and inclusivity. Materials, including slides and videos, should be made accessible to all, including those with visual impairments. Ensuring compliance with disability regulations and providing equal access for everyone is considered crucial.

The decentralization and adaptability of open education resources to local contexts are promoted. It is essential to make sure that the resources can be repurposed and translated to suit specific local contexts. This flexibility ensures that the resources remain relevant and applicable in different regions.

There is a concern about commercial entities controlling the archiving of academic work. The argument is that academic works should not be owned by private entities, and that hosting and archiving should be done by libraries rather than commercial entities.

Decentralized repositories are seen as beneficial as they allow for easy updates of courses. This enables courses to be updated locally and reuploaded to the system, ensuring that the content remains up-to-date and relevant.

Behavioral issues and the psychology of implementing digital infrastructure are important factors to consider. Jeffrey Sachs has highlighted the reality of sunk costs in initiating such projects, and the marginal costs of implementing digital infrastructure are relatively low. There is also the potential to add commercial value to the project, which could eventually generate returns on investment.

Government mistrust in receiving returns on their investments poses a significant challenge. The argument is that governments need to invest now for future returns, but past experiences of not receiving expected returns have eroded their trust.

There is disagreement regarding the commercialization of open education resources. While some reject the idea of commercializing the infrastructure or content, others propose value-added commercialization with profit-sharing arrangements if a private entity gains income from the public resource.

Advocacy exists for public or stakeholder ownership of open education resources. The argument is that open education resources should be either publicly owned or owned by relevant stakeholders to ensure their accessibility and availability to all.

In conclusion, the stakeholders involved in the discussion emphasize the importance of engagement in building a comprehensive partnership. Quality assurance processes have been implemented to ensure compliance with UN policies and legal frameworks. Various educational technologies are being utilized to manage the courses effectively. Community engagement is crucial for developing metadata and managing archives. The discourse on open education resources highlights the freedom to create and contribute to a global knowledge commons, as well as the need for accessibility, decentralization, and public ownership. Behavioral issues and government mistrust pose challenges, but there are also opportunities for commercial value and return on investment. Collaborative efforts and a shared vision are crucial for the successful implementation of open education resources and the promotion of quality education for all.

Melinda Bandaria

In order to create a more inclusive education system, it is crucial for teachers to have an awareness of who is excluded and the reasons behind their exclusion. Some common barriers include the cost of learning materials, physical challenges such as hearing or sight impairment, language barriers, and cultural diversity. By understanding these barriers, teachers can better address the needs of excluded students.

To enable more inclusive teaching and learning, teachers should possess knowledge of accessibility guidelines, universal design for learning, and cultural and linguistic diversity. The Web Content Accessibility Guidelines provide a framework for making online platforms accessible to different types of learners. Integrating the basic principles of universal design for learning into Open Educational Resources (OERs) ensures that they can be accessed by all students. Furthermore, translating OERs into local languages and respecting cultural diversity can enhance inclusivity.

Open Educational Resources (OERs) are a valuable tool in making teaching more inclusive and breaking down barriers. OERs address the cost barrier of learning materials, as they are freely available for use. They can also be modified to integrate features of universal design for learning, tailored to meet the needs of diverse learners. Additionally, translating OERs into local languages ensures that content is accessible to students who face language barriers.

Teachers need to possess the necessary skills and knowledge to make OERs more accessible and inclusive. Training programs for teachers should include training in cultural and linguistic diversity, understanding copyright laws and licences associated with OERs, and the ability to convert OERs into alternative formats such as audio, Braille, and simplified text. Making OERs compatible with assistive technology and determining the readability of materials are also important skills for teachers to have.
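
As one small example of "determining the readability of materials", a teacher-facing tool might compute a standard readability score. The sketch below approximates the Flesch Reading Ease formula using a crude vowel-group syllable counter; it is illustrative only, not a substitute for the full range of readability measures teachers might use:

```python
# Illustrative sketch: a rough Flesch Reading Ease estimate, one of several
# readability measures a teacher might apply when adapting an OER. The
# syllable counter is a crude vowel-group heuristic, not linguistic analysis.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowels (crude but serviceable).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Higher scores mean easier text; a simple sentence outscores a dense one.
print(flesch_reading_ease("The cat sat on the mat.") >
      flesch_reading_ease("Comprehensive pedagogical methodologies necessitate "
                          "multidimensional institutional evaluation."))
# → True
```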

Teacher training should not stop at developing OER materials; it should also cover the wider knowledge and skills needed to make OERs accessible and inclusive for all learners, which requires ongoing learning and continuous professional development.

To ensure the quality of OERs, a quality assurance framework is important. This framework enables the evaluation of the OERs that teachers use, ensuring that they meet certain standards of quality. It serves as a guide for teachers in selecting and utilising high-quality OERs that enhance inclusivity in education.

Both teachers and universities have a role to play in ensuring the quality of OERs. Teachers are crucial in creating and sharing OERs, while universities can support them in this process. OERs are often reused, remixed, translated into local languages, and shared by teachers and universities, making collaborative efforts essential in enhancing the quality and inclusivity of OERs.

Policies should be implemented to promote the development and use of OERs. Institutional policies can actively encourage the use of OERs, creating a supportive environment for teachers. Moreover, it is beneficial to use public funds to produce OERs and make them open access, ensuring that cost is not a barrier to their availability.

Incentive systems for faculty members are also important in promoting the use and creation of OERs. Especially for universities, providing incentives to teachers and faculty members who utilize and create open educational resources helps foster a culture of innovation and inclusivity in education.

In conclusion, creating a truly inclusive education system requires teachers to have an understanding of barriers and exclusion, as well as the necessary skills and knowledge to make learning materials accessible and inclusive. Open Educational Resources (OERs) serve as a powerful tool in overcoming barriers and promoting inclusivity. By implementing policies and providing support, both teachers and universities can play a vital role in ensuring the quality and accessibility of OERs. With ongoing training and incentives for faculty members, education can become more inclusive for all learners.

Zeynep Varoglu

The OER (Open Educational Resources) Recommendation 2019 was unanimously adopted by all member states, providing a clear definition of OER and focusing on capacity building, policy implementation, quality assurance, inclusive multilingual OER, sustainability, and international cooperation. Zeynep Varoglu played a significant role in presenting and supporting the OER Recommendation 2019.

Open procurement models have become popular for developing and sustaining OER projects, although their effectiveness can vary depending on the country or context. While open procurement is seen as a transition to a more sustainable OER model, its implementation may face challenges in certain countries.

Multi-stakeholder working groups play a crucial role in monitoring policies and ensuring the success of OER initiatives. These groups can adapt to changes in OER through collaboration and representation of perspectives from all stakeholders.

Community engagement is identified as critical for the relevance and success of OER initiatives. Incentives and recognition are important for motivating individuals at all levels to actively participate in advancing OER goals.

The OER Dynamic Coalition event at the Internet Governance Forum (IGF) is a vital platform for knowledge sharing and collaboration among stakeholders. With around 500 participants from government, institutions, and civil society, it focuses on implementing the OER Recommendation.

The importance of openness in education and knowledge sharing was emphasized during the event, with Zeynep Varoglu actively advocating for it.

In conclusion, the OER Recommendation 2019 provides a comprehensive framework for the development, implementation, and sustainability of OER initiatives. Stakeholder involvement, such as Zeynep Varoglu’s support and multi-stakeholder working groups, along with community engagement and platforms like the OER Dynamic Coalition event, contribute to advancing OER goals. Emphasizing openness in education and knowledge sharing is crucial for promoting inclusive and quality education globally.

Lisa Petrides

The Institute for the Study of Knowledge Management in Education, led by Lisa Petrides, focuses on various aspects of Open Educational Resources (OER). It builds OER libraries, provides professional development, and researches the impact of OER, emphasizing the significance of OER repositories as the infrastructure supporting libraries.

The institute promotes the implementation of the CARE framework, which prioritizes good stewardship of OER by emphasizing contribution, attribution, release, and empowerment. It also stresses the importance of understanding the provenance of resources in order to build a transparent knowledge base.

In addition, the institute advocates for the accessibility and inclusivity of OER, viewing educators as experts in their knowledge, promoting decentralization in knowledge distribution, and resisting commercial private partnerships in education. It emphasizes the need to integrate various open areas, such as education resources, pedagogy, data, science, access, and publishing, for better outcomes. Through these efforts, the institute aims to contribute to quality education and drive positive changes in the education system.

Session transcript

Moderator – Michel Kenmoe:
in Senegal. No, no, no. So it’s a pleasure to have you. We hope that our other panelists will be able to join us online and that they can participate in this session. So let me check once again. Do we have Zeynep online? I just rang her. She’s coming. Okay, great. Okay, thank you. So, and Neil? Neil Butcher? Neil Butcher is not yet online. Yeah. I hope, I hope while, while waiting for them to join, why don’t we give us two minutes for each of you to introduce yourself. Let’s say one minute, not two. Okay. This isn’t quite about one minute. Yeah, okay. Yeah. One minute.

Lisa Petrides:
Hello. My name is Lisa Petrides and I run the Institute for the Study of Knowledge Management in Education and we build OER libraries and we do professional development and we do a lot of research around the impact of OER.

Tel Amiel:
My name is Tel Amiel. I’m a professor at the University of Brasilia. I had the UNESCO Chair in Distance Education and we had the Open Education Initiative, which is an activist research group for open education. Thank you. Over to you.

Patrick Paul Walsh:
Yeah, you have it. Yeah. Hello, everyone. So, my name is Patrick Paul Walsh. I’m a full professor at University College Dublin, but on secondment to the UN Sustainable Development Solutions Network as Vice President of Education and Director of the SDG Academy. Thank you.

Dudley Stephen Wyber:
Thank you very much. My name is… Stephen Wyber, I’m Director for Policy and Advocacy at the International Federation of Library Associations, which is sort of the global peak organization for libraries of all sorts.

Moderator – Michel Kenmoe:
Thank you very much, dear participants. We want to wish you a warm welcome to this session on the transformative role of open educational resources in digital inclusion. We are going to start the session by listening to opening remarks from Mr. Tawfik Jelassi, who is the Assistant Director-General for Communication and Information at UNESCO.

Tawfik Jelassi:
Excellencies, ladies, and gentlemen, dear colleagues, I am pleased to address you today at the 2023 IGF Forum and the first session of the Open Educational Resources Dynamic Coalition. This year’s theme, The Internet We Want, brings together policymakers, experts, civil society, and businesses to tackle the challenges and opportunities in our evolving digital landscape. UNESCO is committed to fostering dialogue and cooperation for a more inclusive, secure, and sustainable internet for all. We envision a digital ecosystem where the internet serves as a powerful tool for learning, and open educational resources play a pivotal role to increase access to quality education worldwide. In 2019, UNESCO adopted the recommendation on OER, which is a UN normative instrument to support inclusive access to digital learning platforms. Today, we gather in Kyoto, the ancient capital of Japan, to explore the transformative potential of OER in the age of the internet, where information and educational materials are abundant. In alignment with the UN Secretary-General’s call on Our Common Agenda, UNESCO has been advocating for the adoption of openly licensed digital education tools to be accessible through the Internet. The 2019 OER recommendation guides our efforts towards an open, accessible, and equitable education future. It emphasizes international collaboration for content, capacity, and infrastructure, aligning with the Global Digital Compact principles for an inclusive, open, secure, and shared Internet. Central to our discussion is the recognition of digital public goods, especially OER, defined by the UNESCO OER recommendation. The five areas of action, namely capacity building, policy support, inclusive and multilingual quality content, sustainability, and international collaboration, form the foundation for accessible online learning platforms benefiting both learners and educators.
Digital public goods, such as OER, drive sustainable models of education, knowledge sharing, and innovation, thus contributing to the sustainable development goals, including quality education, access to information and ICT, gender equality, and global partnerships. This session is not only about dialogue, it’s a call for action. Digital transformation is rapidly reshaping societies. The platform society is intertwining digital platforms and artificial intelligence. We must navigate data privacy, transparency, and governance intricacies to effectively harness their potential. We call for all governments, partners and stakeholders to unite to implement the 2019 OER Recommendation and other norms that cultivate open and secure spaces for education. As stakeholders, our collective efforts through the OER Dynamic Coalition are crucial in shaping an inclusive, equitable and digitally empowered future via open educational resources. Your contributions will be invaluable in advancing our shared mission. Dear participants, UNESCO has been actively promoting open educational resources to expand access to quality education worldwide, underlying principles such as openness, accessibility, privacy and freedom of expression in the digital age. The OER Dynamic Coalition brings together stakeholders from various sectors to build values and principles guiding the development and use of the Internet. Let us work together to ensure that the Internet remains a force for good, advancing human rights and sustainable development. Thank you for your kind attention.

Moderator – Michel Kenmoe:
Thank you to the ADG, the Assistant Director General for Communication and Information at UNESCO, for this opening remark, in which, among other points, he highlighted that this meeting is a call for action. We were normally to have Zeynep present the Dynamic Coalition. I don’t know if Zeynep is online. Zeynep? So far, she’s not yet online. So we are going to have a series of sessions during which some of our panelists will share their experiences from the different initiatives in which they are involved throughout the world. So I’m going to invite Dr. Melinda Bandaria to share her experience on the critical role of teachers in developing, creating, and reusing, as well as adapting and sharing OER. What skills do teachers need to ensure that the OER used in their courses are inclusive and accessible? Over to you. She’s joining us online.

Melinda Bandaria:
Yes. Thank you very much, and good day to everyone. Thank you for having me in this session to share my perspective about OERs and the important role of teachers in making OERs accessible and inclusive. So as introduced, I am Dr. Melinda Bandaria, and I am participating from the Philippines. I am also full professor and chancellor at the University of the Philippines Open University, appointed as ambassador of Open Educational Resources by the International Council for Open and Distance Education, and have been actively involved in the OER Dynamic Coalition of UNESCO. So as to the question, considering that teachers and educators play a critical role in developing, creating, reusing, adapting, and sharing OER, what skills and knowledge do teachers need to have so that we can ensure that the OERs being used in their courses are inclusive and accessible? As we go through the skills and knowledge, they should also guide us in terms of developing training programs and courses on OERs, especially with the participation of our teachers. First, teachers need to know who is excluded in the teaching and learning ecosystem and why they are excluded. This knowledge would enable the teachers to put in place mechanisms and implement strategies to address the identified barriers. In most cases, the barrier has to do with the cost of the learning materials, which using OERs aims to address. The other common barriers include physical challenges like hearing or sight impairment, language, given that most OERs are in the English language, and other learners may feel excluded because of disregard for cultural diversity. Considering this, the teacher should have knowledge of the following. First is accessibility guidelines, like, for instance, the Web Content Accessibility Guidelines, to make the online platform accessible to various types of learners.
Universal design for learning: knowledge about it can guide the teachers on how they can integrate even just its basic principles into the OERs that they will be using. Especially given the nature of OERs, that they can be reused, teachers can integrate the basic features of universal design for learning into these OERs. Cultural and linguistic diversity, or making the content inclusive: in one of the studies conducted in Southeast Asia, one of the barriers cited by students on the use of OERs is that they are not available in the local language. So teachers can translate the OERs that they will be using in their courses and make sure that there is respect for cultural diversity, that there is nothing in the content that would be offensive to a specific person.
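One of the checks implied by the Web Content Accessibility Guidelines that Dr. Bandaria cites can be automated. As a minimal sketch (not anything presented in the session), the following Python uses only the standard-library HTML parser to flag images in an OER page that lack alternative text. It treats an empty `alt` attribute as missing, which is a simplification: WCAG actually permits empty alt text for purely decorative images.

```python
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Absent or empty alt text fails the (simplified) check.
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "<no src>"))


def find_images_missing_alt(html):
    """Return the src of every image without usable alt text."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt


page = '<img src="map.png"><img src="chart.png" alt="Enrolment by region">'
print(find_images_missing_alt(page))  # ['map.png']
```

A teacher adapting an existing OER could run such a check before reuse, then add descriptive alt text for the flagged images.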

Moderator – Michel Kenmoe:
Thank you, Dr. Melinda, for your input and for clarifying some of the principles that may actually help teachers to create content that is inclusive. Let me return to Zeynep, who informed me that she is now online. Zeynep, can you make a short presentation of the OER Dynamic Coalition before we move forward? Yes, can you hear me? Yes. Okay, can you put up

Zeynep Varoglu:
the slide? Is it possible or not? No? If it’s not, it’s okay. This is the second slide. Otherwise, I’ll just go on. It’s a great pleasure to be here with you today. I’m very sorry there’s something wrong with the camera and I will try and fix it during the course of the session. I would just like to present to you very quickly the OER Recommendation 2019. This recommendation was adopted by all member states by consensus in 2019, and it basically has a very clear definition of OER which explains to you exactly what OER is and what it is not. I will read it out to you right now. The definition is: any learning, teaching or research material in any format that resides in the public domain or is under a copyright that has been released under an open license that permits no-cost access, reuse, repurposing, adaptation and redistribution by others. There is a clear definition of open license. I would invite you to go to the website of UNESCO and look up the OER Recommendation 2019 to have the full text. There are five areas of action, and we’ll be going through each of them in this presentation. The first one is capacity building, the second is policy, the third one is on quality, inclusive, multilingual OER, the fourth is on sustainability, and the fifth one is on international cooperation. And international cooperation is the basis of this OER recommendation and of this OER Dynamic Coalition, which brings together the panel before you. I’d just like to also point out that the stakeholders in this recommendation are the entire knowledge community. So we have the education community, we have libraries, museums, and we also have publications. You have on the screen in the chat, if you’re online, the text of the recommendation there. We have a very full panel, so I will stop here and give the floor back to you, Michel, to continue.

Moderator – Michel Kenmoe:
Thank you, Zeynep. Can we stop the presentation, please? Yes. Okay, thank you. I want to check to know if Mr. Papaluga is online. Has he joined us? Oh, no. Papaluga was to share the experience on how learners can draw from their various cultural, linguistic, and socioeconomic backgrounds to create inclusive OER content. If he’s not online, then let me check if Ms. Jian Osman is online. If not, can I check to make sure that Mr. Neil Butcher is online? Neil? Yes, I am online. Thank you, Neil. So this gives us the opportunity to move forward with the second part of this presentation, where I’m going to invite Lisa Petrides to share her experience on OER repositories. Lisa?

Lisa Petrides:
Yes, thank you. Will the slide be on the screen? Would it be easier to share from my screen? He can share. Okay, go ahead. I’ll use the right slide, but I’ll not be able to move with this. Great, thank you so much. So I want to talk about really the sharing of knowledge and what that means in terms of OER libraries and repositories. Repositories are really the underlying infrastructure of libraries. They’re vast and diverse, across the world. They often contain metadata descriptions of how content is created, used, and adapted, which is extremely important. It’s not enough to have platforms where this content resides; it’s equally important to have very good descriptions for both the educator and the learner who is going to be using these resources. It’s not enough just to have a whole library if we don’t really understand what’s in it and why we might want to use it. Just like in a physical library, the librarian is probably one of the most important people in terms of their function in search and discovery. So similarly with online content, we rely on the metadata, and often the librarians behind the metadata creation, to guide us through that kind of content. I want to just talk about this through the CARE framework, which is something that you can look at at careframework.org. The CARE framework is a way to show what good OER stewardship is and how to become a good steward of OER. And so I thought it might be an interesting way to apply the CARE framework to platforms and tools and how they can be designed in a user-centric way. The parts of CARE are contribution, attribution, release, and empower. Contribution is about advancing the awareness, improvement, and distribution of OER. And what this means specifically in terms of platforms and metadata is that we really have to focus on portability, interoperability, and the ability to adapt or localize.
In terms of attribution, we’re talking about conspicuous attribution. And what I mean by that is if we don’t know the provenance of the resource, where that resource came from, how it’s been used along the way, we really lose the ability to describe and build a transparent knowledge base. And as you heard Zeynep talk about in the OER recommendation, what we’re trying to create is really a commons, the knowledge commons around OER. The third piece (30 seconds, did you say? No?) is release: making sure that the content can be used beyond the platform, which requires the platform to be interoperable with others. And last is empower. And perhaps, I think, one of the most important attributes today is meeting the needs of all learners, including those who have been traditionally excluded. So this requires content that is culturally relevant, inclusive, and accessible to those with disabilities. And again, when we think about the metadata that’s describing this content for search and discovery, I think that the CARE framework really helps to illuminate what those factors are. Thank you.
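The record shape Petrides describes, provenance, open license, and conspicuous attribution captured in metadata, can be sketched concretely. The following Python validator is illustrative only: the field names are an assumption of this sketch (loosely Dublin Core-like), not a standard from the session, and the license check is a simplification (public-domain marks would also qualify as open).

```python
# Illustrative required fields for an OER metadata record
# (assumed names, loosely Dublin Core-like, not a fixed standard).
REQUIRED_FIELDS = {"title", "creator", "license", "source_url", "language"}


def validate_oer_record(record):
    """Return a list of problems with an OER metadata record."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - record.keys())]
    # Conspicuous attribution: an adaptation must name its source work.
    if record.get("adapted_from") == "":
        problems.append("adapted_from must name the source work or be omitted")
    # Open licensing: CARE assumes an open license; checking for a
    # Creative Commons identifier is a simplification.
    if "license" in record and not record["license"].startswith("CC"):
        problems.append("license is not a Creative Commons identifier")
    return problems


record = {
    "title": "Introduction to Soil Science",
    "creator": "Example University",
    "license": "CC BY 4.0",
    "source_url": "https://example.org/soil",
    "language": "pt-BR",
}
print(validate_oer_record(record))  # []
```

A repository could run such a check at ingest time, so that every resource enters the commons with the provenance and license information search and discovery depend on.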

Moderator – Michel Kenmoe:
Thank you, Lisa, for sharing that on the importance of metadata and also OER repositories. I’m turning now to Stephen to ask about the importance of collaboration, and how to make collaboration between educational stakeholders possible to support OER initiatives.

Dudley Stephen Wyber:
Thank you very much, Michel. And thank you for the invitation to be here today. I think just as an introductory point, there’s a lot of talk at the moment about digital public infrastructures and digital public goods. And OER is such a powerful example of this and is so often overlooked. So it’s really important that we’re having this session here today. So, at risk of repeating Lisa’s points, but without an attractive acronym to make sense of them so that everyone can take notes: the roles that libraries tend to play. Often it is, I think as you said, supporting with discovery and awareness. As we know, the fact that something is available on the internet does not necessarily mean that it’s actually accessed or used. There’s an awful lot of shouting into the void online. Libraries have proven effective in so many cases at actually updating their original roles of putting people who need knowledge in touch with knowledge, raising awareness of the possibilities. I think combating some of the assumptions that, because OER is free, it’s worthless. And there is always this sort of human tendency to believe that unless you’ve paid for something, it’s not worth it. Wrong. Overcoming some of the ideas and the prejudices that doubtless exist about OER as resources. I think Lisa’s already covered the point about curation, but I think curating in a way that responds to need. Actually, again, bridging the materials that are out there, the resources that are out there, working with faculty, working out what’s actually there. So again, there’s that bridging role in there. I think, once again, working with educational stakeholders to take a critical overview. And I’m conscious, again, I risk echoing Lisa in this point, that clearly the landscape of OER that’s available right now is primarily from some parts of the world. There’s an awful lot coming from the parts of the world that have produced traditional textbooks and traditional materials.
But given the training and given the experience they have in trying to evaluate the whole of knowledge that’s available, librarians can have a really powerful role working with stakeholders to think, well, what’s missing? What are we not seeing as opposed to what we are seeing? And actually then, once again, again, working to make sure that we’re coming up with OER that fits. I’m going to jump to the last point, but also in that role of encouraging. Librarians can have a really powerful role, too, in giving guidance about how do you use rights, what are the options, what are the channels for faculty, for education stakeholders to feel sufficient agency, sufficient empowerment to produce their own, which really does require to work with materials that are there, to produce their own materials, to share them, to really actually deepen that knowledge commons. And then I think the final point is, please do count on librarians as allies in pushing for legislative and regulatory frameworks that are favorable, that have decent educational exceptions in copyright, so that you’re not unnecessarily held back in using materials for educational purposes. It fits within the recommendation, but it’s an ongoing fight.

Moderator – Michel Kenmoe:
Thank you very much, Stephen, for sharing this. I’m going to turn to Patrick. When we are considering stakeholder engagement, I think the private sector can play a key role in this. So Patrick, what are the strategies that we can put in place to engage the private sector?

Patrick Paul Walsh:
Yeah. So, just to say, the question I prepared is about the broad partnership, which is a partnership between government, academia, libraries, the intergovernmental system, and the private sector. So it’s the whole comprehensive partnership. We have signed, or we’re working with, UNESCO, SDSN, and a joint committee to implement the UNESCO OER Recommendation. And we have a partnership agreement that we’re going to run what’s called an open education resource overlay platform, or repository, or journal, whatever way you want to think about it. Basically, we want to have courses submitted to us that we can quality assure and recommend, that we can put into archives with proper metadata, open licenses, et cetera, quality assured, and then they can be used appropriately in government for educational training, or by corporates or schools or academia in their courses. And of course, the whole reason for demonstrating this, say, with SDG Academy courses, which are all up on edX, is to really show a community of practice how you’d actually do this, with guidelines and kind of playbooks, so that people could actually apply this in other contexts. But just to give a sense of the partners and what’s going on. So one, people should be able to submit their courses from their LMSs, and they’d be refereed, and not just refereed from the point of view of academic and science content, but also adherence to, say, UN policy or UN legal frameworks, et cetera. So they’re quality assured and published in the normal academic way. When they go to the repositories, they will follow the FAIR and CARE principles. So thank you for explaining the CARE principles. But basically, this stuff has to be findable, accessible, interoperable, reusable, but there has to be what I call good citizenship or stewardship of it, and also good governance of it.
You do need quite a lot of ed tech, and I’ve actually listed all the kinds of ed technologies that you’d have to use for this type of, let’s call it publication or e-publication, in terms of the open journal systems, or the way you would do your copyright licensing, or the way you would manage your indicators and metadata, and so on and so forth. But just to give you a sense, and just two seconds then. So where the partnership comes in, though: when we’re developing the metadata and how it’s archived, we have to talk to the users, and the users are governments who have training in their LMSs, the corporates who have their HR training, and the academics and schools who are doing their curriculum and their courses. And in a sense, you have to have what we call the diamond engagement. So it’s not enough just to do diamond publication, which is free to publish and free to use, but you actually have to work with the curators and then the users to get the whole system working effectively, or else it’s not going to work.

Moderator – Michel Kenmoe:
Thank you. Thank you very much to the three of you for this session, during which you have shared your experience on how to achieve a multi-stakeholder approach to the development of OER, and also on how to engage the different stakeholders, academia and the private sector, in the realization of inclusive OER. I’m going to turn to Zeynep for the next session, the next panel. Zeynep?

Zeynep Varoglu:
Thank you very much, Michel. We have the pleasure now to look at Now and Forever, about sharing resources within a policy framework and within the framework of sustainability. Our first speaker is Neil Butcher, who’s going to look at issues related to national education policies. Neil, the floor is yours.

Neil Butcher:
Thank you very much, and greetings, everyone, from Johannesburg in South Africa. As you can see here, I’m focusing on national education policies. I think what we’ve seen in the world of OER is that sustainability really depends on governments developing and implementing sustainable policies. There are a lot of OER policies. Unfortunately, many of those policies exist on paper but are not really being implemented in practice. And I think in the context of the discussions on accessibility today, it’s important just to recognize that 15% of people around the world have some form of disability. So governments really are the key agencies that are going to be responsible for ensuring that the good ideas that we’ve heard about in the previous presentations… are implemented and sustained and financed. So we’ve spoken about the importance of content accessibility, the application of critical principles, the repositories that are available to support web accessibility, and so on. And so in the bottom bullets, what I’ve just tried to unpack is some of the important things that are critical for national policy. And I think that starts with bullet four, which is to develop policies that provide for the understanding and application of open licenses to content and software. This may seem like an obvious point, but if our intellectual property and copyright policies nationally are not providing for and enabling government agencies to use open licenses, then it’s unlikely that that will actually ever be done. We also then need in our policy to unpack the meaning of digital accessibility and its practical implications for policy. And the practical implications are the important part.
There’s a lot of lip service to the importance of digital accessibility, but the kinds of ideas you’ve heard about in the previous slides really need to be documented in policy and the implications for content development and other processes that are being funded by governments need to be stated very explicitly. So these explicit requirements about digital accessibility need to be contained in the policy, and they need to be binding in the sense that when governments are spending money on content development, there needs to be an obligation that this is built into what government agencies are expected to procure. So the accessibility plan for existing national and other education initiatives, the kinds of ideas we’ve heard about in the previous presentations on the repositories, these initiatives are really important, but if government is not committing to sustaining them on an ongoing basis, we’re unfortunately not going to see that the kind of impact that we’re looking for and that’s been discussed by my colleagues. And so that will bring me to the last point that I consider to be the most important. is policies need to be explicitly stating what the accessibility considerations should be for content creation projects, for educational projects, and how those need to be embedded in the procurement processes. So I think this is the key hurdle at which we tend to stumble, is that we have a lot of good principles and ideas often documented in policies or contained in guidelines, but when we get to the point of procurement and when there’s urgency to move ahead with, say, procuring a content creation policy for the development of educational materials at national level, unfortunately, the procurement processes don’t enforce obligations for the service providers to make sure that the content they’re creating adheres to accessibility guidelines and making sure that that’s a condition of payment for the services being received. 
So unfortunately, what tends to happen is the contracts are executed and this critical consideration of accessibility is left on the sidelines. So I would say of all the things that we can do that would be most important, we lead up to this one. If we don’t include some references to the importance of accessibility and making sure that there is accountability for delivering those obligations in the procurement process, all of the other excellent work that we might’ve done will unfortunately have been for nothing. So I think those are some of the critical guidelines at the national level we need to consider. Thank you very much.

Zeynep Varoglu:
Thank you very much, Neil. It’s a very clear presentation on the national policy issues. Colleagues, I’d also like just now to ask Melinda Bandaria to kindly come back to the point that was started at the beginning, which was on bringing this national policy into the classroom in terms of institutions. The colleague who’s kindly taking care of the slides, if they could go to the second slide, you’ll see the slide from Melinda. Melinda, the floor is yours.

Melinda Bandaria:
Yes, thank you very much, Zeynep. And as I mentioned earlier, the skills and knowledge that teachers should have so that they can make OERs more accessible and inclusive should guide the policies and also the development of training programs for teachers. So I have mentioned already cultural and linguistic diversity, and also the knowledge about copyright laws and licenses that are associated with OERs. As for the skills that should also be integrated into training programs for teachers: of course, teachers should know how to convert their open educational resource materials into alternative formats such as audio, Braille, or even simplified text to cater to students with different needs. They should also have the skills to provide captioning and transcription for hearing-impaired learners when reusing OERs, and be able to provide descriptive text for hyperlinks and alternative text for images, especially for those who use screen readers. And of course, technological skills will be very handy so that they can make sure that the OER platforms and materials they are using are compatible with the assistive technologies that different types of learners will have access to. And, something we are probably not very conscious about, they should be able to determine the text readability of the materials they are using, knowing how to measure it using different mechanisms like the Fog Index. So at the end of the day, it is also about making use of technology platforms to make these open educational resources materials accessible. What I’m trying to emphasize here is that our training for teachers should not stop with them developing and sharing materials and knowing the licenses appropriate for the materials they are producing, but should also cover acquiring these different knowledge and skills, which are essential to make the open educational resources that they are using more accessible and inclusive for the various types of learners.
So, I think that’s all from my end. Thank you very much for allowing me to finish my presentation and contribution to this forum. Good day to everyone.
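The readability index Dr. Bandaria mentions (the Gunning Fog Index) can be computed mechanically. A rough sketch in Python, using a naive vowel-group syllable heuristic, which is an assumption of this sketch rather than a standard algorithm, so the result should be treated as indicative only:

```python
import re


def syllables(word):
    """Rough syllable count: runs of vowels, minimum one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def fog_index(text):
    """Gunning Fog readability estimate:

    0.4 * (average sentence length + percentage of 'complex'
    words, i.e. words of three or more syllables)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))


print(fog_index("The cat sat. The dog ran."))  # 1.2
```

A teacher simplifying an OER for younger or second-language learners could compare scores before and after editing; longer sentences and more polysyllabic words both push the index up.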

Zeynep Varoglu:
Thank you very much, Melinda. Thank you very much. So, there’s a very concrete response to policy, which is put into action at the national level and at the institutional level. And with that, I’d like to give the floor now to Michel. Michel is a communication and information advisor from UNESCO Dakar, and he will talk about a successful example of an OER initiative at a regional level, which can serve as a model of good practice. So, Michel, the floor is yours. I think it’s further on.

Moderator – Michel Kenmoe:
Yeah. Thank you, Zeynep. As part of our initiative for implementing the OER recommendation in West Africa, we started by conducting research with the different stakeholders, the academia, the teacher training institutions, to understand what are the shortcomings that may prevent the adoption of OER. And we came out with the observation that without middle- to top-level buy-in for OER, it’s going to be difficult for most of the countries to actually engage in the implementation of the recommendation. So, we turned to raising awareness among decision makers, the Minister of Education, Minister of Youth, and Minister of Higher Education, and also all the middle decision makers within the education sector, to explain to them the importance of OER and how OER can actually contribute to quality education within the country. And this led to what? This led to commitment from many of the countries in West Africa to develop a national strategy for OER. So we started with Burkina Faso, where we were successful in actually developing, with the Ministry of Higher Education, a national OER strategy. And that is yet to be validated. As you know, the country has been in some trouble, and this has halted the progress toward the adoption of the OER national strategy. We also succeeded in convincing Senegal to engage in the elaboration of its own OER strategy. So today, we are working toward the validation of the national strategy. It was a collective effort, with multiple stakeholders involved in the design of the strategy. It covered all the dimensions of OER, contextualized to the reality of each country. We also did the same thing in Togo, where the country also engaged in the development of an OER strategy, the same thing in Congo, and also in Djibouti. So so far, we have about five countries that are in the process of adopting a national OER strategy.
And what is really interesting is that the very process of elaborating the OER strategies was itself quite effective in raising awareness for the recommendation. Because by being involved in the process, many came to have a better understanding of the why and the importance of the OER recommendation. So today, in many of those countries, there is a team of experts who are becoming advocates for OER within the country. The challenge that we see so far is the challenge of funding. We have seen that everywhere the strategy was developed, there was this concern about how the government is going to actually fund it. How are they going to find the resources to actually support the realization of the strategy? And one of the suggestions was that governments can ensure that, from now on, whenever there is a project with donors involving the production of educational resources, at least part of the project supports the production of open educational resources within the country. So we hope that with that experience… We are still in the process of the adoption of the OER strategies. The strategies in all those countries have already been elaborated, but are still to be validated at the national level. So this is what I can share regarding the experience that we have in West Africa. Thank you.

Zeynep Varoglu:
Thank you very much, Michel. And thank you for sharing this experience in West Africa, which is very, very strategic. I’d like to give the floor now to Dr. Tel Amiel, professor and UNESCO Chair in Distance Education at the University of Brasilia, Brazil, who will talk about sustainability models. Tel, the floor is yours.

Tel Amiel:
Thank you. So one of the things that we have to worry about, based on a couple of the presentations that came before, is: what does it mean to be sustainable for OER? And of course, the first thing is the issue of money, right? Whenever we’re funding these kinds of things, just like when we talk about free software, we know that the development and sustaining of these projects takes money. And so there are many ways, and I just want to highlight three. One of them is related to what Michel just mentioned, which is the idea of open procurement. Many governments around the world are trying to implement open procurement in their systems, and there are many ways of doing that. One of the ideas that we push the most is this idea that if you’re using public funds, you should have public assets, public goods. So open procurement models are very popular, but I think they fluctuate quite a bit. I mean, in some countries, it’s very easy to push the idea of complete open procurement. Everything that you produce with public funds should be open. In other countries, we have to be open to the idea that this might not work exactly as we expect. You have to be more restrictive on your licenses, or not all resources will be open; some will and some will not. I like to think of open procurement as a transition, you know, especially if you’re going from an all-rights-reserved model. You have to kind of try different ways of making this work until maybe eventually you’ll get complete open procurement. But there are other ways to do this. Just like with free software, we have models for open with added value. So you might provide the resources for free, which is a keystone of OER.
The resources must be free, but then services like customization or training can be at cost. And then also, something that doesn't last forever but is good to get things started, particularly in new projects, whether in a government or an institution, is partnerships and donations from foundations. I think people are very keen on funding these kinds of things for openness. But the financial aspect is only one. For OER particularly, there are two others. Neil mentioned policy, so I'll be very brief on this, but it's not just about putting the policies on paper. We have plenty of those, and some are much more effective than others. One of the things that works really well is having working groups that are cross-sector. We've heard a lot about multi-stakeholder processes, but I mean actual multi-stakeholder groups, with people actually doing things, representing their corners of the world, working together and monitoring these policies. That has worked very, very well in many countries. And groups that can evolve, right? OER is not something that stands still over time as one solid thing. The entry of generative AI has changed our perspective on OER quite a bit, so we have to have people thinking about this from the perspective of teachers, of legal issues, and so forth. These working groups work for that. And finally, OER is an educational endeavor; that's the core. The practices around OER are what really matter. So if you don't have community engagement, if you don't have people buying into this at all levels, it makes absolutely no sense. It's just legislation, just money, just resources, right? We have to have people who have incentives, who have recognition for doing these kinds of things, and who can continuously raise awareness about where OER is at that moment.

Zeynep Varoglu:
Thank you very much, Tel. We've been very efficient with our time, so I'd just like to take this opportunity to do a meta discussion, because in fact the majority of the colleagues in front of you today on the screen are on the advisory board of the OER Dynamic Coalition, and this is the first OER Dynamic Coalition event at the IGF. We're all very honored to be here before you. The OER Dynamic Coalition was started in 2020, and we became an official IGF Dynamic Coalition in March 2023. But the spirit of the Dynamic Coalition was in the body of the text of the Recommendation from the beginning of the discussions, and in the background document to the text of the Recommendation that was presented before the member states. Before you, in many of the presentations we have had, are the different members of the advisory board: Melinda, who is the advisory board chair for capacity building, Lisa for policy, Tel for sustainability, and Neil for communication. The OER Dynamic Coalition has brought together up to now 500 stakeholders from the different stakeholder groups that were presented at the beginning of the session: knowledge, community, education, culture, and also publications. We bring together stakeholders from government, institutions, and civil society, and the Coalition focuses on knowledge sharing and collaboration in the implementation of the Recommendation. This format has turned into a very useful way of maintaining dialogue and discussions and making the implementation of the OER Recommendation a priority for governments and institutions to date. It's a great pleasure to be here before you. We have some time ahead of us.
So I would just like to ask the panel two questions that were in the discussion but that unfortunately we haven't had time to look at. The first one is how OER can be tailored to the diverse needs of learners in terms of cultural, linguistic, and socioeconomic backgrounds, fostering inclusive learning. This goes to the area of the Recommendation which deals with quality, multilingual, inclusive OER. I don't want to put anyone on the spot, but I will nonetheless do so; I hope you don't mind. Could I perhaps give the floor to Lisa to start with? Would that be okay with you, Lisa?

Lisa Petrides:
Absolutely, of course. And for those of you who don't know Zeynep, who has been running and spearheading this Dynamic Coalition from the beginning: when she asks us to do something, we do it. So thank you, Zeynep. Let me just start by saying that OER, as many of us on this panel see it, is a public good. Just like air or water, education should be accessible to everybody who wants it, who needs it. So what's been really important in OER is to think of this practice of open education as something that brings education opportunities not only to the mainstream of education, but to those who have been excluded, those who have left and whom we want to come back, those who simply have not been a part of, or in the worst cases are invisible to, the processes of education. Think of places where no school systems are operating because of war or other situations like this. So when we think about diversity of learners, the idea of using open educational resources for knowledge transfer and knowledge building is quite transformational. We're not just talking about what already happens in our education systems; we're talking about inclusive voices. In some cases that means students themselves are involved in content creation, or faculty in higher education and teachers in primary schools are using their own cultural context and localization to adapt OER. This is where we're seeing some of the biggest transformational changes in the use of OER, and that is all around the world. I can speak for the US, but it holds for many other parts of the world as well. So Zeynep, who do you want to go next?

Zeynep Varoglu:
Is anyone else in the room? Does anybody want to say something? Tel, I see you smiling, all the way from here in Paris. This is nice. Would you like to add anything?

Tel Amiel:
I was waiting for you to give me the order. No, I think that one of the things we talk about here, especially in the context of the IGF, is the presence of many different cultural groups and many different needs, and we understate the power of OER for addressing this. If we're talking about public goods, it means including everybody. One of the greatest strengths of OER is adaptability: being able to share and revise and remix and reuse, which is quite unique. And we don't explore that enough, I think. Especially in this multilateral, multi-stakeholder process, having people really engage with these kinds of resources is something that pedagogically makes a lot of sense and makes it really a public good.

Zeynep Varoglu:
Thank you. Thank you very much. I'm kind of handicapped here because I can only see what the screen shows me, so I can only see you up to Michel. But Michel, perhaps you can see better than I can. Would someone like to?

Patrick Paul Walsh:
Paul here. So first, congratulations to everyone who was part of actually putting together the OER Recommendation, because it really is a wonderful instrument. To answer your question, though, what's in the Recommendation that is really important is this freedom to create and to contribute to the global knowledge commons; that's so important. And we have to think even about people with disabilities: people in any part of the world should be able to freely and easily contribute to the content. That's one freedom. The other freedom, obviously, is accessibility. I really liked what the previous speaker said: the content, the PowerPoint slides and videos and so on, has to be accessible to people with visual impairments. That's very important. But then the key point is that when you use it, you can repurpose it, translate it, put it into your local context, and put it back into the global knowledge commons again. So it's really important to keep it decentralized, in decentralized repositories, so that all of that can happen. That's why I think the Recommendation is so wonderful: you might just think, oh, free education resources, but it's not about that. It's about how they're created, how they're accessed, how they're repurposed. There's much more to this than what OER looks like on the surface. And Stephen would like to contribute, if that's okay.
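Paul's point about content being accessible to readers with visual impairments can be checked mechanically at the most basic level. The following is a minimal illustrative sketch, not an official WCAG tool: using only the Python standard library, it flags `<img>` tags in an HTML teaching resource that lack alt text.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                # Record the image source (or a placeholder) for reporting.
                self.missing_alt.append(attr_map.get("src", "<no src>"))

def images_missing_alt(html_text):
    """Return the src of every <img> without alt text in html_text."""
    checker = AltTextChecker()
    checker.feed(html_text)
    return checker.missing_alt

sample = '<p><img src="graph.png"><img src="map.png" alt="World map"></p>'
print(images_missing_alt(sample))  # ['graph.png']
```

Real accessibility auditing goes far beyond alt text (contrast, document structure, captions for video), but even a check this small makes the compliance requirement concrete.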

Zeynep Varoglu:
Sure, thank you, thank you.

Dudley Stephen Wyber:
Yeah, at the risk of just re-emphasizing a couple of points made so far, I want to draw firstly on what Paul was saying about the knowledge commons. It's an idea that was brought out very strongly in the Futures of Education report a couple of years ago: the idea of trying to move away from a single-direction model of "you shall learn this body of knowledge and that is what you shall learn" to a much more recurring, circular approach where you learn, you explore, you contribute, you improve. That feels like quite a radical thing, but making clear that that's the model we're going for is significant, because it creates agency and it creates responsibility. The other thing I wanted to pick up on is something Paul said about demand-side engagement: it's not just the producer side. It's important to have people on the ground whose responsibility is not just to make sure the material gets onto the internet in the first place, but that it is then taken up and used. That's logically a role that teaching staff have, but librarians in particular have it too. We can't just assume that if we shout out to the internet, someone will actually make use of the material and it will work. We can't have a supply-side-only approach here; we need a demand-side approach.

Zeynep Varoglu:
Thank you very much. I don't know if there are any other inputs. Neil has raised his hand. Two people have raised their hands. Neil, please go ahead.

Neil Butcher:
Thank you very much, Zeynep. Maybe just to build on what previous colleagues have said, one thing I could emphasize is the importance of making sure that we don't assume that more is better, and that we focus collectively on ensuring that the way we invest resources has a very strong focus on producing high-quality teaching and learning resources and OER for accessibility purposes. Very often we have a very technical way of thinking about accessibility: we just take materials and make them accessible at that technical level, without actually considering whether the quality of the teaching and learning materials justified making them accessible in the first place. The internet is flooded with content, and the more flooded it becomes, the more important carefully curated collections become: collections of resources that we can feel confident encapsulate high-quality teaching and learning experiences of the kind we just heard about. Stephen gave some really good examples, I think, of how that might look in practice. We just need to make sure that we take the time to invest properly in what we're doing and not just rush the process of taking a whole lot of content and making it accessible. That would be doing a disservice to learners rather than helping them in the long run.

Zeynep Varoglu:
Thank you. Thank you very much. Melinda, would you like to add anything?

Melinda Bandaria:
I just would like to support the points raised by Neil in saying that we have to make sure that what we are using are quality OERs. So it is very important that we have a quality assurance framework, which we can integrate in evaluating the open educational resources that teachers especially are using for their courses. That's one point. Then there is the important role of the teachers and the universities in making sure that the OERs being circulated on the web are quality materials that are being reused, remixed, translated into local languages, and shared alike by the teachers and universities who are into it. And of course, the more important thing is putting in place the policies that will provide a conducive environment for the use, development, and sharing of OERs to flourish. If it is not possible for a national policy to be there immediately, then institutional policies can start the work and make sure that the five action areas of the OER Recommendation can be undertaken. So: the role of universities, and policies at the institutional level and then at the national level.

Zeynep Varoglu:
Thank you. Thank you very much. From the participants in the room and online, would anybody like to add anything? It's a very funny thing to have moderation online and in the room, because you can only see so much; I see now only Melinda's face, but I'm sure there are a lot of people behind Melinda whom I can't see. So in that case, if we have some time left... yes, yes. We have one person in the room.

Audience:
Yes, thank you. It's a question, exactly. Niels Brock, DW Academy. My question was about experience with decentralized repositories, so I would be interested if there are any best practices some of you could share about this, and maybe also specifically about open technologies for this. Thank you.

Patrick Paul Walsh:
Yeah, so the basic idea. Ironically, a lot of universities engaged in getting up the rankings want citations in, let's say, commercial or branded journals; that's part of impact factors, part of the whole reason why you get ranked as a university. But ironically, the way to gain citations, to get a citation dividend, is to make sure that on your research portal or your profile you link a preprint or an open version of the paper to the actual citation. Because if you put it in the local repository in your local library, it's more findable: you put in a keyword and people come to you through the search engines; they don't have to go to the branded journal. If your metadata is really good, they'll find you really easily. They'll read your preprint, but they'll cite your actual publication, and seemingly the citation dividend across different disciplines is enormous. So there is actually quite a bit of work on putting these learning objects, like PDFs, into repositories. What I'm talking about here, though, is that, particularly during COVID, we all built up folders of digital objects on our LMSs: videos, homework, and so on. The idea is that if you standardize how that LMS content is structured, it can also be archived in a local repository, and then platforms can point to it. For example, eLife does this for biology: it's an overlay on the repositories of the researchers, and they publish their papers that way. So the idea of our platform would be to highlight LMS folders that you could just click, go to a repository, and pull up into your own LMS. That's basically the idea.
And it's a network of learning objects, multimedia learning objects. But the real benefit comes from doing it decentralized. The repositories are already highly interoperable; libraries built that for interlibrary loans, and they are doing all this work of hosting and archiving for the commercial entities. The academics create the work and sign over the property, the publishers sell it back to the libraries, and then the libraries do all the work of archiving and preservation. This is ridiculous, right? So we have to try to get rid of the middle person and just get librarians, academics, and others to work together to make that happen. And the point I was trying to make about the decentralized system, the key thing, is that you can update your course locally. You can repurpose it locally, and others can take it, translate it, and put it back into the system again. In other words, rather than just giving away a PDF that goes into a library and can't be edited, which is nonsense as well, we should be able to update what's in our repository. My course, I might change 10% every five years, but you see the idea: you would update it, and it becomes a kind of real-time repository rather than something like a 2005 publication in Nature.
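The repository interoperability Paul refers to is typically achieved with standard harvesting protocols such as OAI-PMH, in which each repository exposes its metadata (often Dublin Core) for aggregators and overlay platforms to collect. The sketch below parses a ListRecords-style response using only the Python standard library; the sample payload is invented for illustration and is simplified relative to a real OAI-PMH response.

```python
import xml.etree.ElementTree as ET

# Fully qualified tag name for Dublin Core titles.
DC_TITLE = "{http://purl.org/dc/elements/1.1/}title"

def extract_titles(oai_xml):
    """Pull dc:title values out of an OAI-PMH ListRecords response."""
    root = ET.fromstring(oai_xml)
    return [el.text for el in root.iter(DC_TITLE)]

# Invented, simplified sample response for illustration only.
SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><metadata>
      <dc:title xmlns:dc="http://purl.org/dc/elements/1.1/">Intro to Open Science</dc:title>
    </metadata></record>
    <record><metadata>
      <dc:title xmlns:dc="http://purl.org/dc/elements/1.1/">SDG Course Pack</dc:title>
    </metadata></record>
  </ListRecords>
</OAI-PMH>"""

print(extract_titles(SAMPLE))  # ['Intro to Open Science', 'SDG Course Pack']
```

Because every OAI-PMH endpoint speaks the same protocol, an overlay platform can aggregate many decentralized repositories without any of them giving up local control, which is exactly the property the panel is after.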

Moderator – Michel Kenmoe:
Thank you very much. Any other question from the room or online? Any comment or contribution? Following what Tel Amiel has said about open government procurement: what we have learned in the context of West Africa, and this was a great surprise for us, is that in many of the countries we were working with, there is no budget for educational resource production. No budget. So the idea of open procurement doesn't fit in that context. As part of the OER strategy, we had to raise awareness of the importance for the government to actually engage in the production of educational resources, in adaptation and remixing, and in supporting initiatives related to this. So we should not take for granted that countries are already committed to producing educational resources. It's not the case. All five countries we were working with had no budget line to produce educational resources, and I'm not talking about open educational resources, but educational resources in general. So this is a key challenge in such contexts, and it shows the importance of raising awareness of the need for a country to actually engage in the production of educational resources.

Lisa Petrides:
Thank you so much for making that comment. If we think of the origins of the teaching profession, the teacher or educator was the person with the internal knowledge. Over time, we've developed a system where, in fact, the experts are out there, all the way to the textbook publishing company; this whole industry has grown up, and this is where the money and the procurement happen. But in fact, the native knowledge is around the educator, and around the learner, who is living, breathing, and working in a community with a lot of knowledge and understanding. We found early on, when we were going to certain places to talk about OER, that people would see what we had done in our library, OER Commons, and say: that's nice, but we have oral histories here, or we have other native or indigenous languages here. The knowledge is there. Drawing on it means rethinking what teaching means, who teachers are, and how teachers are trained. We've gotten so far away from the idea that the educator is actually the expert in their knowledge. And that might be a perspective that is brought there as well.

Moderator – Michel Kenmoe:
Thank you. This was a very insightful exchange. At the beginning of this panel, the Assistant Director-General of UNESCO invited us to use this discussion to lay out key actions to undertake in order to advance the agenda of the OER Recommendation. So I'm going to turn to each of you. Let's say two minutes, or okay, three minutes. What is your takeaway, and what are the core actions we need to consider for the future of the implementation of OER? I'll start with Tel.

Tel Amiel:
So, based on the experience we've had in Brazil over 10 years of developing policy on this: give people serious responsibilities for OER. Make it a serious element, give them the responsibility to do it, and expect things to happen. So create the policies, get people involved, and then give them serious responsibilities for making sure it gets implemented. Without people actually involved in this and around this, with incentives to stay, it just becomes another piece of legislation that doesn't move forward: an agenda item that people talk about, but nothing ever happens around it. That would be the biggest takeaway for me.

Moderator – Michel Kenmoe:
Thank you. Stephen?

Dudley Stephen Wyber:
So I'd probably underline, as a takeaway recommendation for the sector I represent, the importance of getting ourselves to the same stage with OER as we are with open access. A point I would have made to the colleague from DW Academy is that there's already a lot of really good work on how you get interoperability between OA repositories, through organizations like COAR. Can we apply that same logic to OER repositories? And then, to come back to the question you were actually asking me: how do we mainstream? How do we make sure that librarians see, in just the same way as they provide materials, that they feel confident and responsible for helping their faculty and students make the most of OER, so that they feel agency in order to help other people feel agency?

Melinda Bandaria:
Yes, thank you very much. My key takeaways: I'm very much focused on capacity building as one of the action areas in the OER Recommendation. Initially, we were focused mostly on raising awareness and the ability to use, develop, and share OERs, but this discussion really brought us back to the essentials of making OERs more inclusive and accessible. So we have to go further in these capacity-building initiatives. I would also like to go back to what is contained in the OER Recommendation, which brings us to the discussion on the lack of resources to produce OERs: the Recommendation urges that public funds can be used to produce OERs, and if we use public funds to produce these educational resources, then we are morally obliged to make them open access materials. So I guess this is something we should be doing in our advocacy and our commitment to making OERs more widely used and developed. And then there's the incentive system, especially for universities, the sector I am representing: an incentive system for faculty members and teachers when they use, create, and share open educational resources with the community. These are my key takeaways from this forum. Thank you.

Moderator – Michel Kenmoe:
Thank you, Melinda. Lisa?

Lisa Petrides:
Thank you. I have three quick things. One would be to resist the urge for one-size-fits-all strategies. The comment about decentralization was key, and we have to keep working on that and on what it really means to have localized control of knowledge; yet in a decentralized model, it filters up in a way where we really do build this knowledge commons. The second is not to be seduced by the commercial private partnerships that are much in vogue today. They're fraught with internal problems that, I fear, will ultimately result in the locking up of knowledge; not to mention the many privacy concerns once you have commercial interests, in terms of how data is used, who uses it, and data for whom. And the third is a real positive recommendation as a takeaway: we really need to build bridges across the opens. That's open educational resources, open pedagogy, open data, open science, open access, open publishing. Did I miss any of the opens? We've been operating in silos for too long, and we really need to start connecting those for real.

Moderator – Michel Kenmoe:
Thank you. Patrick?

Patrick Paul Walsh:
So I fully endorse what Lisa just said. Excellent. To come back to my big thing: of course, I'm hoping to implement this overlay-repository journal of SDG courses. But the thing that keeps me awake at night is what I call behavioral issues, or psychology. In other words, take one of the stakeholders, the government: you have to change the mindset. So what's the problem there? Jeffrey Sachs was discussing our project at the TES, the Transforming Education Summit, and I think he said something very important. He said the reality here is that there's a bit of a sunk cost to set this up. I'm an economist, so there are sunk costs and there are marginal costs. Think of putting in electricity or a digital infrastructure: setting up the power points, putting in the railway tracks, putting in the ports. No individual can really do that; it has to be done by government. So there is a bit of a sunk cost to get this up and running. The beauty of it, though, is that the marginal costs are very low. And in fact, once it's open, as Tel Amiel was saying, there are possibilities to add value or commercialize, which would actually pay back into the resource. So I could put sums together for the government saying: if you put up so much money and put it into your policies and procurement, I can guarantee you that within five or six years the marginal costs to librarians, to academics, to everyone will be way reduced. And if any of these global knowledge commons assets are commercialized in any way, your property will actually accrue income or value added. So I can create a business model. The problem, though, is that you're saying to the government: put money up now and change your policies, and later you're going to get a return.
And that doesn't sit well with government, because too many times they have given money for a return in the future and never got that return. Now, I could go on about the incentives for academics, the incentives for librarians, the interoperability of ed tech, the interoperability of all the opens, and so on. To me, the problem is mindset, coherence, and cooperation. It's not necessarily financial or technical; it's a real behavioral, mindset issue that you have to address.
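Paul's sunk-cost versus marginal-cost argument can be made concrete with back-of-the-envelope arithmetic. The figures below are entirely hypothetical, not from the session: with a one-off setup cost and lower annual running costs than the status quo, the break-even point is simply the sunk cost divided by the annual saving.

```python
def breakeven_years(sunk_cost, current_annual_cost, open_annual_cost):
    """Years until a one-off investment in open infrastructure pays for itself."""
    annual_saving = current_annual_cost - open_annual_cost
    if annual_saving <= 0:
        raise ValueError("no annual saving, so the investment never breaks even")
    return sunk_cost / annual_saving

# Hypothetical: $5M setup, $2M/yr currently spent on licensed content,
# $1M/yr to run the open infrastructure afterwards.
print(breakeven_years(5_000_000, 2_000_000, 1_000_000))  # 5.0
```

Under these made-up numbers the investment pays back in five years, which is exactly the kind of horizon Paul says governments find hard to commit to.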

Moderator – Michel Kenmoe:
Thank you. Neil, are you online? Neil?

Neil Butcher:
Thank you, Michel. The two key takeaways I'd like to re-emphasize are, first of all, to recognize, notwithstanding nice conversations about how we should support the private sector in monetizing this space, and I'm not sure I agree with a lot of that, that the responsibility sits squarely with government to make sure public education systems are accessible to all. That involves proper investment in creating learning environments that actually support real accessibility. The second takeaway, related to that, is that the investment strategy has to ensure the quality of the teaching and learning experience for everyone. If OER as a public good simply expands access to poor-quality learning experiences for people at the margins, it's doing the world a disservice, and we need to make sure the emphasis is very strongly on improving the quality of the learning experiences. I would just add one last and possibly obvious point: the only way we can ensure that this happens successfully is to make sure the processes by which it takes place are actually led by representatives of the target communities of learners we are aiming at. If we look around the panel, and certainly at this stage, it's clear that we have a lot of work to do to bring in the voices of the people we hope will benefit from these conversations. That's another critical challenge we face as we move forward.

Patrick Paul Walsh:
Yes. So, Neil, hopefully we're on the same page. I'm not saying we should commercialize the infrastructure or the content or anything like that. This is a point about value added. For example, if a private school takes the material, rebrands it, adds things, and then sells it, then if they're bringing in income, there should be some rent sharing on a public resource. Or if a commercial company takes it and does upskilling and training, and again charges money to do that, there should be rent sharing. So it's commercialization at the margin, if you like, but not of the infrastructure or the open educational resource itself. That has to be publicly owned or stakeholder owned, as you say. So I hope that's okay. And you mightn't like the idea of the value added either, but just to be clear, it's not commercialization of the platform or the actual resource.

Moderator – Michel Kenmoe:
Thank you. Zeynep, do you have one last comment? Zeynep? I see you're muted. Zeynep? She's online, I see that she's online. Zeynep? I don't know what the technical issue is. I want to express our warm thanks to all the panelists and all the participants of this session, those who joined us online and those present here in Kyoto. Can we give a round of applause to all our panelists and the participants, please? Thank you very much. Yes, Zeynep?

Zeynep Varoglu:
Yes, it works. Okay, sorry, it's been a bad technology morning. Thank you so much. I was saying that it takes a village to raise a child, they say, but it takes a whole world to make learning possible. And it's through open educational resources that knowledge can really be shared. The point of this Recommendation, of this panel, of this discussion, is about sharing knowledge openly. I'd also like to thank very much all the panelists here and online. Just to let you know, the colleagues joining you online are coming from three different continents right now into your room. It is a great pleasure to be here; we would all very much like to be there in person, but unfortunately it hasn't been possible. But thank you very much to all of you. Thank you, Zeynep. And have a great day to all of us. Thanks.

Speech statistics (speed; length; time)

Audience: 179 words per minute; 57 words; 19 secs
Dudley Stephen Wyber: 187 words per minute; 1132 words; 364 secs
Lisa Petrides: 164 words per minute; 1465 words; 537 secs
Melinda Bandaria: 162 words per minute; 1352 words; 501 secs
Moderator – Michel Kenmoe: 143 words per minute; 1841 words; 774 secs
Neil Butcher: 183 words per minute; 1367 words; 449 secs
Patrick Paul Walsh: 195 words per minute; 2344 words; 721 secs
Tawfik Jelassi: 116 words per minute; 549 words; 284 secs
Tel Amiel: 232 words per minute; 1117 words; 289 secs
Zeynep Varoglu: 162 words per minute; 1611 words; 598 secs

Climate change and Technology implementation | IGF 2023 WS #570


Full session report

Audience

The analysis explores various topics related to climate change and technology in the global south. One key point highlighted is the importance of accountability and responsibility in addressing climate change. It emphasises that governments, corporations, and individuals all need to take responsibility for their actions and work towards mitigating climate change. The analysis also mentions concerns over digital colonisation and the quest for digital sovereignty, particularly in global south countries. It points out the potential exploitation of resources by technology companies from developed nations.

Another topic discussed is the challenge of tackling electronic waste sustainably. While recycling initiatives exist in countries like Brazil, the analysis highlights difficulties in handling electrical and electronic devices due to harmful substances like lithium. It emphasises the need for sustainable solutions to effectively manage electronic waste.

The analysis also examines the search for successful examples of technology mitigating climate change impacts, especially in the Amazon region of the global south. It advocates for leveraging technology to address climate change, reduce emissions, and protect sensitive ecosystems. However, it does not provide specific examples or evidence of successful implementations.

Furthermore, the analysis draws attention to the importance of localising global climate change solutions. It highlights the relatively poor performance of Hong Kong, despite its significant economic power and infrastructure. This suggests the need for tailored solutions that consider local contexts and challenges, rather than relying solely on global strategies.

The role of lobbying and negotiating with decision-makers is also emphasised as a means to advance climate change agendas. The analysis stresses the importance of engaging with policymakers to influence climate-related policies and decisions. However, it does not provide specific evidence or examples of successful lobbying efforts.

The potential of the Internet of Things (IoT) in creating energy-efficient systems and reducing carbon emissions is another topic discussed. The analysis highlights the positive impact that IoT can have on sustainability efforts but does not provide supporting evidence or specific examples.

Lastly, the analysis addresses the need for accountability in adopting costly technologies and the role of lifecycle assessment in defining avoided emissions. It mentions ongoing discussions in Europe regarding the European Green Digital Coalition. This highlights the importance of considering the environmental impact of adopting new technologies and ensuring that the benefits outweigh the costs.

In conclusion, the analysis raises various important aspects related to climate change and technology in the global south. It underscores the need for accountability and responsibility, addresses concerns over digital colonisation and digital sovereignty, discusses challenges in tackling electronic waste sustainably, explores the search for successful technology implementations, advocates for localising climate change solutions, emphasises the importance of lobbying and negotiation, highlights the potential of IoT, and stresses the need for accountability in adopting costly technologies. However, it lacks in-depth evidence and specific examples to support these points. Nonetheless, it raises key issues that require attention and further exploration.

Moderator

Climate change is a pressing global issue that demands immediate action, and it is acknowledged as one of the most pressing issues in the world. The seriousness of this concern is underscored by the devastating impacts of climate change witnessed worldwide, including extreme weather events that serve as evidence that the Earth is changing. Igor, one of the participants, highlights the urgency of taking immediate climate action.

Technology has emerged as a crucial tool in addressing climate change. It is seen as a catalyst for change and offers potential solutions. Various technologies, including renewable energies such as wind, solar, and hydropower, are being utilized to combat climate change. These technologies provide valuable alternatives to traditional energy sources that contribute to greenhouse gas emissions. Furthermore, the session explores how technology can be leveraged to transform social, educational, and environmental aspects, offering concrete solutions to combat climate change.

However, it is crucial to ensure that technology is used responsibly and does not harm the environment. The responsible usage of technology is a fundamental consideration, as it can have adverse effects on the environment. The session emphasizes the need to find ways to ensure that technology does not adversely affect the environment itself, highlighting that great power comes with great responsibility.

Young people are recognized as key actors in addressing climate change. The session highlights the crucial role that young people play in combating climate change. Their active involvement and engagement are crucial for driving change and implementing sustainable solutions.

Artificial intelligence (AI) is also identified as a tool that can assist in mitigating and adapting to climate change. AI can optimize electricity supply and demand, leading to energy consumption savings. AI can also aid in developing early warning systems for severe disasters and accurate climate forecasts, contributing to climate change adaptation efforts.

Despite the positive contributions of technology, there are negative impacts that need to be addressed. The production and usage of technology contribute to surges in energy demand and the environmental impacts associated with the hardware life cycle. These concerns highlight the importance of considering the environmental implications of technology.

Collaboration between various sectors is deemed necessary to maximize the potential of technology in combating climate change. Governments, businesses, research institutions, and individuals are encouraged to collaborate and create incentives for sustainable practices and eco-friendly technologies. By working together, a more comprehensive and impactful approach to addressing climate change can be achieved.

The European Union’s twin transition approach, which combines green and digital strategies, is seen as a significant step towards battling climate change. The EU has committed to cutting its climate emissions by half by 2030 and aims to be climate neutral by 2050. This approach demonstrates the potential for combining digital advancements with environmental sustainability.

Transparency is highlighted as a crucial aspect in addressing the environmental impact of digitization. It is suggested that the lifecycle of applications, including design and conceptualization, should be accounted for, with measurement of material consumption carried out independently. Accessible and transparent results would allow for a better understanding of the environmental impact of digitization.

Circular economy principles are advocated as a means of reducing political dependence and promoting sustainability. The adoption of circular economy practices, such as recycling and resource conservation, can contribute to economic stability and security while reducing the negative impacts on the environment.

Equitable access to digital tools is emphasized as a necessary step towards addressing climate change. It is crucial to ensure that all population groups, including older people and structurally discriminated groups, have equal access to digital resources. Additionally, increasing digital sovereignty, which involves individuals having control over their own data, is seen as a crucial aspect of empowering individuals in the digital age.

Implementing technology solutions to combat climate change can be challenging, particularly in regions with a lack of infrastructure, high costs, and a lack of knowledge. These challenges highlight the need for targeted support and investment in these areas to overcome barriers and enable technology adoption for climate action.

Accountability and compliance regarding environmental laws and technology are critical to ensuring that technology initiatives are aligned with sustainability goals. The session raises concerns about the difficulty in ensuring compliance with environmental laws and court sentences. It suggests that supervision bodies and legal systems need to be strengthened to address these issues effectively.

Efforts from all sectors – including the private sector, academia, the tech community, the United Nations, and governments – are called for to find cheaper technology solutions to fight climate change and overcome existing challenges.

The preservation of biodiversity is mentioned as an important consideration in the context of climate change. The threat posed to the Brazilian biome due to temperature increases is highlighted, calling for urgent action to preserve ecosystems and biodiversity.

The power and influence of big tech companies are also scrutinized, particularly regarding the exploitation of data and resources of local citizens. International organizations are urged to work towards curbing the excessive power of big tech companies and preserving the interests of local communities.

Transparency and consumer awareness are seen as essential elements in promoting responsible behaviors in the digital age. It is suggested that if consumers were made aware of the impacts of data centers or unethical data practices, they might change their behaviors and support more sustainable practices.

Standards are recognized as crucial in promoting sustainable digitalization. The European strategy for green digitalization includes the implementation of standards to ensure that digitization aligns with sustainability goals. However, it is acknowledged that standardization bodies should strive for inclusivity and representation, ensuring that all stakeholders can contribute to the development of these standards.

Credibility issues associated with climate change reports are mentioned, indicating the need for effective checks and measures. It is essential that reports on climate change are credible and reliable to guide decision-making and demonstrate progress towards climate goals.

Lastly, the importance of legal and political collaboration is highlighted. It is noted that successful examples exist when politicians and legal teams worked together in areas such as patents and biodiversity aspects. It is emphasized that international agreements and disputes cannot be resolved solely through legal means, requiring the active involvement of politicians.

In conclusion, addressing climate change through technology requires immediate action and collaboration across various sectors. While technology offers potential solutions, responsible usage, transparency, and equitable access must be prioritized. The session highlights the role of young people, artificial intelligence, and circular economy principles in combating climate change. Challenges related to implementing technology solutions, accountability, and the preservation of biodiversity are also recognized. The excessive power of big tech companies, the importance of transparency and standards, and the need for legal and political collaboration are additional considerations in the fight against climate change.

João Vitor Andrade

The provided data highlights the potential of the internet and technology in addressing the global challenge of climate change. The arguments put forward are that these tools can play a crucial role in combating climate change by enabling innovative solutions, facilitating information sharing, and promoting sustainable practices.

One argument suggests that the internet and technology can enable innovative solutions by using artificial intelligence and improved sensors to collect real-time environmental data, such as deforestation, temperature, and air quality. This data can help identify strategies to mitigate climate change.

The importance of information sharing facilitated by the internet and technology is also emphasized. Rapid dissemination of knowledge and best practices can enable individuals, organizations, and governments to make informed decisions and take appropriate action in the fight against climate change.

Technology is also seen as a means to promote sustainable practices. Smart grid technologies, for example, can optimize energy distribution and consumption, reducing waste and making energy systems more efficient and environmentally friendly.

The internet and technology are recognized for their potential to reduce greenhouse gas emissions through virtual meetings and remote work, reducing the need for commuting and business travel. This can lead to a reduction in carbon emissions.

Precision agriculture technologies are also highlighted as important tools in the fight against climate change. These technologies can optimize crop production while reducing the use of water, fertilizers, and pesticides, contributing to reduced greenhouse gas emissions.

Stakeholder collaboration is emphasized as crucial in leveraging the potential of the internet and technology in addressing climate change. Collaboration between governments, businesses, NGOs, and individuals can maximize the impact of internet and technology-based solutions.

In addition, the analysis includes a neutral stance on climate change, suggesting that it is a problem for humans rather than the world. This highlights the need for increased awareness and understanding of the interconnectedness of climate change and its global impact.

There is also a call to rethink the system for distributing energy, focusing on efficiency rather than just production. The use of artificial intelligence to distribute energy efficiently to areas with higher or lower consumption is proposed as a solution for reducing wastage and promoting affordable and clean energy.

Lastly, there is a negative view expressed against the extensive use of fossil fuels in energy production. The contribution of countries like China, with significant coal-based energy production, to higher carbon emissions is highlighted. This underscores the importance of transitioning to cleaner and more sustainable energy sources.

Overall, the data highlights the potential and importance of internet and technology in addressing climate change. Collaboration, innovation, and sustainable practices are emphasized as key to effectively mitigating climate change and creating a more sustainable future.

Igor José Da Silva Araújo

Climate change is a pressing issue of global concern that requires urgent attention. It poses a significant threat to our planet, as evidenced by extreme weather events, changes in rainfall patterns, and rising temperatures. Human behaviour plays a pivotal role in the origin of climate change, with activities such as burning fossil fuels and deforestation contributing to greenhouse gas emissions. Acknowledging the impact of human behaviour is crucial in developing effective strategies to combat climate change.

Technology plays a crucial role in the fight against climate change. Renewable energy sources, such as wind, solar, and hydropower, offer sustainable alternatives to fossil fuels, reducing greenhouse gas emissions and promoting long-term sustainability. Adaptive practices, such as cultivating drought-resistant crops and implementing early warning systems, help communities respond proactively to the adverse effects of climate change.

Taking responsibility and acting now are essential to finding effective solutions to climate change. By doing so, we can mitigate its threats and safeguard the well-being of our planet and future generations. It is imperative that we adopt sustainable practices and utilise technology as allies in combating climate change. By addressing our actions and pursuing resilient solutions, we can make a positive impact and ensure a sustainable future for all.

Rosanna Fanni

The analysis highlights several key points regarding sustainable digitalisation. The first major point emphasises the need for better transparency and assessment of the environmental impact of digitisation. The report suggests that there is a lack of systematic data on the environmental impact, particularly throughout the lifecycle of digitisation. To address this, independent measurements and the accessibility of results are required. This would enable a more comprehensive understanding of the environmental footprint of digital technologies and help to identify areas for improvement.

Another important aspect identified in the analysis is the promotion of more entrepreneurial thinking and a compliance culture in relation to environmental sustainability. The argument is that creating environments where sustainability is viewed as an opportunity rather than a hurdle can drive innovation and economic growth. Furthermore, educational programs and awareness initiatives are seen as essential for fostering a culture of sustainability and ensuring that individuals are well-informed about the importance of sustainable practices.

The analysis also emphasises the need for a legal commitment to sustainability by design and default. This implies that ecological sustainability should be integrated into the design process of digital technologies, and the impact of these technologies should be visible to users. By making sustainability a legal requirement, companies will be compelled to consider the environmental consequences of their products and services, leading to more sustainable outcomes.

The circular economy approach is advocated for dealing with critical raw materials. Efforts should be made to reduce political dependence on countries with large raw material deposits. Moreover, the expansion of recycling practices can contribute to reducing the demand for new raw materials. This circular economy approach is seen as central to ensuring the long-term availability of critical raw materials and reducing their environmental impact.

Transparency and accountability in digital education, particularly with regards to artificial intelligence, is another important point raised in the analysis. Manufacturers are encouraged to provide clear explanations about how these technologies work and the implications they have. Additionally, special consideration should be given to children to ensure that they are prepared for the digital world and that their rights are protected.

The analysis also highlights the importance of equitable digital access for all, including older adults, children, and other structurally discriminated groups. Efforts to bridge the digital divide and ensure that everyone has equal opportunities to access digital technologies are crucial for promoting inclusivity and reducing inequalities.

Furthermore, the analysis suggests the need for increased digital sovereignty and the curbing of the power of big tech companies. It is argued that individuals should have control over their own data and decisions about its use. Additionally, educational initiatives are required to enhance media literacy and awareness, ensuring that individuals are empowered to navigate the digital landscape.

The analysis also highlights the significance of transparency in understanding the impact of big tech companies. More global reporting about tech companies is deemed necessary to inform consumers about their practices and allow them to make informed choices.

In terms of standards, the analysis stresses their importance in the strategy of European sustainable digitalisation. However, there are questions regarding how these standards are produced and whether inclusiveness is being prioritised. It is essential to ensure that standards are developed through a collaborative and inclusive process to guarantee their effectiveness and relevance.

Lastly, the analysis underscores the need for political prioritisation of green and sustainable digitalisation. Without political commitment and support, progress in these areas is unlikely to be achieved. Policy decisions and initiatives should prioritise environmental sustainability alongside digital transformation to ensure a sustainable and inclusive future.

In conclusion, the analysis highlights multiple crucial aspects of sustainable digitalisation. These include better transparency and assessment of the environmental impact, promoting entrepreneurial thinking and compliance culture, legal commitment to sustainability, circular economy practices, transparency and accountability in digital education, equitable digital access, increased digital sovereignty, curbing the power of big tech companies, transparency for consumers, the importance of standards, and political prioritisation of green and sustainable digitalisation. Emphasising and implementing these aspects will contribute to achieving a sustainable and inclusive digital future.

Denise Leal

The analysis covers a range of topics related to climate change, technology solutions, environmental law, biodiversity, ESG reports, and engagement between legal and political entities. One key issue highlighted is the lack of necessary infrastructure and knowledge in certain countries to successfully implement technology solutions. In Latin America and the Caribbean, for example, there is a significant deficit in infrastructure needed to support the implementation of these technologies. Moreover, technology solutions are often expensive, making them inaccessible to many people, and there is also a lack of knowledge and skills needed to effectively work with these technologies. This poses a significant challenge in achieving the Sustainable Development Goals (SDGs) related to industry, innovation, infrastructure, and climate action.

Another argument put forth is the need for cheaper technology solutions to combat climate change. The analysis suggests that there are countries that cannot afford expensive technology solutions, and therefore, more effort should be focused on developing and making available affordable alternatives. This would enable broader adoption of these solutions, fostering real progress in addressing climate change and achieving the SDGs.

The analysis also sheds light on the difficulties in ensuring compliance with environmental protection rulings. One of the main challenges identified is the lack of adequate supervisory bodies to effectively monitor and enforce compliance with these laws. Supervisory bodies are often small and insufficiently resourced, hampering their ability to carry out proper supervision. This raises concerns about the overall accountability and compliance of environmental laws, which is crucial in safeguarding the environment and achieving peace, justice, and strong institutions.

The negative impacts of climate change on biodiversity and species extinction are also emphasized in the analysis. It is highlighted that a significant portion of the Cerrado, a Brazilian biome, is projected to be lost due to climate change, resulting in the potential extinction of various species. Additionally, the analysis suggests that climate change has already caused some species to become extinct worldwide. These findings underscore the urgent need for action to mitigate the effects of climate change and protect biodiversity in order to achieve the SDGs related to life on land.

Regarding ESG (Environmental, Social, and Governance) reports, the analysis raises concerns about their authenticity due to potential inaccuracies and lack of a foolproof verification system. While standards and checks are in place for these reports, there is a notable absence of an efficient method to confirm their truthfulness. This challenges the reliability of ESG reports and calls for improved verification systems to ensure transparency and accountability in responsible consumption and production, as well as climate action.

The analysis also highlights the importance of collaboration between legal and political entities for effective resolutions. Successful examples of politicians and lawyers working together on patents and biodiversity issues are cited, underscoring the need for political and legal teams to align their efforts. This collaborative approach is crucial in achieving the SDGs related to peace, justice, and strong institutions.

Lastly, the analysis acknowledges the value of traditional communities’ successful environmental protection methods. The recognition of their effective methods highlights the importance of incorporating indigenous and traditional knowledge systems in environmental conservation efforts. This insight can contribute to achieving the SDGs related to life on land and underscores the need for respecting and valuing diverse approaches to environmental protection.

In conclusion, the analysis highlights several key challenges and recommendations related to climate change, technology solutions, environmental law, biodiversity, ESG reports, and engagement between legal and political entities. It underscores the importance of addressing these issues to achieve the SDGs and calls for collaboration, accountability, and the incorporation of diverse perspectives in environmental and sustainable development efforts.

Speaker

Artificial Intelligence (AI) has the potential to play a significant role in understanding climate change and mitigating its effects. It can optimize electricity supply and demand, reducing energy waste and greenhouse gas emissions. Furthermore, AI can enhance energy management systems, leading to more efficient resource utilization and a shift towards renewable energy sources. It also enables the development of early warning systems for severe weather events, improving preparedness and response efforts.

AI’s ability to provide accurate climate forecasts and predictions is another key advantage. By analyzing large amounts of data, AI algorithms can identify patterns and trends, allowing for more reliable projections of climate changes. Additionally, AI can predict crop yields and determine suitable locations for planting, contributing to stable food supply despite changing climatic conditions.

However, it is important to recognize the negative environmental impacts of technology proliferation. Rapid advancements in electronic devices and their shorter lifespan contribute to the growing problem of electronic waste (e-waste). Manufacturing electronic components is energy-intensive and water-dependent, and improper disposal of e-waste can have harmful consequences for both the environment and human health.

Therefore, it is crucial to use technology responsibly and consider both its positive and negative impacts. Responsible consumption and production of technology should be prioritised, considering environmental implications throughout the product lifecycle. This includes implementing policies and regulations to reduce e-waste generation, promoting recycling and proper disposal methods, and encouraging the development of sustainable and eco-friendly technologies.

Furthermore, leveraging AI to rethink energy usage and improve energy distribution is essential for achieving a sustainable future. By utilizing AI algorithms and advanced analytics, countries can optimize energy distribution networks, making them more efficient and reliable. This can lead to a significant reduction in energy waste and contribute to the goal of affordable and clean energy for all.

To address the global e-waste issue, urgent actions and strong policies are necessary. This involves engaging communities and giving them a voice in policy implementation and necessary actions. Collaborative efforts between governments, industry stakeholders, and individuals are crucial to effectively tackle e-waste and promote responsible consumption and production practices.

In summary, while AI offers promising solutions for understanding and mitigating climate change, it is essential to approach technology with a balanced perspective. Utilizing AI in energy management, climate forecasting, and agriculture can yield significant environmental benefits. However, negative impacts associated with technology proliferation, such as increased energy demand and e-waste, must be addressed through responsible consumption and production practices. With urgent actions, strong policies, and community engagement, AI and technology can be harnessed to create a more sustainable future.

James Amattey

Technology undoubtedly offers numerous benefits to society, but it also has a negative impact on climate change. The staggering number of devices globally, over 6.2 billion, each equipped with two or more chips and requiring frequent charging, contributes to significant energy consumption. These devices, such as smartphones and laptops, perform high computational tasks that demand substantial amounts of power, resulting in increased energy consumption and carbon emissions. Despite the transition to USB-C, a more energy-efficient charging technology, concerns over energy consumption persist.

Furthermore, the worldwide Cloud infrastructure for apps adds to the energy demands. Cloud servers, responsible for hosting and processing data for various applications, consume a significant amount of electricity. This consumption originates from the need to power and cool extensive server networks required to handle the vast amount of user-generated data. As our reliance on cloud-based services continues to grow, so does the strain on energy resources and the subsequent environmental impact.

Moreover, electric and autonomous mobility, hailed as a solution to curb fuel emissions, present a new set of energy challenges. Surprisingly, the computational power required to move an electric or autonomous vehicle exceeds that of conventional vehicles running on fuel. This increased computational power demands a substantial amount of electricity to power the intricate systems that enable electric and autonomous mobility.

To address the rising energy demands of electric vehicles (EVs), national-level policy adjustments are necessary. Expanding the charging infrastructure and implementing mechanisms to seamlessly integrate EVs into transportation systems are vital. Governments can play a key role by providing incentives and support to encourage the adoption of electric vehicles, laying the foundation for a sustainable future.

In conclusion, while technology brings numerous benefits to society, it also poses challenges concerning climate change. The widespread use of devices and the energy demands of cloud infrastructure significantly contribute to energy consumption and carbon emissions. Furthermore, electric and autonomous mobility introduce new energy challenges that require careful consideration. Policymakers and industry leaders must collaborate to balance technological advancements with environmental sustainability, finding innovative solutions to mitigate the negative impact of technology on climate change.

Session transcript

Moderator:
Good morning, everyone. My name is Millenia Mantany, and I would like to welcome you all to this session on climate and technology. Today I'm also joined by my co-moderator, who is online; his name is Igor. In this session we'll also have a diverse set of perspectives, from researchers, advocates, and industry leaders, to share insights and explore how technology can be a catalyst for change. Okay, so before we get deeper into our discussion, I'd like to introduce our speakers for today, and we have them here. We have Rosanna, we have Denise, we have João, and we have Sakura; and online we're also joined by James. I would really like to thank and appreciate each one of you for making it here today and joining us in this session. Today is my first time moderating a session, so I'm not so sure how I feel. I'm excited, and I hope you enjoy. Before we move further into this discussion, I would like to say something about this session. As we all know, climate change has been one of the most pressing issues in the world so far, and we have seen the role that technology plays in addressing it. We have seen different approaches, like renewable energies, and we've also seen the efforts that organizations and individuals put into addressing climate change. One thing I know is that with great power comes great responsibility. So the more we use technology to solve environmental problems, the more we need to find ways to ensure that technology doesn't harm us in turn. And now I would like to invite my co-moderator online, Igor, to add something before we get deeper into our discussion. Can everyone hear me? Yes? Thank you.

Igor José da Silva Araújo:
Good morning, and good night to you too, dear participants. I'm Igor José da Silva Araújo, raised in a small town in the Brazilian Northeast. I'm a young activist, a law student, and a representative of civil society from the Latin American and Caribbean group. It's an honor to be here today to discuss a topic of great importance: climate change and technology implementation. We are currently at a critical juncture in human history, where climate change poses an imminent threat to our planet. Every day we witness the devastating impacts of this change worldwide, from natural disasters to the loss of biodiversity and threats to food security. Our common home, this planet, is undergoing unprecedented climate change, and we don't need scientific data to confirm in this moment what we witness daily. Extreme weather events, changes in rainfall patterns, and rising temperatures are palpable evidence that Earth is not as it used to be. As young activists and representatives from different sectors, we recognize that we represent not only the future, but also the present. We are not just the generation of tomorrow; we are the generation of today. We understand that our current actions have a direct impact on the future that we want to build, so the urgency of the climate issue drives us to act now, to take responsibility, and to pursue effective solutions. This is where technology plays a crucial role. However, we are not alone in this journey. History has taught us that humanity is capable of overcoming the most complex challenges when we come together and act with determination. Climate technologies, including renewable energy such as wind, solar, and hydropower, as well as adaptive practices like drought-resistant crops and early warning systems, are allies in the fight against climate change. They offer hope and concrete solutions to address this global challenge. But are they truly efficient?
And is that all we can do in this moment? Our round table will address fundamental questions like these and aims to stimulate new perspectives from each of you. This session aims to broaden perspectives on the role of technology in addressing climate change, identify the types of technology and investment needed to achieve our goals, and understand the implications of this environmental scenario. Our discussion is based on the principle that while nature reacts to this change, it is human behavior that plays the fundamental role in its origin. As we progress in this panel, let us remember that technology is a powerful tool, and it is how we use it that makes the difference. We are here not only to discuss the challenges, but also to share concrete ideas and solutions. And most importantly, we are here to inspire action. This is an opportunity for all of us to learn, share, and collaborate on potential technological solutions that can transform economic, social, educational, and environmental aspects of life and ultimately improve the quality of life worldwide. Finally, I appreciate the presence and interest of each one of you in this discussion, which is vital for our planet. I will be here to provide support with the online moderation. Thank you so much for now.

Moderator:
Thank you very much, Igor. Yes, as he said, the main aim of this session is first to raise awareness of the role that technology can play in addressing climate change, and also to make some recommendations on how we can improve policies on climate change. In this discussion today, we'll have some questions guiding what the speakers present. The first is: how can the internet and technologies collaborate to fight climate change? The second is: which kinds of policies on technology and the internet could contribute to addressing climate change? And the third: what are the negative impacts of technology on climate change? So, without wasting much time, I'd like to give the floor to our first speaker, Sakura. She'll introduce herself, her stakeholder group, and where she comes from. And one thing to note: I hope you have noticed that on our panel today we have young people, so I'm very excited to hear from them. Thank you very much. Welcome.

Sakura Takahashi:
Thank you, Millenia. I'm Sakura Takahashi from Japan. I'm speaking here today on behalf of Climate Youth Japan, which is a youth environmental NGO in Japan. I'm a student studying climate science and geospatial analysis at Keio University. In addition, I have several experiences of being part of youth interventions in the United Nations, such as attending the climate change COP and the Asia-Pacific Regional Ministerial Forum of UNEP as a delegate of the Children and Youth Major Group, and serving on the OECD Youth Advisory Board 2022. In connection with my activities and area of expertise, I'm excited to talk about the synergy of climate change and technology implementation. I would like to answer the first and third questions. The first question is: how can the internet and technologies collaborate to fight climate change? Well, we have various technological ways to tackle climate change-related issues, such as IoT, artificial intelligence, blockchain, climate prediction and forecasting, and so on. I would like to discuss how artificial intelligence, AI, can accelerate climate action from the viewpoints of mitigation and adaptation. In the climate change discussion we mainly have two approaches: mitigation and adaptation. Mitigation is reducing greenhouse gas emissions to alleviate climate change. Adaptation is taking measures to adapt to the effects of climate change, including reducing the risks of adverse effects and exploring new ways to live healthily in a changing climate. In terms of mitigation, artificial intelligence can optimize electricity supply and demand. On the supply side, AI algorithms are being developed to optimize electricity supply by reflecting weather conditions and demand-side electricity usage. AI can also be used for building energy management in urban areas, where electricity is primarily consumed.
For example, one study found that energy consumption can be expected to fall by 9% during the summer season by learning the relationships between the operation data of heat-source equipment and total energy consumption in a building, and applying an optimization model created from the learning results. That is how AI can contribute to optimizing the supply-demand balance from production to consumption of electricity, contributing to the reduction of GHG emissions. In terms of adaptation, AI can enable us to develop early warning systems for severe disasters and more accurate climate forecasts and predictions. Improvements in computing capability through supercomputers and the assimilation of global observation data from satellites have enabled more accurate and consistent weather and climate forecasts than were possible several decades ago. This has made it possible to reduce damage by taking early countermeasures, such as evacuation, ahead of extreme weather events and associated disasters. In addition, satellite data and climate models can be used to predict crop yields and determine suitable locations using machine learning, thereby contributing to a stable food supply under ever-changing climatic conditions. In this way, AI can help humans adapt to climate change's adverse effects and find new opportunities. From these practices, I firmly believe that artificial intelligence can play an innovative role in tackling climate change. And I will move to the third question: what are the negative impacts of technology on climate change? Technology, including AI, significantly contributes to our urgent need to respond to climate change, as stated in the previous questions. However, it also has negative impacts on the environment and our lives. I'd like to elaborate on this point in terms of energy consumption and the environmental impacts of the hardware lifecycle.
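
The building-energy approach described above, learn a model from equipment operation data and then choose the settings that minimize predicted consumption, can be sketched as follows. All data points, the model form, and the setpoint variable are illustrative assumptions, not figures from the study.

```python
# Minimal sketch of model-based building-energy optimization: learn the
# relationship between a control setting and total energy use from
# (illustrative) operation data, then pick the setting that minimizes
# predicted consumption. All numbers are made up for illustration.
import numpy as np

# Hypothetical logged operation data: chiller setpoint (degC) vs. total
# building energy use (kWh) for comparable summer days.
setpoints = np.array([5.0, 6.0, 7.0, 8.0, 9.0])
energy = np.array([420.0, 395.0, 385.0, 390.0, 410.0])

# "Learning" step: fit a degree-2 polynomial model to the data.
model = np.poly1d(np.polyfit(setpoints, energy, deg=2))

# "Optimization" step: search candidate setpoints for the lowest
# predicted energy use.
candidates = np.linspace(5.0, 9.0, 81)
best = candidates[np.argmin(model(candidates))]
print(f"recommended setpoint: {best:.2f} degC, "
      f"predicted use: {model(best):.1f} kWh")
```

Real building-management systems use far richer models and many more inputs, but the learn-then-optimize loop is the same.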
In terms of energy use, the proliferation of electronic devices, data centers, and communication networks has driven a surge in energy demand, met primarily by burning fossil fuels. Data centers, which power our digital world, are notorious energy guzzlers. According to the IEA, global data center electricity consumption in 2022 was estimated at around 1 to 1.3 percent of global electricity demand. Moreover, everyday gadgets like smartphones and laptops, from manufacturing to operation and disposal, collectively add to energy consumption and carbon emissions. In terms of lifecycle hardware impacts, the production of electronic devices relies on resource-intensive processes, including the mining of rare minerals and metals, which emits greenhouse gases and pollutes water. Manufacturing electronic components is energy-intensive and water-dependent. Rapid technological advancement leads to shorter product lifecycles, resulting in a growing electronic waste (e-waste) problem. If not managed properly, e-waste disposal can release hazardous chemicals into the environment, exacerbating pollution and health risks, especially in developing countries. Additionally, planned obsolescence practices incentivize frequent replacements, driving resource consumption and e-waste generation. As technology has both positive and negative impacts on the environment and the climate, we need the literacy to understand both aspects and to use it wisely to create a more sustainable life on Earth. Thank you.
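
To put the IEA share quoted above into absolute terms, a quick conversion can be sketched; the global electricity demand figure used here (~26,000 TWh for 2022) is an approximate assumption, so the result should be read as a rough range.

```python
# Converting the IEA share (about 1 to 1.3% of global electricity demand)
# into absolute terms. The global demand figure is an approximate
# assumption (~26,000 TWh for 2022), so treat the result as a rough range.
GLOBAL_DEMAND_TWH = 26_000

low, high = 0.010, 0.013
print(f"data centres: ~{low * GLOBAL_DEMAND_TWH:.0f}"
      f"-{high * GLOBAL_DEMAND_TWH:.0f} TWh per year")  # ~260-338 TWh
```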

Moderator:
Thank you very much, Sakura. Thank you for your contribution on how AI can be used to mitigate climate change. And next, I would invite James, who is online. Please welcome. The floor is yours.

James Amattey:
All right. Thank you very much. I believe I am audible. So my name is James Amattey, and I am from Ghana, specifically from the African group. I come from a background in software innovation, where we use software tools to improve daily lives, from education down to how we move goods in delivery and logistics. Now, I'm going to take this topic from a different angle, and that is question three: how technology is negatively affecting climate change. Technology is an enabler of the digital economy we are in. Fortunately, it's able to help me join you all the way in Japan even though I'm seated right here in Ghana. But, unfortunately, there are also costs that this imposes on our climate. Sakura mentioned some of them, and I'm going to highlight a few more. Across the 6.2 billion devices that exist globally, according to Gartner, there are over two chips per device. These chips handle a wide range of processing and computational work, and that processing leads to battery drain, which requires frequent charging. This frequent charging relies on a wide range of tools and mechanisms that have been put in place, including the widely known USB-C, which has been a standard since 2012, I believe, and is now in the latest model of the iPhone, released a few months ago. Our research has shown that, despite the change to USB-C, there is still a high level of energy consumption required to keep phones running because of all the apps that exist today. The Google Play Store currently has over 1.2 million apps, and all of these apps require computation of one sort or another to handle whatever processes they run. These computations usually rely on the cloud, which Sakura mentioned. Now, the cloud is itself an enabler of security and of global service exportation.
So, for example, Uber is made in the US, but here in Ghana I'm able to use Uber, and that's because of the cloud; but the cloud also has its downsides. Because of the structure of the cloud and the infrastructure and investment that go into it, you sometimes realize that it takes a lot more to run these apps than it actually costs to create them. And these investments sometimes lead to that negative effect. Now, in Africa, where energy consumption is very high but production is very low, this sometimes becomes a deficit for the society that is supposed to benefit, because there are places with, should I say, energy inefficiencies. So being able to balance national production against the consumption of users and the requirements of these devices sometimes becomes a burden and actually leads to the creation of more energy. That creation can be a good thing, but sometimes we need to ask ourselves: what is the source of that energy? Fortunately or unfortunately, most of that energy relies on fossil fuels, and so there is still that negative carbon effect going on. Now, because I come from the mobility space, I currently focus on mobility as a domain, that is, electric mobility and autonomous mobility. And this also places further, should I say, constraints on the energy produced. Previously, cars ran on fuel and there was not too much reliance on electronics, but now, with EVs and the birth of autonomous mobility, the computational power required to move a car autonomously from one point to another is actually greater than what it cost when cars simply ran on fuel.
So even though we have solved one of the problems that came with mobility, that was fuel and the release of carbon dioxide and carbon monoxide through the exhaust, now we have a different problem: trying to understand how much electric power is required to move these vehicles, how much electric power needs to be generated to charge and move them, how much of the national grid has to be allocated to drivers who are switching over to EVs, and how much of a policy adjustment we have to make at a national level to accommodate the needs of EVs, because EVs are moving from, say, personal automobiles to industry-level automobiles, as large as construction vehicles. And these are going to place a very heavy burden on the grid and, ultimately, the climate. So hopefully by the end of this talk, we will be able to delve into how we got here and how we can mitigate some of these problems without necessarily injuring the innovation that is taking place. I hope this sheds a little more light on the conversation, and thank you for the opportunity.
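
The grid-allocation question raised above can be made concrete with a back-of-envelope calculation; the fleet size, daily mileage, and per-kilometer consumption below are illustrative assumptions, not figures for any particular country.

```python
# Back-of-envelope grid load from a national EV fleet. All inputs are
# illustrative assumptions, not figures for any particular country.
def fleet_demand_gwh_per_day(n_evs: int, km_per_day: float,
                             kwh_per_km: float) -> float:
    """Daily charging demand of an EV fleet, converted from kWh to GWh."""
    return n_evs * km_per_day * kwh_per_km / 1e6

# Assumptions: 1 million EVs, 40 km driven per day, 0.18 kWh/km.
daily = fleet_demand_gwh_per_day(1_000_000, 40, 0.18)
print(f"~{daily:.1f} GWh of charging demand per day")  # ~7.2 GWh
```

Estimates like this are how planners size the share of national generation and grid capacity that EV adoption will require.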

Moderator:
All right, thank you very much, James, for your contribution. Without wasting so much time, I'll welcome the third speaker, João. Yes.

João Vitor Andrade:
Hi, everyone. I'd like to thank you all for being present here today. My name is João Vitor, I'm from Brazil. I'm a law student, and at the moment I'm an intern at the Brazilian Supreme Court. This is an important topic to discuss today, and I think we have to bring it up at events like this one, the IGF, and at other events at the United Nations. So I will bring up some ideas about the first question: how can the internet and technologies collaborate to fight climate change? According to some global research, Latin America alone could lose about $17 trillion between 2021 and 2070. On this topic, I'd like to emphasize that the internet and technology can play significant roles in the fight against climate change by enabling innovative solutions, facilitating information sharing, and promoting sustainable practices. Here are some ways technology can help us fight this problem. I don't have much time to talk about them all, but I'd like to highlight a few. First, data collection and analysis. Artificial intelligence can leverage improved sensors that collect real-time environmental data, such as deforestation, temperature, or air quality, which can be used to support climate monitoring and research. The sensors can be used not just to monitor the data, but to instantaneously warn policymakers or the responsible entity about a problem. In Brazil, for example, we have INPE, an important institution that has been doing excellent work and has been helping the government create solutions, not just for Brazil but for Latin America. Advanced analytics and machine learning algorithms can process vast amounts of data to identify patterns, trends, and anomalies related to climate change, helping researchers and policymakers make informed decisions. The second one: renewable energy integration.
Smart grid technologies can optimize the distribution and consumption of energy, making it possible to reduce energy production from fossil fuels such as coal and natural gas, thereby cutting carbon emissions. In Brazil, for example, the energy wasted annually is equal to the consumption of 20 million houses over one year. That is far more than people might think. We could use this energy, for example, to help Latin America, and I think the same happens in Europe, Asia, and other continents. Energy management systems and demand-response technologies can help balance energy supply and demand efficiently. Next, carbon footprint reduction. Virtual meetings and remote work, made possible by AI and the internet, can reduce the need for commuting and business travel, lowering greenhouse gas emissions. This phenomenon was experienced by all of us during the pandemic, showing us that it is possible to re-educate society for this new moment in history, which demands our effort to achieve a common objective. E-commerce and digital services can replace traditional brick-and-mortar retail, reducing the environmental impact of physical stores. This is a good option not just for Latin America, but for North America, Europe, Asia, and Africa. We know that all continents have a lot of stores, and that these contribute to emissions of gases like CO and CO2. We can rethink these things to reduce carbon emissions and help combat climate change. Next, sustainable agriculture, which is an important point for Latin America because, in countries like Brazil, a significant part of GDP comes from agriculture.
Precision agriculture technologies can optimize crop production, reducing the use of water, fertilizers, and pesticides, which contributes to lower greenhouse gas emissions. Internet-connected sensors and drones can monitor soil conditions and crop health, enabling more efficient farming practices and helping combat climate change. This is a suggestion, as I said, not just for Latin America but for all countries with large agricultural production. I think we should discuss this topic, because Brazil has been working on it to solve the problem of emissions from agricultural production, and it is an important point for countries like India and China, which have large production of many things. Next, climate communication and education. Social media and online platforms can raise awareness about climate change and mobilize global efforts. We can use online platforms like Telegram, WhatsApp, and others to reach many people. This is important because, in my country, for example, about 80% of the population has access to the internet, so we can use these tools to reach people who often don't know about these important topics. Too often we discuss these important points only in places like this one, and people around the globe don't know about them. Many times we are talking about climate change in a place like this one while many people in Brazil, for example, don't know about it, and some politicians, like the former president of Brazil, helped ensure that discussions like this one did not reach the population. Next, the circular economy; and since my time is almost up, I will just name the remaining points: the circular economy, climate monitoring and early warning systems, transportation, and climate modeling and prediction.
Collaboration between governments, businesses, research institutions, and individuals is crucial to leverage the full potential of the internet and technology in the fight against climate change. Policymakers can also incentivize and regulate sustainable practices and the development of eco-friendly technologies to accelerate progress. Climate change is not a problem for the planet, but a problem for humans. We're not talking about the planet; the planet will continue to exist. We are talking about the existence of humans. If humans don't treat this problem as an important one, humanity will end its own existence within years. So we have to talk about this. We have to raise it with governments, in places like this one, and in colleges. And I think if we do this homework, we can help restore the health of the planet. So thank you so much.

Moderator:
All right, thank you so much, João. Because of the time, I'll advise the next speakers to take just a few minutes. So I'll welcome Rosanna. Yes, please take the floor.

Rosanna Fanni:
Thank you. Thank you very much. I'm also very delighted to be able to speak here today. My name is Rosanna Fanni. I am German and Italian, and I was based in Brussels until just a few weeks ago. Today I'm actually speaking on behalf of the German Youth IGF, which I'm very happy and honored to represent here. We, as the German Youth IGF, have been discussing this topic as well. We convened in September as part of the German IGF, the local IGF that happened in Berlin, and we had an event where we discussed the intersection between sustainability and digitalization: how can the two go hand in hand? I will share some of the results we discussed and wanted to bring to Kyoto, to this global IGF. I also thank the entire team, which is still sleeping in Europe, for all the work we have done together. But first, let me share a few words about what the European Union is doing in the green and digital space. In Europe, I think we understood quite early that this topic is crucial. Greta Thunberg and the climate movement originated in Europe, as you know, in Sweden, and we feel a big responsibility, also because Europe is a big emitter of climate emissions. This is why, in 2019, the current European Commission came up with very ambitious climate goals: by 2030, so in about six years already, we should cut our emissions by half, and by 2050 we should be climate neutral, net zero. That is very ambitious for the European economy, which often still relies on very traditional, resource-intensive ways of fueling itself. But we have also understood that we need to do it.
We need to go ahead, and so the European Commission has adopted a strategy called the EU twin transition: basically the combination of the green and digital transitions. The idea is that only through technology and data-sharing innovation can we make our economies more sustainable and climate friendly. We have already heard that there are certain contradictions in this topic; we know that the green and the digital can also clash. As my fellow panelists have said, there is, for example, more technological waste, e-waste, and there are data privacy concerns when more data is shared. We also face difficulties when, for example, new large language models consume a lot of energy, and so on. At the German IGF in September, we thought about the enabling conditions and what policymakers would need to enable a more just green and digital transition that respects the rights of citizens, not only in Europe but globally. We came up with three different areas for our recommendations: first, ecology, or basically the environment; second, the economy; and third, social aspects. I will start with our recommendations for ecology. The first point we concluded is that we need better transparency: better systematic data on the environmental impact of digitization, as we heard earlier. We need to understand the entire lifecycle of a device, not just the internet connection my tablet is using now, but everything from the moment the tablet is designed, conceptualized, and assembled in a factory. And we need more transparency for consumers: we should know how much the materials and digital devices we use actually consume.
The measurements should also be carried out independently, so it is not the companies themselves, who may make some numbers look nice, but independent bodies doing the measuring. And the results should be made available in an accessible form: not very complex reports that you have to study for hours, but something clear and visible for users. The next point on ecology is that we want to promote entrepreneurial thinking and a compliance culture. We argue that we need to create environments in which environmental sustainability is seen as an opportunity by startups and entrepreneurs, one that brings economic advantages and long-term investment, instead of something where you merely have to comply and tick checkboxes, so to say. We think this can happen through educational and awareness-raising programs, in order to ensure that innovation and sustainability go hand in hand. The third point on ecology is that we want a legal commitment to sustainability by design and sustainability by default: in the design process, ecological sustainability should already be included and weighed as a factor of importance alongside economic factors and performance-related aspects, so that consumers can really see how sustainable a device is or how sustainably it was made. And this should be sustainable by default. Okay, then I will move to the economy, where we have two points to present. The first is independence: we believe that the circular economy approach should play a central role in reducing political dependence on individual countries with large deposits of critical raw materials. Maybe a little information square for those of you who haven't yet heard about critical raw materials.
These are rare earths and minerals that are in everyone's phones and tablets, but they are also crucial for, for example, solar panels and autonomous vehicles. Without those critical raw materials, we could not produce the technologies we use today and that we rely on for sustainable energy production, for example solar panels. But the problem is that these critical raw materials are mostly concentrated in a few countries, so it's very hard to get access to them, and most countries are very dependent on those few countries to allow them access. So our point is really that we need more independence, and also to expand recycling, which is my next point. Through recycling and other circular economy initiatives, we could reduce our dependence on those countries and instead use the critical raw materials we have already extracted, in order to strengthen economic stability and security. When it comes to research funding, we also want to extend funding for the applied circular economy, so that researchers can better conceptualize the value chain of those materials and how new jobs might be created along it. Last but not least, the social aspects, because we also believe that sustainability and digitalization should benefit all, and not just a privileged few, not only in Europe but worldwide. A key concern is that we still need more transparency and accountability in the context of digital education, especially artificial intelligence. We believe that manufacturers should have an obligation to explain their products, especially to children. To us it may be clear when we see that something is made by an AI; we know it, we understand it. But it's still very difficult to explain to children.
And we think it’s really important to prepare children for the digital world and also make them aware that there are risks and challenges. Then we will put forward another point on participation. So we want more equitable access for all population groups including also older people and children and also structurally discriminated groups and I think this is also very much in line with the panel because in Europe we have quite a good access already but if we look worldwide then we need much more and what needs to be done much more in terms of connectivity and enabling people to meaningfully participate in the digital environment. And last but not least we also hope to increase a digital sovereignty so we mean that the internet stays open free and secure and that we can have data sovereignty so that the data is not captured and sold by big tech companies but that individuals can decide over their own data where it’s going and what it is used for. And last but not least also educational projects especially in media and media training and media awareness and also to include the common good in digital policy. Thank you.

Moderator:
Thank you very much, Rosanna. Thank you for sharing the important points you have taken from the Youth IGF Germany. And lastly, I would like to invite Denise.

Denise Leal:
Hi everyone. I am Denise Leal from Brazil. I am here representing the Latin American and Caribbean region, and I am happy to say that I am also a former fellow of the Youth from Brazil program. I see some people here who came with the delegation. But today I am also representing the private sector. I know I am young, but yes, I am representing the private sector. I am also a researcher at the University of Brasília, and my research is related to this topic; I am part of the natural resources law and sustainable development research group. I would like to add some things that we have researched to this discussion. The first thing I would like to say is that we know climate change exists, that it is a problem, and that we have solutions. We have heard a lot about the technological solutions we could implement to help solve the problem. But there is another aspect: do we have the necessary infrastructure in every country to implement those solutions? Can we really implement them? In Latin America and the Caribbean we do not have the necessary infrastructure to implement all of these technologies. It is expensive, and there are also many people who do not have the knowledge to work with these technologies. We need to put more effort into making cheaper technological solutions for the countries that cannot buy the expensive ones, which work really well but are not accessible to these countries. Another point I would like to bring up is legal disputes: when it comes to disputes over technology and the environment, what is the outcome? What do judges decide? What do we have?
We have researched this in Brazil, in a cooperation between the University of Brasília and partners in Chile, France and Canada, and we have noticed that, yes, there is a lot of litigation on environmental themes, and some good decisions that protect the environment, but then other problems arise, such as how to guarantee that these decisions will really work. What we see in Brazil and in the other countries we have studied is that you get a legal decision saying that the environment must be protected, that there is a law requiring it, but in the end there is no way to ensure compliance. Oversight is not easy, and this is a huge problem: the control, accountability and enforcement of environmental laws and court sentences are very fragile, and many times the supervisory bodies are small and incapable of carrying out true, constant, daily supervision. So I wanted to add this important aspect that we have researched at the University of Brasília, because sometimes we think that, okay, we have technology and we can implement it and we are going to solve climate change; but first, it is expensive, and secondly, it is hard to keep watch over it. One of our policy questions is: which policies can we build to guarantee that we are really fighting climate change while implementing technology? So I would suggest that, more than thinking about new laws, we should ask how we can make the environmental laws we already have really work. What I think is that we need to work harder on compliance with the laws that already exist.
I think that everyone, the private sector, civil society, academia, the technical community, the United Nations and all governments, especially those with the economic capacity and interest, should put more effort into finding cheaper technological solutions to fight climate change; otherwise there are countries that will not be able to implement them. To end my speech: we talk as if the world is ending for us, but the world has already ended for some species. 45% of the Cerrado, a Brazilian biome, could be lost with an increase of just 0.7 degrees Celsius. So it is not one degree, it is 0.7, less than a degree, and almost half of the whole biome would be gone with this increase. We are worried about our own futures, but what about environmental rights? Don’t these species have the right to exist? Thank you so much, and I also want to say thank you to my family and friends who are here. Obrigada. Alright, thank you very much to our dear speakers

Moderator:
and I hope all of us have heard what they have presented. From this discussion, what I have noted most is the sense of accountability and responsibility that each one of us has in making sure that climate change is really addressed. But since we are out of time, I would like to open the floor to our participants, if you have any contribution

Audience:
any question? Yes, please use the mic behind. Yes. Hello everyone, I’m Manu, I’m from Brazil and I represent Instituto Alana, an organization dedicated to the protection of children’s rights. When we talk about the environment and digital rights, it is very special to talk about children, and I thank you for bringing up this point. One thing I would like to add to this debate is: how can we think about the everlasting effects of digital colonization when we are talking about global solutions to the problems we have now? I think a great example is what happened earlier this year in Uruguay, where Google wanted to build a very big data center. We have talked so much about AI in this forum, and about solutions that need this kind of infrastructure, but the people there could not have water for their own consumption, and we had a government privileging the interests of a private company, a global power, over the interests of the local population. So that is my question: global solutions are very important, but we are suffering the everlasting effects of colonization, so how do we think about digital sovereignty when we think about these solutions, and how can countries like Brazil build solutions that are not just serving the purposes of the big global interests and companies dominating this economic debate? Thank you very much for your contribution, and we’ll move to the next person. Good morning everyone, I’m Phelps, I’m from Brazil, I’m part of the youth delegation from Brazil, and my question is about how we could deal better with electronic waste globally.
As Sakura mentioned, electronic devices have shorter life cycles as time goes by, and this problem of obsolescence is a really big deal. I can say this because in Brazil, for instance, we have some local initiatives for recycling, and that is really important for us. Besides that, when it comes to electronic devices, it is not that simple: those initiatives do better with paper or plastic, but electronic devices require another level of treatment. Lithium and other substances are really harmful to people and to the environment, even when they are used in technologies that could help us against climate change. So we have a sort of cycle there: we create technologies that could help us against climate change, but we use some of these substances in them. My question is: where can sustainability by design appear in this scenario of high amounts of technological waste? As the UN says, this is a global issue, and global issues are connected, and that matters when we talk about climate change. Thank you very much. Hi everyone, sorry for my voice. I am Carla Braga. I am a mentor of the Brazilian youth delegation as well, and Executive Director of the Amazonian Youth Corporation for Sustainable Development. I wanted to ask whether we have any successful examples or experiences of facing the impacts of climate change in the Global South, if possible considering the Amazon region, where technology has been used to face the challenge of climate change. Thank you. Thank you very much. Alright, do we have a question online? Igor? Any question online? No? Alright, so we’ll, ah, sorry, please welcome. Hi, this is Jasmine from .Asia, Hong Kong.
I agree that it also depends on each nation’s and each territory’s capacity and how they deal with climate change; the key is how we localize the so-called global solutions into each different context. But the interesting thing I found is that on day zero we at .Asia actually had a relaunch here, and we have done a study of 14 jurisdictions covering energy consumption, efficiency, and the economic aspects of those jurisdictions. It is actually interesting that Hong Kong is not in as good a position as we thought; I am kind of sad to say that Hong Kong is not performing very well. So I just want to raise a point here: it seems it is not just about capacity, because we obviously do have the economic power and the infrastructure to localize global solutions to tackle climate change. I would also like to get some inspiration, and maybe good case practices, from you on how to identify the decision-makers to talk to about your agenda and your thoughts on tackling climate change. How do you identify them, how do you lobby them and negotiate the ideas you have as youth? I think that is what I wanted to ask. Thank you very much. Thank you very much, we, yeah, let’s do that. Hi, I’m Irene from, oh, sorry, what is your name? Hi, I’m Ethan from Hong Kong as well. I believe Sakura has just talked about how the internet and technology can collaborate to fight climate change. I have actually been working on some projects related to this topic.
And I have just a very short question: how can the Internet of Things be harnessed to create a more energy-efficient system and reduce carbon emissions? That’s all, thanks. So hi, I’m Irene from IEEE, and it is very refreshing to see all these young people; I think I should put you in contact with the IEEE young professionals task force on climate change. Innovation is very close to IEEE, and I analyzed patents for a living for a long time, working also with NP Brazil. I think we know for a fact that we have enough technological innovation, and the examples you mentioned around energy efficiency, the win-win situations, are an easy sell for companies. But the question is: who is taking accountability for adopting the other technologies, which are very costly? I wonder whether you have thoughts beyond the incentives that governments could give, because the question is, are these enough? Rosanna mentioned before the importance of lifecycle assessment, and there is a discussion in Europe, in the European Green Digital Coalition, about how we define, for example, avoided emissions when we talk about net impact. So I was wondering what your thoughts are on systems thinking in that, and what the role of standards is. Thank you so much.

Moderator:
Thank you very much. Since we’re out of time, I’ll ask the speakers to respond to any of the questions asked. Please, one minute or so each.

Rosanna Fanni:
Okay, thanks a lot for so many questions. I’m glad to hear that we have sparked so many ideas and thoughts. I will just touch on two points: one on colonization and the other on standards. First, on colonization: absolutely, that’s a huge problem. I think big tech companies especially have way too much power, as we know, and there should be more concerted efforts by the United Nations and by other supranational and international organizations to curb the power of those tech companies. But at the same time, I think, again, transparency is super important, because if consumers could really see the impact of, for example, a Google data center in Uruguay, as you said, perhaps there would also be a change of mind on the consumer and recipient side. So I think it is really important to bring more transparency and to have more global reporting about those cases, because there are similar cases, such as Meta’s Open Africa ICT project, where they scan biometric data of citizens and use citizens to map, so to say, the 3D landscape. The second point, on standards: I had actually not mentioned it, but standards are also one part of the European strategy, of course, to standardize green digitalization together. I think standards are absolutely crucial, but again, the question there is how those standards bodies produce those standards. Is the process inclusive enough? Is it representative of civil society and of members that cannot afford to be in those standardization bodies? And definitely, in the end, all these questions that we discuss are ultimately political questions that policymakers have to tackle first, and if policymakers do not put their priority on green and sustainable digitalization, we will not get anywhere.
So I think it all originates in political priorities, in making this even more of a priority topic, and then works through standardization and the other measures I have already mentioned. Thank you. Thank you very much,

Moderator:
Rosanna. One more contribution, please, and then the rest we can take outside. So thank you for

Denise Leal:
this amount, this large amount of questions. We are happy with your participation. I have noted something here; I wanted to speak to the questions about successful examples, about lobbying, and about standards. Beginning with the last one, on standards: we have ESG with all those standards, but I think there are a lot of lies in the reports. So there is a problem: how can we really read these reports on climate, made against the standards that deal with climate change, and believe them? I think we need to work more on how we can verify these reports and how they are made, because the standards are good, but we have no real way to check whether the reports are true. And again we are back to the problem of compliance: how we check these things and how we can make them really true. On successful examples, and also on lobbying, I think we can talk about these together. We have studied some examples where politicians and lawyers have worked together to create solutions on patents and biodiversity. We know that when it comes to international treaties there are problems, because you cannot solve things only through legal disputes; you need politicians to help you. What we have noticed in the international dimension of environmental legal disputes is that when these two groups work together, the legal group and the political group, you may end up with a good example of success. I cannot say that we have many successful examples; we know that traditional communities have successful examples of protecting the environment, but they are really small.
It is something we can adopt in our small communities, but if we are looking globally for a solution, we must make our politicians and our legal teams work together; the judge and the politician must work along the same lines. They must be aligned; that is what I think, and what we have noticed in our research. Thank you.

Moderator:
30 seconds, please. Okay, I will answer the question from, I forgot the name of the

João Vitor Andrade:
of the gentleman from Hong Kong, about the electric system, about consumption and distribution, and how we can build a better system. I will give the example of my country to answer your question, because I have more knowledge about it and can explain it better. The question we have to think about is how we can build a better system where we distribute well, and not just think about production but about how we use energy correctly in our countries. Many countries, and I include Brazil here, think only about production; for example, in my country at the moment we have a discussion about offshore wind energy production, and in Brazil there is a lot of debate on this topic, because it is cheaper to produce more energy than to rethink the system and build better distribution. So what I think countries have to do is use AI to find out which regions of the country have higher consumption and which have lower consumption. We can use AI to obtain these numbers and rethink the system, because then we would not need to produce more energy, just distribute it correctly across the country. If we do that, we can reduce, for example, the use of fossil fuels like coal; large countries like China have a great share of energy production based on coal. So if we build a better system to distribute energy correctly, we can reduce the use of fossil fuels, reduce carbon emissions, and contribute to the fight against climate change. I don’t have much time, but thank you so much for the question. Thank you very much. Thank you for the various questions. I’d like to

Speaker:
answer about e-waste and energy-efficient systems. On e-waste and how we can tackle it: it is definitely a global problem, I totally agree, because e-waste is not confined to single countries; there are e-waste-producing countries and consuming countries. I think community engagement, and policies that support those activities and initiatives at the local level, are important, because even if we create good policies, if the people on the ground cannot take action or bring their voices to the decision-makers, the policies are not implemented. So taking these problems seriously and acting urgently is really important. A first big step is to learn what happens in other areas of the world, and to share knowledge between them, because even where the problems take different forms in different parts of the world, we can learn something from each other. I also think we need more opportunities to discuss and learn from case studies, so that people can get a feel for them and be involved in the same programs. Regarding energy-efficient systems, and how the internet can support them, as the speaker from Hong Kong asked: I think smart grids and managing energy consumption and production at the local level are really important. In some areas of Japan, mainly in the metropolitan cities, we have local heat management systems, and we are trying to develop smart grid systems that can manage energy supply and demand in specific local areas. So smart grids and local-level energy management systems are really important. Thank you.
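The supply-demand matching idea behind the smart-grid and distribution points the speakers describe can be sketched in a few lines. This is a toy model only; the region names and figures are invented for illustration, not taken from the session.

```python
# Toy model of local grid balancing: match surplus regions to deficit
# regions before falling back to additional (e.g. fossil) generation.
# All region names and numbers are illustrative.

def balance(supply, demand):
    """Return each region's net position (supply minus demand) and the
    shortfall that redistribution alone cannot cover."""
    regions = set(supply) | set(demand)
    net = {r: supply.get(r, 0) - demand.get(r, 0) for r in regions}
    surplus = sum(v for v in net.values() if v > 0)
    deficit = -sum(v for v in net.values() if v < 0)
    shortfall = max(0, deficit - surplus)
    return net, shortfall

supply = {"north": 120, "south": 80, "metro": 60}
demand = {"north": 70, "south": 90, "metro": 110}
net, shortfall = balance(supply, demand)
print(net)        # e.g. {'north': 50, 'south': -10, 'metro': -50} (order may vary)
print(shortfall)  # 10
```

In this invented example, redistributing the northern surplus covers 50 of the 60 units of deficit, so only 10 units would need new generation instead of 60, which is the point João makes about distribution reducing the need for extra production.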

Moderator:
Thank you very much, Sakura. James, please, one line to close, to conclude your… All right, since he’s not there, I’d really like to thank each one of you for participating and joining us in this session. Thank you very much for being an amazing audience, for asking questions and contributing. See you around. Our speakers will be outside for any questions or contacts, so please let’s meet outside. Thank you.

Rosanna Fanni: speech speed 157 words per minute; speech length 2038 words; speech time 778 secs

Audience: speech speed 158 words per minute; speech length 1335 words; speech time 507 secs

Denise Leal: speech speed 145 words per minute; speech length 1326 words; speech time 549 secs

Igor José Da Silva Araújo: speech speed 130 words per minute; speech length 600 words; speech time 277 secs

James Amattey: speech speed 133 words per minute; speech length 933 words; speech time 422 secs

João Vitor Andrade: speech speed 152 words per minute; speech length 1642 words; speech time 650 secs

Moderator: speech speed 131 words per minute; speech length 923 words; speech time 423 secs

Speaker: speech speed 134 words per minute; speech length 1198 words; speech time 535 secs

Current Developments in DNS Privacy | IGF 2023


Full session report

David Huberman

The summary emphasises the importance of DNS privacy, as DNS queries can reveal personal information about individuals. Until a few years ago DNS data travelled entirely in clear text, accessible to anyone, which highlights the urgent need for protocols that ensure DNS privacy. The DNS was created in 1983, but privacy-focused developments only began in the last five to six years.

Paul Mockapetris, the inventor of the DNS, is credited with solving significant issues around scaling and keeping track of all hosts on the internet. Prior to the DNS, the existing processes were unable to scale effectively. The creation of a distributed system through the DNS enabled anyone to look up hosts and their corresponding IP addresses, greatly enhancing the functionality and efficiency of the internet.

Geoff Huston, the chief scientist of APNIC, is regarded as a highly respected authority in the field of internet engineering. His deep understanding of the internet and its engineering is acknowledged by David Huberman. As a thought leader, Geoff Huston is considered one of the best sources for discussing the technical considerations related to DNS privacy.

In conclusion, DNS privacy is crucial due to the potential exposure of personal information through DNS queries. The delay in developing privacy protocols is seen as a missed opportunity, considering the long history of the DNS and the recent start of privacy-focused development. The invention of the DNS by Paul Mockapetris is credited with resolving critical issues of scaling and host knowledge on the internet. Overall, Geoff Huston’s expertise in internet engineering is seen as valuable for discussions on the technical considerations of DNS privacy.

Geoff Huston

DNS privacy is an incredibly important issue: DNS queries can track online activities, and anyone who sees your DNS queries in real time effectively has access to all your secrets. Manipulation of DNS queries is also possible, since applications believe the first answer they receive. However, the DNS industry has made positive strides towards improving DNS privacy and security. Efforts such as query name minimisation, and encrypting DNS transactions with protocols such as DNS over HTTPS and QUIC, are being employed to protect them. Despite these advancements, there is a challenge in balancing the need for an efficient network with the need for privacy in the DNS. Additionally, the technical community is working towards an opaque system that removes attribution in name use, but this may lead to a loss of transparency. The role of ICANN in DNS privacy is uncertain, and applications have gained control over the DNS, leaving traditional infrastructure operators behind. This shift towards application-driven technologies presents challenges for infrastructure operators. Overall, DNS privacy is a critical concern, and while improvements are being made, challenges remain.
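The query name minimisation mentioned above (standardised in RFC 7816) limits what each name server in the delegation chain learns: instead of sending the full query name to every server, the resolver reveals only one additional label per zone cut. A minimal sketch of the difference, assuming for simplicity that a zone cut falls at every label:

```python
# Sketch of QNAME minimisation (RFC 7816). Traditionally, every server in
# the delegation chain (root, TLD, second-level, ...) sees the full query
# name; a minimising resolver reveals one more label per step.

def full_query_names(qname, hops):
    # Traditional resolution: each of the `hops` servers sees the whole name.
    return [qname] * hops

def minimised_query_names(qname):
    # Minimised resolution: the root sees only the TLD, the TLD server only
    # the second-level domain, and so on down to the full name.
    labels = qname.rstrip(".").split(".")
    return [".".join(labels[-i:]) for i in range(1, len(labels) + 1)]

qname = "mail.example.org"
print(full_query_names(qname, hops=3))
# ['mail.example.org', 'mail.example.org', 'mail.example.org']
print(minimised_query_names(qname))
# ['org', 'example.org', 'mail.example.org']
```

Only the final authoritative server ever sees the full name, which is precisely the reduction in exposure the summary describes; encrypting the transport (DNS over HTTPS or QUIC) then hides even those minimised queries from on-path observers.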

Manal Ismail

The European General Data Protection Regulation (GDPR) has had a significant impact on the GTLD Whois landscape. It mandates the reduction of personally identifiable information in registration data, radically changing the landscape. However, implementation of GDPR varies depending on the registry or registrar involved, resulting in a fragmented system. This has introduced several key issues, including increased ambiguity regarding the differentiation between legal and natural persons.

To address these challenges, there is a pressing need for standardized regulations and mechanisms for accessing non-public registration data and responding to urgent requests. However, reaching an agreement on the necessary policy recommendations has proven difficult. For example, the Governmental Advisory Committee (GAC) has found the proposed three-business-day timeline for responding to urgent requests unreasonable.

Another challenge arises from the lack of policy applicable to domain registrations subject to privacy proxy services. The use of privacy proxy protection has increased over time, and governments within the GAC are unsure of how to address this issue. The absence of clear policies in this regard makes it difficult to ensure compliance and protect privacy rights.

Improving the accuracy of GTLD registration data is a prioritized area of work. The GAC principles place great importance on the accuracy of this data, and ICANN is preparing a comprehensive assessment of the activities it may undertake to study accuracy obligations in light of applicable data protection laws and contractual authority.

During discussions, Manal Ismail expressed agreement with Steve and Farzi regarding the significance of data collected during the proof of concept. This demonstrates the recognition of the value of such data in informing decision-making and shaping policies.

Moreover, Manal Ismail believes in the necessity of constructive and inclusive discussions within ICANN’s bottom-up multi-stakeholder model. Despite diverse views, all participants were observed speaking from a public interest perspective. This highlights the importance of finding a balance between privacy and safety while considering the broader societal impact of ICANN’s decisions.

In conclusion, the GDPR has brought about significant changes in GTLD Whois records, necessitating the need for standardized regulations and mechanisms for accessing registration data and addressing urgent requests. The lack of policies applicable to domain registrations with privacy proxy services poses additional challenges. Efforts are being made to improve the accuracy of registration data. It is crucial to recognize the value of collected data during the proof of concept and engage in constructive and inclusive discussions to strike a balance between privacy and safety within ICANN’s bottom-up multi-stakeholder model.

Audience

During the ICANN62 Policy Forum, discussions on data privacy and access covered several crucial points. One speaker highlighted the potential harm caused by publicly accessible personal data of domain name registrants. For 20 years, this sensitive information, including mailing addresses, phone numbers, and email addresses, was available to the public. This raised concerns regarding the potential risks and harm that could arise from such unrestricted access to personal data.

On the positive side, another speaker mentioned the improvement brought by the advent of privacy proxies. This development allowed for increased privacy protection by masking personal information in domain registrations. This was seen as a step in the right direction towards improving domain privacy.

The forum also acknowledged and appreciated ICANN’s focus on DNS privacy. In one of the workshops, ICANN specifically titled it as DNS privacy and emphasized the importance of privacy in addition to access. This recognition highlighted the commitment to address privacy concerns and protect the data of internet users.

Transparency and accountability regarding law enforcement’s access to people’s data were deemed important. It was stressed that governments and law enforcement agencies should be transparent in their requests for access. This would ensure that there are clear processes in place for requesting and granting access to personal data, minimizing the potential for misuse or abuse.

Concerns were raised about the implementation of metrics for requester’s access, particularly when the requesters are from authoritarian countries. Questions were posed regarding the accessibility of data to law enforcement in such countries and the verification process to ensure compliance with ethical standards. These concerns emphasized the need for a robust system that prevents unauthorized access to personal information.

The audience also expressed the need for clarification on who has access to the data and how it is granted. This highlighted the importance of defining and understanding access privileges to ensure that data is only accessed by authorized entities and for legitimate reasons.

The adoption of the Registration Data Access Protocol (RDAP) was seen as a positive development. RDAP is a standardized protocol aimed at improving data access and security in domain registrations. However, concerns were raised regarding data privacy and security under the new protocol. The example of Indonesia was mentioned, where a local law prohibits the disclosure of data, even for legitimate law enforcement interests. This highlighted the challenges of reconciling different data protection regulations and ensuring compliance.
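RDAP responses are plain JSON over HTTPS, and the RDAP redaction extension (RFC 9537) lets a server state explicitly which registration fields were withheld for privacy reasons. The sketch below parses a hand-crafted sample response; the response body and the registrar URL in the comment are invented for illustration, while the `redacted` structure follows the RFC.

```python
import json

# Hand-crafted sample modelled on an RDAP domain response using the
# redaction extension (RFC 9537). A real response would come from a
# registrar's RDAP base URL, e.g. (hypothetically)
# GET https://rdap.example-registrar.net/domain/example.org
sample = json.loads("""
{
  "objectClassName": "domain",
  "ldhName": "example.org",
  "redacted": [
    {"name": {"description": "Registrant Name"}, "method": "removal"},
    {"name": {"description": "Registrant Email"}, "method": "emptyValue"}
  ]
}
""")

def redacted_fields(rdap):
    # Collect the human-readable descriptions of redacted registration data.
    return [r["name"]["description"] for r in rdap.get("redacted", [])]

print(redacted_fields(sample))
# ['Registrant Name', 'Registrant Email']
```

Because the redactions are machine-readable, a requester (for example a law enforcement agency) can see exactly which data exists but is non-public, and route a disclosure request accordingly; this is the kind of structured access the forum's discussion of RDAP and local data protection laws turns on.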

Data ownership was emphasized as a fundamental aspect of data protection and privacy discussions. Registrars were highlighted as having an obligation to comply with the data protection laws of the country whose residents’ data they hold. With potential obligations under multiple data protection laws based on the nationality of residents, the need for clarity and understanding of data ownership became crucial.

The forum also recognized the importance of ICANN, IETF, and IANA in addressing DNS privacy and developing policies. There was an expectation for these organizations to be actively involved in considering the costs and benefits of potential tools and providing guidance on DNS privacy.

Regarding the Registration Data Request Service (RDRS), concerns were raised about its adequacy as a measure of demand. Improvements were suggested, such as allowing bulk uploading of requests and retaining requester data for analysis. It was also proposed that a privacy lawyer be hired for an in-depth review to ensure the system’s effectiveness.

The uncertainty of registrar participation in RDRS and its potential impact on requesters’ engagement was highlighted. It was remarked that promises on the operation of RDRS could not be successfully delivered due to the unknown number of participating registrars. A negative initial experience discouraging further engagement was also mentioned as a potential consequence.

Suggestions were made to retain data for evaluation purposes to provide an incentive for requesters to continue participating, despite potential initial disappointments. The low submission of requests indicated that some requesters might be tackling the issue without relying on data, but the importance of data retention for downstream analytics was emphasized.

Making participation in the System for Standardized Access and Disclosure (SSAD) mandatory was seen as beneficial. It was recognized that the SSAD could potentially serve as a valuable resource for data gathering and enhance the effectiveness of data access and disclosure.

ICANN’s participation in discussions on DNS abuse was mentioned, indicating a commitment to address and mitigate abuse issues in the domain name system. This participation demonstrated the recognition of the importance of maintaining a secure and abuse-free online ecosystem.

The lack of uptake of encrypted DNS, DNSSEC, and other protocols was highlighted, raising concerns about the security of the internet infrastructure. The need for end-user involvement in the design and implementation of standards was emphasized to ensure better adoption and implementation.
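One of the encrypted DNS transports mentioned here, DNS over HTTPS (RFC 8484), carries the classic DNS wire format inside an HTTPS request. The sketch below builds a minimal DNS query message and the corresponding DoH GET URL; the resolver URL is a placeholder, not a real endpoint, and no network request is made.

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Encode a minimal DNS query: ID 0, RD=1, one question, QCLASS IN."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/AR counts = 0.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)
    return header + question

def doh_url(name: str, resolver: str = "https://dns.example/dns-query") -> str:
    """Form an RFC 8484 GET URL: the wire-format query, base64url, no padding."""
    wire = build_dns_query(name)
    dns_param = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={dns_param}"

print(doh_url("example.com"))
```

Because the query travels inside an ordinary HTTPS exchange, an on-path observer sees only TLS traffic to the resolver, which is precisely why end-user and operator buy-in, not just protocol design, determines how much privacy is actually delivered.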

Lastly, the importance of not compromising enterprise cybersecurity through the "going dark" phenomenon was emphasized. Privacy was viewed as illusory without security, and it was stressed that removing all data without ensuring proper security measures would lead to a worse privacy outcome than before.

In conclusion, the discussion highlighted the necessity of addressing data privacy concerns while ensuring responsible data access. It underscored the importance of transparency, accountability, and clarity in the process of granting data access, especially for law enforcement agencies. The adoption of RDAP and the SSAD was seen as a positive step towards improving data privacy and access. However, concerns regarding privacy, security, and the participation in and effectiveness of the various systems were also raised, emphasizing the need for continuous improvement and collaboration among stakeholders to ensure a secure and privacy-focused internet ecosystem.

Becky Burr

The discussion revolves around the need to protect privacy in the Domain Name System (DNS), particularly with regards to WHOIS data. WHOIS data contains information about the registrant of a domain name, and access to this data can potentially be misused for phishing, fraud, and suppressing free expression.

To ensure appropriate handling of data, it is important to adhere to fair information practice principles, which include principles of lawfulness, fairness, transparency, and accountability. These principles should guide the way data is dealt with in the DNS.

One notable development in 2018 was when WHOIS data went offline and became accessible only upon request. This change was made to provide better accountability and protection of privacy in the DNS ecosystem.

While ICANN (Internet Corporation for Assigned Names and Numbers) plays a role in supporting and facilitating registrars in their data processing responsibilities, it cannot dictate the outcome of the balancing test that registrars must perform when determining the accessibility of data. The responsibility for data processing lies with the respective registrars.

Queries associated with an IP address can provide information about individual and institutional internet uses. However, it is argued that not all queries associated with an IP address should be public. The public nature of the DNS is essential for resolving queries, but privacy considerations should also be taken into account.

Registrars, who hold the data, make decisions about the release of data based on a variety of circumstances. These decisions are informed by the relevant laws, regulations, and the registrars’ own company policies. The release of data should consider legitimate interests and the privacy rights of the individuals involved.

Data ownership is a complex issue that is fundamental to the discussion of data protection and privacy. Modern data protection laws apply not just to processing data within a country but also to the information about residents of that country. When users register a domain name with a registrar, they agree to the registrar’s privacy policy. Additionally, the ICANN contract requires registrars to make certain disclosures.

Compliance with the law is crucial for registrars. Even if registrars are located outside a particular country, they may have obligations under the law of the country where the resident whose information they hold is located. Therefore, registrars must comply with the applicable laws and regulations governing the processing of data.

In terms of encouraging participation, it is suggested that collecting data for downstream analytics can serve as an incentive for registrars to participate. This data can offer valuable insights into the DNS ecosystem. There is even a suggestion to make participation mandatory for all registrars, as it is seen as important for the overall functioning and improvement of the system.

Finally, there is an acknowledgement of the importance of understanding the needs of requesters through the system. This understanding can help address any issues or concerns and improve the overall experience for all parties involved.

In conclusion, the discussion highlights the importance of protecting privacy in the DNS, specifically in relation to WHOIS data. Fair information practice principles should guide appropriate data handling, and registrars are responsible for complying with relevant laws and regulations. Data ownership and privacy are complex issues that need to be considered in the context of data protection. Encouraging participation and understanding the needs of requesters are also essential for the effective functioning of the DNS ecosystem.

Yuko Yokoyama

ICANN (Internet Corporation for Assigned Names and Numbers) is developing a new service called RDRS (Registration Data Request Service), which aims to simplify the process of requesting non-public GTLD (Generic Top-Level Domain) registration data. RDRS will act as a centralized platform for registrars to submit and receive data requests, benefiting stakeholders such as law enforcement agencies, IP attorneys, and cybersecurity professionals.

RDRS is a voluntary service and ICANN cannot force registrars to disclose data through this platform. The decision to disclose or not lies with the registrars, and RDRS operates as a proof of concept service for up to two years.

Key features of RDRS include the automated identification of domain managers, eliminating the need for requesters to identify them themselves. Additionally, requesters will have access to their past and pending requests within the system.

It is important to note that the disclosure of requested data is not guaranteed by RDRS. Each registrar conducts a balancing test before deciding whether to disclose data, taking into account local laws and other applicable regulations. This ensures compliance with legal regulations and protects individual privacy rights.
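The balancing test described here is ultimately a legal judgment made case by case, but its structure can be sketched as a toy decision function. The factors, their ordering, and the outcomes below are illustrative assumptions only, not any registrar's actual policy or an ICANN requirement.

```python
from dataclasses import dataclass

@dataclass
class DisclosureRequest:
    requester_verified: bool           # is the requester's identity authenticated?
    purpose_legitimate: bool           # e.g. law enforcement, consumer protection
    data_is_personal: bool             # natural-person data vs. legal-entity data
    less_intrusive_alternative: bool   # could the goal be met another way?
    urgent_threat: bool                # imminent threat to life, infrastructure, etc.

def balancing_test(req: DisclosureRequest) -> str:
    """Toy model of a registrar's disclosure decision for non-public data."""
    if not req.requester_verified or not req.purpose_legitimate:
        return "deny"
    if req.urgent_threat:
        return "disclose"   # urgent requests are expected to be handled fastest
    if not req.data_is_personal:
        return "disclose"   # legal-person data falls outside GDPR's scope
    if req.less_intrusive_alternative:
        return "deny"       # proportionality: prefer the less intrusive path
    return "disclose"

print(balancing_test(DisclosureRequest(True, True, True, False, False)))
```

The point of the sketch is the shape of the reasoning: legitimacy and necessity are checked first, then proportionality is weighed against the data subject's rights, which is why no central party can pre-compute the answer for every registrar.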

Only ICANN accredited registrars have access to the RDRS system. They act as intermediaries between requesters and the platform, holding the registration data and routing requests accordingly.

In summary, ICANN’s RDRS aims to streamline the process of requesting non-public GTLD registration data. It provides a central platform for registrars to submit and receive data requests, benefiting stakeholders such as law enforcement agencies, IP attorneys, and cybersecurity professionals. However, the decision to disclose data is ultimately up to the registrars, considering local laws and regulations. Only ICANN accredited registrars can use the RDRS system.

Session transcript

David Huberman:
Thank you for your patience. Welcome to this workshop. I could use my glasses, thank you. Welcome to this workshop. We are going to be discussing current developments in DNS privacy. And why are we going to be discussing that? Well, when the internet was created in the 1960s, 70s and 80s, we were just trying to engineer solutions that would work, that would allow us to intercommunicate. And in 1983, Paul Mockapetris was able to invent the DNS to solve two important problems that we were having at that time. One of them was about scaling. One of them was being able to know what all the hosts on the internet were. And the way we were doing it did not scale at all. And so the DNS was able to fill that gap by creating a distributed system that would allow everyone everywhere to know all the hosts and all the names on the internet and map them to the IP addresses that are necessary for computers to talk to one another. Importantly, it also, okay, thank you. Importantly, it also enabled email, it enabled email at scale, because email before 1983, you had to be a human router, you had to describe all the different steps that an email needed to take in order to reach its destination. The DNS allowed us to scale that. So all you needed to know is where someone was: at icann.org. And then on the left side of the @, all you needed to know was my address, david.huberman, for the local routing of that email. Okay, so why am I giving you all this history? Because from 1983, until really, five or six years ago, ish, all of the DNS data that everybody in the world used and communicated in their queries was all in clear text, it was all out in the open for everybody to see if you were listening on the wire, or if you were operating a DNS element that was looking at your query. And this is a problem because DNS queries have a lot of information about who we are and what we're doing. It's 2023, or back then it was 2017, 2016. And this was not acceptable anymore.
Privacy is a right, privacy is a responsibility. So we began to develop solutions to increase the privacy of the DNS. I am very honored and you are very lucky that we are going to hear today from four world class experts, who are going to talk about some of these developments, both historically and contemporarily. Today on the panel, Becky Burr. Online, we have Manal Ismail, we have Geoff Huston, and we will end with some new developments here at ICANN with Yuko Yokoyama. So to begin our session and to set us in a good historical perspective and a good legal perspective, it's my honor to introduce Becky Burr. Becky Burr is a member of the ICANN board. She's a world class privacy attorney in Washington, D.C. And most importantly, ICANN is entirely Becky's fault. So if we could please put presentation one, if we could please put those slides up. Becky, to you.

Becky Burr:
Thank you so much and thank you all for being here. If we could go to the next slide. We’re going to talk about two aspects of DNS privacy. I bet some of you are here because you heard DNS privacy and you thought, okay, this is about IP addresses and queries and things. Some of you heard DNS privacy and you’re here because of WHOIS. We’re going to talk about both of those things. And our hope is that we’ll go through the presentations pretty quickly and have a lot of time for discussion. So we’re going to go sort of way back in the WHOIS way back machine world. In 1998, when the U.S. government issued the white paper on domain name management, it said we need to have an organization that ensures that there are policies out there that require that registrant data, including name and address and contact information, is included in the registry database and available to anyone with access to the internet. Now, the world has changed a little bit and I think if you recall the discussion about access to domain name registration data in NIS2, the European Commission directive, European Union directive, it says something slightly different. It does say that policies should ensure that registrant data is collected, that it includes all of those things, and it should be available to people who have legitimate, people who are verified and authenticated to have legitimate interests in that data. So we’ve come a way down the road in terms of finessing the way we think about access to registrant data to reflect the fact that the world has evolved in terms of considerations about privacy. If we go to just the privacy principles, I’m not going to give you a lecture on data privacy law. I’m just going to tell you that almost all data privacy law is built on fair information practice principles and they’re very fundamental principles that guide how you deal with data in an appropriate way. We’re not going to talk about all of these. 
This happens to be the formulation of fair information practices that’s found in the GDPR, but they’re all quite similar. The things that we need to talk about in terms of DNS privacy for our discussion today is the lawfulness, fairness, and transparency principle that provides that if you’re processing personal data, it has to be lawful and fair and transparent. And fairness is the issue that we’ll think about here in terms of is the processing harmful, unduly harmful, unexplained, misleading, or surprising to the data subject, and accountability and controllers, the people who make decisions about what data is collected, how it’s processed, what’s it used for. Under modern fair information practice principles, those people are accountable for their use of their processing of data. So let’s just talk about the fairness analysis because I think if we’re all on the same page, we’ll do better here. As I said, it’s about is the processing unduly detrimental or surprising or harmful to a data subject? And you think about it in a couple of ways. What’s the purpose of the processing? Is the processing, is the purpose legitimate? Is it legal? Is it ethical? There’s a necessity component, which is do you need to process this data to achieve the goal that we’ve just decided is legitimate? Is it proportionate? Is there a less intrusive way to get the information you need, achieve the goal that you’ve set out with processing this? And you take those two things and you apply essentially about balancing tests. And you say, given the purpose that I want to balance, that I want to process this data for and the considerations about whether there’s a less intrusive way to do it, how does this balance out? How do my legitimate interests compare to the fundamental privacy rights of the individual data subject? 
So if we apply that in the context of domain name registrant data, we can go back and talk about the original purpose way back when was really for engineers to resolve routing issues. They needed to be able to get in touch with somebody to resolve an issue. But the function of this data has evolved over time. It’s now used, and this is not a new development. This is really since commercialization on the internet began in 1992. This data can be used to identify and mitigate cybersecurity threats, to fight crime and fraud, to protect consumers, and to protect intellectual property. But it also can be, and most assuredly is being used for marketing, for phishing, for fraud, and for suppression of free expression. So there are some important reasons to process this data. There are also some significant potential for misuse of the data. And when we’re talking about balancing, we think about that. The necessity test is, does the registrant data need to be publicly available for anyone to process? Is there a less intrusive way to address the legitimate interests of cybersecurity threats, crime prevention, and the like? And we saw that in 2018 when GDPR went into effect, WHOIS went essentially offline. It wasn’t published on the internet for anybody to see without any kind of accountability. You had to come and ask for it. And that also was a way of making users more accountable. Because before, nobody knew who was looking at WHOIS information and for what purpose. If you have to ask for it, if you have to provide an email address, there’s more accountability. So considering both those tests, is the access to DNS data proportionate? Is it fair? Is it lawful? The answer to that, I’m sure, is clear as mud to everybody, because it really depends on the context. It depends on so many variables that you can’t have a bright line test. You have to think about this in specific context. So the question then becomes, who gets to decide? 
And I think it’s useful to focus on this for just a minute. Because remember, we said we’re going to talk about fairness. We’re also going to talk about the accountability principle. And under every data protection law that I’m aware of, the controller, the person who makes the decisions about what data is collected and how it’s used, is responsible and is accountable for applying that balancing test. So in the domain name world, registrars are surely controllers. I don’t think that there’s any question about that. Lots of people will debate whether ICANN is a joint controller or not. I don’t think we need to do that for our purposes. You can decide whatever you want on that. Because whatever the answer is, ICANN can’t determine the outcome of the balancing test for registrars, who are themselves controllers and who must conduct the balancing test themselves. So that puts ICANN in a very difficult position. Because people say to ICANN, why are you not making registrant data available? And the answer is, we don’t get to decide. The registrar who has the data, who is delivering it to somebody in response to a request, they are under the law, accountable for and responsible for making that decision. And even if ICANN was willing to say to every registrar on the planet, if you get fined, no problem, we’ll indemnify you, we’ll take care of you. ICANN still can’t have a policy that says you must do something that you think is against the law. So I just think that we really need this question of who decides has to be fundamental in the way we think about privacy. I didn’t mean to do that. So if ICANN can’t dictate the balancing test outcome, can its policies and can its tools facilitate and support that process? Can we make it easier for people to submit requests? Can we make it easier to ensure that registrars have the information that they need to conduct the balancing test that they’re required to conduct? We’re going to talk about that. 
Yuko is going to talk a little bit about that in the context of a tool that ICANN is rolling out shortly. Now, the other kind of DNS privacy that we're going to talk about is the more technical kind. And the DNS, what IP address corresponds to what name, is necessarily public. If that information is not public, you can't resolve queries. But the queries themselves are not necessarily public. And queries that are associated with the IP address of the requesting server can tell you things about individual and institutional internet uses, who's searching for what. People will differ on how good a tool it is to do that, but there's no doubt that you can get some information. And there are several technical organizations who are working on this aspect, and Geoff Huston, who is on with us, will talk about that. So I'm going to move on quickly to Manal Ismail, our dear colleague from ICANN, who is going to talk about the government perspective on these issues. And I'm hoping that we will have a lot of time for a discussion, because we almost always can have a pretty lively conversation here.

David Huberman:
Okay. If we could please put the presentation down, thank you. And Manal, if you are using slides, if you would be so kind as to share your screen. Otherwise, if not, please go ahead.

Manal Ismail:
Thank you. Thank you very much, David. I'm not using slides, so I'm good to start. And good morning, good afternoon, and good evening, everyone. I'm sorry to miss the opportunity to be with you all in Kyoto. My name is Manal Ismail. I'm chief expert internet policies at the National Telecom Regulatory Authority of Egypt and former chair of ICANN's governmental advisory committee, the GAC, and now representing Egypt on the committee. Thank you for inviting me to join this distinguished panel on DNS privacy, and many thanks, Becky, for the excellent setting of the scene. In light of the evolving landscape in DNS governance and the ongoing changes related to access of registration data, governments are striving to strike the right balance between, on one hand, privacy protection and responsible handling of DNS registration data, and on the other hand, ensuring transparency, accountability, and access to accurate and reliable registration data. There are great efforts by ICANN in that respect so far, but I will focus my intervention on four key public policy concerns from a government perspective, of course. Related to, first, redaction of registration data and the differentiation between legal and natural persons. Second, access to non-public registration data and the timeline for response to urgent requests. Third, the privacy proxy service and fourth and last on accuracy of registration data. So to start with the redaction of registration data and the differentiation between legal and natural persons, as you may all know, and as Becky has highlighted previously, registration data were made available through free and public Whois services. Starting the 25th of May, 2018, the European General Data Protection Regulation came into force mandating the redaction of any personally identifiable information, which changed radically the GTLD Whois landscape and left ICANN grappling with the potential impact of this on Whois services.
Just before that on 17 May, 2018, the ICANN board adopted an emergency measure referred to as Temporary Specification or TempSpec for short, in order to enable registries and registrars to comply with the GDPR while maintaining the existing Whois system to the greatest extent possible. This TempSpec allowed the registries and registrars to redact registration data information unless of course provided with registrant's consent and required the registries and registrars to provide reasonable access to non-public registration data only based on legitimate interest and for a legitimate purpose. This created a fragmented system with distinct policies depending upon the registry or registrar involved and introduced a number of important issues including distinguishing between registration data of legal and natural persons. And this is to allow for public access to legal persons' data since it does not fall within the remit of the GDPR. The relevant policy development process proposed only a mechanism to facilitate the differentiation for those who wish. So it's kept merely as an option. Therefore, governments in the GAC urged the development of more precise policies that would protect personal data while publishing non-personal data. Noting that a significant percentage of domain names are registered by legal entities and that some analysis shows that a considerably larger set of registration information was redacted, around 57.3%, as compared to what is required by GDPR, estimated to be only 11.5%. Now moving to access to non-public registration data and the timeline for response to urgent requests. In continuation of community work to develop an accreditation and access model that complies with GDPR, a policy development process was conducted and proposed a standardized system for access and disclosure, SSAD for short, where consensus was achieved on aspects relating to accreditation of requesters and centralization of requests.
Yet agreement could not be reached on the policy recommendations necessary to provide for standardized disclosure. And the ICANN community is now expecting the rollout of a voluntary proof of concept, the Registration Data Request Service, which is expected to inform future consideration of the SSAD in terms of demand, and I believe Yuko will be speaking more to this. Yet certain public concerns are likely to remain such as the lack of centralization with regard to disclosing data, lack of a mechanism for review of disclosure decisions and worries that the recommendations could create a system that is too expensive for its intended users. Additionally, governments, members of the GAC of course are concerned regarding the timeline for response to urgent requests. And by urgent here, we mean limited circumstances that pose an imminent threat to life, injury, critical infrastructure or child exploitation. The proposed timeline contains improvements of course such as an explicit reference to the general expectation of the response within 24 hours and the requirement to notify the requester if additional time is needed. Yet it allows for not one but two extensions that could bring this timeline up to three business days. And the GAC finds three business days not a reasonable time period for responding to urgent requests and moreover, the use of business days injects uncertainty into the process where the three business days would stretch to seven calendar days depending on diversity of global holidays and work weeks. So in an effort to reach a compromise, there was a proposal that the extension notification must include three things. First, confirmation that the relevant operator has reviewed and considered the urgent request on its merit and determined additional time is needed. Second, rationale for why additional time is needed. And third and last, the timeframe for response which is expected of course, not to exceed two business days from the time of receipt.
In a recent exchange, the board requested more information from GAC members on their experience with urgent requests and the GAC confirmed its intention to provide such information and acknowledged ongoing work to gather scenarios and use cases of urgent requests and related experience with contracted parties. Moving to the third point on privacy proxy, governments within the GAC are concerned that there is still no policy applicable to domain registrations subject to privacy proxy services which in effect create a double shield of privacy. The GAC requested that at least the registration data record clearly indicates whether the data is protected by a privacy proxy service or not. Particularly that per a study by Interisle Consulting Group, the use of privacy proxy protection has increased over time from 20.1% in 2013 to about 29.2% in 2020. In addition, lessons learned from the COVID experience indicated that 65% of a sample of domains reported to the FBI had registrant data obfuscated by privacy proxy services. And again, during a recent exchange between the GAC and the board, it was acknowledged that the use of privacy proxy services is increasing and it was suggested that meaningful access to registration data would mean integrating the privacy proxy providers into the system similar to how the registrars are integrated. Finally, on accuracy, in the GAC Principles on gTLD WHOIS Services, issued in March 2007, governments stressed the importance of accuracy of WHOIS data and WHOIS services and that WHOIS services must comply with applicable national laws. Also, ICANN by-laws recognize that ICANN shall work with supporting organizations and advisory committees to improve accuracy and access to GTLD registration data, as well as consider, of course, safeguards for protecting such data.
In addition, dedicated ICANN review teams have considered levels of accuracy of registration data, where the first team found that only 23% of WHOIS records were fully accurate. And the second assessed that the data inaccuracy rate is still high, at 30 to 40%. In response to that, ICANN had put in place the WHOIS Accuracy Reporting System, which has stopped publishing reports because it relied on collecting publicly available data. And in 2021, an accuracy scoping team was formed. However, its work has been suspended given data protection concerns around whether ICANN has legitimate purpose that is proportionate. And the work is currently pending outcome of engagement with the European data protection authorities, as well as negotiations between ICANN and contracted parties. So to conclude, the GAC is currently examining opportunities for advancing work on accuracy of registration data. And ICANN is preparing a comprehensive assessment, I understand, of what activities it may undertake to study accuracy obligations in light of applicable data protection laws and its contractual authority to collect such data. I leave it at this, apologies if I exceeded the 10 minutes and I'll turn it back to you, David, please.
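The TempSpec-style redaction and the legal/natural person differentiation Manal describes can be sketched as a small function. The field names, the redaction placeholder, and the single boolean flag are illustrative assumptions for the sketch, not the actual RDDS field set or policy logic.

```python
# Fields treated as personally identifiable in this toy model.
PERSONAL_FIELDS = {
    "registrant_name",
    "registrant_email",
    "registrant_phone",
    "registrant_street",
}

def redact_record(record: dict, registrant_is_natural_person: bool) -> dict:
    """Redact personal fields for natural persons; leave legal-entity data intact."""
    if not registrant_is_natural_person:
        # Legal-person data falls outside GDPR's scope, so it may stay public.
        return dict(record)
    return {
        k: ("REDACTED FOR PRIVACY" if k in PERSONAL_FIELDS else v)
        for k, v in record.items()
    }

record = {
    "domain": "example.net",
    "registrant_name": "Jane Doe",
    "registrant_email": "jane@example.net",
    "registrar": "Example Registrar Inc.",
}
print(redact_record(record, registrant_is_natural_person=True))
```

The GAC's complaint maps directly onto the flag in this sketch: because differentiation is optional, many operators in effect run with `registrant_is_natural_person=True` for everyone, redacting far more data (around 57.3%) than the GDPR strictly requires (an estimated 11.5%).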

David Huberman:
Oh, thank you so much, Manal. That was extremely helpful to understand the four key public policy concerns that the GAC has today in this area. There's a lot to discuss here and there's a lot of people, I think, who would like to have their voices here, but bear with us for just a few more minutes, please, because we wanna finish setting the table here so we can have a good interaction and a good discussion. I'd like to turn now online to our friend, Geoff Huston. Geoff, are you with us?

Geoff Huston:
Yes, I am, thank you.

David Huberman:
Great, thanks, Jeff. It’s good to hear your voice. He probably doesn’t need an introduction, but just in case, Jeff Houston is the chief scientist of APNIC, the Regional Internet Registry in the Asia Pacific region. Jeff embodies the concept of a thought leader. Jeff is someone who understands the internet and how it’s engineered better than almost anyone in the world. And so to help us understand some of the technical considerations in DNS privacy and putting the discussion that Becky and all have laid out for us in a technical context, Jeff, can you take us through some of this, please?

Geoff Huston:
Yes, thank you. And I must admit, I have to say first off that my background is technical, not policy-based. And so when you say DNS privacy to me, I don’t immediately swing over to the registration issue. I don’t. The massive use of privacy proxies, the corporatization of large amounts of the internet, to my mind, that looks like a minor issue. The burning conflagration, the elephant in this room is actually somewhere else. If I can see your DNS queries in real time for any value of you, you have no secrets because everything you do, everything, even the ads you get delivered to your screen, starts with a call to the DNS. And so if I know what questions you’re asking, when you’re asking it, then you have no secrets from me. And you go, well, okay, that’s pretty bad. But unfortunately, it gets worse because the applications that use the DNS that run on your computer or on your phone are not just naive. They are almost criminally negligent in terms of their naivety because they believe the first answer they get. Not the answer. Any answer that is first that reflects the information in the query, that’s the truth. So if I can see your queries and I can jump in first with the wrong answer, you’re mine, I own you. And you can’t even see that it happened because although the queries and answers are in the clear, the DNS innards are incredibly opaque. No one knows where the answer came from. It just comes. It’s like magic. It’s lightning fast, but you can’t check it. You just believe it. Now, you might say, well, so what? But the issue is that this property of the DNS has been used and abused by many over the years. It’s no surprise that the internet’s capitalism is basically based around surveillance and advertising. The more knowledge that advertisers have about me as a user, then the more valuable the ads that can be sort of splashed in front of my eyeballs, the more money the advertiser makes, the more money everyone else makes about me. 
So knowing about me is critical. And seeing my DNS is a sure path to actually obtain that phenomenal knowledge. Now, it's not just advertisers, it's not just commercialism, it's public entities. Various public bodies have been caught with their fingers in the DNS till looking hard. Malware, all kinds of criminal activity have also focused around the incredible naivety of the DNS. From the technical perspective, we came to the conclusion that enough was enough. It was time to actually arm the DNS with some level of protection against casual eavesdropping and intervention. And there have been three areas in the last five years that have been radical steps forward in making the DNS more private. And they're quite effective. The first is stopping the DNS being gratuitously noisy. When I want to resolve a.very.long.name that may.have.bad components, then literally everyone gets to see that's the name I'm trying to resolve. From the root servers to the top levels to the second-level domain and so on. I'm telling the world of my interest in that particular domain. I shouldn't be doing that. And as it turned out, there was a protocol error way back from 1983. It seemed like a good thing to do at the time. It's been a disaster. And so we're doing now a practice called query name minimization. And little by little, we're clearing up that particularly important leak. But that's not the heart of what we've changed. You may have noticed recently that almost every web page is now HTTPS, not just HTTP. And if you're using a number of popular browsers, if you go to something that doesn't have that magic S, that doesn't use a secure and authenticated channel, the browser goes, hang on a second, this doesn't look very good. Are you really, really sure? And more recently, some browsers are going, I'm not going there. It's not protected. I'm not gonna help you in being silly here. It's not gonna happen. We're doing the same in the DNS.
And little by little, we’re taking this open, very insecure protocol and transforming it with the same technology we’ve used to protect the web. TLS is the name of the protocol, and we’re putting the DNS transactions behind a wall of incredibly good encryption. It’s no longer possible to casually eavesdrop. And we’re going even one step further, because if you think about it, a web page, a DNS query, they all look the same. So why don’t we just put the DNS into HTTP? Why don’t we put it into this new protocol called QUIC and wrap the whole thing up with some pretty heavy-duty encryption? So now there is no possibility of being a casual network eavesdropper. But now we’re thinking about more than this, because the real problem is that I, Geoff, am making the query. Do I really have to? Because in the HTTP world, to make the web even faster, there’s a technology called server push. It says, I know you’re going to go to that web page. I really do. And even if you don’t, bandwidth’s available. Here’s some answers. Here’s some objects in advance. So if you touch, you know, if you click, bingo, instant answer. We can do the same for the DNS. We can pre-provision answers. Here’s the results of your search page. And by the way, for all those URLs, here’s the DNS. I never make a query. I don’t get caught asking. It’s not me anymore. I’ve gone dark. So with a little help from DNS security, DNSSEC, and chain validation, we’re within a hair’s breadth of actually taking the user out of the picture and making the entire DNS go dark. Now, that means there’s only a few places that know you. And one of them is what we call an open DNS resolver. Normally, your ISP knows that. But there are a few folk like Google and Cloudflare who are very big in this game as well. And it might be very good to have privacy. But if you’re sharing all your secrets with Google, is that really private? Or is that really the veneer of privacy without the substance?
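Putting the DNS inside HTTPS, as Geoff describes, is standardized as DNS over HTTPS (RFC 8484). A minimal sketch of how a query is encoded into an ordinary-looking HTTPS URL; the resolver hostname is just one well-known public example, and the request is only constructed here, not sent:

```python
import base64
import struct

def doh_get_url(resolver: str, name: str) -> str:
    """Sketch of a DNS-over-HTTPS GET request (RFC 8484): the DNS message
    is base64url-encoded into the ?dns= parameter of an HTTPS URL, so on
    the wire it looks like any other web traffic."""
    # Message ID is 0 for DoH GET so responses can be cached
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(l)]) + l.encode()
                     for l in name.split(".")) + b"\x00"
    msg = header + qname + struct.pack(">HH", 1, 1)  # A record, IN class
    dns_param = base64.urlsafe_b64encode(msg).rstrip(b"=").decode()
    return f"https://{resolver}/dns-query?dns={dns_param}"

url = doh_get_url("cloudflare-dns.com", "example.com")
```

To a network observer this is just a TLS connection to a web server; the name being resolved is no longer visible in transit.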
And so we’re now working on even better forms of security and privacy, where who I am and what I’m asking for are split apart, and no one knows the two in conjunction. Apple is playing with this with its Private Relay system, where no single party knows what the user is asking for. Nobody. That information is only available to the end user. The interior of the system knows nothing. So over the last few years, we’ve seen astonishing leaps in making the DNS more private, to stop this kind of tampering and observation of the DNS. It’s not quite the end of the story, because we’re now starting to use the DNS for things other than simply resolving names. We’re using it for content steering. It’s the new routing protocol. When you actually go to a web page, the answer that you get will be different to the answer that I get. You’re in Japan. I’m in Australia. The answers may well be different. So the DNS has this tension: to give you good answers, you need to expose a little bit about who you are and where you are. But if you really want privacy, you don’t want to expose anything. And fighting that tension between an efficient and fast network and a private network is actually where the substance of the DNS privacy debate is today. So to my mind, registration is a small fire down in the corner of the roof. And I appreciate there is a bit of a fire, and it’s a problem. But the raging problem is the fact that the DNS kind of makes the internet an incredibly exposing experience, exposed to folks that you’ve never met, never will meet, but who know all about you. And that is a deeply discomforting view, and I think from a technical perspective, there’s a lot of energy going into trying to fix that. Thank you.
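The split Geoff describes, separating who is asking from what is being asked, is the idea behind Oblivious DoH and Apple's Private Relay. A toy simulation of that division of knowledge; the class names are hypothetical, and a stand-in XOR cipher takes the place of real per-query encryption:

```python
def xor(data: bytes, key: bytes) -> bytes:
    """Stand-in 'encryption' for the sketch; real systems use HPKE/TLS."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Target:
    """Resolver: sees the query, never the client address."""
    def __init__(self, key: bytes):
        self.key = key
        self.seen = []
    def resolve(self, encrypted_query: bytes) -> str:
        name = xor(encrypted_query, self.key).decode()
        self.seen.append(name)      # no client identity available here
        return "192.0.2.1"          # canned answer for the sketch

class Proxy:
    """Relay: sees the client address, never the plaintext query."""
    def __init__(self, target: Target):
        self.target = target
        self.seen = []
    def forward(self, client_addr: str, encrypted_query: bytes) -> str:
        self.seen.append(client_addr)   # no query contents available here
        return self.target.resolve(encrypted_query)

key = b"session-key"
target = Target(key)
proxy = Proxy(target)
answer = proxy.forward("203.0.113.7", xor(b"example.com", key))
```

Neither party alone can link `203.0.113.7` to `example.com`; only collusion between proxy and target reassembles the pair, which is the property the design depends on.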

David Huberman:
Thank you so much, Geoff. If I may make an aside real quick, I’ve been listening to Geoff talk to us as a community for about 24 years. And most of the time, Geoff is yelling at us. Geoff is very unhappy with us, because Geoff has seen everything that’s broken, and he’s telling us we need to fix it. And what was really nice about the last 10 minutes is that Geoff shared with us some of these astonishing leaps that we’re making, actually achieving some of the goals and fixing some of the brokenness that we’ve had for 40 years. There’s one more piece to this. Geoff gave us some really good input about some of the technical considerations to improving this on the DNS side. But next to me, we are very honored to have Yuko Yokoyama. And Yuko is going to talk to us about the next steps that ICANN is taking in helping advance this conversation. If we could please put the presentation up on the screen, thank you. Yuko, I would like to first introduce you to everybody. Yuko Yokoyama is ICANN’s Program Director for Strategic Initiatives. Yuko is currently leading two programs at ICANN, the Data Protection and Privacy Program and the DNS Abuse Program. Yuko is fluent in English and Japanese and currently resides in Los Angeles, California. So, Yuko, please, if you would, take the microphone and talk to us about ICANN’s Registration Data Request Service. Thank you, David. Konnichiwa. Yuko Yokoyama desu.

Yuko Yokoyama:
Kidding. Just kidding. My name is Yuko. I’m from ICANN. Thank you for the introduction. Today, I’m going to talk about the tool that ICANN is building. This tool is going to make it slightly easier for data requesters and data holders to exchange information around requests for non-public registration data in the gTLD space. That would be me. Okay. So what is registration data? Maybe you don’t need to be lectured about what it is, but simply put, it’s the contact information, the identifying information of the domain name holder, such as names, addresses, and phone numbers. This information is used for a variety of reasons, right? It could be law enforcement conducting a criminal investigation, or IP lawyers trying to hunt down IP infringement, or maybe someone just trying to resolve technical issues related to the network within that domain name. So as Becky and Manal have talked about, this domain name registration data, which we used to call WHOIS information and are now trying to call registration data within ICANN, used to be public to anybody and everybody who wanted it. But with GDPR and other emerging privacy laws around the world, it is now largely redacted. And if you want that data, you have to jump through hoops to get it. So how are those people who have a legitimate interest in that information, such as law enforcement or IP lawyers or cybersecurity professionals, getting it? Not easily. They would have to first figure out who owns that domain name and which registrar is managing it, then find the contact information of that registrar, call them up, and figure out their particular process and procedures for accepting data requests. It may be a web form, it may be email, it may be just a simple phone call, who knows.
So they have to figure out each individual registrar’s method just to submit their requests for redacted data. Obviously, that is not ideal. So as Manal mentioned, through ICANN’s multi-stakeholder policy development process, the community came up with this thing called the System for Standardized Access and Disclosure, shortened to SSAD. There were 18 policy recommendations related to SSAD, and this SSAD was envisioned to have pretty great features. It connects data holders and requesters in a standardized manner. Requesters’ identities are verified and the system accredits them. There were service level agreements specified, and other processing requirements. And lastly, it envisioned a pay-per-use model, so requesters who wanted data through SSAD needed to pay for usage of the system. That said, when ICANN conducted an analysis of these policy recommendations, it turned out to be very complex and possibly very much cost-prohibitive. So we needed to first figure out what the demand is out there for such a centralized system, and whether there is a large enough user pool to sustain the system and the pay-per-use model. So here comes the RDRS, the Registration Data Request Service. RDRS is a proof-of-concept service that will be operated for a period of up to two years. It is much simpler than SSAD: there is no identity verification or accreditation. It is also free. It can be used by anybody in the world who wants to use the service. They can simply sign up and submit their data requests. As the RDRS is not the result of a consensus policy through ICANN’s process, it is currently a voluntary system, meaning that not every data holder, in this case registrars, needs to participate. So ICANN-accredited registrars can choose to participate and receive requests through the other side of the RDRS, or they can choose not to participate.
Another thing to note is that there is no service level agreement. Again, this is because it’s a voluntary system. So why are we building this? As mentioned, we first need to figure out if there’s really a demand for such a centralized system, and if so, what kind of volume and what kind of user pools there may be. The data that we can collect through this two-year proof-of-concept operation will inform the future of what we can do about this non-public registration data. If there’s demand, great. If not, that’s also good to know. And through this exercise, we can potentially get some idea of what kinds of tools would really be beneficial to the world. As part of this, as you may all know, ICANN is very much about transparency. So once the service launches, we will be publishing a monthly metrics report so that you can all see what we’re seeing in terms of usage. So how does RDRS work? It is a centralized platform, just like SSAD, and it allows submission and receipt of non-public gTLD registration data requests. There’s a standardized form and the ability to upload attachments to make your case as a requester. This means that you don’t have to make a phone call, you don’t have to figure out who manages a domain name, and you don’t have to cater to individual registrars’ processes. Sounds pretty good. But one thing to know is that registrars will be the ones making the determination of whether the requester has a legitimate interest, and deciding whether to disclose the data or not. So let’s talk about data disclosure. It is a heavy microphone. So it is a simplified system. Therefore, all communications between the requester and registrars will take place outside of the system, including the data disclosure itself. Disclosure methods will be of the registrars’ choosing, and the system does not and cannot guarantee disclosure of the data.
The disclosure decision lies solely with the data holder, in this case the registrars. I want to stress this point: ICANN cannot, through contract or any sort of policy, obligate registrars to disclose data in any particular case, because the law requires the registrar, as the data holder, to do the balancing test, as Becky mentioned earlier. So who can use this service? On the data holder side, this would be the ICANN-accredited registrars who choose to participate. And on the requester side, anybody who wants non-public gTLD registration data. So it could be, as mentioned, law enforcement agencies, IP attorneys, government agencies, cybersecurity professionals, anybody who may hold a legitimate interest; it could be beyond those people that I’ve mentioned. Since this is a proof-of-concept service, there are some limitations and restrictions. For example, on the data holder side, we are not considering registry operators to be part of this. We’re also not envisioning using this system for ccTLD-related registration data. So that’s something important to note: this is only for gTLD domain names. I’m not going to go through the next two slides in detail. They talk about benefits for registrars and requesters. As mentioned, on both sides, it will be a streamlined process: a standardized form and a centralized platform. And you don’t have to figure out who manages the domain; the system will automatically do that for you. From the requester side, there’s also a template feature, so you don’t have to fill out the same form over and over if you are submitting more than one request. So both registrars and requesters benefit from the standardized form. It’s easier, less painful, I would say, and it acts like a ticketing system: you can review your past requests, your pending requests, and what you may be about to submit. So when is this exciting tool becoming available?
The system was created with privacy by design, and the development work is nearly complete. The launch date is expected to be later this year, probably late November. In fact, we have already opened up the service to ICANN-accredited registrars for early onboarding, so that they can become familiar with this new tool. Then, when the service launches to the general public, the requester pool, they will be ready to receive requests. So I want to conclude this presentation with this. As you all know, the landscape of internet privacy has been quickly changing, and it will obviously continue to evolve. Balancing the rights of data subjects with timely access to domain name registration information is more crucial than ever. ICANN is striving to seek ways to evolve with the ever-changing environment and landscape through our multi-stakeholder, bottom-up consensus-building model. I’d like to encourage all of you, if you are part of the requester pool: if you ever need registration data within the gTLD domain name ecosystem, then this system is for you. As mentioned, this is a proof-of-concept service. Therefore, the more people utilize it, the more accurate and useful the data we can produce, which will lead to a better tool in the future. So please spread the word and be ready to use this system in November. Thank you so much for your time.
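Yuko's description of how the service routes a request to the responsible registrar, while leaving the disclosure decision with that registrar, can be roughly sketched as follows. This is a hypothetical illustration, not the actual RDRS; the registrar names, the lookup map, and the function are all made up for the sketch:

```python
# Hypothetical sketch of RDRS-style routing: the service finds the
# responsible registrar and forwards the request; the registrar alone
# performs the balancing test and decides on disclosure.

REGISTRAR_OF = {"example.com": "Registrar A", "example.net": "Registrar B"}
PARTICIPATING = {"Registrar A"}  # participation is voluntary

def route_request(domain: str, requester: str, justification: str) -> str:
    registrar = REGISTRAR_OF.get(domain)
    if registrar is None:
        return "unknown domain"
    if registrar not in PARTICIPATING:
        return f"{registrar} does not participate; contact them directly"
    # The system only delivers the request; disclosure is the registrar's call.
    return f"request from {requester} forwarded to {registrar} for balancing test"

print(route_request("example.com", "law enforcement", "criminal investigation"))
print(route_request("example.net", "IP counsel", "trademark infringement"))
```

The second call shows the voluntary-participation gap the panel discusses later: a well-formed request for a non-participating registrar's domain never reaches a balancing test at all.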

David Huberman:
Okay, thank you very much, Yuko. That was a very clear and very succinct explanation of this new tool available to everybody. Okay, you’ve all been very patient. Thank you. While we’ve set the table here, it is now time for questions and answers. We have quite a lot of people online who are watching. And online, if you have questions, you may raise your hand. You may type questions in the chat and our online moderator, Patrick Jones, will read them to us. I am going to start in the room. There are microphones on either end. There are microphones at the table. The first person who has gotten my attention is Farzaneh. So please go ahead and take a microphone and talk to us.

Audience:
Thank you, David. My name is Farzaneh Badii. For 20 years, we published domain name registrants’ personal, sensitive data, their mailing address, their phone number, their email, for everyone on the internet to have access to. This could lead to doxxing in dictatorships, and where being LGBTQI is illegal, it could actually lead to persecution. And website owners most of the time didn’t know that their private, sensitive information was being published. Then privacy proxies came along and there was some kind of improvement. I see a few people here who have not been involved with ICANN, so I just want to show the gravity of the issue. But I want to also congratulate ICANN. This workshop is one of the first workshops that ICANN has actually titled DNS privacy, focusing on privacy and not just access. So this is a major improvement and I’m very thankful for that. But now we are again talking about the issue of access, and I have several questions. One is that when we talk about the metrics for the requesters, what sort of metrics are we talking about? Are we going to say how many requests for access came from law enforcement? And if this is globally accessible to everyone, since it’s free, is it also accessible to law enforcement in some authoritarian country? How can you actually verify that? The other thing that I have seen governments ask for is for request logs to be kept confidential. And this is very dangerous. We need governments and law enforcement to be transparent, and I know how ICANN has responded to that request. But we need transparency in law enforcement’s requests for access to people’s personal, private data, and they are people. I don’t know where Manal got the stats suggesting that the majority of registrants are organizations, that they are legal entities, but even when they are legal entities, there might be personal information involved.
It might be their name and family name, and that is personal information. Anyway, I’m not going to give you more of a speech, but I think there are many aspects that we need to think about, and this session has been very, very good. Thank you, Becky, for that fabulous presentation. It was very, very inclusive of all the aspects of privacy. Thanks.

David Huberman:
Thank you. Please, sir, go ahead.

Audience:
Good afternoon. My name is John Sihar Simanjuntak from the .id registry in Indonesia. My first point is just a clarification: is it only accredited registrars that can access the data? And then my question is, how can access be granted to other requesters? I think this is something that we have to define really carefully, because the really big question is how I can understand whether a request can be granted. In each country, different entities may be in very different situations. So the first part is the clarification about who can access the data, and then I think you have to define exactly what it means to be granted access. Thank you.

Yuko Yokoyama:
Thank you for your question. I’m going to answer your first question, which was about ICANN-accredited registrars being the only ones using the system. They’re the ones who hold the data; they’re not on the requester side. Requesters come to the system and request the registration data for certain domain names. And if that domain name is managed by an ICANN-accredited registrar who participates in this RDRS, then the request gets routed to that registrar, and that registrar will conduct the balancing test and determine whether to disclose the requested data or not, based on local laws and other applicable laws. I don’t know, Becky, if you want to add something else.

Becky Burr:
No, I think the second part of your question was sort of what are the circumstances under which somebody would have access to the data. And as I said, first of all, the registrar who has the data and has to make the decision to give it out is going to apply the law that they’re subject to, the law, the regulations and the policies from their company that reflect the law and the policy that are relevant to them. Depending on a huge variety of circumstances that are relevant, they’re going to decide what kind of information they need to make a determination about whether they think the person who is requesting the information has a legitimate interest in that information and whether that legitimate interest is overridden by the fundamental privacy rights of the individual. So they’ll conduct a balancing test. They’ll decide if they have the information they need to make that determination and that decision will be based on and informed by the law that they’re subject to.

David Huberman:
Patrick.

Audience:
Thanks, David. It’s Patrick Jones from ICANN. Since we don’t have any remote questions yet in the chat, I wanted to mention that one of the other elements that’s changing is that we’re moving to a new, more secure, more standardized protocol called the Registration Data Access Protocol. In the past, historically, all of this registration data has been delivered through a protocol called WHOIS, and many of the registries are already delivering this registration data through the RDAP protocol. I’ll point to Geoff to see if he might be able to touch on this a bit more as well. I believe all gTLD registries are already doing this. We’ve been going through a contract change process with the generic top-level domain registries to enable them to use the RDAP protocol, and many country code registry operators are also using it. So with that, I’ll turn it back to the panel, noting that RDAP is something new and we’ll be using it with the system. Okay, did we want to have a quick follow-up here, sir? Please. Yeah, following on from the explanation, actually. In Indonesia, we have had a law similar to the GDPR since last year. And my question is: since the database can be accessed through the accredited registrar, but the registrar may not be the owner of the data, how can the registrar provide the data when it is not accountable for that data, given that the data may have crossed borders globally from the ICANN database? Under our law, of course, it is not allowed to give out the data, even when the party with the legitimate interest is, let’s say, the police, because the registrar is not the owner of the data. I think it’s still not allowed. Thank you.

Becky Burr:
So the question of data ownership is so fundamental to the discussion of data protection and privacy that it would take us the rest of our natural lives to discuss it here. So I think we’ll just skip that part of it. If your information is with a registrar in Malaysia, that registrar is certainly subject to Malaysian law. It’s possible that a registrar in another country also has obligations under Malaysian law with respect to your data, because the way modern data protection laws work is that they tend to apply to the processing of information about a resident of the country, and not only to processing within the country. So even if a registrar is outside of the country, they may have obligations under Malaysian law. And while I am not an expert on Malaysian data protection law, I can tell you that there are circumstances, under the balancing test that we talked about, where it would be appropriate, or it would be okay, for a registrar to disclose that data. And then there are going to be circumstances where it wouldn’t be okay for a registrar to disclose that data. Just in terms of the issue: when you register a domain name with a registrar, you will be asked to agree to their privacy policy. Their ICANN contract requires them to make certain disclosures as well. And so there are some contractual provisions that flow from you to the registrar. But the bottom line is that the registrar has to comply with the law that applies to the processing of that data. If it’s Malaysian law that governs that processing, then they have to comply with Malaysian law. And if it’s Irish law or European Union regulation that applies, they will apply that law.
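The RDAP protocol Patrick mentions replaces WHOIS's free-text output with structured JSON over HTTPS (RFC 9083). A small illustration of why that matters for tooling, using a hand-written response fragment in the RDAP shape rather than a real registry reply:

```python
import json

# Illustrative, hand-written fragment in the shape of an RDAP (RFC 9083)
# domain response; real responses come from a registry's /domain/ endpoint.
sample = json.loads("""
{
  "objectClassName": "domain",
  "ldhName": "example.com",
  "entities": [
    {"roles": ["registrant"],
     "vcardArray": ["vcard", [["fn", {}, "text", "REDACTED FOR PRIVACY"]]]}
  ],
  "status": ["client transfer prohibited"]
}
""")

# Unlike WHOIS free text, fields can be addressed programmatically.
registrant = next(e for e in sample["entities"] if "registrant" in e["roles"])
name = registrant["vcardArray"][1][0][3]
print(sample["ldhName"], "registrant:", name)
```

Because responses are machine-readable, redaction can be applied and detected field by field instead of by scraping prose, which is one reason RDAP fits the post-GDPR disclosure model better than WHOIS did.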

Audience:
Steve. Thanks, Steve Del Bianco. I work with NetChoice, a trade association in Washington, but I’ve also been very active at ICANN for the last 20 years in the business constituency. And I am most eager to hear more about Geoff Huston’s elephant. Namely, what would ICANN, the IETF and IANA have to do? How would ICANN be involved in that element of DNS privacy, in the development and dissemination of those protocols and standards, and what policies would be developed to address it? And then allowing the community to weigh the costs and benefits of some of the tools that Geoff has talked about. But I think that elephant is not going anywhere. He’ll wait a little bit. What’s more immediate is that within a few weeks, we’ll start the RDRS, turn it on and offer it. And I was part of the group that did the EPDP, as well as the small team on RDS and RDRS. And I’d always believed that it would be a false promise to think that a system like that would be an adequate measure of demand. Because the demand that we’re talking about is the demand to solve a problem. A requester like a commercial organization is trying to stop fraud that’s harassing their own consumers and undermining their business’s reputation across a wide audience. There might be IP attorneys looking to protect their IP, but it’s usually to protect consumers that are getting defrauded. That’s the character of that. We also have security researchers, as well as security professionals trying to stop a current attack that’s going on. And then you have law enforcement, which Manal talked a little bit about. Now, historically, WHOIS helped to decrease the time it took and decrease the cost it took to start the investigation of solving the problem. It was only a small part of solving the problem. You’re probably well aware that even before GDPR, we had an increasing proportion of registrations going privacy proxy.
That was of concern, because it meant that ICANN needed to embrace the privacy proxy providers, to accredit them and hold them accountable to standards of performance. And that got interrupted, of course, by the effective date of GDPR fines in 2018. And ICANN’s reaction to that led to a dramatic reduction in the value of using WHOIS to begin to solve problems that often maybe do not rise to the level of urgency that Manal gave us, an imminent threat to life, right? Or critical infrastructure. But it’s quite urgent if your customers are being defrauded at the rate of thousands a day, because they’re being directed to another website or a fraudulent Red Cross donation site to take advantage of a new natural disaster. So the demand for RDRS may not be indicative of the demand for an SSAD that achieved some of the benefits. I mean, it’s not a replication of the value SSAD would provide. So there shouldn’t be an assumption that the demand value will transfer. Let me explain why: the value to a requester of submitting a request won’t be sufficient with RDRS to motivate a lot of use. So we’re having to find other ways to motivate the use of RDRS. And that’s a real challenge, because the promised value of RDRS, as well as the experienced value, is low. So I’ll reiterate something that I’d like you to consider. There’s still time, before you deploy, to do a few things that will increase the likelihood of value and increase the use. Not only the use in terms of the monthly metrics, but use in a way that gives us the data we need to determine whether SSAD is worth doing, or whether new policies are necessary. Yuko, you’re well familiar with this, but one is to allow a requester to upload a batch of requests for multiple domains that might be in use right now in a threat to my customers. And ICANN said no, didn’t want to do that, thought it would delay the release date.
And I’m a programmer, so if that’s true and it’s a problem, why not still work on it? Put it in the queue as something that could be announced within a few weeks or months of the release. Make it the second thing that comes out, so it doesn’t jeopardize the release date. And then finally, I would say: retain in detail all of the data that a requester submits, even if it turns out that the registrar is not participating. That data is essential to doing analysis at any point before we conclude whether there’s demand. Because that analysis would show the quality and the quantity of who’s requesting, why they’re requesting, what evidence they presented, what legitimate reason they’ve offered. And then on the flip side, for the registrars who did participate, how fast did they respond and how well did they apply the balancing test? Because you can look at the evidence if you retain it. Now, if there’s a concern that you don’t want me to see all that data, that’s fine. Let’s hire a privacy lawyer that ICANN lets look at the data, who comes back with a more qualitative analysis as opposed to just a handful of metrics. So there are things that ICANN can do in the four weeks remaining. Start to work on bulk uploads, right? And don’t throw the data on the floor. Don’t throw the data away. If it turns out that I put all the work into formulating requests and I provide all this evidence and screenshots, oh, but the registrar isn’t participating, we lose the ability to understand what the demand was, because that is the measure of demand. The measure of demand is the requests I put in, and if you throw most of them on the floor, you’re not going to get a good answer. Thank you.

Becky Burr:
So as we discussed, I do think there is some value in doing analytics on the data. But I’m a little confused about one thing that you just said, which is I think you’re saying that people are only going to use it if they, they’re not going to, in other words, they’re not going to use it to make requests. They’re going to use it to create the data. I mean, okay, so then why, well, then I’m confused because I guess I don’t understand what you’re saying, what the value, the value that you’re seeing, that you’re talking about is not the return of the data or the ease of the submission. It’s the creation of information about requests.

Audience:
If I could just clarify, Becky. We can make promises about what RDRS will do when it turns on, but none of us know how to effectually deliver on the promises because we have no idea how many registrars, registrars whose domain names are the subject of requests, will be participating on day one. We’re not going to be very clear about who the nature of that is, nor can we make promises about the fairness that’s done in the evaluation of the balancing test. So you make promises, but ultimately it’s the first experience, the first taste the requesters get when they submit their first several requests. And if it turns out that over half of the first batch of requests I put in were for non-participating registrars, you’ve just really diminished the interest of that requester to bother to do any more. So once they get a bad taste, or they get participating registrars that take four days to come back and say, nope, you don’t pass the balancing test, if the first taste is bad, then that requester community, I would like them to stay engaged, Becky. I want them to stay there to provide evidence of demand, but they have to have some assurance that having been disappointed at the actual experience and taste, is there a reason to do another set of requests? You know, I put 10 in, five were non-participating registrars, and the rest took three days to get back, and the balancing test said take a hike. I’m not going to come back to you unless there’s some other reason. So think of it as a two-step reason. First is, maybe I’ll get a good taste back, maybe it’ll actually help me stop an existential attack. But if it doesn’t, for reasons that are outside of your control, if it doesn’t, retain the data necessary to analyze the nature of demand that was there, and that will provide an additional incentive for people to continue to try the requests.

Becky Burr:
Okay, so I just want to confirm that what you’re saying is that the retention of data for downstream analytics is an incentive for participation. Okay, I get that. I think we have to acknowledge here that for the analysis we just talked about, in terms of whether there is a less intrusive way to accomplish your goal, stopping crime, protecting intellectual property, whatever it is, the only source of information that we have at this point is who submits requests. What we hear anecdotally is that registrars are not getting very many requests. And the consequence, which I think is a logical conclusion from that, is that requesters are attacking the problem in a different way that doesn’t use the data. And that is the fundamental essence of the balancing test. So all I’m saying, and by the way, SSAD isn’t going to change that outcome at all. So we are very much encouraging all registrars to participate. As you know, the board did suggest that the GNSO Council consider policy development to make participation mandatory, because we know just how important that is. But I do think that for this collective data gathering, we have to ask requesters to make their needs known through the system.

Audience:
Edmon, did you want to add to that, please? Edmon Chung here. I just wanted to respond to Steve's other question. So just a note: Edmon from DotAsia, also serving on the ICANN board, but I'm not speaking in my capacity as a board member, but as a general participant. On the first item that you mentioned, the elephant, or the burning question that Geoff has: both the ICANN community and ICANN itself are actively participating in discussing those issues, and I think Geoff can add to that. At the ICANN meetings those were actually discussed a few years back. You might remember the DoT and DoH discussions. That's part of what I think Geoff mentioned, and Geoff, please add to it. And following from that, there is what is called OARC. I don't know whether you know OARC; I don't really know the long version, the DNS Operations, Analysis and Research Center, something like that. But that, I think, is also part of the multi-stakeholder model at work, because I want to emphasize one thing: the reason why this session is here at the IGF and not at ICANN is because of that broader sense. This is an issue that is not just an ICANN issue, but an issue that we need to involve other stakeholders in, of which ICANN is one. The WHOIS matter has a slightly smaller element of that, but the other part has a bigger element, and it's closer to things that we talk about such as DNS abuse. ICANN can do one part of it, but there is a much larger DNS community that needs to do further work on those types of things. So I don't know whether Geoff wants to add to it, but the quick answer is that ICANN and the ICANN community are working on that other elephant as well, and hopefully it will shrink or start walking away. But Geoff, you definitely have much more to say.

Geoff Huston:
I have some bad news. I have some very bad news for you. The issue is that once abused, there's no coming back, and the response from the technical community is heading down a path that, quite frankly, touches upon an addiction in this industry. We are addicted to open DNS data. What if the DNS and its use generated no data whatsoever? Nothing; nobody could see it, no matter how good their tools. What then? What names am I going to look up for registration if I don't know what names are being used in the first place? Once you get a totally opaque system, then this entire conversation heads down an entirely different path that very few people are actually prepared to think about right now. But the response from the technical community is to create precisely that picture: there is no attribution in the use of names whatsoever if we head down this road of obfuscation and encryption. There is nothing left. So everything that we've been thinking about, yes, we know how the DNS works, we understand what DNS abuse is and so on. Once you go down this privacy path to its destination, all the lights go out. It's dark. And at that point, it's an entirely different world. And exactly how we're going to respond as law enforcement, as engineers, as network operators, when the network has all the lights turned off, is a question I don't think anyone's actually able to answer. Now, what's ICANN's role in all this? I'm not sure ICANN has a role other than to be another onlooker into what is either going to be a phenomenal success for privacy or a phenomenal tragedy. I don't know which, at this point. But I do know the path is inexorable and the answer is certain: we are going to turn the lights off. And that's just one of those things that's going to happen. It's an interesting area to contemplate.

David Huberman:
Thank you, Geoff. We have three minutes left. Andrew?

Audience:
Fantastic. You may as well stay there, Geoff, because I thought he was looking bored, so I wanted to bring him in. Anyway, two things. The picture you painted, Geoff, before, when you touched on encrypted DNS and DNSSEC and so on, was all very positive. You didn't mention, unusually for you, the lack of take-up of many of those. So designing the protocol is interesting, but that's the start of the solution. It's a long way short of the end. And I would suggest part of the problem for the lack of take-up is that when the IETF develops standards, there's little, in reality no, involvement of the end-user community. So maybe we will get better take-up of standards if we bother to involve the end-users in the design process, because we're designing things that are either too hard to implement or that they're not interested in implementing. So ticking the box because we've got the protocol is not really that interesting in solving this. And then just briefly and finally, to your point on when we get to that destination of everything going dark: if we have a diverse standards community, which includes CISOs, I can tell you from personal experience of talking to them, they will be horrified at that, because we kill enterprise cybersecurity when we go dark. And then if we think we've got privacy, we're deluded, because at that point we've got no privacy at all, because we've got no security. So the privacy purists are kidding themselves if they think they get privacy by removing all the data. That's when they really have a problem. It'd be far worse than it ever was before.

Geoff Huston:
Andrew, just think for a second, and I'll respond very, very quickly. The biggest tension in the internet is between applications and infrastructure, and the applications have won the game hands down. QUIC is not a transport protocol; QUIC is an attribute of the application. The application designers, and particularly the browser engines, have lost patience with the rest of us. Privacy in the DNS is not a DNS infrastructure problem anymore. It's what browsers are doing and where applications are heading. They have the money, the agility, the update machinery and infrastructure. And so the DNS is being taken away from traditional DNS operators, because they're basically too slow and the job they're doing is not good enough from the perspective of the application. And in that battle for control, round one happened in Firefox; round two will happen in Chrome and Safari. In fact, Apple is probably there already with its own relay, iCloud Private Relay. So Andrew, the fight is happening further up the protocol stack, and the application folk, who have the money, the agility, and the motive, appear to be winning hands down at this particular point in time. The infrastructure folk are being left behind. It's interesting to think about. Thank you.
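To make the protocol shift Huston describes concrete, the following sketch (an editorial illustration, not part of the session) builds the RFC 1035 wire-format query that a DNS-over-HTTPS client would POST to a resolver under RFC 8484. The name `example.com` and the A-record lookup are arbitrary choices for the example.

```python
import struct

def encode_dns_query(name: str, qtype: int = 1, qid: int = 0) -> bytes:
    """Encode a DNS query for `name` in RFC 1035 wire format.

    This is the payload a DoH client sends in an HTTP POST body
    (RFC 8484, content type application/dns-message). qtype 1 = A record;
    RFC 8484 recommends a query ID of 0 for cache friendliness.
    """
    # Header: ID, flags (RD bit set), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE followed by QCLASS (1 = IN)
    question = qname + struct.pack(">HH", qtype, 1)
    return header + question
```

Once this message travels inside a TLS session to the resolver, nothing about the queried name is visible to a network-level observer, which is exactly the "lights out" property discussed above.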

David Huberman:
Manal, you have your hand up.

Manal Ismail:
Yes, it’s more of a general comment, and I feel obliged, because I’ve been mentioned twice, to quickly agree with Steve on the importance of the data collected during the proof of concept, and also to agree with Farzi that there are many aspects to this discussion. And it’s very interesting to see that despite the diverse views, we are all talking from a public interest perspective. So on one side it’s privacy, on the other side it’s safety. And I hope we can utilize ICANN’s bottom-up multi-stakeholder model to continue a constructive and inclusive discussion, to be able to strike the right balance in that respect. Thank you.

David Huberman:
Well, thank you so much, Manal. That is actually a wonderful way to end it, because unfortunately, my friends, we are out of time, even though there are more questions. Thank you very much for coming. I’d like to thank Becky Burr, Yuko Yokoyama, Geoff Huston, Manal Ismail, and Patrick, thank you for getting up so early and being our online moderator. And thank you all for coming. This concludes the session. Thank you.

Speaker | Speech speed | Speech length | Speech time
Audience | 168 words per minute | 3123 words | 1115 secs
Becky Burr | 141 words per minute | 2559 words | 1090 secs
David Huberman | 173 words per minute | 1351 words | 468 secs
Geoff Huston | 168 words per minute | 2135 words | 761 secs
Manal Ismail | 134 words per minute | 1728 words | 776 secs
Yuko Yokoyama | 147 words per minute | 1811 words | 740 secs

Can (generative) AI be compatible with Data Protection? | IGF 2023 #24


Full session report

Kamesh Shekar

The analysis examines the importance of principles and regulation in the field of artificial intelligence (AI). It highlights the need for a principle-based framework that operates at the ecosystem level, involving various stakeholders. The proposed framework suggests that responsibilities should be shared among different actors within the AI ecosystem to ensure safer and more responsible utilization of AI technologies. This approach is seen as crucial for fostering trust, transparency, and accountability in the AI domain.

Additionally, the analysis emphasizes the significance of consensus building in regard to AI principles. It argues for achieving clarity on principles that resonate with all stakeholders involved in AI development and deployment. International discussions are seen as a crucial step towards establishing a common understanding and consensus on AI principles, ensuring global alignment in the adoption of ethical and responsible practices.

Furthermore, the analysis explores the role of regulation in the AI landscape. It suggests that regulation should not only focus on compliance but also be market-oriented. The argument is made that enabling the AI market and providing businesses with a value proposition in regulation can support innovation while ensuring ethical and responsible AI practices. This market-based regulation approach is believed to be beneficial for industry growth (aligning with SDG 9: Industry, Innovation, and Infrastructure) and economic development (aligning with SDG 8: Decent Work and Economic Growth).

Overall, the sentiment towards implementing principles and regulation in AI is positive. Although the analysis does not provide specific principles or regulations, it emphasizes the importance of a principle-based framework, consensus building, and market-based regulation. These insights can be valuable for policymakers, industry leaders, and other stakeholders in developing effective and responsible AI governance strategies.

Jonathan Mendoza Iserte

Artificial intelligence (AI) has the potential to drive innovation across sectors, but it also poses challenges in terms of regulation, ethical use, and the need for transparency and accountability. The field of AI is rapidly evolving and has the capacity to transform development models in Latin America. Therefore, effective regulations are necessary to harness its benefits.

Latin American countries like Argentina, Brazil, and Mexico have taken steps towards AI regulation and have emerged as regional leaders in global AI discussions. To further strengthen regulation efforts, it is proposed to establish a dedicated mechanism in the form of a committee of experts in Latin America. This committee would shape policies and frameworks tailored to the region’s unique challenges and opportunities.

The adoption and implementation of AI will have mixed effects on the economy and labor. By 2030, AI is estimated to contribute around $13 trillion to the global economy. However, its impact on specific industries and job markets may vary. While AI can enhance productivity and create opportunities, it may also disrupt certain sectors and lead to job displacement. Policymakers and stakeholders need to consider these implications and implement measures to mitigate negative consequences.

Additionally, it is crucial for AI systems to respect fundamental human rights and avoid biases. A human-centric approach is necessary to ensure the ethical development and deployment of AI technologies. This includes safeguards against discriminatory algorithms and biases that could perpetuate inequalities or violate human rights.

In conclusion, AI presents both opportunities and challenges. Effective regulation is crucial to harness the potential benefits of AI in Latin America while mitigating potential harms. This requires international cooperation and a human-centric approach that prioritizes ethical use and respect for human rights. By navigating these issues carefully, Latin America can drive inclusive and sustainable development.

Moderator – Luca Belli

The analysis delves into various aspects of AI and Data Governance, shedding light on several important points. Firstly, it highlights the significance of comprehending AI sovereignty and its key enablers. AI sovereignty goes beyond authoritarian control or protectionism and involves understanding and regulating technologies. The enablers of AI sovereignty encompass multiple elements, including data, algorithms, computation, connectivity, cybersecurity, electrical power, capacity building, and risk-based AI governance frameworks. Understanding these enablers is crucial for effective AI and Data Governance.

Secondly, the analysis underscores the need to increase representation and consideration of ideas from the Global South in discussions about data governance and AI. The creation of the Data and AI Governance Coalition aims to address issues related to data governance and AI from the perspective of the Global South. It highlights the criticism that discussions often overlook ideas and solutions from this region. To achieve comprehensive and inclusive AI and Data Governance, it is imperative to involve diverse voices and perspectives from around the world.

Moreover, the analysis emphasizes that AI governance should be treated as a matter of self-determination for everyone. The right of peoples to self-determination is recognised in Article 1 of the United Nations Charter and in common Article 1 of the International Covenants on Civil and Political Rights and on Economic, Social and Cultural Rights. Framing AI governance in these terms ensures that individuals and communities retain agency and control over their own technological destiny.

Furthermore, the analysis notes that the development of an international regime on AI may take between seven and ten years. This estimate is influenced by the involvement of tech executives who advocate for such an agreement. Due to the complexity of AI and the multitude of considerations involved, reaching international consensus on an AI regime requires ample time for careful deliberation and collaboration.

Lastly, the examination reveals that the process of shaping the UN Convention on Artificial Intelligence could be protracted due to geopolitical conflicts and strategic competition. These external factors introduce additional challenges and intricacies into the negotiating process, potentially prolonging the time required to finalize the convention.

In conclusion, the analysis offers valuable insights into AI and Data Governance. It emphasizes the importance of understanding AI sovereignty and its enablers, advocates for increased representation from the Global South, asserts AI governance as a fundamental right, highlights the time-consuming nature of developing an international regime on AI, and acknowledges the potential delays caused by geopolitical conflicts and strategic competition. These findings contribute to a deeper understanding of the complexities surrounding AI and Data Governance and provide a foundation for informed decision-making in this domain.

Audience

The analysis explores various topics and arguments relating to the intersection of AI and data protection. One concern is whether generative AI is compatible with data protection, as it may pose challenges in safeguarding personal data. There is also an interest in understanding how AI intersects with nationality and statelessness, with potential implications for reducing inequalities and promoting peace and justice. Additionally, there is a desire to know if there are frameworks or successful instances of generative AI working in different regions.

Privacy principles within Gen-AI platforms are seen as crucial, with 17 initial principles identified and plans to test them with 50 use cases. However, the use of AI also raises questions about certain data protection principles, as generative AI systems may lack specified purposes and predominantly work with non-personal data for profiling individuals.

There is a call for a UN Convention on Artificial Intelligence to manage the risks and misuse of AI at an international level. However, the analysis does not provide further details or evidence on the feasibility or implementation of such a convention. Potential geopolitical conflicts and strategic competition between AI powers are also highlighted as potential barriers to developing a UN Convention on Artificial Intelligence.

The “Brussels effect” is mentioned as a factor that may have negative impacts in non-European contexts. Concerns are raised about premature legislation in the field of AI and the need for clear definitions when legislating on AI to ensure comprehensive regulation. The analysis covers a broad range of topics and arguments, though some lack supporting evidence or further exploration. Notable insights include the need for privacy principles in Gen-AI platforms, challenges to data protection principles posed by AI, and the potential hindrances to global cooperation on AI regulation.

In conclusion, the analysis offers valuable insights into the complex relationship between AI and data protection.

Giuseppe Claudio Cicu

Artificial intelligence (AI) is reshaping the corporate governance framework and business processes, revolutionising society. Its integration is viewed positively, as it enhances strategy setting, decision making, monitoring, and compliance in organisations. However, challenges arise in terms of transparency and accountability.

To address this, an ethical approach to AI implementation is proposed, such as the AI by Corporate Design Framework, which blends business process management and the AI lifecycle. This framework incorporates ethical considerations such as the human-in-the-loop and human-on-the-loop principles. Furthermore, it is suggested that corporations establish an Ethical Algorithmic Legal Committee to regulate AI applications. This committee would act as a filter between stakeholders and AI outputs, ensuring ethical decision-making.

Additionally, there is a call for legislators to recognise technology as a corporate dimension, as it has implications for accountability, organisation, and administration. By developing appropriate regulations and norms, responsible and ethical use of AI in corporate governance can be ensured. Overall, AI has potential benefits for corporate governance and business processes, but careful consideration of transparency, accountability, and ethics is necessary.

Armando Josรฉ Manzueta-Peรฑa

The use of generative AI holds great potential for the modernisation of government services and the improvement of citizens’ lives. By automating the migration of legacy software to flexible cloud-based applications, generative AI can supercharge digital modernisation in the government sector. This automation process can greatly streamline and enhance government operations. AI-powered tools can assist with pattern detection in large stores of data, enabling effective analysis and decision-making. The migration of certain technology systems to the cloud, coupled with AI infusion, opens up new possibilities for enhanced use of data in government services.

To successfully implement AI in the public sector, attention must be given to key areas. Firstly, existing public sector workers should receive training to effectively manage AI-related projects. Equipping government employees with the necessary skills and knowledge is essential. Citizen engagement should be prioritised when developing new services and modernising existing ones. Involving citizens in the decision-making process fosters inclusivity and builds trust. Government institutions must be seen as the most trusted entities holding and managing citizens’ data. Strong data protection rules and ethical considerations are crucial. Modernising the frameworks for data protection safeguards sensitive information and maintains public trust.

The quality of AI systems is heavily dependent on the quality of the data they are fed. Accurate data input is necessary to avoid inaccurate profiling of individuals or companies. Effective data management, collection, and validation policies are vital for meaningful outcomes. Strong data protection measures, collection, and validation processes ensure accurate and reliable AI-driven solutions. Developing nations face challenges in quality data collection, but good quality data and administrative registers are necessary to leverage AI effectively.
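As a minimal illustration of such a validation gate (the field names, the 11-digit identifier format, and the rules below are invented assumptions for the sketch, not details from the report), a record check run before any AI pipeline ingests administrative data might look like:

```python
from dataclasses import dataclass

@dataclass
class CitizenRecord:
    national_id: str   # hypothetical 11-digit identifier
    birth_year: int
    region: str

def validate(record: CitizenRecord, known_regions: set) -> list:
    """Return the list of problems found; an empty list means the record is usable."""
    problems = []
    if not (record.national_id.isdigit() and len(record.national_id) == 11):
        problems.append("national_id must be 11 digits")
    if not 1900 <= record.birth_year <= 2023:
        problems.append("birth_year out of plausible range")
    if record.region not in known_regions:
        problems.append("unknown region: " + record.region)
    return problems
```

Records that fail such checks would be routed back for correction rather than fed to a model, which is one concrete way the "quality in, quality out" point above can be operationalised.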

In conclusion, successful AI implementation in the public sector requires government institutions to familiarise themselves with the advantages of AI and generative AI. Workforce transformation, citizen engagement, and government platform modernisation are crucial areas. Strong data protection rules and ethical considerations are essential. The quality of AI systems relies on the quality of the data they are fed. Proper data management, collection, and validation policies are necessary. Addressing these aspects allows government institutions to harness the full potential of AI, modernise their services, and improve citizens’ lives.

Michael

The analysis examines the issue of harmonised standards in the context of AI and highlights potential shortcomings. It is argued that these standards might fail to consider the specific needs of diverse populations and the local contexts in which AI systems are implemented. This is concerning as it could result in AI systems that do not effectively address the challenges and requirements of different communities.

One of the reasons for this oversight is that the individuals involved in developing these standards primarily come from wealthier parts of the world. As a result, their perspectives may not adequately reflect the experiences and concerns of marginalised communities who are most impacted by AI technologies.

While some proponents argue that harmonised standards can be beneficial and efficient, it is stressed that they should not disregard the individual needs and concerns of diverse populations. Balancing the efficiency and standardisation of AI systems with the consideration of local contexts and marginalised populations’ needs is paramount.

The tension between the value of harmonised AI standards and the disregard for local contexts is noted. It is suggested that the development of these standards may further entrench global inequities by perpetuating existing power imbalances and neglecting the specific challenges faced by different communities.

In conclusion, the analysis cautions against the potential pitfalls of harmonised AI standards that do not take into account diverse populations and local contexts. While harmonisation can be beneficial, it should not be at the expense of addressing the specific needs and concerns of marginalised communities. By striking a balance between efficiency and inclusivity, AI standards can better serve the needs of all communities and avoid perpetuating global inequities.

Kazim Rizvi

In his paper, Kazim Rizvi delved into the important topic of mapping and operationalising trustworthy AI principles in specific sectors, focusing specifically on finance and healthcare. He discussed the need for responsible implementation and ethical direction in the field of AI, highlighting the potential synergies and conflicts that may arise when applying these principles in these sectors. To address this, Rizvi proposed a two-layer approach to AI, dividing it into non-technical and technical aspects.

The non-technical layer examines strategies for responsible implementation and ethical direction. This involves exploring various approaches to ensure that AI technologies are developed and deployed in a manner that upholds ethical standards and benefits society as a whole. Rizvi emphasised the importance of involving multiple stakeholders from industry, civil society, academia, and government in this process. By collaborating and sharing insights, these diverse stakeholders can contribute to the effective implementation of AI principles in their respective domains.

In addition to the non-technical layer, the technical layer focuses on different implementation strategies for AI. This encompasses the technical aspects of AI development, such as algorithms and models. Rizvi emphasised the need for careful consideration and evaluation of these strategies to align them with trustworthy AI principles.

Moreover, Rizvi highlighted the significance of a multi-stakeholder approach for mapping and operationalising AI principles. By involving various stakeholders, including those from industry, civil society, academia, and government, a more comprehensive understanding of the challenges and opportunities associated with AI can be gained. This approach fosters partnerships and collaborations that can lead to effective implementation of AI principles in relevant domains.

Rizvi also discussed the need for coordination of domestic laws and international regulations for AI. He pointed out that currently there is no specific legal framework governing AI in India, which underscores the importance of harmonising laws in the context of AI. This coordination should take into account existing internet laws and any upcoming legislation to ensure a comprehensive and effective regulatory framework for AI.

Furthermore, Rizvi explored alternative regulatory approaches for AI, such as market mechanisms, public-private partnerships, and consumer protection for developers. While not providing specific supporting facts for these approaches, Rizvi acknowledged their potential in enhancing the regulation of AI and ensuring ethical practices and responsible innovation.

In conclusion, Kazim Rizvi’s paper presented an in-depth analysis of the mapping and operationalisation of trustworthy AI principles in the finance and healthcare sectors. He highlighted the need for a multi-stakeholder approach, coordination of domestic laws and international regulations, as well as alternative regulatory approaches for AI. By addressing these issues, Rizvi argued for the responsible and ethical implementation of AI, ultimately promoting the well-being of society and the achievement of sustainable development goals.

Wei Wang

The discussion centres around the regulation of Artificial Intelligence (AI) across different jurisdictions, with a particular focus on Asia, the US, and China. Overall, there is a cautious approach to regulating AI, with an emphasis on implementing ethical frameworks and taking small, precise regulatory steps. Singapore, for instance, recognises the importance of adopting existing global frameworks to guide their AI regulation efforts.

In terms of specific regulatory models, there is an evolution happening, with a greater emphasis on legal accountability, consumer protection, and the principle of accountability. The US has proposed a bipartisan framework for AI regulation, while China has introduced a model law that includes the principle of accountability. Both of these frameworks aim to ensure that AI systems and their designers are responsible and held accountable for any negative consequences that may arise.

However, one lingering challenge in AI regulation is finding the right balance between adaptability and regulatory predictability. It is vital to strike a balance that allows for innovation and growth while still providing effective governance and oversight. Achieving this equilibrium is essential to ensure that AI technologies and applications are developed and used responsibly.

The need for effective governance and regulation of AI is further emphasized by the requirement for a long-standing balance. AI is a rapidly evolving field, and regulations must be flexible enough to keep up with advancements and emerging challenges. At the same time, there is a need for regulatory predictability to provide stability and ensure that ethical and responsible AI practices are followed consistently.

In conclusion, the conversation highlights the cautious yet evolving approach to AI regulation in various jurisdictions. The focus is on implementing ethical frameworks, legal accountability, and consumer protection. Striking a balance between adaptability and regulatory predictability is essential for effective governance of AI. Ongoing efforts are required to develop robust and flexible regulatory frameworks that can keep pace with the rapid advancements in AI technology and applications.

Smriti Parsheera

Transparency in AI is essential, and it should apply throughout the entire life cycle of a project. This includes policy transparency, which involves making the rules and guidelines governing AI systems clear and accessible. Technical transparency ensures that the inner workings of AI algorithms and models are transparent, enabling better understanding and scrutiny. Operational and organizational transparency ensures that the processes and decisions made during the project are open to scrutiny and accountability. These three layers of transparency work together to promote trust and accountability in AI systems.

Another crucial aspect where transparency is needed is in publicly facing facial recognition systems. These systems, particularly those used in locations such as airports, demand even greater transparency. This goes beyond simply providing information and requires a more deliberate approach to transparency. A case study of a facial recognition system for airport entry highlights the importance of transparency in establishing public trust and understanding of the technology.

Transparency is not limited to the private sector. Entities outside of the private sector, such as philanthropies, think tanks, and consultants, also need to uphold transparency. It is crucial for these organizations to be transparent about their operations, relationships with the government, and the influence they wield. Applying the right to information laws to these entities ensures that transparency is maintained and that they are held accountable for their actions.

In conclusion, transparency is a key factor in various aspects of AI and the organizations involved in its development and implementation. It encompasses policy, technical, and operational transparency, which ensure a clear understanding of AI systems. Publicly facing facial recognition systems require even higher levels of transparency to earn public trust. Additionally, entities outside of the private sector need to be transparent and subject to right to information laws to maintain accountability. By promoting transparency, we can foster trust, accountability, and responsible development of AI systems.

Gbenga Sesan

The analysis highlights the necessity of reviewing data protection policies to adequately address the extensive data collection activities of AI. It points out that although data protection regimes exist in many countries, they may not have considered the scope of AI’s data needs. The delayed ratification of the Malabo Convention further underscores the urgency to review these policies.

Another key argument presented in the analysis is the centrality of people in AI discourse and practice. It asserts that people, as data owners, are fundamental to the functioning of AI. AI systems should be modelled to encompass diversity, not just for tokenism, but to ensure a comprehensive understanding of context and to prevent harm. By doing so, we can work towards achieving reduced inequalities and gender equality.

The analysis also underscores the need for practical support for individuals when AI makes mistakes or causes problems. It raises pertinent questions about the necessary steps to be taken and the appropriate entities to engage with in order to address such issues. It suggests that independent Data Protection Commissions could provide the requisite support to individuals affected by AI-related concerns.

Additionally, the analysis voices criticism regarding AI’s opacity and the challenges faced in obtaining redress when errors occur. The negative sentiment is supported by a personal experience where an AI system wrongly attributed information about the speaker’s academic achievements and professional appointments. This highlights the imperative of transparency and accountability in AI systems.

Overall, the analysis emphasises the need to review data protection policies, foreground people in AI discourse, provide practical support, and address concerns regarding AI’s opacity. It underscores the significance of transparency and accountability in ensuring responsible development and deployment of AI technologies. These insights align with the goals of advancing industry, innovation, and infrastructure, as well as promoting peace, justice, and strong institutions.

Melody Musoni

The analysis explores the development of AI in South Africa as a means to address African problems. It emphasizes the significance of policy frameworks and computing infrastructures at the African Union level, which reinforce the message that AI can be used to tackle specific challenges that are unique to Africa. The availability of reliable computing infrastructures is deemed crucial for the advancement of AI technology.

Furthermore, the analysis delves into South Africa’s efforts to improve its computational capacity and data centres. It mentions that South Africa aspires to be a hub for hosting data for other African countries. To achieve this goal, the government is collaborating with private companies such as Microsoft and Amazon to establish data centres. This highlights South Africa’s commitment to bolstering its technological infrastructure and harnessing the potential of AI.

The discussion also highlights South Africa’s dedication to AI skills development, with a particular focus on STEM and AI-related subjects in primary schools through to university levels. This commitment emphasises the need to provide quality education and equip the younger generation with the necessary skills to drive innovation and keep up with global advancements in AI technology.

However, it is also stressed that careful consideration must be given to data protection before implementing AI policies. The analysis asserts that existing legal frameworks surrounding data protection should be assessed before rushing into the establishment of AI policies or laws. This demonstrates the importance of safeguarding personal information and ensuring that data processing and profiling adhere to the principles of transparency, data minimisation, data subject rights, and purpose limitation.

Moreover, the analysis sheds light on the challenges faced by South Africa in its AI development journey. These challenges include power outages that are expected to persist for a two-year period, a significant portion of the population lacking access to reliable connectivity, and the absence of a specific cybersecurity strategy. This underscores the importance of addressing these issues to create an environment conducive to AI development and implementation.

Additionally, the analysis points out that while data protection principles theoretically apply to generative AI, in practice, they are difficult to implement. This highlights the need for data regulators to acquire more technical knowledge on AI to effectively regulate and protect data in the context of AI technology.

In conclusion, the analysis provides insights into the various facets of AI development in South Africa. It emphasises the significance of policy frameworks, computing infrastructures, and AI skills development. It also highlights the need for prioritising data protection, addressing challenges related to power outages and connectivity, and enhancing regulatory knowledge on AI. These findings contribute to a better understanding of the current landscape and the potential for AI to solve African problems in South Africa.

Liisa Janssens

Liisa Janssens, a scientist working at the Dutch Applied Sciences Institute, believes that the combination of law, philosophy, and technology can enhance the application of good governance in artificial intelligence (AI). She views the rule of law as an essential aspect of good governance and applies this concept to AI. Liisa’s interdisciplinary approach has led to successful collaborations through scenario planning in military operations. By using scenarios as a problem focus for disciplines such as law, philosophy, and technology, Liisa has achieved commendable results during her seven-year tenure at the institute.

In addition, there is a suggestion to test new technical requirements for AI governance in real operational settings. These settings can include projects undertaken by NATO that utilize Digital Twins or actual real-world environments. Testing and assessing technical requirements in these contexts are crucial for understanding how AI can be effectively governed.

In summary, Liisa Janssens emphasizes the importance of combining law, philosophy, and technology to establish good governance in AI. She advocates for the application of the rule of law to AI. Liisa’s successful engagement in interdisciplinary collaboration through scenario planning highlights its effectiveness in fostering collaboration between different disciplines. The suggestion to test new technical requirements for AI governance in real operational environments provides opportunities for developing effective governance frameworks. Liisa’s insights and approaches contribute to advancing the understanding and application of good governance principles in AI.

Camila Leite Contri

AI technology has the potential to revolutionise various sectors, including finance, mobility, and healthcare, offering numerous opportunities for advancement. However, the rapid progress of innovation in AI often outpaces the speed at which regulation can be implemented, leading to challenges in adequately protecting consumer rights. The Consumer Law Initiative (CLI), a consumer organisation, aims to safeguard the rights of consumers against potential AI misuse.

In the AI market, there are concerns about the concentration of power and control in the hands of big tech companies and foreign entities. These companies dominate the market, resulting in inequality in AI technology access. Developing countries, particularly those in the global south, heavily rely on foreign technologies, exacerbating this issue.

To ensure the proper functioning of the AI ecosystem, it is crucial to uphold not only data protection laws but also consumer and competition laws. Compliance with these regulations helps ensure transparency, fair competition, and protection of consumer rights in AI development and deployment.

A specific case highlighting the need for data protection is the alleged infringement of data protection rights in Brazil in relation to ChatGPT. Concerns have been raised regarding issues such as access to personal data, clarity, and the identity of data controllers. The Brazilian Data Protection Authority has yet to make progress in addressing these concerns, emphasising the importance of robust data protection measures within the AI industry.

In conclusion, while AI presents significant opportunities for advancement, it also poses challenges that require attention. Regulation needs to catch up with the pace of innovation to adequately protect consumer rights. Additionally, addressing the concentration of power in big tech companies and foreign entities is crucial for creating a fair and inclusive AI market. Upholding data protection, consumer rights, and competition laws is vital for maintaining transparency, accountability, and safeguarding the interests of consumers and society as a whole.

Session transcript

Moderator – Luca Belli:
All right, we are almost ready to go. It’s almost five past five. Should I give you a heads up to start? We can start. We are already online. OK, fantastic. Good afternoon to everyone. My name is Luca Belli. I’m professor at FGV Law School, where I direct the Center for Technology and Society. And together with a group of friends, many of whom are here with us today, we have decided to create this group, this coalition within the IGF, called the Data and AI Governance Coalition, where, as you might imagine, we are already discussing data and AI governance issues, with a particular focus on global south perspectives. So the idea to create this group was born some months ago during a capacity building program that we have at FGV Law School. It’s called the Data Governance School LATAM, which is itself the sort of academic spin-off of a conference. We host CPDP LATAM. You might know the European one. There is also a Latin American one that we host in Rio every July. And so after these three days of intense discussions on data governance and AI, actually at the end of April, we figured out that it was good to keep on maintaining this very good interaction we had, and even try to expand it to bring new voices. Because one of the main, let’s say, critiques that emerged is that frequently these discussions about data governance and AI have an over-representation of global north, if we can say so, ideas and solutions, and a severe under-representation of global south ideas and concerns, and even solutions sometimes. So the idea was precisely to start to discuss how to solve this. And as many of us have a research background or are interested in doing research, we decided to draft this book that we managed to organize and print in record time. But I also have to disclaim that this is a preliminary version.
So if you want actually to give us feedback on how to improve it, or in case anyone is interested in proposing some additional very relevant perspective we might have missed. For instance, we know that the only region that is still a little bit poor in the book is Africa. The others are very well covered. And we are going to actually create the form. If you type in your browser bits.ly slash DAIG, like Data and AI Governance, DAIG23 in capital letters, you will arrive directly on the form where you can also download this book for free. If you are allergic to Google Forms, which is something that may absolutely happen, you can even use another mini URL, bits.ly slash DAIG2023, where there is the direct downloading option from the IGF website without having to fill in any form or comments. But if you want to provide us comments, actually we are here to hear them. The book deals with three main issues, AI sovereignty, AI transparency, and AI accountability. I’m not going to delve into the transparency and accountability part, because we have a very large set of very good speakers that will explore the various details of these topics from very different perspectives. I’m just going to say two words on the first topic, AI sovereignty, which is actually an application, an implementation of what I have been working on with some colleagues from another project, the Cyber BRICS project, with regard to digital sovereignty over the past years. And the fundamental teachings of the past years have been of two types. First, there are a lot of different perspectives on digital sovereignty. A lot of people see this as authoritarian control or protectionism. But also, there are a lot of other perspectives, including those based on self-determination and the fact that both states or local communities or individuals have the right to understand how technology works, develop it, and regulate it. And there is nothing authoritarian in all this.
And actually, it’s a right of all peoples in the world, according to Article 1 of not only the Charter of the United Nations, as we are in the United Nations context, but also the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social, and Cultural Rights. So it’s a fundamental right of everyone here to be the master of your own destiny, if you want, in terms both of social rights, governance, but also technology. And so the fundamental reflection of the first part of this book is about this. How do you achieve this? And in the chapter I’ve authored, I identify what I call the key AI sovereignty enablers, eight key elements that form a stack, an AI sovereignty stack. They go from data. So you have to understand how data are produced, harvested, how to regulate them. So data, you have algorithms, you have compute, you have connectivity, you have cybersecurity, you have electrical power. Because something that many people don’t understand is that if you don’t have power, you cannot have AI at all. You have to have capacity building, which is sort of transversal. And last, but of course not least, you have to have an AI governance framework based on risks, which are the main thing that we are actually trying to regulate. But I think that if we only regulate AI through risks, we only look at the tree and we miss the forest. Because there are a lot of other elements that interact and they are interrelated. So that is, in a nutshell, the first chapter. I was very honored to have Melody and her co-author, Sizwe Snail, who was one of the former directors of the South African regulator, draft a reply on this framework with regard to South Africa. There is another one with regard to India. And then there are a lot of other very interesting issues analyzed by our distinguished speakers of today. So without losing any more time, I would like to pass the floor to the first speakers.
In this first slot of speakers, we have some more general perspectives, and then we delve into the generative AI part. And then we zoom out again into other transparency and accountability, more general issues. So I would like to pass the floor to Armando. I’m not going to list all the speakers now. I will present them each one by one because there are a lot. So first we have Armando Manzueta, who is Director of Digital Transformation at the Ministry of Economy of the Dominican Republic. Please Armando, the floor is yours.

Armando José Manzueta-Peña:
Well, thank you, Luca, for the presentation. I’m more than thrilled to be present here and to share with you some important insights regarding AI and how governments, for example, are trying to use AI, specifically Gen AI, to modernize their infrastructure and provision of public services. Well, how to begin with this? Well, few technologies have taken the world by storm the way AI has over the past few years. That’s something that’s a reality. Not even the blockchain revolution has had as much impact on the world as AI has. And its many use cases have become a topic of public discussion, not just for the technical community or the so-called tech bros; let’s say all people have been discussing how to implement AI one way or another. And generative AI in particular has a tremendous potential to transform society as we know it for good, give our economies a much needed productivity boost and generate public and private value, potentially in the trillions of US dollars in the coming years. Well, the value of AI is not limited to advances in industry and retail alone. When implemented in a responsible way, where the technology is fully governed, data privacy is protected, and decision-making is transparent and explainable, AI has the power to usher in a new era of public services. Such services can empower citizens and help restore trust in public entities by improving workforce efficiency and reducing operational costs in the public sector. On the back end, AI has the potential to supercharge digital modernization by, for example, automating the migration of legacy software to more flexible cloud-based applications or accelerating mainframe application modernization, which is one of the main issues most governments have. Despite the many potential advantages, many governments are still grappling with how to implement AI, and gen AI in particular. In many cases, public institutions around the globe face a choice.
They can either embrace AI and its advantages, tapping into the technology’s potential to help improve the lives of the citizens they serve, or they can stay on the sidelines and risk missing out on AI’s abilities to help agencies more effectively meet their objectives. Government institutions that develop solutions leveraging AI and automation early gain concrete insights into the technology’s public sector benefits, whether modernizing the tax collection system to avoid fraud and predict trends, or using automation to greatly improve the efficiency of the food supply and production chain, or to better detect diseases before they occur and prevent major outbreaks, such as the pandemic that we had before. Other successful AI deployments reach citizens directly, including virtual assistants and chatbots to provide information to citizens across many government websites, apps, and messaging tools. Getting there, however, requires a whole-of-government approach focused on three main areas. The first one is workforce transformation, or digital labor. At all levels of government, from national entities to local governments themselves, public employees must be ready for this new AI era. While that can mean hiring new talent like data scientists and developers, it should also mean providing existing workers with the training they need to manage AI-related projects. The goal is to free up time for public employees to engage in high-value meetings, creative thinking, and meaningful work. The second major focus must be citizen engagement. For AI to truly benefit society, the public sector must put people front and center when creating new services and modernizing the existing ones. There is potential for a variety of uses in the future, whether it’s providing information in real time, personalizing services based on particular needs of the population, or hastening processes that have a reputation for being slow.
For example, has anyone here ever had to fill out paperwork, or had to suffer through impossible lines or queues just to receive documentation, or had to repeat the same process at several institutions just to receive the same service that they need? And the thing is, most governments, for example, don’t have interoperability or any sort of services to exchange information freely. And with AI and other related infrastructures, that’s something that we could be solving very quickly. The third one is government platform modernization. Governments are regularly held back from true transformation by legacy or ancient systems that are tightly coupled with workload rules that require substantial effort and cost to modernize. For example, public sector agencies can make better use of data by migrating certain technology systems to the cloud and infusing them with AI. Also, AI-powered tools hold the potential to help with pattern detection in large stores of data and to write applications. This way, instead of seeking hard-to-find skills, government institutions or agencies can reduce their skill gap and tap into the evolving talent. Last but not least, no discussion of responsible AI in the public sector is complete without emphasizing the importance of the ethical use of the technology throughout the lifecycle: design, development, use, and maintenance, something which most governments have promoted for years, to put it simply. Along with many organizations that belong in the health care industry or the financial sector, for example, government and public sectors must strive to be seen as the most trusted institutions because they hold most of the citizens’ data, one way or another. So if the citizens don’t trust the government, how can they even trust all the institutions that exist in the same nation?
That means that humans should be able to continue to be at the heart of the services delivered by government while monitoring for responsible deployment by relying on these five core aspects for trustworthy AI: explainability, fairness, transparency, robustness, and last but not least, privacy. When we talk about explainability, it means that an AI system must be able to provide a human-interpretable explanation for predictions and insights to the public in a way that does not hide behind technical jargon. In government, there are many trends and many conversations regarding algorithmic transparency because that is the major aim, to reveal what’s in the black box so that everyone can see how an AI system works and how it was built, so we understand how it provides its insights, how it is deployed, and how it functions. The second one is fairness: an AI system’s ability to treat individuals or groups equitably depending on the context in which the AI system is used, countering biases and addressing discrimination related to protected characteristics such as gender, race, age, and other status. Transparency is an AI system’s ability to include and share information on how it was designed and developed and what data from which sources have fed the system, which is something I previously mentioned with explainability, to which it is closely related. Robustness means an AI system must be able to effectively handle exceptional conditions, such as abnormalities in input, to guarantee consistent outputs.
And last, privacy is basically the ability to prioritize and safeguard consumers’ privacy and data rights and to address existing regulations in data collection, storage, access, and disclosure, which is why it’s important that, besides implementing AI, we also should be consistently improving and modernizing the frameworks that encompass everything related to data protection. Because if we don’t have those rules in place, there is the possibility that many people, not just in the private sector but also in government, use the data that is stored in government databases to do harm, to use it as a political weapon, and many other things. So it’s important that we have strong data protection rules in place so the data isn’t used against the same citizens that the government is there to protect and to serve. Just to conclude, if AI is implemented in a way. I’m going to ask you to conclude quickly because we have a lot. OK, just a quick conclusion. If we implement AI, including all the traits mentioned above, it can help both governments and citizens alike in new ways. We can generate public value, but in a way that allows all the citizens to benefit from it and to build a future that we all want to live in. Thank you.

Moderator – Luca Belli:
Thank you very much, Armando. And thank you very much for giving us these initial inputs on the ideal that governments should strive for when they have to automate their systems and implement AI. And now I would like to give the floor to Gbenga, who might have a more critical and less ideal perspective. And it’s very good to have both these perspectives to try to synthesize our own opinion. Please, Gbenga.

Gbenga Sesan:
It’s like you framed my conversation already. I’m glad we’re having a lot of conversations around AI. This is my second panel on AI today. Thankfully, this is more focused on generative AI and data protection. But I think one of the advantages of having such conversations over and over is that you get to tease out all the points and ask the questions. And what I want to do, so that you don’t have to say I should conclude, is to speak very quickly to three things. One is in terms of policy. The other is in terms of people. And if I have more time of my six minutes, I’ll conclude on practice. And by policy, I mean that we already, in many cases, have data protection regimes in many countries. There are countries that still don’t have data protection regulation. Of course, this presents an opportunity for them to have this conversation within the context of massive data collection and processing for AI. But for those who have, it means that this is also a chance to have a review. And I say this as an African who is excited that now, finally, the Malabo Convention has been ratified by enough countries, so it’s in force. But I am also concerned that it happened so late that the text of the Malabo Convention is, to say the least, outdated. And of course, there have been calls for reviews. There are countries that are literally just ignoring the fact that they have more recent policies on the subject. So I think in terms of policy, we need to have a conversation about how to make sure that existing data protection policies are useful as we have this conversation about massive data collection and processing. People are putting in their data, and it’s being processed. And that takes me to my second point of people. I work in civil society, and that means that much of my work is centered on people. And it means that when we have all these conversations over the last year, I mean, November 30, oh, actually, it’s just a month away.
So November 30 is the birthday of ChatGPT, as everyone knows. So it’s been one year. And there’s been a lot that’s happened since then. But at the center of all this is people, the data owners themselves. I’ll give a very simple example. When ChatGPT came, a lot of people were just typing and typing, because don’t forget, many times the reason why people engage with either social media or new platforms or new technology, which is the way we do, is that for many people, it’s literally magic. You know, you put in where you’re going, and then the map tells you how to get there. And it tells you there’s going to be traffic. And it’s almost like magic. But the problem is that many times, people don’t understand that when they put in their data, that’s the input that is being processed. The output is what you get. But the input is also important. So I think in terms of people, we need to have a conversation around demystifying AI, which is one of the reasons I’m glad we’re having all these conversations over the last two or three days, for people to understand, you know, when I put in data, I’m training the system. When I ask questions, the response I’m getting is based on what input has already been given. Of course, that goes to the need, and we talked about that a bit earlier today, for diversity in modeling AI. This is not about tokenism. This is real diversity. Otherwise, we’re going to build systems that don’t understand context and are going to cause more problems than they solve. And finally, it’s on practice. And I think this is where the data protection commissions come in. Hopefully, data protection commissions that are independent already understand the need to have conversations with various stakeholders. And the practice is, what happens if something goes wrong when I’m using, you know, any platform or system that is powered by artificial intelligence? You know, someone shared an article with me a few days ago.
It was supposed to be an article about myself, but I read the article and I was confused. Because at the beginning, it was accurate, and then it gave me a master’s degree that I don’t have from a school I haven’t attended. And then it said I was on the UN High-Level Panel on Digital Cooperation, which is very close. Because, you know, I’m on the IGF Leadership Panel, but not the one on digital cooperation. And this is quite tricky. And this, by the way, is one area of criticism from me to say that, what happens when I use this and something goes wrong? Who do I talk to? And I think this is one place where institutions that already answer questions on data protection can come in. So I’ll close it here and say that it’s really important that we center this on people. But apart from saying that, there’s a need to review policy when necessary. People are the center of this. And when it comes to practice, what do I do when something goes wrong? Who do I talk to? We need to demystify this black box. Fantastic, Gbenga.

Moderator – Luca Belli:
I really like this trilogy of policy, people, and practice. Actually, while you were speaking, I was thinking that, in the best case scenario in most countries, we have some sort of policy, but the people part is almost non-existent. Even in the countries that have had data protection for 50 years, like in Europe, most people would not be aware of their rights, let alone in the developing world. And the practice part is still something pretty much non-existent everywhere. All right, on this initial energy and optimism, let’s get to the third speaker of this first round, Melody Musoni. Please, Melody, the floor is yours.

Melody Musoni:
Good afternoon, everyone. Thank you, Luca. I’m happy that you are bringing up these issues around data protection and how laws can help with regulating AI. And I’ve been following a couple of discussions around AI policy and regulation. And I keep on wondering, what exactly do we want to regulate here? Because when we look at law, it is quite vast. There are different areas of law. Are we looking at it from a liability perspective, delictual liability, criminal liability? Are we looking at intellectual property issues, data protection? There is a myriad of issues that, when we have these discussions around AI policy and regulation, we need to keep at the back of our minds: what exactly do we want to regulate? Are we regulating the industries, are we regulating the types of partnerships that we may end up having, or is it just going to be specifically data protection? And I’m sure some of our speakers will speak on the limitations that we have with data protection laws. And coming to my section of the chapter we wrote on South Africa, what we did was look at the KASE framework, the key AI sovereignty enablers that Luca spoke about earlier, looking at how these enablers can actually apply within the South African framework, and hopefully that can also be replicated across other African countries. And I’m just going to touch on four important key findings from the research that we have already conducted for South Africa. And the message that we are getting throughout is that there is the need for AI made in Africa to solve African problems. So when you go through some of the policy frameworks at the African Union level, for example, the digital transformation strategy and the data policy framework, that is the message we keep getting across: that there is that urgency for Africa to start looking into AI and innovation to actually develop African solutions or homegrown solutions to deal with African problems. And then the second key point I want to emphasize in looking at South Africa is the issue of computational capacity and data centers and building the data and cloud market in Africa.
So you understand, of course, that AI development would depend more on the availability of computing infrastructures to host, to process, and to use data. And with South Africa, what we have noticed is that there are efforts to actually improve its computational capacity. There have been discussions about having as many data centers within the country as possible. The private sector, the likes of Microsoft and Amazon, have actually been working closely together with government to make sure that there are data centers on the continent, in South Africa. So the vision for the country is not just to have data centers in South Africa to cater for businesses and government in South Africa, but also to attract other African countries to actually host their data within South Africa. And there was a draft policy that was published sometime in 2020 called the National Data and Cloud Policy, and that policy seemed to actually point towards a direction where South Africa wants to make sure that locally owned entities are active in the data market, promoting local processing and local storage of certain types of data. And as you can imagine, like with data localization, it’s something that is not so popular. So there has been pushback from different stakeholders. And now, as I understand, there has been an update to that draft policy; the updated version is yet to be finalized and released. But what we anticipate is that we want to see this revised data and cloud policy focus more on better regulation of foreign-owned infrastructure instead of indigenizing all existing infrastructures, while also promoting public-private partnerships.
And the third point I also want to speak on, which also supports this notion of AI sovereignty for Africa, and for South Africa in particular, is the commitment towards AI skills development. Again, what we are getting from going through the fragmented policies is that South Africa is hoping to build its own pool of AI experts to research and develop AI-driven solutions to address some of the problems that it has. There are different programs, starting from basic primary education level all the way through to university level, which are focusing on STEM subjects as well as AI-related subjects. Of course, the question would be, when are these initiatives actually going to be implemented? Most of them are still strategies and plans that are yet to be implemented, so it’s still a long process. And the last point I want to make is the need to have an AI strategy. The country doesn’t have a clear AI strategy or an AI policy, but I would like to think that it’s important for countries to first prioritize, like Kibenga said, data protection issues before rushing to have an AI strategy or an AI policy or law in place. So starting from the low-hanging fruits: we have data protection laws; are they adequate to address some of the data processing activities? Do we have cybersecurity and cybercrime laws? To what extent do they cover issues like deepfakes, if someone is going to commit a crime using AI technologies? To what extent are the existing legal frameworks that we have adequate? Are these legal frameworks addressing some of these issues? And of course, just to finalize, there are challenges that the country and other African countries are facing, and are likely to face, in the development of AI systems and even with data processing. The issue of power outages and unreliable power supply in South Africa is now a very big problem.
Almost every day there are electrical outages and load shedding, and it’s been said that it’s going to run for a period of two years. So imagine you rely on electricity, and already the amount of time you spend online is going to be cut short because there is no electricity. That’s also a challenge that the country is facing. The second challenge, and I think it applies to all other digital projects, is the issue of meaningful connectivity. Yes, there has been massive deployment of different digital infrastructures, and now we are moving to 4G and 5G, but still about 16 million people are unconnected in the country. Then there is also the need, again, for stronger cybersecurity. There are laws on cybercrime, there are laws on protection of critical infrastructure, but there is still no strategy specifically to deal with cybersecurity. And coming to the last point, on implementation of the laws that we have, especially data protection laws, there’s always going to be the challenge that our data regulators will not have the capacity, or even the expertise, to understand some of the AI tools that are in place, to be in a position to actually assist with implementation and enforcement of the laws. So those are my thoughts.

Moderator – Luca Belli:
Thank you very much, Melody, and also for stressing how these issues are interconnected. Many of the most relevant ones are infrastructural issues. In particular, I would like to stress something that you mentioned about compute and cloud computing: there are actually three main corporations that have almost 70% of the cloud computing market, Google Cloud, AWS and Microsoft Azure, then a little bit of Chinese corporations, a little bit of Huawei and a little bit of Alibaba. But then basically the entire world relies on five corporations to do AI and generative AI. That is a huge challenge, because even if you want to find an alternative, it’s an investment that takes ages. It’s a ten-year investment, in the best-case scenario, to have something minimally reliable, and no government is in charge for ten years or has the vision to do something over ten years. So it’s really something that is worth thinking about. All right, this is now the moment for the first break for questions, so we can take two questions and then we will get into the second segment of the session. If you have questions, you can raise your hand; there is a mic there for questions. We can take two, have a quick round of replies, and then get into the second segment; then we will take more questions at the end. All right, so we have one there, and I see two hands there. If you could use that mic, introduce yourself and mention who you are.

Audience:
Thank you very much. Hi, my name is Shuchi. I work for Nationality For All, which is basically an organization that deals with nationality rights. My background is not really in AI, which is why I was so interested in this conversation, because I really wanted to understand the question that this panel proposes: whether generative AI can be compatible with data protection. I understand the challenges that we’ve all been speaking about, and those have been deeply insightful, but for the second phase of this panel I would be super interested to know if there are frameworks, or any ways in which this has actually worked in some regions. Because again, my background isn’t in AI, so I was really curious to know, because it’s very much in line with statelessness and nationality.

Moderator – Luca Belli:
Yes, rest assured that in the second segment we will speak about this. So that was the quick reply to your question. Maybe we can have another one, an extra one, if there is. This was a very fast reply. So, another one, yes.

Audience:
Hi, my name is Pranav. I’m a technology law and policy professional, and I also had the opportunity of contributing to this report with a paper on generative AI, thinking about privacy principles. The previous speaker also mentioned why there is a need to ensure data protection within Gen-AI platforms. My question, for everyone on the panel and in the room, is: what are some of the key privacy principles, at a normative level, that should be ensured so that these Gen-AI platforms can comply? I have teased out this question by identifying 17 of them in my paper, and this is just the first step, to seek inputs at this global forum. Then I would like to test those principles by deploying them on around 50 use cases and making them better. So if, at a normative level, you have any ideas that these are some of the key principles that should definitely be there, that level of consensus building would be really helpful. Thank you.

Moderator – Luca Belli:
Fantastic. And yes, let me also mention that we have 24 chapters here, with almost 30 authors. Given the time constraints, and also space constraints, we were not able to have everyone; we plan to have webinars where everyone can present and have feedback. Anyone else who wants to comment, or even has an answer, actually, is very welcome: we want to have a conversation here in this segment. So if anyone from the audience wants to give a reply, you are very welcome to do so, and then we will have feedback from this panel.

Audience:
Thanks a lot, Luca, for giving me the floor, and thanks to the previous speakers. First, I would like to thank you; I am very glad to hear the voice of the southern countries, and that’s very important. As regards the problem of AI and data protection, that’s a very big question, and I have worked hard on that problem. It is quite clear that AI puts into question a certain number of data protection principles, and I would like to have your feeling about that. First, the question of finality, the question of purpose. Normally, you must have a determined purpose, and with generative AI systems you no longer have the possibility to have a specified purpose. The second problem is the question of minimization. It is quite clear that it is totally contrary to AI functioning: AI functioning works on big data, and you do not know, a priori, which kind of data will be interesting and pertinent for achieving your purpose. Another problem, and you have mentioned that, is the problem of explainability. It is very difficult to make AI systems explainable, because it is quite clear that there is no logic. As Vint Cerf said, you are working on correlation and not on a certain logic, and so you have no logic at all. I have other problems, but we might come back to this issue. As regards the problem of personal data, it is quite clear that AI systems are working more and more on non-personal data, and they are using that for profiling people. So it would be absolutely necessary for data protection legislation to enlarge its scope.

Moderator – Luca Belli:
All right, these are very good questions. Do we have initial replies from the panel? Melody, yes, you can go first.

Melody Musoni:
I agree with you, we have more questions than we have answers. Looking at the Protection of Personal Information Act of South Africa, we have a framework that says, when it comes to automated decision-making processes and profiling, these are the conditions that have to be met. And then looking again at the basic data protection principles on transparency, data minimization, data subject rights, purpose limitation: the principles are there, but I think application is where the problem is. Because it’s much easier to say, okay, in this context, this is the principle on processing of personal data, you need to know the purpose, you need to be very transparent. Especially with facial recognition technologies, we need transparency: can data subjects exercise their rights? So in principle, in theory, the principles apply, but when it comes to practice, and especially with generative AI, I think we have more questions. And that’s why I was saying that even with our data regulators, there is that need for expertise, for someone with more technical knowledge on the technical side of AI, so that it can be translated into the legal side. In my opinion, there are more questions than answers.

Moderator – Luca Belli:
Armando, do you have an answer?

Armando José Manzueta-Peña:
Actually, like you said, there are many questions that are still to be answered regarding AI uses, but I think it applies to most systems and the use of data. For any system, on any platform or any technology, its quality will depend on the quality of the data itself that the system has been fed. And if we don’t have, as you said, the proper protections in place, and we don’t have data that is properly collected and properly minimized, then the system will, of course, do a profiling of the person, the company, or the subject itself in a way that doesn’t necessarily translate into reality or provide a solution to a certain problem. So in that case, besides having strong data protection rules, there should also be strong data collection and data validation regarding the quality of the data itself, in order for AI or any system to provide a proper solution or actually be of any help at all. And that’s the main challenge that we as governments have, especially in the developing nations, because having data of good quality, good administrative registers, is the main issue that we’re facing right now, just to give the data any use.

Moderator – Luca Belli:
Okay. This provides us a very good segue to the second segment of the session. So let me give the words to the regulator. We have Jonathan Mendoza, who is Secretary for Data Protection at the National Institute for Transparency, Access to Information, and Protection of Personal Data of Mexico. Please, Jonathan, the floor is yours.

Jonathan Mendoza Iserte:
Thank you, Luca. Good afternoon. How are you? I want to thank the organizers for bringing this topic to the table, especially Luca Belli, a leader in the Latin American region. Data governance and trust have become a crucial topic, and we find ourselves at a critical juncture in the history of technological advancement. Artificial intelligence is rapidly evolving, offering boundless potential for innovation, growth and improvement in our daily lives. But in the same way, we must also recognize the challenges it poses for its regulation and ethical use, and the importance of promoting AI transparency and accountability. In the Latin American region, steps have been taken toward regulating artificial intelligence. However, we must remember that the region is very diverse and has technological deficiencies that only allow access to technology for some sectors and groups of the population; therefore, closing the digital divide is a primary task. Even though there are some exercises that are part of the efforts to regulate artificial intelligence, there needs to be a full instrument dedicated entirely to it. In 2019, the member authorities of the Ibero-American Data Protection Network issued general recommendations for processing personal data in artificial intelligence. Also in the region, there seems to be a trend closer to the ethical use of technology, but how could we ensure that algorithms are fair if they are not accessible to public scrutiny? How can we balance the ethical design and implementation of AI? Artificial intelligence can contribute enormously to the transformation of development models in Latin America and the Caribbean, to make them more productive, inclusive, and sustainable. But to take advantage of its opportunities and minimize its potential threats, reflection, strategic vision, and regional and multilateral regulation and coordination are required.
According to the first Latin American Artificial Intelligence Index, in 2023 Argentina, Brazil and Mexico were the regional leaders in participation in international spaces to influence the global discussion on AI. In the global context, according to the McKinsey Global Institute, the use and development of AI in multiple industries will bring mixed economic and labor results. The estimations for 2030 are that $13 trillion will be the impact of AI on the global economy, 1.2% will be its contribution to the annual gross domestic product globally, $15.7 trillion will be the additional income to the global GDP, and 45% of the benefits of AI will go to finance, healthcare and the automotive sector. As Chris Newman, Oracle’s principal engineer, said, as it becomes more difficult for humans to understand how AI tech works, it will become harder to resolve inevitable problems. In our interconnected world, multilateralism plays a key role, because AI knows no borders and international cooperation is not just beneficial but imperative. We must ensure that AI respects fundamental rights with a human-centric approach, avoiding biases. The paper I co-authored with my colleagues Nadia Garbacio and Jesús Sánchez is a proposal to start a debate on AI in the Latin American region. We propose the creation of a dedicated mechanism that contributes to AI-related matters. Cooperation and strategic alliances with the Organization of American States will help us achieve this goal. To facilitate the implementation of this proposal, it is suitable to create a committee of experts that analyzes and agrees on the importance and urgent need to contribute, through non-binding mechanisms, to the situation regarding the use and implementation of existing and yet-to-be-developed disruptive technologies, given the risk they could imply for the private life of users.
The objective of this committee of experts must be built on goodwill and on the exchange of knowledge and good practices that promote international cooperation based on multilateralism and the opportunities that it offers us to strengthen the protection of human rights, joining efforts with other international organizations that have also spoken out on the matter, as well as with groups of economic powers that have shown their concern about this panorama of the new digital age. The work of this committee will be based on a mechanism that will seek to analyze the specific cases, issue recommendations, provide follow-up, and develop cooperation tools. Let’s be part of the conversation to maximize the benefits of AI for our societies while minimizing its potential risks. We must remain committed to fostering international cooperation as well as strengthening these efforts to ensure that AI serves humanity’s best interests.

Moderator – Luca Belli:
Thank you very much, Jonathan, and let me also stress that INAI has been doing a lot of excellent work, both in terms of policy experimentation and international cooperation, in trying to put forward some recommendations on how to work with and regulate generative AI. Staying in the Latin American region, I would like to ask Camila, who has also been one of the minds behind the construction of this group since April, to provide us a quick overview of what’s happening in Brazil.

Camila Leite Contri:
Perfect. Thank you so much, Luca, for the invitation, for the creation of the group, and for all the amazing work that you do at FGV; it is also a pleasure to be here with you. Considering that I’m from Brazil, and also from a consumer organization in Brazil, I would like to focus on that. We are talking about data privacy, but as Melody mentioned, we are not only talking about data privacy; we have several other rights that we have to consider. So I’m going to talk first about the general risks we face with the challenges of generative AI; second, about the laws that might interconnect on that, focusing on data protection but also on consumer protection; and also talk a little about the Brazilian context in terms of legislation and the ways ahead. AI has lots of possibilities. For example, IDEC works on financial services, on mobility services, on health, and all these areas can benefit from AI and generative AI. But as we can see, it has two sides: we have both an opportunity and a challenge in dealing with that, especially because innovation moves at a speed that regulation does not follow. That is why it’s important also to think about current legislation that has to be applied when we are facing that. There are some general risks that you are tired of hearing about: we have issues related to power, issues related to wrong outputs, the use of this technology to manipulate people, bias, discrimination, privacy vulnerabilities. And we also have a challenge here, coming from a global south country, and it’s a table of the global south here, which is dependence. We are talking about how to protect people, and we rely on other countries, on other technologies. How can we build sufficient power on that? It’s a great challenge that, obviously, I don’t have an answer to, but I hope that we can build on that. Also, one important thing is the techno-solutionism that this kind of technology brings.
Because when we do that, we disregard the context, and that is the reason I want to talk more about Brazil. But before talking about Brazil and the different laws, I would also like to bring up the issue of concentration of power. Once we are talking about generative AI, of course we think about ChatGPT. But we are not only talking about ChatGPT. When we are talking about the global south, we depend not only on foreign companies, but rely on big techs. And we know that these big techs can bring lots of solutions, but also lots of abuse, considering that they dominate the market. That is why it is important to consider not only data protection law, which of course is extremely necessary, but also consumer law, to protect people in the end. We are putting people in the center, and these people are also consumers; we are all consumers. And also competition law, to face that. So the first law that it is important to comply with, and that we have to enforce, is competition law. The second one is data protection, as we have been mentioning, and to develop on that I will talk about a case in Brazil that was brought by a really well-known person in Brazil, which is Luca Belli. And third, consumer rights also have to be respected: we are talking about transparency, we are talking about access to information, which are basically traditional consumer rights. Beyond that, we also have IP law, of course, copyright, but I’m not going to focus on that. Okay, talking about Brazil.
Brazil is a huge market, not only in terms of the market in general, but also on AI: Brazil is the fourth country in the use of ChatGPT, so it’s a concern that we have to consider. And since it is a concern, I’m going to spend a little more time talking about the petition that was presented to the data protection authority in Brazil by Luca, about ChatGPT not complying with the data protection law in Brazil. I’m going to focus on the rights that were requested in this petition. The first is to know the identity of the controller of the data; this is a minimum thing to know. The second one is to access all the personal data relating to the person that is affected; this is about self-determination, and as Luca mentioned, this is not only a data protection right, but a human right in the end. The third one is the right to have access to clear and adequate information on the criteria and the procedures used in the formulation of the automated responses. These three topics, Luca brought them, but everyone is affected by that, not only in Brazil but also in other countries. Also, this kind of complaint could have been brought by the consumer authority as well, because we are talking about access to information in the end. So this is a provocation also for you: we have to think about how we can advance on that, not only in Brazil but in other countries. Unfortunately, I have some bad news: the data protection authority didn’t go forward with this process, which I think is not only sad but absurd, and I hope that the authority can advance on that, because it’s an important issue. Nowadays the data protection authority is bringing a consultation on a sandbox for AI, but when we bring cases like this, when Luca brings a case like this, they don’t advance on it. I don’t know why. The second context, which Jonathan also brought up. (Moderator: Let me just ask you to wrap up in one minute.)
Okay, just one minute. The network of authorities in the Ibero-American region is also focusing on ChatGPT, on issues of legal hypotheses, exercise of rights and transfers of data, which is interesting because the data protection authority of Brazil is also present there. We have to comply with existing laws, but we can also advance on future frameworks, as you were mentioning. In Brazil we also have a bill on that, and we hope to advance on it; but meanwhile, we have to comply with existing laws. Thank you, and sorry for extending. Thank you very much.

Moderator – Luca Belli:
Just a very brief comment, because that case that she was mentioning concerns me personally. It’s very frustrating to see that, even when there are laws and rights in place, every law needs to have elements of flexibility, so as not to regulate technology in a way that is too strict, and to allow the advancement of technology. But when there are clauses of flexibility, like what is adequate information about how your data are processed, or what is adequate information about the criteria according to which your data are utilized to train models, that is the moment when the regulator has to enter into the game. Because ‘adequate’ is the favorite word of lawyers, together with ‘reasonable’, because you can charge hefty prices and fees to your clients to debate what is adequate or reasonable. But the role of the regulator is precisely to say to enterprises, to people, what is adequate and what is reasonable. And it’s a little bit frustrating when the regulator doesn’t do it, and when some very curious practices of data scraping by some corporations are maybe considered adequate or reasonable, because those are very hard to believe and to think of as reasonable and adequate practices. Anyway, not to get into very personal matters. I would like to ask if our online panelists are online. Can you hear us? I would like to ask if Wei Wang is connected. Wei, can you hear us? Sure. OK, so actually we have an example where generative AI has actually already been regulated: China has just issued some specific measures on it. So it’s quite interesting to understand the situation in China with regard to regulation of generative AI and data protection. Please, Wei, the floor is yours.

Wei Wang:
Thank you so much, Luca, as always, and thank you for having me today, at least virtually. It’s very good to meet quite a few new and old friends, at least virtually. As per the content of our report, I’m supposed to share some Asian perspectives on regulating artificial intelligence in the first place. Since I came back from Latin America to Asia, I have attended quite a few events, both online and in person, and I happen to find that quite a few Asian jurisdictions are cautious in regulating AI. They prefer to let ethical frameworks go first rather than having hard law come first, and they also prefer minor steps, or what we call precise regulation. For example, in Singapore the governance model prefers a light-touch and voluntary regulatory approach for AI; basically, the aim is to use AI as a tool for economic growth and improving quality of life. But they also acknowledge that Singapore might have to adapt to existing global frameworks instead of creating new regulations in isolation. So this is sort of a globalist perception. I distinguish those Asian jurisdictions from others like the EU, Brazil, the UK and the United States. As all of us know, the EU and Brazil are adopting comprehensive acts or bills, the UK model is based on a pro-innovation idea, so far at least, while the United States seems to stick to the liberal market idea. Still, by contrast, China has a sector-specific approach instead, for instance in the areas of recommendation algorithms, deep synthesis technology and generative AI, as Luca has mentioned. So, as some from the FPF, I mean the Future of Privacy Forum, have argued, data protection authorities are becoming sort of default regulators for AI in this time gap. In the case of China, there is the PIPL, its Personal Information Protection Law, as well: Articles 24, 27 and 55 are clearly relevant to regulating automated decision-making and facial recognition.
And then the newly established Interim Measures on generative AI basically highlight the importance of ensuring that the use of data and the underlying models come from legitimate sources, in compliance with the relevant laws and regulations as regards IP and data protection. But things, it seems, are becoming more interesting, as quite a few jurisdictions are considering a big change in this sort of regulatory model, for example in both the United States and China. As you may already be aware, the recently proposed bipartisan framework for a US AI act advocates for regulation focused on legal accountability and consumer protection, proposing a licensing regime administered by an autonomous oversight entity. Similarly, in China, a research group of the Chinese Academy of Social Sciences, of which I’m currently an invited member as well, drafted a model AI law proposing a negative-list-based and risk-based approach to governing AI. There are some similarities with the US act, but there are also some nuances. Generally, the model law introduces the principle of accountability, cataloguing the entities along the value chain and assigning duties or responsibilities in terms of retention, disclosure, and assistance with data disclosure or data sharing, with an institutional intent of fostering a transparent system. That being said, some of the jurisdictional perspectives are reaching a consensus as regards AI governance, but this also requires more continued comparative studies, for example about the different models and approaches. Those new developments basically highlight the responses of jurisdictions to address the challenges of AI, with a focus on accountability principles, tailored obligations and proactive technology design. Techno-solutionism was mentioned earlier, but it’s still essential to seek something implementable, or operationalizable. I mentioned in our chapter that the reason lies in requiring a long-standing balance between adaptability and regulatory predictability, to ensure effective end-to-end governance within the dynamic AI landscape. We will definitely keep coming across the question of regulation versus innovation, and I think our DC is a perfect place to address this, I believe. So in this regard, I look very much forward to continuing the collaboration within and beyond the group in the near future. Okay, I think that’s all from me today. Thank you for having me here virtually. I will hand it back to you, Luca.

Moderator – Luca Belli:
Thank you very much, Wei. And now this is a good segue to the last speaker of this segment, Smriti Parsheera from India. Smriti, can you hear us? Smriti, are you connected? Yes, I can hear you. Yes, so Smriti is going to broaden our perspective a little bit with some concrete cases from India, and then we can expand on this in the last segment. Please, Smriti, the floor is yours.

Smriti Parsheera:
Thanks so much, Luca, and hello to everyone in the room and online. So as Luca mentioned, I’m going to be a little broader than the suggested topic, which is more specific to generative AI. My intervention in this book talks about the question of transparency and the interpretation of what transparency should really mean in the AI context. This is a term which is well regarded now, well accepted in most AI strategies. India also has an AI strategy, and it talks about the principle of transparency, among others. It’s also a principle that’s reflected in different ways in data protection law. India very recently adopted its data protection law, and the philosophy of transparency does come about when you think about processes like notice and consent, access to information, correction, and redress facilities. So all of this does speak in some way to transparency, and very often in the AI context transparency is connected with explainability and accountability. What I do in this intervention is say that when we think about transparency in the AI context, the tools, and even the discussions, are very much about the technical side of transparency: algorithmic transparency, transparency of the model itself. But the paper argues that we really need to step back and take a broader lens, because we know that there are a number of actors who are typically involved in any AI implementation, and therefore transparency, like every other principle you see in AI principles, should permeate through the entire life cycle of the project. In this paper I specifically identify three layers, mostly in the context of large-scale public-facing applications, and I take the case study of one such application in India: facial recognition systems for entry into airports, which is something that is being seen across the world, and in many other countries you see similar systems.
And the argument of the paper is that there are at least three layers of transparency that you need to think about. The first is policy transparency: how did this project come about? Is there a law backing it? Who are the actors involved? Which government departments and ministries took this decision, and through what open and deliberative process? The second is technical transparency, the more well understood questions about the transparency of the model: what kind of data was used? Who designed the code? What does the code do? How well does it work, et cetera. And the third is operational and organizational transparency, which is really about which entity is finally giving effect to this. How does the system work on a day-to-day basis? What kinds of failures are you seeing? What accountability mechanisms exist for this entity, and who exactly is it answerable to? Is it answerable to the parliament, to the public? What are the mechanisms for transparency within this body? I then apply this in the paper. I'm not going to go into great detail on the findings due to paucity of time, but there are three broad observations that I made. One is that transparency in the policy sense cannot just be about imparting information to the public about the existence of such systems. It has to be a bit more deliberative: why are we bringing this in, should we bring this in in the first place, et cetera. The second point is that there is a culture of third parties working with the government, either as philanthropies, as think tanks, or as consultants. There is a need for transparency not just about who developed the code and whether the procurement process was transparent, but also about how these ideas came about; there is a need for transparency at a deeper level. And finally, tools of transparency.
Very often, if you have entities outside of the public sector, private sector or nonprofit bodies, running these systems, then will the tools of transparency, which take the form of right to information laws, for instance, in India, apply to these entities? And we see in this particular case study that the design does not enable the application of the transparency and public disclosure obligations which a public body would face in this particular structure. So I'll stop with that. And people in the room, I would love to hear your comments later if you have any. Thank you, Luca.

Moderator – Luca Belli:
Fantastic, Smriti. Now, we have to take a series of actions in the next five to ten minutes. We will have the possibility for participants to ask questions. At the same time, the speakers of the initial two rounds will move to the first row of chairs, and the speakers of the last round will move to this part of the table, because for organizational purposes, speakers have to be here. So if you have questions in the room, please, this is the moment to ask them using the mic there. We have questions from, oh, yes, sorry. Let me also thank Shilpa Singh, who is our remote moderator. You can take the mic and ask the question from the…

Audience:
participants. There's a question from Mr. Amir Mokaberi. He's from Iran, and his question is: could shaping a UN Convention on Artificial Intelligence help to manage its risks and misuse at the international level? Do geopolitical conflicts and strategic competition between AI powers allow this? What is the role of the IGF in this regard?

Moderator – Luca Belli:
That's a very, very open question. I don't know if the new set of panelists has any ideas on this. My personal take is that it will take a lot of time before having international agreement on any international regime on AI, and that is precisely the reason why many tech executives, or at least some of them, may be advocating for an international regime: they know very well it will take between seven and ten years to be developed and maybe start being slightly meaningful. I don't know if we have other opinions here on the panel on international organization. I think actually this is a very good connection with Michael Karanikola's paper because, quite coincidentally, he's the first speaker of this last slot and he has written an excellent chapter in this book about exactly this topic. So Michael, no one better than you can reply to this and start presenting your paper, please.

Michael:
Thank you, and I'll start by echoing the other panelists in thanking Luca. I'm amazed at how quickly this has come together, and with such a great group of authors. My paper focuses on emerging transnational frameworks for AI that are being developed under the auspices of a handful of powerful regulatory blocs, namely the US, the EU, and China, and examines the implications of this trend for the emerging AI governance landscape. I'm going to have to go through this very quickly, so I won't go too deeply into the paper. But just in response to the question about a potential UN framework: I discuss the OECD framework as well as these different structures, and I think there's a broader tension between the value, benefits, and efficiencies of harmonization, and the tendency of harmonized standards, whether it's at the UN level or the Brussels effect or the California effect, to trample over important local contexts, not only in terms of the needs of populations being impacted by AI, but also in terms of how, at a more basic level, harms are framed and the assumptions and prioritizations that are inherent in any legislative framework. I argue in my paper that there is a challenge in trying to develop a harmonized structure: it is going to fail to take into account diverse populations, particularly when the people that tend to have a seat at the table in the early development of these standards tend to be from wealthier parts of the world. So I explore that tension. I'll caution by saying that it can also be overly reductionist to view this dynamic purely in global north and global south terms; there are a lot of different dimensions to this.
But ultimately, I say that as frameworks begin to coalesce into transnational standards, it's important to query whether they actually represent the needs and concerns of those on the sharpest edge of technological disruption, and whether the development and harmonization of these standards has the potential to further entrench inequities on a global scale. So that's a two-minute version of my paper, and I'm happy to chat further if folks have questions.

Moderator – Luca Belli:
Fantastic, Michael. And thank you for having provided both a reply to the question and the presentation of your paper. I guess you also have a question. Yes, so I think we can do this: we can take this question and then go through the presentations, and it will be the first question to be answered at the end of the presentations. OK, yes, please go ahead.

Audience:
I just wanted to build on what Michael just said. My name is Michael Nelson; I work at the Carnegie Endowment for International Peace in Washington. One of my colleagues is Anu Bradford, who wrote the book The Brussels Effect and now has a new book on digital empires that covers some of the same territory. I look forward to spending more than two minutes with your ideas. Anu and I have a friendly debate about whether the Brussels effect sometimes becomes the Brussels defect. One part of it is what you just said: other countries are taking European language designed for a European legal system and putting it in a place where it doesn't really work. But a more important problem, particularly with the AI Act, is that they're writing law that is, I think, way too premature. They haven't even really got a definition of what AI is. I'm a physicist, not a lawyer. But when I was working on Capitol Hill, the first thing we did was get the definitions right: not just defining what you're regulating so you have a box, but defining what you're not going to regulate. So my question, for anybody who wants to take it, is: how do we avoid this problem of imposing these aspirational goals on a vague field of technology that will be totally different in 18 months?

Moderator – Luca Belli:
Thank you very much, Michael, for this excellent comment. As we started with ten minutes of delay, we might have a margin of ten minutes at the end. So we can rush through the last round of very quick presentations. The next one will be Kamesh Shekar.

Kamesh Shekar:
Thank you so much, Luca. So I guess we have very little time to rush through the paper. Our intervention, our chapter, also answers some of the questions that the first panel spoke about. I'll very briefly touch upon the three things that we do in our paper and the background to them. As we all know, there is already a lot of buzz around the uncertainty over AI regulations and AI technologies themselves. In response to that, we still see a lot of frameworks emerging at various levels, with strategy documents and legislation cropping up here and there. But one very important question that we try to answer through our chapter is this: if tomorrow we bring in a framework and say that AI developers have to follow a certain set of principles, will everything become fine? That's where our paper comes in and asks: what about AI deployers? What about the impacted population, who also interface with the technology at this moment? AI technology used to be B2B, but now it's also B2C with generative AI technologies, where we also interface with it and use it. So it's this very specific question that we ponder, and we suggest a principle-based framework at the ecosystem level, where responsibilities are divided across various stakeholders within the ecosystem such that, collectively or collaboratively, we can make the entire ecosystem of artificial intelligence utilization safer and more responsible. How we went about doing this: first, we tried to map the impact and harm across the AI lifecycle. Let me give you an example at the different stages that makes it very clear: exclusion. If we talk about exclusion as the end impact, the adverse implication doesn't happen just because one particular aspect has gone wrong.
There are various aspects which come together at the different stages of the AI lifecycle, and across this lifecycle, we all know that there are different players involved. All of these implications come together and make the exclusion happen. So we went about actually mapping that, and this also resonates with Melody's point on where the liability or the responsibility lies: we need to understand who the players are and what they do. After doing this, the organic progression is to ask what principles everybody has to follow. This also answers someone's question from online about consensus building on principles. We have a lot of principles available out there on AI, but we need to start having a conversation: you have those principles, and this is the principle I resonate with. I think that's the starting point, and maybe that's also an answer to the question about the international level: everybody coming together and discussing something collaborative and legitimate at the international level. So we map all the principles, and then the third point is operationalization. In operationalization, the very specific gap that we are trying to fill is to bring out the differences in a principle at its different stages. Giving an example again: human in the loop as a principle. We keep talking about it, but at the operationalization level, when we come to the planning and designing stage, human in the loop means something different; it means you have to engage with the stakeholders, or you have to bring the impacted population into the room, and so on. The same principle means something different at the other levels.
So that is the difference that we bring. Thirdly, and finally, before I conclude: after mapping the impact, the principles, and operationalization comes implementation, and that goes ultimately to governments. There, what we try to do is look at it a little from the angle that the last speaker mentioned: there is a market in Brazil for generative AI, and that's the case for any developing country. So we need to balance the approach and see that regulation does not necessarily have to be compliance-based; it can also be market-based. How can we enable the market? We are trying to look at how to operationalize this framework into a market-based mechanism where there is a value proposition which businesses see. This is what we do in the paper. Yeah, I can take more questions.

Moderator – Luca Belli:
Fantastic. At this point, let me thank the last set of panelists for being very concise, because I know that we have time constraints, and our tech support has been so kind as to give us five or ten extra minutes to finalize. So let me give the floor now to Kazim Rizvi for his very short presentation. Thank you.

Kazim Rizvi:
Thank you so much. Just moving on from what Kamesh was talking about: we have two papers as part of this brief, and the second paper looks at the mapping and operationalization of trustworthy AI principles. While the first paper, as Kamesh said, comes up with the principles, here we look at certain sectors where we have to understand the synergies and conflicts with respect to these AI principles and how they'll play out. What we try to do here is look at two areas: one is the finance sector, and the second is the health care sector. For these two sectors, we come up with certain principles which we believe are critical for operationalization and for making sure that you are deploying trustworthy principles on the ground. The paper has adopted an approach that looks at the technical and non-technical layers of AI. The technical layer looks at different implementation solutions and how you integrate these solutions with the responsible AI framework which we are developing. The non-technical layer explores strategies for responsible implementation, ethical directions, et cetera. And all of this has been done through a multi-stakeholder approach. We've advocated for a multi-stakeholder approach towards mapping and operationalization of AI principles; that's something we've been very clear about, because we believe that you need a different set of stakeholders: the industry, civil society, academia, the government, et cetera, to come together and look at how these principles will be operationalized for these two particular sectors. So we've spoken to experts in these sectors.
We will also be looking at certain discussions to see if some of these principles can be implemented effectively, and at domestic coordination of regulation. What we've also identified is that there is no specific act or law which governs AI in India. So we have tried to come up with some principles for how the privacy law, the IT law which regulates the internet, and the different other laws which are coming up can all work together, and how they can harmonize with each other with respect to the regulation of AI in the future. At one level, we are talking about domestic coordination. We're not saying that you have to regulate very stringently, but how can existing internet laws be harmonized? The second is international coordination, and that's where, as Kamesh said previously, we've looked at whether, at a global level, we can come up with some models or frameworks to identify implementation. Then, looking at these two sectors: for health care, what is required, and what are the principles which are key for health care but may not be necessary for the finance sector? That kind of mapping and operationalization is what we are doing right now. We're also looking at alternative regulatory approaches: market mechanisms, public-private partnerships, and even consumer protection among developers, and how we ensure safety, et cetera. That's something we've looked at as well, and the idea is to look at deployment and implementation and to test it with one of these two sectors.

Moderator – Luca Belli:
The technical support is telling me that we have to move fast without breaking things. So let me pass the floor to the last two speakers, who will very quickly present their brilliant papers. Claudio, Chico, you have a presentation. We have the very last presentation, and then one more after that. Can we have the presentation online? Maybe in the interest of time, let me… Yes, we have a presentation. Excellent.

Giuseppe Claudio Cicu:
So konnichiwa to everyone, and thank you, Professor Belli, for the introduction. I will very quickly dive into the relationship between artificial intelligence and corporate governance, because, as we can all see, artificial intelligence is reshaping our social, economic, and political environment, and the corporate governance framework and business processes are also being affected by this technological revolution. Indeed, we are hearing about, for example, the appointment of artificial intelligence as directors, which, legally speaking, I really doubt, but it is happening at this time. So I have the feeling that we are going toward a new form of corporate governance that I have labeled the computational corporate governance model, where artificial intelligence is an auxiliary instrument, or can maybe substitute for the directors in the main functions of corporate governance bodies, like, for example, strategy setting, decision making, monitoring, and compliance. So I put a question to myself: are we going toward a technologization of the human being? I'm afraid of it. As we know, we have a lot of problems in this kind of revolution; the main problems that I'm working on in my paper are the transparency and accountability problems. For this reason, I tried to create a framework to allow corporations to implement artificial intelligence in an ethical way in corporate governance and business processes. My proposal, which I named the AI by Corporate Design Framework, is grounded in business process management, the field of management that allows one to analyze and improve the processes in the corporation. It is juxtaposed with the AI lifecycle, and I divided both of them into seven steps, combining them to control artificial intelligence and enhance the principles of human in the loop and human on the loop. Of course, this model is also grounded in a human rights, global AI framework.
It is also based on the privacy by design principle, which states that it is better to prevent than to react. Under corporate governance, and quickly, because I'm going to conclude, I propose the creation of a new committee, the Ethical Algorithmic Legal Committee, composed of a mix of professionals, for example not only directors but also consultants, that can act as a filter between the stakeholders and the output of the artificial intelligence. And I conclude by asking, not only myself but also you, whether it is not time for legislators to start thinking about this technology in its corporate dimension, as happened in Italy, for example, with reference to accountability, organization, and administration. My answer is yes. I think that it is time.

Moderator – Luca Belli:
Thank you. Fantastic, and thank you very much for delivering this excellent and detailed presentation in literally three minutes. So we now have the final presentation, the one by Liisa, last but of course not least. Please, Liisa, the floor is yours.

Liisa Janssens:
Thank you very much. My name is Liisa Janssens, and I will very shortly explain where I am from, because that is also connected to the paper that I have written. I'm a scientist at the Department of Military Operations at the Dutch Applied Sciences Institute. I have a background in law and philosophy, I combine those two disciplines in my projects, and I work together with mathematicians and engineers. I am very proud to say that, because it is actually very difficult to work together in a truly interdisciplinary way. I have been at the Institute for seven years now, and for the past two years this collaboration has actually worked, because I found a way to work together: scenario planning. Scenarios, military theater scenarios, can be a platform where you meet each other across different disciplines. You stay within your own discipline, but you can meet each other around one focus point of problems: how to solve problems from the technical point of view and how to connect that to, for example, rule of law mechanisms. I am seeking new requirements from the point of view of rule of law tenets, because we can find agreement within the United Nations, but also in the European Union and in the USA, that the rule of law matters and is very important to adhere to. The rule of law for me is about good governance, and if I connect it to AI, it is about good governance of AI. How do we do that? I am looking for new technical requirements informed by multiple disciplines: law, philosophy, and technology. The way of working together that I found is a very well informed operational scenario, in which you can even test the new requirements. For example, and this is very ambitious, we are going to try to do that in a NATO project via digital twins, or maybe even a real setting, an operational test environment. Thank you.

Moderator – Luca Belli:
Fantastic. Fantastic. And now, as everyone has been so patient to stay here until the end of the day, it's 6.36, so you all deserve a free complimentary copy of the book, and the first one to run up here will get it. The others will have free access to the PDF, which you can already download on the page of the Data and AI Governance Coalition. I repeat, you can also use the short URL bit.ly slash DIG23 or DIG2023; you have both. You can use the form to give us feedback, you can speak with us now to give us feedback, and we can have a drink together now so that you can give us feedback. All feedback is very welcome. And thank you very much. Really, thank you very much especially to, and I don't want to diminish the importance of the first two sets of panelists, but this last one has been fantastic. And thank you a lot to the technical teams; you are excellent and you have done tremendous work. Thank you very much.

Armando José Manzueta-Peña: speech speed 171 words per minute; speech length 1779 words; speech time 625 secs
Audience: speech speed 167 words per minute; speech length 1002 words; speech time 361 secs
Camila Leite Contri: speech speed 182 words per minute; speech length 1243 words; speech time 410 secs
Gbenga Sesan: speech speed 187 words per minute; speech length 1035 words; speech time 333 secs
Giuseppe Claudio Cicu: speech speed 133 words per minute; speech length 509 words; speech time 230 secs
Jonathan Mendoza Iserte: speech speed 134 words per minute; speech length 791 words; speech time 354 secs
Kamesh Shekar: speech speed 188 words per minute; speech length 956 words; speech time 305 secs
Kazim Rizvi: speech speed 182 words per minute; speech length 704 words; speech time 232 secs
Liisa Janssens: speech speed 153 words per minute; speech length 373 words; speech time 146 secs
Melody Musoni: speech speed 158 words per minute; speech length 1579 words; speech time 600 secs
Michael: speech speed 167 words per minute; speech length 428 words; speech time 154 secs
Moderator – Luca Belli: speech speed 168 words per minute; speech length 3484 words; speech time 1244 secs
Smriti Parsheera: speech speed 224 words per minute; speech length 890 words; speech time 239 secs
Wei Wang: speech speed 157 words per minute; speech length 839 words; speech time 321 secs

Child online safety: Industry engagement and regulation | IGF 2023 Open Forum #58


Full session report

Julie Inman Grant

The analysis covered a range of topics related to online safety and abuse. One of the main points discussed was Australia’s strong online content scheme, which has been in place for over 22 years. The scheme is primarily extraterritorial, as almost all of the illegal content it deals with is hosted overseas. This highlights Australia’s commitment to tackling online content and ensuring a safe online environment for its citizens.

Another important aspect highlighted in the analysis is the need for a more individual-centered approach to addressing online abuse. Schemes have been put in place to address individual abuse cases, and understanding current trends in online abuse is deemed integral to applying systemic powers effectively. Taking into account the experiences and needs of individuals affected by online abuse can lead to more targeted and effective interventions.

A concerning finding from the analysis is the significant increase in cases of online child sexual exploitation and sexual extortion. It is reported that there has been a doubling of child sexual exploitation cases and a tripling of sexual extortion reports. Shockingly, one in eight analyzed URLs involves coerced and self-produced abuse through smartphones and webcams. These figures highlight the urgent need for robust measures to combat online child sexual abuse and protect vulnerable children.

The role of online platforms in preventing abuse was also discussed. Currently, online platforms are being used as weapons for abuse. However, platforms like Snap and Instagram have been provided with intelligence reports on how to prevent this abuse. The analysis suggests that online platforms should do more to proactively guard against their services being exploited for abusive purposes.

The analysis also touched upon the topic of corporate responsibility in online safety. The introduction of the basic online safety expectations tool allows the government to ask transparency questions and compel legal answers from companies. Furthermore, companies can be fined based on their truthful and complete responses. These expectations play a pivotal role in compelling companies to operate safely and protect their users.

Global collaboration and transparency were identified as crucial factors in tackling online child abuse. Initiatives like the Heat Initiative are putting pressure on large companies, such as Apple, to do more to address child sexual abuse. Additionally, further enforcement announcements targeting five more countries are to be made next year, indicating the ongoing commitment to global collaboration in combating online child abuse.

The analysis also highlighted the challenges faced in safeguarding children online. While the internet has become an essential tool for children’s education, communication, and exploration, it was not initially built with children in mind. Notably, there has been a notable increase in reports of cyberbullying among younger children during COVID-19 lockdowns. It is imperative to strike a balance between safeguarding children appropriately and allowing them to benefit from the internet’s use.

Regarding age verification, the analysis presented differing viewpoints. Companies were encouraged to take responsibility in verifying users’ ages and facilitating meaningful checks. However, it was suggested that age verification should not restrict children’s access to necessary and beneficial content. Trials for age verification are currently being conducted by platforms like Roblox, and Tinder and Instagram have begun implementing age verification in Australia. However, there are concerns about the effectiveness and potential restrictions on access for marginalized communities.

The effectiveness of META’s Oversight Board in reviewing content moderation decisions was called into question. In the past year, the board received around 1.3 million requests for content moderation reviews but was only able to cover 12 cases. This raises concerns about the board’s efficiency in handling the sheer volume of cases.

Lastly, the analysis emphasized the importance of multinational regulation for online platforms and the need for specialized agencies to handle investigations. The gray area of regulation poses significant challenges, requiring multi-layered investigations to effectively address abuse and ensure accountability.

In conclusion, the analysis shed light on various aspects of online safety and abuse. It highlighted Australia’s strong online content scheme, the need for individual-centered approaches to tackling online abuse, the concerning increase in cases of online child sexual exploitation, and the role of online platforms in preventing abuse. The importance of global collaboration, corporate responsibility, and safeguarding children online was also emphasized. Critical evaluations were made regarding age verification measures, META’s Oversight Board, and the need for multinational regulation and specialized agencies. These insights provide valuable information for policymakers, platforms, and organizations to address online safety and combat abuse effectively.

Audience

The discussion revolves around striking a balance between children’s right to access information online and ensuring their safety, particularly in relation to sexuality education. It is important to provide children with accurate and scientific information while also protecting them from potentially harmful content. This highlights the need for a comprehensive and inclusive approach to online education.

There are ongoing discussions regarding the implementation of new regulations to safeguard children online. The speaker questions whether there is a balance between raising awareness and imposing obligations on service providers under these regulations. This reflects the growing recognition of the importance of protecting children from abuse, exploitation, and violence online.

In terms of ensuring child safety online, the audience argues for not only blocking but also removing harmful content. Simply blocking such content may not be sufficient, as individuals seeking it may find ways to circumvent these blocks. Therefore, the removal of harmful content becomes crucial to guarantee the safety of children.

In conclusion, the discussion emphasizes the need for a balanced approach that upholds children’s right to access accurate information while safeguarding them from harmful content. The introduction of new regulations and the emphasis on removing, not just blocking, harmful content further demonstrate the commitment towards ensuring online child safety. This signifies progress in protecting children from abuse, exploitation, and violence in the digital realm.

Noteworthy topics discussed include children’s rights, online safety, access to information, and sexuality education. Additionally, the discourse touches upon the relevance of the UN Convention on the Rights of the Child and the impact of digital regulation on children’s rights and internet safety. These aspects contribute to a comprehensive understanding of the subject matter and highlight the interconnections between various global initiatives, such as SDG 4: Quality Education, SDG 5: Gender Equality, and SDG 16: Peace, Justice, and Strong Institutions.

Tatsuya Suzuki

During the discussion, the speakers emphasised the need to enhance internet safety for children. They highlighted the importance of having a comprehensive plan in place to ensure the secure use of the internet by children. This plan involves collaborative efforts with various stakeholders, including academics, lawyers, communications companies, and school officials. These groups can work together to develop strategies and guidelines that promote responsible internet use among children.

The speakers also expressed their support for public-private initiatives aimed at addressing online child abuse and exploitation. They recognised the crucial role of the Children and Families Agency in respecting the voluntary initiatives of the private sector in these efforts. Additionally, they highlighted the active collaboration between the agency and the Ministry of Education, Culture, Sports, Science and Technology, as well as the involvement of Japanese UNICEF. These collaborations are important in developing effective and comprehensive approaches to combating online child abuse and exploitation.

Overall, the sentiment expressed during the discussion was positive, with a strong emphasis on implementing measures to protect children online. The speakers recognised the urgency and importance of ensuring the safety and security of children in their online activities.

Through the analysis, it is evident that this issue is aligned with Sustainable Development Goal 16.2, which aims to end abuse, exploitation, trafficking, and all forms of violence and torture against children. By addressing the challenges of internet safety and working towards its improvement, progress can be made towards achieving this goal.

In summary, the discussion highlighted the necessity of implementing initiatives to improve the safe and secure use of the internet for children. Collaboration with various stakeholders, such as academics, lawyers, communications companies, and school officials, is essential in developing a comprehensive plan. The support for public-private initiatives in tackling online child abuse and exploitation was also emphasised, acknowledging the roles of the Children and Families Agency, the Ministry of Education, Culture, Sports, Science and Technology, and Japanese UNICEF. Overall, there was a positive sentiment towards the implementation of measures that protect children online, aligning with Sustainable Development Goal 16.2.

Moderator – Afrooz Kaviani Johnson

Child exploitation on the internet is an ongoing issue that has evolved over the years. It now encompasses more than just explicit materials, but also the ways in which technology enables abuse. To effectively address this issue, collaboration across sectors is crucial.

Australia’s eSafety Commissioner is at the forefront of combating online abuse. This government agency has implemented a range of regulatory tools to drive industry-wide change, and its role in spearheading these efforts is commendable.

The involvement of the private sector is also vital in protecting children online. Companies are increasingly being called upon to take proactive measures and be accountable for their responsibilities in ensuring online child safety. These discussions involve industry experts from various countries, including Japan’s private sector and BSR Business for Social Responsibility.

Japan is making significant strides in enhancing internet safety for young adolescents. The country’s Children and Families Agency and multiple stakeholders, such as academics, lawyers, communications companies, school officials, and PTA organizations, are actively involved in creating a safe and secure online environment for young people. Japan’s measures in this regard have been positively received and appreciated.

Recognising the importance of private sector involvement, Japan’s Children and Families Agency administers the Internet Environment Management Act, which respects the voluntary and independent initiatives of private organizations. These organizations are actively engaged in ensuring the safe and secure use of the internet by children.

Addressing online child abuse is a complex and challenging task. Mr Suzuki, a prominent speaker, highlighted the various ways in which children can fall victim to online abuse, emphasising the need for parental involvement and proper ‘netiquette’. In Ghana, collaborative regulation involving tech companies has been adopted to tackle online child abuse.

Continued learning and knowledge exchange are crucial in combating online child abuse. A recent discussion on internet literacy and online child abuse served as a fruitful exercise and a positive step in addressing the issue. Ultimately, promoting sustainable development by ensuring all learners acquire the necessary knowledge and skills is vital.

In conclusion, addressing the issue of child exploitation on the internet requires collaboration across sectors, involvement of government agencies like Australia’s eSafety Commissioner, proactive engagement of companies, efforts from countries like Japan, and continued learning. These various approaches collectively work towards protecting children online and making the digital world a safer space for young people.

Toshiyuki Tateishi

The Japanese private sector has adapted over the past decade to address the challenges of online child sexual abuse and exploitation. Japan’s constitutional protection of the secrecy of communication constrains the blocking of websites, so blocking is carried out through a voluntary industry scheme. Under this scheme, ISP DNS servers block access to illegal sites using a curated list. If an abusive site is discovered within Japan, the Internet Service Provider (ISP) deletes it and the police investigate. In the case of overseas sites, a thorough examination is conducted, and if a site is found to contain sexually abusive content, it is promptly blocked. Japan’s approach was recognised by a 2016 UN report for preserving digital freedoms with minimal government interference. Particular emphasis is placed on striking a balance between freedom of communication, security, and innovation online. Before taking down any content, communication with the relevant parties is attempted, even if they are located overseas, underscoring the importance of dialogue and potential collaboration with foreign entities in addressing online safety concerns. Overall, this comprehensive strategy demonstrates Japan’s commitment to creating a safe online environment and combating online child sexual abuse and exploitation.
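The DNS-based blocking described above can be illustrated with a minimal sketch: the ISP’s resolver answers queries for blocklisted domains with a different IP address, so the user’s browser never reaches the real server. This is only an illustration under stated assumptions, not the actual implementation; all domain names, addresses, and the blocklist contents are hypothetical.

```python
# A minimal sketch of DNS-based blocking. All domain names, IP addresses,
# and blocklist contents here are hypothetical; in the real scheme, ISP
# resolvers retrieve a curated list weekly from the Internet Content
# Safety Association.

SINKHOLE_IP = "192.0.2.1"  # documentation-range address used as the "wrong" answer

# Hypothetical curated blocklist of domains hosting illegal content
BLOCKLIST = {"abusive-site.example"}

# Hypothetical records the resolver would normally return
UPSTREAM = {
    "abusive-site.example": "198.51.100.7",
    "ordinary-site.example": "203.0.113.9",
}

def resolve(domain: str) -> str:
    """Answer a DNS query, steering blocklisted domains to a sinkhole."""
    if domain in BLOCKLIST:
        # Instead of the site's real address, return a different one,
        # so the client never connects to the blocked site.
        return SINKHOLE_IP
    return UPSTREAM[domain]  # normal resolution (error handling omitted)

print(resolve("abusive-site.example"))   # sinkhole address
print(resolve("ordinary-site.example"))  # real address
```

Note that, as the speaker’s analogy suggests, blocking at the DNS layer is coarse: it applies per domain name rather than per page, and it only affects users of resolvers that apply the list, which is why the same content can remain reachable from other countries.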

Dunstan Allison-Hope

Human rights due diligence plays a vital role in upholding child rights and combating the alarming issue of online child sexual exploitation and abuse. Business for Social Responsibility (BSR) emphasises that incorporating human rights due diligence is essential for companies to demonstrate their commitment to the well-being of children. BSR has conducted over 100 different human rights assessments with technology companies, highlighting the significance of this approach.

A comprehensive human rights assessment involves a systematic review of impacts across all international human rights instruments, focusing on safeguarding rights such as bodily security, freedom of expression, privacy, education, access to culture, and non-discrimination. It is crucial to adopt a human rights-based approach, which includes considering the rights of those most vulnerable, particularly children who are at a greater risk.

The European Union Corporate Sustainability Due Diligence Directive now mandates that all companies operating in Europe must undertake human rights due diligence. As part of this process, companies must evaluate the risks to child rights and integrate this consideration into their broader human rights due diligence frameworks. By explicitly including child rights in their assessments, companies can ensure that they are actively addressing and preventing any potential violations.

However, it is important to maintain a global perspective in human rights due diligence while complying with regional laws and regulations. Numerous regulations from the European Union and the UK require human rights due diligence, but there is a concern that the time and attention devoted to compliance in the European Union and the United Kingdom takes attention away from places where human rights risks may be more severe. Therefore, while adhering to regional requirements, companies should also pursue broader global approaches to effectively address human rights issues worldwide.

A holistic human rights-based approach seeks to achieve a balance in addressing different human rights, with a specific focus on child rights. Human rights assessments typically identify child sexual exploitation and abuse as the most severe risks. To ensure the fulfilment of all rights, a comprehensive assessment must consider the relationships between different human rights, including the tensions between them and the ways in which fulfilling one right can enable the fulfilment of others.

Another crucial aspect of human rights due diligence is the application of human rights principles to decisions about when and how to restrict access to content. Cases before the Meta Oversight Board have shown that having the time to analyse a case can provide insights and ways to unpack the relationship between rights. Applying human rights principles such as legitimacy, necessity, proportionality, and non-discrimination to decisions about when and how to restrict access to content helps ensure a balanced approach.

It is also important to provide space to consider dilemmas and uncertainties and to make recommendations in cases relating to human rights, particularly child rights. The deliberative space available to the Meta Oversight Board was highlighted, and the idea of similar processes for child rights cases was welcomed. Such space helps ensure that informed decisions can be made, different perspectives considered, and recommendations offered.

In conclusion, human rights due diligence is vital to respect and safeguard child rights and combat online child sexual exploitation and abuse. By integrating child rights into their broader human rights due diligence, companies can demonstrate their commitment to the well-being of children. While complying with regional laws, it is crucial to adopt a global approach to effectively address human rights risks. A holistic human rights-based approach considers the interrelationships between different rights, while the application of human rights principles guides decisions about content access. Providing space for deliberation and recommendations in cases involving child rights is fundamental to making informed decisions and ensuring the protection of children’s rights.

Albert Antwi-Boasiako

The approach adopted by Ghana in addressing online child protection is one of collaborative regulation, with the objective of achieving industry compliance. In line with this, Section 87 of Ghana’s Cyber Safety Act has been established to enforce industry responsibility in safeguarding children online. The act provides provisions that compel industry players to take action to protect children from online threats.

Furthermore, Ghana’s strategy involves active engagement with industry players, such as the telecommunications chamber, to foster mutual understanding and collaboratively develop industry obligations and commitments. This collaborative approach highlights the importance of involving industry stakeholders in shaping regulations and policies, rather than relying solely on self-regulation.

The evidence supporting Ghana’s collaborative regulation approach includes the passing of a law that includes mechanisms for content blocking, takedown, and filtering to protect children online. These measures demonstrate the government’s commitment to ensuring the safety of children in the digital space.

The argument put forth is that self-regulation alone cannot effectively keep children safe online, as it may not provide sufficient guidelines and accountability. On the other hand, excessive regulation can stifle innovation and hinder the development of new technologies and services. Ghana’s approach strikes a balance by fostering collaboration between the government and industry players, promoting understanding, and establishing industry obligations without impeding innovation.

In conclusion, Ghana’s collaborative approach to online child protection aims to ensure industry compliance while striking a balance between regulation and innovation. By actively engaging with industry stakeholders, Ghana seeks to develop effective measures that safeguard children online without stifling technological advancement. This approach acknowledges the limitations of self-regulation and excessive regulation, thus presenting a more holistic and effective approach to online child protection.

Session transcript

Moderator – Afrooz Kaviani Johnson:
Okay well welcome everyone, welcome everyone in the room and I understand we’ve got at least 20 that have logged on online as well to join this evening session in Kyoto so I know it’s been a long day for many people and we appreciate you taking the time and joining us in this session. We are going to be exploring different models of industry engagement and regulation to tackle online child sexual abuse and exploitation. My name is Afrooz Kaviani, I work for UNICEF headquarters in New York as the global lead on child online protection. I’m joined by my colleague Josie who leads our work on child rights and responsible business conduct in the digital environment. So Josie is managing our online moderation today and she’ll be looking out for questions and comments that may be coming from our online participants and we’re delighted to have with us expert speakers from different sectors and really from around the globe joining us representing Australia’s eSafety Commissioner, Japan’s Children and Families Agency, Japan’s private sector, Ghana’s Cyber Security Authority and BSR Business for Social Responsibility. Our aim today is to foster collaboration and the exchange of ideas, experiences and innovative strategies on this difficult topic of child sexual abuse online. So I do want to give the content warning that we are speaking about a difficult topic and it may be disturbing for people in the room or online so please feel free to step out or do what you need to do to, you know, safeguard your own well-being. 
Many of you already know that this challenge of child exploitation on the internet is not new, however its nature has changed over the last decades and in the early stages efforts primarily were looking at halting the spread of child sexual abuse materials on the internet but today we’re seeing how technology is also being used to enable or facilitate child sexual abuse in a wide range of ways including the live streaming of child sexual abuse, the grooming of children for sexual abuse, the coercion, deception and pressuring of children into creating and sharing explicit images of themselves. So obviously it goes without saying that addressing this issue requires collaboration across sectors and it requires strengthening of systems for protection for children you know in their homes and their communities and in their countries but today we’re zooming in on a specific dimension of this response and it’s about how different jurisdictions are engaging companies in this effort and we’ve got one round of questions for our panelists and then we’re going to open the floor for questions and discussions from the audience. So I’m really delighted to turn to Australia to start us off and we’re so pleased to have with us Julie Inman Grant who is Australia’s eSafety Commissioner. Thank you Commissioner for joining us and the question for you is really being the world’s pioneering government agency for online safety. I’m interested to hear from you about the suite of regulatory tools that you’re deploying to really drive systemic change in industry against online child abuse.

Julie Inman Grant:
It’s important to start with the fact that Australia has had a strong online content scheme for more than 22 years, which means almost none of the content, illegal content that we’re dealing with online, is hosted in Australia. It’s almost all extraterritorial and overseas. So you see the world moving towards some much more process and systemic types of laws. We’re seeing with the online safety bill in the UK, with the Digital Services Act. We do have process and safety powers, but I also want to start by talking a little bit about the complaint schemes that we have, because I believe it’s one of the most important things that we do. We seem to forget that it’s individuals who are being abused online, and that’s how harm manifests, and the ability to take down that content to prevent the re-traumatisation, but also to understand the trends that are happening through engaging with the public is really critical to our success in applying the systems and process powers. So just to give you an example, we’ve seen a doubling this year of child sexual exploitation. When we analysed about 1,300 URLs, we found that one in eight is now, instead of inter-familial abuse, which tends to be more typical, one in eight is coerced and self-produced through smartphones and webcams in children’s bedrooms and bathrooms, in the safety of the family home. So that’s really significant. It just shows that the internet is becoming a new receptacle for targets, for predators, and it’s no longer one of convenience. The other huge trend that a number of us are seeing is we’ve had a tripling of sexual extortion reports coming into our image-based abuse schemes. So image-based abuse, the non-consensual sharing of intimate images and videos, we are seeing younger children being subject to that, but it’s now young men between the ages of 14 and 24 that are largely being targeted. And while 18 is the year that we consider young people adults, they’re not totally cognitively developed. 
They may be leaving school, so they don’t have the pastoral care and protections that they might once have had. So it’s a very distressing kind of crime, and sometimes it can happen very rapidly. Organized criminals have figured out that young men will take off their clothes and perform sexual acts for the camera more readily than young women, and they will negotiate down. We’ve seen some negotiations where they’ll try and extract $15,000 from a teenager, and they’ll say, well, I’m just a teenager, I don’t have that money, well, how much can you give me? And it’s relentless. But they’ll also use guilt and shame and other tools of social engineering. So all this is really important for us to understand. We’ve actually developed some intelligence reports for companies like Snap and Instagram to say this is how we see your platform being weaponized. If you use some AI and machine learning, you can see that these same images are being used in 1,000 different reports, and if you use some natural language processing, you’ll see that they’re using the same language. So we need to encourage the companies to step up, and that’s where safety by design is a key systemic tool. But I guess the most potent one that we have is what we call the basic online safety expectations, and that’s where we lay out a set of foundational expectations we have for online companies, whether they’re gaming or dating sites or social media sites or messaging sites, to operate in our country. And it gives us the opportunity to ask transparency questions and compel legal answers. Questions we’ve been asking for six years, basic things like photo DNA has been used for more than ten years. Which services are you using it on? Are you using it at all? Are you looking at traffic indicators for live stream child sexual abuse material? Again, we can fine the companies based on whether or not they respond truthfully and fulsomely in the manner and form. So that’s where the penalties are. 
stunning report, I think, in December of 2006, looking at the most powerful countries and companies in the world that have the financial resources and the capability to do things, but are not doing enough. So shining that light, with sunlight being the best disinfectant, is, I think, an effective tool. We’ve already seen in the United States the Heat Initiative and others, you know, putting pressure on companies like Apple to target child sexual abuse material. You can’t tell me that in 2022 they only had 234 cases of child sexual abuse when they’ve got more than a billion handsets and iPads in the market and iCloud and iMessage. So we really need to lift the hood. We’ll be making a similar set of enforcement announcements next year focused on five more countries. We need to continue to work together. We need to lift the lid. We need to focus sunlight on so that we don’t let darkness fester in the darkest recesses of the web.

Moderator – Afrooz Kaviani Johnson:
Thank you, Commissioner. That is so fascinating, just the breadth of tools, and really I have to apologize and let everyone in the room know that I’ve given a very small amount of time to each speaker. So the Commissioner did an amazing job there really covering the breadth, but I think we’re going to have time to unpack and understand better. But I think just what you’ve managed to do and just those analogies of, you know, shining the light and using those regulatory tools to lift the hood.

Julie Inman Grant:
I forgot just to mention that we have codes, mandatory codes and standards covering eight sectors of the technology ecosystem, five of which we’ve filed, and a search engine code which now includes generative AI and synthetic generation of CSAM and TVEC, but we’re creating standards for a broader range of what we call designated internet services and relevant electronic services.

Moderator – Afrooz Kaviani Johnson:
Thank you, Commissioner. We’re now going to move to Japan. We’re here in Japan, so it’s very timely and it’s actually very exciting to introduce the next speaker. Mr. Tatsuya Suzuki. He’s the director of a newly formed agency in Japan, which is very significant for the child protection and child rights architecture in this country. So he’s the director of the Child Safety Division of the Children and Families Agency. He’ll be joining us online. And the question for Mr. Suzuki is to understand with his extensive experience, which includes roles at Japan’s National Police Agency, we’re wanting to know more about how this newly formed agency is really going to push forward public-private initiatives in order to tackle the specific issue of online child sexual abuse and exploitation. Do we have…

Tatsuya Suzuki:
(Speaking in Japanese; interpreted. Portions of the interpretation were unclear.) The first point is for children to use mobile information devices and appropriately select and utilize information on the internet, so as to develop their ability to use the internet appropriately. We are also working on a comprehensive plan for children’s proper use of the internet. We had been working on this at the Cabinet Office for a long time, but this April the Children and Families Agency was established, so the agency is now taking this work forward. As I mentioned earlier, under the Internet Environment Management Act, the Children and Families Agency respects the voluntary and independent initiatives of the private sector, and we are working with private organizations. For children’s safe and secure use of the internet, we are working with experts in various fields, including academics, lawyers, communications companies, school officials, and PTA organizations, and holding seminars on the improvement of the internet environment for young people. We are also discussing the revision of the basic plan. Finally, I will explain a little about measures to prevent the sexual exploitation of children. 
In the past, the National Police Agency and the Japanese government have worked together on measures to prevent the sexual exploitation of children. Last year, we implemented the Child Sexual Exploitation Prevention Plan 2022, but from this year, the Children and Families Agency will be in charge of these prevention measures. In promoting the measures, we are also actively working with the Ministry of Education, Culture, Sports, Science and Technology and Japanese UNICEF. That’s all I have to say. Thank you.

Moderator – Afrooz Kaviani Johnson:
Thank you very much. Fantastic to hear how there are, you know, these basic standards in the law and now you are starting the implementation measures, which take that multi-sectoral approach but with a strong engagement of the private sector. On that note, I’m very pleased to shift the mic to the private sector representative from Japan, Mr. Toshiyuki Tateishi, who is representing the Japan Internet Providers Association as well as the Internet Content Safety Association, and my question is if you can let us know how private sector initiatives in Japan have adapted over the last decade to address emerging challenges relating to online child sexual abuse and exploitation.

Toshiyuki Tateishi:
Thank you. I’m very happy to be here. I’d like to explain the Japanese situation, so could you check the slides? In Japan we have the secrecy of communication, which is a constitutional guarantee, but how the blocking system works is very hard for ordinary people to understand, so I made some small slides. Ordinary people think of blocking like this: we go outside, we want to enter a house or some other building, and we find we cannot go in. But that is not how blocking works; that is just a real-world block. Sorry, the slide is still partly in Japanese. Normal website access works with DNS: we ask a DNS server for a site’s address and it replies. Under the blocking system, the DNS server answers with the wrong address, perhaps another server’s IP address, so we cannot access the blocked site, rather than being stopped at the entrance to a building as I first described. It is like this: when I want to go to the karaoke, the gatekeeper says, okay, you can go there. But the blocking system is really like this: I say I want to go to house A, and even when I want to go home, I cannot; you cannot go out even from your own house. That is the blocking system. And from another point of view, if we block a site in Japan, people in many other countries can still access the content; only Japanese users cannot see it. This is the blocking scheme as a measure against these sites: on the left side of this slide, users of the internet report illegal content, and the reports come to our association, the Internet Content Safety Association, where we make a list. Automatically, the DNS servers retrieve the list weekly and update, so we can block the illegal content. 
If the website is located in Japan, it is deleted by the ISP, and the police investigate and arrest the criminals. If it is not located in Japan but overseas, we check whether the site really exists, and then we validate whether it contains sexual abuse material. After that, we create a list and distribute it to the ISPs, and the sites are blocked. We then check weekly whether each site still exists, and if it does not, we delete the URL from the list. Thank you very much. And lastly, one more thing to express. In 2016, in the UN report about freedom of expression in Japan, I was talking with the rapporteur. He said Japan presents a kind of great model in the area of internet freedom: a very low level of government interference with digital freedoms demonstrates the government’s commitment to freedom of expression. As the government considers legislation related to wiretaps and new approaches to cybersecurity, he hoped that this spirit of freedom of communication, security, and innovation online would be kept at the forefront of regulatory efforts. So I would like to keep this situation. Thank you.

Moderator – Afrooz Kaviani Johnson:
Thank you so much, Mr. Tateishi. That was very helpful to have the images. I appreciate your effort in those bespoke images. And I think you raise some important points that we may get to discuss as we go on, looking at the various rights that are implicated and making sure that we do advance human rights and children’s rights holistically and ensuring that every child has the right to protection from sexual abuse and exploitation. So from Japan, we’re now going to move to Ghana. And I’m so delighted to have Dr. Albert Antwi-Boasiako, who’s the Director General of Ghana’s Cybersecurity Authority. So Director General, as the Cybersecurity Authority really pioneers its role, because it is also relatively new in the scheme of things, interested to hear how Ghana is championing industry responsibility and fostering innovation to tackle this issue of online child sexual abuse and exploitation.

Albert Antwi-Boasiako:
Thank you, Yannis, and thank you to colleagues, speakers, and everybody here and hopefully online, for the invitation to contribute to the discussion. I’m a government leader, and I’m impressed by how Australia has advanced this, but as a government lead for the past close to seven years, I think there are different maturity levels, and I want to speak from the developmental context. First things first is very important: if you jump without doing the first things, you’re likely to create problems. So I think the first thing is to have a commonality, and that’s one of the things I heard here, the commonality; whether you’re starting or whether you’re advanced, the baseline requirement is key. But one also needs to appreciate a bit of context. From a developing country perspective, sometimes my Western colleagues come in and tell me, but you have this law, and I say, well, the culture of other people is different. You have an interest in making progress, but there is the question of the technical competency or capacity of the host country. In the early part of this job, as national adviser to government before I was appointed Director-General, I realized that there are other factors that affect enforcing certain rules. In fact, we ran a lot of ideas past my partners. Once you mentioned regulation, my Western colleagues said no regulation, especially my US folks. But I think over the past few years we have converged on the matter. There is now a sort of consensus that self-regulation alone cannot keep our children safe. By and large, some of my colleagues have shifted a little bit, and possibly I also didn’t stay too extreme, because there is that concern.
If you over-regulate, then you are also going to kill innovation, especially from a private sector perspective. But Ghana came up with a certain strategy, what we call collaborative regulation. It is regulation, all right, because without regulation I don’t think we’ll be able to achieve this. But how is it unique? I realized that it’s not just a government making a law and expecting the industry to comply. Sometimes, and I can confirm this, the industry that we expect to follow certain best practices and implement certain measures do not themselves appreciate the risks that our children are facing, whether in the content they access, the conduct they engage in, or the contact they establish. When you have this realization, you will be very careful about how you start your regulatory process. So, taking inspiration from Julie’s Basic Online Safety Expectations, we had to pass the law, and the law incorporated the issues of blocking, taking down content, and filtering. It was quite a difficult one because, of course, of the suspicion from civil society. Again, we had to sit together to debate, and eventually Section 87 of our Cybersecurity Act makes provision to compel industry to act in a manner that will protect children on the Internet. But that is just a basic framework. My colleagues from common law countries will appreciate that beyond the primary law you need an L.I., a legislative instrument, to effectively and practically operationalize the law. And, Afrooz, we’re grateful; we had to invite you. We opened up, not just to industry, but to international partners. Afrooz visited Ghana for the first time to take part in a public consultation to formulate the specific mechanism by which industry plays its part, and I think she saw that the industry is sitting together with us. In fact, they are making suggestions. And as I sit here, I can mention that Ghana is active.
The most active private sector player is the Ghana Telecommunications Chamber, arguably the most important industry body in this area, and they have been actively involved even in developing the L.I. That is what I refer to as collaborative regulation, because if we are doing this together, you lose the excuse of saying you are not able to comply. Of course, that doesn’t mean these are the only tools. Ghana’s law incorporates sanctions, both administrative and criminal. Of course, we needed to fund cybersecurity, and in the developing context you don’t just get that, so we incorporated it: if you do not comply, you are sanctioned, and the telecommunication firms have money, so you pay, and that is used to finance cybersecurity. So we have these tools available in our law. Nonetheless, at the core, what I wanted to share as a model from our perspective is this collaborative approach: you engage with industry because you need to build understanding. The concept of regulation in this age is not like the headmaster telling the student to go and do it. We need to engage, and I think it has been successful; even at the governance level of my authority, of eleven board members, three are from the industry. That approach has worked, and other international practices, such as the Guidelines for Industry by the ITU and UNICEF, have been incorporated into the L.I. as best practices. But currently what we are doing most is intensifying awareness creation while the L.I. is in process, because that is really what is going to operationalize the industry obligations and commitment. I don’t think we will achieve much without really raising awareness among the industry players that these are the risks, and that is the reason why you need to comply if you need to take down content.
This is why you need to comply with the law if you need to block certain content as far as the protection of children is concerned. So, in a nutshell, ours is a developing situation, I must admit, and ours is collaborative regulation, because I think that is the best approach; it is not really a government just giving instructions to industry, it doesn’t work like that. If you have a case, you discuss it, you argue at the table, and that is what Ghana has been able to use to get the industry sitting at the table. Some of our international partners who visit see the discussions; they are open and transparent. There are risks, the government has to lead, and industry needs to get on board, but we do that by way of talking. Thank you.

Moderator – Afrooz Kaviani Johnson:
Thank you. Thank you, Director-General. No, that’s really fascinating, and indeed, the purpose of this whole discussion is that exchange of experiences, because there are very different approaches, different contexts, but what I really heard from you was going along that journey together with industry and looking at what was fit for purpose in your context, and really moving from just what is on paper to practice, and the best way to do that is bringing industry along with you. Now I’m really pleased to introduce Mr. Dunstan Allison-Hope, who’s the Vice President for Human Rights at BSR. Now, as I mentioned just earlier, this issue of online child sexual abuse and exploitation is a human rights issue and a children’s rights issue, and we do know that there are various tools in the human rights suite of tools, including human rights due diligence and impact assessments conducted by companies, and these can be key instruments in advancing responsible business conduct. So the question for you is, what does robust human rights due diligence entail, and how can it play a role in addressing this particular issue of online child sexual abuse and exploitation? Thanks.

Dunstan Allison-Hope:
Great. Well, first of all, thank you for the invitation to speak. Much appreciated. I’d love an invitation to Ghana as well, if that’s forthcoming. That was quick. So the main purpose of my comments today is to share how human rights due diligence, based on the UN Guiding Principles on Business and Human Rights, can form an essential part of company efforts to respect child rights and to address online child sexual exploitation and abuse. I have really two main thoughts to share. The first is around the value of human rights due diligence, and the second is about some regulatory trends that are going to transform the landscape of human rights due diligence that I think it’s important to think about. So for context, the technology and human rights team at BSR has now conducted well over 100 different human rights assessments with technology companies. They come in a wide variety of shapes and sizes: sometimes it’s new products, sometimes it’s content policy, sometimes it’s market entry or market exit. And in doing those assessments, we’ve experienced three main benefits of taking the human rights-based approach that you mentioned. The first is the systematic review of impacts across all international human rights instruments, including all rights in the Convention on the Rights of the Child. So in a child rights context, that forces us to consider rights such as bodily security, freedom of expression, privacy, education, access to culture, and non-discrimination. It forces us to consider all of these rights holistically and to consider the relationship between them. These rights are interdependent: sometimes there’s tension between them, and sometimes the fulfillment of one right enables the fulfillment of other rights. So one clear benefit has been to take that holistic approach.
The second is that a human rights-based approach requires us to give special consideration to those at greatest risk of becoming vulnerable, which clearly includes children. So this means that a robust human rights assessment would need to consider and find ways to consider the best interests of the child. The third is that the UN Guiding Principles provides a framework for appropriate action to address adverse human rights impacts. And one thing that we’ve really noted in the technology industry is that that appropriate action may vary considerably according to where in the technology stack a company sits. So the UN Guiding Principles have been written for all companies, in all industries, in all countries of all sizes. They apply to everybody, but it forces us to think through how you apply them in the context of the company that you’re working with. Now, till now, everything I’ve mentioned, all this human rights due diligence, has mainly been of a voluntary activity by companies. It is about to become much more mandatory with some very important implications. And this is my second point, and I’m going to share a long list with you in slide form, too. I started writing this long list, and I thought actually putting them on the slide might be helpful. So there is a very long list of things that companies are now. having to respond to. We have the European Union Corporate Sustainability Due Diligence Directive that’s going to require all companies doing business in Europe, so not just European companies, all companies doing business in Europe, to undertake human rights due diligence. The Corporate Sustainability Reporting Directive will require all companies doing business in Europe to report material topics informed by the outcome of human rights due diligence. And people often think of this as a reporting directive, which it is, but it has this really important line, informed by the outcome of human rights due diligence in it. 
And we’ve mentioned already the EU Digital Services Act that requires large online platforms and search engines to assess their impacts on fundamental rights, and it specifically calls out child rights as something to be assessed. We have the UK Online Safety Bill, which requires social media companies to assess content that may be harmful or age-inappropriate for children. We have the EU AI Act, which is still being debated as we speak, but essentially it includes the EU Charter of Fundamental Rights as the basis for understanding risk. In Japan, we have the Guidelines on Respecting Human Rights and Responsible Supply Chains. So if you put yourself in the shoes of a company, that’s a lot to take on in one go. And what we’ve noted, what we advise companies about, and what we talk to companies about a lot, is that throughout these regulations, human rights assessment requirements that are based on are very similar to the UN Guiding Principles on Business and Human Rights. So our position has been, if you want to prepare yourself to comply not just with the letter of these laws, but the spirit of these laws, the outcomes that they’re seeking to achieve, taking an approach based on the UN Guiding Principles is going to get you there and is going to get you to the right place with not just one of these regulations, but all of them. My point here is quite a simple one, which is that the rights of children, including efforts to address online sexual exploitation and abuse, should be fully embedded into these broader methods of human rights due diligence. We need to make sure that assessment of risk to rights of children is fully embedded into these broader methods of human rights due diligence. So that could mean, for example, child rights impact assessments being a modular part of much broader human rights due diligence. 
It might mean making sure that children, or those with insight into the best interests of the child, are meaningfully consulted and included in the process of undertaking human rights due diligence. There’s lots that we can unpack there, but my advice is to invest a lot of effort and thought into these processes. So this trend towards mandatory human rights due diligence is, I think, a massive regulatory and cultural shift for companies, and one we would be well advised to harness for the child rights outcomes that we want to see. I am reasonably optimistic about all this, with one caveat: you’ll notice the European Union and the UK feature very strongly on this list, and I do fear that so much time and attention goes towards the European Union and the United Kingdom that it takes time and attention away from places where human rights risks may be more prominent and more severe. So one flag that we’re raising is to make sure that companies take global approaches while complying with these quite regional laws and regulations. I’ll stop there.

Moderator – Afrooz Kaviani Johnson:
Fantastic. Thank you so much. Another very impressive effort of condensing a lot of information for us in that short time. Thank you, Dunstan. That was fantastic. A lot of food for thought, and it’s really a timely discussion because of this massive shift in the global landscape, and at the same time these massive child rights and child protection challenges that we’re facing. So, online participants and people in the room, we now have a few moments for questions. We have a microphone behind us here, and we also have Josie monitoring the chat. I’m not sure if there are any questions. Please, if you could come up to the mic and put your question; we can take a few and then open to the panel to discuss.

Audience:
Thank you so much for these great discussions and presentations. My name is Yulia; I work for UNESCO, and I would like to bring up a rather challenging topic and a rather provocative question. We are talking here about protection and safety, which is of course key to children’s existence online, but at the same time we consider the right of children to access information, and this becomes more pressing when we are talking about, for instance, sexuality education. It is easy to ban all content on sexuality online, but at the same time children have the right to access correct and scientific information and content on sexuality, and I wonder what your thoughts and ideas are on how we might proceed with these challenging intersections between safety and the right to access this information. Thanks a lot.

Moderator – Afrooz Kaviani Johnson:
I might just take a couple of questions, just in the interest of time, and then it will really be open to the panel to answer. So please go ahead.

Audience:
Thank you, my name is Jutta Kroll, from the German Digital Opportunities Foundation. Just to answer before I put my question, I would refer to General Comment No. 25 to the UN Convention on the Rights of the Child. I’ve brought some copies for those who have not come across it, and it will probably answer some of the questions that have been put. I have a question for the first speaker; I have to apologize that I came in a bit late, but what I heard about the new law and regulation was that it is also about raising awareness for parents and children with regard to their protection. And I wanted to know whether there is a relation, or a balance, between the awareness-raising and education part on the one hand and the obligations on service providers on the other. And then my second question goes to this colleague: I have seen you’ve been talking about DNS blocking, but we would also need removal of the content, not only blocking, because otherwise it would still be there, and probably those who are looking for that content might find ways to circumvent the blocking that you’ve been talking about. Could you explain that maybe a little more deeply? Thank you so much.

Moderator – Afrooz Kaviani Johnson:
Okay, it looks like we don’t have any other questions from the floor for now. So there are three main questions, if I can summarize for our panel, and then we can just pass the mic along, and you can respond to one or all. The first is the really important balancing of children’s right to access information, particularly sexual and reproductive health information, and getting that balance right when we’re talking about harmful content and restricting access, making sure that this doesn’t inadvertently restrict children’s other rights. The second, I think, was for our second speaker, on the Japanese law: just understanding more about the awareness-raising provisions. And the third, which was addressed to the Japanese private sector, though other jurisdictions might also like to share, is how they’re making sure that it’s not only about blocking but also about taking down content and responding, safeguarding children as well. So there is that whole system. Any volunteers from the panel? Commissioner?

Julie Inman Grant:
I was just saying earlier that clearly the internet was not built for children, although one-third of the users of the internet are children, and their online lives are inextricably intertwined with their everyday lives. It’s their schoolroom. It’s their friendship circle. It’s their place for learning, communicating, creating, and exploring, whether it’s exploring their sexuality or affinity groups. And we need to make sure that as we’re trying to make this safer, we’re mitigating the harms but also harnessing the benefits. So, you know, we came up against this. We did a two-year consultation on age verification, which was probably one of the most difficult processes I’ve gone through, because there’s just so much polarization. But one of the things that we were so conscious of was the ability of marginalized communities, and particularly young people, to do that exploration. And age verification doesn’t mean restricting their access to everything. Again, I think there are a lot of things that companies can do beyond age gating. Roblox is trialing some age verification, and Tinder just announced they’re doing so in Australia, as is Instagram. So it’s good to see that companies are starting to think about what their responsibility is to make sure that children are 13 and above, and that they’re making meaningful checks. And I can say, from our experience of youth-based cyberbullying, what we saw post-COVID is that because parents were so much more permissive with technology use when we were locked down, we now have kids of eight or nine reporting cyberbullying to us, whereas prior to COVID the average age was about 14. Once you are permissive with technology use, you really cannot ratchet that back. So I would just say, I’m the Australian regulator, and yes, we have powers.
But we have a model where we talk about the three Ps of prevention, protection, and proactive and systemic change. You’ve got to prevent the harms from happening in the first place by having fundamental research, by understanding how harms manifest against different vulnerable communities in different ways, and then co-designing solutions with those communities, doing this with communities rather than to communities. I think I heard Albert say, and we struggle with this too, that one of the biggest challenges is raising awareness and encouraging young people to engage in help-seeking behaviors. And I’d say parents are the hardest cohort to reach. So all of these things are interrelated. If they weren’t hard, then we would have nailed this already, but they are.

Dunstan Allison-Hope:
I just have a comment on, I think, the first question, maybe the second one. It’s a great question because it enables me to say that a human rights-based approach is designed to achieve precisely that. So a couple of things to say. First of all, when we do human rights assessments, it is quite typical for child sexual exploitation and abuse to rise to the surface as one of the most severe human rights risks that companies need to address. So that risk tends to come up as one of the top priority risks to address, based on the criteria that the UNGPs set out. However, we do take this holistic approach, so we consider the relationship between different human rights. When does the fulfillment of one human right enable the fulfillment of other rights? When does the violation of one right, like the violation of the right to privacy, present risks to other human rights, like the ability to access information or express yourself freely? When there are tensions between rights, how do you address those tensions? How do you apply human rights principles like legitimacy, necessity, proportionality, and non-discrimination to decisions about when and how to restrict access to content? And just one idea to throw out into the room that came to me when the question was asked. One of the interesting developments in the business and human rights field and the tech industry over recent years has been the Meta Oversight Board, which publishes case decisions on particular cases that come before it and makes recommendations for what Meta should do to address whatever failings they’ve identified. I read a lot of those cases; they’re very long, and each includes a segment that undertakes a human rights analysis of that particular case. And the Oversight Board has the time and space to do that, because they’re not making rapid decisions like Meta does. They have weeks and months to do this analysis.
And I find it a really helpful source of insight. I’m not sure that there have been many child rights-related cases before the Oversight Board, but some place where we can do that type of thinking to unpack tensions between rights, the relationship between rights in a child rights context, I think would be really useful, because we come across this all the time when we do human rights assessments. Dilemmas, uncertainties, we’re not sure what recommendations we should be making sometimes. And I’d love space for that thinking to take place.

Julie Inman Grant:
Can I make a comment? I’m glad you’re reading the cases of the Meta Oversight Board. It raises an interesting issue, because there’s a lot of discussion now about multi-stakeholder regulation of the platforms. I believe that in the last transparency report, the Meta Oversight Board received about 1.3 million requests to review content moderation decisions, and because these are such long, drawn-out decisions, they were able to cover 12 in 12 months. Now, we’re a very small agency, but we’ve dealt with tens of thousands of investigations; you’re just able to be a lot more nimble. So I think there’s a really important role for that kind of body, to interrogate some of those more difficult and contextual issues; it’s always the gray area that’s going to be challenging. But I’m also not sure how many of the Oversight Board’s recommendations Meta has actually accepted; you might have a better sense.

Moderator – Afrooz Kaviani Johnson:
I’m just wondering if you would like to respond to the question around blocking and takedown.

Toshiyuki Tateishi:
So, first of all, we try to take down the content, and we try that many times before we do any blocking. We also make requests to other countries, even foreign ones; sometimes the police or other governmental offices make the request. Finally, if we ask them and there is no reply, the last measure is to block the sexual abuse content.

Moderator – Afrooz Kaviani Johnson:
Thank you. And I’m not sure if Mr. Suzuki is still online, because there was a question about the provision in the law on raising awareness, and whether that is broad, covering all of children’s rights in relation to the digital environment. Would Suzuki-san like to answer that question?

Tatsuya Suzuki:
In Japan, there are two main patterns in which children suffer sexual harm via the internet. One is that children are deceived into sending nude photos of themselves; this probably happens in other countries as well. The other, which may be somewhat particular to Japan, is that children meet someone they got to know on the internet in person, in real space, and suffer sexual abuse there. Teaching about these ways of using the internet and about their dangers is carried out not only by the Children and Families Agency but also by various bodies such as the National Police Agency, the Ministry of Education, Culture, Sports, Science and Technology, and the Ministry of Internal Affairs and Communications. The other element is guardians. What we ask of guardians is what is called parental control: each child is at a different developmental stage, so parents and children should talk things over well about what the child is doing. As one teacher often says, children keep well the rules that they have discussed and decided on themselves. That is what we are asking of them.

Moderator – Afrooz Kaviani Johnson:
Thank you so much, Mr. Suzuki, and thank you again to all our online and in-person participants. We have come to the end of our time together, though I think this is a topic that deserves a lot more time because, as was just mentioned, there is a lot of complexity to it. There are very challenging dilemmas that regulators, companies, civil society, and everyone working on these issues are dealing with, so it is something on which I hope we can keep up the exchange. I hope that everyone found this a fruitful exercise, at least a start of the discussion. We’re meant to capture key takeaways and key actions from each of the sessions. I don’t know that they’re fully formulated yet, but certainly I have taken away that there is a need to continue the learning and the exchange, and a need to ensure that solutions are consultative, that everyone is involved in the journey, particularly companies when we’re talking about regulation, co-regulation, or collaborative regulation as Ghana is doing. Obviously tech companies are vital stakeholders in this effort to protect children from online abuse, but we also see this massive global landscape shifting, and I really took that away from your points, Dunstan: this opportunity to fully embed the protection of children from online sexual abuse and exploitation into these broader methods of due diligence that are becoming increasingly mandatory. So thank you to our esteemed panellists. The Commissioner had to dash away to catch her Shinkansen to Tokyo, so she sends her apologies, but a huge thank you to our panellists, a huge thank you to our interpreters and everyone supporting today. Thank you.

Albert Antwi-Boasiako

Speech speed

179 words per minute

Speech length

1310 words

Speech time

438 secs

Audience

Speech speed

164 words per minute

Speech length

405 words

Speech time

148 secs

Dunstan Allison-Hope

Speech speed

176 words per minute

Speech length

1708 words

Speech time

583 secs

Julie Inman Grant

Speech speed

165 words per minute

Speech length

1803 words

Speech time

656 secs

Moderator – Afrooz Kaviani Johnson

Speech speed

150 words per minute

Speech length

2003 words

Speech time

800 secs

Tatsuya Suzuki

Speech speed

72 words per minute

Speech length

531 words

Speech time

445 secs

Toshiyuki Tateishi

Speech speed

124 words per minute

Speech length

761 words

Speech time

368 secs

Broadband from Space! Can it close the Digital Divide? | IGF 2023 WS #468

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Berna Gur

Recent advancements in space-based technologies, particularly megaconstellations like Starlink, have emerged as a promising solution for providing broadband services on a global scale. These advancements have significantly improved the capabilities of space-based technologies, making it feasible to deliver high-speed internet connectivity to even the most remote areas worldwide. Starlink, a megaconstellation consisting of thousands of small satellites, has the potential to revolutionize internet access by providing global coverage.

The global coordination of frequency spectrum is crucial for ensuring uninterrupted provision of all wireless services. The frequency spectrum is a limited natural resource that must be carefully managed to avoid interference and disruption of various wireless services. The International Telecommunications Union (ITU) plays a vital role in regulating the global coordination of frequency spectrum. It ensures that the allocation and usage of frequency spectrum are properly coordinated to prevent any conflicts or disruptions.

To fully leverage the benefits of space-based technologies and ensure effective implementation, countries need to re-evaluate and update their domestic regulations related to licensing and authorising satellite broadband services. Laws and regulations play a crucial role in the successful integration of new technological advancements. Therefore, countries must adjust their regulations according to the unique circumstances and requirements presented by space-based technologies. By doing so, they can create a conducive environment for the deployment of satellite broadband services and facilitate their widespread adoption.

Furthermore, active participation in international decision-making processes, such as the ITU and the UN Committee on Peaceful Uses of Outer Space, is essential. Engaging in these forums allows countries to have a voice and contribute to the development of policies and regulations that govern space-based technologies. Active participation enhances the chances of achieving desired outcomes and ensures that countries’ perspectives and interests are well-represented. Moreover, awareness of international space law is crucial for making informed decisions and effectively navigating the complex landscape of space-based technologies.

It is important to note that the provision of satellite services in a specific country is subject to that country’s laws and regulations. These laws, often referred to as landing rights, determine the terms and conditions under which satellite services can operate within a country. Each country has the autonomy to decide its own regulations for satellite services, taking into account its unique needs and priorities.

In conclusion, recent advancements in space-based technologies, such as megaconstellations like Starlink, offer a promising solution for providing broadband services globally. To fully harness the potential of these technologies, countries need to re-evaluate and update their domestic regulations related to licensing and authorising satellite broadband services. Active participation in international decision-making processes, such as the ITU and the UN Committee on Peaceful Uses of Outer Space, is crucial for shaping policies and regulations that support the effective deployment of these technologies. Additionally, it is important for countries to be aware of international space law and its implications to make informed decisions. By doing so, countries can unlock the benefits of space-based technologies and ensure an uninterrupted provision of wireless services on a global scale.

Stephen Weiber

Libraries have emerged as vital institutions at the intersection of digital connectivity and meaningful impact. Despite being rooted in the pre-digital era, they have evolved to embrace the transformative power of technology. Libraries now incorporate robotics, 3D printing, and Starlink connections, enabling individuals to engage with cutting-edge innovations.

While libraries provide essential services, their focus extends beyond mere provision. Instead, libraries seek to make a tangible difference in their communities. They are conscious of their role in fostering education and actively contribute to achieving Sustainable Development Goal 4: Quality Education. By offering access to technology and knowledge resources, libraries empower individuals to enhance their skills and pursue lifelong learning.

Moreover, libraries contribute to Sustainable Development Goal 9: Industry, Innovation, and Infrastructure. They recognize the significance of meaningful connectivity and its impact on individuals’ lives. Libraries have long understood the transformative potential of the internet and have diligently worked towards improving people’s lives within the local context. Their success lies not just in the availability of digital infrastructure but in the measurable improvement in the quality of life for individuals accessing these resources.

Despite the rise of digital infrastructures, libraries continue to hold distinct advantages. Contrary to the assumption that internet cafes and telecenters would replace libraries, this has not been the case. Libraries offer unique value propositions that set them apart. They go beyond providing connectivity by offering diverse avenues for engagement, learning, and social interaction. Libraries serve as vibrant community hubs and spaces that foster a sense of belonging.

In conclusion, libraries are indispensable in bridging the gap between digital connectivity and meaningful impact. Their evolution has enabled them to integrate technology and cater to the changing needs of their communities. Libraries are not simply service providers; they are catalysts for transformation, driving positive change, and improving lives. With their ongoing commitment to innovation and a community-centric approach, libraries will continue to be vital pillars in the digital age.

Dan York

The use of Low Earth Orbit (LEO) satellites for high-speed, low-latency internet connectivity, particularly in the context of video communications, is seen as a positive development. LEO satellites operate at a height of less than 2,000 km, enabling quick packet transfers and offering lower latency times compared to geosynchronous satellites. Notably, SpaceX’s Starlink project leverages LEO satellites, further supporting the viability and potential of this technology.
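The latency gap between orbits follows directly from the distances involved. As a back-of-envelope sketch (the altitudes, the four-traversal path, and the omission of processing and routing overheads are all simplifying assumptions, not figures from the session):

```python
# Rough round-trip latency lower bound for satellite links, ignoring
# processing, queuing, and routing delays (illustrative only).
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def min_rtt_ms(altitude_km: float) -> float:
    """Lower bound on round-trip time: user -> satellite -> ground
    station and back, i.e. four traversals of the orbital altitude."""
    one_way_s = altitude_km / C_KM_PER_S
    return 4 * one_way_s * 1000

geo = min_rtt_ms(36_000)  # geostationary orbit
leo = min_rtt_ms(550)     # a typical LEO shell altitude (assumed)

print(f"GEO minimum RTT: {geo:.0f} ms")  # ~480 ms before overheads
print(f"LEO minimum RTT: {leo:.1f} ms")  # ~7 ms; observed 25-50 ms
```

Real-world overheads push the GEO figure toward the 600+ ms quoted in the session, while LEO links land in the tens of milliseconds — inside the range usable for video calls.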

However, one of the major challenges currently faced is the large-scale launch of LEO satellites. SpaceX has been able to launch seven rockets each month, but there is uncertainty whether smaller launch providers can operate at this scale. Overcoming this challenge is crucial for the successful implementation of LEO satellite technology.

Critical questions are also being raised regarding the use of LEO satellites for global internet coverage. Technical feasibility, environmental impact, and effects on astronomy are all areas of concern. The environmental impact of satellites, both during their launch and disposal in the upper atmosphere, remains unclear. Additionally, large satellite constellations may cause issues for astronomical observations. These concerns highlight the need for careful examination and consideration of the impact and trade-offs associated with using LEO satellites for global connectivity.

While new connectivity options are emerging, such as OneWeb and Amazon’s plans for global coverage, at present, Starlink remains the only option for this kind of high-speed, low-latency connectivity. The expansion of these connectivity solutions presents complex challenges due to legal and regulatory considerations. Each country has its own regulatory rules, and providers need to negotiate with each country’s regulators. Furthermore, conflicting frequency usage can prevent some countries from utilizing these systems. The deployment of these solutions requires cooperation and interoperability among different space-based providers to ensure a seamless and efficient global coverage.

Despite these complexities, Dan supports the exploration of emerging technologies in the field of connectivity, believing that the benefits provided by LEO satellites and other technologies outweigh the difficulties encountered.

LEO deployment is viewed as critical because with proper permissions and power, it can be quickly set up anywhere, making it highly adaptable. Additionally, LEO connectivity is seen as complementary to existing infrastructure and can help build digital skills until terrestrial connectivity reaches a particular area.

Concerns are being raised about the environmental and carbon costs associated with launching systems for global connectivity. A recent paper analyzing the carbon costs of launches highlights the trade-off between carbon cost and global connectivity. The sustainability and control of LEO constellations, mainly run by commercial entities owned by billionaires, are also being questioned. The need for continuous satellite launches to maintain the constellations raises concerns about the long-term sustainability of this approach.

In conclusion, the use of LEO satellites for high-speed, low-latency internet connectivity has the potential to revolutionize global connectivity. However, challenges related to large-scale launch, technical feasibility, environmental impact, and legal considerations must be carefully addressed. Cooperation and interoperability among space-based providers are key factors for success. Despite concerns about the environmental and carbon costs, there is support for exploring emerging technologies in this field. It is critical to study and understand the opportunities and trade-offs associated with these technologies to ensure their responsible and sustainable implementation.

Moderator

The discussion highlighted the potential of satellite technology, specifically low-Earth orbit satellites like Starlink, in bridging the digital divide and providing global broadband services. These satellites are capable of connecting anyone, anywhere with high-performance, robust broadband, which has the potential to close the digital divide. This new era of satellite communications has been seen as a game changer, particularly in regions where internet access is limited or non-existent.

However, while satellite technology offers many benefits, there are concerns that need to be addressed. One of the main concerns is the high cost of satellite internet. The cost of using services like Starlink can be prohibitive for certain communities, making it challenging for them to adopt this technology. Additionally, questions have been raised about the environmental impact of satellite systems. Researchers have expressed concerns about the sustainability of Starlink and the potential impact of launching thousands of satellites.

Another issue that emerged from the discussion is the potential misuse of satellite technology. In the case of Starlink in Brazil, it was revealed that the service was being used to support illegal activities such as gold mining and drug trafficking, which goes against its original intent of providing connectivity to remote schools. This highlights the importance of ensuring accountability and regulation of satellite activity.

Libraries were also mentioned as important community support centers that can play a role in bridging the digital divide. They can offer a range of value-adding services and help localize internet usage. Libraries have the potential to act as public interest locations within communities, and examples such as internet backpacks in Ghana utilizing libraries as centers for bringing people online were mentioned. Additionally, libraries can offer a variety of services, beyond just connectivity, and can act as a bridge from the availability of digital tools to their impact, achieving the desired change.

Throughout the discussion, it became apparent that monitoring and regulating satellite activity is essential. This includes tracking the advancements and issues with satellite technology, such as space junk and potential disruptions to astronomy. The audience emphasized the need for better coordination among customer countries for choosing satellite internet providers and ensuring a robust monitoring system.

In conclusion, satellite technology, particularly low-Earth orbit satellites like Starlink, has the potential to bridge the digital divide and provide global broadband services. However, there are challenges such as high costs, environmental impact, and potential misuse that need to be addressed. Libraries can also play a significant role in supporting communities and bridging the digital divide. It is crucial to monitor and regulate satellite activity to ensure accountability, better control, and informed public debates.

Nkem Osuigwe

Starlink Internet has revolutionised libraries in Nigeria, with its implementation in five libraries across the urban centres of Lagos, Abuja, and Kaduna. This game-changing move has attracted a new audience to these libraries, thanks to the provision of fast, stable, and reliable internet services. In particular, open knowledge enthusiasts and those interested in Open Educational Resources (OER) have greatly benefited from this influential addition.

The introduction of Starlink Internet has significantly enhanced the efficiency of users’ work within the libraries. Users have reported faster and more efficient work, thanks to the stable internet connection. This positive feedback highlights the immense impact of the fast internet provided by Starlink, which simplifies various online activities, including translation work on open platforms. Users have noticed that the internet does not slow down during use, making the translation process smoother and enhancing overall productivity.

Despite the notable advantages, several challenges need to be addressed to further develop and improve Starlink’s internet services. One major challenge is the weak signals experienced beyond a specific radius from the libraries, limiting the accessibility of the internet service. Additionally, the service becomes unavailable during power outages, further hindering consistent and uninterrupted internet access. Moreover, the limited operating hours of the libraries pose a constraint for individuals seeking to utilise the service outside of the designated time frame.

To tackle these challenges and improve the service, it is crucial to identify and study the usage trends and user demographics of the Starlink Internet service. Gaining a comprehensive understanding of users, including their age range, specific internet usage patterns, and overall internet needs, will enable service providers to enhance the service in a more targeted manner. Moreover, investigating the speed of the internet and potential drop points throughout the day is important. User feedback also plays a vital role in gathering insights and suggestions for improving the service.

In conclusion, the introduction of Starlink Internet in Nigerian libraries has had a significant positive impact. The fast and reliable internet connection has attracted new users, particularly those interested in open knowledge and OER. However, challenges such as weak signals, service unavailability during power outages, and limited operating hours need to be addressed. Therefore, identifying user demographics, studying usage patterns, and obtaining user feedback are critical steps toward enhancing the service and expanding its application.

Audience

The analysis focuses on two key areas of satellite internet: LEO satellite internet and Starlink. LEO satellite internet is seen as an essential solution to closing the growing digital divide. It allows for faster deployment compared to terrestrial or mobile infrastructure, making it an effective means of bridging the gap in internet access. However, concerns arise regarding the longevity and selection of LEO satellite internet service providers. Countries need to invest in hardware and establish institutions to support their services. The analysis suggests that countries should improve coordination to negotiate better conditions with LEO satellite internet providers and enhance their power as consumers.

LEO satellite internet’s ease of deployment is highlighted as an advantage. The LEO dish only requires power to provide internet access and can be quickly deployed anywhere. It can also help people develop digital skills and increase internet usage, which has positive implications for education and innovation.

However, concerns are raised about the simultaneous development of multiple infrastructures. Companies like SpaceX, OneWeb, and Amazon’s Kuiper are building their own systems, which lack cooperation and interoperability. It is suggested that more collaboration and standardization are needed to address efficiency and sustainability concerns.

The analysis indicates uncertainty regarding the use of Starlink for commercial or community networks. The rules and regulations surrounding Starlink’s licenses and reselling are unclear, causing uncertainty for potential users interested in wider network deployment.

Environmental and financial sustainability are also concerns. The current business model of Starlink, which requires renewing satellites every five years, raises environmental and economic concerns. The long-term environmental impacts of this process are worrisome, considering the urgent need for sustainable consumption and production. Additionally, doubts are expressed regarding the economic feasibility of Starlink’s large-scale satellite launches.
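The scale of the replacement problem can be sketched with simple arithmetic. The constellation size, satellite lifetime, and satellites-per-launch figures below are illustrative assumptions for a rough estimate, not numbers from the session:

```python
# Rough replacement-launch arithmetic for a LEO constellation.
# Assumed figures: a ~4,400-satellite constellation, ~5-year
# satellite lifetime, and ~20 satellites delivered per launch.

def launches_per_year(constellation_size: int,
                      lifetime_years: float,
                      sats_per_launch: int) -> float:
    """Launches needed annually just to replace aging satellites."""
    replacements_per_year = constellation_size / lifetime_years
    return replacements_per_year / sats_per_launch

print(launches_per_year(4_400, 5, 20))  # 44.0 launches every year
```

Under these assumptions, simply maintaining such a constellation requires dozens of launches per year indefinitely, which is the heart of the sustainability concern raised here.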

There is also concern about the regulation and accountability of satellite operators. The potential for individuals or entities to manipulate satellite services raises concerns about their misuse or exploitation.

Measurement Lab is mentioned as a valuable resource for monitoring internet performance, including satellite performance. It measures aspects such as interconnection points, speed, and quality of internet globally, providing the largest public dataset on internet performance.

Furthermore, doubts are raised about Starlink’s ability to effectively close the digital divide due to high costs. Starlink units draw 150 to 200 watts of power and entail a capital expenditure of 300 to 600 US dollars per unit. Affordability and the ability of individuals and communities to sustain recurring payments for internet access are concerns.

The analysis also highlights the misuse of Starlink infrastructure to support illegal activities in the Amazon region, negatively affecting indigenous communities. Additionally, the unfulfilled promise of Starlink providing internet connectivity to schools in the Amazon region raises doubts about the company’s commitment to addressing educational needs in underserved areas.

In summary, the analysis provides an overview of the advantages, concerns, and uncertainties related to LEO satellite internet and Starlink. LEO satellite internet shows promise in bridging the digital divide with its fast deployment and potential for improving digital skills. However, concerns exist regarding the selection and longevity of service providers. Uncertainties also surround the use of Starlink for wider networks, environmental and financial sustainability, regulation and accountability, and the company’s commitment to fulfilling promises. Careful consideration and comprehensive planning are necessary for the development and deployment of satellite internet systems to ensure equitable and sustainable access to digital resources.

Session transcript

Moderator:
All right, are we good? Okay, we’re live. Good morning, afternoon, evening to everyone who’s here or in person or online. This is workshop 468, broadband from space, can it close the digital divide? That’s our question. The setup for this is the idea that we’ve entered a new era of satellite communications. They’re not new satellites, they’ve been around for a long time. But types of satellites that have new capabilities, especially in low Earth orbit, we’ll hear about that shortly, and the possibility of satellites in multiple orbits coordinating to create new kinds of services. My name is Don Means, I’m director of the Gigabit Libraries Network. Each of our speakers will introduce themselves. Our time is short today, so we’re going to try to get through this pretty quickly and have time for open discussion. I just wanted to make a couple of points in the beginning here about barriers to adoption. The question that we’re posing here is, can satellites actually close the digital divide? Can they contribute to the solution to this longstanding problem that we’ve had? The reason we’ve had this problem… is just the basic economics of infrastructure that says the farther away you are from the core of any network, the more expensive it is to reach you, and you probably have less money to boot. So that’s why they’re still not participating in the global digital conversation. We’ve identified these three barriers as availability, affordability, and usability. So affordability, of course, this is a difficult question if you approach it from the standpoint of how much can a family afford to spend every month for access. It depends on how the value is set. What can they gain from that? Does it change their economic calculation in the first place? Like you buy a car because you can’t get to the job without it, that kind of thing. 
Usability is the most comprehensive or largest kind of topic here because it covers everything from skills to devices to an environment, but it’s absolutely critical to adoption. Availability is slightly different because if you don’t have availability, then affordability and usability are moot questions. So that’s what’s interesting about satellites, especially low-Earth orbit satellites, is they can connect anyone, anywhere with high-performance, robust broadband: low-latency, 100-megabit connections. There are lots of issues around this related to, well, I’m not going to get into all those, but there are. Hopefully we’ll get into those. So the goal here, at least from the Gigabit Libraries Network standpoint, is this is a real opportunity to connect every community. Now this is not connect every person or every household. That’s a dream, but it’s a reality that we could set up in every community. If we come up with a number, it’s 100,000, some number of communities, neighborhoods or small communities everywhere, that’s doable. And what do you have with that? What do you have with this community network that is basically no fee or low fee? Well, those are questions. But for us, this is a baseline standard functionality to allow virtually everyone access, even if it’s not everything that everybody wants. It’s something that is there for everybody. So we’re going to hear more about what that means and what are the implications of that and how the technology is built from our next speaker, who is Dan York with the Internet Society. Dan, welcome. Dan is also the co-coordinator of the session. Dan?

Dan York:
Thank you, Don. Welcome to everybody. I am delighted that we’re having this session, having this conversation. You should now be able to see my screen, correct? Okay. So my role here is to talk a bit about the technology and to help us understand this as we look at this discussion and this debate. As Don mentioned, satellites are not new for Internet access. We’ve had satellites that are in geosynchronous or geostationary orbit. And those are ones that are way out at 36,000 kilometers. They have the capability that they can basically be parked over one part of the earth, and you can have basically three of them and be able to get worldwide global connectivity. The challenge is they’re expensive. They’re typically the size of a large bus. They take years to create, millions upon millions of dollars. And they’re launched out into that distant orbit, which is great, and it has provided Internet access all around the world, but it takes a long time for a packet to go from the earth out to that satellite and back. In networking terms, we talk about latency or the lag, the amount of delay, and that can be 600, 700 milliseconds, even a second. And that would make it impossible for me to come in over this Zoom connection. I could not do that. So the exciting part about why we’re here is this new generation of satellites that operate in low earth orbit, which is the opposite side of that, down underneath 2,000 kilometers, and medium earth orbit, which is in between those two areas. And there are a couple of solutions. There’s one company, SES, which operates a network of satellites in medium earth orbit, and there you can have fewer; you only need maybe 11 or 20 satellites, but they’re in motion and they have longer latency times. But the excitement is all down in LEOs, because now we have things that can have very quick access, low latency. So you can have maybe 40 milliseconds, 50 milliseconds, which is well within the range of things like video communication and pieces like that. 
The challenge is that you need more satellites. This interest is coming about because we have this demand for these high speed, low latency connections. There’s also been this massive reduction in costs for satellite development. If you are watching this space, you can see that companies like SpaceX and Amazon are, in fact, mass producing satellites. I think I saw a report from one of Amazon’s facilities that they’re able to pump out a number of satellites a day out of their factories. SpaceX is similar; they’re creating large numbers of these. And we’ve seen this massive reduction in the cost of launching, with SpaceX’s reusable rockets and pieces like that. The three components, and Berna will get to this when she talks about the policy side, are these: you have the satellite constellation, which is of course what we all know and talk about. You also have the thing on the ground. Now, the satellite industry calls this a user terminal or a ground terminal or something like that. For a consumer, we might talk about it as just the antenna or the dish or something like that. It’s a little different from the past. With traditional satellites, you had a fixed antenna that you put on the side of a house and pointed out, because the satellite was always in one location. These antennas look more like a pizza box. They have electronics in them to be able to interact with multiple satellites, and they’re very different; they’re packaged differently in different ways. And then you have these ground stations, which are also called gateways, and those are important pieces of how this all works. Let me just show a quick picture of how this works. In one way, your satellite connection goes up, bounces off a satellite and gets down to a ground station. In LEO environments, you’re actually probably interacting with at least one or two satellites. 
The satellites are typically overhead for about five minutes, and so you have multiple satellites, and that’s part of what happens. Now, one interesting development that’s happened with LEOs that’s made it even more interesting is communication between satellites. Before, with traditional satellites, you always had to be in range of a ground station, and that meant that you could only interact where you had a ground station every maybe nine kilometers or so across the earth. Now, the satellites are actually able to connect between themselves using what are called inter-satellite lasers. And this allows you to connect to a satellite, go across the Starlink constellation in this case, and then drop down to a ground station. SpaceX has pioneered this with Starlink; all the others who are out there are looking at similar kinds of approaches, and so it provides connectivity in some interesting and very remote areas that are far away from where you might be able to have a ground station. The Internet Society did create a document about this. You can get it at internetsociety.org. It’s something you can look at that goes through a lot of these questions and things. I want to just touch on a couple before I pass it on here. Don mentioned the question around affordability. Can we actually make this affordable to everybody who needs it? Will it have the capacity to handle everybody’s devices? Because we want to connect everybody, and everybody has many devices. The big question from a technical point of view, quite honestly, right now in 2023, is that getting the satellites up there is one of the big challenges we have. It turns out that at this moment in time, SpaceX is the only launch provider that’s really operating at scale. There’s a number of other launch providers that are working in this area, but they’re all caught in transitions right now between rockets. 
The United Launch Alliance, which is a traditional US provider, has been around for decades. They’re in the middle of going from the Atlas 5 to the Vulcan Centaur. There’s Ariane 5, but Ariane 6 is delayed. All of these things have caused a delay in us getting there. It should be temporary, but it is one of the challenges right now in getting these systems up there. There are smaller launch providers, but one big question is, can they launch at the scale? To give you an illustration of that, just in the past two months, SpaceX has launched seven rockets each month. Just this month, they’ve launched two already. They were supposed to launch one this morning, actually, but it got delayed because of some high winds. Now, they’re just trying to figure out when. This gives an example of what it takes to launch at this kind of level to provide this kind of support. Our paper outlines, and we have room here to talk about, some of these questions around security, privacy, and interoperability; space debris is a big question. There are questions we don’t know the answers to yet. We don’t know whether all of these different proposals can actually work. We’re not clear on the environmental impact of launching all of these rockets and also of having these satellites burn up in the upper atmosphere. And there is a strong concern about impact on astronomy and pieces, which we don’t yet understand. So the reason we need to be having this conversation here at IGF and in other venues: over the next few years, we’re expecting to see Starlink complete its first phase, what it calls its generation one, and go on toward its next one. Right now, the first one is about 4,400 satellites. The next shell, the next part of the constellation, has been approved for 7,500 and is going on toward ultimately around 30,000 satellites. OneWeb is now part of Eutelsat, so it’s actually now Eutelsat OneWeb. 
They’ve completed their initial group, but they’re looking to go on toward building a second phase. Amazon’s Project Kuiper just this past week successfully launched its first set of two satellites for demonstration, but assuming all goes well, they’re looking in 2024 to begin launching, growing up to around 3,200 satellites over the next while. China, from what we can tell from the outside, is looking to build its own competitor to Starlink, which will be around 13,000 or 14,000 satellites, and the EU, the European Union, is looking to create what it calls its IRIS² constellation. So the timing, the reason we need to be having these conversations, is to understand that over the next five years or so, there’s going to be a massive amount of capacity coming online up there. The opportunities are tremendous. You know, it’s conceivable from the filings with the International Telecommunication Union that we could see 40,000, 50,000, maybe even 60,000 or even 90,000 satellites. It’s hard to know how many of these will actually make it into space and be able to work. But it’s a huge number of satellites. There’s a huge opportunity, but there’s also a lot of questions and things we just need to understand around Don’s points around affordability, availability, and also usability. So with that, I wanna just say thank you, and I look forward to answering questions as we go through this more.
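The roughly five-minute visibility window mentioned in the discussion follows from orbital geometry. A minimal sketch, assuming a 550 km circular orbit, a 25° minimum elevation angle for a usable link, and a stationary observer (all illustrative assumptions, not session figures):

```python
import math

# Back-of-envelope pass-duration estimate for a LEO satellite on a
# directly overhead pass. Ignores Earth's rotation and assumes a
# circular orbit; altitude and elevation mask are assumed values.
R = 6371.0      # Earth radius, km
MU = 398_600.0  # Earth's gravitational parameter, km^3/s^2

def pass_duration_s(altitude_km: float, min_elevation_deg: float) -> float:
    """Seconds the satellite stays above the elevation mask."""
    r = R + altitude_km
    e = math.radians(min_elevation_deg)
    # Earth-central angle between observer and satellite when the
    # satellite sits exactly at the elevation mask
    theta = math.acos((R / r) * math.cos(e)) - e
    omega = math.sqrt(MU / r**3)  # orbital angular rate, rad/s
    return 2 * theta / omega

print(pass_duration_s(550, 25) / 60)  # ≈ 4.5 minutes
```

Because each pass is so short, a usable service needs enough satellites in the shell that another one is always rising as the previous one sets — which is why LEO constellations run to thousands of satellites.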

Moderator:
Thank you, Dan. Very nice summary there. So, Nkem Osuigwe. I never get that right, right, Nkem? But welcome. Nkem has been working on a project in Nigeria with libraries there. Nkem, please introduce yourself and tell us what you’ve been up to with these satellites. Unmute, Nkem.

Nkem Osuigwe:
Oh, apologies. Can you hear me now? Very much. So I am Nkem Osuigwe. I am a librarian. I work for the African Library and Information Associations and Institutions with headquarters in Accra, Ghana. That is AfLIA. And when Don started talking about this new thing for libraries, this satellite thing, it was like, is it possible? I don’t know how it’s going to happen. Can we afford it? Who can afford it in Africa? Through his engagements, we were able to get five libraries in Nigeria using Starlink: one in Lagos, in the city center, then we have three in Abuja, and one in Kaduna. These are really urban areas. And what I had envisaged was that maybe it could be possible in rural areas, but I think that the rollout first was to urban areas. So we got those five around June. We took delivery around June 29 of this year, 2023. Initially, there were plenty of challenges in setting them up. There were issues. In fact, I took a picture of the library in Abuja that had complained that they were not getting the internet after the unit was set up. And it was because of the trees that covered the dish, so to speak. But right now, all five of them are up and running. There are still some challenges, because they are saying that the coverage doesn’t extend much to outside of the libraries. Because when I asked them to imagine a situation where something like maybe, we hope not, COVID-19 happens again, what happens? What happens when the library doors are closed? Can they still be able to offer services, even if it’s only internet services? So right now, the internet is strong, fast, stable inside the libraries, in particular areas of the libraries; outside a little bit, the signals become weak. Now, we have asked the libraries to find out who will benefit most from this, or who can benefit most from this free and fast internet. 
My idea was, you know, young people that are seeking employment opportunities or want to learn digital skills or want to access their assignments, or lifelong learners. But there was another critical group I didn’t envisage, and when I found this out, it kind of made me elated: oh, so this is possible too. The open knowledge community in Nigeria. Who are these people? People like the Wikimedia user groups. We have quite a number of them in Nigeria. You know, in Nigeria we have more than 500 languages, and some of these languages have their own user groups within Wikimedia. And why am I talking about them in particular? If you have ever edited any of the Wikimedia projects, you find out that once you edit and you want to publish, it kind of hangs. But now, those of them that use the library, because we introduced the internet to them, say that when they go there to edit, whatever they did just goes through fast like that. It doesn’t hang. It doesn’t give them much trouble. We are also working with open license and OER enthusiasts, because we are beginning to realize that we hardly find educational resources, stories and so on in mother tongues. Considering the fact that we have so many languages in Nigeria, we are asking librarians and others to please use some of the open platforms where we have storybooks for children and translate them into our local languages. And they’ve been using the internet quite a lot for that, right now on StoryWeaver, and we are building another one on African Storybook. All these things are made possible because of this internet from Starlink that makes it easier to translate. Because when you are translating a story on these platforms over a slow connection, you lose where you started, or you get tired.
But now with that fast internet, it’s really better for these five libraries in Nigeria that have free internet. And the National Library of Nigeria, which has it in Lagos, Abuja and Kaduna, is saying that it’s a game changer. Although the traditional library users, the ones we are sure will always use the library, don’t make use of it so much, it’s really the new people being attracted to the internet that are making use of it, especially, like I said, the Wikimedia user groups, the open license enthusiasts and those interested in OER and things like that. So that is what I can say now about what we are doing in Nigeria with this. We had a meeting on September 26th; Don was there most of the time, I wasn’t. But I spoke with them again this morning and told them there are things we need to do. We need to find out: who are these people that are now using this internet? Because they are slightly different from our regular users. What is their age range? What do they do? What stories do they have about the use of this internet? Then, what’s the speed like for them? Does the speed drop at a particular point in the day, and so on? But, you know, because these are all government libraries, they work only from Monday to Friday. They don’t work on Saturdays. So we are trying to see how to get staff to run shifts so that they can open on Saturdays for others that need them on those days. Then also, once the power goes off, the router stops, and that is a problem. If the routers came with in-built inverters and batteries, maybe that would be better for us. Thank you very much, Don. Thank you, I’m done. Yeah.

Moderator:
Thank you, Nkem. Thank you. Yes, there’s a slide of the dish mounted on a mast to get elevation. Some flyers that went out around Lagos and Abuja showed this new service was available, and word of mouth is getting around; people are interested and they’re coming in. Stephen Wyber will talk to us here. Stephen’s with the International Federation of Library Associations, a longtime associate. And we talked about this hub concept for sharing. So Stephen, what’s the best hub?

Stephen Wyber:
Thank you. So I think I’m going to come at this in a slightly different way, given where I come from. One of the things that makes libraries interesting here is that we are a pre-digital public infrastructure: an infrastructure that was there ahead of the internet in order to help actually achieve goals in real life, to help people improve their lives through access to information, through access to knowledge. And so what libraries are doing is trying to make the most of the opportunities that digital tools offer in service of the goal of actually making a real difference. Something that really came out in what Nkem was saying is that what libraries are doing by bringing in the Starlink connections, by drawing on satellite internet connections, is making the difference between availability and impact, as you talked about at the beginning. Obviously you can make things available, but then how do you actually make that bridge from availability to full-on impact? That’s what the libraries are doing. It’s through some of the more basic things, like free access for people facing limitations on access, but it’s also through the fact that you have a staff and a space that are actually dedicated to thinking through how we make a difference, not just providing something and seeing what happens.

I think there are a number of characteristics about libraries, and about the philosophy and modus operandi of libraries, that mean they’re pretty well placed to do that, and some of these resonate quite well with the themes picked for this IGF. The talk about meaningful connectivity is something that’s been at the heart of the way libraries have worked with the internet for a very long time: the success of the internet is not measured in the number of people covered by a signal; the success of the internet is measured in how many people’s lives are improved. There’s a strong focus on it being rights-based, that everyone has this right of access to information, to be able to use information to improve their lives. Beyond that, there’s the role of libraries in localizing, in thinking about the context and what’s going to work, building on their knowledge of their communities and really being responsive to needs. I’d also highlight the position of libraries as a public-interest and known location within the community. There was a fantastic example of an internet backpack, another technology for bringing people online, in Ghana, and they used the library because it was the one place where all the local schools felt it was okay to come, and okay for it to be the center in order to get people online. Two other things to mention. I think there’s the potential of libraries as a federator: they’re not seen as wanting to throw their weight around or try to dominate things, but they have proven quite effective at taking all the different local actors and bringing them together to think how collectively we can make the most of connectivity, and I think Nkem’s examples of working with Wikimedia chapters and with different groups are really powerful here. And then the final point I’d make is that libraries aren’t just about connectivity.

At the risk of sounding rude, once upon a time there was the idea that internet cafes and telecenters would take over from libraries, but that’s not been the case. We don’t really talk about telecenters anymore, because they were a purely digital infrastructure, whereas libraries have other services, a whole variety of ways of adding value, and I think that’s also probably what helps make it work. The examples that Don has supported in the US and the examples that Nkem’s been involved in in Nigeria demonstrate that when you add connectivity to this mix, you can really make things happen, and you can really make sure, as I said, that we make that link between availability and impact.

Moderator:
Very good. The library is, for us anyway, the quintessential example of a community center, but if there is another center that the community supports and trusts, fine. It’s just that the library offers a certain model for a range of services: support services, training, all the things Stephen mentioned that make it a go-to in the absence of an alternative. So now we’ll hear from Berna Gur, who’s with Queen Mary University of London, on the policy aspects of this, of which there are not a small number. And then we’ll open it up to questions. Please send them in through the chat or wait for the opportunity after Berna finishes. Thank you.

Berna Gur:
Go ahead. Thank you. It’s a pleasure to be here with such distinguished panelists, so thank you for inviting me. My intervention will focus on the regulatory and policy aspects of satellite broadband, with a particular emphasis on addressing the global digital divide. As an international community, we strive to achieve a more equitable internet use that reduces global inequalities rather than increasing them. It’s only when connectivity becomes universal and meaningful that it can be utilized to create social and economic impact, which can lead to economic development and innovation. Now, meaningful connectivity has broader requirements, but the underlying communications infrastructure for universal access remains crucial. Recent advancements in space-based technologies, particularly megaconstellations like Starlink, offer a promising solution for providing broadband services globally with minimal additional terrestrial infrastructure. This technology does not have to be considered a standalone solution; it complements existing global communication infrastructures. However, its successful integration requires careful consideration of each country’s unique circumstances and needs, as well as domestic laws and international law commitments. So first, policymakers and regulators should make informed decisions by consulting other stakeholders about the best way to utilize this technology. They can then intervene by using laws and regulations to maximize its benefits, as there is already an understanding of how satellite services are regulated at the national and international levels. To start with, the provision of satellite services in a particular country is subject to that country’s laws and regulations. These are called landing rights, and countries decide for themselves the terms of landing rights. Satellite communications are not new, so regulations in this regard exist in almost all jurisdictions.
These regulations, however, at times need to be adjusted to the unique circumstances and requirements of technological advancements. Megaconstellations, I believe, qualify as such. Let’s start with the ground station, the gateway. As Dan explained, satellite systems connect to the internet through these ground stations. At the moment, with Starlink’s technology, they need to be set up at least every 1,000 kilometers. For that, they will need authorization from each relevant jurisdiction. Let’s say that your country has a smaller surface area and there’s a ground station in one of your neighbors. Do you want to rely on your neighbor not to disrupt your services at any time? There may be other cybersecurity implications as well. In another example, let’s say that your country has a very large surface area; you may need regulators to facilitate the authorization of more than one ground station. Suppose you want to create a competitive environment by authorizing multiple satellite broadband companies. In that case, you will need to arrange the location of these ground stations to avoid interference with each other’s services and all other wireless services. The United Kingdom’s regulatory agency Ofcom, for example, has been very proactive in updating its regulations through frequent consultations with various stakeholders. Now, this brings us to the use of frequency spectrum. Satellites require assigned frequency spectrum for their uplink and downlink connections with the user terminals and also the ground stations. The frequency spectrum is a limited natural resource, the global coordination of which the ITU regulates. Within the requirements of the ITU, frequency spectrum assignment in a particular country is subject to that country’s jurisdiction, but coordination at international and domestic levels is necessary for uninterrupted provision of all wireless services, including mobile connectivity and satellites.
The coexistence of operators in proximity may require technical cooperation amongst themselves. A licensing requirement for licensees to cooperate with each other may be a good way to resolve this problem. The range of these licenses and authorizations also changes with the business model. For example, a direct-to-consumer model would likely require an internet service provider license, whereas OneWeb plans to provide backhaul services primarily to incumbent telecom operators; these will be subject to a different, narrower set of regulations. Another essential component of satellite systems is the user terminal. Satellite broadband companies need to export equipment to facilitate use of their services. The use and importation of user terminals are subject to licensing and import requirements of the national authorities. These terminals must be installed at the users’ premises and are subject to standards and conformity assessment procedures by national regulatory agencies. These licenses may be combined with the internet service provider license. From an international law perspective, the treaty obligations under the General Agreement on Tariffs and Trade (GATT for short) and the Information Technology Agreement, plus preferential trade agreements, can become relevant. Regulators will have to check their commitments under these agreements. The customs duties applicable to user terminals will be important to the affordability issue, especially for broadband companies planning to provide their services directly to consumers. Again, depending on the type of service, data governance regimes and privacy concerns may come into the picture. In short, what I’m saying is that most countries have international law commitments to observe when exercising their domestic regulatory powers.
It is an extensive subject, so if you find this topic interesting and want to learn more, you could take a look at our research project funded by the ISOC Foundation. There is a report on the global governance of satellite broadband, which covers the topics I mentioned here, and there are short reports and papers for governments and civil society organizations, as well as links to academic papers, if you are interested. I want to conclude my intervention by referring to our policy paper, in which we advise developing countries as follows. One, re-evaluate and update domestic regulations related to licensing and authorizing satellite broadband services, consider different business models, and act on cybersecurity and autonomy when deciding on gateways. Forming regional alliances can enhance the achievement of policy goals. Two, participate actively in the International Telecommunication Union, which manages limited natural resources like frequency spectrum and orbital resources. Members should engage in decision-making processes, especially at World Radiocommunication Conferences. If this is done through regional alliances, it will again enhance achieving desired outcomes. Three, in trade treaties, consider the interests and priorities associated with satellite broadband technology. And the last one, participate in the UN Committee on the Peaceful Uses of Outer Space and take advantage of capacity-building opportunities offered by the UN Office for Outer Space Affairs. Awareness of international space law is essential to making informed decisions. By holistically considering these actions, countries can ensure that their initiatives align with their sustainable development goals, technological autonomy and cybersecurity considerations. Thank you.
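One of Berna’s numbers invites a quick back-of-envelope check: if a gateway is needed at least every 1,000 kilometers, roughly how many does a country need? A minimal sketch, treating each gateway as covering a circle of radius 500 km and using approximate surface areas chosen only for illustration (real placement depends on terrain, fiber backhaul and spectrum coordination, so this is first-order at best):

```python
# Rough gateway-count estimate from the "a ground station at least
# every 1,000 km" rule of thumb mentioned in the talk. Illustrative
# only; real deployments are not a simple area division.
import math

def rough_gateway_count(surface_area_km2, max_spacing_km=1_000):
    """Treat each gateway as covering a circle of radius max_spacing/2."""
    coverage_per_gateway = math.pi * (max_spacing_km / 2) ** 2
    return math.ceil(surface_area_km2 / coverage_per_gateway)

# Assumed example surface areas (approximate, for illustration only).
print(rough_gateway_count(238_533))    # a Ghana-sized country  -> 1
print(rough_gateway_count(8_515_000))  # a Brazil-sized country -> 11
```

On these rough assumptions, a Ghana-sized country could in principle be served by a single gateway (or one in a neighboring country, with the dependency Berna describes), while a Brazil-sized one would need on the order of a dozen authorizations.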

Moderator:
Thank you, Berna. You’ve made the point that the system is incredibly complicated on so many levels: the intellectual property, the licensing, the technology, the multiple technologies, the ecosystem. We’re really just at the beginning. And I wanted to make that point first, that this is not advocacy for this new technology, if it sounds like that; it is, however, I would say, advocacy for exploring this technology. Everyone who’s worried or concerned with bridging this actual digital divide, infrastructure especially, or as a backup for people that are connected, should know firsthand how this technology works. It’s still unfolding, the price has changed, there are so many questions about it. Before I ask for any questions from our live audience, I want to use organizer’s prerogative to give Dan a follow-up comment on Berna’s presentation. Dan.

Dan York:
Thank you, Don, and thank you, Berna, that was great. I want to just emphasize one key point, partly what you said: that it is emerging, right? I mean, two years ago, we didn’t have this capability the way we do. Right now, we have primarily Starlink as our only option for this kind of connectivity. OneWeb expects to go live with their systems to have connectivity by the end of 2023, and Amazon’s looking to get theirs up by the end of 2024, with many more to come. The important point is, this is an incredibly dynamic and evolving space. One other deployment challenge, just to build on what Berna said about each country: when I started this work, I naively thought that, you know, once these things were up there, you could bring a dish anywhere and it would just work. But the reality is all of that legal work, all of those conventions that Berna mentioned, are critical. One question we often see from people is, when will Starlink or OneWeb or anybody be available in my country? And it comes back to what Berna showed on that slide. In each and every country, the regulator needs to approve the spectrum that is being used for the uplink and the downlink, and also has to approve the user terminal equipment to be distributed. So there’s a lot of regulatory work. These providers, whether it’s SpaceX or OneWeb or Amazon, have large teams of staff whose job it is just to go and talk to the regulators of each country. Another critical issue is the sharing of spectrum. There are some countries that actually can’t use any of these systems, because the frequencies that are needed are being used by that country’s existing government systems. So there’s a lot of complexity in turning it on for each individual country around the world. It’s an exciting time, but there’s a lot of complexity, and I see already some fantastic questions that people are asking.
So thank you all for paying attention and for being here.

Moderator:
Thank you, Dan. Complexity is the word, in spite of the fact that the point is actually plug and play. We have a question from the audience here. Please identify yourself and try to make it quick.

Audience:
Hello, my name is Uta Meier-Hahn. I’m with GIZ, the German Agency for International Cooperation. We also look at this topic, and I would like to leave one very short remark and a question. The very short remark adds to the very specific title of this session, how LEO satellite internet can contribute to closing the digital divide. One thing that I have not heard: when presented with the argument that LEO satellite is so expensive, so unknown and uncertain, with all the other limitations, and that we should therefore be more active in supporting other kinds of infrastructure, terrestrial infrastructure, mobile infrastructure, then I think it is important to remember that the digital divide grows larger with time. So it’s very important to start closing it quickly, and that is one of the qualities of LEO satellite internet: it allows much quicker deployment than the build-out of terrestrial or mobile infrastructure. So it has a role in complementing these efforts. I feel that is good to add. My question relates to coordination, specifically among countries that inquire about the use of LEO satellite internet and that try to choose providers. At this moment, there is not so much to choose from, but consider the future, and the past, which has shown that sometimes providers may not live long while their services require those countries to make investments, both in hardware and in establishing the institutional setup, as you just said. There may be the assumption that the power of those countries as consumers could be enlarged vis-à-vis providers, to get good conditions, by coordinating. My question to the panel goes in the direction of how you would suggest improving coordination among those, if you will, customer countries. Thank you.

Moderator:
Thank you. We actually didn’t mention politics among the various complexities, and certainly in telecommunications is rife with politics. How you integrate that into the ecosystem is a challenge in every country as well, and the business model and the ecosystem impact is another TBD. Dan?

Dan York:
Yeah, I think one of the interesting aspects you’ve mentioned is the quick deployment, and that is a critical element. You can drop a LEO dish anywhere, in a country that has that permission, and as long as you can get power to it, you can get that access and provide it. We see it certainly as a complement to existing infrastructure. You know, we talk about low latency in a LEO connection, but obviously you can get even lower latency in fiber, right? If you can get a fiber connection, fully symmetric, even higher speed, lower latency, that’s great. But the challenge is you can’t get fiber everywhere. And so there’s a complementary aspect to this. There’s also a really interesting aspect, which is that LEO connectivity can get out there first and help people build the interest and the digital skills to be using internet connectivity, so that when other, terrestrial connectivity catches up, providers find that people are already excited and interested and want it. So sometimes there is a tension between terrestrial providers and the newer space-based providers, but one interesting aspect is that they actually work really well together, and one can lead to the other and support both. One challenge that we have at the moment globally is that everybody’s looking to build their own systems, right? SpaceX has its own ground stations, its own antennas, its own systems. OneWeb has its own antennas, its own ground stations. Amazon’s Kuiper is doing the same thing. So we’re building multiple infrastructures. We’d love it if they’d cooperate and interoperate more, but these are commercial entities competing in their own market space. So we’ll have to see. As for coordination among customer countries, Berna could speak to that; I’m not the right person.
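Dan’s latency comparison earlier in this answer can be made concrete with a simple propagation calculation. The altitudes below are typical published figures (roughly 550 km for a Starlink-style LEO orbit, 35,786 km for geostationary), used here purely for illustration; real round-trip times add processing, routing and ground-segment delays on top of this physical floor:

```python
# Minimum round-trip radio propagation time to a satellite directly
# overhead and back, ignoring all processing and routing delays.
C = 299_792_458  # speed of light in vacuum, m/s

def min_rtt_ms(altitude_km):
    """Two legs (up and down) at the speed of light, in milliseconds."""
    return 2 * altitude_km * 1_000 / C * 1_000

print(f"LEO (~550 km):    {min_rtt_ms(550):.1f} ms")     # a few milliseconds
print(f"GEO (~35,786 km): {min_rtt_ms(35_786):.0f} ms")  # hundreds of milliseconds
```

This is why LEO constellations can approach terrestrial latencies while geostationary satellites cannot, even in principle.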

Moderator:
Dan, we’ve got actually quite a few questions and we’re running low on time. There was one online; maybe you could address that. It related to reselling. This has been a big question about Starlink and their licenses: the ability to use it as backhaul for commercial or even open community networks. And this is an open question.

Dan York:
And the question is really, you know, if you get your Starlink connection, can you then resell it to other people? Can you do other things like that? This goes to what Berna mentioned about the different business models. Starlink right now is very focused on a direct-to-consumer, one-to-one relationship. So you get an antenna and you use it for yourself, or your library, or your place. OneWeb’s business model is very much focused on reselling: their model is to work with partners, with people who can serve users. So they have a very different model. Amazon has indicated that they’re also going to do the direct-to-consumer model. But in all those cases, they’re testing other models too. So I don’t think we have a definitive answer right now. I know in some cases Starlink has allowed their connection to be used as backhaul into a community network, with the backhaul being the connection back to the rest of the internet. So they have allowed that. It’s not clear yet whether that’s broadly applicable or whether they’re doing it on a case-by-case basis. But it’s one area I think we just have to…

Moderator:
And actually, part of what that model is, in fact, is testing out those limits. So what will they permit? The Starlink business model itself continues to evolve rapidly. They change their pricing structures, their licensing, and they go for different markets: the end-user consumer market, now into commercial use, ships, planes. So the business model is highly dynamic. I just would encourage everybody to try one out and see what you can do with it. We have, it looks like, three people here in the room. Could you each ask your question briefly, all at once, and we’ll try to get to all of them.

Audience:
Sir, introduce yourself. Yeah, okay. My name is Nick Brock from DW Akademie and from Rhizomatica. I wanted to make this longer, but I will make it really short. I think there’s an underlying question, which is ecology. You said there are many doubts; my question, I will turn it around. Firing satellites into space, satellites that have to be renewed every five years. So why do you put it as a rhetorical question whether this is sustainable or not? Please tell me, give me one argument why you think this is environmentally sustainable as a technology, because I don’t see it. And we have to see, there is competition, there are the companies competing against each other. Let’s see what happens in five years. Do we have this time? And this question would come from my daughter, 11 years old, not having a cell phone, because we’re all crazy and fucking up.

Moderator:
Yeah, and it’s an excellent question. And there’s lots more. Dan, let’s collect some questions. We’ll try to answer all of them. That’s a good one. Sounds good. Go ahead, please.

Audience:
Thank you so much. Okay, plus one to the environmental question. The other question is just about the regulation or observation of the people who are putting these satellites into space, especially when they’re able to turn them off or throttle or change the service provided, sort of at the whims of these individuals. How do we monitor that? What’s the accountability structure? And then also just to raise a hand: I’m with a nonprofit called Measurement Lab that measures, at interconnection points, the speed and quality of the internet around the world, and we are interested in being able to monitor the satellite space. So if anybody is curious about making that data public… Could you repeat that last one, please? We measure, at interconnection points, the speed and quality of the internet around the world, and we make that data public. It’s the largest public dataset about internet performance that exists. So if people are interested in monitoring satellites, please come see me.

Moderator:
Okay, that’s a lot. Carlos.

Audience:
Hi, Carlos. My point is around whether it can close the digital divide or not. I think there are very laudable efforts from libraries, where they have access to power and where they can pay for the connectivity. But what happens in other communities, where a unit draws, you know, 150 to 200 watts and costs a CAPEX of, you know, 300 to 400, even 600 US dollars? I mean, how do you afford those costs when people don’t actually have money to get to the end of the month? How do you make the recurring payments? There are many questions to consider before taking this as a business model for the communities. But then there is a real question being asked by some researchers, which is the sustainability of Starlink itself. The environmental question is a real question that I would like to get answered as well, but would Starlink continue being sustainable, or would it be Iridium 2.0, right? Steve Song, one colleague of mine, has started to research the economics, the amount of revenue they require to be able to continue putting 12,000 to 20,000 satellites in orbit to cover everyone, when they cannot do more individual connectivity because it doesn’t make sense, because people don’t have the money to pay for the CAPEX and the OPEX. I mean, it’s… Thank you, Guido. One more here. I’m from Brazil and I’m here representing the youth program of the Brazilian Committee on Internet Governance.
And actually, I would like to have the point of view of the speakers, talking about the Brazilian experience with Starlink. I think we had kind of naive expectations about how Starlink would be meaningful connectivity, especially in the Amazon region. But what we are seeing right now, especially me as a researcher, is that Starlink has been most used in the Amazon region, in the really wild and remote areas, to support illegal gold miners and drug trafficking, and they are exactly the groups who are killing the indigenous people and are responsible for the indigenous tragedy. So we had a kind of promise that Starlink would provide internet for the schools in the Amazon region, but it didn’t happen. I understand that this is probably related to the affordability issue, but I would like to hear what you think about that, especially because I think we need to talk about business accountability: I think people that work at Starlink know that this broadband is being used to support illegal groups. So that’s my point. Thank you.

Moderator:
Thank you. I think all those are excellent and difficult questions. There are many difficult questions with this technology and the system; there is the environmental impact. I should say that it may seem like we’re promoting Starlink, but we are not. We’re just pointing to it as a new and unique phenomenon in the telecom ecosystem. It seems to us important to understand what it is, how it works, and what its impact is. This is a single global last-mile network; that’s really different. Let’s find out: that’s really our case here. So I don’t feel I should be defending Starlink, or if anybody else wants to, they’re welcome to. But on the satellite turnover and impact, there are trade-offs. For example, should we allow nuclear power to deal with the amount of carbon that’s accumulating in the atmosphere? I’m completely against nuclear power, but in the context of the crisis, maybe we have to. I don’t know if that’s a good analogy, but I want to make the point about trade-offs. So yeah, this is great. Dan, I think you should point everybody to your discussion environment, where a lot of these issues are aired out every day; I think a lot of these could be dealt with there. But take a shot at anything you’ve just heard.

Dan York:
I know we’re running out, we’re hitting the end of time. And these are great questions. I mean, to the person who asked about the environmental issues, there was just a paper published recently. That’s the first we’ve seen sort of in analyzing the research, looking at the costs, the carbon costs of the launches of these systems. And that’s a real question. And that is this trade-off. Can we use these as a system to go and connect the unconnected around the world? Can they be affordable? That’s the huge question that’s being asked here. Can they be that? I have a larger question. Right now, these are all being built by commercial enterprises. Do we want only under the control of several commercial entities that are owned by eclectic billionaires. The EU is taking a position with their iris constellation of trying to have one that is coordinated by a set of countries. Will there be other models? The larger question that was asked here, are these sustainable? We don’t know. People have been around here for a while will recall there was a Leo burst back in the 1990s with Iridium and Global Star and some other countries, entities that were creating constellations for telephone access. It wasn’t there. It died away, although they’re still up there. They’re still being used. They’re looking to come back in some ways for data, but it’s a real question. The thing that we, and the importance of bringing it here is for people to understand that this technology is happening. It’s going on. There are rocket launches happening every week that are putting more and more of these satellites up there. We have to understand them. We have to understand where they can fit, what trade-offs we will make. What are they? Is the carbon cost, is the trade to get the connectivity that we all need? Are there ways that we can mitigate that or make it better or do it? What happens to all these satellites when they burn up? 
You mentioned it there, and we didn’t really hit on it here, but these things only have about a five-year lifespan due to the pull of gravity, atmospheric drag, and lots of other reasons. The satellite providers have to be constantly launching new satellites in order to keep these constellations up. Is that sustainable? Are there enough people who will buy it? Is there the capacity to support it? I don’t know. None of us do. On the Measurement Lab question: we don’t have access to that data yet, because a lot of it is happening in proprietary systems, and also there’s only one constellation fully deployed. Lots of questions. These next five years are going to be very interesting, and I think we all just need to keep our attention focused there to see what the opportunities are, what trade-offs we have to make, where it all works, and whether it will all work.
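The replacement-rate arithmetic behind that five-year lifespan can be sketched quickly. The constellation size and satellites-per-launch figures below are purely illustrative assumptions, not numbers from the session:

```python
# Back-of-envelope steady-state replacement rate for a LEO constellation.
# All inputs are illustrative assumptions, not figures from the session.
constellation_size = 4000   # satellites kept in service (assumed)
lifespan_years = 5          # ~5-year lifespan mentioned in the session
sats_per_launch = 20        # satellites deployed per launch (assumed)

# In steady state, satellites must be replaced as fast as they deorbit.
replacements_per_year = constellation_size / lifespan_years   # 800 sats/year
launches_per_year = replacements_per_year / sats_per_launch   # 40 launches/year

print(f"{replacements_per_year:.0f} satellites and "
      f"{launches_per_year:.0f} launches needed per year")
```

Even under these modest assumed numbers, keeping the constellation aloft means a launch roughly every nine days indefinitely, which is exactly the sustainability question raised above.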

Moderator:
Very good. There are more issues, of course. We didn’t talk about space junk, we didn’t talk about astronomy, we didn’t talk about the stability of these billionaire-owned ventures; there are just a lot of issues. So tracking this is important, and involving everyone, or as many people as are actually interested, is important, so that these questions are not just here in a room but are part of the public debate. So I encourage everyone to investigate this extraordinary technology more deeply. And I’ll add one point that is usually not mentioned: we talk about education, we talk about health information, about access to public services and public information, but having a connectivity point in a community that is impervious to disruption and to disasters, speaking of carbon and weather, this is increasingly the world we’ve created and will be living in for the foreseeable future. The people who don’t have access to educational, commercial and health opportunities are also the people who are not contributing to the carbon accumulation, yet they are impacted more heavily by the results of what industrial economies have done. Giving them this capability is one very powerful way to give them adaptation capability, and we think that’s part of the equation as we calculate how these things should go. So with that, we’ve run a little bit over, but I want to thank our panelists, our audience, and everyone involved in this. Thank you very much. And I wish we could have gone to Berna and Nkem a bit more too, but we had so little time. Thank you.

Stephen Wyber

Speech speed

181 words per minute

Speech length

792 words

Speech time

262 secs

Audience

Speech speed

167 words per minute

Speech length

1299 words

Speech time

466 secs

Berna Gur

Speech speed

134 words per minute

Speech length

1202 words

Speech time

538 secs

Dan York

Speech speed

183 words per minute

Speech length

3331 words

Speech time

1094 secs

Moderator

Speech speed

136 words per minute

Speech length

1853 words

Speech time

819 secs

Nkem Osuigwe

Speech speed

131 words per minute

Speech length

1167 words

Speech time

535 secs
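The speech-speed figures above are simply speech length divided by speech time. As a quick, purely illustrative check, the reported values are internally consistent:

```python
# Each tuple: (speech length in words, speech time in seconds,
#              reported speed in words per minute), from the stats above.
stats = [
    (792, 262, 181),
    (1299, 466, 167),
    (1202, 538, 134),
    (3331, 1094, 183),
    (1853, 819, 136),
    (1167, 535, 131),
]

for words, seconds, reported_wpm in stats:
    computed = round(words / seconds * 60)
    assert computed == reported_wpm
print("all reported speech speeds are consistent")
```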

Assessing the Promise and Efficacy of Digital Health Tools | IGF 2023 WS #83

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The need to enhance digital health systems in preparation for future pandemics has become increasingly evident. Accurate and reliable medical advice and treatment should be accessible without individuals having to physically visit healthcare facilities. This is crucial to ensure the safety and well-being of patients and to reduce overcrowding in healthcare facilities, especially among the elderly who are more susceptible to complications from infectious diseases.

The COVID-19 pandemic has highlighted the limitations of traditional healthcare delivery models that heavily rely on in-person consultations and hospital visits. This has caused strain on healthcare systems and increased the risk of transmission in crowded facilities. Therefore, there is an urgent call for the development and improvement of digital health solutions.

One supporting fact behind the argument for digital health improvements is the surge in healthcare demand during pandemics like COVID-19. The rapid spread of the virus has emphasized the need for scalable and efficient healthcare services that can cater to a large number of patients. By implementing digital health solutions such as telemedicine and remote monitoring, the burden on physical healthcare facilities can be alleviated, and healthcare providers can reach a wider patient population.

Another important consideration is the age and vulnerability of certain populations, particularly the elderly. Concerns have been raised about the increased risk they face when visiting crowded healthcare facilities. Digital health technologies can provide them with access to healthcare services from the safety of their own homes, reducing their exposure to potentially infectious environments.

The analysis also highlights the relevance of the United Nations’ Sustainable Development Goals (SDGs), particularly SDG 3: Good Health and Well-being, and SDG 9: Industry, Innovation and Infrastructure. Improving digital health aligns with these goals by promoting accessible and quality healthcare for all, as well as fostering innovative solutions to address healthcare challenges during crises.

In conclusion, the need for digital health improvements in anticipation of future pandemics is supported by various compelling arguments. These include the necessity for accurate and timely medical advice without physical visits to healthcare facilities, concerns about overcrowding, increased healthcare demand during pandemics, and considerations for the vulnerable and elderly populations. Embracing digital health solutions can enhance societies’ capacity to respond effectively to future health crises, ensuring comprehensive and accessible healthcare services for all.

Geralyn Miller

During a panel discussion, speakers elaborate on various facets of Microsoft’s initiatives related to health outcomes, health equity, and digital health literacy. One significant topic highlighted is the crucial understanding of social determinants of health. The speakers underscore that these non-medical factors have a substantial impact on health outcomes, accounting for 30-55% of them. It is emphasised that addressing these determinants is vital for tackling health disparities.

Another key point discussed is the importance of addressing systemic problems, including social determinants of health, to enhance health equity. Microsoft’s multidisciplinary research on issues such as carbon accounting, carbon removal, and environmental resilience is commended. The company’s involvement in humanitarian action programs to effectively respond to disasters is also highlighted. By focusing on these systemic problems, Microsoft aims to create a more equitable healthcare system.

The role of technology and data in improving health outcomes and promoting health equity is emphasised. Microsoft’s development of a health equity dashboard, which enables visualisation and understanding of the problem, is lauded. The dashboard employs public data sets to provide different perspectives on health outcomes. Additionally, Microsoft’s LinkedIn ‘Data for Impact’ program, through which professional data is made available to partner entities, aims to enhance digital health literacy by equipping students and job seekers with the necessary skills.

Responsible AI is another significant aspect underscored by the speakers. Microsoft’s commitment to principles such as fairness, transparency, accountability, reliability, privacy & security, and inclusion in its approach to AI is highlighted. The need for implementing policies and practices to ensure safety, security, and accountability in AI is stressed. Measures such as implementing safety brakes in critical scenarios, classifying high-risk systems, and monitoring to ensure human control are deemed crucial. Moreover, the licensing infrastructure for the deployment of critical systems is considered essential.

The panel also addresses the issue of potential bias in AI models and the need to understand and inspect the data guiding these models. Microsoft actively works towards understanding the distribution and composition of the data to prevent bias. The goal is to ensure fairness and reduce inequalities by ensuring that bias does not occur due to the data employed in AI models.

The value of cross-sector partnerships, especially during the pandemic, is emphasised. Collaborations between the public, private, and academic sectors in research and drug discovery are cited as successful examples. These partnerships, including government-sponsored consortia, privately-funded consortia, and community-driven groups, have been instrumental in advancing healthcare during the pandemic. The continuation of such partnerships to drive positive change is advocated.

Additionally, the panel underscores the importance of maintaining good standards work, particularly during crises such as the pandemic. The use of smart health cards to digitally represent clinical information and support emergency services is discussed. The work of the International Patient Summary Group, aiming to represent a minimum set of clinical information, is commended, and the need to continue this good standards work is stressed.

The challenge of keeping up with the accelerating pace of innovation is acknowledged. As innovation progresses rapidly, individuals and organizations must strive to stay current and adapt. The significance of dialogue and information sharing as opportunities to expand knowledge and foster collaboration is also highlighted. Panels and training sessions are seen as valuable starting points for initiating these discussions and sharing insights.

Furthermore, the panel emphasises the need for training in both tech providers and the academic system. They assert that training in digital health should be integrated into the academic curriculum to ensure that everyone in healthcare is equipped with the necessary knowledge and skills. This approach is considered essential for advancing digital health literacy and ensuring its scalability.

Lastly, the panel discusses the responsible implementation of generative AI, advocating for open policy discussions to ensure inclusivity and address ethical concerns. Such discussions are seen as essential to the successful and inclusive deployment of the technology.

In conclusion, the panel discussion provides an encompassing overview of Microsoft’s initiatives pertaining to health outcomes, health equity, and digital health literacy. It underscores the importance of understanding social determinants of health, addressing systemic problems, and leveraging technology and data to improve health outcomes. Microsoft’s various initiatives, such as the health equity dashboard, LinkedIn ‘Data for Impact’ program, and Microsoft Learn platform, are commended. Additionally, the panel highlights the significance of responsible AI, cross-sector partnerships, maintaining good standards work, and promoting dialogue and information sharing. The importance of training in both tech providers and the academic system, as well as responsible implementation of generative AI through open policy discussions, is emphasised.

Ravindra Gupta

Digital health has achieved technical maturity, with the necessary technology and infrastructure in place for its implementation. However, it lacks organizational maturity, as highlighted by fellow panelist Debbie Rogers, who pointed out the shortage of trained individuals who can effectively leverage available healthcare technology. This expertise gap poses a significant challenge to successful digital health implementation.

To address this issue, comprehensive understanding and implementation of digital health are needed. This includes educating healthcare professionals, technologists, and patients about digital health’s integration into healthcare systems. The International Patients Union is one example of an organization dedicated to training patients in effectively using digital health technology.

Another area that requires attention is government policies on digital health, which currently lack focus on capacity building. Governments should prioritize capacity building initiatives to equip healthcare professionals with the necessary skills to leverage technology effectively. Pressure should be exerted on bodies like the World Health Organization (WHO) to provide faster normative guidance for digital health policy development, facilitating effective national policies.

Private and non-profit organizations are developing innovative and affordable strategies for digital health literacy. The Digital Health Academy, for example, offers an online global course for healthcare professionals, and plans are underway to provide low-cost training courses for frontline health workers. These efforts bridge the digital health literacy gap and ensure healthcare professionals are proficient in digital tools and technologies.

Governments must play a pivotal role in funding digital health initiatives, as seen in the Indian government’s investment in the national digital health mission. This funding is crucial, especially considering the evolving business model of digital health, which has led to the withdrawal of many large companies. Government support is essential for sustaining digital health initiatives and ensuring successful implementation.

Digital health proved its readiness during the COVID-19 pandemic. Vaccine development was fast-tracked through collaboration among global researchers, and AI was used in drug repurposing. Additionally, the delivery of 2.2 billion vaccine doses was managed digitally through COVID apps, highlighting the efficiency and effectiveness of technology in healthcare. This underlines the need to continue utilizing technology beyond the pandemic.

Digital health literacy is crucial for healthcare professionals and workers in the sector. Failing to adapt and learn digital health skills may render individuals professionally irrelevant. Patients’ increasing access to health information necessitates healthcare providers’ awareness of advancements to provide accurate and quality care.

Upskilling and cross-skilling in digital health are essential for scalability, as scalability relies on healthcare professionals having the necessary competencies to leverage digital tools effectively. Moreover, healthcare providers should stay ahead of patients in terms of health knowledge to provide accurate care.

In summary, digital health has achieved technical maturity but lacks organizational maturity. Comprehensive understanding and implementation, capacity building, and literacy initiatives are necessary. Government support, funding, and upskilling efforts are key to successful digital health implementation. Digital health literacy is important for both healthcare professionals and patients, and upskilling is necessary for scalability. Healthcare providers need to stay informed to provide quality care. By addressing these challenges and investing in digital health, we can achieve better healthcare outcomes for all.

Moderator

The panel speakers engaged in a comprehensive discussion on the topic of digital health literacy and equitable access to digital health resources. They acknowledged the existence of disparities in access to healthcare and emphasized the potential of digital health to advance healthcare outcomes if accessed equitably. The need to enhance digital health literacy and promote equitable access was a recurring theme throughout the discussion.

Collaboration among various stakeholders, including healthcare providers, educational institutions, and technology companies, was identified as crucial for enhancing digital health literacy. The panel highlighted the importance of developing comprehensive frameworks and assessment tools to gain a holistic understanding of individuals’ abilities in navigating digital health. This would enable tailored interventions and support for those who need it most.

The role of social determinants of health in influencing health outcomes was also emphasized. The panel noted that 30 to 55 percent of health outcomes are dependent on social determinants of health. To visualize this problem, the Microsoft AI for Good team has built a health equity dashboard. This highlights the significance of addressing social determinants, such as economic policy, social norms, racism, climate change, and political systems, to achieve health equity.
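The kind of geographic slicing described for the health equity dashboard can be illustrated with a minimal sketch. The records and numbers below are entirely hypothetical stand-ins for the public data sets the dashboard uses (the real tool is a Power BI dashboard, not Python code):

```python
from statistics import mean

# Hypothetical records: (county, area_type, life_expectancy_years).
# Purely invented numbers, used only to illustrate slicing a health
# outcome (life expectancy) by rural/suburban/urban geography.
records = [
    ("County A", "rural", 74.1),
    ("County B", "rural", 75.3),
    ("County C", "suburban", 78.9),
    ("County D", "suburban", 79.4),
    ("County E", "urban", 77.2),
    ("County F", "urban", 78.0),
]

# Group life expectancy by area type, as a dashboard "slice" would.
by_area: dict[str, list[float]] = {}
for _, area, life_exp in records:
    by_area.setdefault(area, []).append(life_exp)

for area, values in sorted(by_area.items()):
    print(f"{area}: mean life expectancy {mean(values):.1f} years")
```

The same grouping step, applied to real public data sets, is what lets a dashboard surface geographic disparities in outcomes at a glance.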

Furthermore, the speakers advocated for digital health literacy and digital skills to be viewed as part of the social determinants of health. Microsoft’s initiatives, including a multidisciplinary research initiative on climate change, a partnership with the Humanitarian OpenStreetMap Team for disaster mitigation, and a free online learning platform, were highlighted as examples of addressing social determinants. Microsoft-owned LinkedIn also promotes economic development and digital skilling through its Economic Graph and Data for Impact program.

Sub-Saharan Africa was identified as a region facing high health inequality, with a high disease burden and a shortage of health workers. The panel called for focused efforts to address health inequality in this region. They highlighted the positive impact of digital technologies, especially mobile, in addressing health issues. Reach Digital Health, for example, uses mobile technology to improve health literacy and encourage healthy behaviors. The Department of Health in South Africa also implemented a maternal health program that reached around 60% of mothers who have given birth in the public health system over the past eight years.

The panel stressed the importance of incorporating a human-centered design approach in the development of digital interventions. They noted that design considerations should include an understanding of the bigger context and the needs of the end-users. This approach ensures that digital health solutions are simple, easy to use, accessible, and free, with appropriate literacy levels.

The moderators expressed their interest in hearing insights and key policy recommendations from the panel. They highlighted the importance of enhancing digital health literacy, especially among marginalized populations. The panel agreed that governments and international organizations should prioritize policy interventions and investments to achieve this goal.

Capacity building in digital health was identified as a significant ongoing challenge in the healthcare sector. The need for policymakers to focus on capacity building and provide training for healthcare professionals and frontline workers was emphasized. The speakers emphasized the importance of continuous upskilling, considering the rapid pace of technological innovation, and highlighted the need for a practical implementation focus before policy development.

The importance of equitable access to digital health resources was another key point discussed. The Digital Health Academy was highlighted as an organization focused on affordability, providing $1 training courses for frontline health workers. The responsible development and deployment of digital health technologies were emphasized, with a focus on upholding digital rights, privacy, and security. The speakers stressed the importance of involving various stakeholders for responsible innovation.

The speakers also touched on the concept of the digital divide and its impact on health equity. They highlighted the need to bridge this divide through initiatives such as Facebook Free Basics, which provides essential information for free, improving people’s literacy and data usage. Aligning priorities between mobile network operators and health organizations was seen as crucial for improving health equity.

Youth-led initiatives and community involvement were identified as crucial for bridging the digital divide in health. The panel emphasized the need for culturally sensitive initiatives that consider the specific needs of the population. They highlighted the importance of empowering young advocates to actively shape internet governance policies to ensure equitable access to digital health resources.

Lastly, the panel discussed the role of governments in investing in digital health. The Indian government, for example, has set up a national digital health mission and provided free consultations to citizens through the e-Sanjeevani program. Implementing free telemedicine consultations through health helplines was seen as a way to bridge the digital divide and address healthcare inequities.

In conclusion, the panel highlighted the need for collaborative efforts, policy interventions, and investments to enhance digital health literacy and achieve health equity. They emphasized the importance of addressing social determinants, building digital health capacity, and promoting equitable access to digital health resources. The responsible development and deployment of digital health technologies, as well as the involvement of youth and community in shaping policies, were identified as crucial. Overall, the panel provided valuable insights and recommendations for advancing digital health literacy and equitable access to digital health resources.

Yawri Carr

The emergence of the Responsible Research and Innovation (RRI) Framework in AI healthcare is seen as a positive development in the field. This framework focuses on transparency, accountability, and ethical principles, ensuring that innovation in AI does not compromise ethical standards. It places an emphasis on safeguarding digital rights and privacy and holds AI systems accountable for their decisions.

Stakeholder involvement is highlighted as essential in the RRI process. Societal actors, innovators, scientists, business partners, research funders, and policymakers should all be involved in the responsible research and innovation process. It is important for these discussions to be open, inclusive, and timely, working towards ensuring desirable research outcomes.

Youth-led initiatives are recognized for their role in promoting responsible AI. Universities, education centres, and mentorship programs have crucial roles in inspiring young people to innovate in health technology. Community-based research projects are also highlighted as a means to promote cultural sensitivity and address specific community needs.

However, there are challenges in applying ethical considerations in profit-driven AI innovations. There is often a clash between ethical considerations and profit-driven motives. Power imbalances, particularly financial, often hinder the work of ethicists. Therefore, regulatory frameworks, certification processes, or voluntary initiatives are needed to enforce ethics in AI.

Young advocates are viewed as influential in shaping internet governance policies and ensuring equitable access to digital health resources. Their participation in policy discussions at forums like the Internet Governance Forum (IGF) and the formation of youth coalitions can amplify the collective voice for accessibility and inclusivity. Engagement with multi-stakeholder processes can ensure a diverse contribution to the policies.

Youth-led research and innovation hubs are seen as valuable in addressing digital health challenges. These hubs provide a platform for young innovators, healthcare professionals, and policymakers to collaborate and find innovative solutions.

Technologies such as telemedicine and the use of robots are praised for their usefulness in pandemic situations. Robots can restrict direct human contact, reducing the risk of virus spread. Telemedicine enables remote treatment, ensuring health services while maintaining social distance.

The importance of technology and AI in healthcare is emphasized, particularly in protecting nurses and healthcare workers. Assistive technologies like robots can help safeguard these frontline workers.

Open sharing of data and research related to the pandemic is encouraged. This open sharing can lead to greater cooperation and more effective responses to emergencies.

Digital health leaders are urged to prioritize equity and ensure that healthcare is not a privilege but a right for all. Technical skills are not the only important aspect; a commitment to equity is also vital. Healthcare and digital health care should be accessible to everyone.

The valuable role of nurses and ethicists in evolving technology is highlighted. The work of nurses remains critical in healthcare, and ethicists play a crucial role in contributing to the mission of responsible AI.

In conclusion, youth-led initiatives, stakeholder involvement, and the emergence of the RRI Framework in AI healthcare are viewed as positive developments. Challenges exist in applying ethical considerations in profit-driven AI innovations, emphasizing the need for regulatory frameworks and certification processes. The importance of technology, telemedicine, robotics, and the open sharing of data and research are recognized. Digital health leaders are urged to prioritize equity, and the crucial role of nurses and ethicists in evolving technology is emphasized. Ultimately, youth play a fundamental role in advancing digital health and ensuring its accessibility.

Deborah Rogers

The speakers in the discussion highlighted several key points about digital health in Africa and how it can potentially address health inequality and overburdened health systems. They emphasised the increased access to mobile technology in Africa, which has seen significant growth over the years. In Africa, where 10% of the world’s population represents 24% of the disease burden, access to mobile technology has the potential to bridge the gap and improve healthcare outcomes.

One of the main arguments put forth was the effectiveness of low-tech but highly scalable technology in disseminating health information and services. The speakers stressed the success of programmes that utilise SMS and WhatsApp in improving health behaviours and service access. For example, a maternal health programme in South Africa has reached 4.5 million mothers since 2014, resulting in improved health outcomes.

The discussions also highlighted the role of digital technology in improving health literacy. Through the use of digital technology, a maternal health programme in South Africa has witnessed increased uptake of breastfeeding and family planning. However, the speakers emphasised the importance of implementing digital health initiatives in a human-centred manner and being cognisant of the larger health system they are a part of.

Furthermore, the speakers addressed the issue of health equity and the digital divide. They presented an example of the Facebook Free Basics model, which provided free access to essential health information and led to increased profit for mobile network operators. This approach demonstrated that reducing message sending costs for end-users does not inhibit profit for operators, thus showing the potential for mobile network operators to improve health equity.

The discussion also delved into the importance of a human-centred approach in developing digital health interventions. The speakers emphasised that digital health should be easy to use and accessible, designed with users in mind. They also noted that access to a mobile device itself is less of a problem than the cost of data, which needs to be addressed for wider adoption of digital health services. Overall, digital health was seen as an integral part of the health infrastructure, rather than a side project.

One noteworthy aspect that was brought up in the discussions was the potential bias and lack of diversity in the development of digital health services. The speakers emphasised that the makeup of the development team often does not represent the actual users of the services, leading to the introduction of biases. This can perpetuate health inequities and hinder the effectiveness of digital health interventions. Therefore, there was a call for more diverse and inclusive development teams to ensure the services are designed to meet the needs of all users.

During the discussion, the speakers also highlighted the role of digital health in the COVID-19 pandemic. Large-scale networks were used to quickly disseminate information, and digital health platforms played a vital role in screening symptoms and gathering data. The burden on healthcare professionals was reduced, showcasing the potential of digital health to alleviate the strain on the healthcare system.

Furthermore, the importance of sharing medical knowledge and not hoarding information was emphasised. The speakers noted that the lack of knowledge during the early stages of the COVID-19 pandemic had a significant impact on everyone. Therefore, the dispersal of information on a large scale can greatly contribute to improving patient health outcomes.

The discussions also emphasised the need for investment in digital health infrastructure for future pandemics. The COVID-19 pandemic highlighted the importance of having digital health platforms in place. Building and investing in such infrastructure before the next pandemic occurs would enable a quicker response and avoid starting from scratch.

Additionally, the potential of technology to decrease health and digital literacy inequities was discussed. Technology was hailed as a great enabler in addressing these inequities and improving access to healthcare and education.

In conclusion, the discussions on digital health in Africa highlighted its potential to address health inequality and overburdened health systems. The increased access to mobile technology and the success of low-tech interventions have provided evidence of the positive impact of digital health. However, the speakers emphasised the need for a human-centred approach, diversity in development teams, and investment in infrastructure to fully capitalise on the potential of digital health. There was optimism about the future of digital health, and the involvement of youth in its evolution was seen as crucial.

Session transcript

Moderator:
in turn creating disparities in access to care. So in this session we will discuss strategies to enhance digital health literacy and identify measures to promote equitable digital health access. Our goal is to find innovative policy solutions that bridge the digital divide and ensure that digital health truly advances healthcare outcomes for all. Thank you all for joining us on this important journey, and let’s get started. We have three key policy questions that will guide our discussion today. How can comprehensive frameworks and assessment tools be developed to capture and assess different dimensions of digital health literacy, ensuring holistic understanding of individuals’ abilities in navigating digital health information and services? What strategies towards health equity can be adopted to ensure digital health literacy programs effectively address unique needs and challenges faced by marginalized communities, promote inclusivity and equitable access to digital health resources? And also, how can partnerships between key stakeholders, including healthcare providers, educational institutions, technology companies and governments, be leveraged to enhance digital health literacy skills and foster collaboration and knowledge sharing to advance health equity? Our panelists will be addressing these issues today, so if you would like to ask a question to the panel, we will have a Q&A session at the end for on-site participants, and online participants may use the Zoom chat to type and send in your questions; my online moderator Valerie will be helping me with them. So without further ado, to kick off our discussion I would like to introduce our esteemed panelists who will share their insights on these matters. First, joining us online we have Ms. Geralyn Miller, an innovation leader driving change in healthcare and life sciences through AI. 
She is a senior director at Microsoft in product incubations, Microsoft Health and Life Sciences cloud, data and AI, and she’s also the co-founder and head of AI for Health, which is Microsoft’s AI for Good research lab. And then we have Professor Ravindra Gupta joining us on site here today, a leading public policy expert with vast experience in policy making, and he’s been involved in major global initiatives on digital health and holds several key positions in the digital health arena. He’s also the founder behind many path-breaking initiatives, like his Project Create, and organizations working for digital health. And next we have Ms. Debbie Rogers joining us on site as well. She’s an experienced leader in the design and management of national digital mobile health programs and the CEO of Reach Digital Health, aiming to harness existing technologies to improve healthcare and create societal impact. And last but definitely not least we have Ms. Yawri Carr joining us online. She’s an internet governance scholar, youth activist and AI advocate, and she’s also a digital youth envoy for the ITU like me and a global shaper with the World Economic Forum, with her work centering on responsible AI and data science for social good. Now let’s begin section one of today’s workshop on low digital health literacy and strategies, and I would like Ms. Geralyn to take the floor first. So what research and development initiatives, for example including the creation of comprehensive frameworks and assessment tools, is Microsoft pursuing to address the multifaceted challenges of low digital health literacy? And additionally, can you highlight your thoughts and innovative strategies and partnerships that Microsoft is employing or supporting to enhance digital health literacy among marginalized populations, with a focus on inclusivity and equitable access, especially in low-income and rural areas? Ms. Geralyn, over to you. Yeah great thanks and thank you for inviting me today to

Geralyn Miller:
participate in this. So the lens I’m going to take on this is really based on something that is known as social determinants of health. So I want to start by defining that: a social determinant of health is a non-medical factor that influences health outcomes. So these are the conditions that people are born, work and live in, and the wider set of forces that shape the conditions of our daily lives, right. So this includes things like economic policy and development agendas, social norms, social policies, racism, even climate change and political systems, and from research we know that about 30 to 55 percent of health outcomes actually depend on social determinants of health. So when you want to think about health equity in digital literacy, it’s really important to do two things. First, to understand the problem based on data, and I’ll share a little bit about what Microsoft research is doing in that area, and the second is to open your mind and have a willingness to address the underlying, often systemic problems that affect health outcomes, and that includes social determinants of health. So Microsoft has some things that we’re doing to understand the problem with data, including the Microsoft AI for Good team has built something that we call a health equity dashboard, which is essentially a Power BI dashboard that takes a number of public data sets and allows one to look at them from a geography perspective, slice and dice the data by rural, suburban and urban populations, and then also examine different health outcomes including things like life expectancy. So that’s the first thing, right, is really being able to understand and visualize the problem itself. So I invite you to actually have a look at that information. There’s a number of other things that from a Microsoft perspective we’re doing to look at on the social determinants of health side.
So I’ll point for example to some of the work we’re doing on climate change. We announced a climate change research initiative that we call MCRI, which is really a multidisciplinary research initiative that is focusing on things like carbon accounting, carbon removal and environmental resilience. We also have our Microsoft AI for Good research lab and our humanitarian action program. They have for example worked with a group called the Humanitarian OpenStreetMap Team, or HOT, which partnered with Bing Maps to map areas vulnerable to natural disaster and poverty. So that’s an example of some of the work out of the research lab and the humanitarian action program coming together to help give relief teams information to respond better after disasters. There’s also a lot of work that we have happening from a Microsoft perspective that ties more directly to economic development and digital skilling. So we have some work on LinkedIn, something called the Economic Graph, which is a perspective or a view, based on data, of more than 950 million professionals and 50 million companies. LinkedIn, which is a Microsoft company, also has a Data for Impact program, and this program makes this type of professional data available to partner entities, including entities like the World Bank Group, the European Bank and others. So it’s data on more than 180 countries and regions, and this is at no cost to the partner organizations. An example of the impact of this type of data: this Data for Impact information was able to advise and inform a 1.7 billion dollar World Bank strategy for the country of Argentina. And then there’s also the Microsoft Learn program, which is a free online learning platform enabling students and job seekers to expand their skills. So role-based learning for things like AI engineers, data scientists and software developers, hundreds of learning paths and thousands of modules, localized in 23 different languages.
So in summarizing, I just want to say that we look at this from a holistic, broad perspective, with digital health literacy and digital skills as part of the social determinants of health, and that is the work that we’re doing to support those.

Moderator:
Thank you very much, Ms. Miller. And now moving on to Ms. Debbie. As an experienced leader in the design and management of national mHealth programs and the CEO of Reach Digital Health, can you share your thoughts on digital health literacy, the digital divide and health equity, and on effective strategies for enhancing digital health literacy among marginalized populations, particularly in resource-constrained settings? And additionally, how can partnerships between non-profit organizations like Reach and private sector mobile operators be strengthened to promote digital health literacy among women and marginalized communities, addressing gender-based barriers and limited resources while contributing to bridging the digital divide?

Deborah Rogers:
Thanks very much. So I think the first thing just to talk about is a little bit of the context. So we work primarily in Africa. To give you an idea around inequality and health in sub-Saharan Africa we have 10 percent of the world’s population, 24 percent of the disease burden and only three percent of the health workers. And so we really do have the odds stacked against us in a time when we’re supposed to be going towards universal health care, which quite honestly is a pipe dream if you look at where things are at the moment. While we’ve made some progress in addressing maternal and child health and addressing infectious diseases such as HIV, we are getting an increased burden when it comes to non-communicable diseases. So the burden is just increasing, not decreasing. And so really if we follow the same patterns over and over again and we keep just training more and more health workers and not addressing the systemic issues or relieving the burden from the health system, then there’s absolutely no way that we’re going to be able to improve these stats. We’re going to go backwards and not forwards. And so I think I’m fairly optimistic actually because I think that digital and particularly mobile has the opportunity to really address some of these issues in a way that many other interventions don’t. Reach Digital Health was founded in 2007 with the idea that the massive increase in access to mobile technology in Africa, at the time more people in Africa had access percentage-wise to mobile technology than in the so-called global north or western countries, was a way for us to leapfrog some of the challenges that we’ve had in the global south and to actually address some of these issues. And we really have been able to see that. 
We have been able to see how the access to information and services through a small device that’s in the palm of many people’s hands has been able to improve health, both from a personal behavior change perspective but also health systems as a whole. And so what we primarily focus on is using really, really low-tech but highly scalable technology, so things like SMS, WhatsApp. These are the things that everybody uses every day to communicate with their family and friends. And we use that to empower them in their health, help them to practice healthy behaviors, to stop unhealthy behaviors, and to access the right services at the right time. And with the fairly ubiquitous nature of mobile technology in Africa, we’ve been able to reach people at a massive scale. So for example, we have a maternal health program with the Department of Health in South Africa. It’s been running since 2014. We’ve reached 4.5 million mothers on that platform. That represents about 60% of the mothers who have given birth in the public health system over the last eight years, which percentage-wise is huge. And we’ve been able to see that this has had impacts such as improved uptake of breastfeeding, improved uptake of family planning, and really has seen not just an individual change but a more systemic change, with the ability to understand what is the quality of care on a national scale for the Department of Health in South Africa. And so we really do believe that if you harness the power of the simplest technology, if you design with scale in mind, if you design with an understanding of the context, then you can actually use digital to be able to increase health literacy. And so it’s not all doom and gloom. It’s not just about the fact that digital is always excluding other people. It can be an enabler, but only, of course, if we consider the wider context and we don’t go blindly into things and ignore the fact that this could be something that increases the divide.
And so I think I’ll talk a little bit more later about some of the strategies that can be used, but I think there are two things to remember. Design with the human at the center of what you’re trying to do, not the patient; I don’t like the word patient, but in digital health we tend to use that word. And design understanding that you are part of a bigger system, and that this is not something that exists by itself. And if you do those two things, not only will you be able to improve health literacy, but you’ll be able to do so in a way

Moderator:
that doesn’t widen the divide that many technologies already put in place. Thank you very much, Ms. Debbie. Now moving on to Professor Gupta. With your extensive experience in policy development, digital health education, and founding the world’s first digital health university, can you share your thoughts and offer key policy recommendations that governments and international organizations should prioritize to comprehensively enhance digital health literacy, especially amongst marginalized populations? Additionally, can you share insights into successful and scalable educational strategies and approaches that have effectively improved digital health literacy, with a focus on adapting these methods globally to meet healthcare scaling needs for digital health? Thanks, Connie. Firstly, I congratulate you for picking up this

Ravindra Gupta:
very important topic. And secondly, I’m a little worried by such a long question, because after 5 p.m., I’m half asleep. It’s been an engaging session throughout the day. But yes, it’s a very important topic. It keeps me awake. But pardon me for my incoherence. Let me give you a little backdrop of why this topic is important. There is an international society called the International Society of Telemedicine and eHealth. It’s been around for a quarter of a century and has memberships in 117 countries. So way back in 2018, I said that digital health has two opportunities and two challenges. And the two challenges are like this: we have reached the stage of technical maturity. Give me a challenge, I’ll give you 100 solutions. But where we lack is organizational maturity. People are not trained enough to leverage the technology that’s available. So I said, let’s look at capacity building, the issue that you brought up. So in 2019, they formed the Capacity Building Working Group, which I chair. And after that, we have done two papers on capacity building. One is listing the kind of people we need to train across digital health. And second, we have done a deep dive. We released that in partnership with the World Health Organization. So for those who are looking at what kind of capacity we need, the ISfTeH website has a list, two papers written on this topic. And then in 2019, WHO set up their Capacity Building Department, which is a very recent thing. So I think there is a lot of focus. And now coming back to what my experience was. So I had pushed various organizations to do that, but we were still just doing policy papers and, you know, policies take time to translate. I mean, people like Debbie would need people to help her, you know, in technology. I mean, a policy paper can’t help her. She needs people trained in digital health. So in 2019, I set up the Digital Health Academy, which is now the Academy of Digital Health Sciences.
We have started a course for doctors and for people in healthcare. It’s a global course, fully online, as a digital course should be. But to your point, that also would not solve my biggest overall challenge. I am training doctors, and you know, it is so shocking. And I’ll put some context to that. We had a half-page advertisement in a leading newspaper in India. A very senior doctor called me and asked, Rajendra, what’s digital health? So I was shocked that even doctors first need to be told what the words digital health mean. I’ll give you another example. There’s a company that works exclusively in the data domain. So I called the founder, who is a doctor, and asked, do you do digital health? He said, no Raj, we don’t do digital health. I said, do you use data? He said, we only use data. So I said, then you only do digital health. So the challenge is, first, people should know the definition of digital health. That is the level we have to start at, and it is needed across the ecosystem. So right from the bureaucracies and the ministers and the ministries of health, they need to understand what digital health is, because they come for a fixed tenure or they get transferred. If they are sensitized at that level, then things flow down the line, because government makes policies which get implemented as programs. So that’s one level of competencies that I have told WHO to look at, because my experience in WHO meetings is that bureaucrats come, they spend two, three days in Geneva or New York, and then they go back and forget it. So there has to be a course for policymakers at the highest level, which probably WHO or any organization could do. The second level is the courses that we need to do for doctors and health professionals. And third, and the most important, which we are launching in the next two months, is frontline health workers.
But understand the challenge that frontline health workers are often doing voluntary service, like the ASHA workers in India, which is a million workers. They are our first line, our first responders. Don’t expect them to pay you $1,000 or $100. So we had to actually innovate and convince one of the Institutes of National Importance that we need to bring out $1 trainings. So we should train people for as low as $1. And this we’re doing globally. So if I’m able to train frontline health workers, I think I would have addressed the biggest challenge for healthcare. Now, one of the government agencies has approached us to work with us. As such, on capacity building, I think governments just focus on the program minus capacity building, which is a serious lapse. And I think this is across the board. I think we would all agree that we are very focused on saying maternal health, mobile application; child health, mobile application; rural health, telemedicine. But who will do it? We don’t know. The people who are going to use these don’t even know how to use a mobile phone. They do not know how to log in to an account. So we need basic training. And I think this is where private organizations and not-for-profits come in, and then government steps in very late, let me tell you that. So they are not the ones who would initiate. So once you go with the program and talk to them, they will partner. So as a policy matter, I’m glad, Connie, that you put on a session on this, something that our Digital Health Dynamic Coalition should have done, but they only allow one session for a dynamic coalition. So we had our session, which we are doing tomorrow. But now that you have taken it up, it puts the spotlight on this important topic. At ISfTeH, there are policy papers. They have been given to WHO. WHO set up the capacity building department, but honestly, nothing much has moved between 2019 and 2023, four years. We are still waiting, and they’re still forming a committee.
So I think it’s mostly going to be the civil society organizations and the private sector that will take the lead. On the policy side, I have not seen documents that talk about it so far, so we will have to wait for normative guidance from WHO, which will still be, I think, a few years away. It takes time to build a document in WHO. How this will happen fast is like this. In India, we have a digital health mission, which has rolled out 460 million health IDs. This year, we will roll out 1 billion health IDs. Our health consultations, teleconsultations, have crossed 120 million. So I think that is the first point. So I’m inverting the process from policy to: let’s first have implementation. So when the government rolls out at such a level and scale, automatically you will start feeling the need for trained people in this. So I think this is one thing, but more than structured courses, it will be more a matter of continuous upskilling that everyone will need to do, because technology is also changing. Till last year, no one talked about generative AI. Now people have started talking about generative AI. So I think we need to keep the trainings fluid and make them more of a continuous upskilling program for people across healthcare. We are not waiting for government policies; we are rolling out, as the Academy of Digital Health Sciences, and these are global programs. We are making them really affordable, with $1 trainings for frontline health workers, for doctors, and, for the industries, the postgraduate program. And we will announce undergraduate programs as well, because I think this is where we need to build capacity. So for now, I think policy interventions will happen. I think overall, as part of health policy, everyone should include capacity building, and digital health is now an integral part of health. So digital upskilling is required for digital scaling. So I think this is something that governments have to look at, and WHO should take a frontal role.
So I would say this more to WHO, and to organizations like the one that Debbie runs, and like the ones that I run with my team. And more importantly, there are two people sitting in this room, Priya and Saptarshi. They run a patients’ union, the International Patients Union. Even if you train doctors, industry and the frontline health workers, if patients are not trained, who will use digital? At the end of the day, they have to open an app and use it. They need to know what privacy is, what security is. So it’s on us, on people like them, to go and train patients in how to use digital technology. So it’s a multidimensional topic, and I’m happy that there’s a session dedicated to this. Unless we address this from a complete ecosystem perspective, we will not have done justice to this topic. Thank you.

Moderator:
Thank you very much, Professor Gupta. And now to Yawri. As someone with expertise in responsible AI and digital rights and a passion for the intersection of technology and society, how can policymakers craft regulations to ensure the responsible development and deployment of digital health technologies, especially for marginalized communities? And also, what role do you see for youth-led initiatives in enhancing digital health literacy, bridging the digital divide and engaging with policymakers to drive policies that support equitable access to digital health resources? Over to you. Hello, everyone, dear organizers, participants and guests.

Yawri Carr:
Thank you very much, Connie, for the organization, and thank you for inviting me. Well, in a world where technology and healthcare are more intertwined than ever, the responsible development and deployment of digital health technologies are of paramount importance. This is especially true when considering marginalized communities, where equitable access to healthcare is not just a goal, but a moral imperative. So in this case, I would like to mention the Responsible Research and Innovation framework as one of the guiding philosophies that serve as a roadmap for navigating the intricate terrain of AI in healthcare. At its core, RRI is a commitment to harmonizing technological progress with ethical principles. It places a premium on transparency and accountability, recognizing them as pivotal elements in the responsible development and deployment of AI technologies. In the realm of healthcare AI, RRI advocates for policies that not only uphold digital rights, safeguarding privacy and security, but also establish mechanisms to hold AI systems answerable for their decisions. It is a holistic approach that seeks to ensure that the benefits of innovation are realized without compromising ethical standards or jeopardizing individual rights. So who should be involved in a process of responsible research and innovation? Societal actors and innovators, scientists, business partners, research funders and policymakers: all stakeholders involved in research and innovation practice, including funders, researchers, stakeholders and the wider public, from the early stages of R&I processes and across the process as a whole. And when? Through the entire innovation life cycle. And to do what?
So it is important to anticipate risks and benefits; to reflect on prevailing conceptions, values and beliefs; to engage the stakeholders and members of the wider public; to respond to stakeholders, to public values and also to the changing circumstances that are present in these kinds of processes; to describe and analyze potential impacts, reflecting on underlying purposes, motivations, uncertainties, risks, assumptions and questions, and the huge number of dilemmas that could also emerge in these circumstances; to be open to reflection and to collective deliberation and a process of reflexivity; and to integrate measures throughout the whole innovation process. And in which ways should we do this? Working together, becoming mutually responsive to each other, and of course in an open, inclusive and timely manner. And to what ends? What this framework proposes is to allow the appropriate embedding of scientific and technological advances in society, to better align their processes and outcomes with the values, needs and expectations of society, to take care of the future, to ensure desirable and acceptable research outcomes, to solve a set of moral problems, and also to protect the environment and consider impacts on social and economic dimensions, as well as to promote creativity and opportunities for science and innovation that are socially desirable and serve the public interest. And how can these be applied specifically in the context of healthcare technologies? For example, there are academic projects and also societal projects. One example of an academic project is one from the Technical University of Munich, where I am now studying. We have a project that is an AI-driven innovation, including a robotic arm exoprosthesis and an advanced version of a bimanual mobile service robot.
So to ensure the responsible and ethical integration of these technologies into broader healthcare applications, the developers from the Machine Intelligence Institute have collaborated with the Institute of History and Ethics of Medicine, as well as the Munich Center for Technology and Society. And these teams are employing embedded ethics, incorporating ethicists, social scientists and legal experts into the development processes. After initial onboarding workshops, these experts have become integral members of the development team. They have been actively participating in regular virtual meetings to discuss technological advancements, algorithmic development and product design collaboratively and interdisciplinarily. And when ethical challenges are raised, they are addressed as part of the regular development process, leading to adjustments in product design. An example involves the planning of model flats for a smart city, where initial designs focused on open-plan layouts. Embedded ethics highlighted in this case potential challenges for elderly populations unaccustomed to such arrangements, prompting reconsideration of the layout, also taking into account that this specific project had the elderly as its target population. So this is why it is very important to look at this target population and actually see if they are prepared and if they could adapt to these kinds of technologies. So insights from this discussion influence the design process, emphasizing the importance of directly seeking future inhabitants’ perspectives in layout planning. And simultaneously, the project also involves interviews with various stakeholders, including developers, programmers, healthcare providers, and patients. Workshops, participant observations of development work, and collaborative reflection and case studies also contribute to active ethical consideration.
The project is also aiming to develop a toolbox to facilitate the implementation of embedded ethics in diverse settings in the future. But several unresolved issues remain, related to cultural settings and to corporate and organizational structures. Because even in research settings funded by public resources, the development of AI is predominantly situated in a fairly competitive landscape, with prioritization of efficiency, speed, and also profit. And also in the case of health, ethical considerations are often isolated or given little importance when they directly clash with profit-driven motives. So taking ethical concerns seriously often creates a tension with industry objectives and the needs of the community. And there is the risk of being assimilated into broader corporate commitments to concepts like technological solutionism and market fundamentalism, which in the end prevent ethicists from actually doing their work and building responsible healthcare technology. Normally, embedded ethicists may find themselves working within contexts that are characterized by pronounced power imbalances, particularly those of a financial nature. And it is probable that some form of enforcement measures will become very necessary in such environments, not just for the development of the technical aspects, but also for the work of the people who are working on responsible development and deployment. That may be regulatory frameworks, certification processes or even voluntary initiatives within the organization that can raise awareness of these kinds of issues that arise in these situations. And well, I also needed to talk about youth-led initiatives, right? If I still have time. Okay, so there are a lot of ways in which youth-led initiatives and also marginalized communities could engage with responsible research and innovation.
So for example, youth-led initiatives could connect with or participate in events such as this one; universities and centers of education could also inspire the youth, so that they can learn about telemedicine and how to develop telemedicine initiatives in their countries, especially in rural areas, as the professor mentioned about India, where these populations don’t have the same access. Also, for example, community-based participatory research projects that involve communities in the research process, ensuring that interventions are culturally sensitive and address the specific needs of a population. Also digital health literacy programs. And innovation challenges could be organized among students and youth so that they can also engage. And I also consider the mentorship that these students or youth can gain from experienced people to be very important, because they need guidance, and also foundations and examples of how they can develop their ideas. So thank you.

Moderator:
Thank you very much, Yawri. So while low digital health literacy is a challenge for all populations, it’s particularly harmful for marginalized communities. So in this section, we’ll discuss strategies for addressing health equity and the digital divide in the context of digital health. So let’s start this off with Ms. Geralyn again. So in light of the session’s focus on health equity and the digital divide, could you share your thoughts and elaborate on specific policy measures and initiatives that Microsoft is advocating for or actively participating in to bridge the digital divide and promote equitable digital health access? And also, how is Microsoft addressing barriers faced by diverse populations, and how are these efforts contributing to advancing health equity? Over to you.

Geralyn Miller:
Yeah, thank you very much for the question. So I want to respond, in this context, to some of the comments that Dr. Gupta and Ms. Carr mentioned, and really shine a light on the concept of artificial intelligence, generative AI, and what we at Microsoft call responsible AI as an example of policy. So one of my favorite quotes in this area is a quote by our Chief Legal Officer and President, Brad Smith. And I’m going to paraphrase a quote I don’t have exactly, but Brad has a quote that basically says that when you bring a technology into the world and your technology changes the world, you bear a responsibility as a person that created that technology to help address the world that the technology helps create. And so from a Microsoft perspective, we look at this under the lens of something that we call responsible AI. Our responsible AI initiatives date back far before the birth of ChatGPT and generative AI and large foundation models and large language models, really back to about 2018, 2019. And we have a set of principles that we’ve established that are around how you design solutions that are worthy of people’s trust. So these are our principles, what we call our responsible AI principles. There are many people who have different principles around responsible AI; I’ll share with you ours. I would just offer that it’s something worthy of thought. And very often when I work with academic medical centers or healthcare providers who are starting to use AI or build and deploy AI models, I also offer to them: hey, you should have a position on responsible AI, right? Do your thought work, do your homework. You should have something that is consistent with your own values, your own entity’s values. But going back to what, from a Microsoft perspective, we believe those principles are. The principles are really based on fairness.
So treating all stakeholders equitably and making sure that the models themselves don’t reinforce any undesirable stereotypes or biases. Transparency. So this is all about AI systems and their outputs being understandable to relevant stakeholders. And relevant stakeholders in the context of healthcare means not only patients who may be receiving the output of this, but also clinicians who may be using these as decision support tools or to do some type of prediction. Accountability. So people who design and deploy AI systems have to be accountable for how the systems operate. And I’m gonna do a click down on accountability in a second. Reliability. So systems should be designed to perform safely, even in the worst case scenarios. Privacy and security. Of course, those are underpinnings behind any technology, and AI systems as well should protect data from misuse and ensure privacy rights. And then inclusion. And this is all about designing systems that empower everyone, regardless of ability, and engaging people in the feedback channel and in the creation of these tools. I will drill down a little bit on the inclusion front as well. As an example of the accountability piece I mentioned, I’d like to share some things that President Brad Smith offered when he testified before the U.S. Senate Judiciary Subcommittee. This was back in the beginning of September, around September 12th, in a hearing entitled Oversight of AI: Legislating on Artificial Intelligence. So Brad highlighted a few areas that he is suggesting help shape and drive policy. One is really about accountability in AI development and deployment. Things like ensuring that the products are safe before they’re offered to the public. Building systems that put security first. Earning trust. So this is things like provenance technology and watermarks, so people know when they’re looking at the output of an AI system.
Disclosure of model limitations, including effects on fairness and bias. And then also really channeling research, energy, and funding into things that are looking at societal risk associated with AI. He also suggested that we need something he terms safety brakes for AI that manages any type of critical infrastructure or critical scenarios, including health. When you think today, we have collision avoidance systems in aircraft, we have circuit breakers in buildings that help prevent a fire due to, for example, power surges, right? AI systems should have safety brakes as well. So this involves classifying systems so you know which ones are high risk. Requiring these safety brakes. Testing and monitoring to make sure that the human always remains in control. And then licensing infrastructure for the deployment of critical systems. And then from a policy perspective, ensuring that the regulatory framework actually maps to how these systems are designed so that the two flow together and work together. So that’s an example of the policy in action side of things. And from a Microsoft perspective, we put the responsible AI principles that I mentioned into action through our commitments at a policy level. Our voluntary alignment, for example, here in the US, with some of the things coming out of the White House. So voluntary alignment with commitments around safety, security, and trustworthiness of AI. And on one last point, I did wanna go back to the responsible AI principles and talk about inclusion. We’re doing some work from a Microsoft perspective in the health AI team that I am a product manager on to really look at how, when we have data that guides models, and this is either custom AI models or when we’re grounding large foundation models or large language models with data, how do we make sure that we understand the distribution and makeup of that data to ensure that bias doesn’t creep in from the data?
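Geralyn’s point about understanding the distribution and makeup of training data can be sketched as a simple subgroup audit. This is a generic illustration, not Microsoft’s actual tooling; the attribute name, the synthetic records, and the 10% representation threshold are all invented for the example.

```python
from collections import Counter

def audit_makeup(records, attribute, min_share=0.10):
    """Report each subgroup's share of a dataset and flag groups
    that fall below a minimum representation threshold."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "count": n,
            "share": round(share, 3),
            "underrepresented": share < min_share,
        }
    return report

# Synthetic patient records, purely illustrative
records = [{"sex": "F"}] * 70 + [{"sex": "M"}] * 25 + [{"sex": "X"}] * 5
print(audit_makeup(records, "sex"))
```

A real audit would repeat this over several attributes (and their intersections) before training a custom model or grounding a foundation model on the data.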
And we’re also doing work, for example, on the deployment of models. How do you understand if models are performing as they intended?
How do you monitor for something called model drift? So when models start to perform in a manner that isn’t what you expect, right? When the accuracy starts to decline. And then, what do you do when the models don’t perform that way? And this last part, the model monitoring and drift, is some of the work we have happening out of our research organization. So thank you. Thank you very much, Ms. Geralyn.
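The model-drift monitoring Geralyn describes, watching for rolling accuracy to fall below a validation-time baseline, can be sketched as follows. The window size and alert margin here are illustrative choices, not anything Microsoft has published.

```python
from collections import deque

class DriftMonitor:
    """Track a deployed model's rolling accuracy and flag drift when
    it falls a set margin below the validation-time baseline."""

    def __init__(self, baseline_accuracy, window=100, margin=0.05):
        self.baseline = baseline_accuracy
        self.margin = margin
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def drifting(self):
        # Wait for a full window before judging, to avoid noisy alerts
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.margin

monitor = DriftMonitor(baseline_accuracy=0.90)
for _ in range(80):
    monitor.record("positive", "positive")   # 80 correct
for _ in range(20):
    monitor.record("positive", "negative")   # 20 wrong
print(monitor.drifting())  # rolling 0.80 < 0.90 - 0.05, so True
```

In practice the outcomes arrive as clinicians confirm or overturn predictions, so the alert lags reality; that lag is one reason the human has to stay in the loop.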

Moderator:
So now I want to move back to Ms. Debbie. Drawing from your experience in developing the digital strategy for a major telco in South Africa, how can telecommunication companies play a more significant role in advancing health equity and bridging the digital divide through innovative approaches and digital solutions? And also, what lessons can be learned from your work in South Africa that can be applied globally to improve digital health access? Thanks.

Deborah Rogers:
I think one of the most interesting examples of how mobile network operators have had a big impact in decreasing inequities around health is the Facebook Free Basics model. You may not know what that was, but Facebook basically put together simple information through what looked like a little mobi site. And this was essential information that they felt everybody should have access to. And they worked with mobile network operators to zero-rate access to only that portion of Facebook, just that portion, not everything. And they were able to show that by providing essential information that is free to access, they were able to improve people’s literacy and use of data. So people then went on to use more data and to use the internet more often, and therefore became more valuable customers to the MNOs. So by doing something like providing free access to essential information, there was also an increase in profit for the mobile network operators. And I think that’s a really interesting model to look at. I think very often we forget that it’s just as important for mobile network operators to be reaching as many people as possible as it is for those of us who are trying to improve health through something like digital health. And so if there are aligned priorities, then there are very good ways that you can work together. One of the ways that we’ve worked with mobile network operators in South Africa has been to reduce the cost of sending messages out to citizens of the country. And that’s been done not in a way that prohibits the mobile network operators from making a profit, but what it does do is make it completely free for the end user. So if it’s completely free for the end user, you’re reducing the barriers for them to be able to access this kind of information. But the reduced cost is then something that can be brought to the table because of the increased size of access.
So the more we scale out these programs, the more we’re able to see economies of scale, and the more worthwhile it then becomes for mobile network operators to engage with us. And one of the very interesting models that’s been used was to reduce churn. So if people can only access information using, say, an MTN SIM card, they’re less likely to switch to other SIM cards. And so being able to align the desires of a digital health organization or government with those of mobile network operators is incredibly important for ensuring that you’re working towards the same goal, but without anyone asking for any handouts, because that’s not going to work. When it comes to strategies for decreasing inequity, I think the one that we really need to talk about more is being human-centered. And that doesn’t just mean designing for people and occasionally having them attend a focus group. It means designing with them and ensuring that the service is actually something that they want to use, something that they love using. Make it easy and intuitive for them to use. No one starts a course on how to use Facebook before they use Facebook. We shouldn’t create services that need so much upskilling. We should create services that are simple and easy for people to use. You need to use appropriate language and literacy levels, and this is something that the medical fraternity often forgets about, because it is a very patriarchal society. Make it something that is at least close to free for people to access. We find that access to a mobile device is less of a problem than the cost of data, for example. So just because somebody has access to a device doesn’t mean that they’re going to be able to go and look up information, because they may not have data on their phones. So you can work very closely to reduce the cost or make it zero cost. And that’s really going to ensure that you reduce the barrier to access.
And then you really have to try and think about the system that you’re in. By creating a digital health solution, are you overburdening the health system that already exists, for example, or are you reducing the burden on it? Are you creating feedback mechanisms that mean you can understand the impact you’re having on the system itself, rather than working in a vacuum? Are you making sure that where a digital health solution may not be accessible to somebody, there is an alternative in place that does not rely on the digital health solution? We can’t just operate within silos. We have to think about the fact that digital health is just as much a part of health infrastructure as the physical facilities, for example. Until digital health is seen as just as much of an infrastructure, it’s going to be a fun project on the side and not something that’s going to have systemic change. And so it’s really important for us to think about that system. And then recognizing biases. I think Geralyn mentioned this: very often the people who are creating digital health services are not the people who are using the digital health services. So this goes back to why human-centered design is so important, but it’s also important to understand that you will be introducing biases if the people who are building the system are not the people who are using the system. And so you have to look more systemically. Look at the makeup of your team. How diverse is the makeup of your team? I would assume, having been an electrical engineer myself, that it’s probably not particularly representative from a gender or race perspective. So look at the team that you have. How are you working to make your team more representative, and therefore address some of the biases that are going to be put in place by having a non-representative team building the systems?
So there’s a bunch of things in there, but I guess in summary: build with the end user in mind, make it human-centered, make it easy to use, appropriate, and intuitive. Design with the understanding that you work within a system, and make sure that you don’t have unintended consequences and that you’re always feeding back to understand the impact on the broader system. And ensure that you think about the biases that are going to be inherent in the fact that the people building the system are not necessarily the people using the system.

Moderator:
Thank you very much, Ms. Debbie. And now moving on to Professor Gupta. So based on your background in advising the Health Minister of India and drafting national policies, how can governments play a pivotal role in addressing the intersection of health equity and the digital divide, particularly in the context of healthcare access for marginalized communities, and also what policy measures should be prioritized to ensure equitable digital health access?

Ravindra Gupta:
Thank you, Connie. This depends on the economic status of the country. So when you have an LMIC country like India, I’ll give you an example of what was done. We understand that there is a sizable population which is underprivileged, which is marginalized. So there was a scheme that was launched for 550 million people. And you have to understand that countries are at different phases of development and they require investments in infrastructure, they require investments in health and education, and it’s not possible to give the amount that the sectors actually deserve. So what was done very carefully, and since I was involved in drafting the health policy, I played a role in that, was that we carefully treaded the path of saying, let’s first make primary care comprehensive primary care. So first guarantee primary care that’s comprehensive, that includes chronic disease management and all those things. Then let’s convert the sub-centers and primary centers into health and wellness centers and put telemedicine in as a part of it. So what happens is 160,000 health and wellness centers now across the country offer you telemedicine. Then we created the eSanjeevani program, which is a telemedicine program where you can get a doctor consultation for free, across specialties. That’s why it’s at 120 million consultations. And now what’s going to happen is we’re putting AI and NLP into that. Given that India has 36 states and union territories, and people talk different languages and their dialects are different, a person talking from a southern state to a doctor in a northern state will hear his own language when the doctor speaks, and the doctor will hear his own language when he listens to the patient’s problem. So I think India has planned its strategy for addressing the vulnerable and the underprivileged sections as it charts its course of development. One part is to integrate technology into care delivery right from primary care.
So that has proven itself, as I said: 460 million health records, 550 million people given insurance, which is of a very decent amount, I would say, which typically a middle class family could afford. So on the policy side of digital health, India, as we speak, probably has the largest implementation of digital health that is happening. And I would bring up one point here: the government has not only to take the stewardship, but also the ownership of investing in digital health. Debbie would understand very well that digital health is still figuring out the business model. That’s why you see the largest companies have withdrawn from digital health, and as much as they can give talks at forums, their investments are in futuristic technologies, which are probabilistic technologies. But the companies that forayed into it years ago don’t exist on the map. So I think governments have to play a frontal role in investing, like the Indian government has done. They set up a National Digital Health Mission, rolling it out across states, ensuring that everyone has what you call the Ayushman Bharat Health Account number, the ABHA number. And we will probably be the first country to work towards what I have championed: let’s work to make digital health for all by 2028. And this is for those who work in healthcare, and more so in public health. 45 years back in Alma-Ata, we promised health for all by 2000. It’s 23 years after the deadline and we are still not close to that. At least we can champion digital health for all by 2028. If that is one objective we pursue as governments across the world, I think a lot of issues will get addressed, because there is a whole lot of planning that will go into doing that. And it’s doable. That’s the only way you can address the issue of health equity. Because the practical part is that doctors who study in urban areas do not want to go to rural areas. They will not. I mean, even if you push them to, they will find a way to scuttle that.
But the only way you can do it is to get technology into their hands with mobile phones. I think now the systems are fairly advanced. Tomorrow we are hosting a session on generative, conversational AI in low-resource settings. So you can have chatbots interacting with people, addressing their basic problems. And 80% of the problems are routine, acute problems. So I think we need to leverage technology not only as a policy, but as a program. And there are best practices available. India has them, parts of Africa have them, but these are like islands of excellence. I think forums like these are good for discussing whether they can be mainstreamed from islands of excellence into centers of excellence, and we can replicate and scale those programs. So I think India probably has a good story, as we speak, about the scale-up of a digital health program. But again, the key point is that the federal government has to be the funder for the program. And where do you start? A health helpline. If you really want to address the inequities, start a health helpline where people can pick up the phone, talk to a doctor or a paramedic, and get a consultation free of cost. Get into projects like eSanjeevani, which I think the country is offering to other countries as a goodwill gesture, where you connect to district hospitals and tell doctors to allocate time for doing digital consultations. These programs actually help you bridge the digital divide. And health and wellness centers: a phenomenal experience of over 160,000 health and wellness centers which have a telemedicine facility. So picking up the cue, I would say it’s time for implementation. Policy-wise, I think we all know that; I think we very clearly said it’s getting integrated. And in fact, I go a line further and say, if you’re not into digital health, you’re not into healthcare. Don’t talk healthcare. That’s the truth, actually.

Moderator:
Thank you. Thank you very much, Professor Gupta. And finally, Yawri, drawing from your experiences in speaking about youth in cyberspace and internet governance, how can young advocates actively participate in shaping internet governance policies to ensure that digital health resources are accessible and equitable for all, regardless of socioeconomic status or geographic location? And also, what are some successful examples of youth-driven initiatives in this context? Over to you.

Yawri Carr:
Thank you very much. Well, in the realm of youth in cyberspace and internet governance, empowering young advocates to actively shape internet governance policies is crucial for ensuring equitable access to digital health resources. So young advocates can play a transformative role in policy discussions by engaging in many ways, such as, first, participating in the IGF, because with this active participation we start to break the ice in how to discuss, how to have dialogues, how to ask questions. And all of these activities, even though they may seem routine to experienced people, are, for youth, ways to break the ice and to gain confidence in participating in public debates. They also get insights into current challenges and opportunities in digital health governance. Second, the formation of youth coalitions. Young advocates can form coalitions or networks dedicated to digital health equity, and these coalitions can amplify the collective voice of young people advocating for policies that prioritize accessibility and inclusivity in digital health. For example, we have the Internet Society youth group, or we have different regional youth initiatives, and a chapter about digital health could also be opened so that coalitions on this specific topic can deepen their work. Third, engagement with multi-stakeholder processes. So not just the IGF, but also other kinds of processes that are led by governments, NGOs, or industry stakeholders. Their participation ensures that diverse voices contribute to shaping policies that consider the needs of all. And it is also important in this circumstance that the public sector, industry, and NGOs open these kinds of opportunities for youth and actively seek out youth who could participate in their processes as well.
Because if they don’t do it in such a direct way, youth, as I mentioned before, could feel intimidated and think that they are not experienced enough to participate. Fourth, youth-led policy research. Young advocates can initiate research projects to understand the specific challenges faced by marginalized communities in accessing digital health resources, because evidence-based research can be a powerful tool for advocating targeted policy changes. And I think this is a possibility in many countries that have the resources for research, but it is still very far behind in countries, for example, in Latin America, where we don’t have so much support from public foundations or from the government to do research, and we also don’t have such a big research focus in our universities. So I think maybe one professor can bring this kind of perspective and inspire the students to form a research group. For example, universities in Brazil have student groups which meet some day of the week, or monthly, and discuss specific topics. I think this is a good practice, so that youth can start to create, start to discuss, and bring this to the university and to other colleagues and classmates. Of course, it would be great if some countries could also start to help other global South countries so that they can have more research and their students can participate more in these kinds of initiatives in their own countries. And fifth, innovation hubs for digital health: hubs in which young innovators, healthcare professionals, and policy makers can create solutions together.
In this sense, it would also be good to have funding from an organization or a company that can collaborate, so that these kinds of innovations can have a starting amount of financial resources, and so that youth can feel that they are able to become innovators in this field. And I think this kind of innovation addresses gaps in digital health accessibility. Some examples of youth-driven initiatives are, for example, digital health task forces: in several regions, youth-led task forces focus on creating policy recommendations for integrating digital health into broader internet governance frameworks. Also youth-led data privacy campaigns, in which youth can, for example, create dialogues in various communities and raise awareness about the importance of robust data privacy measures in digital health technologies, so that people and ordinary patients can understand why it’s important to protect their privacy when they access some kind of digital health tool. And global youth hackathons for health, with health challenges where youth can develop innovative apps and platforms addressing specific healthcare needs in their own communities. And I also consider another action: this movement of paid internships, so that students can have access to internships that are paid, and can equally participate in a practical application of what they are learning at university or what they are studying. So, well, I think that by actively participating in these initiatives, young advocates contribute fresh perspectives, innovative solutions, and commitment to digital health equity in internet governance policies, because they are digital natives.
I consider they can rapidly understand how the technologies can help them, but also their challenges and their issues, and they can become more active, as they are not just the future, but also the present. So, thank you. Thank you very much, Yawri.

Moderator:
And also, thank you once again to the panel for their responses. And so now we’ll move on to the Q&A session. So, if any on-site participants would like to raise their questions, please feel free to walk up to the mic.

Audience:
Hello, I’m Nicole. I’m a Year 2 student in Hong Kong.
In the case of another pandemic like COVID-19, how do you think current digital health can be developed and improved to contribute to society in recovering, and in ensuring each individual can receive accurate and consistent medical advice and treatment without physically visiting a healthcare facility, as it will be crowded with a lot of people and the elderly? Thank you.

Deborah Rogers:
Thanks. I think actually looking at some of the work that was done during COVID-19 is a really good example of how we can use digital health to address issues that come up during a pandemic. I think one of the things that has really been a challenge in the work that we do is that we speak directly to citizens and empower them in their own health. Given that the medical fraternity is quite patriarchal, that’s not usually a priority. What we found is that when an issue is something that happens to somebody else, then it isn’t seen as a need to provide people with the right information. But when COVID-19 happened, everybody was affected. Nobody had the information. It didn’t matter if you were the president of the country or if you were a student at a high school. No one had the information about the pandemic that was needed. So we were able to use really large-scale networks, things that were already there like Facebook, like WhatsApp, like SMS platforms, to be able to get information to people extremely quickly. In a time when the information was changing on a daily basis, this wasn’t something where you could take a lot of time, think through things and put up a website and think about how things are going to be talked about. This was happening in real time. So you continually had to be updating things. People continually had to get the latest information. And without that, many more people would have died than did already in the pandemic. I think what’s important, though, is for us not to forget the lessons of COVID-19. We very quickly forget, as human beings, when things go back to so-called normal, we very quickly forget the lessons that we learned. And so I think one of the really important things that needs to continue from COVID-19 is an understanding that knowledge is power in the patient or citizen’s hands. 
And this isn’t something that needs to be hoarded by the medical fraternity, that by giving information to people at a really large scale, you can improve their health and you actually make your life easier at a time when you are most needed. Digital health can’t replace a healthcare professional, but it certainly can reduce the burden for healthcare professionals. And so that’s a really important thing that we need to continue to consider as we move on from COVID-19. I think the other thing to remember is that we built up platforms, digital health platforms, that solved problems during COVID-19. Screening for symptoms, for example, gathering data that could be used for decision-making, sending out large-scale pieces of information to people. Many, many people in the digital health space reacted very quickly and created incredible platforms that could be used to solve the problems during COVID-19. Many of those no longer exist today. And so we need to remember that there needs to be an investment in digital health infrastructure in the long term so that we don’t have to spin up new solutions every time there is a new pandemic, because there will be another one. It’s not something that is going anywhere. So how are we preparing so that when the next pandemic comes, we’re not having to start from scratch all over again? And I think that’s something that we very quickly have forgotten.
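The screening-for-symptoms platforms Debbie mentions were often simple rule-based triage flows delivered over channels like WhatsApp or SMS. A toy version is sketched below; the symptom lists, weights, and cutoffs are all invented for illustration and are not medical guidance.

```python
# Red-flag symptoms escalate immediately; others accumulate a score.
RED_FLAGS = {"difficulty breathing", "chest pain"}
WEIGHTS = {"fever": 2, "cough": 1, "loss of smell": 2, "fatigue": 1}

def triage(symptoms):
    """Map a list of self-reported symptoms to a triage message."""
    symptoms = {s.strip().lower() for s in symptoms}
    if symptoms & RED_FLAGS:
        return "urgent: seek care now"
    score = sum(WEIGHTS.get(s, 0) for s in symptoms)
    if score >= 3:
        return "advise: arrange a test or teleconsultation"
    return "self-care: monitor symptoms at home"

print(triage(["fever", "loss of smell"]))  # score 4 -> advise
print(triage(["chest pain"]))              # red flag -> urgent
print(triage(["cough"]))                   # score 1 -> self-care
```

Because the rules are declarative data rather than code, guidance could be updated daily without redeploying the service, which matters when, as Debbie notes, information is changing in real time.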

Geralyn Miller:
I want to take a minute and address that as well, if you don’t mind.
A couple of things, I think, from the pandemic, and that’s a really great question, because as a society we want to learn from the past. There are two areas that I think are worthy of bringing forward from the pandemic. First is that there is incredible value in these cross-sector partnerships. So in public, private, and academic partnerships, we saw research light up on understanding the virus and on things like drug discovery. Some of these were government-sponsored consortia, others were more privately funded consortia, and a third class was just similar groups of people coming together, what I would call almost community-driven groups. So really this cross-sector collaboration, that’s the first thing. Second, there is some good standards work that I think was done during the pandemic that could be brought forward. We saw the advent of something called SMART Health Cards during the pandemic. SMART Health Cards are a digital representation of relevant clinical information. During the pandemic, they were used to represent vaccine status. So think of it as information about your vaccine status encoded in a QR code. There has been an extension of that, something called SMART Health Links, where you can encode a link to a source that would have a minimum set of clinical information. And it’s literally encoded in a QR code that can be put on a mobile device or printed on a card for somebody to take if they don’t have access to a mobile device. SMART Health Cards also reinforce the concept of some of the work being done by the IPS, or International Patient Summary, group, which is trying to drive a standard around representing a minimal set of clinical information that could be used in emergency services. And so some of those things that happened in the standards bodies were very powerful during the COVID-19 pandemic, and I would love to see more momentum around driving those use cases forward and also expanding them. Thank you.
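Geralyn’s description of SMART Health Cards, clinical information compressed and packed into a QR code, can be made concrete with a small sketch. In the published spec a card is a signed JWS whose payload is raw-DEFLATE-compressed JSON, and the QR uses a numeric encoding where each character becomes ord(c) - 45 as two digits. This sketch omits the signature (ES256) and the real FHIR payload structure, and the toy fields are invented, so it is not a valid card, just the shape of the encoding.

```python
import base64
import json
import zlib

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWS segments."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def toy_shc(payload: dict) -> str:
    """Compress a toy payload the way SMART Health Cards compress the
    JWS payload (raw DEFLATE), then apply the spec's numeric QR
    encoding. A real card is a *signed* header.payload.signature JWS;
    this sketch encodes only a payload-like segment."""
    raw = json.dumps(payload, separators=(",", ":")).encode()
    deflated = zlib.compress(raw)[2:-4]   # strip zlib header/checksum
    segment = b64url(deflated)
    return "shc:/" + "".join(f"{ord(c) - 45:02d}" for c in segment)

card = toy_shc({"vaccine": "XYZ-1", "doses": 2})  # invented fields
print(card[:20])
```

Scanning reverses the steps: pair the digits back into characters, base64url-decode, and inflate; the offline-verifiable signature is what makes the real scheme trustworthy.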

Moderator:
Thanks.

Ravindra Gupta:
Firstly, another COVID shouldn’t happen. That’s first.
Second, I don’t think that technology at any time failed. Actually, it proved that it was ready. Whether you looked at the fast-track development of vaccines, which had researchers collaborating across the globe through technology; repurposed drug use; artificial intelligence. I think almost every country, ours included, used a COVID app. We delivered 2.2 billion vaccinations, totally digitally. So I think digital health proved that it was ready. It is ready. Challenges will come, but I think technology is the one that saved lives. We wouldn’t be sitting in this room, trust me, if technology wasn’t around. The only thing that we should do, through forums like this, is keep the momentum going. The temptation is to forget COVID and go back to the old ways. There were incentives given by governments; there were flexibilities offered in terms of continuing the telehealth regulations, like in the United States. I think those should become permanent. That’s all we should do. So technology has already proved that it’s ready. We were waiting for COVID to shake us into using it. So I think technology is ready and will always be ready for anything that comes our way. Thank you.

Moderator:
Yawri, would you like to provide a response?

Yawri Carr:
Yeah. I just wanted to say that I consider that in the situation of a pandemic, telemedicine and also the implementation of robots, as in the case that I mentioned previously, are of huge importance and could be very useful, taking into consideration that it’s very dangerous for humans to attend or take care of people because of the risks of contagion. So I think that in these specific scenarios, the application of telemedicine and robots is particularly useful. Of course, taking into consideration that it’s an emergency, the robots should not be working alone; they should also be guided by humans. But at least they are protecting workers such as nurses, who are commonly not so valued in different societies, because the tasks that nurses do, for example, are normally considered dirty or not of great importance. So I think these kinds of technologies can protect not just the health of the patients that are infected by COVID or another pandemic, but also the work of medical professionals such as nurses, who are normally very exposed. And on the other side, I also remember the Open Science initiative that my country, Costa Rica, proposed to the World Health Organization, so that the initiatives, the projects, and the research done in the context of a pandemic are opened up and kept available for every person who is interested, and the data can be accessed without having to pay, without a patent being made of it. And I consider this also of extreme importance, because in a case of emergency we just don’t have time for that, and we should really try to cooperate with each other and respond to the emergency in a holistic and collaborative way. Thank you.

Moderator:
Thank you very much to the panel for your responses. Are there any other on-site questions? If not, then I’ll take the question from the chat. So what are some emerging trends and future directions in digital health literacy? And what do you suggest individuals do to stay informed and up-to-date in this rapidly evolving field, ensuring they have accurate guidance rather than outdated information?

Ravindra Gupta:
I’ll take that, because of a couple of initiatives we are running. One is on the technical community side. Within the health parliament that I run with my team, we have created co-labs. We are creating developers for health, working with companies like Google and others, because I think what we need to do is create developers to solve problems. So that’s one initiative, for people who are enthusiastic about being technical contributors to the digital transformation of health. The other thing: in the next three months, we’ll be starting courses for class 8 students on robotics and artificial intelligence, an elementary course. We want to educate them very early on so that they can choose what they want to do and are aware of what the opportunities are. In the same way, we are doing courses at a very elementary level for people to understand, rather than deep-diving into tech. And to everyone who is in health, I would strongly recommend: if you don’t know digital health, you will hit a zone of professional irrelevance. Please update. Whatever you do, whether it is a one-week course or a two-week course, just make sure that you know digital health from an ecosystem perspective. Thank you.

Moderator:
Would any other speaker like to take the question?

Geralyn Miller:
Yeah, just a few comments on that. I think it’s always a challenge, at the pace of innovation that we’re seeing today, to keep current. So I want to call out and acknowledge our panel here today and the people who put the panel together and gave us this opportunity. This is one way that the dialogue starts and that information is shared. More opportunities for people of similar interests to come together will always help advance the state of our collective understanding. So opportunities like this, and training as well: not just training from tech providers, but training infused into the academic system too. And so I would agree with what Dr. Gupta said there. But again, a call out to the folks who put together this panel, because I think this is one way that that starts. Thank you.

Moderator:
Thank you very much, Ms. Geralyn. So we have about five minutes left, so maybe we could go to closing remarks from each of the speakers, starting with Ms. Debbie.

Deborah Rogers:
I guess my closing remark would be that technology is a great enabler. It can actually be used to decrease the inequity that we see in health, but also in digital literacy. I am actually very positive about the future that we see with digital health. And I think Dr. Gupta is right, the technology is ready. We’ve seen many case studies where things have been done at a really large scale. This is no longer a fledgling area. This is now a mature and really large scale area of practice. And so I’m really excited to see what happens from this point. And I’m excited to see that we have youth involved in this panel because yes, absolutely, youth will be the people who will be building the next evolution in this space. So really excited to see how that works and to see how things evolve from here.

Ravindra Gupta:
I would say that in this age where patients are more informed than perhaps anyone about health conditions and treatment options, it is high time doctors know them before patients start telling them: “You don’t know about it? Let me tell you. I saw this.” So first, digital health is something that everyone in healthcare, whether a clinician or a paramedic, needs to learn. Second, if you’re talking about digital health, scalability comes first. So continuously upskill and cross-skill yourself.

Geralyn Miller:
And lastly, I must say thanks, Connie, for putting together this wonderful panel discussion. Ms. Sterling? Yeah, first off, I want to start by expressing my gratitude for being included in this. It was a wonderful opportunity. I want to echo the sentiment that youth play a huge role in this going forward, and I’m very appreciative that you brought everybody together under this umbrella. From a tech perspective, I agree with the panelists that digital health is here now. The one part I would add is that when we’re thinking about new, evolving technology like generative AI, let’s do this in a responsible way and open the dialogue around policy. Discussion is always healthy, and let’s make sure that this technology that we’re bringing to light with good intent benefits everyone. Thanks.

Yawri Carr:
Well, in my case, in conclusion: let us strive to be digital health leaders equipped not only with technical skills, but also with a profound commitment to equity. I consider that valuing the work of nurses is very important. Even as technology evolves, human professionals will still be very necessary, and it is a fact that technology can help us protect them, and the patients, in situations of emergency. We should also value the work of ethicists, so that when they have something to say they are not dismissed but taken into consideration, including when there are conflicts with, for example, profits, so that ethicists can have an opinion on that and contribute to the mission of responsible AI, and are not just there as decoration but are actually listened to. And of course, the role of youth is fundamental. As we see, all the youth-led initiatives that strengthen the mission of digital health literacy today can develop, in the future, in a very good environment that is inclusive, one that includes marginalized communities and the whole population. So I consider that health care, and digital health care, should no longer be a privilege, but a right. And yes, I’m very thankful for the opportunity to be here, to express my opinions, and to talk about youth as well. Thank you very much.

Moderator:
Thank you very much once again to the panel for your insightful responses. The workshop is now closed. Thank you very much for coming, and together we hope we can create a future where digital health resources are accessible and equitable, and can empower individuals to navigate their health journey confidently online. Thank you.

Audience

Speech speed

148 words per minute

Speech length

69 words

Speech time

28 secs

Deborah Rogers

Speech speed

171 words per minute

Speech length

2898 words

Speech time

1020 secs

Geralyn Miller

Speech speed

173 words per minute

Speech length

2706 words

Speech time

940 secs

Moderator

Speech speed

171 words per minute

Speech length

1670 words

Speech time

587 secs

Ravindra Gupta

Speech speed

206 words per minute

Speech length

3416 words

Speech time

997 secs

Yawri Carr

Speech speed

146 words per minute

Speech length

3219 words

Speech time

1325 secs

Beyond development: connectivity as human rights enabler | IGF 2023 Town Hall #61


Full session report

Nathalia Lobo

Brazil conducted a 5G auction in November 2021, with the majority of the revenue, over $9 billion, being allocated towards coverage commitments. This substantial investment showcases the country’s dedication to advancing its technological infrastructure. The auction resulted in a positive sentiment, as it is anticipated to greatly enhance connectivity in Brazil.

One notable project aimed at improving connectivity is the North Connected project, focusing on the North and Amazonic region. This initiative plans to deploy a comprehensive network of 12,000 kilometres of fibre optic cables into the Amazon riverbeds, ensuring efficient and reliable connectivity. The maintenance of these cables will be handled by a consortium of 12 different operators. The positive sentiment surrounding this project indicates its potential to significantly enhance connectivity in this region.

Furthermore, the impact of these connectivity projects extends to critical sectors such as healthcare and education. With the support from the funds generated by the 5G auction, hospitals that previously lacked internet access can now effectively manage patient information through online systems, providing better access to resources and improving healthcare services. Additionally, schools are being connected through the funding from the auction, enabling better education opportunities and facilitating digital learning.

Efforts are also being made to make the benefits of internet usage tangible and viable for the people of Brazil. The allocation of funds for connecting schools underscores the commitment to providing equal educational opportunities to all. Additionally, investments from the auction funding will be allocated to various projects, ensuring the overall development of the country’s digital infrastructure and making internet accessibility a reality for all.

While community networks play a crucial role in ensuring connectivity, they have specific needs that require tailored directives and policies. These networks operate in unique ways, making it challenging to create standard policies. A positive stance emphasizes the importance of understanding the needs of community networks and structuring effective public policy to support their viability. A working group has been established to study these specific needs and draft viable policies and directives.

One significant outcome of the efforts to improve connectivity is the North Connected project, which aims to bring competition and lower connection prices in the region. By increasing the number of service providers and fostering healthy competition, consumers can benefit from more affordable and accessible connectivity options. At least 12 new companies will operate in the region as a result, indicating a positive impact on reducing inequalities and improving access to digital services.

However, there are concerns regarding illegitimate community networks that don’t operate with the same level of efficiency and reliability as legitimate networks. Differentiating between legitimate and fake networks becomes imperative to ensure that public financing is not misused. The need to regulate and monitor community networks to prevent misuse highlights the challenges faced in this sector.

Overall, the initiatives and projects aimed at enhancing connectivity in Brazil signify a positive transformation in terms of infrastructure and access to technology. Community networks offer meaningful connectivity and foster learning about the digital world within communities, complementing the efforts of Internet Service Providers. The government continues to grapple with the challenges and responsibilities associated with supporting the growth of community networks, highlighting the need for a balanced approach to drive equitable and inclusive digital development in Brazil.

Audience

The analysis delves into various arguments surrounding the topic of internet access and community networks. One argument highlights concerns about the current system of charging for data, particularly as it is seen to benefit more developed communities, while placing a financial burden on users. The speaker expresses worries about high bandwidth resources, like videos, requiring more financial resources from users, thus exacerbating inequalities. This argument reflects a negative sentiment towards the capitalization of bandwidth.

Another critique focuses on the current telco model, suggesting that educational resources should not be dependent on a person’s ability to generate income. The speaker questions why access to educational resources should be gamified and proposes a different model where users can directly access frequently needed resources. This perspective aligns with SDG 4 (Quality Education) and SDG 10 (Reduced Inequalities). It also carries a negative sentiment.

On a positive note, the case of Finland is cited as an example of a more honest and beneficial business model for data plans. In Finland, data plans do not have a data cap but differentiate based on speed, providing everyone with a flat rate. This positive sentiment is supported by evidence of negligible variable costs for data volume, especially in mobile services.

However, the analysis reveals that the project Wikipedia Zero, which aimed to provide zero-rated access to Wikipedia versions, failed to gain substantial traction and was discontinued in 2017 due to low access numbers. This is considered a negative outcome for the project.

The analysis also highlights the importance of revisiting access points and connection questions in communities facing struggles related to conflict and climate change. Access numbers to internet services, such as those provided by the Wikipedia Zero project, are questioned in communities where zero connectivity is a resilience measure. This observation reveals a neutral sentiment towards the subject.

Additionally, the high costs associated with community networks in remote areas are flagged as a concern. It is noted that individuals in Chihuahua and Mexico need to have a daily income of at least $3 or $4 to afford connectivity, while some communities resort to engaging in illegal activities to finance their access. This negative sentiment highlights the financial challenges faced by communities in remote areas when it comes to internet access.

The analysis further reveals that community networks sometimes depend on weak infrastructures, which affects the quality and reliability of their services. This observation adds to the negative sentiment surrounding the topic.

Issues with government policies regarding access to fiber networks for communities are also raised. The analysis suggests that despite the presence of fiber networks near communities, government policies restrict their access. However, there is optimism about future developments, particularly with the Amazonian network project.

The operation of community networks is shown to vary depending on their context. For example, community networks in Africa function more like small businesses, while those in Mexico or Colombia display a greater level of political organization. This insight highlights the diversity in the operation models of community networks.

Lastly, the misconception of poor service quality in community networks is challenged. The analysis presents evidence of good performance and positive impacts of community networks in communities. This positive sentiment encourages a re-evaluation of misconceptions and brings attention to the potential benefits of community networks.

In conclusion, the analysis provides a comprehensive overview of perspectives on internet access and community networks. It questions the current system of charging for data, critiques the telco model, and cites Finland as an example of a more beneficial business model for data plans. It notes the discontinuation of the Wikipedia Zero project and raises questions about access in communities facing challenges like conflict and climate change. It examines the high costs and weak infrastructure associated with community networks in remote areas, points out restrictive government policies on access to fibre networks, and highlights the diversity in how community networks operate. Finally, it challenges misconceptions about poor service quality in community networks, emphasizing their positive performance and impact in communities.

Jane Coffin

This extended summary delves into the importance of diverse networks, grassroots advocacy, community networks, public-private partnerships, and structural separation networks in the context of global internet access and connectivity. These points are supported by various pieces of evidence and arguments.

Firstly, the importance of diverse networks is highlighted, with a focus on how they contribute to global internet access, lower prices, and reaching more people. It is demonstrated through the challenges faced by Liquid Telecom in deploying fibre from Zambia to South Africa due to complications, as well as the significance of connectivity being delayed by regulatory issues. This highlights the need for diverse networks to ensure better access, affordability, and inclusivity in the global internet landscape.

The significance of grassroots advocacy and multi-stakeholder approaches in promoting connectivity is emphasised. Personal experiences of working on community network projects are shared, underscoring the collective power of communities in negotiating with governments. This highlights the role of advocacy and partnerships in bridging the digital divide and ensuring that connectivity initiatives are inclusive and sustainable.

The effectiveness of community networks in providing connectivity in regions where major providers struggle to make a profit is discussed. The example of East Carroll Parish, Louisiana, where a community network was utilised to provide connectivity, exemplifies how these networks can fill gaps and offer diverse types of connectivity. This showcases the potential of community-driven initiatives in expanding internet access to underserved areas.

The role of public-private partnerships and innovative financial models in funding connectivity projects is emphasised. The Connect Humanity project is cited as an example. This underlines the importance of collaboration between public and private sectors, as well as the need for innovative financing mechanisms, to overcome financial barriers and ensure sustainable investment in connectivity infrastructure.

Structural separation networks are presented as a viable option for reducing costs and improving connectivity. These networks involve one party managing the network while others operate their services on it. This model is being explored in parts of the US, where municipalities are demanding greater accountability. The potential cost-efficiency and improved connectivity offered by structural separation networks make them an attractive option for expanding global internet access.

Lastly, the summary highlights that communities running their networks can deliver reliable connectivity. It stresses that such networks are not unreliable but are managed by skilled technologists. These community networks require a long-term business plan and substantial financial backing to ensure sustainability. This insight underscores the importance of community involvement and support in achieving sustainable and robust connectivity solutions.

In conclusion, the extended summary underscores the importance of diverse networks, grassroots advocacy, community networks, public-private partnerships, and structural separation networks in promoting global internet access. These insights are supplemented by evidence and arguments from various sources. It is evident that a multi-faceted approach, involving collaboration, innovation, and community empowerment, is crucial for achieving connectivity goals and bridging the digital divide on a global scale.

Raquel Renno Nunes

The analysis explores the important issue of connectivity and stresses the significance of internet accessibility as a fundamental human right. Notably, Raquel Renno Nunes plays a crucial role in this area as a program officer at Article 19, where she addresses various connectivity issues. Her responsibilities mainly focus on infrastructure and involve collaboration with standard-setting organizations such as the ITU and ITU-R.

Undoubtedly, the COVID-19 pandemic has highlighted the pivotal role of connectivity, particularly in enabling the right to health. The outbreak has underscored the critical need for accessible and reliable internet connections to ensure the well-being and improved access to healthcare services for all individuals. In this context, connectivity has emerged as a vital tool in bridging the digital divide and reducing inequalities.

One of the central debates surrounding connectivity revolves around whether internet access should be treated as a human rights issue or simply as a commercial service. There are two contrasting ideologies on this matter. On one hand, the viewpoint advocating for the recognition of internet access as a basic human right argues that governments and relevant organizations should ensure equal access and availability of the internet for all individuals. On the other hand, some argue that internet access should function solely as a commercial service, subject to market forces and individual affordability.

The discussion aims to bring together these differing perspectives and comprehend the merits of each argument. Its goal is to comprehensively explore the concept of connectivity and determine whether all forms of connectivity are inherently beneficial. By considering these diverse views, it becomes possible to develop a more nuanced understanding of the issue at hand.

In conclusion, the analysis underscores the importance of connectivity in our society and examines the debate surrounding internet accessibility as a human right. It highlights the invaluable role of individuals like Raquel Renno Nunes in addressing connectivity challenges and emphasizes the positive impact of accessible internet during the COVID-19 pandemic. The discussion of various viewpoints contributes to a broader perspective on the issue, stimulating further dialogue and exploration of the different facets of connectivity.

Robert Pepper

The analysis of the given information reveals several key points regarding internet connectivity. Firstly, it is stated that 95% of the global population now has access to broadband. However, despite this high percentage, there are still around 2 billion people who are not online. This discrepancy highlights the shift from a coverage gap to a usage gap in internet connectivity. Affordability is identified as the main hindrance to internet usage, particularly in sub-Saharan Africa. The high cost of devices and monthly service is preventing many individuals from accessing the internet.

Furthermore, the benefits of internet access are seen as serving human rights. It is noted that people use the internet for educational purposes, to receive and create information. In fact, a significant 73% of people believe that access and use of the internet should be considered a human right. This highlights the importance of internet connectivity in empowering individuals and promoting equality.

On the other hand, various barriers to connectivity are observed. Infrastructure limitations, such as the backhaul and middle mile, are identified as one of the challenges in getting people connected. Additionally, the architecture of telecom termination monopoly is mentioned as a barrier.

In terms of specific services, the concept of zero rating is discussed. Zero rating is the practice of not charging for data usage on specific applications or websites. Discover is highlighted as a net-neutral zero rating service that allows access to any application or website. This service is seen as beneficial as it helps prepaid data users stay connected even when they run out of their data plan.

It is also worth noting that not all zero rating services are considered anti-competitive. Some zero rating services have been found to be net neutral and pro-consumer even under the stringent net neutrality rules adopted by the FCC under Chairman Wheeler.

The analysis also points out the outdated nature of legacy models and regulations in the telecom industry. The traditional telecom network architecture, engineering, business model, and regulation are based on outdated principles. The emergence of modern flat IP networks has changed the costs associated with data usage, rendering the legacy models irrelevant.

To conclude, the analysis reveals the challenges and opportunities in internet connectivity. While a significant portion of the global population now has access to broadband, affordability and infrastructure limitations remain significant barriers. The benefits of internet access in terms of human rights and empowerment are recognized. Additionally, the emergence of zero rating services and the need for modernization in the telecom industry are highlighted. These findings emphasize the importance of addressing these issues to ensure equal and affordable access to the internet for all.
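The coverage-gap and usage-gap figures above can be sanity-checked with a small sketch. The split (roughly 8 billion people, 95% broadband coverage, and a 2.1 billion usage gap of people who are covered but not online) is taken from the summary; the helper function and its name are ours, for illustration only.

```python
# Split the world population into a coverage gap (no broadband
# available) and a usage gap (broadband available but not used),
# using the round figures quoted in the session.

def connectivity_gaps(population_bn: float, coverage_share: float,
                      usage_gap_bn: float) -> dict:
    """Return population groups in billions, rounded to 2 decimals."""
    covered = population_bn * coverage_share
    coverage_gap = population_bn - covered      # no network available
    total_offline = coverage_gap + usage_gap_bn
    online = population_bn - total_offline
    return {
        "covered": round(covered, 2),
        "coverage_gap": round(coverage_gap, 2),
        "total_offline": round(total_offline, 2),
        "online": round(online, 2),
    }

gaps = connectivity_gaps(population_bn=8.0, coverage_share=0.95,
                         usage_gap_bn=2.1)
print(gaps)
```

With these inputs the coverage gap is about 0.4 billion and roughly 2.5 billion people are offline in total, matching the orders of magnitude cited in the discussion and showing that the usage gap is now several times larger than the coverage gap.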

Thomas Lohninger

The analysis focuses on connectivity and internet access, highlighting several arguments and supporting evidence. One argument is that debates that hinder the goal of connecting the unconnected need to be confronted rather than allowed to waste time. The discussion points out that although promises have been made about 5G, the actual impact and benefits of the technology have not materialized. In addition, there seems to be no new technology empowered by 5G, and little reason for consumers to upgrade from their current 100 to 300 megabit connection.

The analysis also highlights the negative consequences of network fees or “fair share” contributions. It suggests that proposed fees could harm smaller networks and lead to increased fragmentation of the internet. Another important argument raised is the potential negative impact of zero rating, where certain companies are given an unfair advantage. This practice could potentially violate net neutrality and hinder efforts to achieve reduced inequalities in connectivity.

Thomas Lohninger, in particular, raises concerns about zero rating programs limiting consumer choice and hindering innovation. He highlights examples such as a Smart Net offering in Portugal, where the affordability of certain services compared to others raised concerns about an “internet à la carte” system. The analysis also explores alternative approaches to data plans. It suggests that instead of having data caps, data plans could be differentiated based on the speed of internet access, as implemented in Finland. This is seen as a more honest and efficient business model for telecom companies.
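A rough back-of-the-envelope sketch can illustrate why speed-based differentiation is arguably the more honest model: the access speed itself already bounds how much a line can ever transfer, and typical data caps sit far below that bound. The figures and function name below are ours, for illustration only, not from the session.

```python
# Theoretical maximum monthly transfer at a constant access speed.
# A data cap well below this figure restricts volume the link could
# carry anyway; speed is the real engineering constraint.

SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s in a 30-day month

def max_monthly_tb(mbps: float) -> float:
    """Theoretical monthly transfer (decimal TB) at a constant speed."""
    bytes_per_second = mbps * 1_000_000 / 8
    return bytes_per_second * SECONDS_PER_MONTH / 1e12

# A 100 Mbps line could in theory move about 32.4 TB per month,
# so an illustrative 50 GB cap is well under 1% of that ceiling.
print(round(max_monthly_tb(100), 1))  # prints 32.4
```

Under these assumptions, even a modest mobile connection could move orders of magnitude more data than common caps allow, which is consistent with the argument that volume pricing reflects a legacy business model rather than network cost.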

The analysis also notes that total data consumption does not necessarily impact network operation unless it leads to congestion. It criticizes the use of legacy models based on minutes of use, distance, and time, which are no longer relevant in today’s data networks. Noteworthy observations include the termination of projects like Wikipedia Zero, which aimed to provide free access to specific services. The low usage of Wikipedia Zero led to its discontinuation.

Furthermore, it is suggested that corporations could make better use of available bandwidth by offering flat rates during off-peak hours. The analysis argues that in instances where the mobile network is often unused during late-night hours in countries with connectivity issues, the refusal to open the floodgates is primarily due to corporate greed, rather than capacity or cost issues.

In conclusion, the extended analysis emphasizes the need to prioritize connectivity and internet access for all. It proposes addressing debates that hinder these goals, critiquing telecom industry PR campaigns, and examining the consequences of network fees, zero rating practices, and data plans. The analysis suggests alternative approaches such as bandwidth-based data plans and flat rates during off-peak hours to optimize available resources. These insights provide valuable perspectives for policymakers, businesses, and individuals involved in promoting inclusive and accessible internet connectivity.

Session transcript

Raquel Renno Nunes:
from Brazil, and we are just testing the camera. She would also need us to go to the presentation, because she’s going to show some PowerPoint slides and we’re going to run them from here, since she’s not able to access the link via her computer. But anyway, I’ll start by introducing myself. I’m Raquel Renno from Article 19, a program officer, and I’m responsible for connectivity issues. I work mostly at the infrastructure level and in standard-setting organizations; my work specifically is in the ITU and ITU-R. I’m joined here by Nathalia Lobo, who is online; she’s the director of the sectoral policy department of the Ministry of Communications of Brazil. Then Robert Pepper, the head of global connectivity policy and planning, ex-FCC, ex-NTIA, a consultant to other regulators, with vast experience in spectrum management. Then Jane Coffin, also an expert with extensive experience in the technical community, the public sector, and the private sector; she’s currently a consultant and an expert providing information on community networks, IXPs, interconnection, peering, and community development. And Thomas Lohninger, the executive director of the digital rights organization epicenter.works in Vienna, Austria, who also works a lot on net neutrality issues, specifically in the European Union but not limited to it. This is going to be an open discussion. We’re going to take questions and comments from the people here in this room but also online, and the idea is to bring different views; you can see that we have people from different backgrounds here. Basically, the idea is to update and bring different views on connectivity issues. A lot has been said about how connectivity is important and how it is a human rights enabler, specifically after the pandemic, when even the right to health was related to connectivity. But we still face the digital divide, and we have new kinds of digital divide; some people say that we have many
digital divides. The SDGs still frame access as a development issue, but we are also facing a human rights issue. So there are two different ideas and assessments of the right to access the Internet: not just as a human right, but also not merely as a commercial service. How should it be tackled, how should it be seen, how should it be framed? So we are here to bring together these different ideas, and also to understand whether any connectivity is good connectivity, and what challenges and opportunities we might have nowadays. I would like to start with Nathalia, if possible. Or not yet, Lucas? Okay, so we’re going to start with the people in this room and leave Nathalia for later, is that okay? Okay, so we can start. Please, Pepper, if you

Robert Pepper:
I can start there, I think that works better. Thank you very much, it's great to be here, and thank you for the invitation to the panel. Maybe to start things off: one of the things I did at Meta with the Economist Intelligence Unit, now called Economist Impact, was a study we started back in 2017, a six-year time series called the Inclusive Internet Index. It looked at 54-55 indicators across a hundred countries, and you've seen some of this; Brazil does quite well. Early on, the connectivity issue was coverage: people did not have a broadband connection available to them. Over the last six or seven years, and about three years ago in particular, we saw a shift from a coverage gap to a usage gap. The latest data, presented by the ITU and the UN Broadband Commission three weeks ago in New York at their annual meeting before the UN General Assembly, is that 95% of the world's population now has a broadband connection available to them. About 400 million people out of the 8 billion do not have a broadband connection available, and that remaining 400 million will be served by satellite. That was a general conclusion not just of the Broadband Commission but of the GSMA, the mobile operators, and the satellite operators who were in the room, and that becomes extremely important. On the other hand, there are about 2.1 billion people who could be online but are not online; that is the usage gap. So there are about 2 billion people not connected, and then there are people who are under-connected, which goes to the question of what meaningful connectivity means. So the question is: why are they not connected?

One of the other projects we did with the Economist surveyed people in sub-Saharan Africa who were not online, in about 23-24 countries. What they found, and these are people who are not online, was that the vast majority had a connection potentially available. The number one reason was affordability, of devices and of monthly service. The second reason, the way it was framed, was "I don't know how to use the internet or what to use it for": digital literacy questions. The third major reason was the lack of local, relevant content, which also goes to the question of why somebody should be online at all. And then there's a separate issue with electricity: with no electricity, even if you have devices, you're not going to be able to charge them. So we've seen this shift from a coverage gap to a usage gap, which is about adoption.

Why is that important? Because being online, and we especially learned this during the pandemic, provides access to services that are directly related to fundamental human rights: education, and the ability not just to receive information but also to speak and create information. This is really fundamental when you look at Article 19, not just the organization but the article itself. One of the things the Economist did as part of this project was go into the field in each of those hundred countries (actually 98, because two countries do not permit surveys, so they couldn't field one there). They called it the Value of the Internet survey and asked people: what do you use the internet for, how do you value it, and how has it improved your life? In the last two years of the pandemic it focused specifically on questions about well-being. The answers change by region; there are differences among regions. On questions about education, the way people used the internet for their children's education during the pandemic when things were shut down, what was a little surprising was that people in sub-Saharan Africa felt that being online was more important for their children's education than people in Europe did.

Across the world, as part of that survey, on average about three-quarters of people, 73% year over year (so across more than one year), said access to and use of the internet should be a human right. What's interesting is that this was especially the case in emerging markets: in sub-Saharan Africa it was 76%, Middle East and North Africa 75%, Asia 74%, Latin America 71%. In Europe it was only 69%, and in North America 57%, because in Europe and North America people take being online for granted. At least, we don't know that for a fact; that's my presumption, my hypothesis of why it's lower: people think it's like turning on the tap. But in parts of the world where you cannot take the internet for granted, people see it as even more important, as something that really should be a human right. So I'll stop there. I'm happy to dive into more of the data later, but I wanted to set the scene: connectivity, why it's important for human rights broadly, and how there are real data reinforcing that, from both people who are online and people who are not yet online. Then we can have a conversation about how we get people online so that they can benefit from being online in ways that serve human rights. Thank you.
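The coverage-gap versus usage-gap arithmetic above can be sketched in a few lines. The figures are the approximate ones quoted in the talk (roughly 8 billion people, ~95% broadband coverage, ~2.1 billion people covered but offline), so this is only a back-of-the-envelope check, not official ITU data:

```python
# Back-of-the-envelope sketch of the coverage gap vs. the usage gap,
# using the rough figures quoted above. All numbers are approximate,
# as they were in the talk.

WORLD_POPULATION = 8_000_000_000
COVERAGE_RATE = 0.95          # share with a broadband connection available
USAGE_GAP = 2_100_000_000     # people covered by a network but not online

coverage_gap = round(WORLD_POPULATION * (1 - COVERAGE_RATE))  # no network available
covered = WORLD_POPULATION - coverage_gap                     # network available
online = covered - USAGE_GAP                                  # actually using it

print(f"coverage gap: {coverage_gap / 1e6:.0f} million")  # ~400 million, as quoted
print(f"usage gap:    {USAGE_GAP / 1e6:.0f} million")
print(f"online:       {online / 1e9:.1f} billion")
```

The point of the sketch is that the usage gap is now roughly five times larger than the coverage gap, which is why the talk shifts the policy focus from building networks to adoption.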

Jane Coffin:
I'll follow on with another Economist story, from 2014, and I know some of the people involved, so it's a true story; well, I know the company. It's called Liquid Telecom, and they provide a lot of fiber connectivity in sub-Saharan Africa. The point of telling you this is to focus on the importance of diversity of networks for access, and of course on lowering prices, competition, and bringing more people onto the global internet. Liquid Telecom has been providing connectivity for years and years, and one of the hardest things for them to do is get fiber across borders. Now, that will change with the LEOs that are going up, but that's a whole other cross-border issue, which I'd love to hear Pepper's opinion on, and folks in the room, later. This one is about how hard it was to take fiber from Zambia to South Africa. Negotiations went on for over a year between the two countries, and part of the hang-up was an historic bridge, so a cultural ministry was involved on both sides of the border. The telecom ministries were involved on both sides of the border. The border patrol was involved on both sides of the border. You get the picture: the regulators were involved on both sides of the border. So a year goes by and there's still no fiber deployed. Now, if you're going into business and deploying fiber, your business model is not a year's wait; it's get the fiber out, get it deployed, you have the investment. And Liquid does do a lot of great work with developing communities as well, so we're not just talking about a big corporate giant that doesn't think about working with communities that have been unconnected and underserved. They finally put some people in a boat, and the article in The Economist from July 5th, 2014 said the connectivity between the two countries was nearly thwarted by a swarm of bees: when they put the fellow in the boat, a bunch of bees started to attack him. The CEO took off one of his shirts and wrapped it around the guy's head, they got in the boat, and they took the fiber across the river. This is a true story, and there have been other stories like it all around the world where you have these border issues. Of course there are going to be complications in some parts of the world, but this is more a case of governments just not coming together and negotiating the agreements to quickly get connectivity deployed across borders. With mobile networks it's a little different; sometimes it's a power-level issue. I used to live in Armenia, and there were all sorts of power-level issues where operators would blast their signals too strongly from one country into another, so you were picking up the signal of an operator you didn't intend to have and paying a lot more money.

The point of this story is that there are ways connectivity can be deployed, but it gets hung up. If you're advocating for connectivity from a grassroots perspective, you can help with governments. I've worked with Pepper and others at this table talking to governments, and I was in government years ago. It takes a multistakeholder approach to make sure governments understand, whether from a corporate or a nonprofit perspective, that there are things that need to be done to speed up connectivity. The Liquid story is of course about a company. I've worked on many community network projects, and I helped lead a movement back in 2016 in Guadalajara, when I was at the Internet Society: we brought about 40 different community network advocates together from all walks of life to talk about the importance of working together. As a collective you can often have more power when you're negotiating, whether in an ITU meeting or here, when you bring similar concepts together, share those stories, and come up with talking points together. Because if you're by yourself, sometimes you're not going to make that difference when negotiating with folks who, quite frankly, may have more power.

Community networks have come in, from a diversification perspective, to bring in last-mile and minimum connectivity. Community networks can provide types of connectivity that some of the bigger providers may not be interested in providing, because they don't get a return on investment in certain communities; time and distance equal money, and if there are only a certain number of people in a certain place, some providers don't go in because they can't get that return. The smaller networks, some fixed wireless, some just Wi-Fi mesh networks, are creating change. Most recently I was working with some folks in a place called East Carroll Parish, Louisiana, which had been named the poorest town in America, and they were tired after the pandemic because they weren't connected. So this is another story of public-private partnerships and of something called capital stacks, which is just a fancy term for putting a lot of different types of money together: concessional capital, philanthropic money (meaning foundations and grants), banks, and companies that can provide loans as well. That's what the capital stack means: stacks of different types of funding. Blended finance is the other fancy-pants term for it; it's just lots of different funds coming together to de-risk investment. You can do that in small towns and in poor communities, and this is what a group called Connect Humanity, the startup I was working with before, is doing; other organizations are looking at this too, even some of the folks in the UN and the Giga project. I'm not speaking for them; I work adjacent to them on a project. I would just say that you're finding more innovative ways to bring in these PPPs, and they are very different from the huge infrastructure projects, the dams and roads we saw in the 60s, 70s, 80s, and 90s. Infrastructure is expensive; if you talk to anyone in the space who's building out that connectivity, it's billions of dollars to build networks. But it can also be supplemented with the millions and the tens of thousands of dollars of these smaller networks. So I'm just here to say that there are ways different types of organizations can work together to achieve the same thing, which is connecting the unconnected. Digital skills training is a whole separate issue I won't get into; I'm more on the infrastructure side. There are ways to work together, and it's not as if the capital out there is something evil. You've got to look at capital in a very clinical way when you're working at the grassroots level, and be as smart as the banks, as smart as the people putting this infrastructure together. Thank you.
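The "capital stack" and "blended finance" idea described above can be illustrated with a toy calculation showing how mixing grants and concessional loans with bank debt lowers the overall cost of capital. All tranche sizes and rates below are invented for illustration and do not come from any real project:

```python
# Hypothetical capital stack for a small community network, illustrating
# how blended finance de-risks investment: the blended cost of capital is
# the amount-weighted average of each tranche's rate. All figures invented.

tranches = [
    # (name, amount_usd, annual_rate)
    ("philanthropic grant",  200_000, 0.00),  # grants expect no financial return
    ("concessional loan",    300_000, 0.02),  # below-market development lender
    ("commercial bank loan", 500_000, 0.08),  # market-rate debt
]

total = sum(amount for _, amount, _ in tranches)
blended_rate = sum(amount * rate for _, amount, rate in tranches) / total

print(f"total raised: ${total:,}")
print(f"blended rate: {blended_rate:.1%}")  # well below the 8% bank-only rate
```

With these made-up numbers the blended rate comes out at 4.6%, which is the sense in which the cheaper tranches "de-risk" the commercial one: the project only has to clear a much lower overall hurdle.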

Thomas Lohninger:
Thank you. My name is Thomas Lohninger, from the Austrian digital rights NGO Epicenter Works, and I am surely the one at this table with the least on-the-ground expertise when it comes to building community networks. But I absolutely think this is one of the key issues that should be in the focus of this IGF and of the digital rights debate in general, and what I might be able to contribute is to point out how debates we are having right now, globally but particularly in Europe, are actually working against this goal of connecting the unconnected. The first thing is that I really want to call out some of the PR campaigns we have seen from the telecom industry in recent years. I mean that whole debate around 5G, with all of the promises of what it should bring: none of this has materialized, and I think that money would have been far better spent actually bringing normal, best-effort, end-to-end internet connectivity to all corners of the world, and doing it in an affordable manner. Because of course satellite internet is available everywhere, but it also needs to come to the people in the form of a device that can be powered and sustained in the local circumstances. And now we already have debates around 6G, while we watch all of the promises of 5G imploding in themselves. There is no killer app, there is no new technology empowered by it; it is at best a somewhat faster internet connection. And what do we see in the countries where it already exists? People are actually not interested. If you have a 100 to 300 megabit connection, there is very little reason as a consumer to upgrade. So I would really question the premise of a lot of the international telecom debates; we should ask whether that energy, that focus, and that money are well spent, and I think we simply have bigger problems.
And then there is a second big issue I want to raise, which also ties into this whole issue of connecting everyone on the globe, and that is the issue of network fees, often dubbed "fair share" or "fair contribution". This idea is currently making the rounds because ETNO, the European telecom operators' association, was quite successful in lobbying a former CEO of France Telecom, currently serving as digital commissioner in the EU, to adopt it. It is a very old idea; we know it from the telephony era. It is basically: you want to reach my customers, you have to pay me money. This idea from the telephony era is being forced upon the interconnection world, so that whenever autonomous networks connect with each other, in the so-called interconnection sphere, according to the fair share network fee proposal a lot of money would need to be exchanged before that connection can be made. If you think that through, you will see many problems, and one in particular is that smaller networks will suffer. We already have many small ISPs saying they are actually afraid for their ability to compete, for their ability to connect to other networks, if such a proposal were to really become the law of the land. Because when you look at the interconnection world right now, this is not an area for making a profit. This is usually nerds connecting networks with each other. It is: we see that some connection is congested and there is packet loss, so let's just put another cable there, and maybe the cost of that cable and the connection itself is the price you have to pay. But it is not a way to make money.
We already see some telecom operators abusing interconnection as a tool to maximize their profits, and if this were to become the law of the land, if every interconnection had to follow this principle, I think we would see many more problems in the global internet. Right now you can connect to every other point of the internet, which is what we call end-to-end; this could become a concept of the past, and maybe we would wake up in a splinternet, a fragmented internet, where only a few big telecom companies are really reachable globally. All the others might have a far lower chance of connecting, and that would certainly deteriorate the ability of, particularly, global majority countries to connect to privacy-friendly alternatives to, let's say, Google or Meta. The last thing I want to mention, because there is a very interesting court case going on in the Constitutional Court of Colombia, is the issue of zero rating: price differentiation, where you make certain data packages more expensive or cheaper than others. In many global majority countries that is a very common way of connecting: when you buy a SIM card, you get free WhatsApp or free Facebook or other services, whether you have gigabytes on your SIM card or not. That of course gives an unfair advantage to companies already in a position of power, if they are the only ones reachable for people who otherwise would only have a telephone number and not be able to access the full internet. I think Free Basics is certainly a project that needs to be discussed in that context, and it is my hope that the Colombian court follows the example of Canada, India, and Europe and outlaws zero rating as a practice that violates net neutrality.
Because if we want to bring meaningful connectivity to everywhere in the world, it needs to encompass the whole internet, not just a walled garden and a handful of selected services. Thank you. Now we're going to

Raquel Renno Nunes:
have Nathalia, who is joining us online.

Nathalia Lobo:
Hello everyone. Thank you so much for waiting for me; I had some trouble getting in. Let me try to share my screen. Can you all see it? ("Yes, we can see it. But can you put it in show mode?") There you go. Great, thank you. So I'm going to talk a little bit about what we have been doing in Brazil on connectivity. It speaks a lot to what you have all said, particularly Jane and Pepper. Let me tell you a bit about what Brazil is and what our challenge is. Brazil is the fifth largest country by geographical area. We have 203 million people, the largest city in Latin America with 12.3 million people, over 5,500 municipalities, and actually more than 40,000 localities. We are the largest economy in Latin America. And look at our size: a lot of Europe fits in there, and great parts of Africa, so you can see the size of our challenge. Connecting all the people in Brazilian territory is a challenge, especially when you cannot take any single technology for granted; you need them all working together so that we can get everyone onto the internet. Basically all of our connectivity policies in Brazil have addressed the supply side: how do we get networks to the people? Today we have managed to connect over 90% of our households, and how we reach the remaining 10% is the question. In our 5G auction, held in November 2021, the obligations put over 90% of the economic value into coverage commitments. Over $9 billion in revenue came from that spectrum auction, and 90% of it was converted into obligations; most of the amount above the reserve price was converted into commitments, investments running until 2030, including 4G obligations in localities.

Over 7,500 localities that had no service at all are going to get 4G mobile broadband, and we will have 5G obligations in all 5,570 municipalities. We also have the Connected North. This is our dearest project: it covers the northern, Amazonic region, where connectivity is still poor in terms of quality, resilience, and price. We are deploying 12,000 kilometers of fiber optic cable along the Amazon riverbeds. The CAPEX is public, but the maintenance and operation of these cables afterwards is done by a consortium of 12 different operators that will run them. It is not easy: maintaining fiber optic cable along riverbeds means anchors dropped from boats can rip the cables off, there are problems with logs in the rivers, and many other issues make it quite expensive. Public partnerships are coming in to make all of this possible. Why is this so important for enabling human rights? Well, it is transforming lives in the region. These are our boats deploying the fiber optic cables. We have regions that had no internet, where hospitals had no way to enter their protocols; they used to ship patient information through the post office, and now they have a simple system where they can enter all the information, which gives them access to more resources, whether medication or the amount of money they receive to attend to their patients, plus telemedicine with the best hospitals in São Paulo for the people who live there.

And schools: in the 5G auction we also have a budget for connecting schools, a little less than $1 billion, and we are seeking to deliver full connectivity, not only the speed getting into the schools but also the Wi-Fi and the pedagogical use the schools will make of it. So that is a bit of how we used the 5G auction to bring in other perspectives, not only the private perspective but public policy. We also have an investment fund to structure what we need; we have many projects to do with this funding, especially for public access points and transportation, and we have financing lines at subsidized rates. So here we go: we have this set of commitments, we have funds to make private investments viable, and we are now also turning to public policies on the demand side, about usage, about making tangible benefits from internet usage in people's lives. So I thank you. That's all for now.

Raquel Renno Nunes:
Thank you, Nathalia. Sorry, is this on? Okay, thanks. Do we have questions? Because I think I see people here. So, Nathalia, you point out something that's actually really, really important.

Robert Pepper:
One of the biggest barriers to getting people connected is less in the access network; the backhaul and middle mile is absolutely one of the barriers. So the project of laying fiber in the river, in the Amazon, to bring broadband connectivity to those regions is essential. Another example of that is something we did partnering with Airtel and a small company called BCS in Uganda, back in 2017-2018. Airtel had a 4G network across Kampala and the urban areas of Uganda, but it did not have 4G on all its cell sites. It had cell sites across Uganda covering about 90% of the population, but in the rural northwest part of Uganda it was 2G, GSM, some SMS; they couldn't do the internet. The reason was that they couldn't get backhaul to those tower sites; they only had narrowband microwave, so they couldn't get broadband to the towers. One of the projects we did was build a wholesale backhaul network, 770 kilometers across northwest rural Uganda, where there were no roads, across the Nile. And it was an open cable, so there was capacity for all the operators, and it enabled Airtel to convert from 2G to 4G once they got broadband to the cell sites. So this is very analogous to the project in Brazil, which is really, really important.

I would, though, like to respond very quickly to Thomas. The zero rating question is extremely important, and I do think there was an evolution from Free Basics, which was limited, to a less than perfect but much, much better and actually net-neutral service called Discover. Because it uses a proxy server, any application or website is available, as opposed to the limited selection on the old Free Basics, which is an 8-to-10-year-old project. And what's interesting is that the way people use it actually benefits them, because it's not a degraded internet. A lot of people use it in one of two ways. One is as an introduction to being online, after which people actually want to be online. The other, evolving use is, I think, extraordinarily pro-consumer, especially in emerging markets, and it's happening in dozens of countries among people who rely on prepaid data packs. In the past, when they ran out of their data plan and could not afford to top up, they were disconnected. What's happening in many countries now is that when people run out of their data plan, they stay connected using the zero-rated narrowband version, so they have something, and when they can top up they go back to their full plan. It actually helps both the consumer and the operator, in that there's no transaction cost of being reconnected or redoing plans. So again, it's not perfect, but it has a very positive social and consumer benefit, and the idea is that eventually everybody will want to be on the fully accessible internet with all of the applications; that's, I think, everybody's goal. And by the way, when the FCC had its net neutrality rules, which were very stringent under Chairman Wheeler, it found that there was nothing inherently anti-net-neutrality about zero rating. There were some zero rating services it found anti-competitive, in violation of net neutrality; there were others that were net neutral and pro-consumer. So there was nothing inherent in zero rating that was anti-competitive, and that was even before some of these other developments. So it's not black and white, and I think it's also a good example of where we're extremely aligned on some things and not on others, which is absolutely fine.

I mean, that's actually a good thing, but I didn't know how familiar you were with Discover, or with the way people are now using it, especially in emerging markets, to bridge top-ups so they can stay connected at least at a basic level, and then top up and have the full internet experience.

Thomas Lohninger:
Maybe we can go into that point. The problem with zero rating is really what we have seen in the market, and it's worth looking at those concrete offerings and how they are priced. If you go today to the English-language Wikipedia article on net neutrality, you will still find, as the picture of that article, the infamous Smart Net offer from the Portuguese incumbent provider MEO. If you look closely at that offer, a gigabyte of YouTube was 54 times cheaper than a normal internet gigabyte. So we have a drastic difference in the affordability of certain services over others, and this internet à la carte, where individual applications are bought, is certainly the worst case; on that we can all agree. There has certainly been an evolution toward class-based zero rating offers. I think those were considered in Canada, and then they scrapped it; they said it's actually not a viable option. In Europe, that was the regulators' reading of how zero rating could be admissible from 2016 until 2021. And in 2021 the European Court of Justice found that, no, it is actually contrary to the equal treatment of traffic if you price it differently. Why did the court find that? Because it is actually an additional effort on the side of the telecom company to meter packages differently, to have not just your monthly allowance but a separate allowance for this service, for that service, and for the rest of the internet. So I think it's hard to make a case, from the perspective of a telecom operator, for why it should be easier and cheaper to roll out these zero rating offers instead of, as you just said, the goal, and I think we agree on that: an application-agnostic form of access. And that could be low bandwidth; that could be something tailored for low-energy devices or other particular needs, where you simply will not be able to stream 4K video. That's totally fine.
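A "times cheaper" comparison like the 54x figure quoted for the MEO Smart Net offer is just a ratio of effective per-gigabyte prices. The prices below are hypothetical numbers chosen to reproduce that ratio, not the actual Portuguese tariffs:

```python
# How a "54 times cheaper" zero-rating ratio is computed: divide the
# effective per-GB price of general-purpose data by the per-GB price of
# the zero-rated bundle. Prices here are invented for illustration only.

def price_per_gb(price_eur: float, volume_gb: float) -> float:
    """Effective price of one gigabyte in a given package."""
    return price_eur / volume_gb

general_data = price_per_gb(10.80, 1)  # hypothetical: 1 GB of ordinary data
zero_rated = price_per_gb(2.00, 10)    # hypothetical: 10 GB app-specific bundle

ratio = general_data / zero_rated
print(f"general data: {general_data:.2f} EUR/GB")
print(f"zero-rated:   {zero_rated:.2f} EUR/GB")
print(f"ratio:        {ratio:.0f}x cheaper")  # reproduces the 54x shape
```

The economic argument in the talk is precisely about this ratio: when one application's data is priced at a small fraction of ordinary data, the "choice" between services is no longer neutral.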
But the thing we all want to avoid is that once the monthly data cap is used up, you're left with nothing, or only with WhatsApp. Because I feel there always needs to be the freedom to choose. And if Discover is that, I haven't looked into it, so I cannot speak to it. But I think it's important that we understand the right, also of low-income households, to have the freedom to choose all things on the internet. And it's also the freedom to innovate that is most at stake with these zero rating offers. I mean, when Mark Zuckerberg created Facebook.com, he did so because he had a full-fledged internet connection in his dormitory and wasn't limited to a consumer-only version of the internet.

Robert Pepper:
And that’s why the newer versions of that, so first the FCC found that there were some zero rating services where there was discriminatory pricing for some video versus other video as provided by the telco, in terms of the way zero rating was implemented. On the other hand, if you think about a service like Discover, think about sort of a dial-up, a low bandwidth version. So if it’s low bandwidth, you’re not going to have streaming video. So it’s just a low bandwidth version. But it’s not just WhatsApp. So it’s not a separate WhatsApp service versus being online. It’s actually a zero rated service that would be essentially everything, but at a very low speed rate until you top up. And so again, it’s not black and white. And I’m happy to take that offline to have that conversation, because I think it’s an important difference in distinction as things have evolved, and also the consumer behavior of using it as a bridging access for everything, but at very low data rate, until they top up and then get the full experience. And so again, I don’t think it’s black and white, or either or. I think the more important point, going back to the earlier part of the conversation, is some of the ways in which telcos are wanting to have network fees, they call it fair share, which is based upon the architecture and the economics of telecom termination monopoly. So the reality is you can have a lot of choice on the originating end. So I may have four or five operators that want my business. And so there’s a lot of competition. Once I select my operator, right, your network, if we want to talk to each other, if you want to send me messages or videos or whatever, the termination is a monopoly. Because there’s only, once I pick my network, your network must terminate, there’s no choice. And that’s why the, and a lot of this was purely accidental, in Europe, there was the development of calling party pays, or sender network pays, and that created and reinforced a termination monopoly. 
In the U.S., by just a different model, we had a bill and keep arrangement. So I pay for my airtime, whether I’m originating, sending, or receiving. That eliminated the termination monopoly. And as a result, the choice of the network connection is where it should be, on the originating end. Now, what does that mean? It means that if the telecom networks want to use termination monopoly on interconnection, they want to use termination monopoly to raise fees, they want to use termination monopoly to extract rents, because they have that, if they interconnect to me, it’s a monopoly. And in fact, the European Commission, as you know, in looking at mobile roaming, defined termination as a separate market with significant market power. And that’s at the core, the crux, of a lot of what we’re hearing from the telcos on the fair share and network fee issue. There’s one person waiting for you. Thank you.
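The contrast between calling-party-pays and bill-and-keep described above can be put into a toy model. The per-minute rates are invented for illustration; the only point is that under calling-party-pays the terminating network controls a fee the originating network cannot avoid, while under bill-and-keep no termination money changes hands:

```python
# Toy model of the two interconnection regimes described above. Under
# calling-party-pays (CPP), the terminating network charges a per-minute
# fee it alone sets; under bill-and-keep, each network recovers costs from
# its own customers. All rates below are invented for illustration.

def cpp_cost(minutes: int, own_rate: float, termination_fee: float) -> float:
    """Originating network's cost for a call under calling-party-pays."""
    return minutes * (own_rate + termination_fee)

def bill_and_keep_cost(minutes: int, own_rate: float) -> float:
    """Originating network's cost when termination fees are zero."""
    return minutes * own_rate

MINUTES = 100
OWN_RATE = 0.01      # the originating network's own per-minute cost
TERMINATION = 0.05   # set unilaterally by the terminating network

print(f"CPP:           {cpp_cost(MINUTES, OWN_RATE, TERMINATION):.2f}")
print(f"bill-and-keep: {bill_and_keep_cost(MINUTES, OWN_RATE):.2f}")
```

With these made-up rates, five-sixths of the originating network's cost under CPP is the termination fee, which is the rent-extraction lever the speaker is describing; bill-and-keep removes that lever entirely.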

Audience:
My name is Jarell James, and I have some questions, because I’m hearing a lot about capitalization of bandwidth, and how people would need to spend more money to get access to be able to stream, if their data packages are running out, and this idea of topping off, or even creating subsidies to allow for topping off into these communities. I think it’s really interesting to focus maybe on the fact that that premise is really locked into the traditional way we’re used to dealing with data, which is coming from a Western, more developed community, and not, I think the days of us counting minutes and counting data packages is long gone for many people in the West. And so, what I’m wondering about is, when we look at an Internet in the future, where streaming platforms are going to have higher quality videos that are going to require more bandwidth, and we’re trying to do these mitigations, is it not also valuable to look at what happens when you take those resources, like videos, and high bandwidth resources that people are regularly looking for? As this gentleman over here in the green tie actually was speaking in his session a few days ago, I don’t even think he knows I’m referencing him, he was mentioning that a lot of people do Brick Lane, it’s like a regular thing that’s Googled, right? And that is advancing their career. Why would someone need to regularly Google that five or six times, instead of just having access to that video, because many members in their community have that? 
And so, this is where I’m curious about us potentially looking at the way that telcos are gamifying data access, and not making very obvious moves to ensure that, while they are able to mine the transaction of data for communication purposes between people, and they can say, hey, yeah, we can provide you with communication access, does that really mean that every resource that is beneficial to that person’s life, which objectively includes a lot of educational resources, should also be reliant on that person’s ability to create monetary value for themselves, and then give it to telco companies, so they can somehow learn skills and achieve a greater life? It seems like a catch-22, and it seems very much premised on the idea that we’re expecting other communities to pay up and use data packages the way that we’ve done it. And so, these are some of the questions I would have.

Thomas Lohninger:
It’s an interesting question. We had various debates in this direction over the years. It reminds me a little bit of Wikipedia Zero, which was a project from the Wikimedia Foundation to zero-rate access to local Wikipedia versions in the country, and I think the English-speaking Wikipedia was always included as well. They did this in partnership with telecom companies, and, disclosure, they were criticised by many people, including me, and the foundation decided to sunset this project in, I think, 2017, something around that time. One of the reasons why they sunsetted it was that it actually didn’t work. The access numbers to the zero-rated service were ridiculously low. I never really got an answer as to what their conclusion was, why it never really worked, because the concept sounds, of course, very good, and we had similar fears also in the pandemic, in the lockdowns, when suddenly everything went online, which means low-income households potentially only have caps on the data they can use. In your question, there was also a call to think outside of the box, and here it’s maybe interesting to look at Finland. In Finland, you cannot find a data plan that has a data cap. They differentiate via speed, so it’s the bandwidth that you get, but it’s always a flat rate, and this is the much more honest business model, because the variable cost for data volume is absolutely negligible, particularly in mobile. It’s expensive to build a network and to connect it, but then afterwards, whether there’s data flowing or not, you might have congestion if you have too many people at the same time, but a bandwidth-based system also helps you to allocate those resources much better than if you give everyone a 5G connection with three gigabytes, which can be used up in one concert if you’re streaming it. So I think these hard questions about the business model of telecom companies need to be asked.

Audience:
Just a quick question on that point: when they were sunsetting that project, was it access numbers in communities that had regular connectivity, or are there any numbers that you know of per population? I work extensively with populations that have been shut off from the internet, and overwhelmingly the information that is looked up and searched for is oftentimes health-related, because health facilities are destroyed inside of conflicts. So there are a lot of questions around these access numbers: how much of it was done in communities that had zero connectivity to the greater world? As a resiliency measure, especially as we go farther into climate change destroying telecom infrastructure, it does seem more valuable to revisit those connection questions and access points.

Thomas Lohninger:
Good question. I cannot give you that answer, but there’s a Wikimedia Foundation booth around the corner, they hopefully have.

Robert Pepper:
Thomas, to your point, very related, a lot of the data plans in the name of network management are based on total data consumption, not related to congestion. So two gigabytes, if I’m downloading that or using that off-peak, there is no impact on the network. Now if everybody’s trying to access a network at the same time, you end up with congestion, but the reality is, right, the legacy telco, their network architecture, their engineering, their business model, and the regulation were all based upon some fundamental principles. The metric was the minute, because it was about voice. The longer the distance, the higher the cost. The longer the duration that you are on, the higher the cost. When you have a flat IP network, and once we got to 4G, it was essentially a flat IP network, even in mobile, there’s a cost to the network, but it’s a step function. Once you’re connected, how much you use that network does not vary until you hit the limit, and that’s where the congestion comes in. But if you use it only a little bit or a lot, as long as you’re under that technical architecture, the cost is not variable, and yet you still have legacy models based upon minutes of use, distance, and time, which are no longer relevant using the flat IP architectures of today’s data networks, whether they are wireless or fixed. And that goes to your point, which is really important, because a lot of these plans are premised upon total data consumed, and that may have no relationship whatsoever to network congestion, right? And, you know, go ahead, I mean.

Thomas Lohninger:
Just to put a finer point on this, or to put it bluntly: if you are operating a mobile network in a country where connectivity is an issue, and it’s late-night hours when the network is idle, the only reason why there is not a flat rate for everyone is corporate greed. Because it’s a wasted resource. The bandwidth would be there. It costs them nothing. It’s just corporate greed and their business-model reasons for not opening the floodgates and letting people use it.

Jane Coffin:
One of the things that was mentioned in the description of the panel as well was innovative policies for improving connectivity. There’s a model that’s being explored that’s not new, but it’s new to the United States-ish. It came out of the UK, called structural separation, where somebody might build and manage the network, but other networks can run on top of it. And this is a way to also cut down on costs. This is more on the CapEx side, and OpEx later. But bottom line, this is coming back in some parts of the U.S. right now, where municipalities are asking for more accountability from the companies that are providing connectivity, suggesting that they not run the networks themselves, because I want to be really clear that governments probably aren’t the best at running networks. There’s a reason that so much liberalization happened and there are almost no more state-owned enterprises. Well, there are some. But anyway, some companies are better at running the networks at a cost that can help people. But open access networks allow different types of networks to provide services over the top of the network. So you could have a $10 email-only network. And these are, of course, prices that make sense in that economy; it wouldn’t be the same in an emerging environment. Another company could come in and run their services over that network as well. It could be full-on video streaming. Who knows? But there are different models being explored. So I think it’s important, for many people here who are interested in what could be done, that there are lots of other people you can network with. You can talk to some of the community networks that are also coming into play. They’re actually solid networks. They’re not flaky. When people hear the term community network, they’re like, oh, a bunch of crazy people running a network. You’re like, no, no. These are smart technologists, people who know how to run the network.
And it’s not fly-by-night in that sense. But knowing your business plan, I do want to put this out there, and I hope I don’t sound too business-y, but you do need to have a thought about how you’re going to continue to run your network. It’s something that Talia was mentioning. You can’t just build that network and hope that people are going to come and buy the services. You’ve got to have a plan, and those plans have to last longer than a year. And you’ve got to have subsidization.

Nathalia Lobo:
How do these community networks actually operate in Brazil? And what are their needs? How can we structure something? Because each one is very specific in its model, in its way of working in the community. So we have to understand how we can make specific directives so that we can make specific policies. You can’t make a policy for each specific case; it’s very difficult. So how can we make these communities viable first? What are their needs? And how can we, as public policy, also help these projects actually happen? How do you know that this is a good community network, and that that’s not a good way of going for community networks? Not everyone is exactly the same. So this working group is going to have some study results, and from then on, we can start operating something that works for them. That’s the idea: understand.

Audience:
Hi, everyone. I’m Carlos Baca. I’m from Mexico. I work in an organization called Rizomatica, and we are actually a community network, and we help other communities to get connected through this model. I just went to Brazil, to the Amazonia, in July, to see what happened with the National School of Community Networks there, and I saw a situation very similar to other communities: the WISPs that go to the communities have very expensive prices. Really, really expensive. In Chihuahua, in Mexico, where we work, people need to have at least $3 or $4 a day to be connected. So what the young people are doing now, to be connected to Facebook, to YouTube, to WhatsApp, et cetera, is to get involved in the narco, in organized crime. The drug sellers, they are now part of it, because in northern Mexico we have this problem very much. And so we are seeing that the people spend a lot of money to be connected. There are a lot of WISPs. So in very, very, very far communities, they have one point that has a very bad connection, you know. And sometimes people spend a lot of money to improve these antennas and this infrastructure for the WISPs, and then they need to pay insurance. It is a very big problem that we are seeing, because we think that they are doing the job that they need to do. I am not saying that we don’t need to work with the WISPs, the small ISPs, because we need to have a joint effort. We work a lot with them in Mexico, for example. But we need to establish some conditions to better understand what is really happening in the communities, not only what they are reporting or saying they are doing. So I think this is one of the things that I wanted to address. And the other one is that, the same as in Oaxaca, and this is a question for Nathalia, there is a lot of fiber, you know, in a lot of places, very near the communities. But the government doesn’t have a policy for communities to access these fiber networks.
In the Amazonia, they have a lot of expectations about this project. This is what I found. But they are also asking why they can’t connect to this fiber network. So I think it is important to address that. And finally, the last question: I think it’s very, very difficult to try to define what a community network is, and to say that this is a good and this is a bad community network. We need to understand better that maybe the main characteristic is that the people in the community can manage, in different ways, the infrastructure and the services they have. If you think of community networks in Africa, they are more like little business models, which is very different from a community network in Colombia or in Mexico, where there is a lot of political organization. So we need to escape some of the things that we think about community networks. As Jane said, they work well, you know, because we now have good examples of how it’s working in the communities. And the same as community radio was seen 10 or 20 years ago, we need to escape all of these imaginations, all of these thoughts that say they are poor, that the services are not so good, that there are bad things, that they don’t have enough quality, et cetera, et cetera. So we need to think about this in this panel. It’s a very complex issue, I know. But try to understand the diversity that exists around community networks. Sorry for that.

Nathalia Lobo:
It’s okay, because they’re going to disconnect her in two minutes. Okay. Well, I was very surprised about the two, three dollars a day for connection. Well, the idea of the North Connected is that when this is all installed, you have at least 12 new companies working in the region, and some of them are just for capacity; they are just transport operators. So actually you are dealing with much more offer and competition in the region, so that these big prices don’t happen anymore. That’s why we had the idea of building the North Connected, okay? So that competition, so that better quality services get there. So that’s one point. The second point, about the bad community networks, is: how do I avoid financing them? There are some people that may fake a community network so that they get some public financing. And how do I, as a public servant, know that the money is going into those important networks, the community networks that do work well? It’s not a question of whether community networks are necessary or not. It’s just: how do I take away the bad stuff, the non-legal stuff, from all this group? So that’s the idea, to better understand that. And there’s always something that community networks do that others, ISPs, don’t do, which is making connectivity meaningful. So this appropriation of technology, of information, making the learning happen within that community, and distributing information on how to work better in that virtual world. That’s something that we don’t know how to do yet; the government in Brazil is still tackling how to deal with that next phase. So I believe that we have all the synergies to make this happen. It’s just that we need to study it a little bit more so that we can structure something that we can go forward with. Did I answer everything?

Raquel Renno Nunes:
Thank you. Thank you, everyone.

Speech statistics

Audience: 166 words per minute; 1303 words; 472 secs
Jane Coffin: 199 words per minute; 1838 words; 556 secs
Nathalia Lobo: 111 words per minute; 1518 words; 817 secs
Raquel Renno Nunes: 116 words per minute; 571 words; 296 secs
Robert Pepper: 136 words per minute; 2719 words; 1196 secs
Thomas Lohninger: 160 words per minute; 2063 words; 772 secs

An infrastructure for empowered internet citizens | IGF 2023 Networking Session #158

Full session report

Audience

The role of libraries is evolving alongside the advancements in internet access. Several cases have been presented, highlighting the changing nature of libraries and their ability to adapt to the needs of patrons in the digital age. Internet access has enabled libraries to offer a wider range of services and resources, promoting digital inclusion.

The Digital Inclusion Index model is highly relevant for all countries. Trish Hepworth emphasizes the significance of this model, which assesses countries’ progress in terms of digital inclusion. It considers factors such as internet access, technology availability, and digital skills, helping countries identify areas for improvement and bridge the digital divide.

Taking knowledge to rural areas is a beneficial approach, promoting knowledge sharing, socialization, and exposure among youth. This strategy addresses limited educational opportunities in rural regions and has received positive feedback for connecting rural communities with educational resources.

Sensitization on academic publishing and compliance with legal frameworks is crucial. Johanna highlights challenges in publishing academic work, such as vetting content and legal compliance. Greater awareness of publishing procedures and legal requirements is necessary to promote quality education.

Establishing education facilities in impoverished rural areas is challenging. Johanna’s personal experience running a small institution in a poverty-stricken rural area in Kenya demonstrates the difficulties faced. Innovative solutions and support from stakeholders are needed to overcome these barriers.

Creating shared knowledge spaces for learners from different institutions offers advantages. Johanna expresses enthusiasm for shared learning spaces that foster collaboration and knowledge exchange. This approach promotes a sense of community and enhances the learning experience.

In conclusion, the evolving role of libraries, the relevance of the Digital Inclusion Index, the benefits of taking knowledge to rural areas, the need for sensitization on academic publishing, and the challenges of establishing education facilities in impoverished rural areas are essential considerations for ensuring quality education. Shared knowledge spaces further enhance collaboration and idea sharing. By addressing these aspects, society can work towards achieving the Sustainable Development Goal of quality education for all.

Erick Huerta Velázquez

The use of Information and Communication Technologies (ICTs) is playing a crucial role in preserving and disseminating local knowledge in indigenous communities. This is particularly important as local knowledge in these communities is often oral and unwritten. By utilising local storage and Internet access, ICTs enable the documentation and preservation of this knowledge through recordings and videos.

One notable initiative in this field is the Rizomatica project, which collaborates with indigenous communities to help them develop their own media and conduct research. This empowers these communities to digitise and preserve their local knowledge, which might otherwise be lost. By incorporating ICTs into their cultural practices, these communities are able to create comprehensive reservoirs of knowledge and bridge the gap between traditional and digital libraries.

There are also real-world examples of communities successfully integrating traditional and digital libraries. One such example is Quetzalan, which has established a communication centre. This centre serves as a hub for both traditional and digital resources, allowing community members to access and contribute to the preservation of their local knowledge. Additionally, there are indigenous communities that have taken it one step further by running their own mobile networks and even establishing public intranets within their libraries. These initiatives demonstrate how ICTs can bring together multiple concepts of libraries, creating inclusive spaces for the preservation and dissemination of local knowledge.

Furthermore, community collaborations play a vital role in effectively preserving and disseminating local knowledge. A partnership between UNESCO and local communities in Mexico has resulted in the development of a policy for indigenous community radios. This policy promotes the establishment and operation of community radios, which act as platforms for sharing and promoting indigenous knowledge. In another example, Phonotech has assisted a 60-year-old community radio in restoring and archiving old tapes, thereby making them accessible nationwide. These efforts highlight the importance and effectiveness of community collaborations in preserving and amplifying local knowledge through various channels.

In conclusion, ICTs, community collaborations, and the integration of traditional and digital libraries are powerful tools in the preservation and dissemination of local knowledge within indigenous communities. By harnessing the potential of technology, these communities can document, digitise, and preserve their unique and valuable oral traditions. The partnerships formed with organisations and initiatives such as Rizomatica, UNESCO, and Phonotech further enhance the impact and reach of these preservation efforts. Ultimately, the combination of ICTs and community collaborations contributes to the comprehensive and inclusive representation of indigenous cultures and their local knowledge.

Yasuyo Inoue

Libraries play a crucial role in bridging the gap between rural and urban areas, reducing inequality, and promoting social and economic development. They achieve this by utilizing information and communication technology (ICT) techniques, which enable them to provide essential services and resources to areas with limited access. By harnessing the power of ICT, libraries ensure that people in rural areas have equal opportunities to access information, education, and other resources that are readily available in urban areas.

In addition to being information hubs, libraries serve as important community activity centers, preserving culture, history, and promoting education. They provide a safe and inclusive space for people to come together, engage in various activities, and cultivate a sense of belonging. Libraries often host community events, such as workshops, lectures, and exhibitions, catering to the diverse interests and needs of the community. This active engagement with the community helps libraries become vital institutions that promote social cohesion and cultural preservation.

Libraries also play a significant role in supporting education and lifelong learning. They serve as educational centers, offering access to a wide range of educational resources and materials. Libraries house books, journals, online databases, and other materials essential for research and learning. By providing these resources, libraries create opportunities for individuals to expand their knowledge, acquire new skills, and pursue personal growth. Additionally, libraries support formal education systems by providing study spaces, access to computers and the internet, and assistance from knowledgeable staff.

Furthermore, libraries have the potential to stimulate the local economy by forming connections with businesses and supporting local industries. By collaborating with local businesses, libraries can showcase their products and services, attracting customers and contributing to their success. For example, a small-town library in Shiwa features exhibitions that highlight the local business of sake brewing, promoting tourism and local commerce. Additionally, libraries can collaborate with agricultural cooperatives to organize weekly vegetable markets, supporting local farmers and promoting sustainable agriculture. Through these partnerships, libraries contribute to the growth of the local economy and foster community pride.

In conclusion, libraries play a vital role in society, connecting rural and urban areas, reducing inequality, and fostering social and economic development. Through the use of ICT, libraries ensure equal access to information and resources. They also serve as community hubs, preserving culture, promoting education, and supporting lifelong learning. Furthermore, libraries can stimulate the local economy by collaborating with businesses and supporting local industries. Embracing and strengthening libraries is crucial for creating more inclusive and equitable communities.

Patricia Hepworth

The analysis highlights the importance of digital inclusion in Australia, with a focus on the disparities that exist between metropolitan areas and regional/remote areas. The Digital Inclusion Index in Australia provides statistics on digital inclusion across the entire population. There is a significant difference in the digital inclusion scores between metropolitan areas and regional/remote Australia. This discrepancy is also observed in the lower digital inclusion index among Aboriginal and Torres Strait Islander peoples compared to the general Australian population. The study also reveals that digital exclusion and abilities online vary significantly across different age groups.

Libraries in Australia play a crucial role in addressing digital exclusion. They provide essential support and services in educating people on how to use computers, mobile phones, and stay safe online. Libraries are especially valuable in facilitating community-based connections and nationwide digital collections. For instance, Hume Libraries, located in a highly multicultural area, have implemented successful digital inclusion programs. These programs have been effective in harnessing the existing infrastructure, people, and community relations to promote digital literacy.

The analysis also reveals that libraries can provide a tailor-made and localized approach to delivering digital literacy programs. In collaboration with a local university, Hume Libraries worked towards delivering digital literacy programs specifically designed for culturally and linguistically diverse communities. This approach ensures that the programs meet the specific needs of the target audience while leveraging the resources and expertise available in the community.

Furthermore, discussions at the Asia Pacific Regional Internet Governance Forum highlighted the importance of digital inclusion. While there was not a direct library representative at the Brisbane meeting, discussions centered around the Digital Inclusion Index and the role of bodies like libraries in promoting digital inclusion. This demonstrates that digital inclusion is a recognized and important topic in regional forums and that libraries are seen as significant contributors to this agenda.

In addition to addressing digital exclusion, libraries also play a significant role in improving digital skills and AI media literacy. Libraries serve as important institutions for adults who are not in formal education to enhance their digital skills and acquire AI media literacy. With the advent of generative AI, the need for digital skills and AI media literacy is increasing, making libraries even more crucial in supporting individuals in acquiring these skills.

To conclude, the analysis underscores the critical importance of digital inclusion in Australia and the need to bridge the gaps that exist. Libraries have proven to be invaluable in addressing digital exclusion, providing support, resources, and digital literacy programs. The discussions held at regional forums further emphasize the role of libraries in promoting digital inclusion. Additionally, libraries play a vital role in improving digital skills and AI media literacy, supporting individuals, particularly adults not in formal education, in acquiring the necessary skills for an increasingly digital world.

Moderator – Maria De Brasdefer

During the conference, several speakers emphasised the role of the internet in empowering societies and advancing access to information. Maria de Brasdefer, one of the speakers, highlighted that meaningful access to the internet leads to societies where citizens can make better-informed decisions, ultimately resulting in more democratic societies. This argument is supported by the notion that when individuals have access to a wide range of information and resources, they are able to participate more actively in social and political processes.

Another important point discussed was the significance of documenting local knowledge and leveraging library infrastructure to ensure accessible internet. Maria underlined the importance of preserving local knowledge by presenting four case studies at the conference. These case studies showcased how local communities have utilised ICTs (Information and Communication Technologies) to document and store important aspects of their culture, such as songs, stories, and traditional practices. Additionally, community radios and initiatives like the itinerant museum were highlighted as effective ways to share and preserve local knowledge. However, it was also pointed out that challenges such as high humidity could cause the deterioration of stored materials, indicating the need for proper storage facilities and preservation techniques.

Furthermore, Maria and other speakers asserted that libraries can play a pivotal role in digital empowerment. They argued that libraries are essential in providing access to information, fostering media literacy, and offering coding lessons and training. The audience, participating in interactive questions using menti.com, agreed that libraries can contribute significantly to digital empowerment in various ways.

Overall, it was concluded that the internet and library infrastructure are powerful tools in advancing access to information and empowering societies. The promotion and preservation of local knowledge through the use of ICTs were also deemed crucial. The conference highlighted the positive impact that these initiatives can have on promoting more democratic societies, enhancing education, and expanding opportunities for individuals and communities.

Woro Titi Haryanti

The speakers underscored the critical role of knowledge discovery and digital transformation in libraries and their impact on the community. They emphasised that libraries play a vital role in preserving knowledge, conducting research, providing reference materials, and fostering networking opportunities. The implementation of digital platforms, such as Indonesian OneSearch and e-PUSNAS, was specifically mentioned as a means to enhance access to public collections and digital books.

Moreover, there was a strong advocacy for integrating libraries into the national data infrastructure. The National Library was recognised for its contribution to the development of the national data centre. This integration would enable libraries to further support the digital transformation efforts of the country.

The sentiment towards these initiatives was overwhelmingly positive. People acknowledged the value and importance of embracing digital technologies and using them to modernise and enhance library services. The speakers and the overall analysis suggested that by embracing digital transformation, libraries would be able to better serve the needs of their communities, improving access to information and fostering knowledge exchange.

Additionally, the discussion highlighted the broader significance of this digital transformation for the country as a whole. By integrating libraries into the national data infrastructure, the government can harness the wealth of information and resources available in libraries to fuel innovation, drive industry growth, and promote sustainable development.

In conclusion, the importance of knowledge discovery and digital transformation in libraries, as well as their integration into the national data infrastructure, was emphasised. The positive sentiment towards these initiatives highlighted the potential benefits they hold for both libraries and the wider community. This analysis provided valuable insights into the role of libraries in the digital age and the steps that can be taken to ensure their relevance and impact in an increasingly digital world.

Session transcript

Moderator – Maria De Brasdefer:
We’re good? Okay, so hi everyone and good afternoon. First of all, I would like to welcome you to this session. My name is Maria De Brasdefer and I work as a policy and research officer for the International Federation of Library Associations and Institutions. Today, in view of this year’s IGF theme, The Internet We Want, what we would really like to do is take this opportunity not just to present a series of short cases to you, but also to exchange and explore with you the topic of digital empowerment, and to approach it from a slightly different perspective. Of course, the fact that you are sitting in this room, just like the many other people attending the IGF this year, means that you are already aware of the great value of using the internet as a tool to advance access to information, and more importantly, of the great value that meaningful access has for our societies as a whole. We also know that a society where citizens can make better-informed decisions translates into a more democratic society, where people exercise their citizenship in a more participatory way and, ultimately, are able to uphold their rights both inside and outside digital spaces. But saying that is the easy part; the real question remains how we can do that, and what the best approaches are. With this in mind, today we would like to present a short series of four five-minute case studies that look at the themes at the intersection of digital empowerment, the documentation of local knowledge, and the mobilisation of the global library infrastructure to help people access the internet and make the most of it. For this we have four speakers with us today. I think my slides are not showing, yeah, there it is.
So we have Eric Huerta Velazquez from Rizomatika, in collaboration with CIDSAG and APC. We also have Woro Titi Haryanti from the National Library of Indonesia, Trish Hepworth from the Australian Library and Information Association, who will be joining us online, and Yasuyo Inoue from Tokyo University in Japan. But before we dive into these case studies, we would really like to hear from you; as we are not many today, it would be good to exchange more, so we would like to do a quick reflection exercise with you first. In case you are not familiar with it, you can either scan the QR code with your phone or go to the website www.menti.com and enter the code 18381615. I will give you a couple of seconds. Now what you should see on your phones is the following question: have you thought about how libraries can contribute to digital empowerment? If you have thought about it before, you can share how and in what ways; if you have not, you can share “no, it never crossed my mind”, or simply “no”. So far we only have one yes. These responses are anonymous, but of course you will be able to comment on them at the end of the session if you would like. We have a second response: yes, media literacy, awareness, coding lessons, et cetera; yes, that is very accurate. We have another yes. More yeses. So that is good; we do not have any noes so far. Device knowledge, which is not yet digital; yeah, that is a very interesting one too. I think we do not have any other replies, but this is good news, because it means that all of us are more or less on the same page about it: we have thought about it before, but maybe we do not know exactly how. And that is also why we are gathered here today, to discuss that a little bit and give you some insights.
So with that, it is time now for our first presentation. Our first presenter will be Mr. Eric Huerta. Eric works at Rizomatika, in collaboration with CIDSAG and APC, and he is also an expert with the International Telecommunication Union on connectivity issues related to remote and indigenous peoples; he has served as a co-rapporteur on the development of information and communication technologies in remote areas and for groups with unattended needs. Eric, please go ahead.

Erick Huerta Velázquez:
Thank you, and I’m sorry for being late. I got lost among the rooms; they all look very similar, and I went into the wrong ones. Our work at Rizomatika is mainly with indigenous communities, and that made me think about what we could share in this session. It is about the role of libraries, but also about questioning what a library is for everyone, and whether it means the same thing to everyone. One of the barriers to internet use, and one of the recognised barriers to internet adoption, is the lack of meaningful content online. Some communities, when they have to decide which technology to adopt, even refuse to connect to the internet because of the kind of content people will find there: content that sometimes has no relation to their reality, or content they do not want to be exposed to. So that made me think. Can we show the first slide? Well, these are the sort of communities we work with. We work together with indigenous communities that want to run their own media, such as community radios and community mobile networks, which they operate themselves, and we also have an applied research programme in which communities define the local research they want to carry out for a specific purpose. There are some examples: the opening of a communication centre in a rural area in Quetzalan, one of the communities that runs its own mobile network, and a community in the south of Mexico that works a lot with traditional medicine; that is the project. Next, please. So my first question is, what is a library?
When we think about a library, we mainly picture the image on the left. But when you ask some communities what a library is for them, it is this: the territory. The territory speaks; it is where they learn, where they teach each other, where they gather the local, meaningful knowledge needed to manage and understand the land. So how do we bring together these different concepts of a library: the reservoir of knowledge held in the nature and territory of the communities, and the concept of a library as a store of knowledge in books? And what opportunities does the internet give us to do so? Next, please. I think ICTs can bring these two concepts of a library together, mainly because most of the knowledge in these communities is oral knowledge, knowledge embedded in practice. Many of these languages are only beginning to be written down; most are not written at all. That is the main difficulty in bringing local knowledge into libraries, because libraries are mainly built around books. But when we bring ICTs into a library, even one without connectivity but with local storage, then you can bring in songs, music, videos, all the stories that form part of the local knowledge of the communities. This sort of work is what many communities are interested in. For instance, in the picture on my right, these people are sharing their experience of recovering their local language and its local variety, bringing back words and stories from research they did.
In this space they are holding a workshop on how to put this knowledge together in a handbook or manual, so they can share it better with other peoples. That is the idea. The other photograph is from a community that was one of the first to run these self-managed mobile networks. They also have a university, with its own library. But one of the things they were most interested in to complement that library was an intranet. They said: we have this library, we have the books here, but we need to document a lot of the findings coming from our own knowledge, and we need to bring in the videos and the music we need. That became a complementary part of the library of the local indigenous university. Next, please. Here I wanted to share something from recent years. We have worked for a long time with community radios, but the local research programme has also brought us other kinds of experiences. So I will talk about two of them, two chances we had to document specific knowledge. With UNESCO, we held consultations to develop the policy for indigenous community radios in Mexico, and some specific needs emerged from that, some of them related to the local archives of the radios. The one on the left was the first community radio in Mexico, about 60 years old. It has a splendid archive of voices, knowledge, festivals and so on, which was about to be lost, because the area is very humid.
When the need around this local archive was expressed, the national sound archive took an interest and helped them restore the tapes. They now keep a copy there as well, which ensures the archive will last. Some community radios also decided together to run a one-hour programme every week on the national radio, and that has become another important reservoir of community knowledge. The communities decide the subjects they want to talk about, and each of these programmes is really rich in knowledge; some of them, for instance, talked about textiles and brought together a lot of information that is not in any book, because it comes from the people themselves. Very quickly, the other two: one is a community that started research because they were Afro-descendant and wanted to know their origins. The last one is an indigenous community that runs an itinerant museum; when you touch the pictures you see there, they play the music or the stories behind them. That is what I wanted to show you. We wanted to share these possibilities of using ICTs to incorporate local knowledge into libraries, and there you can find more information about it. Thank you.

Moderator – Maria De Brasdefer:
Thank you so much, Eric, for sharing all these nice cases with us, and for emphasising the importance not just of promoting and building up local knowledge, but also of storing it, how hard it sometimes is for certain communities not only to access their own knowledge but also to store it, and the role that libraries play in that. So thank you so much for sharing it. Please keep in mind that there will be space for asking the speakers questions, but now we are going to move on to our next presentation. Our next presenter will be Woro Titi Haryanti. Woro, as I mentioned, is a senior librarian at the National Library of Indonesia, and she has been working in capacity development for librarians and library technicians across Indonesia for more than 30 years. Go ahead, Woro.

Woro Titi Haryanti:
Thank you. Yes, I agree with what Eric said: a library is a reservoir of knowledge, and I am going to tell you about the National Library’s role in bringing knowledge discovery to the community. Next, please. This is the presidential directive: five steps to be taken to accelerate the national digital transformation. This is not the direct remit of the National Library, but it is closely related to it; it is the function of the Ministry of Communication and Information. First, immediate action should be taken to expand internet access, develop digital infrastructure, and provide internet services for all. A target was set for the population to get internet access, roughly 196 million people, and this is important for us as a library: as long as people have access, knowledge can be transferred. Second, we have to prepare a digital transformation roadmap for government, covering strategy for public services, social aid, and so on. Third, immediate action should be taken to integrate the national data centre; the library can contribute data to be stored there. Fourth, the need for digital talent must be addressed: through this digital talent programme there will be training for people to be able to access the internet, with quite ambitious targets from the Ministry of Communication. The national data centre needs to allow all government bodies to store their data so that it can be accessible to the community. The digital talent programme also includes digital literacy, with targets across all of Indonesia.
They collaborate with 12 ministries, private sectors, and communities. Digital skills, digital culture, digital ethics, and digital safety will be covered in the curriculum, along with digital society, digital economy, and digital government. The training is divided into two categories: a proficiency class for skills, and an inclusion class for empowering cyber creativity. Next, please. There is also a presidential directive specifically for libraries: to improve and expand access to digital libraries in order to accelerate the development of human resources who will master science and technology, improve creativity and innovation to create job opportunities, reduce the unemployment rate, increase income per capita, and increase foreign exchange, so as to create prosperity for all. Next. This shows the roles and functions of the National Library: as a networking centre and a preservation centre. Networking means that we collaborate with other institutions to build a network, so that more local knowledge can be created and shared together. As a preservation centre, we have to preserve local information and local content and make it accessible. The National Library is also a research centre, depository centre, reference library centre, and of course a library development centre. We also have obligations: to develop the national library system in support of the national education system, and to guarantee the sustainability of libraries as learning centres; again, we have to provide both access and content.
We must also guarantee the availability of library services throughout the nation and the availability of collections through translation, transliteration, transcription, and transmedia. We promote reading habits, develop library collections, and develop the National Library itself, and we have to recognise and appreciate those who preserve and conserve manuscripts. Next, please. Libraries are not yet fully integrated into the national data infrastructure. To implement the president’s directive, the National Library, as part of the government, has to contribute its data to the national data centre. An example is the NLE, the national library ecosystem. We have two main applications: INLIS and OneSearch. INLIS is the application for library management, MARC-based and online; OneSearch I will talk about later. There is also iPusnas, which can be accessed all over Indonesia, and other ministries will do the same thing. Next, please. For knowledge discovery we have Indonesian OneSearch, a single search portal for all public collections from libraries. At the moment we have 12,608,000 records, and the connected repositories number around 11,000. It is connected to most, though not all, of the libraries in Indonesia; more than 20% of Indonesian libraries are connected to us. The system also offers an anti-plagiarism tool, subject analysis tools, and OAI-PMH, the Open Archives Initiative protocol for metadata harvesting. Next. As I mentioned, around 300 library institutions, 4,000 librarians, and 11,000 repositories are connected.
That is a very large amount of knowledge that can be preserved there, and more and more is coming; we also encourage those who have not yet joined the programme to join us. We give members the freedom to send us only the abstract, only the metadata, or the full text; it is up to the policy of each individual institution. We have quite a lot of contributors; the National Library is of course the biggest, and universities also contribute their collections. Next, please. This is the mobile application, iPusnas, which I mentioned earlier: a social-media-based library providing digital books to read, share, and shop. The application is available on mobile and uses digital rights management technology for security. It also has a menu for e-donations, for those who write books and want to donate them, giving the rights to the National Library. Up to now around 140 books have been donated to the National Library, free of charge, so everybody can access them; there are no royalties involved, as the books are given voluntarily. Next. Our latest platform is Bintang Pusnas Edu, for education. We work quite closely with the Ministry of Education and also the Ministry of Religious Affairs. Why the Ministry of Religious Affairs? Because they also run schools we can collaborate with. This platform provides improved access to digital content for schools and universities. The content is varied: audio books, video books, educational tutorials, scientific journals, all accessible via multiple platforms.
The total collections there are: for elementary schools, 26,000-something; for junior high school, 22,000-something; for senior high school, 50,000; for university, 262,000-something; digital books from the Ministry of Religious Affairs, 58,000; and from the Ministry of Education, 1,063,000 books, all stored there and accessible to the community. Next, yeah. Sorry, yes, this is eResources, a digital collection service of the National Library of Indonesia, comprising materials that are either subscribed to or produced independently by the National Library. That means we subscribe to databases that I think everybody here is familiar with; there is also Neliti, mainly for research on manuscripts, and Balai Pustaka, whose books we digitise and put here, free of charge. To access this you have to be a member of the National Library; you can register online, and National Library membership numbers are now integrated with our national ID. Thank you, that is all, I think.
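As a side note for readers unfamiliar with the harvesting protocol mentioned above: a union catalogue such as Indonesian OneSearch can aggregate member repositories’ metadata via OAI-PMH, whose ListRecords responses are plain XML. The sketch below is only an illustration of parsing such a response, not the actual OneSearch implementation; the sample record, identifier, and title are hypothetical, and a real harvester would fetch the XML over HTTP from each repository’s OAI endpoint.

```python
# Illustrative OAI-PMH parsing: extract (identifier, title) pairs from a
# ListRecords response, as a metadata harvester for a union catalogue might.
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"   # OAI-PMH namespace
DC = "{http://purl.org/dc/elements/1.1/}"        # Dublin Core namespace

# Hypothetical ListRecords response (a real one comes from an OAI endpoint).
SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:repo.example.id:1</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Naskah Kuno Nusantara</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def harvest_titles(xml_text):
    """Return (identifier, title) pairs from a ListRecords XML document."""
    root = ET.fromstring(xml_text)
    results = []
    for record in root.iter(OAI + "record"):
        ident = record.find(OAI + "header/" + OAI + "identifier").text
        title = record.find(".//" + DC + "title")  # Dublin Core title, if any
        results.append((ident, title.text if title is not None else None))
    return results

print(harvest_titles(SAMPLE_RESPONSE))
```

In practice a harvester would also follow the protocol’s `resumptionToken` to page through large result sets, which is how millions of records can be aggregated incrementally.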

Moderator – Maria De Brasdefer:
Thank you so much, Woro, for sharing this case too. I think it is indeed an interesting example that other libraries can follow, not just in terms of digital empowerment, but also in terms of the economic growth tied to the use of libraries, so thank you so much for sharing that with us. Now we are going to move on to the next case, from Trish Hepworth, who is Director of Policy and Education at the Australian Library and Information Association; she works across the sector to empower the workforce and strengthen libraries to achieve a socially just and progressive society.

Patricia Hepworth:
Thank you, Maria, and I wish I was there, but thank you very much for having me. I’d like to acknowledge today that I’m coming from the lands of the Ngunnawal and the Ngambri people, and pay my respects to elders past and present. Maria, are my slides up? Thank you, perfect, brilliant. I wanted to very quickly look at what this looks like from Australia. In Australia we have an index called the Digital Inclusion Index that gives us statistics about digital inclusion across the whole population; it measures the accessibility, the affordability, and the ability of people online, and then gives a score. What you can see up there are some of the vectors that we know are wildly different across the country. Australia is a very concentrated metropolitan kind of country; most of our population lives in cities along the coast, and there is a huge difference between the digital inclusion scores in metropolitan areas, which are quite high, and those in regional and remote Australia, which are much lower. Similarly, our First Nations people, the Aboriginal and Torres Strait Islander peoples of Australia, have a much lower digital inclusion score than the Australian population generally, and again, the further you go from those metropolitan areas, the lower the digital inclusion index. The next slide, please, Maria. And across all of the different vectors, we see a really significant change across age groups. The graph on your screen at the moment is about digital exclusion, so it is looking at the two components around accessibility and affordability. As you can see, for younger age groups the ability to access digital worlds, to be online, is much higher, and as you go through the older age groups that accessibility really drops. And if I could have the next slide, Maria.
And that, probably unsurprisingly, goes with ability as well. We see this across the board: accessibility and ability are closely correlated, so people with the most access also have the most ability and comfort online, and those with the least access, First Nations people, regional people, older people, have the least ability online. If I could have the next slide, to look at what that actually means in practice: only 23 percent of Australians were confident that they could edit a video and post it online, so the basic ability to be on TikTok, for example, is shared by only a quarter of people in Australia. Only 35 percent, just over a third, were confident that they could work out whether they were being harassed online and, if they were, what they could do about it or which authorities they could report it to. And if I could have the next slide. While people’s abilities and media literacy are quite low, their interest in being secure and capable digital citizens is very high. When you ask people, they are really keen to know how they can protect themselves from scams, and they want to use all the different forms of media to stay connected with community and with friends and family. And if we have a look at the next slide, this is very much where libraries come in. Across the library systems, and in particular libraries in educational institutions, so schools, TAFEs (which is our vocational education in Australia), universities and public libraries, we see that librarians are already working solidly in these areas. You have the infrastructure from libraries for access to the internet, as Woro and Eric have said, access to community-based connections, but also nationwide digital collections; so you have those accessibility points. But we also see a huge role for libraries in bolstering that ability as well.
When you ask libraries, they are helping people find resources in the catalogue and find information online, but they are also providing basic support on how to use computers, how to use mobile phones, how to use the internet, and how to stay safe online. And if we can have the next slide, I just wanted to take a very quick look at a little local library service, Hume Libraries, which is based on Naarm, on Wurundjeri country, in Melbourne. Hume Libraries is situated in a highly multicultural area, so it sees all of those correlations crossing: communities with English as a second language, which is often associated with digital exclusion; older communities, who often have English as a second language; and an outer metropolitan location, another place where you will find people of lower digital literacy. If I could have the next slide. Hume Libraries has done a huge amount of work in conjunction with the local university, running a research project on how to deliver digital literacy programs for culturally and linguistically diverse communities. The thing about using libraries is that the infrastructure was already there: they were able to pull together the resources they had around community engagement, harness the people in the libraries and the community relations that already existed, and they had a system in place for the programs. Working with these three together, they very successfully managed to tailor digital inclusion programs for CALD communities, culturally and linguistically diverse communities, that went across age ranges and abilities. And that looks different for different people.
You might have people who are absolutely fluent in spoken English but unable to manage written English, or who perhaps need their content in video or audio format. You might have people with different accessibility issues. You need to be able to find case studies and ways of working with people that relate to collections that are important to them and to communities in which they are already participating. Running these sorts of programs in your local library means you can have a very tailored experience, where you leverage those central points of access but also bring in all of the support from the libraries to upskill on the ability piece. If I could have the very last slide. I think some of the takeaways from Australia’s experience are that it’s not easy, but libraries are there: public libraries are in every community across Australia, regional and remote, linguistically diverse, with older people coming in. There is no other organisation currently in a better position, with people already coming in the door and the access in place. But having said that, every single community is different, so one of the things the culturally and linguistically diverse guidelines developed, for example, was a toolkit for each library to work with local partners to build its own localised program. The outcome of the program was that a group of people went from being quite digitally nervous to being digitally confident, which meant they were more confident digital citizens, but also more confident citizens, better able to take part in Australian society and in democratic society. So it was a resounding success as one case study, and it is being replicated across the country. I hope that was of some interest to you all. Thank you.

Moderator – Maria De Brasdefer:
Thank you so much, Trish, for sharing the case of Australia. It is really interesting to see how, in a country as culturally and linguistically diverse as Australia, what could be seen as a challenge is being addressed by libraries in a very successful way, despite all that diversity. Thank you so much for sharing this case. As we are running a bit short of time, I will move on to our next and last presenter, Yasuyo Inoue, who will give us a local perspective on this topic. Yasuyo is a professor of public librarianship at Tokyo University, and she has been a professor at other universities for more than 35 years, focusing mainly on children’s and young adult library services. In the past she was also a member of the Intellectual Freedom Committee of the Japan Library Association. Please go ahead, Yasuyo. Thank you.

Yasuyo Inoue:
Thank you, Maria. The time is not enough, and I didn’t bring so many slides. So I just wanted to say that some general informations based on in Japan, but right now, from elementary school to junior high and senior high, most of the kids have their own tablet or PC. So as for the technical things that they know how to use their computers, but the problem is that the lack of content. That’s why the library needs some roles to provide informations to the kid. And maybe 50 years later, most of the Japanese people can use any kind of the computers, but it’s later on. So right now, what the library should do, I think library can do that with using ICT techniques. The library can connect to a rural area and urban area. There’s the unfair situation right now, but they can connect to these unfair situations or maybe different strong direct area we can connect to each other through the materials on the informations at the libraries. Overall, the library has three roles. One is, as the other speakers mentioned, that kind of a community activity center, as Eric said, preserve their own culture or traditions. And another one is kind of educational or learning center or information center. So not only books, but also a lot of data. So in that sense, library is a kind of a data center. So we concentrated and stocked a lot of big data. And now the many public libraries in Japan, especially prefecture library, big libraries, they want to digitalize those traditional historical materials into digitalized materials and provides to the users. Especially National Diet Library, that National Central Library in Japan, they have the so huge big data. So they changed their National Diet Library role and changed the copyright role. Now they provide the data via internet. So the content provides to the each users. So more libraries can provide more data, not only big National Diet Library level, but also local public library make the community get together. 
As on the slide right now, this is a very small town library, the Shiwa Library, close to Morioka City. I do not know why the New York Times said Morioka is a place foreigners should visit, but this is a small town library. The central photo shows Japanese sake: they exhibit it inside the library and show how the sake is brewed. Sometimes they explain to people how to brew it, what it tastes like, and what the character of the sake is. They wanted to showcase their local business to people at the library. On the right side, the library connects with an agricultural corporation. Once a week there is a kind of vegetable market in front of the library, so people buy vegetables, come into the library, and find a collection of recipes: whichever vegetable you bought, you can use a recipe at home. So the agricultural business and the library are connected to the farmers who grow the vegetables in that local area, and the library stimulates local business. That is another way the library serves as a community activity centre. And not only physical things: in the future, more small town libraries will provide digital materials. So if you have any trouble or questions, go to your local library; maybe they will help you expand your local business. Thank you.

Moderator – Maria De Brasdefer:
Thank you. Thank you so much, Yasuyo, and all of you who are here today. As a final remark, I can only say that across all the cases you presented, you can see a common factor: the role of libraries really is evolving over time with the use of the internet, with access, and with everything that communities can get out of it at a local level. So thank you very much for sharing. We still have a couple of minutes left, so I would like to open the floor for questions to the speakers. I don't know if we have any questions online. No? Okay.

Audience:
Thank you. I did have a question for Trish Hepworth, but is she still online? She is. I am. Oh, you are, great. Good to see you. Trish, we had some contacts within IFLA in the past couple of years. And one of the contacts we had was that you made a presentation at a library webinar that I organized in the framework of the Asia Pacific Regional Internet Governance Forum. And that was, when was that? Two years ago, I forget precisely. But I wanted to ask you, was there anybody at the Brisbane meeting in August this year, the Brisbane meeting of the Asia Pacific Regional Internet Governance Forum, was there anybody there who was talking about the contribution of libraries? Because it seems to me that your comments about the Digital Inclusion Index are highly relevant to all countries. In fact, you’ve got a model there which we should all probably imitate, that is, countries which haven’t got one should have one, have that sort of system and monitor it and develop it. But was there anybody at Brisbane who was talking about library information services, whether on the coast, as you said, in the metropolitan areas or in the outback, in remote areas? Do you know?

Patricia Hepworth:
Thanks, Winston, for the questions. We didn't have an ALIA or library representative as such, but we definitely had people at that forum speaking to things like the Digital Inclusion Index and to the role of other bodies, such as libraries. One thing I know is very top of mind for our policymakers in Australia at the moment is the increasing need for media literacy and digital skills with the rise of generative AI, and that is certainly something getting a lot of attention in big structural discussions. There is both the doom and gloom, such as how people will be able to detect AI scams or what this means for the future of internet search, and also huge potential: when you are working with people who might have lower levels of written literacy, the ability to use generative AI to support them with job applications, or even in writing searches and prompts, is huge. So from a policy perspective, I think there is a really important role for libraries to play in that digital skills and AI media literacy space. Realistically, if you don't have libraries doing that work in a country like Australia, there isn't anywhere else for adults who are not in formal education to go.

Moderator – Maria De Brasdefer:
Thank you, Trish. So do we have any other questions from the floor?

Audience:
Okay, thank you. I am Johanna Munyao, a member of a County Assembly in Kenya. I want to thank the presenters for packaging the information in the right way, very clearly, and I appreciate the approach of taking knowledge closer to our rural communities. I have realised that this approach helps our young people come together, socialise, share knowledge, and get exposure. My question is whether there is sensitisation on publishing: in academia, the trickiest part is how to publish some of these works or activities so that others elsewhere can access the same information and our experiences. Do we ever conduct feasibility studies to vet the content and check its compliance with the legal frameworks that may govern whatever is published for access through the internet? And again, I come from a rural area where poverty levels are a serious challenge. Where the government is not able to come in and fully support setting up such structures, however good they are, it becomes a challenge. Personally, I run an institution with a very tiny library, and the approach I have heard here has really enlightened me: I had thought of addressing only the needs of the learners within that small institution, but I now see that learners from other institutions could come together and, with access to such a facility, share knowledge and even take it to the higher level of publishing on the internet and sharing these experiences with the whole world. Thank you.

Moderator – Maria De Brasdefer:
Thank you so much. I think maybe we have time for one last question. No? Well, I think we are at the end of our session anyway, but thank you so much to all the speakers who are here today and who presented, thank you for sharing your cases and your stories with us, and thank you to the attendees for your questions. We really appreciate your presence, and if you would like to collaborate with us in the future, or if you have any ideas or opportunities for collaboration with libraries, please feel free to reach out to us. Thank you. Thank you, Trish. I don't know if you can see me, but can you hear me? Thank you. I think this one belongs to her.

Audience: speech speed 143 words per minute; speech length 595 words; speech time 250 secs

Erick Huerta Velázquez: speech speed 123 words per minute; speech length 1415 words; speech time 688 secs

Moderator – Maria De Brasdefer: speech speed 152 words per minute; speech length 1697 words; speech time 669 secs

Patricia Hepworth: speech speed 168 words per minute; speech length 1740 words; speech time 622 secs

Woro Titi Haryanti: speech speed 139 words per minute; speech length 1650 words; speech time 711 secs

Yasuyo Inoue: speech speed 142 words per minute; speech length 686 words; speech time 290 secs

AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023


Full session report

Sarim Aziz

In the discussion, multiple speakers addressed the role of AI in cybersecurity, emphasizing that AI offers more opportunities for cybersecurity and protection rather than threats. AI has proven effective in removing fake accounts and detecting inauthentic behavior, making it a valuable tool for safeguarding users online. One speaker stressed the importance of focusing on identifying bad behavior rather than content, noting that fake accounts were detected based on their inauthentic behavior, regardless of the content they shared.

The discussion also highlighted the significance of open innovation and collaboration in cybersecurity. Speakers emphasized that an open approach and collaboration among experts can enhance cybersecurity measures. By keeping AI accessible to experts, the potential for misuse can be mitigated. Additionally, policymakers were urged to incentivize open innovation and create safe environments for testing AI technologies.

The potential of AI in preventing harms was underscored, with the “StopNCII.org” initiative serving as an example of using AI to block non-consensual intimate imagery across platforms and services. The discussion also emphasized the importance of inclusivity in technology, with frameworks led by Japan, the OECD, and the White House focusing on inclusivity, fairness, and eliminating bias in AI development.

Speakers expressed support for open innovation and the sharing of AI models. Meta’s release of the open-source AI model Llama 2 was highlighted, enabling researchers and developers worldwide to use it and contribute to its improvement. The model was also submitted for vulnerability evaluation at DEF CON, a cybersecurity conference.

The role of AI in content moderation on online platforms was discussed, recognizing that human capacity alone is insufficient to manage the vast amount of content generated. AI can assist in these areas, where human resources fall short.

Furthermore, the discussion emphasized the importance of multistakeholder collaboration in managing AI-related harms, such as child safety and counterterrorism efforts. Public-private partnerships were considered crucial in effectively addressing these challenges.

The potential benefits of open-source AI models for developing countries were explored. It was suggested that these models present immediate opportunities for developing countries, enabling local researchers and developers to leverage them for their specific needs.

Lastly, the need for technical standards to handle AI content was acknowledged. The discussion proposed implementing watermarking for audiovisual content as a potential standard, with consensus among stakeholders.

Overall, the speakers expressed a positive sentiment regarding the potential of AI in cybersecurity. They highlighted the importance of open innovation, collaboration, inclusivity, and policy measures to ensure the safe and responsible use of AI technologies. The discussion provided valuable insights into the current state and future directions of AI in cybersecurity.

Michael Ilishebo

The use of Artificial Intelligence (AI) has raised concerns regarding its negative impact on different aspects of society. One concern is that AI has enabled crimes that were previously impossible. An alarming trend is the accessibility of free AI tools online, allowing individuals with no computing knowledge to program malware for criminal purposes.

Another concern is the challenges AI poses for law enforcement agencies. AI technology performs tasks at a pace that surpasses human comprehension, making it difficult to differentiate between AI-generated content and human interaction. This creates obstacles for law enforcement in investigating and preventing crimes. Additionally, AI’s ability to generate realistic fake videos and mimic voices complicates the effectiveness of digital forensic tools, threatening their reliability.

Developing countries face unique challenges with regards to AI. They primarily rely on AI services and products from developed nations and lack the capacity to develop their own localized AI solutions or train AI based on their data sets. This dependency on foreign AI solutions increases the risk of criminal misuse. Moreover, the public availability of language models can be exploited for criminal purposes, further intensifying the threat.

The borderless nature of the internet and the use of AI have contributed to a rise in internet crimes. Meta, a social media company, reported detecting nearly a billion fake accounts in a single quarter. The proliferation of fake accounts promotes the circulation of misinformation, hate speech, and other inappropriate content. Developing countries, facing resource limitations, struggle to effectively filter and combat such harmful content, exacerbating the challenge.

Notwithstanding the negative impact, AI also presents positive opportunities. AI has the potential to revolutionize law enforcement by detecting, preventing, and solving crimes. AI’s ability to identify patterns and signals can anticipate potential criminal behavior, often referred to as pre-crime detection. However, caution is necessary to ensure the ethical use of AI in law enforcement, preventing human rights violations and unfair profiling.

In the realm of cybersecurity, the integration of AI has become essential. National cybersecurity strategies need to incorporate AI to effectively defend against cyber threats. This integration requires the establishment of regulatory frameworks, collaborative capacity-building efforts, data governance, incidence response mechanisms, and ethical guidelines. AI and cybersecurity should not be considered in isolation due to their interconnected impact on securing digital systems.

In conclusion, while AI brings numerous benefits, significant concerns exist regarding its negative impact. From enabling new forms of crime to posing challenges for law enforcement and digital forensic tools, AI has far-reaching implications for societal safety and security. Developing countries, particularly, face specific challenges due to their reliance on foreign AI solutions and limited capacity to filter harmful content. Policymakers must prioritize ethical use of AI and address the intertwined impact of AI and cybersecurity to harness its potential while safeguarding against risks.

Waqas Hassan

Regulators face a delicate balancing act in protecting both industry and consumers from cybersecurity risks, particularly those related to AI in developing countries. The rapid advancement of technology and the increasing sophistication of cyber threats have made it challenging for regulators to stay ahead in ensuring the security of both industries and individuals.

Developing nations require more capacity building and technology transfer from developed countries to effectively tackle these cybersecurity challenges. Technology, especially cybersecurity technologies, is primarily developed in the West, putting developing countries at a disadvantage. This imbalance hinders their ability to effectively defend against cyber threats and leaves them vulnerable to cyber attacks. It is crucial for developed countries to support developing nations by providing the necessary tools, knowledge, and resources to enhance their cyber defense capabilities.

The pace at which cyber threats are evolving is surpassing the rate at which defense mechanisms are improving. This disparity poses a significant challenge for regulators and exposes the vulnerability of developing countries’ cybersecurity infrastructure. A proactive approach is crucial, as reactive defense mechanisms are often insufficient against the sophisticated cyber threats faced by nations worldwide. Preventive measures, such as taking down potential threats before they become harmful, can significantly improve a country’s cybersecurity posture.

Developing countries often face difficulties in keeping up with cyber defense due to limited tools, technologies, knowledge, resources, and investment. These limitations leave them susceptible to cyber attacks. It is imperative for developed and developing countries to work together to bridge this gap by standardizing technology and making it more accessible globally. Standardization promotes a level playing field and ensures that all nations have equal opportunities to defend against cyber threats.

Sharing information, tools, experiences, and human resources plays a vital role in tackling AI misuse and improving cybersecurity posture. Developed countries, which have the investment muscle for AI defense mechanisms, should collaborate with developing nations to share their expertise and knowledge. This collaboration fosters a fruitful exchange of ideas and insights, leading to better cybersecurity practices globally.

Global cooperation on AI cybersecurity should begin at the national level. Establishing a dialogue among nations, along with sharing information, threat intelligence, and the development of AI tools for cyber defense, paves the way for effective global cooperation. Regional bodies such as the Asia-Pacific CERT and ITU already facilitate cybersecurity initiatives and can further contribute to this cooperation by organizing cyber drills and fostering collaboration among nations.

The responsibility for being cyber ready needs to be distributed among users, platforms, and the academic community. Cybersecurity is a collective effort that requires the cooperation and active involvement of all stakeholders. Users must remain vigilant and educated about potential cyber threats, while platforms and institutions must prioritize the security of their systems and infrastructure. In parallel, the academic community should actively contribute to research and innovation in cybersecurity, ensuring the development of robust defense mechanisms.

Despite the limitations faced by developing countries, they should still take responsibility for being ready to tackle cybersecurity challenges. Recognizing their limitations, they can leverage available resources, capacity building initiatives, and knowledge transfer to enhance their cyber defense capabilities. By actively participating in cybersecurity efforts, developing countries can contribute to creating a safer and more secure digital environment.

In conclusion, regulators face an ongoing challenge in safeguarding both industry and consumers from cybersecurity risks, particularly those related to AI. To address these challenges, developing nations require greater support in terms of capacity building, technology transfer, and standardization of technology. A proactive approach to cybersecurity, global cooperation, and the shared responsibility of being cyber ready are crucial components in building robust defense mechanisms and ensuring a secure cyberspace for all.

Babu Ram Aryal

Babu Ram Aryal advocates for comprehensive discussions on the positive aspects of integrating artificial intelligence (AI) in cybersecurity. He emphasizes the crucial role that AI can play in enhancing cyber defense measures and draws attention to the potential risks associated with its implementation.

Aryal highlights the significance of AI in bolstering cybersecurity against ever-evolving threats. He stresses the need to harness the capabilities of AI in detecting and mitigating cyber attacks, thereby enhancing the overall security of digital systems. By automating the monitoring of network activities, AI algorithms can quickly identify suspicious patterns and respond in real-time, minimizing the risk of data breaches and information theft.

Moreover, Aryal urges for a thorough exploration of the potential risks that come with AI in the context of cybersecurity. As AI systems become increasingly intelligent and autonomous, there are concerns about their susceptibility to malicious exploitation or manipulation. Understanding these vulnerabilities is crucial in developing robust defense mechanisms to safeguard against such threats.

To facilitate a comprehensive examination of the topic, Aryal assembles a panel of experts from diverse fields, promoting a multidisciplinary approach to exploring the intersection of AI and cybersecurity. This collaboration allows for a detailed analysis of the potential benefits and challenges presented by AI in this domain.

The sentiment towards AI’s potential in cybersecurity is overwhelmingly positive. The integration of AI technologies in cyber defense can significantly enhance the security of both organizations and individuals. However, there is a need to strike a balance and actively consider the associated risks to ensure ethical and secure implementation of AI.

In conclusion, Babu Ram Aryal advocates for exploring the beneficial aspects of AI in cybersecurity. By emphasizing the role of AI in strengthening cyber defense and addressing potential risks, Aryal calls for comprehensive discussions involving experts from various fields. The insights gained from these discussions can inform the development of effective strategies that leverage AI’s potential while mitigating its associated risks, resulting in improved cybersecurity measures for the digital age.

Audience

The extended analysis highlights several important points related to the impact of technology and AI on the global south. One key argument is that individual countries in the global south lack the capacity to effectively negotiate with big tech players. This imbalance is due to the concentration of technology in the global north, which puts countries in the global south at a disadvantage. The supporting evidence includes the observation that many resources collected from the third world and global south are directed towards the developed economy, exacerbating the technological disparity.

Furthermore, it is suggested that AI technology and its benefits are not equally accessible to and may not equally benefit the global south. This argument is supported by the fact that the majority of the global south’s population resides in developing countries with limited access to AI technology. The issue of affordability and accessibility of AI technology is raised, with the example of ChatGPT, an AI system that is difficult for people in developing economies to afford. The supporting evidence also highlights the challenges faced by those with limited resources in addressing AI technology-related issues.

Inequality and limited inclusivity in the implementation of accessibility and inclusivity practices are identified as persistent issues. While accessibility and inclusivity may be promoted in theory, they are not universally implemented, thereby exposing existing inequalities across different regions. The argument is reinforced by the observation that politics between the global north and south often hinder the universal implementation of accessibility and inclusivity practices.

The analysis also raises questions about the transfer of technology between the global north and south and its implications, particularly in terms of international relations and inequality. The sentiment surrounding this issue is one of questioning, suggesting the need for further investigation and examination.

Moreover, AI is seen as a potential threat that can lead to new-age digital conflicts. The supporting evidence presents AI as a tool with the potential to be used against humans, leading to various threats. Furthermore, the importance of responsive measures that keep pace with technological evolution is emphasized. The argument is that measures aimed at addressing new tech threats need to be as fast and efficient as the development of technology itself.

Concerns about the accessibility and inclusion of AI in developing countries are also highlighted. The lack of infrastructure and access to electricity in some regions, such as Africa, pose challenges to the adoption of AI technology. Additionally, limited internet access and digital literacy hinder the effective integration of AI in these countries.

The potential risks that AI poses, such as job insecurity and limited human creativity, are areas of concern. The sentiment expressed suggests that AI is perceived as a threat to job stability, and there are fears that becoming consumers of AI may restrict human creativity.

To address these challenges, it is argued that digital literacy needs to be improved in order to enhance understanding of the risks and benefits of AI. The importance of including everyone in the advancement of AI, without leaving anyone behind, is emphasized.

The analysis delves into the topic of cyber defense, advocating for the necessity of defining cyber defense and clarifying the roles of different actors, such as governments, civil society, and tech companies, in empowering developing countries in this field. The capacity of governments to implement cyber defense strategies is questioned, using examples such as Nepal adopting a national cybersecurity policy with potential limitations in transparency and discussions.

The need to uphold agreed values, such as the Human Rights Charter and internet rights and principles, is also underscored. The argument is that practical application of these values is necessary to maintain a fair and just digital environment.

The analysis points out the tendency for AI and cybersecurity deliberations to be conducted in isolation at the multilateral level, emphasizing the importance of multidisciplinary governance solutions that cover all aspects of technology. Additionally, responsible behavior is suggested as a national security strategy for effectively managing the potential risks associated with AI and cybersecurity.

In conclusion, the extended analysis highlights the disparities and challenges faced by the global south in relation to technology and AI. It underscores the need for capacity building, affordability, accessibility, inclusivity, and responsible governance to ensure equitable benefits and mitigate risks. Ultimately, the goal should be to empower all nations and individuals to navigate the evolving technological landscape and foster a globally inclusive and secure digital future.

Tatiana Tropina

The discussions surrounding AI regulation and challenges in the cybersecurity realm have shed light on the importance of implementing risk-based and outcome-based regulations. It has been recognized that while regulation should address the threats and opportunities presented by AI, it must also avoid stifling innovation. Risk-based regulation, which assesses risks during the development of new AI systems, and outcome-based regulation, which aims to establish a framework for desired outcomes, allowing the industry to achieve them on their own terms, were highlighted as potential approaches.

There are concerns regarding AI bias, accountability, and the transparency of algorithms. There is a need to address these issues, along with the growing challenge of deepfakes. The evolving nature of AI technology poses challenges such as the generation of malware and spear-phishing campaigns. Future challenges include AI bias, algorithm transparency, and the impact of deepfakes. These concerns need to be effectively addressed to ensure the responsible and ethical development and deployment of AI.

Cooperation between industry, researchers, governments, and law enforcement was emphasized as crucial for effective threat management and defense in the AI domain. Building partnerships and collaboration among these stakeholders can enhance response capabilities and mitigate potential risks.

While AI offers significant benefits, such as its effective use in hash comparison and database management, its potential threats and misuse require a deeper understanding and investment in research and development. The need to comprehend and address AI-related risks and challenges was underscored to establish future-proof frameworks.

The discussions also highlighted the lack of capacity to assess AI and cyber threats globally, both in the global south and global north. This calls for increased efforts to enhance understanding and build expertise to effectively address such threats on a global scale. Furthermore, the importance of cooperation between the global north and south was stressed, emphasizing the need for collaboration to tackle the challenges and harness the potential of AI technology.

The concept of fairness in AI was noted as needing redefinition to encompass its impact globally. Currently, fairness primarily applies to the global north, necessitating a broader perspective that considers the impact on all regions of the world. It was also suggested that global cooperation should focus on building a better future and emphasizing the benefits of AI.

Regulation was seen as insufficient on its own, requiring accompanying actions from civil society, the technical community, and companies. External scrutiny of AI algorithms by civil society and research organizations was proposed to ensure their ethical use and reveal potential risks.

The interrelated UN processes of cybersecurity, AI, and cybercrime were mentioned as somewhat artificially separated. This observation underscores the need for a more holistic approach to address the interdependencies and mutual influence of these processes.

The absence of best practices in addressing cybersecurity and AI issues was recognized, emphasizing the need to invest in capacity building and the development of effective strategies.

The proposal for a global treaty on AI by the Council of Europe was deemed potentially transformative in achieving transparency, fairness, and accountability. Additionally, the EU AI Act, which seeks to prohibit profiling and certain other AI uses, was highlighted as a significant development in AI regulation.

The importance of guiding principles and regulatory frameworks was stressed, but it was also noted that they alone do not provide a clear path for achieving transparency, fairness, and accountability. Therefore, the need to further refine and prioritize these principles and frameworks was emphasized.

Overall, the discussions highlighted the complex challenges and opportunities associated with AI in cybersecurity. It is crucial to navigate these complexities through effective regulation, collaboration, investment, and ethical considerations to ensure the responsible and beneficial use of AI technology.

Session transcript

Babu Ram Aryal:
Good evening. Tech team, is it okay? Welcome to workshop number 86 in this hall. It is a great pleasure to be here discussing artificial intelligence and cyber defence, especially from a developing-country perspective. I am Babu Ram Aryal; by profession I am a lawyer, and I have been engaged in various law and technology issues in Nepal. I would like to introduce my panellists this evening very briefly. Sarim is from Meta, where he leads Meta's South Asia policy team and is significantly engaged in AI, policy, and technology issues; he will represent the business perspective on this panel. My colleague Waqas Hassan is the lead of international affairs at the Pakistan Telecommunication Authority; he works on regulatory matters and will share a regional and, of course, a Pakistani regulatory perspective. My colleague Michael is from Zambia; he is a cyber analyst and cybercrime investigator, and he will represent the law enforcement perspective. Tatiana Tropina is an assistant professor at Leiden University, and she will represent the policy perspective, especially from a European viewpoint. Artificial intelligence has given all of us very significant opportunities. It has now become a big word; it is not new, but recently it has become a very popular set of tools and technologies, and the technology of artificial intelligence has also posed many threats. On this panel we will discuss how artificial intelligence could be beneficial, especially from a cybersecurity or defence perspective, and also the framework on the defence side for addressing the potential risks of artificial intelligence in cybersecurity and cybercrime mitigation. I will go directly to Michael, who directly experiences various risks and threats and handles cybercrime cases in Zambia.
Michael, please share your experience and your perspective, especially as you have been very engaged in the IGF. I know you have been a MAG member and engaged on the African continent as well. The floor is yours, Michael.

Michael Ilishebo:
Good afternoon, good morning, and good evening. I know the time zone for Japan is difficult for most of us who are not from this region. Of course, in Africa, it’s morning. In South America, it’s probably evening. All protocols observed. So, basically, I am a law enforcement officer working for the Zambia Police Service in the Cybercrime Unit. In terms of the current crime landscape, we’ve seen an increase in crimes that are technology enabled. We’ve seen crimes that you wouldn’t expect to happen, but at the end of it all, we’ve come to discover that most of these crimes are enabled by AI. I’ll give you an example. If a person who’s never been to college or who’s never done any computing course is able to program computer malware or a computer program that they’re using for their criminal intent, you’d ask, what skills have they got to execute such? All we’ve come to understand is that everything has been enabled by AI, especially with the coming of ChatGPT and other AI-based tools online, which basically are free. With time on their hands, they are able to come up with something that they may execute in their criminal activities. So, this itself has posed a serious challenge for law enforcers, especially on the African continent and mostly in developing countries. Beyond that, of course, we handle cases, we handle matters where it has become difficult to distinguish a human element from an artificial intelligence generated image or video. So, as a result, when such cases go to court or when we arrest such perpetrators, it’s a grey area on our part, because AI technologies are able to do much, much more and much, much faster than a human can comprehend. So as a result, from the law enforcement perspective, I think AI has caused a bit of some challenges. What kind of challenges have you experienced as a law enforcement agency?
So basically, it comes down to the use of digital forensic tools. I’ll give an example. A video can be generated that would appear to be genuine, and everyone would believe it, and yet it is not. You can have cases which have to do with freedom of expression, where somebody’s voice has been copied, and if you literally listen to it, you’d believe that indeed this is the person who has been issuing this statement, when in fact it is not. Even with emails, you can receive an email that genuinely seems to come from a genuine source, and yet probably it’s been AI written, and everything points to an individual or to an organization; at the end of the day, as you receive it, you have trust in it. So basically, there are many, many areas. Each and every day, we are learning new challenges and new opportunities for us to catch up with the use of AI in our policing and day-to-day activities, as we also try to distinguish AI activities from human interaction activities.

Babu Ram Aryal:
Thank you, Michael. I’ll come to Tatiana. Tatiana is a researcher and significantly engaged in cybersecurity policy development. As a researcher, how do you see the development of AI, especially on cybersecurity issues, as you represent the European stakeholder perspective on our panel? So what is the European position on these kinds of issues from a policy perspective, and what kinds of issues in policy frameworks are being dealt with by European countries? Tatiana.

Tatiana Tropina:
Thank you very much. And I do believe that in a way the threats and the opportunities that artificial intelligence brings for cybersecurity, or security in general, let’s say if we put it as protection from harm, might be almost the same everywhere, but the European Union indeed is trying to deal with them and foresee them in a manner that would address the risks and harms. And I know that the big discussion in policy community circles and also in academic circles is no longer the question of whether we need to regulate AI for the purpose of security and cybersecurity or whether we do not. The question is, how do we do this? How do we protect people and also systems from harm while not stifling innovation? And I do believe that right now there are two approaches that are discussed, or mostly we are targeting two things. One is risk-based regulation: when new AI systems are going to be developed, the risk is going to be assessed, and then, based on risk, regulation will either be there or not. The other is outcome-based regulation: you create some framework of what you want to achieve and then give industry some ability to achieve it by their own means, as long as they protect from harm. But I do believe, and I would like to second what the previous speaker said, that from the law enforcement perspective, from the crime perspective, the challenges are so many that sometimes we are looking at them and, how do I say it, not that our judgment is clouded, but we have to do two things. We have to address the current challenges while foreseeing the future challenges, right?
So I do believe that right now we are talking a lot about risks from large language models, generation of spear phishing campaigns, generation of malware, and this is something that is already happening right now and is hard to regulate. But if we are looking to the future, we have to address a few things in terms of cybersecurity and risks. Sorry, yeah. Well, first of all, AI bias, and the accountability and transparency of algorithms. We have to address the issue of deepfakes, and here it goes even beyond cybersecurity; it goes to information operations, into the field of national security. So this is just my baseline, and I’m happy to go into further discussions on this.

Babu Ram Aryal:
Thank you, Tatiana. Now, for the initial remarks, I’ll come to Sarim. Among industry players, Meta is one very significant player, and the Meta platform is also very popular, as well as one about which many risks have been complained. And not only the Meta platform, you are just here, that’s why I mentioned it, but in many countries there are complaints that these platforms are not contributing, that they have just been doing business while their technologies are being misused by bad people. So there are a few angles: the business perspective, the technology perspective, as well as the social perspective. So as a technology industry player, how do you see the risks and opportunities of artificial intelligence, especially on the topic that we have been discussing? And what could be the response from industry in addressing these kinds of issues? Sarim.

Sarim Aziz:
Thank you, Babu, for the opportunity. I think this is a very timely topic. There’s been a lot of debate around the opportunities with AI and excitement around it, but also challenges and risks, as our speakers have highlighted. I just want to reframe this discussion from a different perspective. From our perspective, we have to actually understand the threat actors we’re dealing with. They can sometimes use quite simple methods to evade detection, but sometimes very sophisticated methods, AI being one of them. We have a cybersecurity team at Meta that’s been trying to stay ahead of the curve of these threat actors. And I want to point to a tool, which is our adversarial threat report, which we produce quarterly. That’s a great information tool out there, for policy as well, to understand the trends of what’s going on. This is where we report in-depth analysis of influence operations that we see around the world, especially around coordinated inauthentic behavior. If you think about the issues we’re discussing around cybersecurity, a lot of that has to do with inauthentic behavior: someone who’s trying to appear authentic, from a phishing email to a message you might receive, and hacking attempts and other things. So that threat report is a great tool, and that’s something we do on a quarterly basis. We’ve been doing that for a long time. We also did a state of influence ops report between 2017 and 2020 that shows the trends of how sophisticated these actors are. But from our perspective, I think we’ve seen three things with AI from a risk perspective that honestly do not concern us as much. I’ll explain why. One is, yes, as Michael mentioned, the most typical use case: AI generated photos, where you try to appear like a real profile, right? But frankly, if you think about it, that was happening even before AI.
In fact, most of the actions that we were taking on fake accounts previously were on accounts that all had profile photos. It’s not like they didn’t have a photo. So whether that photo was generated by AI or of a real person shouldn’t matter, because it’s actually about the behavior. And I think that’s my main point: the challenge with gen AI is that we get a little bit stuck on the content, and we need to change the conversation to how we detect bad behavior, right? So that’s one. The second thing we notice is that because gen AI is in the hype cycle, the fact that almost every session here at IGF is about AI, it becomes an easy target for phishing and scams, because all you need to do is say, hey, click on this to access ChatGPT for free. And because people have heard of AI and think it’s cool, they’re more willing to get duped by those kinds of hype cycles, which is common with things like AI. The third is, as I think Michael alluded to, and Tatiana as well, that it does make it a little bit easier, especially I would say for non-English speakers who want to scam others, to use gen AI, whether you want to make ransomware or malware, because now you’ve got a tool that will help you fix your language and make it look all pretty. So you’ve got a very nice auto-complete spell checker that can make sure your things are well written. So those are the three high-level threats, but honestly, what I would say is that we haven’t seen a major difference in our enforcement. And I’ll give you an example. For quarter one of this year, we also have a transparency report, where we measure ourselves and how good our AI is. And I think that’s the point I’m trying to get to: we are more excited about the opportunities AI brings in cybersecurity, helping cyber defenders and helping keep people safe, than concerned about the risk. And this is one example.
99.7% of the fake accounts that we removed in quarter one of this year on Facebook were removed by AI. And if I give you that number, it’s staggering: 676 million accounts were removed in just one quarter by AI alone, right? That’s the scale. So when you talk about detection at scale, it has nothing to do with content; I just want to bring it back to that. What we detected was inauthentic behavior, fake behavior. It shouldn’t matter whether your profile photo was from ChatGPT, or your text either, because once you get into the content, you’re getting into the weeds of what the intent is, and you don’t know the intent, right? Whether it’s real or… And in fact, I’ll also point to the fact that some of the worst videos, you talked about fake videos, are actually not the gen AI ones. If you look at the ones that went the most viral, they are real videos, and it’s the simplest manipulations that have fooled people. I’m pointing to the US Speaker of the House, Nancy Pelosi, and her video that went viral. All that they did was slow it down, and they didn’t use any AI for that. And that had the highest negative impact, because people believed that there was a problem, right, with the individual, which clearly wasn’t the case. It was an edited video. So I guess what I’m trying to say is that the bad actors find a way to use these tools, and they will find any tool that’s out there. So we really have to get focused on the behavior and detection piece, and I can get into that more. That’s it for now.

Babu Ram Aryal:
Thanks, Sarim. It’s a very encouraging thing that 99% of fake accounts are removed by AI. And what about the reverse situation? Is there any intervention from AI on the negative side of the platform?

Sarim Aziz:
Like I said, I mentioned the three areas. Obviously, when you get into large language models, you know, I also want to make the point, getting to solutions a bit early, that we believe the solution here is more people in the cybersecurity space. You know, we talk about amplifying the good; we need to use it for good and use it for keeping people safe. And we can do that through open innovation, an open approach, and collaboration, right? So of course the risks are there, but if you keep something closed and you only give access to a few companies or a few individuals, then bad actors will find a way to get it anyway, and they will use it for bad purposes. But if you make sure it’s accessible and open for cybersecurity experts, for the community, then I think you can use open innovation to really make sure the cyber defenders are using the technology and improving it. And this 99.7% is an example of that. I mean, we open source a lot of our AI technology, actually, for communities and for developers and other platforms to use as well.

Babu Ram Aryal:
Thanks, I’ll come back to you in the next round of Q and A. Waqas, you are in a very hot seat. I know regulatory agencies are facing lots of challenges from technology, and now telecom regulators have very big roles in mitigating the risks of AI in telecommunications and, of course, the internet. So from your perspective, what do you see as the major issue, as a regulator or as a government, when artificial intelligence is challenging the platform in a way that makes people feel at risk, and of course from your Pakistani perspective as well, and how have you dealt with this kind of situation in your country? Can you say some lines on this?

Waqas Hassan:
Yeah, thanks, Babu. Actually, thanks for setting up the context for my initial remarks here, because you already said that I’m in a hot seat. Even now I’m in the middle of the platform, the police and the researcher, even in this seating. With regulators it’s a bit of a tricky job, because on one hand we are connected with the industry, and on the other hand we are directly connected with the consumers as well. This is more like a job where you have to do a balancing act whenever you’re taking any decisions or moving forward on anything. With cybersecurity itself being a major challenge for developing countries for so long, this new mix of AI has actually made things more challenging. You see, the technology has usually, primarily and inherently been developed in the West. And that technology being developed in the West means that we have a first mover disadvantage in developing countries, because we’re already lacking on the technology transfer part. What happens is that, because of the internet and because of how we are connected these days, it is much easier to get any information, which could be positive or negative. And usually the cybersecurity threats, or the elements that are engaged in such kinds of cybercrimes, are usually ahead of the curve when it comes to defenses. Defense will always be reactive, and for developing countries, we have always been in a reactive mode. Meta has just mentioned that their AI model, or their AI project, has been able to bring down fake accounts on Facebook within one quarter, 99.7% of them by AI. That means that they have such advanced and tech-savvy technology, and such resources available to them, that they were able to achieve this huge and absolutely tremendous milestone, by the way.
But can you imagine something like this, or some solution like this, in the hands of a developing country, with that kind of investment to deploy something which can actually, you know, serve as a dome or a cybersecurity net around your country? That’s not going to happen anytime soon. So what does it come down to then for us as regulators? It comes down to, number one, removing that inherent fear of AI which we have in the developing countries. Although it is absolutely tremendous to see how AI has been bringing in positive things, that inherent fear of any new technology is still there. This is more related to behavior, which Sarim was mentioning. And I think it also points to one more thing, which is intention. I think intention is what leads toward anything, whether it is in cyberspace or off the cyberspace. I think what developing countries need, to tackle this new form of cybersecurity, I would call it, with the mix of AI, is to have more capacity: more institutional capacity, more human capacity, a more national collaborative approach driven by something like a common agenda of how to actually go about it. We are so disjointed even in our national efforts for a secure cyberspace that doing something at a regional level seems like a far sight to me right now. Just to sum it up, for example, in Pakistan we have a national cybersecurity policy as well. We do have a national centre for cybersecurity. PTA has issued regulations on critical telecom infrastructure protection. We do threat intelligence sharing as well. There is a national telecom CERT as well. There are so many things that we are doing, but if I see the trend, that trend covers maybe the last three or four years, when things have actually started to come out. But imagine if these things had been happening ten years back; we would have been much more prepared to bring AI into our cybersecurity postures now.
So from a governance, cybersecurity, or regulatory perspective, it is more about how we tackle these new challenges with a more collaborative approach, looking to more developed countries for technology transfer and building institutional capacity to address these challenges. Thank you.

Babu Ram Aryal:
Thank you, Waqas. Actually, I was about to come to capacity, and Waqas, you just mentioned the capacity building of people. Tatiana, I would like to ask you: how much investment in policy frameworks and capacity building is going into framing the legal and ethical issues in artificial intelligence? Are industries contributing to manage these things, and what about the government side? So what is the level of capacity in policy research on framing the way out for these artificial intelligence and legal issues? It’s working, right?

Tatiana Tropina:
Thank you very much for the question. And I must admit, I’ve heard the word investment, and I’m not an economist, so I’m going to talk about people, hours, efforts, and whatever. So first of all, when it comes to security, defense, or regulation, I think we need to understand that to address anything and to create future frameworks, we need to understand the threat first, right? So we need to invest in understanding threats. And here, and I think I mentioned this before, it’s not only about harms as we see them, for example, harm from crime, harm from deepfakes. It’s also harm that is caused by bias, an ethical issue, because an artificial intelligence model brings only as much good as the model itself, the information you feed it, and the final outcome. And we know already, and I think this is incredibly important for developing countries to remember, that AI can be biased. And technologies created in the West can be doubly biased once technology transfer and adoption happens somewhere else. For example, when I heard about Meta removing accounts based on behavioral patterns, I really would like to know how these models are trained. Be it content, be it language, be it behavioral patterns, do they take into account cultural differences between languages, countries, continents, and whatever? And here, I do believe that what we talk about in terms of cooperation between industry, researchers, governments, and law enforcement is crucial. Just a few examples. Scrutiny, external scrutiny of algorithms: and I believe that industry and government, all three of you, will agree with me that it is incredibly important, once the algorithm is created and trained, to open it to scrutiny from civil society and from research organizations, because you need somebody to see from the outside whether it is ethical. You know, to me, testing algorithms for ethics simply by deploying them is the same as testing medicine or cosmetics on animals.
We don’t do this anymore. So, it’s not only building capacity itself, it’s adopting a completely new mindset for how we are going to do this. And in terms of investment in the creation of future-proof frameworks, you really need to see the whole picture and then ask, okay, what kind of threats am I addressing today, and what kind of threats might I foresee tomorrow? And this is why I was saying that it is hard to think about future-proof frameworks, because, indeed, defense will always be a bit behind. But if you set aside the technology itself, technology can change tomorrow, but you can think about how you frame harm and what you want to achieve in your innovation. And then say, okay, Meta, I want to achieve this level of safety. If you see this risk, please provide this safety, and leave it to Meta, and make Meta open in this also to external research; and this cooperation might bring you somewhere to the point where it would be more ethical, more for good in terms of defense. And I also want to say that the existential fear of AI exists everywhere, I believe, and this is why every second session here is about AI, just because we are so scared. But I also do believe that we cannot stop what is going on. We really have to invest here; I’m talking, again, not about money but about people. And also, if I may, if I have not spoken for too long yet, I think that there are so many issues here that we have to detangle. And again, look at harms and look at the algorithm itself. For example, the use of algorithms in the creation of spear phishing campaigns or malware. We know how to address it. We need to work with prompt engineers, because the algorithm creates malware only as good as the prompt you give it. And if a year ago you could say to ChatGPT, just create me a piece of malware or ransomware, and it would do it, now you cannot do this. You need to split it into many, many prompts. So we have to make this untenable for criminals.
We have to make sure that every tiny prompt, every tiny step that they can execute in the creation of this malware by an algorithm, will be stopped. And yes, it is work, but this is work we can do. And so it is with any other harm. Sorry for speaking for too long. Thank you.
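Tatiana’s point about making every tiny step untenable can be illustrated with a minimal prompt-screening sketch. This is a hypothetical toy, not any vendor’s actual safety layer: the blocked patterns and the session risk limit below are invented for illustration, and real systems rely on trained classifiers and much richer signals rather than keyword lists.

```python
import re

# Hypothetical blocked-intent patterns (illustrative only).
BLOCKED_PATTERNS = [
    r"\bkeylogg(er|ing)\b",
    r"\bransom(ware)?\b",
    r"\bencrypt .*victim",
    r"\bdisable (antivirus|defender)\b",
    r"\bexfiltrat\w*\b",
]
SESSION_RISK_LIMIT = 2  # refusals tolerated before the session is cut off


def screen_session(prompts):
    """Check each prompt in a session individually, so a harmful request
    split across many small steps is still caught; return the prompts
    that would actually be answered."""
    refusals, answered = 0, []
    for p in prompts:
        if any(re.search(pat, p, re.IGNORECASE) for pat in BLOCKED_PATTERNS):
            refusals += 1                  # refuse this step outright
            if refusals >= SESSION_RISK_LIMIT:
                break                      # accumulated risk: end the session
            continue
        answered.append(p)
    return answered
```

The session-level counter is the point of the sketch: even if each individual prompt looks minor, repeated refused steps shut down the whole exchange.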

Babu Ram Aryal:
It’s absolutely fine. Thank you very much for bringing more issues to the table. That was a very interesting response from Tatiana: defining what harm is and how we understand it, and then setting it out. Previously, Waqas mentioned the fear of AI. So do we have any fear of these things from technology platforms like yours? How are you handling this kind of fear and risk technologically? I don’t know whether you will be able to respond from the technological side, but still, from your platform perspective.

Sarim Aziz:
I think, yeah, I mean, any new tech can seem scary, but I think we need to move beyond that. And, as Tatiana and others mentioned, the existential risk always becomes a distraction in the conversation. I think there are near and short-term risks that need to be managed. And on approaches, I think there are some really good principles and frameworks out there, with the OECD principles, about fairness, transparency, accountability; I mean, the White House commitments as well. So there are good policy frameworks for countries to look at, and they certainly need to be localized to every region. But there are plenty of good examples, like the G7 Hiroshima process, that I think industry generally is supportive of, in terms of making sure that we build AI responsibly and for good. But to me, the bigger question, the harms, are sort of clear. The idea, I think, now is: how do we get this technology into the hands of more people who are working in the cybersecurity space? Because if you think about the cybersecurity space 20 years ago, it was also quite closed. But now you have a lot more collaboration and open innovation happening in it. It took 20 years for us to realize that, actually, keeping cybersecurity closed to a few does not help, because the bad actors get this stuff anyway and then you are just defenseless against them. So I think the same thing has to happen with AI. It’s going to be tough, but I think governments and policymakers need to incentivize open innovation. When you have a model that’s closed, you don’t know how it was trained, you don’t know how it was built; it makes it difficult for the community to figure out what the risks are. And one of the things we did, for example, was make our model open source. It was launched just in July of this year, and already in one month it was downloaded by 30,000 people.
Now, of course we did red teaming on it and we tested it, but no amount of testing is going to be perfect, and the only way to get it tested properly is to get it out there in the open source community, where responsible players have access to it and know what they’re doing. And that’s the beauty of AI; I think that’s a game changer. Because, as was mentioned, there’s a capacity issue. Yes, there is a capacity issue. We have a capacity issue as Meta; you can’t hire enough people to remove the bad content. AI helps us do that. You could have millions of people looking at what’s on the platform and removing content, and it would never be enough. AI helps us get better. You still need human review, you still need experts who know what they’re doing, but it helps them be more efficient and more effective. And the same open innovation model can help developing countries catch up on cybersecurity, because now you don’t need thousands and thousands of cybersecurity experts; you just need a few who have access to the technology. And that’s what open innovation and open sourcing does, which is what we’ve done with our model. We even submitted our model to DEF CON, which is a cybersecurity conference in Las Vegas, and we said, you know, break this thing, find the vulnerabilities. What are we not doing? Where are the risks? We’re waiting for the report, but that’s how you make it better. Of course, we did our best to make sure that it takes care of CBRN risks, you know, chemical, biological, radiological, nuclear risks, but there are other risks that we may not have seen. So I think this is where putting it on open source and giving access to more researchers comes in: it doesn’t matter whether you’re in Zambia or Pakistan or any other country, you have access to the same technology that Meta has built. And that’s how we get to an open innovation approach. There are many other language models.
I’m not going to name them, but they are not open, and Meta’s is. So I think that’s why we need to get policymakers to incentivize open hackathons on these kinds of things, break this thing, and create sandboxes to test on safely, because a lot of the testing you can do is only based on what’s publicly available. If governments have access to information, they can make it available to hackers to say, okay, use this language model and see if we can do this, in a safe environment, obviously, ethically, without violating anybody’s privacy and things like that. So I think that’s where we need to focus the policy discussion.

Babu Ram Aryal:
Thanks, Sarim. I think one interesting issue is that we are discussing from the developing country perspective, right? This is our basic objective, and there are opportunities for all countries. Access is always there, as you mentioned, Sarim, but there are big gaps between developing countries and developed countries in capacity. We have been talking about this, and especially if I see it from the Nepalese perspective, we have very limited resources, technology, as well as human resources, and that is a big challenge for this defense. So Michael, what is your personal experience leading from the front, what is the capacity of your team, and what do you see as the gap between developing countries and developed countries in the capacity to address these issues?

Michael Ilishebo:
So basically my experience is probably shared by all developing countries. We are consumers of services and products from developed countries. We haven’t yet reached the stage where we can have our own homegrown solutions for some of these AI language models, where we can maybe localize them or train them on our own data sets. Whatever we are using, or whatever is being used out there, is a product of the Western world. So basically, one of the major challenges that we’ve encountered through experience is that the public availability of these language models in itself has proved to be a challenge, in the sense that anyone out there can have access to the tools. It simply means that they can manipulate them to an extent for their criminal purposes. As reported by Meta, in the first quarter of their use of the language model they are using, they got close to a billion fake accounts. Am I correct? Close to, no, no, yeah. Whatever it was, it could be images, it could be anything that was not meeting the standards of Meta. If you look at those numbers, those numbers are staggering. Now imagine if some of the information that Meta has brought down, because of ethical and probably safety and other concerns, were deployed to a third world country that has no capacity at all to filter that which is correct from that which is not. It is becoming a challenge. As much as the crime trend is increasing,
and with the borderless nature of the internet, the AI models have really become something where you have to weigh the good and the bad. Of course, the good outweighs the bad, but again, when the bad comes in, the damage it causes within a short period of time outshines the good. So at the end of it all, there are many, many challenges that we face through experience. If only we could be at the same level as developed countries in terms of the tools they are using to filter anything that would probably sway public opinion, in terms of misinformation, in terms of hate speech, in terms of any other act that we may deem not appropriate for society, or any other act that is probably a tool for criminal purposes.

Babu Ram Aryal:
Thanks, Michael. Waqas would like to intervene on this issue.

Waqas Hassan:
I think, as already mentioned, the pace at which the threats are evolving is, I think, unequal to the pace at which our defense mechanisms are improving. And why is this happening? This is because the forensics is not as fast as the crimes are happening. As Michael has already mentioned, it’s a good thing that these tools or these models are open source, but at the same time, these models are equally available to people who want to misuse them as well. Now, when the capacity of people who want to misuse it outweighs the capacity of people who have to defend against it, you find incidents and you find situations where we eventually say that AI is bad for us or bad for society. But when we are better prepared, we are proactive; what Facebook did is a sort of proactive thing. Rather than letting those accounts do something which would eventually become a big fiasco, they actually took them down before something would happen. That is something in which developing countries are usually lagging behind: doing cybersecurity, or having their cyber defense, in a proactive mode rather than a reactive mode. I am not saying that we are not prepared, and I am not saying that there is no proactive approach there. There is. But that proactive approach is hugely dependent on what kind of tools, technologies, knowledge, resources and investment are available to developing countries, rather than just saying, you know, okay, fine, we are taking a proactive approach and we are doing these things. I mean, Michael is at the forefront of everything. I think everybody would know that the kinds of threats that are emerging now are much more sophisticated than they ever were before. Are we as sophisticated and as prepared as before? I leave that question on the table. Thank you.

Sarim Aziz:
Can I just add a perspective? So I'm coming back to my introduction: I don't think the risk vectors have changed. Sorry, you want to add something? Okay. I mean, the bad actors who want to cause harm are using the same vectors they were using before; I don't think that changes just because of GenAI. Phishing is a good example. Okay, fine, they can now have a much better-written email that seems real, with logos that look real and whatever, right? But that's not how you solve phishing. You solve phishing by making authentication credentials one-time use. Because any one of us, the most educated person in this room, can be phished. If you're in a rush, you don't have time to check the email address; you just read something, it looks real, and you're going to click on it. We've all been there. We've all done it, right? I'm going to raise my hand. So the threat vectors in terms of what you're talking about haven't changed, and the same goes for fake accounts. Our fake account detection doesn't care how real your photo is or isn't. It's based on AI, it's based on behavior. And yes, of course, with 3.8 billion users we have to be careful, but this is the spammy behavior we're seeing: people creating multiple accounts on the same device, or sending 100 messages in a minute and spamming people, things like that. It's really bad behavior, and it doesn't matter what country you're from or what culture you're from; it's wrong. That kind of thing is universal, right? And the same with phishing, it's quite universal. So yes, there are certain risks, and the same with NCII, non-consensual intimate imagery. NCII was there before GenAI. You can use Photoshop for that; you don't need GenAI. And unfortunately, that's the biggest harm we see.
That's the biggest risk, since we talked about risk. And that's a separate topic, where I'm speaking on a panel on child safety as well, where you need collaboration. We have an initiative called StopNCII.org, and again, this is where AI helps. If anyone is a victim of NCII, or you know a victim whose pictures have been compromised and someone is blackmailing them, you can go to StopNCII.org and submit that video or image. And we use AI to block it across all platforms, all services. This is the power of AI, right? Even if the image is slightly changed, because we take that hash and match it. I think AI helps us prevent a lot of harm. Whereas without GenAI, a bad actor can easily do the same thing; GenAI might make it a little bit easier, or higher quality, but the quality of the impersonation or of the intent doesn't really change the risk factor.
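The matching Sarim describes, recognizing a submitted image across platforms even when it has been slightly altered, is commonly done with perceptual hashing compared by Hamming distance. The sketch below is only an illustration of that general idea, not StopNCII's actual pipeline (which is not specified in this session); the `average_hash` function, the 8x8 grid input, and the distance threshold are all illustrative assumptions.

```python
# Illustrative perceptual-hash matching: a tiny "average hash" over an
# 8x8 grayscale grid, compared by Hamming distance so that slightly
# altered copies of an image still match. NOT the actual StopNCII
# system; names and the threshold here are assumptions.

def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grid of grayscale
    values (a real system would first downscale the image to 8x8)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # One bit per pixel: 1 if at or above the mean brightness.
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

def matches(h1, h2, threshold=10):
    """Treat two images as the same if their hashes differ in few bits."""
    return hamming(h1, h2) <= threshold

# A toy gradient image and a slightly brightened copy of it.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
altered = [[min(255, p + 5) for p in row] for row in original]

h_orig = average_hash(original)
h_alt = average_hash(altered)
```

Because the hash encodes each pixel's brightness relative to the image's own mean, uniform edits like brightening barely move the bits, so the altered copy still falls within the match threshold, while an unrelated (for example, inverted) image does not.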

Tatiana Tropina:
Tatiana? Yeah. Yeah, thank you. What I wanted to say largely goes in line with what you said, because I wrote down one line while I was listening: misuse will always happen. We have to understand that we should stop fixating on the technology itself. Any technology will be misused. If you want to create bulletproof technology, you should not create any technology at all, because there will always be people who misuse it, who find a way to misuse it. Crime follows opportunity. That's it. Any technology will be misused. And about phishing, for example: the human is always the weakest link. You're not only fooling the system, you're fooling humans. And in the same way, we have to talk about harms. Here I go back to one of my intro remarks: we have to focus on harms, not on technology per se. We have to see where the weakest link is, what exactly can be abused, where harm is caused. And in this way, I strongly believe that AI can bring so much good. Thank you for reminding me about the project on non-consensual image sharing. Of course AI can do it automatically: you can compare hashes, you can have databases. But then again, when we look layer after layer, we can ask ourselves how this can be misused as well, and how that can be addressed, and so on and so forth. We should always ask questions. And I would like to remind us, again and again: it's not only about technology. Let's always remember that it is humans who are making mistakes and humans who are abusing this technology, and this is where we also have to build capacity. Not only in technological development, not only in regulatory capacity; after all, the whole chain of risk focuses at the end on humans: on humans developing technology, on humans developing regulation, on humans being targeted, on humans making mistakes. And this is where we have to look as well.

Babu Ram Aryal:
Thanks, Tatiana. Now I would like to open the floor. My colleague Ananda Gautam is moderating online, so if there is any question from the online participants joining this discussion, you can put it to the panel. I would also like to request participants here to share your questions with the panel. Yes, please introduce yourself briefly for the record.

Audience:
Hello everyone, I'm Prabhas Subedi from Nepal. The discussion has been so interesting; thank you so much, panel. I want to explore a little bit what we missed from today's discussion, and probably that is the capacity of individual countries to negotiate with big tech players, right? If you look at the present scenario, so many resources are being collected from the so-called third world, the global south, by the developed economies. And of course they are boosting their economies by deploying this sort of technology, and we have nothing. That is one of the main reasons we are not empowered, not capable of tackling this sort of challenge. And another thing, of course, is that the technology is so concentrated in the global north, and I am not sure that they care equally and inclusively about the large number of people living in the global south; the economy comes first. So what is happening today will continue in the AI-dominated time. That is my observation, and I would like to ask what yours is, from the panelists' side. Any specific resource person you would like to ask? Anyone can answer, thank you.

Sarim Aziz:
I mean, as I said before, first of all I agree with you: there is a way of making technology more inclusive, and that has to be by design. That is why I think principles matter when it comes to the AI frameworks out there, led by Japan and the OECD and the White House; they are about inclusivity, fairness, making sure there is no bias in there. But those are all policy frameworks. From a tech perspective, as I said, I think open innovation is the answer. And AI can be the game changer because, as I explained, it is out there. There is no reason why the same technology that we've open-sourced, which the Western countries now have, cannot also be accessed by researchers, academics and developers in Nepal and in other countries, in Africa. This is an opportunity to get ahead. AI is the game changer because it's about scale, about doing things at scale, especially when thinking about systems, protecting systems, and the threats you're talking about. It's not a problem you solve by throwing people at it. Of course you need capacity building and you need experts, but AI helps them be more efficient, more effective. So I'd love to see what the community does. Our model is only a few months old; it's called Llama 2. You can go and look at it, and there's a research paper along with it that explains how the model was built, because we've given it an open-source license under an acceptable use policy. There are derivatives already out of it. So you can't even use the language argument anymore, because the Japanese took that model and already made it into, they call it, I think, ELYZA; a university team in Tokyo has made a Japanese version of that model. We're excited to see what the community can do, and I think that's the way we can continue to innovate and make sure that nobody gets left behind.

Audience:
I do not completely agree with you, because you can already see that, for example, ChatGPT has a premium and a free version, and the majority of users are, of course, from the developed economies; it's quite difficult to afford. Such resources are not always openly and easily available. And if you are not accustomed to them, and if you are not well equipped with the resources, how can you be capable of tackling the upcoming challenges?

Sarim Aziz:
So I don't work for ChatGPT or OpenAI, so I can't speak for them, but our model is open source. It's already public, and anyone can basically build another ChatGPT competitor using it.

Babu Ram Aryal:
Thank you. Tatiana, he raised one interesting debate on the global north and the global south. How do you see it?

Audience:
Well, thank you very much, this is a very interesting debate in international relations. I am Dr. Mohammed Shabbir from Pakistan, representing civil society here, the Dynamic Coalition on Accessibility and Disability. On the debate going on here: as a student of international relations, I would agree that we do not live in an equal world. The terminologies, inclusivity, accessibility, all seem very fine on paper, but in reality, unfortunately, we live in a real world and not an ideal world where everyone is equal to one another. Waqas made a very valid point, and I want to ask that question of Waqas, and then I will seek the response from Meta. You talked about the transfer of technology. What sort of technology are you talking about here? And my question for Meta, and the global north, is: how far are they ready to share that technology with the global south when it comes to diversity and inclusivity? Not to mention the earlier point my friend raised about the price, and the free-plus-premium versions of different software out there in the market; those will remain. But what sort of technology are we talking about transferring? Of course, AI is a tool like any other tool. But when it was human against human, it was like a sharp knife that one person could use against another: a human using a tool against a human. This time, AI is being used not just as a computer, but as a computer against the human who is targeted. So the threat, as my friend from Meta says, is a real one, and the phishing example cannot be equated with it. I think this is something that we need to discuss. 
The response measures have to be as sharp, as quick and as fast as the technology we are developing here. But I would want to seek the response on my earlier point from Waqas and then from Meta. Thank you.

Waqas Hassan:
Okay, thank you. I think when we say technology, one of the examples, of course, is how Meta has just open-sourced their AI model, which any nation can use to develop their own models. What we're talking about, in my view, is standardization of these technologies. Once something gets standardized, it is available to everybody. That's how telecom infrastructure works across the world: if there is a standardized technology, it is easier for developing countries or developed countries, any interested party, to take advantage of it. Threat intelligence: what kind of threats are out there? What kind of issues are they dealing with? What kind of information sharing could there be? What kind of new crimes are being introduced? How is AI being misused? And how is that situation being tackled by the West? Technology itself is just a word. It is more about what you are sharing. Are you sharing information? Are you sharing the tools? Are you sharing experiences? Are you even sharing human resources? You mentioned that now it is human versus AI, but how about AI versus AI? Can we develop tools or AIs that can preempt attacks? I'm going back to the cyber warfare movies, which used to predict that in the future bots would be fighting against each other; we're not there yet. But if we are investing in AI for defense mechanisms, to improve the cyber security posture, like Meta has just done, that investment muscle is currently not available to developing countries to the same degree. So we have to look towards the West. And what they are developing is something that we need, and will need for the foreseeable future, in terms of the tools, the information, the experience sharing, and the threat intelligence that they have. Thank you. 
And I’ll leave it to Sarim to respond to the other part.

Sarim Aziz:
Thank you, Waqas. So I think it's a good question. Maybe I didn't set the context of what Llama 2 is. Llama 2 is a large language model, similar to OpenAI's ChatGPT, except the difference is that it's free for commercial use, and it's open source. The technology is available for any researcher, anyone, to deploy their own model within their own environment. If you've got the computational power, you can deploy it in your own cloud, on your own computers, or you could deploy it on Microsoft's Azure, or AWS, or any other. It's basically a large language model that helps you perform those automated tasks. But it's out there as open source, meaning that we invite the community to use it. It's free; we don't charge, and there's no paid version of it. Obviously, you have to agree to the conditions and to the Responsible Use Guide. But beyond that, yeah, that's what we launched just this year. We're excited to see how the community around the world uses it for different use cases, including use cases we didn't even anticipate. That's the beauty of open sourcing: we won't know how it will get used by different governments and institutions. Of course, we only make it better and safer through red teaming, through testing, all of that. The more cyber security experts tell us the vulnerabilities and use it, the more we'll improve it.

Babu Ram Aryal:
Thanks, Sarim. Tatiana, observing these two questions: I wanted to ask you about the debate on global south and global north capacity, and its impact on artificial intelligence and cyber defense issues.

Tatiana Tropina:
I must admit that I cannot speak for the global south, which is the global majority, right? It is hard for me to assess capacity there. But I can certainly tell you that even in the global north, if we call the global minority that, there are gaps in capacity in cyber defense when it comes to artificial intelligence. On the one hand, of course, if we're talking about expertise, we might point to some high-quality specialists and better testing and whatever. But believe me, the threat is still there, and there is a lack of understanding of what kind of threat it is in terms of national security, in terms of cyber operations. Because so much is connected in the global north, because people follow things on the internet so much, there is the question, for example, of deepfakes and elections. I love the story about the Nancy Pelosi video, because you don't have to change anything; you just have to slow it down or speed it up. So the question here, again, boils down to the capacity to assess the threats before you have the capacity to tackle them. And I do believe that right now, in the so-called global north, we have this problem as well: the capacity to understand the threat. Are we just saying, oh my God, it's happening? Or are we really disentangling it, looking at what is actually happening and then assessing it? I do believe there is indeed a gap between developing countries and developed countries in technological expertise, in what you can adopt, in how you can address it. But in terms of understanding the threat, we still lack capacity in the global north as well. We still lack understanding of the threat itself, and there is a lot of fear-mongering going on as well. So I do believe that, in this sense, we have to share this knowledge, this capacity, because the threat can 
vary from region to region, but at the same time the harm will be to people, be it elections, be it cyber threats, be it national security threats. And here I do believe that there is huge potential for cooperation between what you call the global north and the global south. And by the way, I do think that we have to come up with better terms.

Babu Ram Aryal:
Tatiana and I will come on cooperation. I’ll go to the question.

Audience:
Thank you for giving me the floor. My name is Ada Majalo. I'm coming from the Africa IGF as a MAG member. A very interesting session, really. I think when we talk about AI, most of the time it's us from the global south, or developing countries, who have the most questions to ask, because we have the bigger concerns. We are still tagging along. When it comes to AI, we are concerned about how inclusive it is and how accessible it is. For example, coming from an African context, we are still struggling with infrastructure. Take electricity: access to electricity is a problem. And you need to be online, you need to be connected, to be able to utilize most of these facilities that come with AI. We are already having those challenges, so it's difficult for us to either follow the trend or keep up with it. It also brings to mind that we have so many people who have no access to the internet at all, who don't even know what digital is. And we talk about inclusion: how do we bring those people along, and how can they keep up with the whole idea? There is always the concern: what are the risks, what are the challenges? How do we move away from the status quo? How do we follow suit? What are the risks for us, and what are the benefits we get? But then it comes back to understanding, to digital literacy: how people are digitally trained to understand the risks and the benefits, and how we practically come to catch up with the global north, which is far ahead of where we are coming from. 
There is also the issue of people trusting AI. Where I'm coming from, people will ask: is AI here to take our jobs? How much can we depend on AI without it unbalancing our creativity? Because when you are a consumer of AI, you are just consuming, receiving and receiving, so does that limit your creativity? How can we preserve the creativity of the human being? It's a bit off balance, but it's good to bring this to the table. As we move forward, there may be people left behind, but we must see how to draw them along. And this is something that I just wanted to put out there. Thank you.

Babu Ram Aryal:
Thank you very much. Is there anything else you would like to address? I have one important point on cooperation. We started with the global north and global south, and, talking from a government perspective, how we can build up cooperation and address these issues at national, regional and global level. What could be a possible framework for addressing them? Tatiana?

Tatiana Tropina:
Okay. Sorry. I think that we already mentioned the principles, and they are, okay, not that global. But I absolutely loved the previous intervention, I'm sorry, I didn't catch the name. If we look at the principles of AI, for example fairness, transparency, accountability, and so on and so forth, I think that we really need to redefine what fairness means. Because right now, when we talk about fairness, we talk about the applicability of fairness to what you call the global north. If we look at fairness much more broadly, it will include the use of these technologies, and their impact, in any part of the world, in any part of society. It is hard for me to think about cooperation at the global level, where, you know, we all get together and happily develop something. I'm not sure that can really happen unless the threat is imminent. So I do believe that when we think about global cooperation, when we think about global capacity building, we should not start from threats. We should start from building a better future; we should start from benefits. And I think that fairness would be the best place to start. How do we make technology fair? How do we make every community benefit from this technology? I know that you probably want me to talk about more practical steps. I'll be honest: I do not have an answer to that question. Unless we frame the place we start from so that it includes fairness for every country, every region and every user, instead of threats, instead of "oh my God, we are all going to die tomorrow from AI" or "we are going to be insecure tomorrow", we will not get there. We should start with the benefit: how AI can 
benefit everybody, every population, every community, everyone. If we start from the premise of good, define it, somehow frame it, and widen that frame, which exists already in a way, I think that would be a much better place to start. In terms of practical steps, I do believe that the baby steps already taken by civil society and by industry, where certain players are throwing away the concept "move fast and break things" in favor of "let's be more open, more fair, more transparent, more inclusive", are already a good start. I do not know if attempts to regulate would bring us there; I do not think so, actually. I think that attempts to regulate should go hand in hand with what we do as civil society, as the technical community, as companies cooperating with each other. But to me, honestly, the first step would be to redefine the concept of fairness.

Waqas Hassan:
I'd like to add one thing. Tatiana has spoken about global cooperation; I'd like to take this from the other angle, the reverse angle, which is starting from the national level. Information sharing, threat intelligence sharing, developing tools and mechanisms, using AI for cyber defense: the starting point is, of course, your national-level policy, national-level initiatives, or whichever body you have in your country. For example, in Pakistan we do have such bodies. At the APAC level as well there are bodies: there is an Asia-Pacific CERT, for example, and they do cyber drills. The ITU also organizes cyber drills for countries to participate in. So there is some form of collaboration happening. How effective it is, I can't say for sure, because this particular mix of AI into cyber security is something I haven't seen on any agenda so far. But the starting point is, again, a discussion forum like the one we are sitting in right now, like the IGF: a forum for a national cyber security dialogue to start, which can then grow into a regional dialogue, which eventually gives input to the global dialogue. Whether it's human, whether it's AI, whatever it is, the starting point of every solution is a dialogue, in my opinion. So I think this is where collaboration comes in, and this is where information sharing comes in, especially for the developing countries. If you don't have the tools or technologies, at least what we have is each other to share information with. So I think that should be the starting point. Thank you.

Babu Ram Aryal:
Michael, on cooperation: how can we build cooperation on cyber defense, and what kind of strategies can we take on that?

Michael Ilishebo:
So basically, we've discussed a lot of issues. Mostly, we've looked at issues that have to do with fairness, accountability, and the ethical use of AI. There are many challenges that we face as law enforcers. But in all, this discussion will come back in a broader way in the future, when law enforcers themselves start deploying AI to detect, prevent and solve crime. Now that will affect all of us. Today we are looking at AI being used by criminals to target individuals to get money, or to spread fake news; but imagine you are about to commit a crime and AI detects that you are about to commit it. There is a concept of pre-crime. That will affect each and every one of us: a simple pattern of behavior will predict what crime you want to commit, or will commit in the future. That will bring up issues of human rights, issues of ethical use, a lot of issues, because at the end of it all, it will affect each and every one of us. Today we are discussing the challenges that AI-driven defense systems have brought, but in the future, not even a distant future, probably in just a few years' time, all of us will have to face being judged, being assessed, being profiled by AI. So as much as we may discuss other challenges, let us also focus on the future, when AI starts policing us.

Babu Ram Aryal:
One question from you. Yeah. A question, come in. Yeah, please, there's a mic there. Please introduce yourself. Thank you.

Audience:
Thank you for the insightful reflections. This is Santosh Sigdel from Nepal, Digital Rights Nepal. On the question of collaboration: I understood that we have to define the concepts first, and I think we also have to define the concept of cyber defense. If we are moving from cyber security to cyber defense, we have to have an open discussion, because defense is the job of government. And normally, in national security and defense, governments are the dominant actor, and they do not want all the actors at the table, citing national security. It has happened on lots of other issues, be it freedom of expression, be it other civil rights. So national security is their domain, the government's domain. And we are talking about promoting cyber defense, not cyber security, in developing countries. So within the developing countries, whom are we empowering? Are we empowering the government? Civil society? The tech companies? Which stakeholder are we talking about? I think we have to deconstruct the whole concept of cyber defense. And at the same time, we have to deconstruct "developing countries". We also talked about AI regulation, and in the discussion of cyber defense, is civil society now at the table to discuss these issues? I'll give you one example. Nepal recently adopted a national cyber security policy, and one of its provisions is that ICT-related technology or consultation would be procured through a different system than the existing public procurement process, and that process will be defined by the government. 
So now they have a new shield, a new layer, where the public or civil society cannot discuss what kind of technology the government is importing into the country, or what kind of consultation it is having on cyber security issues. While talking about these issues, another factor we have to discuss is the capacity of the government to implement: whether the kind of defense or capacity we are talking about is available within the national context, whether other governments are supporting them, or whether there is geopolitics at play. Because it has happened in many situations: cyber defense is part of geopolitics as well. So we have to consider that dimension too. As you said earlier, the technology is different, but the values are the same, so we have to focus on the values. I think the Human Rights Charter, or the internet rights and principles, are the basic values that we have to uphold. Somebody earlier spoke about the difference between having those values on paper and having them in the practical world. I think we have to start with the values we have already agreed on paper, and then make them practical in real life. Thank you.

Babu Ram Aryal:
Thank you. We have just eight minutes left. Can you please briefly share your thoughts?

Audience:
Hi, thank you. My name is Yasmin, from the UN Institute for Disarmament Research. I just have a quick question. I've been following the issue of AI and cybersecurity for a few years now, and I see that while both fields are inherently, deeply interconnected, the fact is that at the multilateral level, other than processes like the IGF, and even there only recently, most of the deliberations are done in silos. You have processes for cyber, you have processes on AI, but they don't really interact with each other. At the same time, I see increased awareness of the need for governance solutions that are multidisciplinary and address tech altogether. One of the approaches that has been proposed is responsible behavior, and as states try to develop their national security strategies along the lines of responsible behavior in using these technologies, I was wondering, for all the panelists, based on your respective areas of work, whether in the public or private sector: are there any best practices you would recommend or share with the audience here? When states are developing their national security strategies, what sort of best practices have, in your experience, worked to govern these technologies in the security and defense sector? Thank you.

Babu Ram Aryal:
Thank you very much for this question, but we have very little time, only six minutes left. A very quick intervention from Michael, and then takeaways from all the panelists.

Michael Ilishebo:
So basically, to touch a bit on what she asked about integrating into the defense system: of course, she mentioned national cyber security strategies. There is also a need for regulatory frameworks, for capacity building, collaboration, data governance, incident response, and ethical guidelines, within international cooperation of course. As she put it, we are discussing two important issues in silos: cyber security is discussed as a standalone topic without due consideration for AI, and in the same way AI is discussed in isolation without due consideration for cyber security and its impact. So there should be a point at which we discuss the two as a single subject, based on the impact and the problems we are trying to solve.

Tatiana Tropina:
Yeah, I would like to address this question, because to me it is a very interesting one, as somebody who deals with law, policy, and UN processes. First of all, I think this is not the first time that two interrelated processes have been artificially separated in the UN. For example, look at the cybersecurity and cybercrime processes; they are also separated. And then we have cybersecurity and AI, and so on and so forth. As to best practices, I will be honest here as well: I do not think there are best practices yet. We are still building our capacity to address these issues. There are, however, quite a few things I am looking at that could become best practices. First of all, when we are talking about guiding principles, I believe they are nice and good whenever they appear, but they do not really tell you how to achieve transparency, fairness, or accountability. So I am currently looking at the Council of Europe proposal for a global treaty on AI. It is quite general as a framework, but it might be a game changer from the human rights perspective, which plays into the fairness perspective in terms of agreed values. I am also looking at the EU AI Act, because that is where we might get to a point where, at the regulatory level, we prohibit profiling and some other AI uses. That might be a game changer and might become the best practice. And this is where I would be looking: not at the UN, but at the EU level. Thank you.

Babu Ram Aryal:
Sarim.

Sarim Aziz:
Thanks, Babu. Yeah, I think certainly you're right, it's still early days. As a member of the Partnership on AI, we met with other industry players, and I think multi-stakeholder collaboration, I know it has been mentioned in every session, is the solution. And there are good examples, North Stars to look at, in other areas. For example, take child safety or terrorism: AI is already doing some pretty advanced defensive work on both fronts. On child safety, the National Center for Missing and Exploited Children has a CyberTipline through which they inform law enforcement in different countries based on CSAM detected on platforms. That is a public-private partnership that becomes very key, where industry works with them and they enable law enforcement around the world on the issue of child safety and child exploitation. That is a good example of where we can get to on cybersecurity. The same with terrorism: GIFCT is a very important forum that industry is a part of, where we ensure that platforms are not used to spread terrorist content. So I think we have to go back to the harms: what is the harm we are trying to address, and do we have the right people focused on it? On the AI front, we are at the beginning stages; we need to have technical standards built, like we do in other areas, things like watermarking. What does that look like for audiovisual content? That can be fixed on the production side, if everybody reaches consensus, not just in industry, but across countries, including developing countries.
But I do think the opportunity in the short term for developing countries is to incentivize local researchers and developers, like we do with a bug bounty program, for example: giving them data to help figure out vulnerabilities and train systems for local purposes. That is the immediate opportunity, because these models are now open source and available.

Babu Ram Aryal:
Sorry, Waqas, you just have one minute left.

Waqas Hassan:
Okay, one minute. We look to the government to do most things, almost everything, but this weight of responsibility to be more cyber-ready has to be distributed, not only to the government but also among the users, the platforms, academia, everybody. I am circling back to the multi-stakeholder model that we have and the collaborative approach that we always follow. Even if, in the developing countries, we do not have the technological capacity to handle these challenges, what we do have, at least so far, is a shared responsibility that all of us can take on, to make sure that we are at least somewhat ready to address these challenges posed by AI and cyber security.

Babu Ram Aryal:
Thank you. We completed this discussion exactly on time. I would like to thank all of you. We discussed a couple of very significant things: one is identifying harm, and the other is capacity. These are the two major things. Without taking more time from the next session, I would like to thank all of you: our speakers, our online moderator, our audience on the online platform, and of course all of you at this very late evening session at the Kyoto IGF. Thank you very much. I conclude this session here now. Thank you very much.

Audience

Speech speed

173 words per minute

Speech length

2115 words

Speech time

735 secs

Babu Ram Aryal

Speech speed

117 words per minute

Speech length

1603 words

Speech time

820 secs

Michael Ilishebo

Speech speed

159 words per minute

Speech length

1475 words

Speech time

555 secs

Sarim Aziz

Speech speed

220 words per minute

Speech length

4068 words

Speech time

1108 secs

Tatiana Tropina

Speech speed

179 words per minute

Speech length

3064 words

Speech time

1028 secs

Waqas Hassan

Speech speed

159 words per minute

Speech length

2032 words

Speech time

766 secs