DC3 Community Networks: Digital Sovereignty and Sustainability | IGF 2023
Full session report
Audience
In marginalized and conflict-ridden areas like favelas in Rio de Janeiro, Brazil, telecom operators do not provide internet or telephony services due to security concerns. This lack of connectivity has become even more pronounced during the pandemic, making it increasingly challenging for residents to access vital resources and opportunities. However, community networks present a potential solution to connect these marginalized communities and offer alternatives.
Building antennas for community networks was initially considered a viable option for providing internet access in favelas. However, due to security risks and potential threats to life, it was decided not to proceed with this approach. This reflects the complex challenges and constraints faced in these areas.
Digital inclusion goes beyond simply implementing community networks. It also involves educating communities about the numerous opportunities that connectivity provides and how it can empower them to change their realities. This comprehensive approach aims to bridge the digital divide and ensure that everyone has access to the benefits brought by the internet.
Concerns about digital sovereignty have also been raised in the context of community networks. While community networks can foster independence and self-determination, some worry that emphasizing digital sovereignty may hinder cooperation and collaboration between different stakeholders. Striking a balance between digital sovereignty and collaboration is crucial for the success of community network initiatives.
Another important consideration is the lack of clarity regarding the definition and representation of the “community” in community networks. Understanding who constitutes the community and their role is essential for effective and inclusive decision-making and resource allocation. This issue highlights the need for greater transparency and inclusivity when implementing community network projects.
Moreover, there is concern about the reliance on mainstream platforms like Zoom and YouTube, which can contradict the ideals of digital sovereignty. While these platforms provide connectivity, their centralization compromises autonomy and control over digital infrastructure.
Community networks, while not a complete solution, can complement other initiatives and bring culture and communication to marginalized communities. They have the potential to act as intranets, providing connectivity and safeguarding those already connected.
Community networks can also be seen as an expression of digital sovereignty and self-determination. By allowing local communities to master their own digital destinies, community networks enable them to shape their digital experiences and use technology as per their preferences and needs.
The Internet Governance Forum (IGF) and its Dynamic Coalition on Community Connectivity (DC3) provide valuable platforms for discussing community connectivity issues and finding solutions. These initiatives facilitate collaboration and knowledge sharing among stakeholders interested in bridging the digital divide and promoting community networks.
In Nigeria, community networks have been successfully used for citizen science projects. Through community networks, internet connectivity was provided to monitor air pollution and oil spills. This example showcases the potential of community networks in addressing community issues and delivering value-added services.
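The value-added-service pattern described in the Nigerian example can be illustrated with a minimal sketch: sensor readings collected over a community network are aggregated locally before a pollution event is flagged. Everything below (the sensor values, the 35 µg/m³ threshold, the function name) is a hypothetical illustration, not part of the actual deployment:

```python
# Illustrative sketch: aggregating citizen-science PM2.5 readings on a
# community network node. All values and thresholds are assumed examples.
from statistics import mean

def flag_pollution(readings_ugm3, threshold_ugm3=35.0):
    """Return (average, flagged) for a list of PM2.5 readings in µg/m³.

    The 35 µg/m³ cut-off is an assumed example value, not a regulatory
    standard referenced in the session.
    """
    avg = mean(readings_ugm3)
    return avg, avg > threshold_ugm3

# Readings as they might arrive from low-cost sensors on the local network
sample = [28.0, 41.5, 39.0, 44.5]
average, flagged = flag_pollution(sample)
```

The point of the sketch is that the aggregation happens on local infrastructure: the community network carries the raw readings, and only a summary (or an alert) needs to leave the community.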
In conclusion, connectivity remains a significant challenge in marginalized communities, especially in conflict-ridden areas. Community networks offer a potential solution to address this issue and provide alternatives to traditional telecom operators. However, building and sustaining community networks requires addressing security concerns, promoting digital inclusion, balancing digital sovereignty with cooperation, ensuring representation, and expanding partnerships and collaborations.
Carlos Baca
The analysis reveals several important points made by the speakers regarding the relationship between capacity building, sustainability, and community networks. Firstly, it is highlighted that national schools of community networks have been established in several countries to teach, implement, and support community networks. One of the key focuses of these schools is to educate communities about sustainability and e-waste management. Through these capacity-building processes, communities can develop a critical understanding of technology and strategies for sustainability. As a result of participating in these initiatives, sustainable strategies have been developed, such as the creation of bamboo towers in Indonesia and the use of AI for efficient fishing and farming practices. These examples demonstrate how capacity building can lead to innovative and sustainable solutions.
Secondly, the speakers emphasize the significance of peer-to-peer learning and technical know-how in contributing to environmental sustainability within community networks. The analysis highlights community networks in Kenya and South Africa that have effectively transmitted technical knowledge among community members. This knowledge exchange has resulted in improved equipment usage and reduced waste. By harnessing the power of peer-to-peer learning, the need for external technical assistance is reduced, leading to decreased travel and waste. This indicates that empowering communities with technical skills and knowledge can lead to more sustainable practices and self-sufficiency.
Furthermore, the analysis underlines the transformative power of travelling and visiting other territories in inspiring communities to reevaluate their own territories. Notably, participants in South Africa who had never left their communities began to rethink their own territories after observing different ways of living in other areas. This insight suggests that exposing communities to diverse perspectives and experiences through travel and learning can foster the development of sustainable cities and communities.
The analysis also highlights the importance of local content and production in the context of community networks. It is asserted that local content production is integral to the development and sustainability of community networks. By promoting local content and production, community networks can enhance local ownership, creativity, and cultural preservation. This observation underscores the significance of involving local communities as active participants in the design and operation of community networks to achieve sustainable outcomes.
In addition, the analysis addresses the concept of digital sovereignty and argues that it should not be viewed as a black and white concept. Rather, it should be understood as a process that involves understanding risks and making informed decisions. The speakers highlight the need for communities using platforms such as Zoom or Facebook to understand the implications of their use and make autonomous decisions. This argument suggests that digital sovereignty is contingent upon communities’ independent and informed choices regarding the use of digital tools and platforms.
Furthermore, the analysis delves into the complex but necessary element of negotiating with violent actors in rural areas. In particular, the involvement of narcos in Mexico is acknowledged: they sometimes assist in developing infrastructure when it serves their own interests. The speakers convey that while negotiating with such actors is challenging, it is often an unavoidable aspect of working in rural areas, particularly when seeking to establish community connectivity in these regions.
Lastly, the analysis highlights the essential role of capacity building in achieving digital sovereignty. It is emphasized that autonomy in digital decision-making requires communities to have access to sufficient information and understanding of associated risks. With capacity building, communities can develop the skills and knowledge necessary to make informed decisions and navigate digital realms effectively. This observation underscores the importance of quality education and increasing access to digital infrastructure to empower communities in the pursuit of digital sovereignty.
In conclusion, the extended analysis sheds light on the interconnections between capacity building, sustainability, and community networks. It highlights the transformative impact of capacity-building processes on community networks, resulting in the development of sustainable strategies. Peer-to-peer learning and technical know-how within community networks are shown to contribute to environmental sustainability by reducing waste and promoting self-sufficiency. Additionally, the importance of travel and exposure to different perspectives in promoting sustainable cities and communities is highlighted. The significance of local content and production, autonomous decision-making in digital realms, negotiating with violent elements in rural areas, and the indispensable role of capacity building in achieving digital sovereignty are also explored. Overall, the analysis provides valuable insights into the critical elements required for the success and sustainability of community networks.
Senka Hadzic
During the session, the speakers focused on community networks and their role in digital sovereignty. The first speaker, an Internet measurement and data expert from the Internet Society, provided an overview of ISOC’s work on community networks and the significance of digital sovereignty.
ISOC’s work in community networks highlights the importance of empowering local communities to take control of their own digital infrastructure and services. By building and managing their own networks, communities can enhance their connectivity, bridging the digital divide and ensuring reliable and affordable internet access for all. This approach promotes digital inclusivity and helps overcome dependence on centralised telecommunications providers, fostering a sense of ownership and autonomy within the community.
Moreover, the first speaker emphasised the role of community networks in promoting digital sovereignty. Digital sovereignty refers to a nation’s ability to exercise control and maintain authority over its digital infrastructure, policies, and data. Community networks play a crucial role in achieving digital sovereignty by placing control of the network infrastructure in the hands of the community rather than relying on external companies or service providers. This shift gives communities the power to shape their own digital ecosystems, enabling them to protect their data, privacy, and interests.
The second speaker, Pedro Vilchez from guifi.net in Catalonia, presented this flagship community network, which has been operational for almost 20 years and comprises over 37,000 active nodes. guifi.net not only provides connectivity but also actively promotes circular economy principles and the reduction of e-waste.
guifi.net’s emphasis on the circular economy involves encouraging the community to reuse and recycle electronic devices and reduce electronic waste. By doing so, guifi.net aims to minimise the environmental impact associated with e-waste and create a more sustainable and environmentally friendly approach to technology.
Overall, the session highlighted the multiple benefits of community networks in achieving digital sovereignty. By empowering communities to build and manage their own networks, individuals gain access to reliable and affordable internet connectivity while also fostering a sense of ownership and control over their digital infrastructure. Additionally, the emphasis on circular economy principles by networks like guifi.net showcases the potential for community networks to drive sustainability in the digital realm.
Nils Brock
In a recent publication titled “Can Environmental Practices Foster Community Network Sustainability?”, the challenges, benefits, and future prospects of community networks were discussed. The publication highlighted the difficulties that community networks face in managing the various technologies involved and ensuring the successful transmission of signals for local networks. These challenges emphasize the need for effective management and technical expertise within community networks.
However, the publication also noted that community networks can operate in a complementary or alternative manner to standard internet providers. This suggests that community networks have the potential to offer unique advantages and fill gaps in connectivity that traditional providers may not address. It is important, however, to consider the potential for external providers with different business models to undermine the efforts of community networks.
Another noteworthy point raised in the publication is the potential use of bamboo as a sustainable resource for building infrastructure in community networks. An example was given of a successful project in India, where bamboo was used for construction purposes. This highlights the potential for bamboo to provide both an eco-friendly and cost-effective solution for building and expanding community networks.
Moreover, the publication stressed the significance of solar energy as a critical resource for network functioning. This is because without energy, there can be no networking, including digital networking. The publication showcased an example from Brazil, where a community set up online courses to promote knowledge and understanding of photovoltaic systems. This initiative aimed to improve energy efficiency and promote the use of solar energy within community networks.
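The energy reasoning behind such photovoltaic courses can be made concrete with a back-of-the-envelope sizing calculation for an off-grid network node. All figures below (a 10 W load, 5 peak sun hours, 70% system efficiency) are illustrative assumptions, not values from the publication:

```python
# Hedged back-of-the-envelope sketch: sizing a PV panel for a constantly
# powered community network node. Figures are assumed, not sourced.

def size_pv_system(load_watts, hours_per_day=24.0,
                   peak_sun_hours=5.0, system_efficiency=0.7):
    """Return (daily_wh, panel_watts) needed to run a constant load.

    The panel must recover the full daily demand during the peak sun
    hours, compensating for charge-controller and battery losses
    (modelled here as a single efficiency factor).
    """
    daily_wh = load_watts * hours_per_day  # daily energy demand in Wh
    panel_watts = daily_wh / (peak_sun_hours * system_efficiency)
    return daily_wh, panel_watts

# e.g. a 10 W router-and-radio node running around the clock
daily_wh, panel_watts = size_pv_system(10.0)
```

Under these assumptions, a modest 10 W node already needs roughly a 70 W panel, which is why community training on dimensioning and maintaining such systems matters as much as the hardware itself.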
Furthermore, the publication emphasized the importance of providing local servers as a means to promote ownership of data and infrastructure in community networks. Local servers not only make services more sustainable to organize but also reduce environmental impact. It was also noted that capacity building efforts are necessary to support the implementation and management of local servers within community networks.
Lastly, the publication highlighted that the future of community networks extends beyond simply providing connectivity and access. The importance of local services, such as agriculture, education, and content creation, was stressed. These services can cater to the specific needs of different communities, both rural and urban, and contribute to their overall development and well-being.
In conclusion, the publication provided insights into the challenges faced by community networks but also highlighted their potential benefits and future prospects. By addressing the challenges of managing technologies, exploring alternative resources like bamboo, harnessing solar energy, promoting ownership of data and infrastructure, and focusing on local services, community networks can make significant contributions to sustainable and inclusive development.
Raquel Gatto
This comprehensive summary explores the state and challenges of community networks in Brazil, emphasising the importance of an evidence-based approach to understanding these networks. The analysis highlights that long-term sustainability is a significant concern, with half of the community networks failing to survive beyond the first year of operation. Additionally, the regulatory environment poses challenges for community networks.
To address these issues, a policy brief was created by APC (the Association for Progressive Communications) and Anatel (Brazil’s National Telecommunications Agency). This brief not only identifies gaps in telecommunications regulation for community networks but also resulted in the development of a technical toolkit for establishing these networks. Notably, the policy brief includes several recommendations for Anatel and the Ministry of Communication to tackle the challenges faced by community networks.
Recognising the importance of collaboration, a Community Networks Working Group has been formed in conjunction with Anatel. This working group comprises community network leaders and organisations dedicated to fostering the development of these networks. Its aim is to provide a common goal and vision, as well as maintain continuous interaction with government actors.
In terms of Brazil’s global agenda, as the host of the G20 in 2024, the country demonstrates a strong focus on the digital pillar, specifically emphasising the importance of achieving universal and meaningful connectivity. This indicates Brazil’s commitment to promoting digital inclusion and ensuring that all individuals have access to the benefits of the digital world.
The analysis also underscores that meaningful access extends beyond mere internet connectivity. It stresses the significance of considering the entire connected environment and the skills necessary to navigate it effectively. This insight highlights the importance of addressing the digital divide comprehensively, focusing not only on infrastructure but also on empowering individuals with the relevant digital skills.
Furthermore, the analysis emphasises the need to recognise and acknowledge the voices and concerns of local communities, both in rural and urban areas. It dispels the notion that only remote and rural areas face connectivity challenges and underscores the importance of listening to and considering the unique needs of different communities.
The analysis also identifies concerns regarding the trade-offs in collaborative arrangements and the awareness of what is relinquished in the process. This insight serves as a reminder that careful consideration should be given to the potential consequences and compromises involved in collaborative initiatives.
Regarding community networks, caution is advised in the consolidation of services and connectivity within these networks. The analysis suggests that community networks should not be conflated with traditional internet service providers. This cautionary note aims to ensure clarity and prevent misunderstandings regarding the role and scope of community networks.
In conclusion, the analysis underscores the need for an evidence-based approach to understand and address the challenges faced by community networks in Brazil. The efforts made by APC, Anatel, and the creation of the Community Networks Working Group signify positive steps towards overcoming these challenges. Brazil’s focus on universal and meaningful connectivity in its G20 agenda further underscores the country’s commitment to digital inclusion. However, it is crucial to consider the entire connected environment and the skills necessary for meaningful access. Local voices and concerns should be acknowledged, and careful consideration must be given to the trade-offs involved in collaborative arrangements. Moreover, community networks must be clearly distinguished from traditional internet service providers to avoid confusion.
Atsuko Okuda
Connecting the unconnected remains a pressing global issue, with approximately 2.6 billion people still lacking access to the internet. However, there have been notable advancements in internet connectivity. For instance, the Asia-Pacific region has made significant progress, with 4G mobile networks now covering more than 96% of the population. Furthermore, the introduction of approximately 265 commercial 5G networks worldwide signifies the ongoing efforts to improve connectivity and bridge the digital divide.
Addressing this challenge requires a holistic approach involving multiple stakeholders. By adopting a whole-of-society approach, meaningful partnerships can be forged, and silos can be overcome. This approach has shown promise, as evidenced by the successful implementation of the International Telecommunication Union’s (ITU) Smart Villages initiative. The initiative serves as a prime example of how a whole-of-government and whole-of-society approach can contribute to enhancing connectivity.
Moreover, community networks, such as telecenters, play a crucial role in achieving both digital and environmental sustainability. A recent joint study by the ITU and the Internet Society (ISOC) highlighted the significance of telecenters and community networks in promoting sustainability. The study identified six dimensions of sustainability, including environmental sustainability, emphasizing the critical role that community networks play in expanding access to information and communication technology and contributing to broader sustainable development goals.
In conclusion, while connecting the unconnected remains a global challenge, progress is being made in improving internet connectivity. The widespread deployment of 4G and the launch of 5G networks demonstrate significant advancements in this regard. Additionally, a whole-of-society approach has proven effective, as seen in the successful implementation of ITU’s Smart Villages initiative. Furthermore, community networks, such as telecenters, are instrumental in achieving both digital and environmental sustainability. These insights highlight the importance of continued collaboration and innovative approaches to address the global challenge of connecting the unconnected.
Amreesh Phokeer
The Internet Society is actively involved in expanding community networks worldwide, with a particular focus on Africa, Asia, and the Himalayan region of Nepal. Its initiatives aim to support and enhance over 100 complementary connectivity solutions, while also training over 10,000 individuals to maintain their own internet infrastructure. This commitment reflects the Internet Society’s dedication to bridging the digital divide and promoting equal access to the internet for all.
A crucial aspect considered by the Internet Society is digital sovereignty. They recognise the importance of ensuring that countries have control over their own digital infrastructure and are not overly dependent on external entities. By supporting community networks, the Internet Society helps empower communities to establish their own internet connectivity, creating a sense of ownership and independence.
Furthermore, the Internet Society also places emphasis on environmental sustainability. In several African countries, issues concerning electricity access and affordability persist. To address these challenges, the Internet Society actively works towards reducing the costs of accessing equipment required for off-grid community networks. This approach promotes the use of renewable energy sources in these networks, aligning with the Sustainable Development Goals of affordable and clean energy and climate action.
In addition to addressing digital sovereignty and environmental sustainability, the Internet Society also advocates for the importance of maintaining local content and connectivity. They promote connectivity to local infrastructure, such as Internet exchange points, which facilitates the exchange of data within local communities. Additionally, community networks have started hosting their own services, such as local caches or video conferencing, particularly during the ongoing pandemic. These efforts not only enhance connectivity, but also contribute to responsible consumption and production, aligning with the Sustainable Development Goals of sustainable cities and communities.
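The benefit of local caches mentioned above can be sketched in a few lines: when popular content is stored inside the community network, repeated requests stop generating upstream traffic. This is an assumed toy design, not ISOC’s or any community network’s actual implementation:

```python
# Minimal sketch (assumed design): a local content cache on a community
# network. Repeated requests for the same URL hit the local copy instead
# of triggering another upstream fetch.

class LocalCache:
    def __init__(self):
        self.store = {}            # url -> cached content
        self.upstream_fetches = 0  # how often we had to leave the network

    def get(self, url, fetch_upstream):
        if url not in self.store:
            self.upstream_fetches += 1
            self.store[url] = fetch_upstream(url)
        return self.store[url]

cache = LocalCache()
fetch = lambda url: f"content of {url}"  # stand-in for an external fetch

# Three users request the same lesson video; only one upstream fetch occurs.
for _ in range(3):
    body = cache.get("https://example.org/lesson.mp4", fetch)
```

Keeping the hit local both reduces the load on (often expensive) upstream links and keeps data circulating within the community, which is the connectivity-plus-sovereignty argument made in the paragraph above.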
Overall, the Internet Society’s involvement in expanding community networks demonstrates their dedication to promoting access to the internet and bridging the digital divide. By empowering communities, supporting digital sovereignty, striving for environmental sustainability, and maintaining local content and connectivity, the Internet Society plays a significant role in creating a more inclusive and connected digital world.
Pedro Vilchez
In the first argument, the speaker proposes a solution to reduce e-waste by making users responsible for the way Wi-Fi routers are used and allowing these devices to enter the circular economy. The argument is made in light of the fact that Wi-Fi routers are typically designed for a limited purpose and timeframe, leading to a significant amount of e-waste. The suggestion is to allow Wi-Fi routers to be modified and reused, similar to computers, which would prolong their lifespan and reduce the overall waste generated.
Moving on to the second argument, the speaker highlights the importance of community networks in Europe for maintaining telecommunications infrastructure and meeting societal needs. It is noted that both the public and private sectors are facing challenges in maintaining the telecommunications infrastructure efficiently. The speaker emphasizes that community networks can serve as a common resource model, enabling participation from both sectors. This approach can alleviate the burden on individual entities while ensuring the smooth operation of the infrastructure.
Furthermore, the speaker highlights that community networks go beyond just delivering internet access; they also foster mutual aid and knowledge sharing within communities. This aspect further strengthens the case for community networks as they not only provide essential services but also promote collaboration and community development.
To support the effectiveness of community networks, the example of guifi.net is presented. It is mentioned that guifi.net’s ISP spin-offs earned 30 million euros in 2022. This success serves as evidence of the efficacy of community networks and their potential to thrive in the telecommunications industry.
In conclusion, the first argument focuses on reducing e-waste by making users responsible for the proper use of Wi-Fi routers and integrating them into the circular economy. The second argument highlights the significance of community networks in Europe for maintaining telecommunications infrastructure and meeting societal needs. The evidence presented demonstrates the positive outcomes and potential benefits of embracing community networks. Overall, both arguments provide valuable insights into sustainable practices and innovative approaches in the technology and telecommunications sectors.
Luca Belli
Community networks play a crucial role in building digital sovereignty and environmental sustainability. They offer a model of digital sovereignty that is defined not by states but by the communities themselves. They enable self-determination and self-governance, allowing communities to understand and regulate technology effectively. Notably, community networks have been doing this successfully for the past 20 years.
Moreover, community networks manage their connectivity infrastructure as a commons, which supports environmental sustainability. They understand and mitigate the potential negative environmental impacts of technology, ensuring that their actions align with environmental goals. This demonstrates their commitment to building sustainable communities.
Additionally, a multi-stakeholder model is suggested as an effective approach for building and implementing connectivity networks. This approach involves different stakeholders coming together to not only discuss but also actively create and execute plans. By involving various stakeholders, including community members, organisations, and government bodies, this model ensures a diverse range of perspectives and expertise. This can lead to more comprehensive and inclusive connectivity networks.
Community networks also create an entire ecosystem of content and services that are developed by and for the community. This empowers local communities and fosters a sense of ownership and pride. It allows communities to determine their own digital destiny and use technology for their specific needs, contributing to digital sovereignty.
While community networks are not a solution to all the world’s problems, they do bring significant benefits to underserved areas. They can provide access to culture, communication, and education, bridging the digital divide and empowering those who were previously left behind.
It is worth noting that successful community networks can perform like large telecommunication companies but with lower costs and community governance. Some community networks have been successful in creating self-sufficient intranets, allowing information and services to be shared within the community. This demonstrates the potential of community networks to rival traditional internet service providers and bring connectivity to underserved areas in a sustainable and cost-effective manner.
In conclusion, community networks are a powerful tool in building digital sovereignty and environmental sustainability. They empower communities, enabling self-determination and fostering a sense of ownership and responsibility. By adopting a multi-stakeholder model, community networks can create comprehensive and inclusive connectivity networks. They bring culture, communication, and education to underserved areas, bridging the digital divide. Although community networks have their limitations, their positive impact on communities and their ability to change lives is undeniable.
Session transcript
Luca Belli:
5, 4, 3, 2, 1. All right. So welcome to everyone to this annual meeting of the Dynamic Coalition on Community Connectivity, DC3, that has been working on community connectivity issues for the past seven years. And so we are now at the seventh annual report. You can find here hard copies, or also on the web page of the DC3. They are already available in PDF for you to download. And the theme of this year that we have chosen, and some of you have helped us develop in this report, is community networks building digital sovereignty and environmental sustainability. And the idea behind this is that community networks offer us a very good example of an additional conception of digital sovereignty, and how also environmental sustainability can be achieved through a community-driven effort. So not necessarily only through policies and governance system that are defined by states, but also by through policies and governance models that are driven by the communities themselves. And that is an important conception in the debate of digital sovereignty. We have been speaking a lot about this over the week. The fact that digital sovereignty is not only about authoritarian regimes, controlling is not only about protectionism. It’s also very much also about understanding the technology to be able to develop it and regulate it in an effective way. And this is very much what community networks have been doing over the past 20 years in terms of self-determination, in terms of understanding how the technology works, developing it, and creating their own governance models, self-governance model to manage the connectivity infrastructure as a commons. And this is actually, it’s very good also to unleash forces that support environmental sustainability as when you understand the technology, understand also not only the good benefits of the technology, but also the potential negative impact in terms of negative externalities in environmental externalities. 
And you also try to understand how to develop it in a way that is more green, if you want. And also you can use, you can leverage connectivity at the local level to support initiatives that promote sustainability. And this, in a nutshell, and what we are going to speak today with a lot of very distinguished panelists. Let me first also thank my colleague, Senka Hadzic, who has been developing this work over the past years together, including the edition of the reports together with me. She has been the force behind the organization of the panel. And she will only speak lightly today because she was involved in intense karaoke yesterday evening. So let me also introduce our distinguished panelists, starting from Atsuko Okuda, that is joining us remotely. She is the director of the ITU Asia-Pacific Bureau. Then we have Raquel Gatto from CGI.br, the Brazilian Internet Steering Committee. We have Amreesh Phokeer from ISOC, that also is joining us online, together with Pedro Vilchez from guifi.net, also joining us online. And then back here in person, we have Carlos Baca, who is from CITSAC, and Nils Brock from Rhizomatica. Without further ado, I would like to ask Atsuko Okuda to provide some introductory remarks to understand also the kind of vision and interest that an organization like ITU may have in this kind of initiative that we are discussing and analyzing here. Atsuko, can you hear us?
Atsuko Okuda:
Yes. Excellent. And I hope that you can hear me too. Very well. Thank you. Great. Thank you. Good morning. I would like to start by thanking the organizers for inviting ITU to today's Dynamic Coalition session on Community Networks, Digital Sovereignty, and Sustainability. This topic is very close to my heart and is also a core area of ITU's work in Asia and the Pacific, which we are undertaking in partnership with communities, UN agencies, governments, civil society, academia, and financial institutions. Let me first start with the connectivity part, where we have good and bad news. According to the latest ITU estimate, released in September this year, 2.6 billion people still remain unconnected globally. It is good news because this is a decrease of 100 million from the previous estimate in 2022. It is bad news because the pace of connecting the unconnected may be decelerating. Under the COVID pandemic, almost 800 million people were estimated to have joined cyberspace in the short time span between 2019 and 2021. In Asia and the Pacific, more than 96% of the population is covered by 4G mobile networks, according to ITU statistics. Furthermore, the GSMA reported that around 265 commercial 5G networks have been launched globally, 62 of them in Asia and the Pacific. But universal and meaningful connectivity, where everyone can enjoy a safe, satisfying, enriching, productive, and affordable online experience, remains a challenge in the region. Recognizing the important role that digitization plays in meeting the SDGs, ITU and the Office of the UN Secretary-General's Envoy on Technology have established a set of aspirational targets for 2030 across internet connectivity, gender parity, digital skills, broadband speed, and affordability, the latter measured as less than 2% of GNI per capita by 2025.
These remain a high priority for governments across the globe, and various policy measures are being put in place to achieve the targets. In order for us to make significant and accelerated progress towards the targets and the SDGs by 2030, we need a qualitative transformation in the way we approach the digital divide and connect the unconnected. We have learned that a siloed approach may no longer work, and that strengthened partnership is a must to create synergies and impact. More importantly, we are gaining ground in building consensus on the need for a whole-of-government and whole-of-society approach to overcome the silos and build stronger partnerships. The ITU Smart Villages and Smart Islands initiative is designed around this whole-of-government and whole-of-society approach. It is being rolled out in 15 countries in Asia and the Pacific, and is aimed at delivering connectivity, digital skills, and priority digital services to rural and remote communities. It is being delivered in close collaboration with various line ministries, UN agencies, the private sector, civil society, and academia. And it has generated tremendous support, including that of G20 members during their meeting under Indonesia's presidency in 2022. On the sustainability part, I am very happy to see our ISOC colleague here in the session, as we recently conducted a joint study. The report is entitled From Telecentres and Community Networks to Sustainable Smart Villages and Smart Islands, and is under finalization. Based on good practices and lessons learned from telecentres and community networks, and looking at 10 case studies, the study identified six dimensions of sustainability: financial, sociocultural, organizational, operational, and policy, as well as environmental sustainability, and provided suggestions for smart villages and smart islands.
I'm also very happy to see such a distinguished list of speakers today, who will be sharing their thoughts on this important aspect. And through our discussions and partnerships, I hope that we can accelerate our efforts to connect the unconnected and ensure that no one is left behind and offline. Thank you, back to you.

Luca Belli:

Thank you very much for the very good overview of all the initiatives, and also of ITU's ambition to lead this effort as a hub where various stakeholders can interact and promote more sustainable connectivity. Now let's try to narrow down from the global to the local and see what is happening in Brazil. Raquel has been leading several efforts on this over the past couple of years. So please, Raquel, the floor is yours.
Raquel Gatto:
Thank you very much, Luca. And I'm very happy to join you in this meeting; I see some familiar faces and new faces that I'm glad to interact with. I have a lot to cover, as usual, so I'm trying to keep it short and bring you at least three highlights that I think are important, covering the past two or three years, since 2020, when the community networks movement landed more concretely in Brazil. First of all, I want to start by talking about CGI's study on community networks. This study took more of a statistical approach. There are some qualitative interviews, but the idea was really to bring an evidence-based approach to what community networks are, how they are organized, and what the challenges are, the state of the art of community networks in Brazil, and to translate this into numbers and indicators that could guide policymaking. I'm not going through all of this study; I can point you to it, and it has certainly circulated already in the dynamic coalition. But I think it's important to start with this as an angle, because the study showed some of the gaps that we have, which are not a surprise for some of you here: most community networks don't survive their first year, and those that remain don't know if they're going to survive another year. Those are the mapping results that we have in terms of the sustainability of community networks themselves, and of where we need to direct our efforts. It's not only about resources in terms of money. Of course, funding is one of the gaps, but there is also resourcing in terms of technical requirements and registration requirements, and the fact that the regulatory environment is not helpful for community networks to survive and blossom. So that's one of the key takeaways I want to bring from this study.
And then, of course, a major piece, and really what moved the Brazilian government to the community-networks-friendly side: APC conducted a study together with Anatel, with UK FCDO funding, and this massive work produced a policy brief. It brought a historical overview of telecommunications in Brazil and how it evolved, explained what community networks are, what the challenges for community networks are, and where the gaps in the regulatory space lie. But it really landed on recommendations for Anatel, for the communications ministry, for all those decision makers: what needs to be done, not a personal thing, but what needs to be done to help community networks to be created, and then to grow and evolve. So the work produced was the policy brief, but also a technical toolkit showing how community networks can be created, based, of course, on many of the materials that members of the dynamic coalition have already circulated. So the content would not be new; what is new is that it landed on the telecom regulator's website. Anatel is promoting it as part of its work, and this is an important shift in the telecom regulator's approach to community networks. Among the recommendations, and I'm not going through all of them, there are other valuable ones that we can discuss at some point, in terms of universal funds and so on. But I want to focus on one, which is the creation of a local committee to interact more in depth with Anatel, the internet service providers group, and the community network leaders. This recommendation has been taken up by Anatel, and the group was created early this year. It is called the Community Networks Working Group within Anatel. It had a mandate until August.
It was then extended to the end of this month, the end of October. And I just got confirmation this morning that there is going to be an event hosted by Anatel on November 22nd. So for the Brazilians in the room, please put it in your calendars. As for the purpose of this group, let me first go one step back. When the APC study was being done, a local group of experts was created: not only the community network leaders themselves, but also the intermediary organizations that were fostering community network development. This local group provided advice on the materials that were produced and submitted. It has since grown from the 10 or 12 people and organizations initially involved to 40 or 50 now, and the number is really growing. We call it the local community networks group in Brazil, and it holds weekly meetings. This group has three seats in Anatel's working group, the more official working group. The reason I'm saying all of this, and why it is valuable for everyone listening, is the importance of keeping this connection with the local actors: keeping it lightweight at some points, but also keeping it ongoing, and having a major common goal, with everybody on board with the same outcome and vision. This was really important to strengthen us, to show that we are organized, and to interact with the government actors. And this is part of the change that is ongoing right now in Brazil. Of course, there are still a lot of challenges. Even within the Anatel working group, in the interaction with the other actors, community networks can still be misunderstood. This is a risk: they are not local ISPs for remote areas. The understanding that community networks are community-based, and that it is not about the service itself, is, I know, one minute, still a challenge.
But it's being broken down into these smaller opportunities to showcase. So the event, and the continuous networking with local decision makers, are important. And lastly, because I only have one minute, or 30 seconds according to Luca here, I just want to say that in Brazil we also have an opportunity in 2024 with the G20, and I think ITU was mentioning that. Brazil is the host of the G20, and it has already announced its agenda, with a pretty heavy digital pillar, including universal and meaningful connectivity. So that's going to be, again, an opportunity to take on and strengthen all these opportunities, and to tackle not only the policy changes that need to be made, but also the funding and the resources that need to be put in place for community networks. So thank you very much.
Luca Belli:
Thank you, Raquel.
Senka Hadzic:
We'll make sure to circulate these materials on the mailing list, both the CGI study and the APC policy brief. Now I'm going to introduce our next speaker, who is joining us online from Mauritius. He is an Internet measurement and data expert at the Internet Society. He will tell us about ISOC's work on community networks and also about the role of community networks in digital sovereignty.
Amreesh Phokeer:
Thank you, Senka. Good morning, everyone. It's a pleasure for me to be on your panel today. As Senka mentioned, I work at the Internet Society. I am not so much involved in community networks directly, but I can talk about some related aspects, such as digital sovereignty, and how they impact community networks positively. First of all, I would like to remind the audience of the vision of the Internet Society, which is that the Internet is for everyone, and we are working towards making this vision a reality. One of the projects that we are really involved in is expanding community networks around the world. We hope that by 2025 we will support more than 100 complementary connectivity solutions and also be able to train more than 10,000 people to maintain their own Internet infrastructure. The Internet Society itself has supported a number of community networks around the world, from Africa to Asia. One recent intervention was the deployment of a community network in the Himalayas in Nepal. The issue of digital sovereignty and environmental sustainability is key. First of all, as you know, there are many places, especially in some African countries, where access to electricity is still an issue. The problem is not only affordability but also the stability of the grid, as you can witness from how bad the electricity supply is in South Africa at the moment. So having access to renewable energy sources is important, as is bringing down the cost of access to electricity and the cost of the equipment that would allow community networks to operate off the grid. Another point I wanted to touch upon is access to content. As we know, even in a community network, your customers, your constituents, still have the same needs as any other Internet user.
So they would still want to watch the latest news or the latest YouTube video, and we work as hard as we can to connect community networks to the mainstream Internet. At the Internet Society, we also try to promote connectivity to local infrastructure, such as Internet exchange points. Usually a community network will rely on an Internet service provider, and as much as possible we tend to promote Internet service providers that are themselves connected to the local fabric, the local ecosystem. The more Internet service providers are connected to an Internet exchange point, the more local traffic stays local. And as this local fabric matures, there is also a higher chance of content providers hosting themselves locally, because the customer base is also increasing. This is what I would call a collateral benefit for community networks: even if they are in remote places, they are still connected to the same local fabric, and eventually they also benefit from having local connectivity. Local connectivity also adds to the equation of environmental sustainability because, of course, if you are not using international bandwidth to access faraway content, you are using less energy to access content that is local. But I would also stress a very singular characteristic of community networks. We talk about self-determination and things like that: the opportunity for community networks to even host their own services. We saw during the pandemic, when people could not really have freedom of movement, how important it was for them to have affordable, even free, and unlimited access to technologies. And we have seen a lot of networks installing local caches or local services for video conferencing. These are services that we should promote as much as possible on community networks.
And obviously this would increase local use and, therefore, reduce dependency on external or paid services, allowing people to use services that are already local and close. They would also benefit from low latency, higher quality, and so on. So I would really like to stress that sustainability is really broad. First of all, sustainability can also mean giving people the power to create their own type of network, a network that really resembles the community itself and what it thinks is important. Having the ability to create and upload their own content at very low cost, and hopefully at high bandwidth and high quality, is really important. This increases, to some extent, the sustainability of the community in the sense of strengthening the community itself. And, of course, bringing content closer to the user creates, as I mentioned, environmental sustainability because it uses less energy elsewhere. So, yeah, these are the points that I wanted to bring up today.
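The traffic-locality argument above can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the energy-per-gigabyte figures and the function name are assumptions for the sake of the example, not ISOC data.

```python
# Illustrative sketch: estimate how much traffic a community network keeps
# local once caches and IXP-connected upstreams are in place, and the rough
# energy implication. All per-GB energy figures are assumed values.

def locality_savings(total_gb, local_fraction,
                     wh_per_gb_international=50.0, wh_per_gb_local=10.0):
    """Return (local_gb, international_gb, energy_saved_wh).

    local_fraction: share of traffic served from local caches/services.
    wh_per_gb_*: assumed energy cost per GB for each path.
    """
    local_gb = total_gb * local_fraction
    intl_gb = total_gb - local_gb
    # Energy saved compared with fetching everything internationally.
    saved_wh = local_gb * (wh_per_gb_international - wh_per_gb_local)
    return local_gb, intl_gb, saved_wh

local, intl, saved = locality_savings(total_gb=1000, local_fraction=0.4)
print(local, intl, saved)  # 400.0 600.0 16000.0
```

With these assumed figures, serving 40% of a terabyte of monthly traffic locally saves on the order of 16 kWh of transport energy, which is the "collateral benefit" described above in miniature.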
Senka Hadzic:
Thank you. Thank you, Amreesh. That was a really great overview. Our next speaker is joining us from Spain. Pedro Vilchez has been involved in Guifi.net, which is, you could say, a flagship community network in Catalonia. It has been operating for almost 20 years and has over 37,000 active nodes. Apart from providing connectivity, Guifi.net is also promoting the circular economy and the reduction of e-waste, and Pedro is going to tell us more about it in his presentation. Welcome, Pedro. Pedro, can you hear us? Hi. Yeah. We can hear you now. Yeah. Okay.

Pedro Vilchez:

So I want to raise two topics for this session. One is a proposal on reducing e-waste, and the other is a remark on why community networks are relevant in Europe. So, well, here is my relevant volunteer activity: more than 10 years of experience in Guifi.net through EXO, a non-profit operator from Barcelona with 100 members. I also hold a position in the governing council of a telecom cooperative called Som Connexió, which is also part of Guifi.net and gives service to 9,000 members and 20,000 contracts. Professionally, I work in a research group, this one, and I am involved in tech projects with strong involvement of small-scale communities. The proposal on reducing e-waste is very simple. The root problem is that manufacturers are being made responsible for how Wi-Fi routers are used; I put notes at the end, this refers to the EU Radio Equipment Directive, as analyzed by the Free Software Foundation Europe. The e-waste problem specifically is that Wi-Fi routers are generally designed for a very limited purpose and a short timeframe. They cannot be changed or modified, and that eventually produces e-waste. The proposed solution is to do the same as with computers: make their users responsible for what they do with them, and allow these devices to enter the circular economy and be part of, for example, the reuse ecosystem we have here near Barcelona. Why are community networks relevant in Europe? First, let's present the problem: the problem of maintaining telecommunications infrastructure. It started with the public sector, and at some point they stopped maintaining it, maybe because it was seen as an expense and not a business. With the liberalization of the 1990s, the private sector captured it, but it is struggling to maintain it. In recent discussions in Europe, the big telcos say that too many operators are unsustainable, and that the solution is the United States model, hence fewer actors in the market.
But, for example, the New York Mesh community network complains that in New York City far too many people don't have internet access. The solution: invest in community networks. Community networks really solve society's needs. Being a common-pool resource model means that the public and private sectors can still participate, as the other colleagues were saying: financial institutions, academia, government; it is a non-excludable model. Even if the model fails, it can behave as an accelerator, delivering a more competitive private sector. And here we have proven experience in Guifi.net: an ISP did a spin-off called Sunbeta, and from nothing they reached 30 million euros in annual turnover in 2022. But community networks are not only about delivering internet access. They can also help with mutual aid, international cooperation, and sharing knowledge. From this perspective, I would recommend apc.org and battlemesh.org. We also have the xrcb.cat project, a community radio network, which also means bridging with the arts and with the neighbors and their concerns. Community radio can be understood as a podcast platform. We also have a project called Plataformas, which can be understood as a pilot exploring server-side resources used by cooperatives from the solidarity economy. Or other projects like shoik.coop, an open network for the internet of things on top of Guifi.net. Given the comments in the previous presentations: yes, we are also serving real-time traffic, and that reduces international bandwidth, with services such as Jitsi and BigBlueButton instead of Zoom, which also makes us more sober about what we use. So here are the sources I used, and thank you.

Luca Belli:

Fantastic. Thank you very much, Pedro, also for being so sharp in your time management. I think a couple of points emerge that we can connect between what Pedro and Amreesh were saying, and which resonate a lot with what we have been doing over the past years: on the one hand, community networks as multi-stakeholder partnerships. We speak a lot about the multi-stakeholder model during the IGF, but the multi-stakeholder model is not only about having different stakeholders discussing things; it is also about having different stakeholders building things, implementing things, defining a governance model that allows them to operate even connectivity networks, and then implementing them and creating a whole digital ecosystem out of it. And the other point, again something we have been stressing a lot over the past year, is to me the core of what some years ago I was calling network self-determination, which is really the basis of the community networks' conception of digital sovereignty: the fact that you create not only connectivity, you create an entire ecosystem of content and services that are created by the community for the community. The community understands the technology, develops the technology, and regulates the technology. It is really the essence of digital sovereignty, again not in terms of authoritarian control, but in terms of empowerment and self-determination of the local community. We have been speaking, discussing, and writing a lot about this with Carlos Baca for several years. So Carlos, from CITSAC, you have been doing amazing work, not only starting community networks, but also building them with your friends. So please, the floor is yours.

Carlos Baca:

Hi, everyone. Thank you for having me, and thank you for being here on the last day of the IGF. I know this is a big effort, so I'm very happy to have the possibility to share with you. I want to address one question: how can we relate, or is there a relation between, capacity building and environmental sustainability? I will share with you some of the lessons that we learned in the process of developing the National Schools of Community Networks. The National Schools of Community Networks are processes that have been in place for three years; we started this project at the beginning of the pandemic. It is part of the LocNet initiative, led by Rhizomatica and APC, with the support of the UK FCDO's Digital Access Programme. These national schools have been taking place in five countries: South Africa, Indonesia, Nigeria, Brazil, and Kenya. In each of these countries we work with big allies, big organizations that implement the process of these national schools. And each of them is very different; there is no single curriculum. They share only one thing, which is the methodological way in which we develop these national schools. We start from a participatory action research methodology, so we begin with an analysis of the context. In each country we form an advisory committee made up of specialists and also people from the communities and organizations, et cetera. They develop the design and then implement the school. In each country we also have seven community-based organizations that took part in the training. They were involved from the beginning in the design, but also in the implementation of the school and the workshops; they take the workshops.
And then they have the opportunity to develop small projects to benefit or strengthen the process in the communities. This is the last part, and it is in this part that we realized a lot about the knowledge of how to build a community network, the needs that communities face, and how to address them. One of the problems, and I'm sure you all know it, is e-waste. This example is from Mexico, not from the countries of the national schools: it is the rooftop of the municipal presidency in one of the indigenous communities in Mexico, and only one of these antennas, one piece of this infrastructure, works. So you know this is a very, very big problem, and it is related not only to the public policy that is implemented, but also to the lack of capacity in the communities to maintain the equipment. One of the first lessons from this process is that if, through capacity-building processes, we strengthen or develop a critical vision of technologies and of the choice of technologies, we get different results, results related to care for the environment and the territory. I'm sure Nils will talk more about it, but for example in Indonesia they started to develop bamboo towers, which are more sustainable and also beautiful: they build their own houses with this architecture, and they brought the same artisanal work to the towers. Also in the school, some other organizations used artificial intelligence in their projects. Two of these projects are beautiful. One helps fishermen to know where the fish banks are, so they travel less to reach the fish, and also to know which banks have more fish and which have less; with this they made a sustainable fishing strategy. The other involves the shrimp farms that are led by the women in the communities.
They now have tools on their cell phones that let them know the temperature and everything else they need to know to keep these farms working. So they have time to start other projects, and they are joining together and starting different projects that are not actually tied to having to take care of the farms all day. It is very interesting and important. In other countries, like Brazil and Nigeria, they use a lot of solar energy for the network. As I said, each of the schools was very different. The second point is that peer-to-peer learning and technical know-how also help here, because if people know how to maintain the equipment and how to look for common failures, it means less travel for the technical people from the city who would otherwise need to come and repair things; there is also better handling of the equipment and less waste. This was very evident where the National School of Community Networks was led by a community network. Not in all cases did the organization have a community network, though by now almost all of them do. In Kenya and South Africa they had this experience, so the technical knowledge was transmitted very well to everyone. The third point is the way in which, in these training programs, people weave their learning communities: how to interact with each other, how to be a meeting point, so they start doing different projects. One of the things we learned is the importance of people traveling to other places. For example, in South Africa, almost all the people who participated in the national school had never left their communities. When they started seeing other territories, seeing how other people live and how things are done in other ways, they started to rethink their own territory and the ways they need to take care of it.
And of course, as someone said in this session, local content and production are very important in this process too; they are part of this territory and of caring for things. Let me finalize by inviting you to visit the CN Learning Repository. You will find a lot of materials there, and soon, I think even today, you will also find this.
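The shrimp-farm temperature monitoring Carlos describes could, in spirit, look like the following sketch: a sensor reading is checked against a safe range and an alert message is produced for the farmers' phones. The safe range, function name, and alert format are all hypothetical illustrations, not the actual deployed tool.

```python
# Hypothetical sketch of a threshold alert behind the kind of shrimp-farm
# monitoring described above. SAFE_TEMP_RANGE_C is an assumed value for
# illustration, not aquaculture guidance.

SAFE_TEMP_RANGE_C = (26.0, 32.0)  # assumed safe water temperature range

def check_temperature(reading_c, safe_range=SAFE_TEMP_RANGE_C):
    """Return an alert string if the reading is out of range, else None."""
    low, high = safe_range
    if reading_c < low:
        return f"ALERT: water too cold ({reading_c:.1f} C, min {low:.1f} C)"
    if reading_c > high:
        return f"ALERT: water too warm ({reading_c:.1f} C, max {high:.1f} C)"
    return None

print(check_temperature(29.0))   # None, reading is inside the safe range
print(check_temperature(34.5))   # ALERT: water too warm (34.5 C, max 32.0 C)
```

The point of such a tool is exactly what Carlos notes: a simple check running locally frees people from watching the farms all day.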
Luca Belli:
And so thank you very much. Thank you very much, Carlos, also for pointing to the CN Learning Repository, which is an incredible source of material for anyone willing to learn more, or even to build community networks. Speaking about building community networks, no one better than Nils can provide us a little more insight into the challenges and opportunities of developing them.

Nils Brock:

Thank you, Luca. And hi, everyone. My name is Nils Brock, from Rhizomatica. I am stepping in today for our colleague Sarbani Belur from ISEA and APC, who unfortunately cannot participate due to connectivity issues. So we see there is still a need to build a better and more resilient internet. The paper that we proposed for this publication was titled Can Environmental Practices Foster Community Network Sustainability? We would say yes, and I would like to tell you a bit about our approach, which is a methodology complementary to the work that Carlos Baca has just presented. Community networks, as we also learned before, face challenges in managing all the technologies involved in transmitting a signal or putting up a local network, besides the regulatory challenges that we have heard about so far. So there is a need for complementary internet solutions, as we also heard earlier. How could a community network do this, in an efficient and collaborative way? The LocNet project, the local networks initiative of Rhizomatica and APC that was already mentioned, has been working for several years on innovation and technology, through peer contacts but also through sub-granting. One side effect of sub-granting is that each grantee often works very much on his or her own, and sometimes there is a lack of collaboration, even though the networks share challenges. So we have tried to engage with the community network ecosystem in a different way, putting up a space that we call communities of practice. It is an approach where we brought in not only community networks but also other practitioners, engineers, experts on certain topics, and educators who were able to explain and build capacity on some issues.
And so we worked along a concept of, so to say, emerging technologies: what does a community network need to really work, but also to be sustainable, in economic terms but also for the planet? I will just pick out two examples, and I am happy to dive a bit deeper in later discussions. There was the question of bamboo. If we ask what bamboo has to do with a tech network, where does it come in: there is always a need for infrastructure, to build up mast structures, which need concrete and steel, resources that are not locally available. Bamboo is a plant that is available in many countries as a resource that can be grown or that is already there, and the question is how to treat it and how to select it. So we were looking to build a community of practitioners, including people from India who had already done some work on this. They provided the knowledge of how to plant bamboo, if someone really wants to put up a bamboo garden and in a couple of years have their own grown resources. And there were other examples, like the community of practice around towers: how can we imagine towers that are easy to replicate? One nice example was the tower that we saw in the image earlier from Indonesia; a community network from Uganda, BOSCO, said, we want to try to replicate this, and they were tutored online and put up the tower. So it is possible, and these are traveling solutions that were created, and we are still exploring how far this can go and where to take the bamboo approach.
Solar energy is another critical resource: without energy there is no networking, no digital networking. Here, again, there was a capacity gap and a knowledge gap, we would say. Together with experts, and with practitioners involved in community networks in Brazil, we set up online courses to see how we can explain photovoltaic systems and the building blocks available on the market: how to make them available so that people can use them safely, so that the equipment lasts a good while, and how they can calculate what they need. This is ongoing. There are also some new building blocks emerging, like an open maximum power point tracker, which makes energy use more efficient; it is open hardware and open software, very much aligned with the needs of communities. A last point: local services are something that really struck a chord with the community networks. There are different solutions for e-learning and for content production, and having those available on local servers, as was explained before by guifi.net, is a great contribution, because then there is real ownership of the data and the infrastructure. Of course, capacity building is needed for this, but local servers, if nicely done, make those services more sustainable, both in how they are organized for the community and in terms of reduced environmental impact. So those are just some examples, and thank you very much; I am looking forward to the discussion.
Luca Belli:
All right, so we have finished with our speakers, and we now have an open mic for everyone willing to provide comments, ask questions, or share any kind of thoughts. If you want to raise any issue or ask any question, I invite you to use this mic in the middle; we don't have a roaming mic, but you can line up there and ask your question. Please go ahead.
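The back-of-the-envelope photovoltaic sizing that Nils Brock alludes to ("how can they calculate what they need") can be sketched roughly as follows. This is a minimal illustration, not material from the session: the load figures, sun hours, and derating factors below are illustrative assumptions that a real deployment would replace with local measurements.

```python
# Rough off-grid PV sizing for a small community-network node.
# All numbers are illustrative assumptions, not values from the session.

def size_pv_system(daily_load_wh, peak_sun_hours, system_efficiency=0.7,
                   autonomy_days=2, depth_of_discharge=0.5, battery_voltage=12):
    """Return (panel_watts, battery_ah) needed to carry the load."""
    # Panels must replace the daily load, derated for wiring and charging losses.
    panel_watts = daily_load_wh / (peak_sun_hours * system_efficiency)
    # The battery bank must cover cloudy days without being drained too deep.
    battery_wh = daily_load_wh * autonomy_days / depth_of_discharge
    battery_ah = battery_wh / battery_voltage
    return round(panel_watts), round(battery_ah)

# Example: a 10 W router plus a 5 W radio running 24 h/day = 360 Wh/day,
# in a region with about 4.5 peak sun hours.
panels, battery = size_pv_system(daily_load_wh=360, peak_sun_hours=4.5)
print(panels, battery)  # prints: 114 120
```

The point of a course like the one described is precisely to let communities plug in their own loads and local sun-hour data, rather than buying oversized or undersized equipment.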
Audience:
I'm from Rio de Janeiro, from Favela da Maré in Brazil. It is very important to hear about these experiences with community networks and internet access around the world, but I wanted to ask a question. I live in a favela that is dominated by the militia and by drug traffickers. During the pandemic the internet was very important to us, but today we can no longer have access to the internet or telephone lines, because the operators' antennas were removed, and today the internet and telephony networks are controlled by the traffickers and the militia. [Interpreter:] She was saying that in this territory, dominated by militia and drug traffickers, the internet was very important in pandemic times, but right now they don't have access to telephony or the internet, because the telecoms won't enter a militia-dominated territory. [Speaker:] I am a community communicator and I live in this territory, in Rio de Janeiro, Brazil. Article 19 in Brazil offered us an alternative: to build antennas and set up a community network. But after analysing the risks, we decided it was better not to do it. That means that we, community communicators, cannot do territorial community communication, and the favela itself, which has 140,000 inhabitants, today has no access to telephony or the internet. So I ask: what alternatives would we have, what solutions could we look for, so that we could have access to telephony and the internet in a favela that is so important and also so large in Rio de Janeiro? [Interpreter:] She's saying that Article 19 in Brazil offered to help them create a community network, but they assessed the risks and decided they shouldn't do it, because it was life-threatening due to the militia and the traffickers. So now she is asking what could be done, how we could think about this specific problem in this context.
Let's take the other question that we have here, and then we can start a round of answers, within the limits of what is possible for this group. Hello, my name is José Arthur, I'm part of the youth delegation from Brazil, and I work with community networks in indigenous communities in the Amazon region. I would like to make a small comment on this subject. When we talk about community networks, it is necessary to talk not only about the implementation part and other issues, but also about what actions are being taken to ensure that digital inclusion is actually achieved, to avoid digital terrorism, and to teach the community about what opportunities they can have through the connectivity these community networks bring, that is, how they can use it to change their realities. This is a point that I think is always very important when the subject is discussed, because it helps ensure the community's survival. I think we… Oh yeah, if you want, we can take another one and then we reply. Yeah, one question you already know. Two things. One is, I've been talking to people who have certain ideas of digital sovereignty. One of my friends, a researcher who wants to work with us in some European countries, said, "but you know, we have to be digitally sovereign, so we can't cooperate very closely." Does that ring a bell for you? Do you understand?
Luca Belli:
Okay.
Audience:
So when you collaborate with somebody on technology, this flag goes up: we have to be digitally sovereign, therefore we cannot collaborate very closely. Okay? So this is one idea of digital sovereignty. I wanted to bring it into focus because I was very confused. The second one is: there are a lot of communities, great work, amazing sessions, but I don't see a representation of who they are. You cannot have "community" rolled in with people talking about "we did Zoom, we connected to YouTube", right? Who are they? What are they doing with the community networks? We want to understand how they participate, how they are a community in the network, and for which services. And I can go on about that, but again, look at the web. You can bring all the internet that you want, but without the web in the community, you're just connecting people to Zoom, YouTube, something else, and still talking about digital sovereignty. We have to think about this.
Luca Belli:
I think we might have several reactions here. Who wants to go first?
Carlos Baca:
Thank you, because these questions are, I think, essential. We can't talk about everything we learn and know, because these are very long processes. But just to say: this is actually why capacity building, I think, is key in this process. If we think of digital sovereignty, or digital autonomy, or technological autonomy, as a black or white thing, we are on the wrong track, I think. And if we think of it as a place where we will arrive and stay, where we will have full autonomy in our lives and all be very happy because all of us have autonomy, that is also a bad way to understand it. But if we understand it as a process in which the communities, and we ourselves, have enough information to take good decisions, the decisions we think are best for us, then we are on the right track. So, at least in Latin America, we want to think of technological autonomy as a process of taking our own decisions, but with all the information we need. If a community understands that and still wants to use Zoom, because the signal is better, or Facebook, or whatever, but they understand what they are doing and the risks involved, that is better. Because otherwise you are left with the binary: connected or disconnected. And I think this is not the way. In one of the conferences yesterday, someone said that we need to escape the idea that there is one best state, connectivity or no connectivity; there is a whole grey scale in between. The important thing is that the communities understand what is happening, what using these different technologies implies, and then how they can have the environment to develop the projects they want to develop. I think this is the key. The other point, about violence, is very difficult. On the one hand, in Africa, in South Africa and Kenya, we have very good community networks in urban areas, like TunapandaNET, like iNethi, like Ocean View in Cape Town. On the other hand, in our experience in Mexico, we have the narcos.
I know that you have all seen the Narcos series. And much of it is real; this is a reality in Mexico. We work in the north of Mexico, and we need to negotiate a lot with them. Actually, they have the best communications I have seen in the rural areas. They are part of the environment, which is difficult. It doesn't mean that we need to sit at the same table with them, but we know that they are behind all the discussions in the communities, and we need to know that. It's very difficult. They have a big truck full of satellite internet and satellite phones, so they have a lot of technology. No, no, no, this is not a community network; this is the narcos' network. Yes. How was the negotiation? I just got curious: how would you approach them? For example, in one of the meetings we had, there were two men like that; they were very quiet, and then they went out, and some people said that they were watching. What do you mean? Because they also want internet for their homes; communication for daily life matters to them too. That's their interest, their personal interest. Yes, they want their children to be connected, and the school too. So in some places they actually help to develop infrastructure. It is a very complex thing. We know that in the communities we don't need to side with them, but they are part of the conversation, of the decision making, because the starting point of the negotiation is their personal interest. They are there, and this is a reality. Thank you.
Raquel Gatto:
And that's very good. Let me first say a small version in Portuguese: Gisela, it's a pleasure to meet you; welcome, and thank you for the question. I'll speak with you in Portuguese later. But anyway, thank you very much for the questions. I can't address every single point, but I think there is a common thread here, which is the concept of meaningful access. We tackle this a lot, including in the policy network on meaningful access. It's not only about bringing the connection; it's about the whole environment someone is connected to, the skills involved, the equipment, and also, let's say, the local environment. What both Gisela and José Arthur brought, one from an urban environment, a very non-rural environment (let's change the wording if that's difficult), is, first, that it's a myth that only remote and rural areas have problems. It's important to say that we have those islands even in the cities. That also raises, for the indigenous community and for the favelas, how important it is to have this local voice heard, because there are different challenges, and that's okay. But first you need a space where you can openly share that, and at least find someone who is facing the same problem or has addressed it, and get some input on possible solutions. It is also important if you think about how to scale this: if many are facing the same problem, how are we going to find a long-standing, more sustainable solution for the future? So that's my first point: connecting all these dots and all these experiences, and having the places where this can be done, is the first step.
And then, I'm not going into the trade-offs and the digital sovereignty concept in full, but I think Carlos raised a good alert. The major concern is the trade-offs involved in these kinds of collaborative arrangements, and the real awareness of what you are giving up when you buy into a solution. So there is no right or wrong; it is a matter of how well this is understood, and of how honestly something is advertised as being community-network driven or not. I would put it in that sense.
Luca Belli:
I just wanted to add a point: we have to understand what we are speaking about, which problems we want to face, and what the solutions to those problems are. Community networks are not a silver bullet that solves all the problems we have in the world. The fact that there is criminality in a given area is, sadly, not something that can be solved with community networks; it is not the task of community networks to deal with warlords or drug lords. The community network can help; it is actually a very good complementary solution, because it brings culture to people, it brings communication to people. As a Brazilian communicator, I'm sure you know Paulo Freire very well. He used to say that education doesn't change the world: education changes people, and then people change the world. I think we have to take a similar approach to connectivity. Connectivity does not change the world; connectivity can change people, and then people will change the world. So if you think the community network is the silver bullet that solves all problems, I'm sorry, but you will be disappointed. But it is an alternative solution to bridge the gaps that are evident in the classic, traditional connectivity models, which are states and markets. The areas that are not connected, be they rural, peripheral, or slums in cities, are so-called market failure areas; that is the technical term, because the market fails to connect them, since there is no economic interest in doing so and no return on investment. Some of them may also be state failure areas, because the state, for various reasons, has abandoned them. But we all know that no area, no community, is without rules.
So when the state is not ruling, someone else is ruling. And that is a problem that I think the state should solve, not really the community networks. But community networks are a good complementary solution to expand connectivity. Osama, you wanted to say something?
Audience:
So actually this is not a question but a point of observation, also from experience; many of the players are sitting there, and some players are sitting here. The observation is that community networks are so far practised as an alternative way of providing or building connectivity, which may be frugal, which may be commune oriented, and so on and so forth. The second point is that as soon as GSM, or the internet itself, reaches you in terms of access, the community network gets challenged, and they either close down, or go haywire, or all the users move onto that network, right? Not that the previous network was not connected to the internet, but in terms of viability of existence. The third is that the best community network practices may have become ISPs in their local areas, like guifi.net, and maybe Rhizomatica, and there may be examples I don't know. What I want to discuss is: what is the future of the community network in itself? For the first 15 years, the internet was looked up to. Now we are fearful of the internet, because more bad things come to you, or you have to go through those bad things to get the good out of it. And therefore, can a community network become an alternative commune in itself, in other words, an intranet? Can it become an intranet, where I connect to the internet only when we want to, or something like that? Is there some practice like that? Since you document a lot, this is something very important that we need to discuss: community networks not in technological terms, not as an alternative ISP, not as an alternative for access, but as a way to create your own commune, your own gated community, where you can safeguard yourself and run on your own, even though you have Airtel or NTT Docomo available; I don't want it, I just plug in and then plug off when I want. Because now only one third of the world is yet to be connected. So are we looking at community networks as an alternative to connect that one third of the world, or to safeguard those who are already connected? That is the question.
Nils Brock:
Yeah, thanks for the question, Osama. Maybe I can start. I think you are pointing in a good direction: it's beyond connectivity, and it's beyond access. When we talk about the future of community networks, the intranet approach, having local services, is really somewhere a difference can be made. And to start at the other end: what does meaningful connectivity mean at the community level? During the IGF we have seen different categories and metrics, but what is missing is the question of how the community itself would answer: what brings meaning to connectivity from their end? That can be very different for a rural or an urban community, and different again from one community to another; there are so many factors. Only if we take this into account is it possible to rethink things. From recent work, there is a study we are working on about local services, trying to understand which services matter to communities. Again, that can differ: agricultural services, educational ones, content creation. But there are such things, and they can often be done in a different way, a complementary way we could say, or an alternative way, whatever the framing. I think this is important because you are right: if it's only about connectivity, and a provider with a business model arrives at a community at some point, this collective effort could be undone.
Raquel Gatto:
I can take it very quickly, just to react to that. First, if we are asking whether community networks should become internet service providers or not, we are in a good place; that's a good problem to have, and it means the community network has grown and evolved to the point of perhaps being an ISP. But anyway, I'm not going into the nitty-gritty of whether it should be one or not, because I think there are other regulatory discussions that might change things in some places, where we are looking for more of this social licence for community networks as an alternative provider, not to be confused with the traditional internet service provider. And as you can see, well, I'm a lawyer, nobody's perfect, so I lean more towards the regulatory and process environment. I would just offer a caution on the examples you mentioned: it becomes the internet service provider, and perhaps also the content provider, and you get this consolidation of all services and connectivity in the community network. Again, there is no right or wrong if this is really the community's will and it is community-driven. The problem is when this package carries "community", or something for the community, but is really top-down, something that is not their will and not their self-determination. So that is just the risk in the consolidation you were outlining. Thank you.
Luca Belli:
I just want to add an element to this
with regard to the digital sovereignty debate, which actually has a twofold dimension. On the one hand, as Raquel was mentioning, if the community network is so successful that it becomes a very well-performing ISP with very low prices, then I think the community network has succeeded: starting from scratch, it became exactly like the big telcos, but without being a big telco, while remaining community-driven. That is an enormous success for the local community, as long as it is maintained by the local community and the governance model is driven by the local community itself. On the other hand, as we have also documented over the past years, there are community networks that are, as Osama was mentioning, basically intranets for local communities, and that is another element of their sovereignty. If their choice is to create a local network to share information, to have their own platforms to communicate, to trade services, or to share information on medical treatments, and they only connect to the internet occasionally to do whatever they want, then again, we may argue, that is an expression of digital sovereignty: local communities willingly understanding what technology means, building it, and using it for what they want. And if they choose not to communicate with you, I'm sorry for you, but it's their choice. So, Carlos, do you have a question or a comment? No? Okay. So, keeping an eye on our last five minutes: you have a question or a comment? Okay, I saw.
No, so to conclude, I simply wanted to stress that we really have to consider the self-determination element of this: being the master of your own digital destiny, being the one who understands what you are dealing with and who crafts a plan to succeed in your aspiration. And if your aspiration is to have a local ISP that works like Telefónica in terms of quality, but at half the price, and you redistribute the benefits in the local economic environment, well, I would say you have been very successful. We can disagree, but I think that is not a failure; on the contrary, it can be seen as a success. Please, Carlos.
Audience:
Thank you, Luca. Carlos Rey-Moreno, Association for Progressive Communications. First of all, thank you very much, Senka, thank you very much, Luca. We are talking about the WSIS+20 review, the IGF, and the impact of the IGF, and certainly the dynamic coalition on community connectivity has shown, over the years, over the outputs, over the discussions, how much value there is in holding these types of conversations. Second, I want to speak on behalf (though I am not them, of course) of Okoro, who was supposed to be speaking but, for connectivity reasons, is not with us. He is an APC member from Nigeria, from the Media Awareness and Justice Initiative. They are actually working on some of the elements Osama was talking about: collaborating with another APC member, the Open Culture Foundation, on the SOOD project, bringing meaning to their community around an oil spill in the Port Harcourt area, where their communities are based, as well as monitoring air pollution with devices, bringing value-added services to the internet they had, right? The thing is that, by doing that project, they also realised that the connectivity they were getting from the mobile operators was not enough. So in the last year they went and set it up; they were not, you know, pioneers of this movement, they started less than 10 months ago, and they have started a community network, two community networks actually, in the areas where they work, so that this type of citizen science can be done with the internet quality it requires, also given the affordability issues they face in Nigeria.
So it started the other way around: it started from bringing meaning and value-added services, using digital platforms and solutions to address the problems they were facing around air quality and oil pollution, and using those tools to bring the community together and then also solve their connectivity issues. Anyway, I really wish Okoro were here and could speak about the project they are doing, which is really amazing. Thank you.
Luca Belli:
Okay, so we also have the announcement that there will be an excellent talk this afternoon. I'm sorry, I will have to fly right after lunch, but I'm sure I will watch it on streaming. Thank you very much to everyone for your excellent food for thought. I think all the participants here now have many more ideas to reflect on regarding community networks, digital sovereignty, and environmental sustainability. And if you want even more ideas, do not forget your complimentary copies of this year's report, which are here for free; please take as many as you want. Thank you very much.
Speakers
Amreesh Phokeer: 129 words per minute; 913 words; 426 secs
Atsuko Okuda: 122 words per minute; 735 words; 361 secs
Audience: 159 words per minute; 1719 words; 651 secs
Carlos Baca: 142 words per minute; 1925 words; 811 secs
Luca Belli: 159 words per minute; 2254 words; 853 secs
Nils Brock: 160 words per minute; 1281 words; 479 secs
Pedro Vilchez: 126 words per minute; 668 words; 319 secs
Raquel Gatto: 139 words per minute; 2031 words; 875 secs
Senka Hadzic: 155 words per minute; 239 words; 92 secs
DC-SIG Involving Schools of Internet Governance in achieving SDGs | IGF 2023
Audience
During the discussion on Internet governance schools, it was highlighted that these schools strive to build effective, accountable, and inclusive institutions. Institutions like the Internet Engineering Task Force and ICANN were cited as examples of inclusivity and effectiveness in the field of internet governance. Schools of internet governance expose students to different institutional forms, including NGOs that manage standardization or open-source communities.
Another significant aspect of internet governance schools is their role in promoting peace and cultural understanding. By connecting people from different countries, these schools leverage the internet as a tool to combat prejudice and foster peace. The schools invite guests from various countries to demonstrate the world’s diversity and encourage cooperation and mutual understanding.
Furthermore, these educational institutions have the potential to reduce unemployment and empower individuals economically. For instance, the Pakistan School on Internet Governance includes sessions on leveraging the internet for entrepreneurial opportunities, highlighting successful digital initiatives that inspire youth. By training individuals on internet governance and fostering entrepreneurship, these schools contribute to employment generation and economic growth.
In areas with limited internet access, it is crucial to inform local communities about the plans of local operators and regulators. The Pakistan School on Internet Governance, for instance, invites local operators to share their internet access plans, while regulators and government officials inform the audience about their visions and strategies. This knowledge exchange helps bridge the gap in connectivity by ensuring that affected communities are aware of plans to improve access.
The African School on Internet Governance focuses on leadership development, gender equality, and addressing the gender digital divide. This collaborative initiative between Research ICT Africa, the African Union Commission, and the Association for Progressive Communications targets middle to senior management in government, regulators, and civil society. The school aims to provide a platform for women thought leaders, promote African expertise, and address gender-based violence and the digital divide.
Internet governance schools also facilitate discussions on sensitive topics, such as internet shutdowns. By creating an inclusive environment for dialogue, these schools bring together civil society, human rights activists, and government and regulatory representatives from African countries. This enables open and constructive discussions on internet-related issues, including internet shutdowns.
Overall, internet governance schools play a crucial role in building effective institutions, promoting peace and cultural understanding, reducing unemployment, bridging the urban-rural divide, and addressing societal issues. Through education, inclusivity, and dialogue, these schools contribute to the sustainable management of the internet and the achievement of Sustainable Development Goals.
Speaker 1
The Japan School of Internet Governance, which started this year, aims to promote Internet Governance on a larger scale and raise awareness of its importance. An announcement by Toshi from the school showcased its successful launch. Notably, the school conducted a full-day session with youth participants, emphasizing its commitment to engaging the future generation in discussions and decision-making processes regarding internet governance.
To achieve its goals, the school intends to foster information exchange and facilitate meaningful discussions on various topics, including contentious issues like the Manga Pirate Site. These subjects are incorporated into the curriculum, equipping students with the knowledge and skills needed to navigate internet governance and address potential challenges.
The efforts of the Japan School of Internet Governance align with SDG 4: Quality Education and SDG 9: Industry, Innovation, and Infrastructure of the United Nations’ Sustainable Development Goals (SDGs). By focusing on these SDGs, the school contributes to improving the quality of education and promotes the development and innovation required for a robust internet infrastructure.
The establishment of the Japan School of Internet Governance is a significant step towards increasing awareness and understanding of internet governance in Japan. It strives to create a well-informed and proactive society by facilitating dialogue, promoting information exchange, and addressing relevant issues.
In conclusion, the Japan School of Internet Governance, which began its activities this year, seeks to elevate the importance of internet governance and expand its reach. Through its curriculum and initiatives, the school empowers individuals, particularly young participants, by equipping them with the knowledge and skills necessary to navigate the complexities of internet governance. By addressing significant issues such as the Manga Pirate Site, the school demonstrates its commitment to fostering dialogue and nurturing a well-informed society.
Olga Cavalli
During the session, the speakers emphasized the importance of finding connections between internet governance schools and the Sustainable Development Goals (SDGs). Olga Cavalli specifically highlighted the significance of discussing activities related to the SDGs at different schools of internet governance. This highlights the need for these schools to align their work with the broader global agenda of achieving sustainable development.
One of the main topics discussed was the energy consumption of the internet, with predictions suggesting that it will double by 2030. This raises concerns about its environmental impact. It was also highlighted that there are still parts of the world that lack access to electricity, exacerbating energy disparities. The emergence of the Internet of Energy as a new field further emphasizes the need to address energy consumption and sustainability in the context of internet governance.
The schools of internet governance were commended for their role in promoting understanding and action around energy consumption and sustainability. The South School of Internet Governance, in particular, focuses on issues related to energy consumption and its potential impacts, such as climate change. This demonstrates that these schools are not only educating individuals but also becoming platforms for addressing pressing global challenges.
The approach of organizing schools in different cities was endorsed as a means to reach and include diverse communities in the discussions. The Pakistan School on Internet Governance, for example, rotates among different cities, allowing more diverse communities to access education and engage in the dialogue on internet governance.
Efforts to bridge the gap between urban and rural communities were highlighted, particularly by the Bangladeshi School of Internet Governance. The speaker, Ashrafur Rahman, who is a coordinator of the school, mentioned their endeavors to involve rural and transgender communities and promote innovation in rural areas. This showcases the school’s commitment to inclusivity and addressing the digital divide between rural and urban populations.
Another notable aspect of the schools of internet governance is their focus on the SDGs and the integration of the goals into their programs. Olga Cavalli organized a school in Rio with Fundação Getulio Vargas that placed specific emphasis on the SDGs. By incorporating the SDGs into their curriculum, these schools are contributing to the realization of the global goals.
The evolving nature of the schools of internet governance was emphasized, with references to partnerships and collaborations. The Argentina School of Internet Governance was highlighted for partnering with a university to offer certifications from Fortinet, a leading cybersecurity company. Additionally, the production of a document for the Global Digital Compact involving over 80 fellows from around the world further demonstrates the schools’ evolving and expanding role in the field of internet governance.
The schools of internet governance were also recognized for their role in enhancing communication and learning among schools. The usefulness of the dynamic coalition in supporting these endeavors was acknowledged, as it provides materials and helps schools understand the multi-stakeholder model and its evolution. Furthermore, the availability of a website with a map showing the locations of the schools was noted as a means to share and consult information between the schools.
However, the issue of limited time for active participation in these activities was acknowledged by Olga Cavalli herself. This suggests that despite their commitment to internet governance, time constraints can hinder active engagement in these initiatives.
On the positive side, the availability of school content on their YouTube channel in multiple languages serves to disseminate their knowledge and insights to a wider audience. Additionally, students are kept engaged through a Telegram group, where they can access fellowship opportunities, job opportunities, research, and news about internet governance. This further strengthens the sense of community and provides students with ongoing learning and development opportunities.
In conclusion, the session highlighted the importance of integrating the works of internet governance schools with the SDGs. The energy consumption of the internet and the need for sustainability were key concerns discussed. The schools of internet governance play a significant role in promoting understanding and action around these issues. They reach diverse communities through their approach of organizing schools in different cities and strive to bridge the gap between urban and rural populations. The schools’ focus on the SDGs and their evolving nature, as well as partnerships and collaborations, contribute to their expanding role in the field of internet governance. Despite time constraints, the schools continue to enhance communication and learning, with the dynamic coalition and the sharing of information and documents through their website. Overall, the session provided valuable insights into the achievements and challenges faced by internet governance schools and their contribution to a more sustainable and inclusive digital future.
Satish Babu
The analysis of the provided statements reveals several key points and arguments made by the speakers. Firstly, Satish Babu is associated with two schools, specifically the Asia Pacific School on Internet Governance and the India School. These schools were founded in 2015 and 2016 respectively, with the purpose of providing capacity building and building awareness in the field of internet governance. The primary function of these schools is to equip individuals with the necessary skills and knowledge to effectively navigate the complexities of internet governance.
Furthermore, it is highlighted that there is a need for schools on internet governance globally. Many countries and regions, including Africa, Asia Pacific, Argentina, Armenia, Chad, Ghana, Europe, North America, Nigeria, Pakistan, and Russia, have already established their own schools in response to this need. These schools serve as platforms for individuals from different parts of the world to convene, collaborate, and share ideas related to internet governance.
Another important aspect discussed is the incorporation of the Sustainable Development Goals (SDGs) into the curricula of these schools. The two schools that Satish Babu is associated with do not explicitly highlight the SDGs; their curricula were developed without considering the SDGs, as the goals were adopted after the schools were already operational. Nevertheless, Satish finds value in discussing how these schools naturally address many of the SDGs, emphasizing the alignment of their educational programmes with the broader goals of sustainable development.
Satish also advocates for the enhancement of cybersecurity efforts and the development of online education resources. It is emphasized that cybersecurity is a central issue in internet governance, and the Global Forum on Cyber Expertise, as well as the London Process, provide opportunities for African colleagues to engage and address this issue effectively. Additionally, the successful workshops conducted by the schools in collaboration with various stakeholders have led to the development of new projects.
Moreover, the speakers acknowledge the proposal for global schools on internet governance, highlighting that two schools have already evolved from regional to global stages. Satish also emphasizes the importance of content evolution in internet governance education, specifically citing the India School of Internet Governance as an example. The school has made its course content available on their website, demonstrating the journey and evolution of the curriculum over eight years.
In conclusion, the speakers address the importance of quality education, partnerships, and the SDGs in the field of internet governance. The schools on internet governance play a crucial role in building awareness and capacity, and there is a global need for such schools. Satish Babu advocates for the enhancement of cybersecurity efforts, the development of online education resources, and emphasizes the importance of content evolution in internet governance education. The analysis provides valuable insights into the current landscape of internet governance education and the efforts being made to address the challenges and opportunities in this field.
Avri Doria
Avri Doria hosted the Dynamic Coalition on Schools and Internet Governance session, emphasising the importance of education in internet governance and the role of schools in achieving the Sustainable Development Goals (SDGs). The session was divided into three sections: presentations from new schools, discussions on SDGs and the actions schools are taking, and an examination of the objectives of the Dynamic Coalition on Schools.
Despite some participants initially being absent, Avri Doria ensured that the session followed the outlined agenda. She highlighted the dynamic coalition’s significance in supporting schools and promoting governance education. The coalition has developed useful resources for schools, such as documents and materials, to aid learning about governance and the multi-stakeholder model. Avri Doria believes that the dynamic coalition could be an invaluable resource in teaching governance and enhancing understanding of the multi-stakeholder approach.
The session also delved into discussions on the SDGs, specifically focusing on SDG 5 (Gender Equality), SDG 7 (Affordable and Clean Energy), and SDG 16 (Peace, Justice, and Strong Institutions). Avri Doria stressed the importance of addressing these goals within schools and internet governance. This emphasised the need to incorporate the SDGs into school curriculums and promote gender equality, clean energy, and peace and justice within educational institutions.
During the session, Avri Doria highlighted the collaboration between the dynamic coalition and the IGF Secretariat. She suggested the need for a follow-up to evaluate the impact and outcome of this collaboration, examining its success rate and feedback on the collaboration document. Avri Doria believes that additional elements could be included to enhance the effectiveness of the document.
Furthermore, Avri Doria advocated for the use of modern internet standards and global good practices to enhance justified trust in the internet and email. She referred to the Global Forum on Cyber Expertise, which includes a track focused on enhancing justified trust. Avri Doria mentioned the availability of resources and a handbook explaining these modern internet standards and their significance. She considers these resources valuable assets for schools and encourages their use in educational settings.
Additionally, Avri Doria discussed a website aimed at open educational resource sharing for internet governance. The website features a map function and sections dedicated to fellows, faculties, and a dynamic coalition wiki. While only one school has contributed materials thus far, Avri Doria encouraged others to contribute resources in order to enrich the website and promote collaborative learning among different schools.
In conclusion, the session hosted by Avri Doria underscored the importance of education in internet governance and the role of schools in achieving the SDGs. The discussions emphasised the significance of teaching governance, integrating the SDGs into school curriculums, and fostering a deeper understanding of the multi-stakeholder model. The session also highlighted the collaboration between the dynamic coalition and the IGF Secretariat, as well as advocating for the use of modern internet standards and global good practices. The importance of sharing open educational resources for internet governance was also emphasised, promoting collaboration among schools to enhance education in this field.
Sandra Hoferichter
Schools on internet governance play a significant role in promoting gender equality and empowering women. These schools have been successful in attracting a greater number of female participants compared to males, contributing to a more balanced gender representation in this field. For example, the European Summer School on Internet Governance has seen a higher turnout of female participants. These schools provide a comprehensive education that equips young professionals with the necessary knowledge and skills to pursue leadership positions.
However, despite progress in educational initiatives, there is still a lack of advancement in women’s representation in managerial positions in Germany. The proportion of women in managerial roles has only slightly increased from 21% in 2014 to 23% in 2018. Continued efforts are needed to address gender disparities in leadership roles.
To address this issue, more emphasis should be placed on adult education for promoting gender equality. Schools on internet governance can serve as a valuable platform for adult education, helping to bridge the gap and empower women in various aspects of their lives. By providing access to education and empowering women, these schools contribute to the progress towards achieving gender equality, aligned with Sustainable Development Goal 5.
In the context of Sustainable Development Goals (SDGs), Japan has shown greater awareness and promotion compared to Europe. Japanese society demonstrates a visible commitment to SDGs, with initiatives such as displaying SDG symbols on windows and cars. Europe could learn from Japan’s approach and consider adopting similar strategies to raise awareness and garner public support for achieving the SDGs.
Furthermore, it is important to address the limited utilization of digital resources like the wiki and website for global networking. Despite the availability of these platforms and partial funding from the Medienstadt Leipzig association, their usage has been relatively low. Sandra Hoferichter expresses concern over this limited engagement and emphasizes the need for financial support from other schools or organizations to sustain these digital resources and the work of the dynamic coalition. Such support would contribute to the effective dissemination of information and knowledge sharing among a wider network, enabling greater collaboration towards achieving SDGs 4 and 10.
In conclusion, schools on internet governance have proven to be instrumental in promoting gender equality and women’s empowerment. However, there is still work to be done to address the underrepresentation of women in managerial positions in Germany. By embracing adult education and adopting Japan’s approach to SDG awareness, progress can be made towards achieving these goals. Additionally, supporting the wiki and website for global networking through increased funding would enhance their effectiveness in facilitating collaboration and knowledge exchange for the SDGs.
Wolfgang Kleinwaechter
In the analysis, the speakers delve into the multidimensional nature of internet governance and its intersection with education. They emphasize that internet governance encompasses the evolution and use of the internet, covering both the technical and application layers, as well as various public policy issues related to the internet. It is noted that the complex and evolving nature of internet governance makes it difficult to study within a traditional university setting.
The importance of specialized courses in internet governance is highlighted. The speakers point out that new questions and issues have arisen in recent years that were not previously on the agenda. These require accurate understanding, as seen in the example of artificial intelligence (AI) governance. The speaker mentions growing confusion surrounding concepts such as digital governance, AI governance, and cyber governance. This underscores the need for courses that provide clarity to address the evolving landscape.
Furthermore, the speakers stress the significance of academic independence and proactivity in developing educational programs. They advocate for taking inspiration from the global community while also thinking independently about what is beneficial for one’s own country and community. They emphasize the need to be proactive in addressing the challenges and opportunities presented by internet governance.
The analysis also draws attention to the importance of judges with knowledge of internet governance. It is stated that in a world where many conflicts may end up in court, judges without understanding of internet governance may make incorrect decisions. This underscores the need for education and expertise in this area to ensure fair and accurate rulings.
Additionally, the analysis touches upon the topic of cybersecurity and the establishment of the Global Forum on Cyber Expertise, whose origins lie in the London Process and whose focus is cybersecurity. It is noted that the Global Forum will be hosting a world conference on capacity building, emphasizing the importance of collaboration and partnerships in addressing cybersecurity challenges.
Overall, the analysis reflects a comprehensive understanding of the importance of education, collaboration, and continuous development of expertise in the field of internet governance. The speakers provide valuable insights, highlighting the multidisciplinary nature of the subject, the need for specialized courses, the significance of academic independence, the role of judges, and the importance of cybersecurity. These observations are crucial for navigating the complexities of internet governance and addressing its challenges effectively.
Alexander Isavnin
The analysis examines various arguments and stances on different topics, addressing issues such as internet governance schools, travel and international exposure, Russia’s societal norms, the obscurity of UN processes, access to water and healthcare, the government’s decision-making, and the perceived obscurity of certain Sustainable Development Goals (SDGs).
One argument posits that internet governance schools can contribute to the development of effective and inclusive institutions. It highlights the importance of including diverse stakeholders, with the Internet Corporation for Assigned Names and Numbers (ICANN) serving as an example. This argument stresses the need for governance frameworks that accommodate different perspectives and foster collaboration.
Another argument emphasises how travel and international exposure promote understanding and peace. It cites a quote from Mark Twain, asserting that travel has the power to eradicate prejudice, bigotry, and narrow-mindedness. The widespread availability of the internet is also seen as a means to bring diverse experiences from different countries into people’s homes, further enhancing global understanding.
In contrast, a negative stance suggests that Russia is slowly reverting to its old societal norms. However, the analysis lacks specific supporting facts for this claim, making it somewhat speculative.
Furthermore, concerns are raised regarding the obscurity of UN processes in the Russian Federation. It is highlighted that these processes are not widely publicised and are considered opaque by the local population, thus raising questions about transparency and accessibility.
On a positive note, the analysis acknowledges that Russia generally has good access to water and healthcare, attributing this to the legacy of the Soviet Union, which laid a solid foundation in these areas.
A negative argument contends that the government may hold the belief that certain actions should not be taken in other areas. However, no specific evidence is provided to support this claim, leaving it open to interpretation.
The analysis also notes the perception of certain SDGs, specifically the 9th and 16th goals, as being obscure. However, without specific details or evidence, this argument lacks substance.
In addition, the analysis highlights a dedicated course aimed at explaining UN processes and SDGs. This course aims to provide information to attendees, ensuring they are well-informed and understand the objectives behind the SDGs and the workings of the United Nations.
In conclusion, the analysis covers a range of arguments and stances on various topics. While some points are supported by evidence, others lack specificity or supporting facts. The analysis provides insights into the significance of internet governance schools, travel and international exposure, concerns about the obscurity of UN processes and certain SDGs, access to water and healthcare, and the government’s decision-making. The course dedicated to explaining UN processes and SDGs is seen as a valuable resource for enhancing understanding in this field.
Session transcript
Avri Doria:
Let me start. My name is Avri Doria and I want to welcome you to the Dynamic Coalition on Schools and Internet Governance session. We have a fairly full agenda. We don’t quite have everybody in the room yet but we can get started. So the first thing is let me just go through the agenda that we’ve got. We’ve got two moderators: we’ve got Satish and we’ve got Olga, who will be joining us. I’m not sure if we have any of our reporters in the room, but we’ll take care of that. Basically there are three sections in this. In the first one we’ll be talking about schools, and Satish and Olga will invite speakers from new schools to come up. What we had showing before, and hopefully can keep showing for a little while, is a single slide from many of the schools; we’re not going to invite them to speak a lot about their schools, but just have their slides appear for a short bit of time. Then we’ll have a second section where we’ll basically take three of the SDGs and we’ll have presentations on what various schools are doing on them. And then finally we’ll have a discussion, with what time we have left, of what the Dynamic Coalition on Schools would like to do. So with that I’d like to pass it to you, Satish, to go into the introduction of the schools and such, thanks.
Satish Babu:
Thanks very much, I’m Satish and I’m based out of India, part of the ICANN At-Large, and also I’m associated with two schools. The first one is the Asia Pacific School on Internet Governance, which was founded in 2015. The second one is the India School, which was founded in 2016. So we have a bunch of slides on different schools. We can quickly run through them; we don’t want to stop and present each. Can you advance to the next one? This first slide is about the African School on Internet Governance. Next please. This is the Asia Pacific School. As I mentioned, it was founded in 2015 and we are planning this year’s edition in Manila in November. Next. This is Argentina SIG. As you can see, Olga is unfortunately not here, but she is the one that is part of this. Next. This is the Armenian SIG. Armenia has also been holding its SIGs quite continuously. Next. The Chad SIG. We have a representative from Chad, so later on in the interaction we can speak about it if you want to highlight anything. Next. Ghana. Anybody from the Ghana school here? No, nobody. Sorry. Yeah, next. This is the European Summer School. I am myself an alumnus, and Sandra is here from the EuroSSIG. Next. This is the India School, founded in 2016. A couple of weeks back we had the eighth edition in India. Next. NASIG. NASIG is a North American school set up by the ICANN community At-Large people, Glenn and others. I don’t think there’s anybody here from that school. Next. This is the Nigerian School on Internet Governance. Is anybody here from Nigeria or the school? No. Next. Pakistan. We have Akash here from Pakistan. The PKSIG, from 2015. One of the earliest schools in Asia Pacific and certainly in South Asia, the first in South Asia. Next. The Russian Summer School on Internet Governance, at St. Petersburg University. Next. This is the South School, Olga’s school. So this is one of the oldest, again, in the world actually.
This is the Sri Lanka IGF, which also doubles as a SIG, so it’s a kind of combined structure. Next. The Virtual School. This was set up, again, by Glenn McKnight and Alfredo, who are part of the ICANN community. When the COVID shutdown came, this was their response to the shutdown, and it is now continuing as a virtual school. Next. Chad, coming up a second time, I think. That’s it.
Avri Doria:
So these are the schools for which we have this slide deck. Is there anybody from any other school not mentioned so far? If so, you can quickly introduce yourself. Yeah. Okay, I’m from Benin, from the Benin School of Internet Governance. Yeah. Thanks for that. This is from Benin. We have one more school which is brand new, and Avri is going to introduce that school. I’m actually going to invite an introduction to that school. And Tadashi, if you would like to, I don’t know if you can hear me, would you like to introduce the new Japan School? While we’re talking about it, let me talk about an event that they had. KCG this year, just before this meeting started, had basically a whole-day session where there was a youth session, and they basically had a session of the school. And it was really quite an interesting day in terms of the students and having sessions and such. Would you like to actually come and introduce the new school that you’re doing in Japan, the Japan School? Yeah. Yeah. Thank you, Lee.
Speaker 1:
My name is Toshi from the Japan School of Internet Governance. We started this spring. Last year, I met her at the IGF in Addis Ababa, Ethiopia. So I knew slightly about schools of internet governance, but what is a school of internet governance? What is the use? I was just very confused. But at that time, I found out what a school of internet governance is. So in 2018, we had a big discussion about the Manga Pirate Site. Probably you may know, at the IGF village there are big booths about that pirate site. So then in 2019, I started to teach in the university, talking about the pirate site and internet governance. I’m also a professor in Kyoto. So then I started again this year on, how can I say, internet governance. So then, I’m very happy to welcome many of you to Kyoto, and I hope that we have, how can I say, promotion of the School of Internet Governance and exchange of more information, and that you help us. Thank you. Thank you very much. And quite looking forward. So back to you. I don’t know if there are any other new schools here that wanted to say something before we move on. So I’d like to ask anybody who’s online, whether they represent any schools of, internet governance schools or internet governance. If there are some, if there is someone,
Satish Babu:
then please raise your hands. I don’t see any hands, so I’m assuming there are none. So we have quickly gone through the slides, but towards the end of the session we have some discussion time. We have Olga, our moderator, who is coming up just now. So we have time at the end of the session to discuss. Yeah, so back to you; this is the pre-gathering. Thank you. So yeah, give yourself a chance to breathe,
Avri Doria:
but we basically have gone through the new schools, the invited new schools. The new Japan School was discussed a little bit. I talked a little bit about the event that occurred earlier this week at KCG. And we’re now at the point where we’re gonna talk about SDGs and schools. And it’s good that we’re giving this a fair amount of time. We’re gonna have three of the SDGs discussed. The first one, and I’ll just start this, the first one will be SDG 5 on gender. The second, SDG 7 on energy. And the third, SDG 16 on peace, justice, and strong institutions. So I don’t know, Olga, if you wanted to introduce the whole theme more than I just did.
Olga Cavalli:
One of the purposes of this session is to try to find linkages between what we do at the different schools of internet governance and the Sustainable Development Goals. Several of the activities that we do with the schools, in different areas of training, are totally related to the different SDGs. So this is why Sandra and myself, and I don’t know if we have other colleagues talking about different SDGs, would like to explain some of the activities that we do in relation to these SDGs. And perhaps some other schools that are in the room could jump in and share some other activities that are related to this issue. Sandra, would you like to go to the gender issue? And then I will follow with energy. Yes, thank you very much.
Avri Doria:
Welcome, everyone.
Sandra Hoferichter:
My name is Sandra Hoferichter. I’m the organizer of the European Summer School on Internet Governance, which is a global school, other than the name suggests. The “Euro” just comes from the fact that we are based in Europe, but we are inviting globally. And I’m proud to say that we were the first school on internet governance, and it’s really amazing to see how many schools evolved over the years and how this became a movement with a really greater impact. Speaking about impact, I would like to focus a little bit on SDG 5, which is: achieve gender equality and empower all women and girls. There are several targets, or sub-goals, that are defined, and I looked at those that are most relevant to schools on internet governance, which I believe is 5.5: ensure women’s full and effective participation and equal opportunities for leadership at all levels of decision-making in political, economic, and public life. I do believe that schools on internet governance contribute to this goal, because most of the schools are not only focused on youth engagement and youth participation, but indeed are an opportunity for young professionals to get a holistic knowledge about internet governance, which then helps them to serve on certain boards or take leadership positions in organizations that are dealing with internet governance. In the SDGs, usually there are also indicators mentioned that provide supporting numbers for the respective goal. I have here a number from Germany only: the proportion of women in managerial positions in Germany in 2014 was only 21%. And in 2018, not much progress had been made; it was only 23%. So you see there’s still a lot of work to do in order to really get women into managerial positions. The same applies for women in parliament or local governments. Also here, little progress has been made.
If I look at the numbers, they even lowered, but this is in a very small range, so I would not go into much detail; it’s around 30 to 40% of women in Germany in local governments or in national parliaments. But I want to focus a little bit on the second goal that applies to our schools, which is goal 5.b: enhance the use of enabling technology, in particular information and communications technology, to promote the empowerment of women. The respective indicator is not really related to what we are doing at our schools, because it shows the proportion of individuals who own a mobile phone, by sex. I think this is nothing that is relevant for us, but the overall goal, enhancing the use of enabling technology, in particular information and communications technology, to promote the empowerment of women, I do think this is really indeed the goal where the schools of internet governance should possibly provide an indicator. Because here I can say, speaking from the European Summer School, I can see that over the past 10 years, the participation of women and the application rate of women is much bigger than the application and participation rate of men. So what does it tell us? Women are obviously more often willing to dedicate vacation time or education time, and travel costs or participation costs, in order to participate in such a summer course. And speaking from this EuroSSIG, in order to have parity in our classes, I am sometimes indeed looking for male participants that are qualified and can participate in our school. This is something that might look different in other regions of the world, but I wanted to give you this very personal view, this very local view, from the school that I’m running. But I have also consulted with UNESCO, and we had a discussion this year in Meissen at our school, and UNESCO numbers prove that the gap, at least in primary schools, has narrowed tremendously over the past years.
There is still a gap remaining in adult education, and here I do believe we can pick up the qualified women who come out of primary school with a really good qualification and include them in our courses. Here I do believe the schools on internet governance, looked at from the perspective of what they can contribute to the SDGs, really can do and are doing a wonderful job, and are creating a good opportunity for adult education, which of course should at some point also lead to bringing women into more leadership positions, not only managerial ones, but also in parliaments and governments. I have some resources here, so whoever is interested, I’m happy to share those. They are, as I said, from UNESCO, from Sciences Po Paris, but also from the World Bank. If you would like some more details, I won’t go into them right now.
Olga Cavalli:
Thank you. Thank you very much, Sandra, and apologies for being late. I was confused, I thought the session was at 11, and I was running from another session this morning. I didn’t present myself properly: my name is Olga Cavalli. My colleague Adrian, other colleagues from Latin America, and I run the South School on Internet Governance. I think it was the second one in history. And it is interesting, as Sandra says, that after the pandemic we went to a hybrid format, and now we have fellows mainly from Latin America, which is the focus, but also fellows from all over the world. We have translation all the time between Spanish and English, and this year we also organized it in the Northeast of Brazil, in Portuguese. So it has become somewhat global, but the focus is Latin America, or the Americas, because it has also been organized in North America and in the Caribbean. The SDG that I wanted to comment on is number 7, focused on ensuring access to affordable, reliable, sustainable, and modern energy for all. Access to energy is an important pillar for the well-being of people, as well as for economic development and poverty alleviation. I think schools have a major role from different perspectives related to energy, and we also have to think of energy as very much linked with climate change, which is a problem for the whole world, but especially for developing countries that are suffering consequences of things perhaps happening in other parts of the world. So, about energy: in our school we had several panels about the impact of what would happen if we achieved connectivity for all. What would happen with climate change? What would happen with the demand for energy? Powering the Internet consumed around 800 terawatt-hours of electricity a year over 2012-2022, and this is expected to increase this year and the next, and I have some information from different resources. 
The energy consumption of the Internet will double by 2030, so energy consumption will be a major issue, and the impact it may have on climate change may be relevant for many countries, including through the demand created by artificial intelligence and other new developments, the Internet of Things, and many automated devices all over the world. There is another aspect of energy: it’s difficult to understand or believe, but there are areas in the world that don’t have electricity today. So, in the schools, we can talk with different stakeholders and professionals from developing economies about trying to bridge that gap for areas that don’t have electricity. Imagine having no electricity; for many of us, it is unimaginable. In Brazil we were talking with fellows living in the Amazon region. They don’t have roads, they only get there by boat, and the only Internet they have today is Starlink, with some mobile coverage starting with the low-Earth-orbit satellites, and the government is installing some fiber-optic cables through the Amazon. But some parts of that region don’t even have electricity. So the work that we do in the schools, talking with different professionals, governments, and other stakeholders about trying to extend the reach of electricity, the good use of electricity, and the impact on climate change, is very important. Also, there is a new concept called the Internet of Energy. It means the Internet of Things, but focused on energy, on all the devices that control and manage energy as a critical infrastructure. So it includes generation, transmission, and energy usage. That is a new area of work that we may include in the issues we review in the schools. This is what I wanted to share with you. I have some resources here also about energy and climate change. If you want, I can send them to you. 
And maybe, in the audience, there are schools that could share some ideas about the Sustainable Development Goals. Do you think that’s good? Do we have time? We have one more speaker. Okay. I think that would be good. I think after we finish with Alexander, we can
Avri Doria:
then see if others want to comment, either on the schools that have, I mean, the SDGs that have
Alexander Isavnin:
already been discussed, or on others. Hello. My name is Alexander Isavnin, and I teach Internet governance at Moscow University in Russia. I would like to talk about SDG 16, which reads: promote peaceful and inclusive societies for sustainable development, provide access to justice for all, and build effective, accountable, and inclusive institutions at all levels. So I would like to start by talking about how schools of Internet governance could help build effective, accountable, and inclusive institutions. Actually, one of the Internet governance institutions you have to talk about at your school is the Internet Engineering Task Force, and this is the most effective working standardization body, because if things work, they become a standard; if something does not work, or no one needs it, it’s not a standard. It’s much more effective than the ITU, for example. Another example which we bring to our students at schools of Internet governance is ICANN, which is, first of all, demonstratively inclusive, with different groups of stakeholders and different possibilities, and for sure it maintains diversity, which might not be enforced in some societies. So, unlike Sandra and Olga, who come from very developed countries in societal terms, Russia is actually now slowly going back to previous centuries, and so we have to bring our students information about how governance works. In our school, we talk not only about classical Internet governance institutions; we also touch on different bodies, classical ones like the ITU, and also different NGOs which govern the standardization of technologies or work with communities, like the open source community, or Wikimedia, which manages the well-known Wikipedia. Such examples of inclusion and effectiveness, if you are not in the West, could be really impressive for any stakeholder group. 
One of my students reported after our course that we were telling something like science fiction, about institutions that don’t exist. But anyway, if you can bring such examples to your country, you can show examples of working institutions, and people usually know that the Internet works. You can show the relation between what people see when they access the Internet and how it came to exist. I could talk a lot about this; to save time, I will not give the exact targets of this goal, but there are targets like 16.7 and 16.8 which are more precise in this case. Also, this goal is about peace and promoting peaceful things. For us, this is very important, and I will give just one quote from a favorite American writer, Mark Twain. Once, many years ago, before the Internet existed, he said that travel is fatal to prejudice, bigotry, and narrow-mindedness. And actually, all these things are a source of the current wars and conflicts. Now, in many countries, young people are not able to travel and be cured of prejudice in this way. The Internet, first of all, brings us to many different places. Internet governance, as we demonstrate it to our students, shows experiences from many different countries. Schools of Internet governance usually invite guests from different countries and from other schools. So we demonstrate to students that the world is different, the world is interesting. Other people in the world and their activities are not alien, they are just interesting; they are interesting for you and your students, and your students are interesting for them. And the Internet is not just a source of dangerous information: via governance, people start cooperating, understanding each other, and spreading this into their countries, and that could bring peace to our planet more effectively. Thank you. Thanks to you. We have a mic. Maybe you can join in at the mic.
Audience:
Hi. My name is Vakas, and I’m part of the team at the Pakistan School on Internet Governance. Last week we had our ninth edition, in the seventh different city of Pakistan. Our model is a bit different: we are not located at one central location, we actually go to a different city every year. That has its pros and cons as well; I won’t go into details. But there are two SDG targets that I wanted to mention, which are actually part of why we do things at schools on internet governance. One is 8.6, which says: by 2020, substantially reduce the proportion of youth not in employment, education, or training. We educate people, we train people about Internet governance, and, at least in our school, we have a full-fledged session about the Internet as a source of entrepreneurial opportunities. So we invite people whose digital initiatives led to successful business ventures, just so as to inspire the youth to see the Internet as an economic empowerment tool as well. The second one I would like to mention is target 9.c. It says: significantly increase access to information and communications technology, and strive to provide universal and affordable access to the Internet in the least developed countries by 2020. I’m sure many of the other schools do this as well, but we also invite mobile Internet operators to the schools, to provide information about their plans for the particular areas where we go. For example, last year we were in Gilgit, which is actually the home of K2. It is a mountainous area with tough terrain, so Internet access is a big problem there. So we invited the local operators to talk to the people at the school and share their plans to actually provide Internet access to these areas. 
Similarly, we also invite our ministry and our regulators to come and inform the audience about their plans and their vision for providing Internet access to different areas of Pakistan. So, I just wanted to mention these two targets that are relevant to the
Olga Cavalli:
work of our schools. Interesting. Well, you mentioned that you rotate among cities. We do exactly the same: we organize the school every time in a different city in the Americas, which, as you said, has good things and complicated things, because you have to start from scratch with every school, but at the same time you are more likely to reach different communities and different countries, and so I commend you for that. It’s a new challenge for us every year as organizers, but it’s much easier for that community to, you know, become
Avri Doria:
part of the school. Thank you. Thank you. We have another. Anriette, welcome. Thank you.
Audience:
Apologies for being late. There are also new security measures, by the way, so you cannot go through the other entrance, so everything took longer. My name is Anriette Esterhuysen. I’m the organizer of the African School on Internet Governance. We’ve just had our 11th school. It’s a joint initiative of Research ICT Africa at the University of Cape Town, the African Union Commission, and the Association for Progressive Communications. I wanted to speak to SDG 5. I know Sandra has already spoken to it, so I may repeat what she covered, but maybe we cover it in different ways. AfriSIG is a little bit different from some of the other schools in that it’s more of a leadership development event that targets middle to senior management in government, in regulators, and in civil society. This year, for example, we had six members of parliament, who are also here as part of the UN parliamentary track, and we often have deputy-director-level heads of regulatory agencies. So we actually target people who are active in the digital, internet, or ICT policy context, but who don’t have a strong grasp of internet governance. The way we deal with gender is that, firstly, at least 50 percent of our participants, and of our faculty, are always women. We really emphasize having women presenters, women thought leaders. We also really emphasize having African experts. There’s a lot of training done in Africa, particularly even by the African Union, and it’s done by Diplo, and Diplo does excellent work, but they bring mostly presenters from other parts of the world, so we really try to focus on having African experts. We deal with gender-based violence, so that’s one SDG 5 target we address. We deal with the one on leadership development, which I think EUROSIG also does really brilliantly. 
I have the pleasure of participating in EUROSIG. And then there’s the target on policy, where we focus particularly on access, and we look in quite a granular way at what conditions in African countries lead to a gender digital divide, on both the demand side and the supply side, and then at how regulators, for example by making universal service funds more gender-aware, can actually have a positive impact on that. And yeah, I can share more. We also do evaluations, and I mention this every year because I think it’s such a good methodology and I want to share it with the other schools. We do tracer studies, where we look back on four, five, ten years of the school, and have independent research done on how people who were in the school have had their thinking about the multi-stakeholder process changed, and how it has influenced their careers. We have an alumni network, like EUROSIG, and would actually like to collaborate more with EUROSIG on finding innovative ways of strengthening it, because we’re quite similar in some ways. Thank you. Thank you, Anriette. More comments about the SDGs? Yes, go ahead. We also deal with LGBT issues, which is a very interesting and a very difficult thing to do, and we try to deal with them in a very sensitive way, because we have people from African governments and regulators, and we bring them together with civil society and human rights activists. We try to deal with some of these sensitive issues, including internet shutdowns, in a way that creates a trusted environment where you can actually have a conversation, not always reach consensus, but actually build a better understanding of the different perspectives. Interesting. Thank you very much, Anriette. More comments?
Avri Doria:
Yes.
Audience:
Thank you so much for giving me this opportunity. I’m Ashrafur Rahman. I’m the coordinator of the Bangladesh School of Internet Governance. I just want to share some information about the Bangladesh SIG. We are trying to build a bridge through our school between urban and rural people, because, you know, rural people are always left behind the scenes and cannot connect with the mainstream. I’m sorry, but is it about SDGs? Because this part is about SDGs. Yeah, yeah. So we focused on SDG 5, which is gender equality, and I should mention we are also trying to include transgender communities in the SIG. And another one is SDG 9, which is about industry, innovation, and infrastructure, because our rural students have lots of ideas, but they cannot implement them the way urban students at school and college can. So we are trying to work on that. Thank you so much.
Olga Cavalli:
Thanks to you. And you said you’re from Bangladesh? We have several Bangladeshi students in the school. Since we became hybrid during the pandemic and after it, it’s very interesting: for some reason, Bangladeshis like Argentina. Maybe it’s because of football, but… We’re also big fans of Argentina. Good to know. Thank you. Thank you so much. If I can add, as part of what was going on this week in Japan, the Kyoto school gave a very extensive presentation on their school, which was really quite enlightening. Fantastic, thank you, thank you so much. Bravo. More comments about SDGs? Do we have someone remote who maybe wants to say something? I’m not in the.
Avri Doria:
Is there anybody in the Zoom room who would like to make a presentation? Yeah, by the way, I do wonder whether our remote moderator is here, because I haven’t been following, I haven’t seen them, so I’m sort of doing that role. He’s not there; Raymond is not there online. Right, I’ve got one comment here, which was from a dynamic coalition. Actually, the comment is about ethics, whether the courses cover ethics; that was the original comment. And then there was one about robotics that talked about e-health access in remote areas, so I’m wondering whether that, you know, relates to the schools, but it really didn’t talk about a school and an SDG. We’ll take it up in the next part. So shall we move on? Yeah, sure. I don’t know, Anriette, if you want to add something. I have a question, I have a question.
Audience:
So, to all of you, including ourselves, who deal with the SDGs: do you deal with them explicitly? We, for example, also deal with some of the other SDGs; we have human rights, and I’m sure other schools do too. But do you actually, in your curriculum and in your agenda, have sessions that go over the SDG process and link it to the WSIS process? I’m just curious. It’s not something we actually do. We talk about the WSIS process, and indirectly we address SDGs, but not directly. So I’d like to know how you feel about that. If I may, in 2017, the whole school that we organized
Olga Cavalli:
in Rio with the Fundação Getulio Vargas was totally focused on SDGs. At that time, we prepared the whole program trying to focus on all aspects of the SDGs. What we do every year is have a kind of general focus; this year it was sustainable development and generative artificial intelligence. So although we go through all aspects of the internet, we try to bring in some experts and put a special focus on these issues on some days. In 2017, we focused especially on SDGs. But, you know, you always have those issues in the program, especially climate change. We have also had several on energy, not every year, but sometimes.
Alexander Isavnin:
Yes, please. Yeah, okay. Actually, in the Russian Federation, all these UN processes are a bit obscure; they are not very public and so on. Maybe because, since the Soviet Union, the country at home has generally had good access to water, good health care, and the like, the government thinks nothing more needs to be done on these other things. And some SDGs, like the 9th and 16th, are a bit obscure. So we have a separate part of our course which explains how things work around the United Nations. We also talk about the SDGs and where they came from, just for informational purposes, so that if people come to such audiences they are not left wondering what an SDG is and why it’s happening.
Satish Babu:
The two schools that I’m associated with, neither of them explicitly highlights the SDGs. We have subjects and topics around them, but not directly as SDGs. The SDGs were adopted in 2015; at that time, many schools were already running, and so the curricula were developed without looking into the SDGs. But I think it’s good that we are using this session in particular to see how our schools naturally address many of the SDGs. And I also found the comments from the audience about which SDGs are relevant in which region very valuable.
Sandra Hoferichter:
Because, as Alexander said, in Europe too you don’t read much about the SDGs. And I’m pretty impressed by how visible the SDGs are here in Japan. You can see them in some windows; you can see them on the garbage trucks. It’s pretty amazing to me that there’s a much greater awareness of these important goals here in Japan than I can see is the case in Europe, and possibly also elsewhere. So I think we could pick up what Japan is doing in this regard.
Satish Babu:
There is a comment for Vakas. Because he mentioned the mountainous areas where internet access is poor, DTN has been proposed as an access solution; DTN is delay-tolerant networking. So that has come up as a comment for you. Any other SDG-related discussions or interventions? Yes, please. Hi, everyone. It’s Bashar from Chad.
Audience:
So, thank you to the speakers about the SDGs. As you know, we already contribute to the SDGs with our school, because what we are doing is good-quality education. With the school, we teach people, we interact with them; I think that contributes to the SDGs. The SDGs are not like a physical persona; they are objectives that we can help to attain, like gender. So when you bring women inside and into leadership, teaching them, I think that contributes to gender balance. When you talk about climate change and how to save energy, what Olga said is very important too, because in the first edition of our school we had a problem with electricity as well. So we brought the school to a hotel to have sustainable electricity, because when you don’t have electricity, you don’t have projection or light, you don’t have anything; the school is down. So I think that everything we are doing is linked to the SDGs. And what Anriette said, about how we can improve that, because the SDG mandate will end at some point, and how we can incorporate it in our agenda, is very important. As she said, we can have a workshop to link it to the SDGs. But in Africa we also have Agenda 2063, so the question is how to localize the SDGs at the grassroots. Thank you so much. Thank you. More comments? Some comments from remote? No? No? No more comments about SDGs?
Avri Doria:
No?
Olga Cavalli:
So we move to the next section, the roundtable discussion, which is not round, but conceptually a roundtable. It’s really hard to get a real round table at these meetings. So: how do we see schools and training in internet governance evolving? We want to discuss this concept with you, and also the value of schools in reinforcing the relevance of the multi-stakeholder model. I have some names here on the list, but I don’t know if someone wants to start. Yeah, sure. So, like I said, I am associated with two schools, and one of them is the India School. Let me share very briefly what we have achieved.
Satish Babu:
The primary function of a school of internet governance is awareness building, capacity building basically. But beyond that first deliverable, what we have experienced is that in large countries like India, the school provides a platform for people from different parts to come together at one table. That is actually quite an enabling thing, because what the India School did, after two editions, was to start incubating the India Youth IGF. That has now completed six years, so it’s a mature organization now. The second thing we did was to associate with the GFCE, and Martin is here from the GFCE, and we started this GFCE Triple-I series of workshops. This is capacity building again, awareness building on cybersecurity-related norms and best practices and so on. Two weeks back, we finished the fourth edition of that workshop. So the school gives us a platform to take up new things which otherwise could not be taken up, because we are getting a lot of people from different backgrounds; that is the multi-stakeholder system itself. Unlike Brazil, which has CGI.br, we don’t have anything similar in India, and the same is true in many other countries. These schools actually provide the initial part of a multi-stakeholder model, without any mandate; nobody has given us a mandate, we have simply assumed it. The third thing the India School did was push for the India IGF, which had not been happening for the longest time. The school itself took the initiative, pushed the government, and got everybody together, and now we are in the third year of the India IGF. Again, that’s an achievement of the school. And another project which is going to come up as an action item, not awareness building, is that we’re going to start an India project measuring the quality of the internet. Again, that came out of the school. 
So a school is not just a school. It is actually an organization that can have much broader ramifications. So I’ll stop here.
Olga Cavalli:
Thank you, Satish. This is very interesting. I would like to share with you some evolution, talking about how the schools of internet governance are evolving. We started 15 years ago; our 15th edition just happened in September. Once we finish each school, we do a survey with the fellows and with the experts, and they started to ask for information before the week itself; they wanted to be better prepared. So for the last three years we have included a self-paced virtual training prior to the school. It lasts two months, not full time: three hours per week, with videos and material that we have produced ourselves; it’s not copy-paste. And then comes the school. What we did last year was partner with a university, so those students who have passed the evaluations of the first two stages, the virtual one and the hybrid one, whether they attend virtually or on site, can do a research project with the university and receive a university diploma in internet governance and regulations. None of this is paid for by the fellows; everything is free for them. In the Argentina School of Internet Governance, we have partnered with another university from Argentina, and this year we will offer certifications from Fortinet: we have some fellowships, I think around 40, for doing a certification in cybersecurity for free. So I think, as Satish said, it is very interesting that schools become a kind of vortex of activities related to internet governance at the national level. And lastly, at the beginning of this year, we did a survey with the students and produced a document for the Global Digital Compact, which is published in the Global Digital Compact. We did that with more than 80 fellows from all over the world, in three languages, Spanish, English, and Portuguese, always working online. 
And now, with the students this year, we’re working on a different document on how to enhance the multi-stakeholder model through the participation of fellows. I have the material; I have to work with the team to produce a document, but this will come in the near future. Maybe other schools would like to comment? Okay, perfect.
Satish Babu:
So we have roughly half an hour for discussions, after which we’ll have to wind up. We have several speakers already lined up. First is Olga: would you like to go again about the South School, or are you done? We have Wolfgang here from Euro-SSIG. Please, can you come over here, Wolfgang? The father of all the schools. Applause for the father. The internet governance father.
Wolfgang Kleinwaechter:
Thank you very much. As Swin said yesterday, it’s always suspicious if you clap your hands and give applause before someone has said anything, because probably I will say something you disagree with, which will not produce any applause. But I think it’s always good to remember where all this comes from, and it’s fantastic to see how a crazy idea has triggered a development where we now see so many schools inspired by the basic idea. As you remember, the World Summit on the Information Society in Tunis adopted a broad definition of internet governance, which included the evolution and the use of the internet. That means the technical layer and the application layer, which covers the so-called internet-related public policy issues. This goes from cybersecurity to the digital economy, to human rights, to artificial intelligence, and a lot of other things. And so the problem is that internet governance is such a multidisciplinary field that you cannot study it in a regular university; you have to study law or political science or informatics or cultural science or something else. So the idea of the founding fathers of this summer school was to find a format that would be realistic and allow this multidisciplinary presentation, from both a technical and a political perspective, from a practical and an academic perspective. I think this was the challenge, and so the pilot project was: why not use the format of a summer school, of a one-week course? Over the years, I think we have seen that this is really an interesting format because it’s very flexible. You can adjust it to special needs, special local needs, or special target-group needs, and you can have one week, or a one-year virtual course, or just a weekend course. But if you base the concept more or less on the broad definition from the Tunis Agenda, then you can pick some elements, but you are rooted in this process. 
I think this is also important for the self-understanding of the schools: that they contribute to a process inspired by the program of action and the principles of the World Summit on the Information Society. For instance, when we had the pre-conference here with Kyoto University, the Brazilian school presented its model and said: we have weekend courses, we have a full-year course, and you can also have courses for special target groups. So we are now discussing having a special course for parliamentarians, or for governmental officials, or for judges. I think we heard in the opening plenary that there was a judge from Africa who said: I’m the only judge here. So we have the legislative power, the parliament, and the executive, the government, but what about the third form of power, the judges? A lot of conflicts in the world of tomorrow will at some point go to a court, and if you have judges who have no idea what internet governance is, they will probably make stupid decisions. So judges are an important target group, and that’s why the format is a very good one, very flexible, and why it’s an encouragement for academics or other groups in many countries to take this as a source of inspiration. There is no single model. We started this in Meissen and it was testing things out; we were learning by doing. If you look at our first course in 2007, it’s so different from what we offer today. That means you have to be open to a changing environment. As you have seen here as well, issues like AI were not on the agenda 10 or 15 years ago, but now they are, and you have new questions about how all this is related, and confusing concepts: what is digital governance? Is it different from internet governance? Do we have AI governance, cyber governance? So there is growing confusion. 
And insofar, schools are important to bring more clarity to the processes, so that you avoid this confusion and chaos and we have a better understanding. It's work in progress; it will never finish. As Bill Clinton once said at an ICANN meeting, internet governance is like stumbling forward. That means small steps are better than big jumps. So be very careful, and be clear about what you want to achieve and who your target group is. I think these are the two or three key questions you have to ask yourself if you start to develop a program. And feel free, because the beauty of being an academic is that independence of thinking is important. Do not try to please somebody. Take your inspiration from the global community and ask: what is good for my country? What is good for my community? And then be proactive. Thank you. Thanks to you, Wolfgang.
Olga Cavalli:
Thanks so much.
Audience:
Bravo. Does anybody have any question? We have Andrea that wanted to speak. I want to speak about the evolution. Can I come and stand here? Of course, of course. Go ahead. I think this point is really interesting, to look at the evolution. And I agree with everything Wolfgang said; there is no perfect model. But some things, I think, are standing out. Firstly, there are a lot more other people [mic is off], other institutions that are delivering training. For example, UNESCO is training judges and the judiciary in internet-related policy, but they're not really part of this network. There's also quite a lot of training for regulators that is not part of this network. And I think the big difference is that they don't emphasize the multi-stakeholder approach to the same extent. What's unique about what we do is that even when we are focusing on a particular group of practitioners or professions, we always bring that diversity to the conversation. But the other thing that has struck me is that there's an increased need, and it came out a little bit at EUROSIG this year (you know, EUROSIG is my inspiration), around the social impact of the internet. I feel there's more demand now to not just learn how internet governance operates and who's involved in internet governance, but to have a deeper understanding of how we as an internet community respond to some of the social impact issues. So looking at misinformation, looking at education, looking at democracy, at political processes, looking at the media and how the media environment is affected. And I find this very challenging, because it is crossing over out of narrow internet governance and maybe even out of the broad definition, Wolfgang. But I think it's interesting to do that and to think about it. And the other thing I think that we might want to think about is sometimes taking the same cohort
Olga Cavalli:
of people and doing a follow-up. So, you know, doing, like, a EUROSIG or AFRISIG (I can't speak for the South School, sorry), but having a group of people like that and then maybe having the same people rather than a new group every year, so that you actually deepen the engagement. I don't know if we have the capacity to do that, but it is something that has occurred to me that might be quite a useful thing to do. I do think we need to evolve. Thank you, Anne-Marie. And we do have fellows that come to several schools. And that's very interesting because they evolve with the group. Not all of them. This year we had 400: 200 on-site and 200 virtual. And a group of them are coming to several schools, which is extremely interesting because they have seen the evolution. And some of them become speakers among the experts, or they start to work in companies or in governments and become experts in the next editions. And we have some names here, but I don't know if they're here. Muriel Alapin. Yeah, your name is listed here for one of the short interventions. Can you make a short intervention about Benin?
Audience:
And we spent the break for, I think we've got another half an hour. Oh, okay. Thank you. This is Ben Rashad-Sanosi from Benin. Can I go in French? Yes, I can. Yeah, but maybe I can also. If we have someone that can do what I've learned is called consecutive translation. But you have to be really slow, because my French is... Right, but no, we would need somebody to translate. He would speak a couple of lines and then someone... So if we have someone that volunteers as being good enough to do it. I can do it if he speaks slowly. I will do it in English, don't worry. Okay. I was wanting to try my French, okay. Okay. This is Ben Rashad-Sanosi from the Internet Society Benin Chapter. We also organize a School of Internet Governance. It was last September, from September 11 to 15, a five-day training. We had participants from Benin, from Togo, from Cote d'Ivoire, and also from Chad. About 32 people were trained. During the five days they had many sessions and a lot of training as well. It was really amazing because the Francophone region was engaged. They learned a lot about internet governance and how they can be engaged. And now some of the fellows are here on site, also attending the IGF, like me. Thank you. Thanks to you. Thank you very much. Bravo. Thank you also. We have Andrietta Abdelhaji. Oh, sorry. How do you pronounce it? Oh, that's difficult for me. Abdelhaji. Yes. Hi again. It's Abdeljalil Bashar from Chad. I'm the coordinator of the School of Internet Governance there. It's not national, but it is a School of Internet Governance. We founded it in 2019, funded by House of Africa. And the main objective is to bring Chadian ICT students, youth, and digital professionals closer to the global internet ecosystem, because, as you saw, there are not many Chadians, be it in ICANN, in the IGF, in the ITU, or in other ecosystems.
The main objective is also to fill the gap, as I said, observed over the years in terms of effective participation of Chadians in the policy development processes related to internet governance in national, regional, and international ecosystems. The first edition we organized in partnership: a civil society partnership with the government, through the national ICT agency called ADETIC. It was on 14 to 15 December 2020, in N'Djamena. We brought people from outside also: Sebastien from France, Tijani Ben Jemaa from Tunisia, Estelle from Cameroon, and some people from ICANN also; Yaovi did an intervention online, and some other people from Africa too. It was the first time that we organized this kind of school in Chad. It was very appreciated, on the political side also; the minister congratulated us. It's the first time that we teach people from this sector and from outside the sector as well. This year we had 15 participants from 35 entities: government ministries, parliamentarians, civil society, youth, et cetera. And this year we will organize our second edition. It will be from 6 to 8 December, in N'Djamena also. So we need your support, your contribution also. It can be online, it can be coaching the people also, because it's very important for us. I need to stop there. Thank you so much. Thank you, thank you very much. I'll do it in French also. I'll do it online, shall I do it now? Yeah, sure. I suggest the inclusion of children aged 10 to 18
Satish Babu:
in the emerging STEM that is trying to found their way to mathematics. Is it a good idea to start with a program from the start? Or you start there? Yeah, I didn’t understand. Sorry, I suggest, this is a comment from Abdullah Kamar who is an alumnus of PK SIG and AP SIG. I suggest the inclusion of children aged 10 to 18 in emerging STEM education programs. STEM, of course, is science, technology, engineering, and mathematics. Within the framework of internet governance, it is a forward thinking and crucial step towards achieving SDGs. By providing them with early exposure to STEM disciplines in the context of internet governance, we can foster a generation of digitally literate, responsible, and socially conscious individuals. Furthermore, teaching internet governance principles to young minds can instill values of online safety, digital ethics, and respect for human rights in the digital sphere, aligning with several SDGs that emphasize inclusivity, peace, and justice. Was that a question or a comment? It’s a comment.
Olga Cavalli:
Okay, I would like to comment on that. In the Argentina School of Internet Governance, we do have a lot of high school students attending the school, especially students in their last year, mainly from technical schools. And some of them are quite engaged after that and interested in following IT careers, and also in following online discussions about different internet governance issues. Not in the global one; there we have young people. We don't have age limits, but I don't remember having high school students, mainly young professionals. But in the Argentina one, yes,
Satish Babu:
we had a lot of high school students, which I think was very interesting. Yeah, we have something called the India Youth IGF, which also covers young people. Not in school, but above school age. So I think the floor is now open for any comments. Other comments? Yeah. Avri? I'd like to make a comment that's within this theme,
Avri Doria:
but more about the dynamic coalition itself and its role in doing that. I don't organize a school; I just go to a bunch of them. And one of the things is the dynamic coalition and its usefulness, both in having the schools communicate with each other, having the schools learn from each other, and also in doing the multi-stakeholder model. One of the things, for example, that we try, with a moderate degree of success and failure, both ways, is to have the dynamic coalition be extremely bottom-up. And basically, as those of you who follow it may have noticed, that means constantly begging people to say something, to do something. And when the dynamic coalition started, it wrote documents. It produced materials that hopefully could be useful to schools. And it may be the kind of thing that would be useful to look at again: whether there's something, for example, in the notion of the multi-stakeholder model, how it's seen, how it works, that would produce things that could help the schools themselves bootstrap programs in that. The dynamic coalition obviously cannot force any kind of learning or curriculum on anybody, but it can certainly make these things available to the other schools. So I'm just wondering whether that makes sense to people, that there is help in the dynamic coalition, and can we use it more for the schools? What would the schools want from a dynamic coalition to help them in teaching governance and understanding the multi-stakeholder model and the evolution of the model? Those are just things I was thinking about; I don't do a school, but I do dynamic coalitions. And I teach at schools. On the website where we have the map, I think we have the possibility of sharing information and documents. Yeah, that should be. Yeah, that's right, perhaps. That should be interesting for us to remember, even myself, because sometimes I forget.
Oh, yeah, we have a wiki space where any school that wants to, and some have, for example, the North American school, Glenn has been amazing in terms of contributing pieces of curricula and others, and it’s open for anybody to be able to do that. Yeah, let me see if I can bring that up while we keep talking. That document in three languages for the Global Digital Compact,
Olga Cavalli:
we are working now on a different document about multi-stakeholder model. Those and other things that we all may have, it’s a good space to share, because we have the map. We can see the whole map of all the schools, and maybe others can consult and share information. Honestly, it’s lack of time. It’s not lack of interest. I actually forget, because a part of this I have to work. This is not my work. Just a quick question for Avri. Sorry, Martin. It is lack of time.
Avri Doria:
Avri, what happened with the collaboration between the DC and the IGF Secretariat on the IGF capacity building? Yeah, that was last year. That was basically a one-time thing where they took the document that we had spent several years developing and produced there, and I guess you were, were you secretary, or were you chair at the time? I don’t remember, but basically, and produced a document of their own that I haven’t followed up to see how it gets distributed, whether it gets distributed, whether there’s been any feedback on it, and said, gee, nice document, but it would be great to have X, Y, and Z added to it. I have not followed up, and that might be a good thing to do in the next year is follow up with the secretariat and say, did you use this? Did how many schools came and got it? How many schools tried to use it? How many schools found it useful? How many schools didn’t find it useful, and why? So it’s a good idea as a thing to follow up on. Martin.
Audience:
Thanks. Maarten Botterman. Among the things I also support is, indeed, as was said beforehand, the Global Forum for Cyber Expertise, and there we have a track on Triple-I, on enhancing justified trust in the internet and email in the region by use of modern internet standards and global good practice. Now, if you search for GFCE Triple-I, you'll find there is a handbook that explains these modern internet standards and why they matter, and they also relate to some of the global good practices that you could refer to. So maybe this is also a resource that could end up on this page as a possible resource for schools. Basically, to teach it, you only need one person who understands the issues well enough to explain them, because the material is there. Thank you. Can you repeat the website? GFCE, for Global Forum for Cyber Expertise,
Avri Doria:
and then triple I.
Audience:
If you Google for that, you'll find it. Okay. By the way, the Global Forum for Cyber Expertise, which came out of the so-called London process, which the UK government started 15 years ago, will have a world conference on capacity building
Wolfgang Kleinwaechter:
at the end of November in Ghana, in Africa. And I think this is probably a good opportunity for our African colleagues to link to this. The Global Forum on Cyber Expertise had its roots in the issue of cyber security. This was also the main target of the so-called London process: to concentrate on cyber security, less on the broader internet-related issues. But cyber security is such a central thing, and I would recommend, in particular, our African friends
Satish Babu:
to make use of this opportunity in Ghana. It’s easy to get involved. It’s GC3B to Google on. You need to get an invite. Thanks, Martin. I think, for us, it’s been very useful. We’ve been running these workshops. This is the fourth one we just completed, and it’s been very useful. And it has even put us on a path towards an action item, a new project. I have two comments online. I’ll read them out. Both are from Keiko Tanaka. The first comment relates to the children. On the previous comment, dynamic teen coalition may be the place to go for focus on teens. It’s a new effort. Question, any chances of opening up education resources, or youth MOOC, or OER? I don’t know what OER is. I think it is online education resources. Open education. Open education. Oh, OK. OK. Do you want to respond to this question? I am a teacher. Oh, you are a teacher. I’m sorry. I’m sorry. I’m so sorry. Does anybody want to respond to this? I think that it’s really good that you are raising that. And I think UNESCO is actually reinvigorating
Audience:
a bit their open educational resources program. But I don’t see it coming to the IGF. And it’s not really coming to the IGF. So I think it’s actually important. There’s also some MOOCs that have been established. So in Africa, there’s a MOOC on internet governance training for journalists that was actually developed with also some support from UNESCO. If anyone’s interested, I can’t remember it right now. But I think that the thing about if we were to work on open educational resources, we would have to standardize. And I think that’s the challenge. Because with open educational resources, it kind of works if you use standardized templates and formats. Otherwise, you just have a repository. And in a way, the dynamic coalition already gives us that space for the repository. But it’s a good conversation to start. Thank you. If I can, I’m trying to share the website. And I don’t know whether it’s something that, yeah, OK.
Avri Doria:
So OK, it's up now, and I can go through it a little if people want, in just a few minutes. So there is a website that has been maintained. Basically, there are mailing lists and an archive. There's a section about DC meetings. There's the schools on IG; I think that's the one that may have the map. Yep, there's the map. Basically, each of the schools is offered, is requested (obviously, it's up to each of the schools whether they want to) to fill out a form, and you get your school with a marker on the map. And then you click on it, and you get the name of the school and some information. So that's good. There's a fellows section, where fellows who want to can put themselves in a list of fellows so that they can be found by others. They could perhaps be reached out to as possible teachers, et cetera. So if you're looking for a teacher for your school, especially remote, then it's a place to go and say, oh, OK, this person was a fellow, et cetera. Faculties: some of the faculty list each other. I don't know how many of us have listed each other, but we can, and at that point others can find pieces of faculty, members of faculty, not pieces of faculty. Somebody should teach me how to talk. Then we have a DC wiki that lists a lot of schools, listing when they were formed. There's current work. There's a place, let me see, where you can basically put: I'm looking for... Not for the form, OK. Then there's materials. It's still only one school that's done it, but there are the materials provided by schools participating in the DC. So any of you that are really proud of your curriculum, really proud of a course you put together, really proud of whatever it is about your school that you're willing to make public and available to the other schools: you've got it.
So the North American school here basically provides a whole set of individually provided materials: an operations manual, an introduction plan, recruitment. So there is already a rich collection of information there. It could be so much richer. It's purely a voluntary effort, but it could be so much richer if those of you that want what your school does to be visible and usable by others took advantage of it.
Sandra Hoferichter:
Since Avri is mentioning the wiki and the website: it's been up for a while already, and I find it a bit sad that not many have really made use of the wiki, because I think it would be a great source for a global network of faculty, of fellows, of schools, for exchanging material, et cetera. The point is, and I'm saying this here on the record on purpose, because it's getting difficult meanwhile: the wiki and the website and Avri's work are only supported by our association, which organizes the EuroSSIG, which is Medienstadt Leipzig e.V. So if anyone here in the room, any school, or any other organization has some leftover budget to support our work, that would really help us engage more in the dynamic coalition. Doing it as voluntary work is one thing, and everyone who is contributing to the dynamic coalition is contributing on a voluntary basis already. But at least a secretariat and the digital resources that we all need to work with need to be funded. And at the moment, Medienstadt Leipzig, the EuroSSIG, is the only source of funding; we are taking it basically from our EuroSSIG budget. I believe that many of the schools could dedicate a little bit; it doesn't need to be big money. I think if everyone contributes a little money, that would really help to maintain these resources and also help the dynamic coalition to move forward. I needed to say that, sorry.
Avri Doria:
Thank you. So we have one hand up there. Yes. Two things, one is a question, one is a comment. First question is, has there been an effort or an initiative or a talk about having a world school on internet governance or a global school on internet governance?
Audience:
I know there are regional ones, there are national ones. This network has expanded so much over the last years that maybe now is the time that all of us could pool resources and look at something like organizing a global school on internet governance. We are at the IGF; this is the global forum for discussion on IG. And this room is the room which develops capacities for internet leaders to come here and talk about these issues. But what about our own forum? I know this DCSIG is here, and we organize a session every year at the IGF. But if we cannot pool resources to organize a global school, can we leverage this particular DC: maybe organize more events around it, maybe have quarterly calls where the schools who are interested could share what they did, just to have more collaboration within the SIGs, rather than having, let's say, one meeting per year where we come down for one and a half hours and talk to each other. This is so inspiring, honestly, coming from Pakistan and seeing how other schools are doing it. And since this network is growing, I think this is an opportunity to actually leverage this potential and probably make something which is global, or at least cooperative within the SIGs. Thank you. Thanks, Waqas.
Satish Babu:
I think, for global schools, there are two people sitting on either side: this is the first one, this is the second one. Both of them have gone from regional to global, so these things are already in place. Now, the discussions here in the DC are at the meta level, where we don't do internet governance itself, but we talk about the things associated with it: how to run the school, what the constraints are, what the evolution is, and so on. So we should probably think about this kind of proposal that you have put up. One other thing I want to mention, in terms of evolution, is that the India School has put all their content from the first edition to the eighth edition on the website. So it's very interesting to look at the 2016 course content and then the later ones and see where it has taken us. There is actually a very clear journey that has happened in the course content over these eight years. We have five minutes more. Are there any burning questions?
Olga Cavalli:
I would like to make a comment. All the content that the school has is published on our YouTube channel in two or three languages. After each school, our team divides each of the panels or keynotes into different videos, clearly indicating who is talking, the topic, and the language. So all that content is available. Also, one good experience that we have had almost since the beginning is that with each group of students we create a Telegram group that stays active after the school and doesn't stop. Some members of our team are feeding it all the time with different fellowship opportunities, work opportunities, research, and news about internet governance. That has been working very well and keeps up the momentum among the groups of students. Thanks. I think we have to now think of winding up. So, Avri, would you like to make any closing statement?
Avri Doria:
Not really. I mean, I invite, I don’t know if our rapporteur wishes to make a quick summary statement that was listed on the agenda, but you don’t have to unless you feel comfortable. But because what we do have to produce is we’ll have to produce a statement or two that comes out of here. And if you have such a statement that you can give to the group, that would be great. You want the mic? Yeah. You can come here. Please. Thank you. Do we have a mic? We do. You want to go? Yes, go ahead. I’ll speak after. Hello. Hello. Hello, everyone. Excuse me, I’m going to speak in French for the linguistic diversity because we are in the last row. What? I can translate. No, no, no. If you speak slowly, I can translate. Yes, because we are in the IGF. All the countries are grouped here. I can speak in English, but my English is not very good. I prefer to speak in French.
Audience:
He will speak in French because his English is not so good. Yes, because in my country, the Ivory Coast, we speak French. So I would like to speak in French to be more comfortable. My name is Fanny Saliou. I am the country coordinator of the Internet Governance Forum. We are taking this opportunity to inform you that the Ivory Coast is organizing its first Internet Governance Forum, which will take place in the month of... He's from Cote d'Ivoire and you're organizing a first IGF? No, a school. A School of Internet Governance. Sorry, my French is limited. We had the opportunity this year to organize and host the West African Forum. So they are going to open the West African Forum? Yes, Cote d'Ivoire had the opportunity to host it this year. So you're hosting the West African IGF this year? Yes. You both helped me; that was too much for me. So Fanny is from Cote d'Ivoire. They are organizing their first School of Internet Governance in December. They need to bring all the people together, so they need the support of the DC, of all the people here. It can be a speaker, it can be financial, it can be human resources. That is what he needed to tell us here. He's very happy to be here. Thank you. Thank you so much. Me too, my English is very new, so at some point I was lost with my translator in my head. I have a few takeaways that I picked up and wrote down. Hello. Oh, excuse me. We're losing about two minutes because we are over time now. Yeah. The topic was the SIGs and the SDGs. I noticed that for SDG 5, on gender, Ms. Sandra from EuroSSIG shared with us the work her SIG is doing to be inclusive and to cover various thematics or topics in these SIGs. For SDG 7, on access to energy, Ms. Olga explained that access to energy and climate change are closely linked, so in her SIG they have had a few panels discussing this topic and the impact of energy consumption.
So excuse me if I have made some mistakes; I will write it down later. Mr. Alexander from the Russia SIG spoke about the SDG on peace and justice and said that the SIGs can help build new standards, help enforce the multi-stakeholder process, as in ICANN, and also reinforce inclusion and effectiveness. For SDG 5, Ms. Henriette from AFRISIG talked about the new era of AFRISIG, which is leadership development, and she said that this year they had a lot of parliamentarians who came to strengthen their knowledge about internet governance, and many of these fellows are here to discuss with the parliamentary track.
Olga Cavalli:
Thank you very much. Thank you, everyone. So, final words: I think we have to use the wiki more. Can you remind us of the URL? IGschools.net. IGschools.net. Go to IGschools.net, and we should contribute more. It's always a time issue; it's not a lack of volunteering, it's a time issue. Apart from the school, I do have to work; this is kind of a hobby. So thanks to everybody who came to the session, online as well as physically here. It was a very great session. We are now looking forward to working closer with the DC. Thank you. Thank you, everyone.
DC-IoT Progressing Global Good Practice for the Internet of Things | IGF 2023
Wout de Natris
Summary: The analysis of IoT security policies across different countries revealed some significant findings. Firstly, there is a noticeable gap in the policy framework for IoT security, particularly in many countries of the Global South. This suggests that these countries lack comprehensive guidelines and regulations to address IoT device security challenges. Additionally, national policy practices for IoT security often differ significantly from those of other countries, indicating a lack of alignment and standardization.

The study highlights the importance of implementing accountability frameworks throughout the IoT device lifecycle. The complexity of IoT security requires a comprehensive approach that considers factors such as data privacy, cybersecurity, and standards. Governments are urged to prioritize security by design during hardware and software procurement to enhance security standards. A lack of user awareness about data privacy implications necessitates improved education and awareness campaigns, and data security standards are recommended to protect against abuse and misuse of data.

The analysis raises concerns about the future implications of data insecurity, emphasizing the need for proactive action to address IoT security challenges. These findings provide insights for policymakers and stakeholders in developing robust IoT security strategies and frameworks.
Mark Carvell
The discussion centred around key topics related to the Internet of Things (IoT) and its impact on society. One important point raised was the necessity for a universal labelling scheme for IoT devices to ensure harmonisation and clarity for consumers. The argument posited was the need for a standardised labelling system that enables easy identification and comprehension of IoT products, especially as individuals increasingly travel with their devices. The sentiment surrounding this topic was neutral, reflecting concerns without strong opinions expressed.
Another topic of discussion was the role of public administrations in IoT applications, particularly in addressing government concerns about security. The question was raised regarding how IoT applications can meet government security requirements, given the interactions between governments and citizens. This inquiry underscored the significance of striking a balance between innovation and security in IoT technologies. The sentiment surrounding this topic was also neutral, highlighting the need for further exploration and understanding.
Ethical considerations in the development of IoT systems and networks were also emphasised during the discussion. The unpredictability factor associated with IoT development was addressed, and developers were encouraged to ensure that their systems and networks are developed ethically. This topic generated a positive sentiment, indicating a belief in the paramount importance of ethical innovation in the IoT industry. The sentiment reflected an acknowledgement of the potential ethical challenges posed by the rapid advancement of IoT technologies.
Lastly, there was an encouragement for the dynamic coalition to utilise the EuroDIG platform for advocacy purposes. The EuroDIG platform was described as having an open call for issues, and a forum was scheduled to take place in Vilnius in June. The sentiment surrounding this topic was positive, indicating a belief in the effectiveness and value of using the EuroDIG platform for advocacy.
In conclusion, the discussion covered a range of important topics related to the IoT and its societal impact. These topics included the need for a universal labelling scheme, the role of public administrations in ensuring security, ethical innovation in IoT development, and the value of using the EUDIG platform for advocacy. It is evident that there are various considerations and challenges associated with implementing and developing IoT technologies, and further exploration and collaboration are necessary to effectively address these issues.
Barry Lieber
Security for the Internet of Things (IoT) is a multifaceted and intricate issue, encompassing factors such as authentication, confidentiality, and data integrity. Barry, an expert with almost 25 years of experience in the field, emphasizes the importance of prioritising IoT security. To fully comprehend and address this issue, it is necessary to break it down into various components.
The integration of different sources is paramount in realising the full potential of the IoT. The seamless communication and collaboration among diverse devices, such as cars, houses, and calendars, serve as prominent examples of how integration enhances the IoT experience. However, the complexity of maintaining this integration while ensuring security and privacy presents a significant challenge.
Authentication is one aspect of IoT security that requires careful consideration. With numerous devices exchanging information and interacting within the IoT, it is crucial to establish secure methods of verifying their identities. This helps prevent unauthorised access and malicious activities, safeguarding the overall IoT ecosystem.
Confidentiality is another significant factor in IoT security. As vast amounts of sensitive data are transmitted and processed within the IoT, protecting this information from unauthorised disclosure is imperative. Implementing robust encryption protocols and secure data storage mechanisms becomes crucial to maintaining confidentiality and safeguarding user privacy.
Data integrity plays a pivotal role in IoT security as well. With the vast quantity of data being communicated and processed within the IoT network, it is essential to ensure its accuracy, consistency, and reliability. Implementing mechanisms for data validation, verification, and error detection is vital to maintain the integrity of the information exchanged within the IoT environment.
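The authentication and integrity requirements described above can be sketched in a few lines. The following is a minimal illustration using Python's standard-library `hmac` module with a hypothetical pre-shared key; it is not a description of any particular IoT stack, and real deployments would provision per-device keys or public-key certificates rather than a hard-coded secret:

```python
import hmac
import hashlib

# Hypothetical pre-shared key, provisioned on the device (illustration only).
DEVICE_KEY = b"example-per-device-secret"

def sign_message(payload: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can check origin and integrity."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify_message(message: bytes, key: bytes = DEVICE_KEY):
    """Return the payload if the tag is valid, else None (tampered or wrong key)."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    # compare_digest avoids timing side channels when checking the tag
    return payload if hmac.compare_digest(tag, expected) else None

msg = sign_message(b'{"sensor": "thermostat-1", "temp": 21.5}')
assert verify_message(msg) == b'{"sensor": "thermostat-1", "temp": 21.5}'
tampered = b"X" + msg[1:]          # flip a payload byte
assert verify_message(tampered) is None
```

Note that this covers authentication and integrity only; confidentiality would additionally require encrypting the payload, for example with an AEAD cipher, which is outside the Python standard library.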
The analysis of the various supporting facts and arguments highlights that security is not merely a buzzword in the IoT landscape. The inherent complexities involved in integrating diverse systems while maintaining security and privacy underscore the challenges faced in fully harnessing the potential of the IoT. The insights gained from this analysis underscore the need for ongoing research, development, and implementation of robust security measures to address the complexities and mitigate the risks associated with IoT security.
In conclusion, security for the Internet of Things is a multifaceted and complex issue that necessitates attention to various factors such as authentication, confidentiality, and data integrity. The integration of different sources is crucial in unlocking the true potential of the IoT, but it also poses challenges in maintaining security and privacy. With the rapid expansion of the IoT landscape, it is imperative to invest in developing and implementing robust security measures to safeguard the IoT ecosystem and protect user information.
Elaine Liu
The speakers in the discussion agree that IoT (Internet of Things) should have different policies and guardrails depending on the use cases involved. They argue that considering the diverse range of data collection in IoT, which can vary from consumer to organizational to agency levels, it is vital to establish suitable policies that address the specific needs and risks associated with each use case. This approach recognizes the importance of tailoring regulations to the unique characteristics and requirements of different IoT applications.
Furthermore, the speakers emphasize the significance of taking into account the entire value chain when setting guiding principles for IoT. They highlight that hardware, software, operating systems, and data analytics all play crucial roles in the IoT process. By considering the entire value chain, policymakers can develop comprehensive and effective guidelines that address various aspects of IoT implementation, ensuring its smooth and secure operation.
These discussions align with SDG 9: Industry, Innovation, and Infrastructure, which emphasises the need to foster sustainable industrialisation, promote research and development, and enhance access to information and communication technologies. IoT is a key aspect of Industry 4.0 and digital transformation, and thus, setting appropriate policies and guidelines for IoT corresponds to addressing the goals and targets outlined in SDG 9.
The speakers’ arguments are supported by the evidence provided throughout the discussion. They acknowledge the complexity and diversity of IoT applications and the need for tailored policies to manage the risks associated with each use case. Additionally, they emphasise the interconnected nature of the IoT value chain, where hardware, software, operating systems, and data analytics all contribute to the overall functionality and performance of IoT systems. Therefore, their arguments are well-grounded and offer valuable insights for policymakers and stakeholders involved in IoT governance.
In conclusion, the speakers advocate for the development of different policies and guidelines for IoT based on its specific use cases. They also stress the importance of considering the entire value chain, encompassing hardware, software, operating systems, and data analytics, when setting guiding principles for IoT. These discussions align with the objectives of SDG 9 and provide valuable insights into the complexities and requirements of IoT governance.
Alejandro Pisanty
The analysis reveals several key points related to the consumer Internet of Things (IoT) and its impact on security, industry, and infrastructure.
Firstly, consumer IoT devices are causing significant concern regarding security. It is essential to identify the entities that are leveraging IoT to exert power. These entities may include individuals, organisations, or even governments. Identifying these entities is crucial to establish accountability and take necessary security measures to protect against potential breaches or attacks.
Secondly, the development of consumer IoT is primarily driven by small companies. These companies often produce and sell IoT devices at very low prices, making them accessible to a wide range of consumers. However, this also creates challenges in terms of security awareness and compliance. Consumers may not be fully aware of the need to secure their devices or the potential risks associated with them. Additionally, the affordability of these devices means that they may not undergo rigorous security testing or meet established standards.
Furthermore, the deployment of consumer IoT devices poses challenges to openness, interoperability, and core internet values. Different technologies and standards are used for communication between these devices, making it difficult to establish the necessary interoperability and ensure seamless connectivity. This can lead to fragmented systems and hinder the growth and development of IoT applications. Additionally, the increased deployment of these devices expands the attack surface for everyone. With numerous connected devices, the potential for vulnerabilities and cyber-attacks increases, posing a threat to individual privacy, data security, and overall network integrity.
Moreover, the sale of many IoT devices occurs outside the oversight of national standardisation bodies. This means that these devices may not adhere to specific standards or regulations, raising concerns about their compliance and quality. The lack of standardisation can lead to compatibility issues and hinder collaboration and innovation in the broader IoT ecosystem.
In conclusion, the analysis highlights the urgent need for enhanced security measures, awareness, and standardisation efforts in the consumer IoT sector. It is vital to address the security concerns surrounding these devices, identify the entities responsible for IoT deployments, and ensure that consumers are informed about the importance of securing their devices. Additionally, industry stakeholders should collaborate to establish common technological standards and guidelines to promote openness, interoperability, and cybersecurity in the consumer IoT realm. By doing so, the potential of IoT can be fully realised while simultaneously safeguarding privacy and ensuring the integrity of connected systems.
Sandoche Balakrichenan
The presentations on IoT emphasized the significance of interoperability, scalability, and zero trust. It was argued that these features are essential for the success of IoT. The domain name system (DNS) was proposed as a potential solution for IoT-based identity and access management in a zero-trust environment. DNS is widely used for communication by internet users and can potentially be used for IoT as well, enabling secure and controlled access to IoT devices and systems.
LoRaWAN, regarded as one of the most constrained networks in IoT, was highlighted as an ideal testing ground for the concept of interoperability, scalability, and zero trust. The successful implementation of this concept with LoRaWAN could potentially be applied to other IoT networks and devices.
AFNIC, a prominent organisation, is developing a dynamic identity management system based on DNS. The aim of this system is to enable interoperability among various types of identifiers such as RFID and barcodes, facilitating efficient and effective management of identities within the IoT ecosystem.
The use of DNS and DANE (DNS-based Authentication of Named Entities) was discussed as a way to eliminate the need for a certificate authority ecosystem. This approach, combined with the successful tests of TLS 1.3 and ongoing efforts to add privacy features, highlights the potential of DNS and DANE to achieve dynamic, scalable, and zero trust capability in IoT.
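As an illustration of the DANE idea mentioned above, the sketch below computes the certificate association data for a TLSA record with selector 0 and matching type 1 (SHA-256 over the full certificate) and compares it with a published value. The certificate bytes and function names are placeholders; a complete check per RFC 6698 also involves the certificate usage field and DNSSEC validation of the record itself:

```python
import hashlib

def tlsa_matching_data(cert_der: bytes) -> str:
    """Certificate association data for a TLSA record using matching type 1
    (SHA-256) over the full certificate (selector 0)."""
    return hashlib.sha256(cert_der).hexdigest()

def dane_check(cert_der: bytes, tlsa_record_data: str) -> bool:
    """A client compares the digest of the certificate presented in the TLS
    handshake with the value published in DNS (authenticated via DNSSEC)."""
    return tlsa_matching_data(cert_der) == tlsa_record_data.lower()

# Placeholder bytes for illustration; a real check uses the DER-encoded
# certificate obtained from the TLS handshake.
fake_cert = b"-- placeholder DER bytes --"
published = tlsa_matching_data(fake_cert)  # what the operator would publish
assert dane_check(fake_cert, published)
assert not dane_check(b"attacker certificate", published)
```

The appeal for constrained IoT devices is that trust anchors live in the (DNSSEC-signed) DNS rather than in a large bundle of certificate-authority roots.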
The presentations also touched upon the collaboration between the supply chain industry and IoT, particularly in relation to GS1 devices such as barcodes and RFID. This collaboration highlights the integration of technology systems with the supply chain industry, fostering innovation and enhancing efficiency.
Furthermore, the speaker mentioned the use of LoRaWAN with MAC IDs, showcasing an alternative approach to identification beyond traditional names and IP addresses. This demonstrates that concerns in IoT extend beyond conventional methods and require exploration of new and diverse approaches.
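One common way identifiers such as a LoRaWAN EUI can be mapped into the DNS, in the spirit of the AFNIC work described above, is to expand the hex digits of the identifier into reversed per-nibble labels under a suffix, analogous to the `ip6.arpa` reverse-DNS convention. The suffix below is hypothetical and the exact scheme may differ from AFNIC's; this is only a sketch of the pattern:

```python
def eui_to_dns_name(eui_hex: str, suffix: str = "example-iot-ids.example") -> str:
    """Expand a 64-bit EUI (16 hex digits, dashes allowed) into reversed
    nibble labels, one hex digit per DNS label."""
    nibbles = eui_hex.lower().replace("-", "")
    if len(nibbles) != 16:
        raise ValueError("expected a 16-hex-digit EUI-64")
    # Reverse so the most specific nibble comes first, as in ip6.arpa.
    return ".".join(reversed(nibbles)) + "." + suffix

name = eui_to_dns_name("00-11-22-33-44-55-66-77")
assert name == "7.7.6.6.5.5.4.4.3.3.2.2.1.1.0.0.example-iot-ids.example"
```

Once an identifier resolves to a DNS name, ordinary DNS records (and DNSSEC) can carry pointers to the device's join server, keys, or metadata.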
In conclusion, the presentations underscored the importance of interoperability, scalability, and zero trust in IoT. The potential application of DNS for IoT-based identity and access management, the development of a dynamic identity management system by AFNIC, and the use of DNS and DANE to eliminate the need for a certificate authority ecosystem were notable insights. The collaboration between the supply chain industry and IoT, as well as the exploration of alternative identification methods such as LoRaWAN with MAC IDs, further exemplify the dynamic nature of IoT and the need for innovative solutions.
Dan Caprio
In a recent discussion on the Internet of Things (IoT), it was highlighted that there is a significant power asymmetry between consumers and their understanding of IoT. This issue has been observed not only in the United States but also in other parts of the world.
To address this, the US government has launched an ongoing effort aimed at bringing consumer labelling to the IoT. This initiative is being carried out through a public-private partnership, with the Federal Communications Commission (FCC) being responsible in the US. The aim is to ensure responsible consumption and production in the IoT sector, in line with SDG 12: Responsible Consumption and Production.
This labelling scheme would involve putting labels on IoT device packaging, providing consumers with information about the level of security offered. This proposed labelling system is seen as a means to empower consumers by giving them the necessary information to make informed choices and protect themselves in the rapidly growing IoT landscape.
Furthermore, having consumer labels on IoT devices could also facilitate international harmonisation. The idea is that these labels could pave the way for global standards and interoperability in the IoT industry. This notion aligns with Vint Cerf’s view on the importance of standards and interoperability in the IoT ecosystem.
However, it is important to note that the US consumer label for IoT is still in its early stages. The FCC announced this initiative in August, but it will not take effect until at least the end of next year. Therefore, additional work is required to develop and implement a comprehensive labelling system that effectively serves the needs of consumers.
During the discussion, it was suggested that the Internet Governance Forum (IGF) should play an active role in addressing this issue. It was acknowledged that raising awareness and fostering dialogue around consumer labelling in the IoT is a crucial step towards ensuring responsible and secure IoT adoption. It was proposed that the IGF, along with regional IGFs, should include this topic in their agendas and actively engage stakeholders in finding effective solutions.
Overall, the discussion emphasized the need for consumer empowerment and protection in the IoT sector. The ongoing efforts in the US to introduce consumer labelling and the potential for international harmonisation through such initiatives are promising steps in the right direction. However, more work needs to be done to ensure that a comprehensive and effective labelling system is developed and implemented. The active involvement of the IGF and its regional counterparts can significantly contribute to addressing this issue and promoting responsible IoT practices.
Vint Cerf
The speakers in the analysis delve into various crucial aspects of the Internet of Things (IoT). They highlight the importance of standards and interoperability in order to ensure that devices from multiple manufacturers can effectively work together. This is crucial for the IoT to reach its full potential as it allows for seamless communication and integration between devices. It also enables consumers to configure their IoT devices in a way that is useful and tailored to their specific needs. The argument put forth is that without standards and interoperability, the IoT ecosystem would be fragmented and hindered by compatibility issues.
Another key point discussed is the need for secure and upgradeable operating systems for IoT devices. The speakers emphasise that every IoT device will require an operating system, and with that comes the need for regular updates and bug fixes. The argument is made that these updates are necessary to address vulnerabilities and ensure the overall security of the devices. Without secure and upgradeable operating systems, IoT devices are at risk of exploitation by malicious actors.
The speakers also stress the significance of strong authentication, cryptography, and digital signatures in the context of IoT devices. They argue that these measures are crucial for ensuring trusted communication between devices. The speakers assert that IoT devices need to have a strongly authenticated identity and must also be aware of what other devices they are allowed to communicate with. By implementing cryptography and digital signatures, IoT devices can authenticate and verify the integrity of the data being exchanged, reducing the risk of unauthorized access or tampering.
Additionally, the scalability of configuration management and control for IoT devices is highlighted. The speakers note that in residential settings, the number of devices could easily reach the hundreds, while in industrial settings, it could be in the thousands. They argue that effective configuration management and control systems need to be in place to handle the sheer volume of devices and ensure efficient and reliable operation.
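The allow-list and fleet-scale configuration points can be made concrete with a small sketch. The class and identifiers below are illustrative, and a real controller would pair the access control list with the strongly authenticated identities discussed above:

```python
# Minimal sketch of per-device allow-lists managed at fleet scale.
class Device:
    def __init__(self, device_id, allowed_peers=()):
        self.device_id = device_id
        self.allowed_peers = set(allowed_peers)

    def accept_command(self, sender_id, command):
        # Commands from peers not on the ACL are dropped, never executed.
        if sender_id not in self.allowed_peers:
            return f"rejected: {sender_id} not on ACL"
        return f"executed: {command}"

# One controller entry fans out to a large fleet (thousands in industry).
fleet = {i: Device(f"lamp-{i}", allowed_peers={"hub-1"}) for i in range(1000)}
assert fleet[42].accept_command("hub-1", "on") == "executed: on"
assert fleet[42].accept_command("rogue-device", "off").startswith("rejected")
```

The design point is that the ACL is data pushed by a management system, not logic baked into each device, which is what makes configuration tractable at the scales mentioned.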
However, one speaker expresses a negative sentiment towards voice recognition as a control method for IoT devices. They highlight concerns regarding the reliability of voice recognition technology, as it is not 100% accurate and can lead to frustration for users. Moreover, there is the possibility of misuse, where unauthorized individuals could gain access to IoT devices by mimicking the owner’s voice. This raises security concerns and questions the reliability of voice recognition as a viable control method for the IoT.
In a somewhat unrelated observation, the analysis briefly mentions Vint Cerf’s extensive wine collection in his house, with approximately 3,000 bottles. It is suggested that the next owner of his house will have the responsibility of managing this impressive collection.
In conclusion, the speakers emphasize the importance of standards, interoperability, secure operating systems, strong authentication, cryptography, and digital signatures in the world of IoT. These elements are seen as crucial for the successful deployment and operation of IoT devices. Additionally, the scalability of configuration management and control systems is acknowledged as a critical factor in managing a large number of IoT devices. It is important to carefully consider the control methods used for IoT devices, as voice recognition may not be the most reliable option due to its limitations and potential for misuse.
Hiroshi Esaki
According to experts, the correct functioning of artificial intelligence (AI) relies heavily on trustworthy data. AI does not have its own algorithm; instead, it requires reliable data to provide accurate and insightful results. This emphasizes the importance of data quality and integrity in AI systems.
In the business field, IoT devices are increasingly prevalent across various industries, including agriculture. These devices offer numerous benefits, such as improved efficiency, increased productivity, and enhanced decision-making. However, to fully leverage the potential of IoT, there is a need for good ownership, responsibility, and authentication. This ensures that the devices are used ethically and securely, protecting sensitive data and mitigating potential risks.
The evolution of IoT into the Internet of Functions (IOF) brings a paradigm shift from traditional cloud computing systems. With IOF, functions can be transferred and executed anywhere over the internet. This opens up new possibilities for decentralized and distributed systems, enabling greater flexibility and scalability in IoT networks.
One critical aspect of the IoT ecosystem is the security of devices. To ensure secure and safe IoT deployment, scalable systems for labeling or certification are needed. This helps in identifying and verifying the authenticity and integrity of IoT devices, making it easier for users to trust and rely on them.
A noteworthy observation is the increasing importance of zero-trust capability in IoT devices. This means that every single device must have built-in security measures that continuously verify and authenticate network connections. By adopting a zero-trust approach, the IoT ecosystem can provide a higher level of security, protecting sensitive data and preventing unauthorized access.
Furthermore, IoT devices and the data they produce can make a significant contribution to carbon neutrality and decarbonization efforts. These devices, along with the concept of digital twins, enable better monitoring and management of resources, leading to more sustainable practices and reduced environmental impact.
Additionally, internet security is a crucial element that should be considered in the IoT ecosystem. It should be end-to-end, starting with individual users taking responsibility for protecting their network. Traceability and interoperability play a vital role in ensuring secure internet operation, and efforts are being made worldwide, including in Japan, to provide users with traceability features.
In conclusion, the future use of IoT devices is expected to evolve beyond their original purposes. These devices have the potential to transform industries, improve efficiency, and enable innovative applications. However, realizing the full potential of IoT requires addressing critical areas such as data quality, device security, and internet security. By doing so, we can create a more reliable, secure, and sustainable IoT ecosystem.
Jonathan Cave
The Internet of Things (IoT) is described as a complex adaptive system that produces things that are yet to be imagined. This system consists of connected devices that work together to create complex functions, even though these functions may not have well-defined or objectively defined definitions. The IoT has the potential to revolutionize various industries and aspects of our lives through its interconnectedness.
However, privacy concerns arise when it comes to the IoT. These devices have the ability to collect vast amounts of personal and private information from their users, regardless of whether it is relevant to their nominal functioning or design. The collection of such data raises questions about the privacy of data, devices, and their functions within the IoT context.
Another aspect to consider is the impact of IoT devices on human behavior. For instance, when people use smart speakers, they begin to trust them to deliver content, thereby giving these devices a power they did not originally have. This trust implies that IoT devices are not just sensors but also actuators, with the ability to reprogram their users’ behavior, understanding, and attention.
The interaction between individuals and IoT devices also calls for a reshaping of ethical frameworks. As the operation of these devices and systems changes people’s behavior, understanding, and attention, there is a need to align our ethical frameworks with the evolving nature of individual and collective psychology in relation to IoT devices.
Additionally, the concept of data ownership is being reconsidered in the context of the IoT. It becomes necessary to resurrect the notion of data ownership so that people can be held responsible for their actions and the functioning of these systems. This is crucial in maintaining accountability and ensuring that individuals take ownership of their data and its usage within the IoT ecosystem.
Furthermore, ethical reflection, consideration, and control are fundamental when it comes to IoT devices. The ethical implications of these devices should be thoroughly assessed and addressed, with due consideration given to the potential consequences on individuals and society as a whole. This involves scrutinizing IoT projects for their ethical considerations and the application of legal mechanisms to make control measures more predictable.
Overall, keeping the conversation open on ethical considerations and control issues is of utmost importance. The emergence of new problems within the IoT ecosystem requires a collaborative approach, as no single party can perceive and address all the challenges alone. Simply ticking the ethical box at the beginning of a project and leaving it to lawyers is not enough. Ongoing ethical reflection and open discussions are essential to ensure that the ethical implications of IoT devices are adequately addressed and controlled.
Sarah T. Kiden
In the realm of the Internet of Things (IoT), power imbalances exist, calling for accountability and responsibility measures. These imbalances may arise during the design or research phase. Concerns are raised about the lack of consumer influence on future IoT deployments, leading to a need for empowering consumers.
To address these issues, collecting user stories on the harms caused by IoT devices can guide the creation of design guidelines and influence policy changes. Organizations like the Algorithmic Justice League, Data & Society, and Amnesty International have begun documenting AI harms, providing evidence to sway policymakers in the right direction.
Overall, the analysis highlights the presence of power asymmetries in the IoT ecosystem and underscores the importance of accountability and responsibility measures. Empowering consumers and involving them in shaping the future of IoT deployments is crucial. Furthermore, gathering user stories and documenting the harms caused by IoT devices can serve as valuable evidence for influencing policy changes and creating design guidelines. This comprehensive summary emphasizes the significance of addressing power imbalances and promoting responsible practices in the IoT industry.
Avri Doria
During the session, it was mentioned that no questions had been received online thus far. However, the speaker kindly invited participants to submit any questions through the chat or QA function. The audience was asked to keep their questions brief since only 15 minutes remained in the session due to the amount of content covered in the first part.
This demonstrates the speaker’s willingness to engage with attendees and provide valuable insights. Despite the lack of questions at that point in the session, it emphasized the importance of participant engagement to enhance the overall learning experience.
In conclusion, the speaker encouraged participation by inviting individuals to submit their questions through the chat or QA function. This call for engagement highlighted the significance of participant interaction in shaping the session and allowing for a more enriching learning experience.
Maarten Botterman
The Internet of Things (IoT) is a global technology that offers new opportunities to address challenges and is adapted and developed globally. It has the potential to revolutionize society by improving efficiency, decision-making, and connectivity through device communication and data exchange. The IoT is seen as a necessary technology with positive sentiment.
The argument for the IoT is that it can ethically address societal challenges by deploying systems in disaster-stricken regions and rural areas. It requires the involvement of all stakeholders and acknowledges the varying challenges across different regions. Sustainability and inclusivity are emphasized, with a focus on creating accountable ecosystems.
However, the adoption of the IoT also presents challenges such as new risks and the potential weaponization of technology. Legal clarity and regulation are necessary for IoT investment and development, and procurement practices can improve security. It is important to take proactive measures and implement self-certification and DNS for enhanced security.
Different networks and the use of DNS for interoperability and scalability are considered. AI also comes with risks, but the potential benefits justify them. Informed consent, labeling, and change management are emphasized to inform people about risks and adapt to the fast pace of change in the IoT space.
In conclusion, the IoT has the potential to address challenges ethically and create sustainable ecosystems. Legal clarity, regulation, and proactive measures are needed to address risks. Different networks and DNS can improve interoperability and scalability. Informed consent, labeling, and change management are important considerations for successful implementation.
Session transcript
Maarten Botterman:
It’s, it’s, it’s, it’s set to launch, but it’s, anyway. Can you put the, ah, yeah, that’s good. And Jonathan is now on the line. Good morning, everybody. Good morning. Welcome to this. Good morning. Good morning, Jonathan. Welcome to this session of the Dynamic Coalition for the Internet of Things. I’ll give a short introduction to get us all up to speed on what this is about. And then we’ll, we’ll dive into the panel discussion with a couple of introductions. And everybody’s invited to participate. If you have clarifying questions, we’ll take those earlier and discussion is for after the contributions. So with that, I’d like to see the slides. Please start posting the slides. I need to do it from the slide room? Okay. On the desktop. The blue one here, for me is stretch. Okay. I can see it, yeah, we’re online. So the Internet of Things is talking, the Dynamic Coalition is really talking about how to get to global good practice on the Internet of Things, a development that has been progressing over many years. The Internet of Things, for all clarity, is a technology that we need. And it comes with benefits as well as with challenges, like all new technologies. And it offers opportunities to respond to today’s challenges in ways that were never possible before. Yet it comes with new ones. And just a reminder, preempting any discussion, technologies are not the ones that are good or bad, it’s the way we use them. Particularly, we need them for addressing societal issues also on global level, across borders. And this is a global technology that is adapted globally and is developed globally and adopted locally. So it requires sharing global knowledge about solutions, as well as local knowledge about what needs to happen and action to make things happen, to go beyond talking about it. There’s many different applications. And just to give a little bit impression of the width, the buoy you see is a tsunami buoy and it’s connected and it measures the waves. 
So this gives the people at the coasts of vulnerable areas just a half hour extra to get away from the coast when necessary. Under that, you see a little sensor that can be part of a body, a visual sensor; your blood pressure changes and it will warn you, well, your blood pressure is going up, maybe lie down and call somebody to rescue you because there may be a heart attack imminent. Just above that is an in-room carbon monoxide measurer. You can see there's a lot of different applications, ranging from wildlife tracking to autonomous systems that manage networks of roads around busy cities. I'm going the wrong way. So we talk about a global approach towards IoT at this global IGF. We've been talking about it in regional IGFs more focused at the region, and that has brought a lot of insight also that global solutions aren't always the best locally or regionally. IoT for us is merely a specific aspect of the internet, just like social media, communication, access to information. And it does link to AI, it does link to big data. It generates data, it uses data. Specific characteristics that co-determine the development of future networks include, in particular, the collecting, storing, and providing access to many data related to observations by sensors. It's autonomous networks with actuators that take action following receipt of specific data from sensors, taking pre-programmed decisions or learning from the data, and AI is a clear component that adds to that development and what it can do. IoT is also, because it's physical as well, something that you can actually weaponize, whether it's domotic devices or other IoT devices, to attack third parties, and that is something to be aware of. So these specifics make a difference. The Dynamic Coalition was set up in 2008, so we celebrate our 15th year, and we have been active ever since, also in regional meetings. And as said, the aim is to develop global good practice. 
The dialogue is about meeting multi-stakeholders on equal terms at the global level. The principle we currently have, always subject to review, is to take ethical considerations into account from the outset, and to find an ethical, sustainable way ahead using IoT to create a free, secure, and enabling rights-based environment: the future we want. For the sake of time, I would like to introduce our first speaker today. We have both grown older; this is 2016. It relates to the fundaments of the internet. I'm very happy to have Vint Cerf speak here on how that relates to IoT and how it fits into the vision for the future as well.
Vint Cerf:
Well, thank you all very much for the invitation to join you. I will have to scoot very quickly because I have a leadership panel meeting to run at nine o'clock, so my normal one-hour rant will have to be curtailed. The headline that I want to avoid is "100,000 Refrigerators Attack Bank of America." Unfortunately, we've already had headlines similar to that; the Dyn attack launched from webcams is a good example. So the first point I want to make is that standards and interoperability are really critical here. We want multiple manufacturers' devices to interwork, to have compatible kinds of control models, so that as consumers of these devices we can acquire and configure them in a way that's useful. The second thing is that every one of these devices is going to have to have an operating system in it, and we had better insist that the operating systems both be as secure as possible and be updatable, because there will be bugs and they need to be corrected. So the device in situ needs to be upgradable to correct vulnerabilities or to add new functionality. Strong authentication is absolutely critical for the use of IoT devices. At the point where you are provisioning the device, putting it into use, it needs to have a strongly authenticated identity which can be validated remotely. It also needs to know what other devices it's allowed to talk to. So we should insist that the device be provisioned to know how to validate an incoming query or an incoming command from another device, so that it is not subject to takeover by an unauthorized party. Once again, strong authentication and the use of cryptography and digital signatures will be our friends here. The device should have a limited access control list of parties it will listen to; all others it should ignore.
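The command-validation pattern Cerf describes, a device accepting commands only from an allow-list of strongly authenticated peers, can be sketched as follows. This is a minimal illustration, not his proposal: the peer names and keys are invented, and an HMAC tag stands in for the digital signature a real deployment would verify with a peer's public key.

```python
import hashlib
import hmac

# Illustrative per-peer keys; a real device would store peers' public keys
# and verify asymmetric signatures instead of shared secrets.
ALLOWED_PEERS = {
    "thermostat-01": b"key-for-thermostat",
    "hub-main": b"key-for-hub",
}

def verify_command(peer_id: str, command: bytes, tag: bytes) -> bool:
    """Accept a command only if the sender is allow-listed and its
    authentication tag verifies; everyone else is silently ignored."""
    key = ALLOWED_PEERS.get(peer_id)
    if key is None:
        return False
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tag = hmac.new(b"key-for-hub", b"lights:off", hashlib.sha256).digest()
assert verify_command("hub-main", b"lights:off", tag)       # authorized peer
assert not verify_command("webcam-99", b"lights:off", tag)  # unknown peer
```

With public-key signatures in place of the shared secrets, the device never holds a secret it shares with its peers, which is closer to the provisioning model Cerf has in mind.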
There’s a scaling issue here because the number of devices that you might have in a residence could number in the hundreds in the long term if every light bulb has its own control, for example. And in an industrial setting, we could be talking about thousands of these devices. So configuration management and control needs to be scalable. You don’t wanna spend the entire week typing IPv6 addresses into these devices to configure them. So the scaling issue is very important. There’s also a dynamic discovery question for some types of these devices. When something shows up that should become part of the residential network or part of the corporate network or the manufacturing network, you’d like to automatically find a way to configure it, but you clearly don’t want the wrong parties to be automatically configured in. So in a residential setting, you can imagine the service person coming out to do maintenance. They might have a mobile with them. They might have other devices. You might detect their presence, but you have to make the system decide whether or not to incorporate that device into the local control or not. And you might, as the owner of the system, be asked, should I configure the maintenance man’s mobile into the household network or not? So once again, we have to have the capability for doing dynamic addition. If you bought a new IoT device, you’d like to make it easy to add that. There are some discussions about what happens when you sell a house that’s full of IoT devices. What does the recipient of the house do? Do they have to reconfigure everything? How do we make that easy to do? What about voice control? This is increasingly popular. You have lots of devices. Google has the Google Assistant, for example. The problem with voice control, of course, is that there are risks. Who is allowed to control the device? What are they allowed to do with it? 
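The owner-mediated onboarding Cerf sketches, where a newly detected device is admitted only after the owner approves it, amounts to a tiny state machine; the class and method names below are illustrative, not from any standard.

```python
from dataclasses import dataclass, field

# Minimal sketch of owner-approved onboarding: discovered devices sit in a
# pending list until the owner explicitly admits them to the network.
@dataclass
class HomeNetwork:
    admitted: set = field(default_factory=set)
    pending: list = field(default_factory=list)

    def discover(self, device_id: str) -> None:
        # Detection alone grants nothing: the device is merely queued.
        if device_id not in self.admitted and device_id not in self.pending:
            self.pending.append(device_id)

    def approve(self, device_id: str) -> None:
        # The owner's explicit decision moves a device into the network.
        if device_id in self.pending:
            self.pending.remove(device_id)
            self.admitted.add(device_id)

net = HomeNetwork()
net.discover("maintenance-mobile")
assert "maintenance-mobile" not in net.admitted  # seen, but not trusted
net.approve("maintenance-mobile")
assert "maintenance-mobile" in net.admitted
```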
And you probably want to distinguish among parties with regard to their capacity for controlling the devices. For example, parents might want to have more control than the kids. Although, if your experience is like mine, the kids know more about how to do this than the parents do. You certainly don’t want the casual robber to walk up to the front door and say, open the door, and have it open the door. So voice recognition, which, as you know, is not 100% reliable, may not be the best way to do this. You may actually have to have some identifier with you that is sensible, so to speak, by the IoT devices that qualify you for certain capabilities. One interesting problem is guests that come to the house, if it’s in the residential setting. How do you train the house to know what the guests are allowed to do, and which guest is it? Do you have to issue little badges to them? If it’s a voice control system, do you have to have them stand in front of a microphone and say a bunch of words so that the system can learn their voice and to correctly interpret that? I mean, it would be kind of a weird thing to invite your guests over for dinner and have them recite in front of a microphone so that they can use the house, get the refrigerator to open, get the toilet to flush, or whatever else that they have to do. Suppose you’re standing in a room like this one with a whole lot of light bulbs. How do you turn one light bulb off or on, or which lights? Do you have to give names, like Frank and George and Eddie, and then teach your guests what the names of the light bulbs are? So we have to find ways of interacting with the system that’s easy to learn. Also, if you give authority to a guest, you don’t want that authority to go on longer than they are still welcome guests. And so when they leave the house, the house should forget their ability to access it. So those are just a list of the various things that come to my mind. 
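The expiring guest authority he mentions is essentially a capability with a time-to-live. A hedged sketch, with an artificially short TTL so the expiry is visible:

```python
import time

class GuestAccess:
    """Capability grants that lapse automatically after a time-to-live,
    so a departed guest's authority does not outlive their welcome."""
    def __init__(self) -> None:
        self._expiry = {}  # guest id -> monotonic expiry time

    def grant(self, guest: str, ttl_seconds: float) -> None:
        self._expiry[guest] = time.monotonic() + ttl_seconds

    def allowed(self, guest: str) -> bool:
        expiry = self._expiry.get(guest)
        return expiry is not None and time.monotonic() < expiry

house = GuestAccess()
house.grant("dinner-guest", ttl_seconds=0.05)
assert house.allowed("dinner-guest")          # welcome while the TTL lasts
time.sleep(0.1)
assert not house.allowed("dinner-guest")      # authority forgotten afterwards
```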
And I hope in the course of today’s session that you’ll shed some light on how we achieve some of these objectives of safety and security and reliability and flexibility so that the IoT space turns out to be a useful one, both from the point of view of constructive application, but also a big opportunity for companies to design, build, and sell these devices that tend to work with each other. So Mr. Chairman, I’ll stop there and dash out the door. If these were stupid ideas, I’m sure you’ll document that. But to the extent that it stimulates your thinking, I hope it’s been helpful.
Maarten Botterman:
Thank you so much. And I'm curious, too, who the next owner of your house would be and how they would deal with everything you put in place.
Vint Cerf:
They’ll have to deal with the 3,000 bottles in the wine cellar with the little tags on them.
Maarten Botterman:
That will make up for all the other hassle, no doubt. Thank you, Vint, for sharing that. Good. If you can go back to the slides, allow me to put Vint's remarks into context. Again, the thinking in summary is to embrace IoT to address societal challenges in an ethical way. We need IoT to keep this world sustainable, and we need it to be inclusive: deployment needs to be possible where necessary. This also means in areas, for instance tsunami buoys or agricultural systems, where the economics may not naturally offer a business case for a for-profit industry to build. The second thing is to create an IoT system that encourages investment. To do that, you need to involve all stakeholders; no single stakeholder holds the key. Regulation is important, because you need legal clarity about the environment in which you're going to invest and develop. And we realize that nothing happens in isolation or in a vacuum: there is legislation, but how do you deal with it specifically when you develop new IoT-based applications? Maybe legal sandboxes are part of the solution there. Create ecosystems that are sustainable and inclusive, which also means understanding the issues wherever you go; they may be different. And stimulate awareness and feedback, because developments are nowadays so fast that people sometimes don't know what's possible until years later; that deserves attention too. So, as Vint alluded to, if we develop all this, and we are in the process, then it needs to be a trusted IoT environment. In short, in line with our current good practice document, this means meaningful transparency: you could think of certifiable labels, understandable risks, and how to deal with devices and bigger systems. And clear accountability: who is responsible? That's not always obvious, so it's something on which the debate needs to progress. And, lo and behold, let's hope there is real choice.
No lock-in. And I think that's a point for discussion, too. So with that: is Orly online? Orly, if you're online, unmute, please. Good morning... Orly? It seems Orly is not online. Orly was going to talk about the impact of AI and IoT. The core of her contribution is that AI does come with risks, but sometimes those risks are really worth taking, for instance in medical applications, where AI helps improve quality of life, even if it affects the way you move around. That comes with a lot of ethical aspects that are worth thinking about and exploring. But in the end, it's all about people; that was the core of her story, too. So with that, Hiroshi, I would love to hear you talk about IoT deployment and your security perspective on how to make that happen responsibly.
Hiroshi Esaki:
OK. Thank you for the introduction. I'm Hiroshi Esaki from the WIDE Project in Japan. First, regarding AI: AI really needs trustable data, otherwise AI will behave very badly. Also interesting about AI is that it has no algorithm of its own; its algorithm comes from the data. So we need very trustworthy data in order to use AI correctly. That's the first point. I have also been working for a long time on IoT business, in agriculture and other industries. Every single industry is now going digital, based on transparent, interoperable, and trustworthy data. To have trustworthy, transparent data, governance is really important: how people use the IoT device, how the device is manufactured, and how its software and functions are maintained. Therefore we need clear ownership of data and devices, responsibility for devices in the field, and authentication as well. And this is not only on Earth these days: we are going to include space, the Moon, and Mars, where there is no such regulation at all at this time. That is a new area we must tackle. The second thing I want to share is that IoT is mutating into IoF, the Internet of Functions. Things are connected, which means data travels around the Earth; functions are the next step after data. Every single function can be transferred anywhere if we have the internet. That is completely different from bare-metal computer systems, or even cloud computing: functions can travel around the globe. That is a completely different paradigm, which means that certification, control, and management must shift from purely physical devices to functions, to what kind of process will run on any given device.
So we must label or certify not the device but the function or software running on the hardware device. That is an important point, I believe. Also, to have secure and safe operation, we need labeling, certification, and authentication, and then scalability is quite important. I always talk with the government; they want to control everything, but that is not scalable. Therefore we need a very clever, scalable system for labeling and certifying secure, safe IoT or IoF devices. The third point I want to share is that we have new stakeholders. As Maarten mentioned, people from agriculture and other industries did not come from the IT or ICT arena. They have a completely different culture and terminology; when I talk with them, it is a completely different language, and I have to talk with a completely different industry structure. That is a new challenge, and we welcome the new stakeholders who come together; that is a principle of the IGF itself. So I really want to say that new players are coming into our field. Another interesting point about IoT is that IoT devices require very small latency in many cases. On the internet we allow 100 milliseconds, right? To watch video, a CDN provides, say, 10 milliseconds. A robot requires microseconds; you run up against the speed of light and the size of the Earth. In IoT applications this may be called edge computing: a completely different requirement placed on the computer system. Then, as IoT moves to IoF, more zero-trust capability is required, because every single device can travel around the globe, so air-gap or firewall protection doesn't work well. Of course those are very useful techniques, but every single device must have zero-trust capability in the future, otherwise we cannot enjoy IoT or IoF.
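Esaki's point about certifying the function rather than the device can be illustrated by deriving a label from the code itself, so the label follows the software wherever it runs. A toy sketch, hashing Python bytecode; a real scheme would sign a full software bill of materials rather than a bare hash:

```python
import hashlib

def function_label(fn) -> str:
    # Hash the compiled bytecode: the label identifies the behavior-carrying
    # artifact, not the physical device it happens to run on.
    return hashlib.sha256(fn.__code__.co_code).hexdigest()

def read_sensor() -> float:
    return 21.5  # stand-in for a real IoT function

label = function_label(read_sensor)
assert label == function_label(read_sensor)  # deterministic for same code
assert len(label) == 64                      # SHA-256 hex digest
```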
Then the last point: IoT devices, and every piece of data for the digital twin, can make a huge contribution to carbon neutrality and decarbonization, because we must grasp what's going on on the Earth and what's going on around you. We need data, and it must be trustable and transparent; otherwise we cannot live on a healthy Earth. That's it, thank you.
Maarten Botterman:
Thank you very much, and for linking it very much to where we are today, to today's challenges. One would still wonder whether there are different levels of devices with different requirements, in terms of both carbon neutrality and security, I would say. But we'll hear more about it; we'll also have a contribution later on about LoRa networks and how they can play a part. So with that, thank you very much. Sarah Kiden is a researcher who has just obtained her PhD in design; congratulations on that, Sarah. I would really like to hear your insights from that perspective on IoT and how to make it deployable wherever it's needed.
Sarah T. Kiden:
Hi everyone, I hope you can hear me well. Good evening from my end. My name is Sarah Kiden, and I would like to start with two things right now; maybe I'll add some more later. The first is that as we develop guidelines for IoT, as a dynamic coalition or really as any group that's developing guidelines, we need to acknowledge that there are power asymmetries in the IoT ecosystem. If you think about it, there are people who build and develop the IoT devices, there are people who use these devices in the context of consumer IoT, and there are people who are impacted by the devices. The impact could be positive, like what Maarten was talking about earlier, where your medical IoT device notifies your health practitioner and you're able to get immediate help; or it could be negative, in that an IoT device has been used, for example, to facilitate gender-based violence. There's a group I follow at University College London that's doing very interesting research about how IoT is being used to facilitate gender-based violence. These power imbalances can manifest at different stages. At the design or research phase, where I am currently, if, for example, I interview participants and analyze data, the insights I draw are based on what I'm interested in or what I see; as a designer or researcher, I come with biases. So things that stand out to me could be the underlying infrastructure that supports IoT, access to electricity, access to a network, and so on, but it might be different for someone else. At that point, it means the designer or engineer has the power to make design decisions. At another point, it could be a funder: they are giving you money to do particular IoT work and you have obligations under the grant agreement, so the interest now lies with the funder.
So I think we need to have some sort of mechanism for accountability and responsibility, so that this power is not misused, but also to think about whether consumers have any power at all. If they have it, how are they using it? If not, how can we empower consumers to actually influence future deployments? The second thing I would like to talk about is something I've seen happening in the AI space. Organizations like the Algorithmic Justice League, Data & Society, and Amnesty International, among others, are now beginning to document AI harms. They're actually collecting user stories about harms that happen to people: it could be a hiring decision, or maybe they were not considered for a loan or a tenancy application, and so on. It's something that I think we, the people interested in IoT design and deployment, could think about, and these reports can serve as evidence. Basically, you can use them to create design guidelines. To use the previous example, where IoT devices are facilitating gender-based violence: if out of 500 reports, 100 are about a particular thing, then you could think about how to implement safety features for smart IoT devices. Or you could nudge policymakers in a particular direction: you tell them that maybe, the way the law is written currently, you cannot litigate a particular issue, and maybe we need to amend the law to cover some of these things. Those are the initial thoughts that I have, and I'm happy to add more later on. Thank you.
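The evidence pipeline Kiden outlines, tallying harm reports to see which issues dominate, is simple aggregation. A toy sketch with invented report categories:

```python
from collections import Counter

# Invented harm-report categories, standing in for collected user stories.
reports = [
    "tracker-enabled-stalking", "unauthorized-audio",
    "tracker-enabled-stalking", "account-lockout",
    "tracker-enabled-stalking",
]

counts = Counter(reports)
# The most frequent category is the first candidate for a design guideline
# or for a nudge to policymakers.
top_issue, n = counts.most_common(1)[0]
assert top_issue == "tracker-enabled-stalking" and n == 3
```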
Maarten Botterman:
Thank you very much, Sarah, also for illustrating the differences and the different requirements that arise in different areas. One of the examples we talked about in preparation was, for instance, that data protection legislation exists in many countries, but not in all. Does that mean anything goes in the countries where no data protection legislation is yet in place? It's one of the things that, if you think about it at a global level, is important to address. With that, on to the next person: is Alejandro online? OK. Alejandro, you're online, I hear. Sorry, my computer died because I don't have electricity on it anymore.
Alejandro Pisanty:
Ouch. So yes, Alejandro Pisanty, present here.
Maarten Botterman:
Yes, please.
Alejandro Pisanty:
Thank you. This is Alejandro Pisanty from the National University of Mexico in Mexico City. Today I am in Washington, DC, and pleased to be with you. First, I would like to very briefly address a point that Sarah Kiden has made, which is: who are the entities exerting power through IoT? I think there's room for more detailed analysis. We certainly can think, first of all, as Maarten and I have spoken about previously, that we have to distinguish between the consumer internet of things and the industrial internet of things. The consumer internet of things is a major concern for security. For example, as Vint Cerf stated at the beginning of the session, you don't want your refrigerator to be responsible for launching missiles somewhere, or a DDoS attack on a major government. And the people exerting power in that sphere are not necessarily the ones we usually think of in a north-south divide. It's more probably a company in a large country that is not always acting within the system of rules; not one with a large transnational structure, but more likely a lot of small companies making devices that are sold at a very low price to consumers who are not necessarily aware of the need to secure their devices. And the devices aren't even possible to secure, because you don't have any access to them, not even to passwords, and certainly not, as we mentioned, to their operating systems and other underlying layers. So we'd need to split that kind of analysis into more categories. Now, the main point for which I was invited to this session is to link with the Dynamic Coalition on Core Internet Values, with the question of whether the internet of things can have an impact on core internet values, on the way the internet's core values are deployed, displayed, or challenged. We remember that some of these core internet values are the layered architecture and packet switching, which are sort of underlying assumptions.
And then we have the best-effort hypothesis or assumption, and we have interoperability, openness, and so forth. What we see first is that the deployment of devices in the consumer internet of things, which do send their packets and data over the open public internet, is already a challenge to openness, and sometimes to interoperability. Certainly they are increasing the load on the systems, and they have increased the attack surface for everybody, as has been seen in many examples where, for instance, a specific model of surveillance camera, standard-facility CCTVs, can be weaponized for distributed denial of service. And we have a further, very complex challenge in the standards and layers field, where the technologies and standards for communicating with internet of things devices, both consumer and industrial, are very diverse. They use, for example, LoRa; they use open Wi-Fi; they use 4G; they will use 5G, or even 6G when it comes, for different sets or segments of their communications and for backups for some of those. As Hiroshi Esaki has already mentioned, the requirements may be on the order of microseconds, so you may need VPNs or dedicated links that subtract bandwidth. Some telcos may decide to sell you reserved bandwidth; that is one of the big discussions around the 6 gigahertz band, for example, how you split it into the open part and the restricted or registered part. So these are important challenges. And no single manufacturer of these devices will care about these open-internet effects, or the effects on interoperability, as long as their devices work and sell. So we have to find a way to raise awareness, and part of this will have to happen with consumers. One last point: some of these issues have been set out, and there are attempts to address them, for example through warnings to consumers, registrations, or standards bodies.
But a lot of these things are sold under the radar of national standardization bodies and commercial regulations. People just pick them up in a mobile market and put them into a network without having to comply with any standards of, let's say, a national telecommunications authority or regulator, or anything else. So at least this is a way of making a list and inventory of the challenges and giving them some hierarchy, so that we know that some of the solutions proposed may really be very limited in reach, or not workable at all. Thank you.
Maarten Botterman:
Thank you very much for your perspective, very much informed by the work of the Dynamic Coalition on Core Internet Values. Really appreciate it. And then, can I check with you whether you're available to speak to labeling and certification, Dan Caprio? You're unmuted.
Dan Caprio:
Yes, thanks, Maarten.
Maarten Botterman:
Thank you. Dan is based in Washington, D.C., and he's been involved in the work of the Dynamic Coalition for a long time. He's also involved in the White House initiative to look into labeling and certification. So please, Dan, the floor is yours.
Dan Caprio:
Thank you, Maarten. I'm trying to find my camera. Is that better?
Maarten Botterman:
We see you.
Dan Caprio:
Yes, thank you. And thanks for pulling this together and for your continued leadership. I think one issue that ties together a lot of things mentioned by other speakers is power asymmetry, and whether consumers have any idea of what's happening with their Internet of Things devices. It's something we've observed in the United States, and it's also happening in other parts of the world: the effort to bring consumer labeling to the Internet of Things. There's been a real push in the United States for a public-private partnership, announced by the White House back in the summer; the responsible party in the United States is the Federal Communications Commission, which is roughly our equivalent of a telecom regulator. The idea is to have a widely available consumer label on device packaging that gives a consumer some sense of what level of security is offered on a particular device, how to update and upgrade that security, and how to become more aware. I think there's a growing appetite, especially at the consumer level, for knowing what the device I'm buying is and what its capabilities are. Other parts of the world and other speakers will speak to this later; I know we had a regional IGF in Australia where this was the topic of discussion. But I think the idea of the consumer label is reflective of the dynamic coalition itself: it's a very positive development, something we've all been working hard on for a very long time. It also offers the possibility, through some of the labeling efforts, of international harmonization, which goes to Vint's point about interoperability and standards. So with the label in the U.S., we're not talking about creating a standard.
It's a public-private partnership that will be run by the Federal Communications Commission and by interested stakeholders. So we view it as a very positive development and hope that it's something we can continue to work on in the dynamic coalition, and see it become more globally accepted.
Maarten Botterman:
Thank you, Dan, for that. And the U.S. is not the only one, as said. There are national initiatives, and there's also an initiative by the IEEE to look into how to do this. We're all currently still very exploratory, I would say, but with deep intent. Good morning, Wout. Next speaker: if we can get Sandoche. Can you make Sandoche Balakrichenan co-host? He will speak instead of Lucien Castex.
Sandoche Balakrichenan:
Yeah, good morning, Maarten. Can you hear me?
Maarten Botterman:
We can hear you very well. And I asked the support, sorry for this very last-minute request. So they made you co-host, so you can also present your slides if you want to. Good morning.
Sandoche Balakrichenan:
Yeah, that will be fine. Yep. I have slides, but I will not take much time. I hope you can see my slides.
Maarten Botterman:
We can see your slides in presentation mode.
Sandoche Balakrichenan:
Thank you. Thank you, Maarten. In his opening statement, Vint Cerf talked about IoT needing interoperability, scalability, and so on, and Professor Esaki spoke about the necessity of zero trust for IoT. Both those presentations are quite a preamble for this one. Here we are looking at zero trust from an identity-management angle: using the DNS for identity and access management is the perspective we are taking at AFNIC. AFNIC is the .fr registry, where I work; we are based in Paris. The DNS, the domain name system, is used by most Internet users for Internet communication; to simplify, it just maps human-readable names, like domain names, to IP addresses. What we are looking at is how to use that same system, which is already used across the Internet, for IoT. In zero trust, briefly, what NIST proposes is that communication from a device to the network happens on a case-by-case basis, with context and differentiated administrative access, and without needing to provision trust in advance. We see that we could do the same with DNS. Here is the use case we usually see in IoT: the device maker provisions the devices with some keys, and these keys need to be shared among the stakeholders across the ecosystem. That's a huge issue, an operational nightmare. The use of symmetric keys works in IoT, but it doesn't scale; that's the problem we are trying to solve here. We chose to work with LoRaWAN, the long-range wide-area network. Why LoRaWAN? LoRa falls within the class of LPWANs, low-power wide-area networks, and it is one of the most constrained networks in IoT. If our proposition works in LoRaWAN, it will work on the other IoT networks and devices.
So we were able to do the communication between the different servers in a LoRaWAN scenario using mutual authentication, meaning that both the client and the server authenticate each other. This can be done with the normal asymmetric keys we use on the internet, that is, public and private keys. And we do it with self-signed certificates: with self-signed certificates, we can do this mutual authentication even without a certificate authority. On the internet, you need a certificate authority, and that certificate authority needs to be recognized by the browser vendors; but here we can do it in the DNS, without a certificate authority, with your own self-signed certificate. That is done thanks to a technology standardized by the IETF called DANE, DNS-based Authentication of Named Entities. I will not go deep into it, but it shows that in the DNS you can provision both identity resolution and the key used to authenticate. So with the help of DNS, DANE, and DNSSEC, we don't need the certificate-authority ecosystem; we can use the DNS ecosystem for both identity and access management. We have tested that with TLS 1.3, and we even did a hackathon at the IETF. The next step, coming back to this slide, is that we have zero-trust capability here, because we don't need a priori provisioning of keys or a certificate authority: you can do it dynamically, and with the DNS you have scalability. You can also use existing identifiers, because in IoT there are different identification systems, like barcodes, RFID, NFC, and so on, and all these different types of identification could interoperate with each other. We have worked with the supply chain's GS1 standards as well, and tested with them too.
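DANE's DNS record type, TLSA (RFC 6698), carries exactly the certificate association Balakrichenan describes. A sketch of building such a record's RDATA; the byte string below is a placeholder, not a real key, and a real deployment would extract the DER-encoded SubjectPublicKeyInfo from the device certificate:

```python
import hashlib

def tlsa_rdata(selected_data: bytes, usage: int = 3,
               selector: int = 1, mtype: int = 1) -> str:
    """Build TLSA record RDATA per RFC 6698: usage 3 (DANE-EE, the
    end-entity certificate itself, which may be self-signed), selector 1
    (the certificate's SubjectPublicKeyInfo), matching type 1 (SHA-256
    of the selected data)."""
    digest = hashlib.sha256(selected_data).hexdigest()
    return f"{usage} {selector} {mtype} {digest}"

# Placeholder bytes standing in for a device certificate's public key info.
fake_spki = b"placeholder-device-public-key"
print(tlsa_rdata(fake_spki))
```

Published under a DNSSEC-signed name, such a record lets a peer validate a self-signed certificate without any certificate authority, which is the DANE mechanism the talk relies on.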
So at AFNIC, we are building a dynamic identity management system based on DNS, and we have built it block by block across different projects, as you can see in the slides, like Lego blocks. We started by seeing whether we could provision different identifiers in the DNS: a digital object identifier, an object identifier, an RFID, a barcode, a domain name, a URI, and so on. That works; we worked with the supply chain industry. Then we checked whether all these identifiers could be resolved across the different ecosystems; that also works. Now, with security, we have added one more layer. And we are now working on another project, called Pivot, where we want to add privacy features based on DNS. That's how we plan to proceed, and I hope we can also work with the Dynamic Coalition on adding this. For information, different standardization organizations, like the IETF and the ITU, are all working on the same scenario, looking at DNS to resolve the issues we see in IoT. Thank you. If you have any questions, I'm ready to answer.
Maarten Botterman:
Thank you, Sandoche, for that. We heard about startup organizations like the ITU; I'm not sure the ITU qualifies as a startup organization. But thanks for what you do. Because what Sandoche also brings in is the question: what is IoT? Is it a device? Is it a cyber-physical system that brings together a couple of devices? Or is it a coherent ecosystem of applications, in which self-certification may well be part of the solution to make sure it's a secure system? The other element, with the LoRa networks, is that while IoT is an extension of the Internet, that doesn't mean every IoT application needs streaming video capabilities. Sometimes it's enough to ping once every five minutes, or even once every hour: what's happening? With that, Lucia, sorry, Sandoche's presentation can be shared as well, right?
Sandoche Balakrichenan:
Yeah, it can be shared, yeah.
Maarten Botterman:
Super. So come to me after the meeting if you want, and I’ll send it by email. And we’ll also make sure that with the report, that will be very clear where you can find the presentations later on. Thanks for bringing this aspect. Zero trust. Self-signed certification is part of the solution. And the awareness that, yeah, different networks will facilitate IoT systems in different environments.
Hiroshi Esaki:
That is one of the technical components, but we also need something wider. It's not only the domain name or the IP address; we need the other parts as well.
Sandoche Balakrichenan:
Just to answer Professor Esaki: we did work with the supply chain industry on GS1 types of identifiers, that is, barcodes and RFID. And with LoRaWAN we are working with MAC IDs. So it's not just names and IP addresses here.
Maarten Botterman:
So, how to also deal, as you said, with privacy issues in systems that have very little extra capability for sharing data. Thanks for that. With us also is Wout de Natris, coordinator of the IS3C Dynamic Coalition. That coalition has done research into legislation and policy initiatives in IoT and yesterday launched a report on its findings and commonalities, which even contains some recommendations. Wout, would you be willing to share?
Wout de Natris:
Glad to, Maarten. Thank you. My name is Wout de Natris, I'm a consultant in the Netherlands and, as such, coordinator of a dynamic coalition within the IGF called the Internet Standards, Security and Safety Coalition (IS3C). As Maarten said, we had our session yesterday, where we published two reports and launched a toolkit for internet standards deployment. I was late here because I was presenting our work in another session on IoT and then got a ping from Maarten to come here. The chair of the working group is presenting in that session as we speak, so I'm basically taking his place to share his results. Very briefly, what is IS3C? We started this dynamic coalition in 2020 with the idea of getting the internet standards that have been out there, sometimes for decades, and that would make the internet far more secure and safer if they were massively deployed, most of the time by industry. For some reason, that is not happening. So how can we make the world more secure and safer? By incentivizing organizations to deploy these existing standards, and that is what we work on. We have several working groups, and then I'll get to the IoT part. We work on security by design for the Internet of Things. We work on education and skills: does tertiary education teach these standards, how the internet works, et cetera? There's a huge gap there. On procurement by governments and industries: are they demanding these internet standards? We have a working group on emerging technologies, which will probably start in 2024. And we have a working group on the deployment of RPKI and DNSSEC, not because of the technical problems they have, but asking how we can change the narrative so that when a CEO, a CFO or a secretary-general has to make a decision within their organization, they understand why they have to go for security, whether the motivation is political, economic, social or security-related. So we have a working group.
That working group is going to start in November. Sorry, in December; we are in October, I forgot where we are. It's going to start in November, and hopefully we'll have a result early next year. So what did we do with IoT? Because that's the reason I'm sitting here. We came up with a plan to research policy documents that are findable on the internet and to compare them. As I understand it, they found 30 documents from 18 countries, mostly from the Global North, with 442 different practices in them. And do they align? Sometimes even the terminology is explained in different ways, so there's no coherence between these policy documents. That is, I think, the first thing I want to say. I'm going to put on my glasses, because it reads a little easier. They studied the documents from four categories: data privacy and confidentiality, secure updating, user empowerment, and operational resilience. From those four categories they had five research questions. First, what are the recommended best practices for setting out the responsibilities of all stakeholders involved in IoT security, including manufacturers, providers and users? Second, what policy and regulatory measures can be identified for promoting IoT security by design, specifically with regard to ensuring device resilience against crashes, power shortages and outages? Third, what policy and regulatory guidelines can be identified to promote user empowerment in IoT security, and what are the recommended best practices for implementing vulnerability disclosure mechanisms? Fourth, through what mechanisms are regulators and policymakers enforcing compliance with established IoT security standards and encouraging manufacturers to adopt the recommended best practices?
And fifth, how do policy and regulatory documents relate security updates to warranty policies for IoT devices and services? So they put a lot of questions to these 30 different documents. They found a lot of things, but when they started grouping them, things became quite clear very soon. So what were the main conclusions? One, IoT security is complex and multi-faceted; the issues require a comprehensive approach. Many countries, including almost the whole of the Global South, lack any policy framework for IoT security; there are only a few exceptions. Many of the national practices identified did not match other countries' policies, and there are many differences in taxonomy. Many of the practices are voluntary guidelines without effective accountability or consequences for non-deployment. National administrations rarely require or specify security by design in the hardware and software they procure, although doing so would drive and increase the deployment of security-related standards. The standards that form the public core of the internet, which is basically the software on which the internet runs, are not formally recognized as such by governments and are usually absent from policy documents such as those analyzed in this research. Specifying links between security flaws and device integrity is a strong basis for security updates. Those are the findings, and as you can see, there are huge gaps between what we mean when we talk about cybersecurity and what is actually being addressed by these governments. That leads to a set of recommendations. First, accountability frameworks from the design stage through to use. Second, strategies for countering unauthenticated vulnerabilities such as denial-of-service attacks. Third, stakeholder cooperation on coordinated vulnerability disclosure. Fourth, endorsing global implementation of open standards. Fifth, the integration of security updates and warranty policies.
And finally: governments, get your act together and agree on the terms and definitions for a specific piece of IoT. So can we actually change this situation? Looking back at the whole dynamic coalition and all the other studies we found: as I already said, the public core of the internet is something governments discuss, and they think it should be protected and not be attacked. My personal impression, from reading the different reports we're producing, is that governments think of the cables, of the server parks, of the undersea cables that have to be protected, and what they forget is what actually makes the internet function and work as it does. If governments don't recognize it, that also means they won't procure for it. So what would make the IoT, or other functions of the internet, more secure is a government starting to put its money where its mouth is. In other words, if you want cybersecurity, you will have to demand that certain standards are built into the product you are procuring. If you do not demand it up front, in some cases you can't even get it afterwards, after you discover the vulnerabilities, because they can't be mended, or the vendor doesn't do it, or the product has reached the end of its life cycle. So you have to consider these standards up front. Only when bigger organizations, public and private, start demanding security by design when procuring will things change in the world. That will also mean that, for us as individual users, they're not going to produce two sorts of internet-connected coffee machines: they won't sell secure devices to governments and insecure ones to us. And if consumer organizations start testing these devices, also on the IoT component, that would improve a lot of things too. So that is what we try to work on with IS3C.
But when all else fails, I'm convinced there will be only one solution, and that is that they're going to regulate and legislate it. Whether that is a desirable thing to happen, I'm not so certain, but it will happen within the next five to six years. So it's time to get our act together, and that act can be deploying what is already out there, which can't be that difficult, I'm told. So let me stop there, Maarten. Happy to answer any questions later.
Maarten Botterman:
Yes, thank you for that, Wout. What we see is that the rapid developments make it more and more difficult, also for governments, to keep up with what they should do, and legislation is one of the last resorts, one would say. I very much appreciate the idea that procurement might be a way in: if governments know how to procure safe, secure IoT devices, they may also better know how to propose legislation or guidelines for the rest of the public. Thank you for that insight. And I heard you echo Vint: let's think about the world we want, but also act, otherwise we may end up with the world we deserve, and we may not like that. I loved that quote. The last element I really would like to bring in and emphasize, because it's a key element not only of the society we live in but specifically of IoT, is how to deal with privacy and data protection. For that, I have my friend and colleague Jonathan Cave online, who also volunteered to be our rapporteur for this session. He's an expert with a policy and regulatory background, and a microeconomist and game theorist. Jonathan.
Jonathan Cave:
Okay, thank you, Maarten. Thank you, everybody. It's coming up on two o'clock in the morning here, so I will attempt to be coherent. Not to preempt the discussion, I think it's useful if we get quickly into the main issues, but there are a few things I wanted to say in relation to privacy: from the perspective of the economics of privacy, from the ethical perspective, and certainly from the legal perspective. One of the questions that keeps coming up through this discussion is whether the things we're talking about, and I include privacy in this, but also things we've talked about today like security, transparency and accountability, are meant to be principles that we adhere to or espouse when we get a chance, or mechanisms that produce a result. Because the Internet of Things, linked into the internet of people, is a complex adaptive system: it produces things we can't yet imagine. So the engineering perspective of designing things with specific characteristics and functions, then turning them loose and judging them according to how well they do those things, for users who are deemed to have fixed characteristics, may not be the most useful perspective. I just wanted to flag this game-theoretic view that all the things we're talking about are mechanisms, and then make a few observations that are relevant, I think, to the Internet of Things. Some of these have been said before. For example, we know we need multiple stakeholders, but it's important to be quite clear on who those stakeholders are, what kind of voice we want them to have, and what sort of decisions we involve them in.
One of the problems that has come up, particularly with the use of AI in relation to the Internet of Things, is whether agency is still a useful concept in the sense we had before, where we could base an entire system of markets, engineering and laws on the idea of people being told what they may do, and then being held responsible for how they do it. In this respect, one of the elements here is the privacy element, and I'll home in on that; we can discuss other things later on. When we talk about privacy, the central question is: privacy of what, and why is this a useful idea? In most cases, we start from the perspective of the privacy of data. But we've heard all the way through, it was hinted at by Vint and certainly picked up strongly by Hiroshi and everybody who spoke later, that when we talk about the Internet of Things, we're probably talking about the data plane, certainly when AI comes in, because you can't understand what these things, or complex assemblages of these things, do without understanding how they learned, how they were trained, and what data they were trained on. Then there are the devices themselves: are they secure, do they fit certain characteristics, can they be updated, and so on? That's the hardware, and it includes the software as it changes over time. Then there are the functions. But because the Internet of Things contains things that are connected to each other, those functions may not be well or objectively defined. What I use the device for is not necessarily the function that you see; the function that you see may be entirely different. For example, there are IoT devices that harvest vast amounts of personal, private information from their users, even when that has no connection to the nominal functioning or design of the device or its operation: the cars that observe whether we're sleepy or whether we're behaving well, that kind of thing.
So as we move up, away from the data plane and the device plane, things become, as it were, more complicated. And that produces a changing surface: not just an attack surface for cybersecurity, but a surface for, let's call it, ethical concerns. So that's item one, the complexity of these things. We can engage with them at certain levels, but they have implications at other levels. I think this is important for the good-practice elements of what we want to see for the IoT. Many of us come from engineering or analytical backgrounds, but as many others have pointed out, a lot of the people making decisions here may not share those perspectives. And that's not just something we have to patch together as a kind of human interoperability; it's part of the richness and resilience of the system that we hold those different perspectives and give expression to them. But that brings me to the second aspect of privacy, which is the privacy of action and intention. When people use these devices, they develop relationships with them and, through them, different relationships with each other. When people use a smart speaker, for example, they begin to trust it in certain ways. Partially, that gives the speaker, or the people feeding data and instructions to the speaker, a power they didn't have originally. They move from being sensors, as it were, or deliverers of content, to being actuators, to reprogramming their users. And that perfectly innocent function has really profound implications for who gets held responsible for these things. Another small comment I wanted to make, which came up early in the conversation, is the question of how we control and own data. For a long time, we've been told that you can't own data, can't own personal data.
But, of course, now we learn that in order to make these systems function, we have to resurrect the notion of ownership of data, simply so that we can hold people responsible. Then the final thing I wanted to talk about was the nature of our ethical engagement. We can do certain things with law, certain things with standards and certification, but behind that there needs to be an appropriate ethical framework. Most of our frameworks are based on what Maarten called, at the very beginning, respect for the individual. But what we're beginning to learn is that the individual, at least as they interact with the world, is not a fixed entity; it's not an anchor point for ethical reflection. So if I give you voice and if I give you respect, am I doing it for you right now, or for the you that you will become when you interact with these systems? And if it is the latter, how do we take account of the fact that the way the systems operate changes the way people use them and the way people understand them? Now, as an economist, I believe that this richness of perspectives is not something we can resolve or standardize, but is instead a source of resilient interaction that helps us understand the kinds of things that we see. So I'll close at this point simply by saying that I think we need to work on the ethical dimension to understand whether concepts like privacy still serve us as useful principles or need to be modified, particularly in light of the fact that we now have a different understanding of how our individual and collective psychology is affected by interacting with devices which are at one and the same time mechanical devices and AI-empowered entities with whom we form relationships, who change our behavior, our understanding, and the things we pay attention to.
Maarten Botterman:
Thank you so much, Jonathan, for sharing your insights on this journey. It's amazing how quickly our insights into what good practice should look like are evolving; and we know the next step is to implement it in society. Walking around this IGF, I also heard a lot of things that I thought are truly getting us to the next levels of understanding of how to deal with these systems. For the sake of time, I would first like to ask Avri: are there any questions online?
Avri Doria:
No, there haven't been any questions online, unless one just came in. So please, if anybody wants to put one in the chat or the Q&A, I can read it. And please be short, because we only have 15 minutes left, since we put so much content in the first part. If anybody puts anything in the chat, I'll read it.
Maarten Botterman:
Okay. The content was based on interactions at several regional events, so in that way the voice of the people has been heard and reflected. But we look forward to the voices here in the room. Barry, please introduce yourself.
Barry Lieber:
Yes, this is Barry Lieber. I've been working on Internet of Things related things for almost 25 years now, from before we called it the Internet of Things, so I've got a lot of thoughts on it. I'll try to condense them to two points I want to make. We talk about security, and I don't like using that term as a buzzword; it's much more complex. I think we need to break it down into different aspects: authentication, authorization, confidentiality, data integrity, all those sorts of things. Putting that all together makes a much more complicated picture, especially when we go to my second point. When we talk about turning on lights with our voice, or, something that's dearer to me as I age, the example you gave, Maarten, of monitoring my blood pressure or my heart rhythm, that's still just something we've been able to do for a long, long time; it just now communicates over the internet. To me, that's not the Internet of Things in its full potential. What I think of as the Internet of Things is different sources all working together: my car and my house and my calendar, and my calendar resets my alarm clock and makes coffee earlier and tells my car where to go in the morning, that kind of stuff. And that really makes security, all those different aspects of it, very complicated to put together. As we think about making a secure, private, confidential Internet of Things, we really need to think about the truly robust scenarios and the complexity they introduce: how to secure all these different pieces and make sure the data doesn't leak, and all of that sort of thing.
Maarten Botterman:
Thank you very much, Barry. Hiroshi, please.
Hiroshi Esaki:
Yes, I think the core part and the ends of the internet should follow the same principle, end to end. End to end means: protect yourself first, by yourself; the community comes second; and the last resort is the public, as in public health. So the core of the internet should try to run a secure, well-operated backbone network, and the end stations must have their own protection first. That is also really important for traceability and interoperability. Interoperability here means the user must have that capability, which requires education, capacity building, building up literacy. One of the actions we are taking in Japan is providing traceability to users, not to all, but people can have a traceability function. How many people will use it really depends on how usable the technology is and on how we deploy and advocate these technologies. Again, end to end is very powerful for scalability. So that's the way we should do it.
Maarten Botterman:
Wout, please.
Wout de Natris:
Thank you, Barry, for the question. I think it shows how complex our lives are going to be, and it's probably going to be much worse than this not too long from now. But the question is basically where we put the accountability, or the responsibility. Despite the end user having a role to play here, we can be 100% certain that 99% of people won't even know how to protect themselves, because they just think: this device works. My car drives, and the 170 machines in that car are phoning home the whole time, just like E.T. You have no clue that it's happening, except when you suddenly get a very strange message in your car and wonder what you have to do. But that shows what happens today, and it's all about the companies gathering the data. And because of that it's insecure, because otherwise it would probably be harder for them to get the data. As a society, we have to find a way around that somehow, because otherwise we're probably lost forever, from a privacy point of view but also from the attack-vector point of view, because the other side, the dark side, can abuse this 24/7. So I think that is why it's so important to make sure that standards are installed at the outset; otherwise it will probably never happen, and we have to start working to make that happen. Thanks.
Maarten Botterman:
Thanks for that very much. Mark, please.
Mark Carvell:
Thank you, Maarten. Mark Carvell, I'm a member of EuroDIG, the European regional Internet Governance Forum. I'm also an advisor to the IS3C coalition on standards, security and safety, so a colleague of Wout de Natris on the panel. First of all, thanks very much for a very interesting and wide-ranging discussion. A couple of points sprang to mind. First, a quick question to Dan about labeling schemes and harmonization: where does he think the best platform is for developing harmonization, given that people are going to be traveling around the world with devices and need to be able to understand a coherent, universal labeling scheme? Where is that platform best placed? I did bump into somebody from the FCC on Sunday, I think it was, and I noted what Dan said about FCC involvement in the US public-private partnership. If I'd known about this, I would have asked whether the FCC has some thinking on it; maybe that's one of the reasons that particular person is here. So that was the first point. Now, procurement was described as a driver. But we've heard about consumer IoT and industrial IoT, and, speaking as a former UK government official, I just wonder where we are in terms of IoT applications in public administrations generally. How can these applications be developed to meet, in particular, government concerns about security, given that this could be a revolution in the interface between governments and citizens? Are you, as a dynamic coalition, looking at that particular aspect and talking to governments about what they need assurance on in terms of IoT applications? Thirdly, on Jonathan's point about innovation: I was at an interesting session yesterday evening about ethical development of technologies, ethical innovation. Maarten, you were there as well, I think.
The point I made there was that you can strive to innovate ethically, but of course, what direction does IoT, for example, take? It's very difficult to predict; the unforeseen consequences and applications may be positive or negative. So how are IoT developers really approaching ethics in a way that will ensure these systems and networks are developed with a degree of confidence, given the unpredictability factor? Final point: as I said, I'm a member of EuroDIG, and EuroDIG has a call for issues out now. I really urge the dynamic coalition to consider using the EuroDIG forum next June in Vilnius as an opportunity to advocate the valuable work you're doing. Okay, thank you. I'll stop there.
Maarten Botterman:
Thank you very much. For sure, like any dynamic coalition, we also think in terms of different messages to different stakeholder groups about their specific roles. So that's a key element. Dan, just checking, I realize it's a different part of the world for you, but can you come back on the question from Mark, and maybe also the remark from Jonathan in the chat? Okay.
Dan Caprio:
Am I unmuted?
Maarten Botterman:
You are.
Dan Caprio:
Yeah, yes. In terms of the US consumer label, it's early days. The FCC just put its notice out back in August, so in the US this is not going to take effect until the end of next year at the earliest. I'm happy to get back to you with more specific information. There is some discussion in the rule the FCC put out about international harmonization, and about working hand in hand with the White House and the State Department. But I'm glad you asked the question, because I would imagine this is something the IGF can take a very active role in. With the Internet of Things, this is something we've all been working on for a very, very long time, so I'd like to see the IGF and the regional IGFs begin to take this issue up. But in terms of what the exact platform is, or how you do all this, that's to be determined.
Maarten Botterman:
Yes, thank you for that. Any last questions in the Zoom room? Okay.
Jonathan Cave:
In the interim, could I make a very small brief response?
Maarten Botterman:
And then we have the last question in the room and then time is flying.
Jonathan Cave:
It's very quick, on the issue of ethical reflection, ethical consideration and control of these IoT devices, and in particular their consequences once unleashed. This is a particular concern of many organizations. At the Turing Institute, I'm part of a group called T-REX, Turing Research Ethics, that scrutinizes the Turing Institute's projects for their ethical considerations. Part of this is, of course, making people think about what will happen when these things are turned loose. In some cases, you can do this with behavioral, psychological or sociological analysis; you can control it and help make it more predictable with legal mechanisms. But in general, the answer is usually to keep the conversation open: not to tick the ethical box at the beginning of the project and then turn it over to the lawyers to manage the liability, but to keep the information flowing, because the problems we're thinking about are emergent problems. No single party can possibly perceive them, nor can they be analyzed by considering just one layer of this internet. So really, the only thing to do is: attention must be paid, and continue to be paid. I just wanted to make that small remark.
Maarten Botterman:
Very clear point. Can I invite you to introduce yourself?
Elaine Liu:
Thank you. Good morning, everyone. My name is Elaine Liu, I'm from Singapore. I came to this IGF as an individual learner, not for work, so I took time off. Relating to IoT, there are points I'd like to share and on which I'd seek your guidance. First, IoT to me is like an edge device, a data-collection device: it's all about collecting certain data, which can be text, images and so on. I feel that in setting up policies or guardrails, it all depends on the use cases, right? We talk about IoT for consumers, IoT for organizations, and IoT at a higher level for agencies, for operational resilience, situational awareness and so on. So does it make sense to have different policies and guardrails depending on the use case? That's the first point. The second point is that we all know that with hardware comes software, an operating system, and, at the end of the day, the data analytics that come out of it. So in setting up any guiding principle, we should look at the whole value chain, because looking at just the edge, the IoT part, is only the starting point; how the data is consumed and distributed is related as well. Those are the two points I'd like to share.
Maarten Botterman:
Thank you. Thank you very much. Thank you for your observations. Indeed, as it is time, I will round off if that’s okay.
Hiroshi Esaki:
Very quickly, regarding use cases: for an IoT device, or any single device, right now we have multiple uses, and future uses are going to come out. So even though a single device has its original usage first, it's going to be put to other purposes, and we have to think about that. New uses of devices appear every day; that's what we experience on the internet.
Maarten Botterman:
Yes, thank you. And at this point, let me just say that indeed we are very conscious that it’s about data and that there are different applications. Everything we say is about the use of IoT in context, whether it’s a device, a combination of devices, a service, or an ecosystem, all with different requirements, different returns, and different risks. One of the key things that has become more and more visible, and is high on the agenda also in Singapore, I’m aware, is labeling: informing people about the risks they’re dealing with in the products they’re confronted with. All this information can also be found on the DC IoT site. I invite you all to participate and to subscribe to the list of the Dynamic Coalition on IoT, where we will release the main news and where you can also raise questions or issues, if you like. And we’re also very happy with the support of Medianstat, which allows us to have a specific website where we can have discussions, share some of the presentations, and make all the reports available as well. This is an iterative process, so much is clear. The pace of change is fast, and we’re on it, because we’re aware we need it and we want it to serve us more as a benefit than as a threat. But in the end, it’s all risk management as well. So thank you all for your interest, and thank you to the speakers for your contributions. I hope to see you in the future, either at a regional event or next year in Riyadh, right? So thank you all very much. This meeting is closed. Thank you.
Speakers
Alejandro Pisanty — speech speed: 150 words per minute; speech length: 871 words; speech time: 347 secs
Avri Doria — speech speed: 187 words per minute; speech length: 69 words; speech time: 22 secs
Barry Lieber — speech speed: 191 words per minute; speech length: 361 words; speech time: 113 secs
Dan Caprio — speech speed: 118 words per minute; speech length: 631 words; speech time: 321 secs
Elaine Liu — speech speed: 191 words per minute; speech length: 264 words; speech time: 83 secs
Hiroshi Esaki — speech speed: 143 words per minute; speech length: 1132 words; speech time: 474 secs
Jonathan Cave — speech speed: 171 words per minute; speech length: 1610 words; speech time: 566 secs
Maarten Botterman — speech speed: 147 words per minute; speech length: 3150 words; speech time: 1284 secs
Mark Carvell — speech speed: 150 words per minute; speech length: 518 words; speech time: 207 secs
Sandoche Balakrichenan — speech speed: 142 words per minute; speech length: 1028 words; speech time: 434 secs
Sarah T. Kiden — speech speed: 179 words per minute; speech length: 704 words; speech time: 236 secs
Vint Cerf — speech speed: 201 words per minute; speech length: 1304 words; speech time: 390 secs
Wout de Natris — speech speed: 155 words per minute; speech length: 1790 words; speech time: 693 secs
Data Protection for Next Generation: Putting Children First | IGF 2023 WS #62
Audience
The discussion focuses on the necessity of age verification and data minimization in relation to children’s rights in the digital environment. It is argued that companies should not collect additional data solely for age verification purposes, and trust in companies to delete data after verification is considered crucial to protect children’s privacy.
Another important point raised in the discussion is the need for the early incorporation of children’s rights into legislation. The inclusion of children in decision-making processes and the consideration of their rights from the beginning stages of legislation are emphasized. This is contrasted with the last-minute incorporation of children’s rights seen in the GDPR.
The discussion also advocates for the active participation of children in shaping policies that affect their digital lives. Examples of child-led initiatives, such as Project Omna, are mentioned to illustrate the importance of including children’s perspectives in data governance. The argument is made that involving children in policy-making processes allows for better addressing their unique insights and needs.
The role of tech companies is also explored, with an argument that they should take child rights into consideration during their product design process. Collaborating with tech companies to develop age verification tools is suggested as a means of ensuring the protection of children’s rights.
Additionally, it is noted that children, often referred to as “Internet natives,” may have a better understanding of privacy protection due to growing up in the digital age. This challenges the assumption that children are unaware or unconcerned about their digital privacy.
The discussion concludes by highlighting the advocacy for education and the inclusion of children in legislative processes. Theodora Skeadas’s experience in advocacy is mentioned as an example. The aim is to educate lawmakers and involve children in decision-making processes to create legislation that better safeguards children’s rights in the digital environment.
Overall, this discussion underscores the importance of age verification, data minimization, the incorporation of children’s rights in legislation, the active participation of children in policy-making processes, and the consideration of child rights in tech product design. These measures are seen as vital for protecting and promoting children’s rights in the digital age.
Edmon Chung
The discussion revolves around various important topics related to internet development, youth engagement, and online safety. Dot Asia, which operates the .Asia top-level domain, plays a crucial role in these areas. In addition to managing this domain, Dot Asia uses the earnings generated from it to support internet development in Asia. Moreover, Dot Asia runs the NetMission program, which aims to engage young people in internet governance. These initiatives are viewed positively as they promote internet development and youth engagement in Asia.
Another significant development is the launch of the .Kids top-level domain in 2022. This domain is specifically designed to involve and protect children, based on the principles outlined in the Convention on the Rights of the Child. By prioritizing children’s rights and safety, the .Kids initiative aligns with the principles of the convention. This positive step highlights the importance of involving children in policy-making processes that affect them.
Cooperation among stakeholders is emphasized for ensuring online safety. Various forms of online abuses and domain name system (DNS) abuses exist, requiring collaborative measures to create a safer online environment. The .Kids top-level domain is seen as a valuable platform to support online safety initiatives. By creating a dedicated space for children, it can contribute to the development and implementation of effective online safety measures.
The discussion also focuses on privacy, particularly in relation to data collection and age verification. Privacy is not just about keeping data secure and confidential but also about questioning the need for collecting and storing data in the first place. The argument is made that data should be discarded after the age verification process to strike a balance between protecting children and safeguarding their privacy.
The use of pseudonymous credentials and pseudonymized data are suggested as appropriate approaches for age verification. These methods allow platforms to verify age without accessing or storing specific personal information, addressing privacy concerns while still ensuring compliance with age restrictions.
Additionally, it is highlighted that trusted anchors should delete raw data after verification, and regulation and audits are necessary for companies that hold data. The importance of building the capacity for child participation in internet governance is also emphasized. These factors contribute to creating a safer, more inclusive, and child-centric online environment.
In summary, the discussion focuses on various important aspects of internet development, youth engagement, and online safety. Dot Asia’s initiatives and the introduction of the .Kids top-level domain reflect positive steps toward promoting internet development and protecting children’s rights. The importance of stakeholder cooperation, privacy considerations, and child involvement in policy-making processes are also highlighted. By addressing these aspects, stakeholders can work together to create a safer and more inclusive online space for all.
Sonia Livingstone
The discussions revolved around the significance of safeguarding children’s right to privacy in the digital realm and its interlinkage with other child rights. It was emphasised that children’s privacy is essential as it directly influences their safety, dignity, and access to information. Sonia Livingstone, an expert in the field, played an instrumental role in the drafting group for general comment number 25, which specifies how the Convention on the Rights of the Child applies to digital matters.
Furthermore, it was noted that children themselves possess an understanding of and are actively involved in negotiating their digital identity and privacy. To understand their perspective, a workshop was conducted by Livingstone to gauge how children perceive their privacy and the conditions under which they would be willing to share information globally. It was found that children universally recognise the importance of privacy and view it as a matter that directly affects them.
The introduction of age-appropriate design codes, tailored to cater to a child’s age, was highlighted as an effective regulatory strategy to protect children’s privacy. These codes have been implemented in various international and national settings, ensuring privacy in accordance with the child’s developmental stage. Livingstone, alongside the Five Rights Foundation, spearheaded the Digital Futures Commission, which sought children’s views to propose a Child Rights by Design approach.
Addressing the identification of internet users who are children for the purpose of upholding their rights online was identified as another crucial aspect. Historically, attempts to respect children’s rights on the internet have failed because the age of the user was unknown. It was emphasised that a mechanism is needed to determine the age of each user in order to effectively establish who is a child.
Regarding the implementation of age verification, it was suggested that a new approach is needed, involving third-party intermediaries for age checks. These intermediaries should operate with transparency and accountability, ensuring accuracy and privacy. However, it was acknowledged that not all sites and content necessitate age checks, and a risk assessment should be conducted to determine the appropriateness of such checks. Only sites with age-inappropriate content for children should require age verification.
The role of big tech companies in relation to age assessment was also discussed. It was posited that these companies likely already possess the capability to accurately determine the age of their users, highlighting the potential for collaboration in ensuring child rights protection online.
Furthermore, the importance of companies adopting child rights impact assessments was stressed. Many companies already understand the importance of impact assessments in various contexts, and embedding youth participation in the assessment process is seen as crucial. Consideration should be given to the full range of children’s rights.
There were differing perspectives on child rights impact assessments, with some suggesting that they should be made mandatory for companies. It was argued that such assessments can bring about significant improvements in child rights protection when integrated into company processes.
The active involvement of children and young people in the development of data protection policies was also highlighted as a key recommendation. Their articulate and valid perspectives should be taken into account to ensure effective policy formulation.
Finally, the importance of adults advocating for the active participation of young people in meetings, events, and decision-making processes was emphasised. Adults should actively address the lack of youth representation and ensure that young people have a voice and influence in relevant discussions.
In conclusion, the discussions centred on the necessity of protecting children’s privacy in the digital environment and its alignment with other child rights. Various strategies, including age-appropriate design codes and third-party intermediaries for age verification, were proposed. The involvement of children, youth, and adults in policy development and decision-making processes was considered pivotal for effective protection of children’s rights online.
Emma Day
Civil society organizations play a crucial role in advocating for child-centred data protection. They can engage in advocacy related to law and policy, as well as litigation and regulatory requests. For example, Professor Sonia Livingstone’s work on the use of educational technology in schools and the launch of the UK’s Digital Futures Commission highlight the importance of civil society organizations advocating for proper governance of educational technology in relation to children’s data protection.
Litigation and making requests to regulators are another important avenue for civil society organizations to advance child-centred data protection. This is evident in cases such as Fair Play’s complaint about YouTube’s violation of the Children’s Online Privacy Protection Act, which resulted in Google and YouTube paying a significant fine. These actions demonstrate the impact civil society organizations can have in holding tech companies accountable for their data protection practices.
Community-based human rights impact assessments are crucial for ensuring child-centred data protection. This involves consulting with companies, working with technical and legal experts, and including meaningful consultation with children. By involving children in the process, civil society organizations can better understand the implications of data processing and ensure that their rights and interests are taken into account.
Civil society organizations should also involve children in data governance. Involving children in activities such as data subject access requests can help them understand the implications of data processing and empower them to participate in decision-making processes. Additionally, auditing community-based processes involving artificial intelligence could involve older children, allowing them to contribute to ensuring ethical and responsible data practices.
Education about data processing and its impacts is crucial for meaningful child involvement. It is important for people, including children, to understand the implications of data governance for their rights. Practical activities, like writing to a company to request their data, can be incorporated into education to provide a hands-on understanding of the subject.
Civil society organizations need to collaborate with experts for effective child involvement. In complex assessments, a wide range of expertise is required, including academics, technologists, and legal experts. By collaborating with experts, civil society organizations can ensure that their efforts are based on sound knowledge and expertise.
Age verification should not be the default solution for protecting minors’ data. Other non-technical alternatives should be investigated and considered. Different jurisdictions have differing views on the compliance of age verification products with privacy laws, highlighting the need for careful consideration and evaluation of such solutions.
In efforts to protect children’s data, it is essential to centre the most vulnerable and marginalised children. Children are not a homogeneous group, and it is important to address the varying levels of vulnerability and inclusion across different geographies and demographics.
Designing products for edge cases and risky scenarios is crucial for digital safety. Afsaneh Rigot’s work on inclusive design advocates for designing from the margins, as this benefits everyone. By considering the most difficult and risky scenarios, civil society organizations can help ensure that digital products and platforms are safe and accessible for all.
In conclusion, civil society organizations have a vital role to play in championing child-centred data protection. Through advocacy, litigation, regulatory requests, human rights impact assessments, involvement in data governance, education, collaboration with experts, exploring non-technical alternatives to age verification, considering the needs of the most vulnerable children, and designing for edge cases, these organizations can contribute to a safer and more inclusive digital landscape for children.
Theodora Skeadas
The discussion revolves around several key issues related to children’s data protection and legislation. One focal point is the importance of understanding international children’s rights principles, standards, and conventions. The UN Convention on the Rights of the Child features prominently as a widely ratified international human rights treaty that enshrines the fundamental rights of all children under the age of 18, serving as a foundational document in safeguarding children’s rights.
Another significant aspect highlighted is the need for appropriate data collection, processing, storage, security, access, and erasure. It is emphasized that organizations should only collect data for legitimate purposes and with the consent of parents and guardians. Moreover, these organizations should use children’s data in a way that is consistent with their best interest. Implementing adequate security measures to protect children’s data is also underscored as crucial.
Consent, transparency, data minimization, data security, and profiling are identified as major issues surrounding personal data collection, processing, and profiling. It is mentioned that children may not fully understand what it means to consent to the collection and use of their personal data. Additionally, organizations may not be transparent about how they collect, use, and share children’s personal data, making it difficult for parents to make informed decisions. The over-collection of personal data by organizations is also highlighted as a concern.
The need for strengthening legal protection, improving transparency and accountability, as well as designing privacy-enhancing technologies, is emphasized as ways to address the issues related to children’s data. Governments can play a role in strengthening legal protections for children, such as requiring parental consent and prohibiting organizations from profiling children through targeted advertising. It is also mentioned that educating parents and children about the risks and benefits of sharing personal data online is crucial. Technologists are encouraged to design products and services that collect and use less personal data from children.
There is a global focus on legislation discussions that will impact child safety. Measures such as the Digital Services Act and Digital Markets Act in the European Union, as well as the UK online safety bill, are mentioned as examples of legislation that will have an impact on child safety.
In the context of the United States, there is a gap in legislation related to assistive education technology (ed tech) in schools. Existing bills mostly focus on access, age verification, policies, and education, rather than addressing the usage of assistive technology.
There is also concern about the challenges faced in passing comprehensive legislation related to children’s data, particularly due to competing interests and a divided political landscape. It is acknowledged that despite the proliferation of data and data-related issues concerning children, passing effective legislation proves difficult.
The dataset analysis also reveals the need to educate legislators about the rights and principles of children. Often, legislators may not be adequately informed about the rights of children and the specific meaning of rights like privacy and freedom of expression in the context of children.
The importance of including children in decision-making processes is emphasized as it makes legislation child-centric and serves the intended purpose well. Inclusion of children in the legislative process ensures that their voices and perspectives are heard and considered.
The analysis also highlights the necessity of considering the needs of children from diverse backgrounds. It is crucial to acknowledge and address the unique challenges and requirements of children from different social, cultural, and economic circumstances.
Furthermore, the inclusion of children as active participants in conversations about their well-being is stressed. This can be done through their participation in surveys, focus groups, workshops, and empowering them to advocate for themselves in the legislative process.
There is a suggestion for children to be represented on company advisory boards, emphasizing the importance of their inclusion and representation in corporate governance.
In conclusion, the discussion delves into various aspects of children’s data protection and legislation, shedding light on key issues and suggestions for addressing them. It emphasizes the significance of understanding international children’s rights principles, implementing appropriate data collection and processing practices, ensuring transparency, accountability, and consent, and designing privacy-enhancing technologies. Additionally, it highlights the importance of including children in decision-making processes, considering their diverse needs, and strengthening legal protection. However, there is recognition of the challenges posed by political division and the difficulties in passing comprehensive legislation.
Njemile Davis-Michael
During the discussion, various topics relating to data governance and the impact of digital technology on protecting children’s rights and promoting their well-being were covered. One significant highlight was the influence of the United States Agency for International Development (USAID) in technological innovation, as well as its efforts in humanitarian relief and international development. With 9,000 colleagues spanning 100 countries, USAID plays a significant role in funding initiatives to improve digital literacy, promote data literacy, enhance cybersecurity, bridge the gender digital divide, and protect children from digital harm.
Digital tools were identified as increasingly important for adults working to protect children. These tools, such as birth registration systems and case management support, help facilitate the protection and integration of children into broader social and cultural norms. However, it was acknowledged that increased digital access can also lead to increased risks, including cyberbullying, harassment, gender-based violence, hate speech, sexual abuse and exploitation, recruitment into trafficking, and radicalization to violence. The negative consequences of these risks were highlighted, such as limited exposure to new ideas, restricted perspectives, and impaired critical thinking skills due to data algorithms.
To address these risks, it was argued that better awareness, advocacy, and training for data privacy protection are crucial. The lack of informed decision-making about data privacy was identified as an issue that transfers power from the data subject to the data collector, with potentially long-lasting and harmful consequences. Recognizing the need for safer digital environments, data governance frameworks were presented as a solution to mitigate the risks of the digital world. These frameworks can create a safer, more inclusive, and more exciting future.
The importance of responsible and ethical computer science education for university students was emphasized. Collaboration between USAID and the Mozilla Foundation aims to provide such education in India and Kenya, with the goal of creating technology with more ethical social impacts. The integration of children’s rights in national data privacy laws was also advocated, highlighting the need for a legal framework that safeguards their privacy and well-being.
Empowering youth advocates for data governance and digital rights was seen as a positive step forward, with projects like Project Omna, founded by Omar, a youth advocate for children’s digital rights, gaining support and recognition. The suggestion to utilize youth networks and platforms to inspire solutions further highlighted the importance of involving young voices in shaping data governance and digital rights agendas.
The tension between the decision-making authority of adults and the understanding of children’s best interests was acknowledged. It was argued that amplifying children’s voices in the digital society and discussing digital and data rights in formal education institutions is necessary to bridge this gap and ensure the protection of children’s rights.
Notably, the need for a children’s Internet Governance Forum (IGF) was highlighted, recognizing children as stakeholders in internet governance. It was agreed that raising awareness and capacity building are essential in bringing about positive changes for children within this sphere.
In conclusion, the discussion shed light on the crucial role of data governance and digital technology in safeguarding children’s rights. It emphasized the importance of responsible technological innovation, data privacy protection, and the inclusion of children’s voices in decision-making processes. By addressing these issues, society can create a safer and more inclusive digital world for children, where their rights are protected, and their well-being is prioritized.
Moderator
The discussion on children’s privacy rights in the digital environment emphasised the importance of protecting children from data exploitation by companies. One argument raised was the need for regulatory and educational strategies to safeguard children’s privacy. Age-appropriate design codes were highlighted as a valuable mechanism for respecting and protecting children’s privacy, considering their age and understanding the link between privacy and other rights. Professor Sonia Livingstone, who was part of the drafting group for general comment number 25, stressed the need for a comprehensive approach that ensures children’s privacy rights are incorporated into the design of digital products and services.
The .Kids initiative was discussed as an example of efforts to promote child safety online. This initiative, which focuses on children’s rights and welfare, enforces specific guidelines based on the Convention on the Rights of the Child. It also provides a platform for reporting abuse and restricted content. Edmon Chung, in his presentation on the .Kids initiative, highlighted the importance of protecting children’s safety online and addressed the issue of companies exploiting children’s data.
USAID’s involvement in digital innovation and international development was also mentioned. The organisation works with colleagues in various countries and supports initiatives related to digital innovation. Their first digital strategy, launched in 2020, aims to promote technological innovation and the development of inclusive and secure digital ecosystems. USAID is committed to protecting children’s data through initiatives such as promoting awareness, aligning teaching methods with EdTech tools, and working on data governance interventions in the public education sector.
The discussion also brought attention to the risks children face in the digital environment, including online violence, exploitation, and lack of informed decision-making regarding data privacy. It was emphasised that digital tools play a significant role in protecting children and aiding in areas such as birth registration, family tracing, case management, and data analysis. However, the risks associated with digital tools must also be addressed.
Civil society organisations were recognised for their crucial role in advocating for child-centered data protection. They engage in advocacy related to law and policy, and their efforts have resulted in updated guidance on children’s privacy in educational settings and the investigation of violations of children’s privacy laws. The importance of involving children in data governance and policy development was highlighted, along with the need for meaningful consultation and education.
The discussion underscored the need for age verification mechanisms and risk assessments to ensure the protection of children online. The development of age verification products that comply with privacy laws was seen as a vital step. Concerns were raised regarding the lack of transparency and oversight in current age assessment methods. It was suggested that products should be designed for difficult and risky scenarios to benefit all users.
Overall, the insights from the discussion highlighted the importance of protecting children’s privacy in the digital environment and called for action to create a safer and more inclusive online space for children.
Session transcript
Moderator:
Finally, scan the Mentimeter QR code, which will be available on the screen shortly, or use the link in the chat box to express your expectations from the session. As a reminder, I would like to request all the speakers and the audience who may ask questions during the Q&A round to please speak clearly and at a reasonable pace. I would also like to request everyone participating to maintain a respectful and inclusive environment in the room and in the chat. For those who wish to ask questions during the Q&A round, please raise your hand. Once I call upon you, you may use the standing microphones available in the room. And while you do that, please state your name and the country you are from before asking the question. Additionally, please make sure that you mute all other devices when you are speaking so as to avoid any audio disruptions. If you are participating online and have any questions or comments and would like the moderator to read out your question or comment, please type it in the Zoom chat box. When posting, please end your sentence with a question mark to indicate that it is a question, or use a full stop to clearly indicate that it is a comment. Thank you. Let us now begin the session. Ladies and gentlemen, thank you very much again for joining today’s session. I am Ananya. I am the youth advisor to the USAID Digital Youth Council, and I will be the on-site moderator for today’s session. Mariam from Gambia will be the online moderator, and Nelly from Georgia will be the rapporteur for this session. Today, we embark on a journey that transcends the boundaries of traditional discourse and delves into the intricate realm of safeguarding children’s digital lives. In this age of boundless technological advancements, we find ourselves standing at a pivotal juncture where the collection and utilization of children’s data have reached unprecedented heights. From the moment their existence
becomes evident, their digital footprints begin to form, shaping their online identities even before they can comprehend the implications. Ultrasound images, baby cameras, social media accounts, search engine inquiries: the vast web of interconnected platforms weaves a tapestry of data, silently capturing every heartbeat, every interaction. But amidst this digital tapestry lies a profound challenge: the protection of children’s data and their right to privacy. Children, due to their tender age and limited understanding, may not fully grasp the potential risks, consequences, and safeguards associated with the processing of their personal information. They are often left vulnerable, caught in the crossfire between their innocent exploration of the online world and the complex web of data-collecting institutions. Hence today, we are gathered here to delve deeper into the discourse on children’s online safety, moving beyond the usual topics of cyberbullying and internet addiction. Our focus will be on answering the following questions. How do we ensure that children in different age groups understand, value, and negotiate their digital self and privacy online? What capabilities or vulnerabilities affect their understanding of their digital data and digital rights? What is a good age verification mechanism, such that the mechanism does not itself end up collecting even more personal data? And finally, how can we involve children as active partners in the development of data governance policies and integrate their evolving capabilities, real-life experiences, and perceptions of the digital world to ensure greater intergenerational justice in laws, policies, strategies, and programs? We hope that this workshop will help the attendees unlearn the current trend of universal and often adult-oriented treatment of all users, which fails to respect children’s evolving capacity, often lumping them into overly broad categories.
Attendees will be introduced to the ongoing debates on the digital age of consent. Panelists will also elaborate on children’s perception of their data self and the many types of children’s privacy online. Participants will also be given a flavor of the varying national and international conventions concerning the rights of children regarding their data. As our speakers come from a range of stakeholder groups, they will provide the attendees with a detailed idea of how a multi-stakeholder, intergenerational, child-centered, child-rights-based approach to data governance-related policies and regulations can be created. I invite you all to actively engage in the session, to listen to our esteemed panelists, and to ask questions, contribute your insights, and share perspectives. I would now like to introduce our speakers for today. To begin with, we have Professor Sonia Livingstone, who is a professor at the Department of Media and Communications at the London School of Economics. She has published about 20 books and advised the UN Committee on the Rights of the Child, OECD, ITU, and UNICEF on children’s safety, privacy, and rights in the digital environment. Next, we have Edmon Chung, who serves as the CEO of .Asia, on the board of ICANN, Make a Difference, Engage Media, the Exco of ISOC Hong Kong, and the Secretariat of APrIGF. He has co-founded the Hong Kong Kids International Film Festival and participates extensively in internet governance matters. Next, we have Njemile Davis-Michael, who is a Senior Program Analyst in the Technology Division of USAID, where she helps to drive the agency’s development efforts related to internet affordability, data governance, and protecting children and youth from digital harms. Next, we have Emma Day, who is a human rights lawyer specializing in human rights and technology, and she is also the co-founder of TechLegality.
She has been working on human rights issues for more than 20 years now, and has lived for five years in Africa and six years in Asia. And last but not least, we have Theodora Skeadas, who is a technology policy expert. She consults with civil society organizations, including, but not limited to, the Carnegie Endowment for International Peace, the National Democratic Institute, the Committee to Protect Journalists, and the Partnership on AI. I would now like to move to the next segment. I now invite our speakers to take the floor and convey their opening remarks to our audience. I now invite Professor Sonia Livingstone to please take the floor.
Sonia Livingstone:
Thank you very much for that introduction, and it’s wonderful to be part of this panel. So I want to talk about children’s right to privacy in the digital environment. And as with other colleagues here, I’ll take a child rights focus, recognizing holistically the full range of children’s rights in the Convention on the Rights of the Child, and then homing in on Article 16 on the importance of the protection of privacy. So I was privileged to be part of the drafting group for general comment number 25, which is how the Committee on the Rights of the Child specifies how the convention applies in relation to all things digital. And I do urge people to read the whole document. I’ve just here highlighted a few paragraphs about the importance of privacy and the importance of understanding and implementing children’s privacy, often through data protection and through privacy by design, as part of a recognition of the wider array of children’s rights. So respect for privacy must be proportionate, part of the best interests of the child, and must not undermine children’s other rights, but ensure their protection. And I really put these paragraphs up to show that we are addressing something complex in the offline world, and even more complex, I fear, in the digital world, where data protection mechanisms are often our main, but not only, tool to protect children’s privacy in digital contexts. I’m an academic researcher, a social psychologist, and in my own work I spend a lot of time with children seeking to understand exactly how they understand their rights and their privacy, and we did an exercise as part of some research a couple of years ago that I wanted to use to introduce the types of privacy and the ways in which children, as well as we, could think about privacy. So, as you can see on the screen, we did a workshop where we asked children their thoughts on sharing different kinds of information with different kinds of sources, with different organisations.
What would they share, and under what conditions, with their school, with trusted institutions like the doctor or a future employer? What would they share with their online peers and contacts? What would they share with companies, and what do they want to keep to themselves? And we used this as an exercise to show that children know quite a lot, they want to know even more, and they don’t think of their privacy only as a matter of their personal, their interpersonal privacy, but it is very important to them that the institutions and the companies also respect their privacy. And if I can summarise what they said in one sentence, about the idea that companies would take their data and exploit their privacy, the children’s cry was, it’s none of their business. And the irony that we are dealing with here today is that it is precisely those companies’ business. We can see some similar kinds of statements from children now around the world in the consultation that was conducted to inform the UN Committee on the Rights of the Child on General Comment 25. And as you can see here, children around the world have plenty to say about their privacy and understand it exactly both as a fundamental right in itself and also as important for all their other rights. Privacy mediates safety, privacy mediates dignity, privacy mediates the right to information, and so forth, many more. I think we’re now in the terrain of looking for regulatory strategies as well as educational ones. And I was asked to mention, and I think this panel will discuss, the idea of age appropriate design codes, particularly as one mechanism that has really proven invaluable, and we will talk further about this, I know. But the idea that children’s privacy should be respected and protected in a way that is appropriate to their age, and that understands the link between privacy and children’s other rights, I think this is really important.
And we see this regulatory move now happening in a number of different international and national contexts. I’ve spent the last few years working with the Five Rights Foundation as part of running the Digital Futures Commission. And I just wanted to kind of come back to that holistic point here. In the Digital Futures Commission, we asked children to comment on and discuss all of their rights in digital contexts, and not just as a research project, but as a consultation activity, to really understand what children think and want to happen, and to be heard on a matter that affects them, and privacy online is absolutely a matter that affects them. And we used this to come up with a proposal for child rights by design, which builds on initiatives for privacy by design, safety by design, and security by design, but goes beyond them to recognize the holistic nature of children’s rights. And so here we really pulled out 11 principles based on all the articles of the UN Convention on the Rights of the Child. And so you can see that privacy is a right to be protected in the design of digital products and services, as part of attention to children’s rights and age-appropriate service, building on consultation, supporting children’s best interests, promoting their safety, well-being, development and agency, and I will stop there, and I look forward to the discussion. Thank you.
Moderator:
Thank you very much, Professor Livingstone, that was very, very insightful. We will now move to Edmon. Would you like to take the floor?
Edmon Chung:
Hello. Thank you. Thank you for having me. Edmon from .Asia here, and building on what Sonia just mentioned, I will be sharing a little bit about our work at .Kids, which is actually also kind of trying to operationalize the Convention on the Rights of the Child. But first of all, I just want to give a quick background on why .Asia is involved in this. .Asia obviously operates the .asia top-level domain, so you can have domains such as whatever.asia, and that provides the income source for us, and so every .asia domain actually contributes to internet development in Asia. Some of the things that we do include youth engagement, and we are actually very proud that the NetMission program is the longest-standing youth internet governance engagement program. And that sort of built our interest, or our awareness, in supporting children and children’s rights online. Back in 2016, we actually launched a little program that looked at the impact of the sustainable development goals and the internet. And we recently launched an eco-internet initiative, but I’m not going to talk about that. What I want to highlight is that engaging children on platforms, including top-level domains, is something that I think is important. So on the specific topic of .Kids, the .Kids initiative actually started more than a decade ago, in 2012, when the application for the .Kids top-level domain was put in through ICANN. Right at that point, there was an engagement with the children’s rights and children’s welfare community about the process itself, but I won’t go into details. What I did want to highlight is that part of the vision of .Kids is actually to engage children to be part of the process in developing policies that affect them and to involve children’s participation and so on.
And in fact, in 2013, while we were going through the ICANN process, we actually helped support the first children’s forum focused on the ICANN process itself, and that was held in April of 2013. Fast forward 10 years, we were finally able to put .Kids into place, well, actually last year: the .Kids top-level domain actually entered the internet on April 4th of 2022 and was launched late last year, on November 29th of 2022. So it is less than a year old, so really not even a toddler for .Kids. But let’s focus on the difference between .Kids and, for example, .asia or .com. One of the interesting things is that at the ICANN level, there is no difference. For ICANN, operating .Kids would be exactly the same as operating .com. We disagreed, and that’s why we engaged in the decade-long campaign to operate .Kids, and we believe that there are policies that are required above and beyond just a regular registry, just a regular .com or … wherever, because there is not only a set of expectations, there are … it is important for … and here is why we say it’s the kids’ best interest domain. That is the idea behind .Kids, so let’s look at part of the registry policies. For .Kids ourselves, if you think about it, of course we don’t keep children’s data or data about kids, but does that mean we don’t have to have policies around the registry or for .Kids domains themselves? Well, we think no. And building off what Professor Livingstone was saying, we in fact have a set of guiding principles that was developed with support from the children’s rights and welfare community and based on the Convention on the Rights of the Child. And of course, there are additional kids-friendly guidelines, there’s a kids’ anti-abuse policy, and also kids’ personal data protection policies.
And I wanna highlight that the entire set of guiding principles is actually based on the Convention on the Rights of the Child, probably not all the articles, but certainly articles that outline protection and prohibited materials. One way to think about it is that for the .Kids domain, we do enforce restrictions on content, and the best way to think about it is really that if you think of a movie, the restricted content or the rated R movies would obviously not be acceptable on .Kids domains. But on top of that, we also have specific privacy provisions built on Article 16, as Sonia mentioned earlier, and some other aspects that are around the Convention on the Rights of the Child. So we think there’s something important being built into it, and we’re definitely the first registry that builds policies around the Convention on the Rights of the Child, but we are also one of the very few domain registries that would actually actively engage in suspension of domains, or processes to deal with restricted content. Beyond that, there’s a portal and a platform to report abuse and to alert us to issues. And in fact, I can report that we have already taken action on abusive content and restricted content and so on. But I would like to end with a few items. There are certainly a lot of abuses on the internet. But the abuses that are appropriate for top-level domain registries to actually act on are a subset of that. There are many other abuses that happen on the internet. And there are different types of DNS abuses and different types of cyber abuses that may or may not be effective for the registry to take care of. And that’s, I guess, part of what we discussed. That’s why we bring it to the IGF and these types of forums to discuss, because there are other stakeholders that need to help support a safer environment online for children. So with that, I guess there are a number of acts that have been put in place in recent years.
And I think .Kids is a good platform to support the Kids Online Safety Act in the US and the Online Safety Bill in the UK. We do believe that collaboration is required in terms of security and privacy. And one of the visions, as I mentioned, for .Kids is to engage children in the process. And we hope that we will reach there soon. But it’s still in its toddler phase. So it doesn’t generate enough income for us to bring everyone here. But the vision itself is to put the policies and protection in place and also, into the future, be able to support children’s participation in this internet governance discussion that we have.
Moderator:
Okay. Thank you so much, Edmon. That was very, very inspiring. Let’s now go to Njemile.
Njemile Davis-Michael:
Thank you, Ananya. Wonderful to be here. Thank you so much for joining the session and giving me the opportunity to speak about USAID’s work in this area. So USAID is an independent agency of the United States government, where I work with 9,000 colleagues in a hundred countries around the world to provide humanitarian relief and to fund international development. In the technology division where I sit, there are a number of initiatives that we support related to digital innovation, from securing last-mile internet connectivity to catalyzing national models of citizen-facing digital government. And we work in close collaboration with our U.S. government colleagues in Washington to inform and provide technical assistance, to support locally-led partnerships, and to create the project ideas and infrastructure needed to sustain the responsible use of digital tools. Although we rely consistently on researching, developing, and sharing best practices, our activity design can be as varied as the specific country and community contexts in which we are called to action. Indeed, the many interconnected challenges that come with supporting the development of digital societies have challenged our own evolution as an agency. So in early 2020, we launched USAID’s first digital strategy to articulate our internal commitment to technological innovation, as well as to the support of open, inclusive, and secure digital ecosystems in the countries we serve through the responsible use of digital technology. The strategy is a five-year plan that is implemented through a number of initiatives, and there are some that are particularly relevant to our work with young people.
Specifically, we have made commitments to improve digital literacy, to promote data literacy through better awareness, advocacy, and training for data privacy protection and national strategies for data governance, to improve cyber security, to close the gender digital divide and address the disproportionate harm women and girls face online, and to protect children and youth from digital harm. Each of these initiatives is supported by a team of dedicated professionals that allow us to think about how we work at the intersection of children and technology. Digital tools play an increasingly important role for adults working to protect children, for example, by facilitating birth registration, providing rapid family tracing, supporting case management, and by using better, faster analysis of the data collected to inform the effectiveness of these services. And they can also play a role in the development and integration of children themselves into larger social and cultural norms by providing a place to learn, play, share, explore, and test new ideas. Indeed, many children are learning how to use a digital device before they even learn how to walk. However, we also know that increased digital access also means increased risk. And so in the context of protecting children and youth from digital harm, USAID defines digital harm as any activity or behavior that takes place in the digital ecosystem and causes pain, trauma, damage, exploitation, or abuse directly or indirectly in either the digital or physical world, whether financial, physical, emotional, psychological, or sexual. For the estimated one in three Internet users who are children, these include risks that have migrated onto or off of digital platforms that enable bullying, harassment, technology-facilitated gender-based violence, hate speech, sexual abuse and exploitation, recruitment into trafficking, and radicalization to violence. 
Because digital platforms also generate and share copious amounts of data, our colleagues who’ve done an incredible amount of highly commendable work at UNICEF, for example, around children’s data, as well as my colleagues on today’s panel, will likely agree that there are other, perhaps less obvious, risks. For example, we’ve observed in recent years that children seem to have given in, I should say, to uniform consent to their data collection, probably due to their naivete and trust of the platforms in which they’re engaging. But a lack of informed decision-making about data privacy and protection effectively transfers power from the data subject to the data collector, and the consequences of this can be long-lasting. The number of social media likes, views, and shares are based on highly interactive levels of data sharing, affecting children’s emotional and mental health. Data algorithms can be leveraged to profile and manipulate children’s behavior, narrowing exposure to new ideas, limiting perspective, and even stunting critical thinking skills. Data leaks and privacy breaches, which are not just harmful on their own but can be orchestrated to cause intentional damage, are another risk. And we can counteract these and other challenges by helping practitioners understand the risks to children’s data and by ensuring accountability for bad actors. The theoretical physicist Albert Einstein is famously quoted as saying that if he had one hour to solve a problem, he would spend 55 minutes thinking about the problem and only five minutes on the solution. And the sheer amount of data that we generate and have access to means that our vision of solving the global challenges we face with data is still very much possible, especially as we are realizing unprecedented speeds of data processing that are fueling innovations in generative AI, that will enable the use of 5G, and that we will see in quantum computing.
So as we celebrate the 50th birthday of the Internet at this year’s IGF, it’s amazing to think about how much all of us here have been changed by the technological innovations paved by the Internet and in that same spirit of innovation, we’re optimistic at USAID that data governance frameworks can help mitigate the risks we see today and be leveraged to create a safer, more inclusive, and even more exciting world of tomorrow, which is the Internet our children want.
Moderator:
Thank you very much, Njemile. Emma, would you like to take the floor next? Emma, are you here with us?
Emma Day:
Thank you. Yes. Can you see my screen?
Moderator:
Yes. Please go ahead.
Emma Day:
Great. Thank you. Okay, so I’ve been asked to answer how civil society organizations can tackle the topic of child-centered data protection. I think this is a multi-stakeholder issue, and there are many things civil society organizations can do. As a lawyer, I’m going to focus on the more law and policy-focused ideas. So there are three main approaches that I have identified. The first is that civil society organizations can engage in advocacy related to law and policy. Second, they can engage themselves in litigation, and requests to regulators, I should say. And third, they can carry out community-based human rights impact assessments themselves. So the first example is advocacy related to law and policy; here the target is policymakers and regulators. As an example of this, I was involved in a project that was led by Professor Sonia Livingstone, who’s also on this panel. And this was part of the UK Digital Futures Commission. And it was a project which involved a team of social scientists and lawyers. And we looked in detail at how the use of ed tech in schools is governed in the UK. And we found it’s not very clear whether the use of ed tech in schools is covered by the UK age appropriate design code, or children’s code. So the situation of data protection for children in the education context was very uncertain. We had a couple of meetings with the ICO, and the Digital Futures Commission also had a group of high-level commissioners they had brought together from government, civil society, the education sector and the private sector. And they held two public meetings about the use of ed tech in UK schools. Subsequently, in May 2023, the ICO published updated guidance on how the children’s code applies to the use of ed tech in UK schools. I won’t go into the details of that guidance now, but suffice to say this was much needed clarification. And it seemed to be a result of our advocacy, although this was not specifically stated.
The second example is civil society organisations engaging themselves in litigation and requests to regulators. So some civil society organisations have lawyers as part of their staff, or they can work with lawyers and other experts. An example of this is an organisation in the US called Fairplay. In 2018, they led a coalition asking the Federal Trade Commission to investigate YouTube for violating the Children’s Online Privacy Protection Act, or COPPA, by collecting personal information from children on the platform without parental consent. And as a result of their complaint, Google and YouTube were required to pay what was then a record $170 million fine in a settlement in 2019 with the Federal Trade Commission. So in response, rather than getting the required parental permission before collecting personal information from children on YouTube, Google claimed instead it would comply with COPPA by limiting data collection and eliminating personalized advertising on their Made for Kids platform. So Fairplay wanted to check if YouTube had really eliminated personalized advertising on their Made for Kids products, and they ran their own test by buying some personalized ads. Fairplay says that their test proves that ads on Made for Kids videos are in fact still personalized and not contextual, which is not supposed to be possible under COPPA. And Fairplay wrote to the Federal Trade Commission in August 2023, made a complaint, and asked them to investigate and to impose a fine of upwards of tens of billions of dollars. We don’t know the outcome of this yet; that complaint was only put in in August this year. And then the third solution, which I think is a really good one for civil society organizations, and which I haven’t really seen done completely in practice yet, is to carry out community-based human rights impact assessments.
So often companies themselves carry out human rights impact assessments, but it’s also absolutely something that can be done at a community level. And this involves considering not just data protection, but also children’s broader human rights as well. It’s a multidisciplinary effort, so it involves consulting with the company about the impact of their of their products and services on children’s rights, perhaps working with technical experts to test what’s actually happening with children’s data through apps and platforms, and working with legal experts to assess whether this complies with laws and regulations. And crucially, this should also involve meaningful consultation with children, and I think we’re gonna talk a little bit later about what meaningful consultation with children really looks like. I’m going to leave it there because I think I’m probably at the end of my time, looking forward to discussing further, thank you.
Moderator:
Thank you very much, Emma. And finally, Theodora, would you like to let us know what your opening remarks are?
Theodora Skeadas:
Yes, thank you so much. Hi, everybody. It’s great to be here with you. Let me just pull up my slides. Alrighty. Okay. Mm-hmm. And hold on one second. Let me just grab. Okay. Great. So, alrighty. So, it’s great to be here with all of you, and I’ll be spending a few minutes talking about key international children’s rights principles, standards, and conventions, as well as major issue areas around personal data collection, processing, and profiling, and then some regulation and legislation to be keeping an eye out for. So, I’ll start with standards and conventions and then turn to some principles. So, some of the major relevant standards and conventions that are worth discussing I’ve listed here, which include the UN Convention on the Rights of the Child, a widely ratified international human rights treaty, which enshrines the fundamental rights of all children under age 18. It includes a number of provisions that are relevant to children’s data protection, such as the right to privacy, the right to the best interests of the child, and the right to freedom of expression. Also, the UN guidelines for the rights of the child as they relate to the digital environment, from 2021. These guidelines provide guidance around how to apply the UNCRC, or the Convention on the Rights of the Child, to children’s rights in the digital environment, and they include a number of provisions that are relevant to children’s data protection, like the right to privacy and confidentiality, the right to be informed about the collection and use of data, and the right to have data erased. Then there’s GDPR, or the General Data Protection Regulation. This is a comprehensive data protection law that applies to all organizations that process the data of those in Europe, although sometimes this has been extended beyond that, specifically for companies or employers that are international and exist beyond the European area. It includes a number of special provisions for children as well.
Then COPPA, the Children’s Online Privacy Protection Act in the US, is a federal law that protects the privacy of children under age 13 and requires websites and online services to obtain parental consent before collecting or using children’s personal information. So some of the principles that are important to discuss here include data collection, data use, data storage and security, data access and erasure, transparency, and accountability. So this means that organizations should only collect data for legitimate purposes and with the consent of parents and guardians. On data use, it’s that organizations should use children’s data in a way that is consistent with their best interests. On data storage and security, organizations should implement appropriate security measures to protect children. On data access and erasure, organizations should give children and their parents or guardians access to children’s personal data and the right to have it erased. On transparency and accountability, organizations should be transparent about what they’re doing to make sure that they’re protecting children. Additionally, there’s age-appropriate design, privacy by default, data minimization, and parental control. Products and services should be designed with the best interests of children in mind, and also be appropriate for their age and developmental stage. On privacy by default, products and services should be developed with privacy in mind. On data minimization, products and services should only collect and use the minimum amount of data required. On parental controls, products and services should provide parents with meaningful control over their children’s online activities. Major issues around personal data collection, processing, and profiling that are in discussion today include consent, so children may not fully understand what it means to consent to the collection and use of their personal data.
That’s also true for adults, but it’s especially true for children. Transparency, so organizations may not be transparent about how they collect, use, and share children’s personal data, which can make it difficult for parents to make informed decisions about their children. Data minimization, so organizations often collect more personal data than is necessary for the specific purpose. This excess data can serve other purposes, like targeted ads and profiling. On data security, organizations may not be implementing adequate security measures to protect the personal data of children from unauthorized access, disclosure, modification, and destruction, which can put children at risk. Profiling, so organizations may use children’s personal data to create profiles, which can be used to target children with advertising and content that might not be in their best interests. Additionally, strengthening legal protection. So there’s an ongoing conversation around how governments can strengthen legal protections for children, such as requiring parental consent and prohibiting organizations from profiling children for targeted advertising. Also raising awareness. There is a huge conversation ongoing now about how parents and children should be educated about the risks and benefits of sharing personal data online, to make sure they’re making informed decisions about what to share and what not to share. Also improving transparency and accountability. Organizations should be transparent about how they collect, use, and share children’s personal data, and they should be accountable for that data. And then last is designing privacy-enhancing technologies. Technologists can design products and services that collect and use less personal data from children, and also that help children and parents manage their privacy online. So next, we’ll look at regulation and legislation. We’ve been seeing a huge amount of regulation and legislation in this space. In the U.S. context, we’ve seen some U.S.
federal bills, but because those haven’t passed, we’ve been seeing a transition to state-level bills. So I wanna pull up, there we go. So this is a piece that I wanted to share that talks about bills in this area that we’re seeing in the U.S. So there is here a compilation of 147 bills across the U.S. states. Not all are represented, but a lot of them are, and interestingly, states across the political divide. And you can see here the legislation that’s in discussion includes themes like age verification, more age verification, instruction, parental consent, data privacy, technologies, access issues, more age verification, so that’s clearly a recurring theme, recommendations on the basis of data, et cetera. And you can see here, there are some categories. So we see law enforcement, parental consent, age verification, privacy, school-based, hardware, filtering, perpetrator, so that looks at safety, algorithmic regulation, and more. And then we can see the methods. So these include third parties, state digital IDs, commercial providers, government IDs, self-attestation, and then you can see what ages these are targeting. So mostly they’re targeting age 18, but there are a few that look at 13, and sometimes other ages as well. And then the final categories of analysis look at access-limited content or services, content types, and status. And I think that is it. Thank you so much.
Moderator:
Thank you very much, Theodora. I have received a request, actually, from the audience: if you could kindly share the link to the website that you were just sharing with us, that would be great. It was a very, very good remark. Thank you very much. Okay, so we will now be moving on to the next segment, where I will be directing questions to each of our speakers. We will begin with Professor Sonia Livingstone. While I had a set of questions prepared for you, Professor Livingstone, I think you answered most of those, so let’s pick something from what you focused on in your opening remarks. You mentioned the age-appropriate design code. So I wanted to know your views on the age-appropriate design code for different countries, since what is appropriate for what age differs across cultural, national, international, and local contexts. What would you like to say about that, and how can an age-appropriate design code be the answer in such a context?
Sonia Livingstone:
That’s a great question, and I think others will want to pitch in. My starting point is to say that if we’re going to respect the rights of children online, we have to know which user is a child. The history of the internet so far is a failed attempt to respect children’s rights without knowing which user is a child. At the moment, we either have no idea who a user is, or product producers somehow assume that the user is an adult, often in the global north, often male, and rather competent to deal with what they find. So we need a mechanism, and the extent to which age assurance is being taken up in the global north and the global south shows the genuine need to identify who is a child. There are two problems, and one you didn’t highlight: it does mean that we need, in some way, to identify the age of every user in order to know which ones are children. So there’s the question of mechanism, which others have alluded to, and then, as you rightly say, what is appropriate for children of different ages varies in different cultures. I would answer that by returning to the UN Convention on the Rights of the Child, which addresses children’s rights at the level of the universal. 
But there are also many provisions in the convention, and also in general comment 25 about how this can be and should be adjusted and tailored to particular circumstances, not to qualify or undermine children’s rights, but to use mechanisms that are appropriate to different cultures. And I think this will always be contested and probably should be. But at heart, if you read the age appropriate design codes, they focus on the ways in which data itself is used by companies in order to support children’s rights rather than setting a norm for what children’s lives should look like.
Moderator:
Thank you very much, Professor Livingstone. That was a very detailed and very nuanced answer. Next, Edmon: since we are on the subject of age, what do you think is a good age verification mechanism, one which does not in itself lead to the collection of more personal data?
Edmon Chung:
Of course, that is a very difficult question, but I guess a few principles to start with. First of all, privacy is not just about keeping data secure and confidential. The first question in privacy is whether the data should be collected and kept in the first place. If it is just an age verification, and whoever verifies it discards, erases, or deletes the data after the verification, there should be no privacy concern. But of course, platforms and providers don’t usually do that, and that’s one of the problems. The principle itself should be just like when you show your ID: the person takes a look at it, you go in, and that’s it. They don’t take a picture of it and keep a record of it. So that’s privacy to start with. The other thing we need to think about is whether the age verification is to keep children out or to let children in. That’s a big difference in terms of how you would then deal with it, and especially in whether the data should be kept or discarded. Now, on the actual verification mechanism: there are, in fact, well-developed systems now to do what are called pseudonymous credentials. Basically, the platform or the provider doesn’t have to know the exact data, but can establish digital credentials, using digital certificates and cryptographic techniques, such that parents can vouch for the age and complete the verification without disclosing the child’s personal data. I think these are the mechanisms that are appropriate. And more importantly, I go back to the main thing: if it is just for age verification, whatever data was used should be discarded the moment the verification is done. Thank you very much.
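The pseudonymous-credential idea described above can be sketched roughly as follows. This is a hypothetical illustration only, not any specific deployed system: the function names are invented, and a shared HMAC key stands in for the asymmetric certificates a real scheme would use. A verifier checks the birth date once, issues a signed claim that reveals only the yes/no result under a random pseudonym, and retains nothing identifying:

```python
import hashlib
import hmac
import json
import secrets
from datetime import date

# The verifier's signing key. Sketch only: a real deployment would use
# asymmetric signatures so the platform never holds the signing key.
SECRET_KEY = secrets.token_bytes(32)

def issue_age_credential(birth_date: date, threshold: int = 13) -> dict:
    """Check the age once, then emit a signed claim revealing only the
    yes/no result. The birth date itself never enters the credential."""
    today = date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    claim = {
        "over_threshold": age >= threshold,
        "threshold": threshold,
        "subject": secrets.token_hex(8),  # random pseudonym, not an identity
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}
    # birth_date goes out of scope here: nothing identifying is retained

def platform_accepts(credential: dict) -> bool:
    """The platform checks the signature and the claim; it never sees the
    underlying birth date or any identity document."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["sig"])
            and credential["claim"]["over_threshold"])
```

The design choice mirrors the "show your ID and walk in" analogy: the only thing that survives the check is a signed yes/no, so there is nothing sensitive left to leak or repurpose.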
Moderator:
That was very comprehensive. Next, Njemile: how is USAID thinking about data governance, especially in relation to children’s data governance?
Njemile Davis-Michael:
Yeah. We spend a lot of time thinking about data governance, and that’s because data really fuels the technology that we use: it either generates data in some way or uses data for its purpose. And technologies have a tendency to reinforce existing conditions, so we wanna be really intentional about how data is used to that end. Data governance is important for a few basic reasons. One is because data by itself is not intelligent, so it’s not gonna govern itself. And because data multiplies when you divide it, there is so much of it, right? We know that the sheer amount of data that we’re generating needs to be wrangled in some manner if we’re gonna have some control over the tools that we’re using. So a data governance framework helps us to think about what needs to be achieved with the data, who will make decisions about how the data is treated, and how governance of the data will be implemented. Writ large, we look at five levels of data governance implementation, everything from transnational data governance down to individual protections and empowerment, and that’s really the sweet spot for us in thinking about children: it’s about developing awareness and agency, about participation in data ecosystems. Kind of in the middle is sectoral data governance, where we find that there are highly developed norms around data privacy, data for decision-making, and data standardization that help structure data-driven tools like digital portals for sharing data. And so we are currently working with the Mozilla Foundation on a project similar to the one that we heard Emma talking about, where we are working in India’s public education sector to think about data governance interventions there. India has one of the largest, if not the largest, school systems for children in the world. 
150 million children are enrolled in about 1.5 million schools across the country. India had one of the longest periods of shutdown during COVID-19, and EdTech stepped into that gap very altruistically, right, to try to close gaps in student education. However, as Emma has pointed out and as we have found in our own research, there were some nuances in the ways that these EdTech institutions were thinking about student learning compared to the way schools were. Private industry is incentivized by the number of users and not necessarily by learning outcomes. There needed to be some clarity around the types of standards that EdTech companies are to meet. There’s a reliance on EdTech replacing teachers’ interaction with students, and data subjects generally lack awareness about how their data is used by EdTech and schools to measure student progress and learning. So we’re currently working with a number of working groups in India to really understand how to bridge this gap and to synchronize the collection and analysis of data in a way that harmonizes analog tools with digital tools. So, for teachers who are taking attendance, how does that correlate to scores on EdTech platforms? We’re focused right now on the education sector, but we imagine that this is going to have implications for other sectors as well. We’re also working in partnership with the Mozilla Foundation to look at responsible and ethical computer science for university students, also in India and in Kenya. And here, we’re hoping to educate the next generation of software developers to think more ethically about the social impacts of the technology they create, including generative AI. 
And then, going back to the work we’re doing on protecting children and youth from digital harm, we are extremely proud to be working alongside and supporting youth advocates through our Digital Youth Council. We have Ananya, who participated in Cohort 1, and Mariam, who was, I believe, in the room a little bit earlier, helping to moderate the chat for today’s session, who are extraordinary examples of the type of talent that we’ve been able to attract and to learn from. In year 2 of the cohort, we received almost 2,700 applications worldwide, and from that number we selected 12 council members, and we’re anticipating just as fabulous results from them. So that’s generally how we are thinking about children’s data through our data governance frameworks. Riffing off of what I’ve heard today, we can also advocate through data governance for the inclusion and enforcement of the rights of children in national data privacy laws, especially as we know, here at the IGF, that lots of countries are thinking about how to develop those privacy laws. We should be advocating for the rights of children to be included. And in civil society, there’s opportunity to explore alternative approaches to data governance. Data cooperatives, which are community-driven, can help groups think about how to leverage their data for their own benefit. Civil society perhaps also has room to explore the concept of data intermediaries, where a trusted third party works on behalf of vulnerable groups like children to negotiate when their data is accessed, and to enforce sanctions when data is not used in the way that it was intended.
Moderator:
Okay, thank you so much. And since Njemile has already opened the conversation on bringing in civil society, why don’t we move to Emma Day and ask her the next question. So Emma, how do you think civil society organizations could work with children to promote better data protection for children?
Emma Day:
Thanks so much for the question, and yeah, I think Njemile came up with some really good starting points for this conversation already. I think to involve children, it has to really be meaningful, and one of the difficulties, not just with children but with consulting communities in general on these topics of data governance, is that it’s very complex, and it’s hard for people to immediately understand the implications of data processing for their range of rights, particularly projecting into the future and what those future impacts might be. So to begin with, to make that consultation meaningful, you have to do a certain amount of education. One of the great ways to do this is to involve children in things like data subject access requests, where they can be involved in the process of writing to a company and requesting the data that that company is keeping on them, so they can see in practice what’s happening with their data and form a view on what they think about that, and for children to be involved in 
these kinds of community data auditing processes. There has been some community-based auditing of AI going on, which I don’t think has involved children so far, but obviously older children could get involved in these kinds of initiatives. And I think involving children in conceptualizing how data intermediaries can work best for children of different ages is really important. This is something we talked about a couple of years ago now; I was one of the authors of the UNICEF manifesto on data governance for children, and we had a few ideas in there about what civil society organizations can do to involve children, but I haven’t seen a lot of this happen in practice. Another one of the key things that I would like to see is for civil society organizations to involve children in holding companies accountable by auditing their products, by doing these kinds of community-based human rights impact assessments. And I think we need to think about not just the platforms and the apps, but also things like age verification tools, edtech products, health tech products, and tools that are used in the criminal justice system and in the social welfare system; technology products impact almost all areas of children’s lives. And we have to remember that all of these are private sector companies, even where they’re providing solutions that are essentially there to promote children’s rights. We need to ensure that children are involved in auditing those products and in making sure that they really do benefit children’s rights. But to do that, civil society organizations need to involve academics, technologists, and legal experts to make sure that they really get it right, because these are complex assessments to make.
Moderator:
Thank you very much. Let’s move to Theodora. You mentioned a lot of the existing international standards, conventions, and laws regarding children’s rights and their data. What about the regulations and legislation which are underway to address some of these concerns? Are there any particular areas where these regulations could do better, or any other suggestions that you might have for any such future conventions?
Theodora Skeadas:
Hi everyone. That’s a really great question. Thanks, Manya. So I’m gonna screen share again, just so folks can see the database that I was referencing earlier. To me, it’s not so much that there are specific technical gaps in what we’re seeing. And of course, this is a US-focused conversation, and it’s important to mention that there is legislation being discussed globally outside of the US as well, and that legislation happening elsewhere is inclusive of children’s safety issues. For example, in the European Union, transparency-related measures like the Digital Services Act and Digital Markets Act will have impacts on child safety, and the new UK Online Safety Bill, which is underway, will also impact child safety, and legislative discussions are happening elsewhere as well. But within the US, where this data set was collected and where my knowledge is strongest, I think that it is pretty comprehensive, although it’s interesting to note that one of the questions that I saw in the chat touched on a theme that wasn’t discussed in this legislation. Specifically, the question was, and I’m just looking through the chat again, here we go, oh yeah: whether there was legislation related to assistive ed tech in schools. I observed here that there are four school-based policies and two hardware-based policies, but none of them are focused on assistive ed tech. The ones that are focused on schools look more at access, age verification, policies, and education, and the hardware ones are focused more on filtering and technical access. So you can see those here, like requiring tablet and smartphone manufacturers to have filters that are enabled at activation and only bypassed or switched off with a password. So you can see that there is quite a range. To me, the bigger concern is whether this legislation will pass. We see a really divided political landscape. 
And even though we’re seeing a proliferation of data and data-related issues around children in legislative spaces, the concern is that there isn’t going to be a legislative majority for this legislation to pass. So it’s not per se that I see specific gaps and more that I have broader concerns about the viability of legislation and the quality of the legislation, because not all of it is equally as high quality. And so I think the increasing fraught political landscape that we find ourselves in is making it harder to pass good legislation, and there are competing interests at play as well. Thank you.
Moderator:
Thank you very much. I would now like to thank all our speakers for sharing their insights with our attendees. At the very same time, I would like to thank our attendees who I see are having a very lively chat in Zoom. Hence, since you have so many questions, why don’t we open the floor for questions from the audience? We would be taking questions from both onsite and online audience. If you’re onsite and if you have a question, you have two stand mics right there. You could kindly go to the microphones and please ask your question by stating your name and the country you’re from, and post that we will be taking questions from the chat.
Audience:
My name is Jutta Kroll. I’m from the German Digital Opportunities Foundation, where we’re heading a project on children’s rights in the digital environment. First of all, let me state that I couldn’t agree more with what Sonia said in her last statement: if we don’t know the age of all users, age verification wouldn’t make sense. We need to know whether people are over a certain age, belong to a certain age group, or are under a certain age. My question would be: since we need to adhere to the principle of data minimization, have any of you already thought about how we can achieve that without creating a huge amount of additional data? Even the Digital Services Act doesn’t allow collecting additional data just to verify the age of a user. So it’s quite a difficult task, and Edmon has already said companies should delete the data after they do the age verification, but I’m not sure whether we can trust them to do so. So that would be my question. And the second point would also go to the last speaker, Theodora: you gave us a good overview of the legislation, so the question would be, how could we ensure that legislation that is underway takes into account the rights of children from the beginning, not like it was done in the GDPR, where a reference to children’s rights was put into the legislation at the very last minute? Thank you for listening.
Moderator:
Thank you very much. Why don’t we deal with the first half of the question? Would any of the speakers like to take that? We will then direct the second question to Theodora. Yes, please go ahead.
Edmon Chung:
I’m happy to add to what I already said. In those cases, it’s pseudonymized data, right? Instead of collecting the actual data, it is very possible for platforms to implement pseudonymized credential systems. And those vouching for a participant’s age could be distributed: it could be schools, it could be parents, it could be your workplace, or whatever. As long as it is a trusted data store that does the verification and then keeps a pseudonymized credential, the platform should trust that pseudonymized credential. So I think that is the right way to go about it. The other part: as much as I still think it is right to ask for the data to be deleted, can we trust companies? Probably not, but of course we can have regulation and audits and those kinds of things. But the trust anchors themselves, whether it’s a school or whatever trusted anchor the person actually gives the age verification to, should also delete the raw data and just keep the verification record: verified or not verified. And that’s the right way to do privacy, in my mind. Thank you.
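The distributed trust-anchor model described here can be sketched roughly like this. A hypothetical illustration only: the anchor names are made up, and symmetric keys stand in for the certificate infrastructure a real system would publish. Each anchor (a school, a parent app, a workplace) verifies the age out of band, discards the raw document, and keeps only a signed verified-or-not record tied to a pseudonym; the platform checks records against its registry of trusted anchors:

```python
import hashlib
import hmac
import secrets

# Hypothetical registry of trust anchors, each holding its own key.
# Symmetric keys keep the sketch short; a real system would distribute
# anchor public keys instead.
ANCHOR_KEYS = {
    "school_42": secrets.token_bytes(32),
    "parent_app": secrets.token_bytes(32),
}

def anchor_attest(anchor_id: str, pseudonym: str, verified: bool) -> tuple:
    """The anchor checks the user's age out of band, discards the raw ID
    document, and signs only the verified-or-not outcome for a pseudonym."""
    msg = f"{pseudonym}:{verified}".encode()
    sig = hmac.new(ANCHOR_KEYS[anchor_id], msg, hashlib.sha256).hexdigest()
    return (anchor_id, pseudonym, verified, sig)

def platform_trusts(record: tuple) -> bool:
    """The platform accepts a record only if it was signed by a known
    anchor; it never handles the underlying personal data itself."""
    anchor_id, pseudonym, verified, sig = record
    if anchor_id not in ANCHOR_KEYS:  # unknown anchor: reject
        return False
    msg = f"{pseudonym}:{verified}".encode()
    expected = hmac.new(ANCHOR_KEYS[anchor_id], msg,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and verified
```

The point of the sketch is the separation of roles: the anchor sees the identity document once and keeps only the outcome, while the platform sees only a signed pseudonymous record, so neither party accumulates raw personal data.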
Moderator:
I think Professor Livingstone wants to add something. Please go ahead.
Sonia Livingstone:
If I may, yes. Actually, Edmon just said much of what I wanted to say, so I completely agree. And I’ve been part of a European effort, euCONSENT, which is also seeking to find a trusted third-party intermediary that would do the age check and hold the token, so it is not held by the companies. So I think ways are being found. Clearly, the context of transparency and accountability, and the kind of third-party oversight that scrutinizes those solutions, will really need to be strong, and that also must be trusted. I’d add that I think we should start this process with a risk assessment, because not all sites need age checks; not all content is age-inappropriate for children. So I would advocate that we begin with the most risky content and with risk assessment, so we don’t just roll out age verification excessively. And I’ll end by noting that big tech already age-assesses us in various ways. I think the big companies already know the age of their users, to a greater or lesser degree of accuracy, and we have no oversight and transparency of that. So I think the efforts being made are trying to right what is already happening, and happening poorly from the point of view of public oversight and children’s rights.
Moderator:
Thank you. Emma.
Emma Day:
I think this is still a question that everyone’s grappling with, really, and there are differing views, maybe in different jurisdictions, around how well age verification products comply with privacy laws in different countries. I would really agree with what Sonia said about starting with a risk assessment. I think we need to look first at what problem we’re trying to solve, and then ask whether age verification is the best solution, because if we’re going to process children’s data, it should be necessary and proportionate. So we have to look first at what other solutions there are that are not technical, that might address the problem we’re trying to address, rather than looking at just age verification across everything. I think also there’s an issue, certainly under EU law: pseudonymization is a difficult word to say, but pseudonymized data is still personal data under the GDPR, so it’s not that straightforward within the EU to just use pseudonymized data as an alternative. So I think it’s still very tricky, and at the European Union level, this is not something that has been settled yet either.
Moderator:
Okay, and Theodora, any remarks from you?
Theodora Skeadas:
Sure. Yeah, I think this is a really great question. It’s not easy to ensure that legislation takes into account the stated rights of children. I would start with education. Frankly, from my experience interacting with legislators, since I participate in the advocacy process, I’ve found that most legislators are just under-informed. So making sure that they understand what these rights and principles and standards actually are: what does it mean for the right to privacy to be manifest in legislation? What are the best interests of a child? What is the right to freedom of expression? What do we think about the right to be informed when it comes to children? I think most legislators just don’t really know what those things mean. And so educating them, and in particular building coalitions of civil society actors and multi-stakeholder actors, can be very effective in educating and influencing legislators around the rights of children. And then, as was also mentioned in the chat (I believe Omar put it in a few minutes ago), including young people in decision-making processes is not just essential, it’s empowering. I think that’s an important part of the process too: bringing together legislators, the people who are actually writing legislation, and the children themselves is really important, so that the legislative process can be child-centric and really center the voices and experiences of the children that we’re trying to serve. And last, I think it’s important to recognize that this needs to be done in an inclusive way, in a way that engages children from all different kinds of backgrounds, so that all different experiences are included as legislation is happening. But again, I think education really is at the core here. Legislators want to hear from us and are excited when we raise our hands. Thank you.
Moderator:
Thank you very much. We will now be taking questions from the online audience. May I request the online moderator to kindly read out any questions or comments that we may have received from the online audience?
Online moderator:
Hi. So we have two questions from the online participants and two comments. Question one is from Omar, who is 17 years old. He asks: how can child-led initiatives be integrated into data governance, ensuring that children have a voice in shaping policies that directly impact their digital lives? He is the founder and president of Project Omna, an upcoming AI-powered mobile app focused on children’s mental health and child rights, and he wants to increase his impact in data governance for children. The second question is from Paul Roberts from the UK, who asks: when it comes to tech companies designing products and services, how common is it for them to include child rights design in their process, and at what stage? Is it proactive, or an afterthought for risk minimization? Comment one is also from Omar, who says that he is from Bangladesh and is one of the 88 nominees for the International Children’s Peace Prize 2023 for advocacy work. He is the founder and president of Project Omna, and he is also the youngest and only child panelist of every Global Digital Compact session, representing children globally, and has provided statements on data protection and cyber security for children. His suggested answers to the guiding questions that you started the session with are: one, children’s perspectives are dynamic, and he suggests the use of interactive, story-based digital tools to help children grasp the importance of their digital data and rights, adapting these tools to different age levels. Two, collaborate with tech companies to develop age verification methods that employ user-created avatars or characters, safeguarding personal data; children’s feedback will be instrumental in refining this approach. And three, establish child-led digital councils or advisory groups for direct input into policy decisions; these groups should meet regularly, ensuring real-time feedback from children and aligning policies with their evolving needs and digital experiences. The final comment is from Ying Chu, who says that maybe the younger generations know more about privacy protection and how to protect their data than educators, or than us. After all, the children were born in the internet age; they are internet kids, while many of us are internet immigrants. Oops, sorry, sorry.
Njemile Davis-Michael:
Okay, I’m going to go ahead and start with the first one, and Omar, I would love to see your application. One of the things that we try to do through our Digital Youth Council is to raise the voice of youth advocates, not just to the level of international development organizations like USAID, but to also empower them to activate other youth networks. So, we have a platform that we use to encourage youth advocates to do that, and we try to do it in a way that is inclusive, that is awareness-raising, and that helps to inspire and incentivize solutions that we have not thought of yet. There’s this constant tension between adults who have the authority to make decisions, and children who understand what’s best for them but perhaps don’t have the agency to act on it.
Moderator:
Okay, are there any other comments from the panellists? And since we are running short on time, I would otherwise like to move to the next segment. Okay, we see Professor Livingstone has some comments. I would request you to kindly keep it short.
Sonia Livingstone:
Yeah, it’s funny, I’m more familiar with 80 for 30, but I probably have an irritation about social altruism, rightly so. I think the challenge is for those who haven’t yet thought of it or haven’t yet embraced its values. And so my answer to Omar, and also to Paul Roberts, would be to talk more about, and give more emphasis to, child rights impact assessments. I think many companies understand the importance of impact assessments of all kinds, and a child rights impact assessment requires and embeds youth participation as part of its process, along with gathering evidence and considering the full range of children’s rights. Perhaps it’s a mechanism in the language of companies, and so one that, if child rights impact assessments were embedded in their process, perhaps by requirement, would make many improvements.
Moderator:
Thank you, Professor Livingstone. As we enter the final eight minutes of this very, very active and enlightening session, I’m very, very happy to invite our esteemed speakers to kindly share their invaluable recommendations in less than a minute, if possible. The question for all the panelists is how can we involve children as active partners in the development of data protection policies to ensure greater… Before I give the floor to our speakers, I would also like to strongly encourage the audience to seize this opportunity and share the recommendations by scanning the QR code, which is right now displayed on the screen or by accessing the link shared in the chat box. I would now like to welcome Professor Livingstone to kindly share her recommendation once again in less than a minute. Thank you.
Sonia Livingstone:
Well, I’ve mentioned child rights impact assessment and perhaps that is my really key recommendation. I think that what we see over and again in child and youth participation is that children’s and young people’s views are articulate, are significant, and are absolutely valid. The challenge really is also for us… who are adults. Every time we are in a room or a meeting or a process where we see no young people are involved, we must point it out. We must call on those who are organising the events, and that includes ourselves sometimes, to point out the obvious omission and to be ready to do the work to say these are the optimal mechanisms and here is a way to start, because people find it hard but youth participation is absolutely critical in this domain and is of course young people’s right.
Moderator:
Thank you. Edmon?
Edmon Chung:
I will be very brief. I think a children’s IGF is called for, and that’s the beginning of this wider awareness. I think it’s about building the capacity as well. I mean, you can’t just throw children into a focus group for two hours and expect them to come up with a brilliant policy decision, right? So it’s a long-term thing, and it starts with the internet governance community actually having children as part of a stakeholder group. That, I think, is probably a good way to go about it.
Njemile Davis-Michael:
Thank you. I agree with everything that I’ve heard, and I would add that we need to do a better job discussing digital and data rights in formal education institutions. I think we can do a much better job of that globally, so that there’s a welcoming, encouraging environment to hear from children how they would like to advance their digital identities in a digital society, where they have awareness, they have tools, and they have opportunities to do so in safe ways, with mentorship and guidance.
Emma Day:
Emma? Thank you, some great suggestions so far. I would like to just emphasise that children are not a homogenous group and I think it’s really important to centre the most vulnerable and marginalised children, whether within a country or geographically, particularly considering the global reach that a lot of apps and platforms have these days. There’s a particularly great scholar I would recommend reading up on, Afsaneh Rigot, and her work on design from the margins, where she talks about how if products are designed for the edge cases, for the most difficult, most risky scenarios, in the end it’s going to benefit everyone much more. I’m going to share a link to that in the chat, thanks.
Moderator:
Thank you and finally Theodora.
Theodora Skeadas:
Yeah, I think this has been reiterated a few times but it’s worth mentioning again. Really we need to be centring the voices of children as active participants in conversations about their well-being, and this can be done by including them in surveys, focus groups, workshops, various methods that are child-friendly. Like I said, in the legislative process I think that children should be empowered to advocate for themselves, specifically older children but children from all different backgrounds, because this is their well-being at stake. I also think that when it comes to companies I would personally like to see children represented on these advisory boards. That hasn’t traditionally happened, and I put a few of the advisory boards in the chat because these are ways to elevate the voices of children directly in conversation with the people making policies for the platforms.
Moderator:
Thank you. Thank you very much, ladies and gentlemen. As we come to the end of this enlightening session, I would like to express my heartfelt gratitude to our distinguished speakers for their unwavering commitment to sharing their knowledge and expertise, and for making our lives easier as moderators, because I see you have been responding to the comments and questions in the chat box. I would also like to extend my deepest appreciation to the very, very active audience for their energetic engagement and thoughtful participation. Without your presence this session would not have been as meaningful. And while we are on the subject of people who have been instrumental in making this session a success, I would like to thank my teammates, the very talented co-organizers of all four workshops that we have hosted during the UN IGF 2023, Keo from Botswana and Nelly from Georgia. I cannot thank you both enough for your exemplary commitment, relentless hard work, awe-inspiring creativity and tireless efforts, in the absence of which we would not have been able to create the impact we have. I want everyone here in attendance to be aware and appreciative of the countless hours, late nights and personal sacrifices this team has made to keep this ship afloat. It was my good fortune indeed to have had the honor of leading this exceptional team, so thank you once again for making this happen. As we conclude this session, I urge all of us to reflect on the insights we have gained and the recommendations put forth. Let us not let this be just another event or seminar, but rather a catalyst for action. It is up to each of us to take the lessons learned today and apply them in our respective fields, organizations and communities. Together we can create a better world for ourselves and future generations, and we are right on time. Arigato gozaimasu, sayonara. Thank you.
Speakers
Audience
Speech speed
155 words per minute
Speech length
710 words
Speech time
275 secs
Edmon Chung
Speech speed
142 words per minute
Speech length
1963 words
Speech time
828 secs
Emma Day
Speech speed
171 words per minute
Speech length
1779 words
Speech time
623 secs
Moderator
Speech speed
164 words per minute
Speech length
2378 words
Speech time
868 secs
Njemile Davis-Michael
Speech speed
154 words per minute
Speech length
2257 words
Speech time
880 secs
Sonia Livingstone
Speech speed
164 words per minute
Speech length
2124 words
Speech time
776 secs
Theodora Skeadas
Speech speed
162 words per minute
Speech length
2268 words
Speech time
841 secs
Criss-cross of digital margins for effective inclusion | IGF 2023 Town Hall #150
Mohammad Atif Aleem
The study conducted by Atif in collaboration with the Alexander von Humboldt Institute for Internet and Society sheds light on low internet access and digital inclusion in Southeast Asian countries, particularly Vietnam. This finding highlights the pressing need to increase internet connectivity and promote digital participation in the region. On a positive note, the study acknowledges the potential of new digital technologies to improve internet access and inclusion.
Furthermore, Atif, a research analyst at a reputable IT consulting firm, emphasizes the importance of addressing digital access for disadvantaged groups in India. The study reveals that internet penetration in rural India stands at only 29%, indicating a significant digital divide. The Digital India and Skill India initiatives have been implemented to improve access and bridge this gap. However, concern remains for the digital inclusion of disadvantaged groups, underscoring the need for innovative approaches.
In order to expand high-speed internet connectivity in remote or inaccessible areas, the study suggests exploring innovative solutions. Examples provided include Google’s Internet Saathi initiative, which aims to provide internet access to rural women through community networks. Additionally, the use of low earth orbit satellites, community radios, and collaborations between major integrators of digital networks and organizations such as MIT Sloan India Lab are highlighted as potential tools to overcome challenges in expanding connectivity.
The study also recognizes the importance of inclusive technology for people with disabilities. Carlos Pereira’s Livox app is highlighted as a prime example of inclusive technology, as it was deemed the best inclusion app by the United Nations. Developed for Clara, Pereira’s daughter with cerebral palsy, the app has since been adopted by individuals with various disabilities.
Collaborations between private entities, governments, and other companies are considered necessary for significant impact on digital activities and inclusion. The study cites examples of Google consulting the government of India and collaborating with Facebook for its Internet Saathi program, further emphasizing the importance of public-private partnerships.
Entrepreneurs are encouraged to explore various modes of partnerships, including seeking guidance from academia and participating in internet governance schools. These collaborations can provide valuable insights and expertise in navigating the digital landscape and promoting digital inclusion.
To support digital inclusion efforts, the study suggests funding sources such as NGOs like the Internet Society, as well as participation in hackathons run by IT software firms. These initiatives can provide the resources and support necessary to address the challenges and barriers to digital inclusion.
In conclusion, the study highlights the low internet access and digital inclusion in Southeast Asian countries, particularly Vietnam, underlining the urgent need for increased connectivity and participation. It explores various approaches such as new digital technologies, inclusive technology for people with disabilities, innovative solutions for remote areas, collaborations between different stakeholders, and funding sources to address these challenges and achieve digital inclusion.
Anna
The discussions focused on the barriers that marginalised youth face in internet governance and the importance of inclusive participation. It was highlighted that these barriers stem from various social, economic, political, and cultural contexts. By identifying and understanding these factors, it becomes possible to develop effective practices and policies that promote digital inclusion.
One of the key points made was the need for concerted efforts to navigate and dissolve these barriers. It was emphasised that a collective approach is necessary to achieve the inclusive participation of marginalised youth in internet governance. Participants stressed the importance of unpacking these factors comprehensively in order to recognise and develop solutions that address the issue as a whole.
Anna expressed concern regarding digital access and participation barriers specific to certain regions. Unfortunately, no further details or supporting facts were provided on this topic, but it indicates that there are unique challenges faced by marginalised youth in different geographical areas.
Another important area of interest was strategies and practices that promote meaningful participation for young people in internet governance. Participants discussed the significance of engaging young people in decision-making processes and ensuring their voices are heard. It was recognised as a positive step towards reducing inequalities and empowering young individuals to actively contribute to internet governance. Unfortunately, no specific strategies or practices were mentioned in the provided information.
The discussions also touched upon the importance of multi-stakeholder cooperation in advancing the inclusion and participation of young people in internet governance. Successful initiatives were highlighted, involving collaborations between the private sector and government to boost participation. It was considered ideal to have stakeholders from multiple sectors working together to address the challenges faced by marginalised youth.
In conclusion, the discussions highlighted the barriers faced by marginalised youth in internet governance, such as social, economic, political, and cultural factors. The importance of mapping these factors and developing comprehensive solutions to promote digital inclusion was emphasised. It was acknowledged that concerted efforts and a multi-stakeholder approach are necessary to enable the inclusive participation of marginalised youth in internet governance. However, more specific strategies and practices need to be explored to achieve meaningful participation and address region-specific challenges.
Pavel Farhan
The analysis includes several speakers discussing different aspects of internet access and inclusion. One of the main voices in this discussion is Pavel, a program officer at the Asian Institute of Technology. Pavel’s work is heavily grounded in technology and academia, and he also represents civil society. He is passionate about promoting equal internet access for minority groups and believes in the importance of youth participation in internet governance. Pavel strives to create opportunities for inclusive internet access and highlights the significance of youth involvement in the multi-stakeholder process.
Another topic highlighted in the analysis is the significant barriers to digital access faced by underprivileged and underrepresented groups in Bangladesh. These barriers include limited internet infrastructure in rural areas, prohibitive costs of internet access for those in low-income communities, lack of digital literacy skills, and the language barrier, as not all online content is available in the primary language of Bangladesh, Bangla. These challenges have a drastic impact on the digital inclusion of vulnerable communities.
However, there are government-led initiatives that aim to address these barriers. The ‘InfoLadies’ programme, for instance, involves women travelling to rural areas to provide internet services, thus improving digital literacy and access. Additionally, the ‘bKash’ mobile finance initiative has provided opportunities for people without traditional bank accounts to engage in digital transactions, promoting economic inclusion. These initiatives play a crucial role in bridging the digital divide and ensuring that underprivileged individuals have access to the digital world.
Educational institutions are identified as key players in reducing the digital divide and fostering internet governance literacy among youth. While there is a push for more digital literacy in general, there is not enough focus on specific courses teaching internet governance. The analysis stresses the importance of equipping students with the necessary skills to navigate the online world and understand the implications of their digital actions. It suggests that universities should foster leadership and create opportunities for students to advocate for their online rights by hosting forums, clubs, and events related to digital inclusion and governance.
Furthermore, universities contribute to addressing the digital divide by conducting research and gathering data on internet access and usage. This research helps identify gaps in access and usage, allowing policymakers to make informed decisions. Universities can also play a vital role in equipping students with essential digital skills through structured programs.
Overall, the extended analysis showcases the importance of various stakeholders, including individuals like Pavel, government initiatives, and educational institutions, in promoting equal access to the internet and fostering digital literacy. It highlights the need for collaboration and multi-stakeholder involvement to bridge the digital divide and ensure that underprivileged and underrepresented groups have equal opportunities in the digital world.
Jaewon Son
The analysis of the given statements highlights several important points raised by multiple speakers. Jaewon, for example, emphasizes the significance of including citizens and people in policymaking and urban development. This belief is supported by her PhD research at the Karlsruhe Institute of Technology, which focuses on this very topic. Furthermore, Jaewon also recognizes the connection between internet governance and social and environmental issues. Her experience with the Asia Pacific Internet Governance Program has led her to understand that internet governance is not only a technological concept but also relevant to daily life and to social and environmental issues.
In addition to these points, Jaewon advocates for the inclusion and increase of youth and citizen participation in internet-related matters. However, no specific supporting facts were provided for this argument. Nevertheless, it can be inferred that Jaewon believes that the active involvement of young people and citizens in internet governance is essential for reducing inequalities and promoting industry and innovation.
Another notable observation made in the analysis is the emphasis on digital needs and stakeholder participation in advanced internet environments, particularly in South Korea. The supporting facts mentioned include the country’s strong internet connection and high smartphone ownership. However, it is also noted that while basic digital needs are met in South Korea, stakeholder participation is an additional stage that needs to be achieved. This suggests that there is a need for increased engagement and involvement of stakeholders in shaping internet governance policies.
Furthermore, the analysis brings attention to the impact of cultural barriers and gender imbalance on stakeholder participation in internet governance discussions. In the Korea Internet Governance Committee, for instance, Jaewon Son was the only youth and one of the few women representatives, while most participants were male IT professors. This observation highlights the need for addressing cultural and gender disparities to achieve more inclusive and diverse stakeholder involvement.
The analysis also points out a negative factor affecting youth commitment to internet governance – concerns about job security. Many Korean youths, it is mentioned, quit involvement in internet governance due to fears of not securing a job in their field. This suggests that job security is a significant barrier to sustained youth participation in internet governance initiatives.
Lastly, the need for more understanding and opportunities in internet governance for individuals from different backgrounds is highlighted. The speaker expresses the belief that skills learnt in internet governance can be beneficial in various fields outside of IT. It is also argued that it is important for more people to understand what internet governance is about. As such, the speaker supports the idea of promoting internet governance education and creating more accessible opportunities for individuals with diverse majors and backgrounds.
In conclusion, the analysis reveals the importance of including citizens and people in policymaking and urban development, as well as the connection between internet governance and social/environmental issues. It underscores the need for increased youth and citizen participation in internet-related matters and emphasizes the significance of meeting both digital needs and stakeholder involvement in advanced internet environments. The impact of cultural barriers, gender imbalance, and job security concerns on stakeholder participation is also highlighted. Furthermore, the analysis brings attention to the importance of more understanding and opportunities in internet governance for individuals from different majors and backgrounds. Overall, the insights gained from the analysis shed light on various aspects of internet governance and its implications for inclusive and sustainable development.
Audience
The audience’s recorded contributions consisted almost entirely of repeated goodbyes as participants left the session. Beyond signalling the close of the discussion and a desire for closure, this exchange of farewells reflects a common social convention: taking the time to bid farewell before departing conveys politeness, goodwill, and respect, underscoring the cordial nature of the interaction.
Tatiana Houndjo
Tatiana Houndjo is an IT professional from the Benin Republic in West Africa. She works as an IT system and infrastructure engineer in a private company with branches in Ivory Coast, Niger, and Togo. Her role involves helping businesses and governments implement digital technologies as part of their processes. Tatiana provides support and guidance in the adoption of digital tools and technologies, ensuring their efficient integration into existing systems, and assisting in the resolution of technical issues.
In addition to her work in the IT field, Tatiana is actively involved in the internet governance ecosystem. She was selected for the Women’s DNS Academy Fellowship in 2018, which marked the beginning of her journey in this domain. Since then, she has led various projects and programs to promote women’s participation in internet governance. Her efforts were recognized, and she was elected as the vice chair for a two-year term.
Tatiana firmly advocates for the importance of digital tools and technologies in today’s world. She believes that embracing these tools and technologies is essential for businesses and governments to stay competitive and drive innovation. Her work focuses on assisting organizations in implementing these tools and technologies to enhance productivity, efficiency, and overall performance.
However, Tatiana also highlights the need to consider the hierarchy of needs for young people. She acknowledges that many young individuals struggle with basic needs and are unable to actively participate in internet governance discussions. It is challenging for them to engage when their basic needs, such as access to food and shelter, are not met. Therefore, initiatives must address these fundamental needs before expecting their active participation.
Furthermore, Tatiana stresses the need for meaningful participation of young people in the internet governance ecosystem. She believes that their insights and perspectives are valuable and should be considered in decision-making processes. She advocates for partnerships between stakeholders to create inclusive environments that empower young people to contribute and have their voices heard.
An important challenge highlighted by Tatiana is the inequality in internet usage. There is a clear divide between those who have access to the internet and information and those who do not. Additionally, there is a discrepancy in the efficient use of the internet. Bridging this divide is crucial to achieve SDG 9 (Industry, Innovation, and Infrastructure) and SDG 10 (Reduced Inequalities). Efforts should be made to ensure equitable access to the benefits of the internet.
Lastly, Tatiana raises concerns about the lack of meaningful data to monitor and evaluate actions in the internet governance ecosystem. She emphasizes the need for young people, private companies, and governments to collect relevant data that can provide insights into internet usage and its impact. The Internet Society’s initiative, Internet.Beijing, is an example of a project aimed at monitoring internet usage. Such initiatives are essential for informed decision-making and evidence-based actions.
In conclusion, Tatiana Houndjo, an IT professional and an active participant in the internet governance ecosystem, advocates for the importance of digital tools and technologies. She supports businesses and governments in implementing these tools and technologies. However, she also recognizes the need to address the hierarchy of needs for young people and ensure their meaningful participation in internet governance discussions. Additionally, she highlights the inequality in internet usage and the lack of meaningful data to monitor and evaluate actions. By addressing these challenges, stakeholders can work towards achieving SDG 9 and SDG 10, promoting industry, innovation, infrastructure, and reduced inequalities.
Rashad Sanusi
Rashad Sanusi, who provides technical support at Digital Grassroots, is taking the initiative to open a discussion centred on the crucial topics of digital inclusion and Internet Governance. This discussion will involve four speakers who will share their personal experiences and insights on digital inclusion within their respective communities. Through this, Rashad aims to shed light on the barriers to internet access and explore potential solutions in order to promote widespread inclusivity.
Rashad’s emphasis on understanding the barriers to internet access highlights the need for a comprehensive understanding of these challenges and finding ways to overcome them. By delving into the root causes of limited internet access, Rashad aims to generate discussion and brainstorm practical strategies that can empower individuals and communities to navigate and overcome these hurdles effectively.
Moreover, Rashad’s goal is to foster an interactive and inclusive environment during the discussion. This creates an atmosphere where participants feel encouraged to contribute and exchange ideas freely. By promoting dialogue and collaboration, Rashad seeks to cultivate an atmosphere that is conducive to exploring innovative approaches to digital inclusion.
Rashad’s advocacy for inclusivity in Internet Governance signifies the importance of ensuring that everyone’s voice is heard, especially those at the grassroots level. He believes that by comprehensively understanding the challenges faced by individuals in these communities, policies and initiatives can be developed that align with their needs. Rashad contends that through inclusivity, the decision-making process will be more representative and effective in addressing the collective needs of all stakeholders involved.
In conclusion, Rashad Sanusi’s discussion on digital inclusion and Internet Governance aims to tackle the barriers to internet access and promote inclusivity. By bringing together speakers to share their experiences and perspectives, Rashad hopes to foster an interactive and inclusive environment that facilitates collaboration and generates innovative solutions. Through his advocacy for inclusivity in Internet Governance, Rashad emphasizes the need to consider the voices of those at the grassroots level, ensuring their needs are prioritized in decision-making processes.
Session transcript
Rashad Sanusi:
Okay, can you hear me online? Okay, hi everyone. Okay, good morning, good afternoon. Depending on where you are, I am Rashad Sanusi, technical support at Digital Grassroots, and I am thrilled to welcome you all to this session on digital inclusion. So for this session, I will share my screen before we start. Our session will follow this outline. Let me share my screen. Okay, I want to share my screen. Sorry for that, I just want to share my screen. It’s okay now, thank you. We can continue. So I am Rashad Sanusi, and I’m super happy to have you all for this session, for this session on digital inclusion, and this session will follow this outline. First we have the welcome and introduction, and after we have an introduction and a nice breakout before the panel discussion, where we have four amazing speakers. They will share their experience about digital inclusion in their own community, and after that we will go for Q&A and participant insights before the closing remarks. So as I was saying, I’m super happy to have you all today for this session, and I am here with my colleague Anna, and we are happy to moderate this session. So to start, today we embark on the journey to explore digital inclusion and how this intersects with our participation in Internet Governance. So this session is about to know how we are faced, how our community are facing the barrier to access to Internet, and also what are the challenges we are facing in our community about Internet Governance. So by understanding this challenge, we will make sure that we know how we can be more engaged in this space, so we can have our voice heard and also know the challenge we are facing at the grassroots level. So our aim today is to create an interactive and inclusive environment where everyone can be invited to share their own view, and how we can break this barrier to help everybody to be engaged in Internet Governance.
So I will invite Anna for the
Anna:
next part of the session. Thank you. Thank you. Hi everyone, thank you so much for joining us. I’ve been having some issues with my audio, so please let me know if you can hear me well. Yes, we can hear you. Amazing. Yeah, thank you so much to all of you for joining us. And as Rashad mentioned, navigating through intersectional access and participation in Internet Governance, we recognize a vast array of barriers, notably impacting marginalized youth. And this challenge is stemming from different factors: social, economic, political, cultural, and other contexts present distinct obstacles in different environments. And we believe that unpacking these factors is vital in enabling us to recognize and map solutions that fully grasp the issue and also leverage our collective insight toward effective practices and policies. And to start the session on the note of a collaborative exploration, we invite you to share words or phrases that come to your mind when you think about the barriers faced by marginalized youth, through the Menti. I’m going to screen share. Rashad, would you mind stopping your screen share for a moment so I can screen share? Thank you. So we invite you to join us in Menti and share some of your thoughts about this issue before we proceed to the panel. I believe you should be able to see my screen. So please use the code that you can see, 1525 4103, and let us know your thoughts. Thank you so much for contributing your ideas. I can see quite an array of issues. And yeah, I think that this word cloud reflects the collective acknowledgement of these barriers. And also we can see how diverse they are. Something that I was mentioning earlier: the intersection of different factors and contexts that come into the arena when we talk about inclusive participation and access. And this is something that we hope to discuss today during the panel discussion with our amazing speakers and to see how we can leverage this knowledge.
Keeping in mind this visualization and this map that we have on the screen, and how we can navigate these issues to promote the meaningful participation of young people in internet governance across different contexts. And I will now give the word to Rashad to present our guest speakers, who will dig deeper into this conversation and guide us through this discussion.
Rashad Sanusi:
Thank you, Anna. We continue our session, and the next part is the panel. We have four amazing speakers, and I will let them introduce themselves, but before that I will introduce them shortly, and then give them the floor. So, we have Mohammad Atif Aleem. He’s from India, and he is currently engaged as a research analyst at TCS, and he has a relevant background in research, consultancy, information technology, sustainability, and internet governance. Also, we have Jaewon Son. She’s a doctoral researcher at the Karlsruhe Institute of Technology with a passion for the multi-stakeholder model in internet governance. She’s also an ICANN fellow, an APNIC fellow, and a PGA fellow. We also have Tatiana Houndjo from Benin, an IT professional who works to protect and advocate for women’s rights. Tatiana serves as a digital expert at the AU Youth Corporation Hub, monitoring and advising development projects in IT. We also have Pavel. He works as a program officer at the Internet Education and Research Lab at AIT in Thailand. He has been strongly involved in internet governance since 2019. So, I will let them introduce themselves better, so I will give the floor to Mohammad, after which Jaewon can continue, then Tatiana, and after that Pavel. So, Atif, over to you. Okay, so thank you so much, Rashad, and I’m really excited to be here on this panel to speak on digital access and inclusion. So, is this just an introductory remark that we have to give, or is there any topic you have in mind that we need to? Yes, just an introduction, and I will guide you through the session, through the panel after. So.
Mohammad Atif Aleem:
Okay, because you already gave such a nice introduction of everyone.
Rashad Sanusi:
Yeah, I want you to talk more about you before we continue.
Mohammad Atif Aleem:
So. Okay, yeah. So, yeah, my name is Atif, and as Rashad introduced me, I have been working as a research analyst for Tata Consultancy Services, which is one of the major IT consulting firms based in India, with offices across the world. In my current engagement, I am based out of Sweden. I’m working in the Stockholm office of TCS, where I research the latest technologies that have been evolving in banking, retail, and the innovative digital sector, and how they can help businesses bridge the gap in bringing modern technologies to the public. In my previous experience of internet governance, I have been working on various issues like privacy, the digital divide, access, and inclusion, and I also collaborated with the Alexander von Humboldt Institute for Internet and Society recently in studying the Vietnamese digital inclusion sector and the state of farmers working in Vietnam. So, we did a holistic study on Southeast Asian countries where digital access has been minimal, and how to increase that, and how new digital technologies could act as a lever, is something we pondered. And I would like to share those insights as well as the discussions go on during the deliberations of this session. So, I’m excited for this session as it would bring about not only how these technologies could help all of us, which of course has always been the discussion at IGF, but also, from a gender lens, from a youth lens, from a holistic inclusion lens, how they could manifest into a purpose-driven approach to help all the stakeholders in the multi-stakeholder approach in the internet sector’s domain, which is something I’m really excited to talk about. So, look forward to this interactive session. Thank you.
Rashad Sanusi:
Okay, thank you so much, Atif. I will let Jaewon tell us more about herself. Thank you, Jaewon.
Jaewon Son:
Hi, I'm Jaewon from Korea, and currently I'm based in Germany, doing my PhD at the Karlsruhe Institute of Technology. Before that, my master's focused on how people access basic needs, such as the internet and water, in developing countries. Now I focus more on how we can engage more citizens when making policies and development plans for urban settings. I think my first internet governance experience was when I joined APIGA, the Asia Pacific Internet Governance Academy, in Korea. During that time, I learned how my work and research can actually relate to internet governance, because in Korea internet governance was not a familiar topic that everyone knows. It was a great opportunity for me to learn about it and see how internet issues are not only a technological concept in themselves but are also linked to our daily life and to social and environmental concerns. So, yeah, I'm looking forward to talking with all the other speakers and seeing how we can include more youth and more citizens in the internet. Thank you.
Rashad Sanusi:
Okay, thank you so much, Jaewon. Tatiana, you have the floor.
Tatiana Houndjo:
Hi, everyone. Can you just confirm that you can hear me clearly?
Rashad Sanusi:
Yes, we can.
Tatiana Houndjo:
Thank you, amazing. Hi, everyone again. My name is Tatiana Houndjo. I'm from the Republic of Benin, which is a West African country. So, greetings from Benin, and if you ever come to West Africa, make sure you put Benin on your list of countries to visit. As Rashad said before, I'm an IT professional. I work as an IT systems and infrastructure engineer in a private company here in Benin, with branches in Ivory Coast, Niger, and Togo. Basically, my everyday work focuses on helping businesses and also governments implement digitalization, digital tools, and digital technologies as part of their processes. As part of this work, I'm happy to have worked on various projects, including public services, here in Benin. But besides my cap as a professional, I'm also engaged in the internet governance ecosystem. This journey started in 2018, when I got selected for the Women's DNS Academy Fellowship in Benin, which was a five-day training. After that, I got engaged with the Internet Society, and since then I'm happy to have joined different projects and programs. I became the program lead of the Women in Internet program, which I'm happy to talk about later as part of the discussion, and I was also elected vice chair for a two-year mandate that finished a few months ago. So thank you, everyone. I'm happy to join you for this discussion and looking forward to it, and to sharing more.
Rashad Sanusi:
Okay, thank you so much. Tatiana, happy to have you here. I will let Pavel now to introduce himself.
Pavel Farhan:
Hi, good afternoon to all. This is Pavel, for the record. Thank you, Rashad, and thank you, Anna, for giving me this opportunity to be part of such an amazing cohort of members who will be talking in a very important session today. As Rashad mentioned before, I'm actually based in Thailand. I work as a program officer for the Internet Education and Research Lab at a university here called the Asian Institute of Technology, AIT for short. I have a very technical and academic background, but at times I also wear the hat of civil society. I have been strongly involved in the internet governance ecosystem since APNIC 48 in 2019. As Rashad mentioned, at that time I was a conference fellow. It was also the first time I met Jaewon, so, you know, fond memories. Since then, you could say I didn't look back, and I've been part of several other exemplary fellowships as well, like ICANN and APIGA back in 2021, and even inSIG, the India School on Internet Governance, also in 2021, I believe. I've also been an individual member of APRALO, the Asia Pacific Regional At-Large Organization. I'm quite eager to make valuable contributions to the internet ecosystem in the Asia Pacific region, and my passion for ICT and ICT for development is what drives me to strive for equal access to the internet for minority groups. As a result, I actively promote an inclusive internet and emphasize the importance of youth participation in the multi-stakeholder process. I'm glad that I got to be part of the Digital Grassroots ambassadorship program back in, I believe, 2021 as well, for Cohort 5, and that's how I got involved with Digital Grassroots. So thank you so much to them, and thank you for having me today.
Rashad Sanusi:
Thank you so much, Pavel, as well. So, Anna, over to you.
Anna:
Yeah, thank you, Rashad, and thank you to everyone for introducing themselves. I believe we have a very unique platform, with so many experiences and backgrounds coming together, so I'm really excited about our discussion. I would like to start with a more general question. Maybe you can share how barriers to digital access and participation in internet governance manifest in your own regions and contexts, particularly for disadvantaged and underrepresented groups, and whether you've seen a strategy or practice that has proven successful in increasing meaningful participation of young people in internet governance. I think we can maintain this order, if no one minds.
Mohammad Atif Aleem:
So is it back to me or?
Anna:
Oh, yeah.
Mohammad Atif Aleem:
OK, so then I will go first. Yeah, you raised a very pertinent concern when you mentioned digital access and participation, especially for disadvantaged, underrepresented, or minority groups in our regions. Speaking about my region in particular, when it comes to meaningful and affordable access, it is still a very big challenge, with millions still unconnected, especially from marginalized communities in different countries of Asia, be it India, Vietnam, Bangladesh, or Pakistan. There is an urgent need for multi-stakeholder dialogue to focus on providing infrastructure and access to all of them, and to further enable the use of emerging technologies, which have become so prominent now, for socioeconomic development as well. Just to quote statistics, there was a study by MIT Sloan, and as per that study, internet penetration in rural India stands at roughly 29%, which means that over 700 million citizens are still living in digital darkness. So we understand that universal and meaningful access deserves further consideration, and it is not just limited to connectivity and infrastructure. It encompasses other aspects like digital literacy and general access to information, which I could see on the screen when participants typed into the Menti quiz run by the moderator here. So it is important to adopt ways to measure access and to identify methods to empirically measure, track, assess, and evaluate the benefits of increasing access and inclusivity. Many private firms have seen that when they do this, they see purpose-led growth in their revenues as well. We will talk about that part later, but here I can say that with the rapid development of emerging technologies, these technologies should provide an enabling platform for everyone to participate, to raise their voices, and to partake in the benefits as well.
When it comes to India, there have been several initiatives from the government of India, like Digital India and Skill India, which try to bridge the access and technology divide among the masses. Likewise, there have been initiatives from private firms as well. For example, one initiative that was a big hit was Google's Internet Saathi initiative, which empowered female ambassadors to train and educate women in more than three lakh (300,000) villages of India on the benefits of the internet in their day-to-day lives. That was one good initiative from Google, which tried to bridge the connectivity gap by building a chain of women entrepreneurs and women farmers to propagate knowledge of the internet among other community members as well. So there have been community initiatives, with the help of private firms, governments, and other stakeholders in the multi-stakeholder group, that we can see in the Asia-Pacific region. These were some of the examples I wanted to share. Also, it's important to understand that in an increasingly interconnected society, lack of access to the internet can tremendously impact day-to-day activities, and in the lure of making everything more digital, we might, in the long run, leave some actors aloof from the fruits the technology can provide. For example, I was just reading one report yesterday: some government banks withdrew the physical provision of services to push for web-based services, justifying the closure of offices in small communities. I'm sure some of us have seen in our own countries that many banks are closing their physical offices just to push for web-based services. Such decisions also affect the daily operations and lives of communities. So there is a need to identify innovative approaches to connect the population in remote and geographically inaccessible areas.
It's not just about withdrawing physical offices and pushing for web-based services, because sometimes that hinders the overall success of delivering digital services as well. When it comes to the empirical parameters that should be considered here, the first would be the technical, which is the distance and remoteness of the areas, and the other would be adoption challenges, such as language barriers or disability. The literacy rate differs among people of diverse age groups: for young people, say 18 to 30, it is easy to get hold of these web-based informative technologies, but for a person who is above 60 or 70 years of age, it is hard to get accustomed to services that are neither familiar nor exciting to them. So we have to think of ways in which we can include those age groups, and those genders, as well. In many areas with no internet or very bleak internet connectivity, community radios used to serve as the medium of communication, so that can be one area where we can think about how to use community radio in an innovative manner to bring other people along, not only our own age group, and to educate and scale them so they can contribute to the digital forum and become digital ambassadors for their communities. There have also been technologies like low-earth-orbit satellites, which are tools for cost-effective internet access in remote and inaccessible areas, so such technologies can also help bridge the communication divide. Then, I mentioned a study by MIT Sloan: there was an industry-leading integrator of digital networks, Sterlite Technologies, which collaborated with the MIT Sloan India Lab to help develop a business model for a for-profit initiative with the goal of expanding high-speed internet connectivity in more than 20 villages of India.
And its target is to do this across three lakh (300,000) villages by 2024. So these are some innovative approaches that have been going on in rural areas. But along with these innovative approaches, what we as youth can do is also a very important question, because we have to bring along a society that is just and inclusive. There was one very motivational story that I came across in 2019, which made me think along the lines of internet governance, and I would like to share it with you all. There was a man called Carlos Pereira who was driven by a passion to empower people by enabling them to have a voice, and he did something innovative through a mobile app. He was a computer scientist, and his ten-year-old daughter Clara could not walk or talk because she was born with cerebral palsy, which is a group of disorders that affect a person's ability to move and maintain balance. To give his daughter a voice, he quit his job as a computer scientist and developed an app to help her communicate. That app was called Livox. The app's algorithms could interpret motor, cognitive, and visual disorders, and it used machine learning to predict and understand what the person would want or need. The Livox app could be used by people living with a range of disabilities, be it cerebral palsy, Down syndrome, multiple sclerosis, or even a stroke. For Clara, the app gave her a voice: when her dad asked her what she wanted for breakfast, the app recognized his voice and gave Clara options on the screen, allowing her to select what she wanted. The app also gives disabled children a more inclusive education. For example, if it is used at a school, the software can hear a teacher's question and provide appropriate multiple-choice answers for the students to select from. That app was named the best inclusion app in the world by the United Nations.
If you Google more about it, you will see how multiple software companies adopted the idea behind the app and went on to build software that is more inclusive. So that is one example where an individual, through his own mind, could change the holistic viewpoint of society and make people understand how the gender divide, or people with disabilities, could be brought to the fore of digital inclusion. Among my fellow panelists as well, I see very erudite computer engineers, and I'm sure across the board among these participants there are people with multiple talents and skills who can think of innovative ideas for inclusion. So that is something I wanted to highlight, and I would be happy to hear insights from the other panelists as well.
Jaewon Son:
Thank you. Yeah, I agree with many of your points. When I was thinking of what to say, I was thinking of the psychological theory of Maslow's hierarchy of needs, which I think many people know: first the basic needs like food and water, then psychological needs, then self-fulfillment, something like that. I thought that, of course, we first need access to our basic needs, but after that we need digital inclusion and the right to participate as stakeholders, and at the top of the hierarchy maybe we can think of the ecological impact of the internet, or sustainability, in the long term. In Korea, we have one of the strongest internet connections, and many people own a smartphone and so on, so I think we have fulfilled the first basic need. However, the second stage, stakeholder participation, is another stage we still have to achieve, because decisions about how to run the internet influence so many factors, and we really need to hear from other people as well. But from my experience, even in such a high-tech country, when it comes to discussions about the internet, there are still very few stakeholders who have a say. For example, when I joined the Korea Internet Governance Committee to help manage the internet governance program after finishing APIGA, I was the only youth there, and besides me there were only two or three people who were female. Mostly it was male IT professors who know a lot, and, I don't know if it's only Korea or Asia, but the youth were considered as someone who doesn't know much and should learn from the others, so even though I might have had a say, they were not really taking it seriously, saying instead, "I will tell you, you know."
And when the environment was set like this, even though I was trying to encourage other youth who had finished the program together with me, not many people were willing to join, as they knew what the discussion environment would be like, and they thought: if we cannot be heard anyway, what's the point of going there to have a say? So I think there are also some cultural barriers that make it hard for us to have a say in this participation. Also, some of the youth were having a hard time because they were really stressed about getting a job after they graduate, and when they were not directly majoring in IT, they were afraid: what if I spend so much time on internet governance and am not able to get a job? How will I manage to look for another job if my only activities have been in internet governance? So many of the Koreans who finished the APIGA program, even though some of them got awards and everything, in the end suddenly quit all the internet governance programs, thinking: since I do not have much expertise in coding, maybe I still cannot get a job in this field, so I should switch to something else. I think it would have been really nice if we could have shared, in a soft-skills or indirect way, how being involved in internet governance can actually benefit them, because for me, even though I'm not majoring in IT, many of the skills that I've learned through internet governance have really benefited me in research and in many other respects.
For example, by learning about the importance of multi-stakeholder and participatory approaches, I was able to see how I can bring them into my research, doing more surveys of the public and stakeholder interviews to include people in the decision-making process, and I think that is not only about computer science, so I wonder why it had to be either A or B. Also, before COVID we were usually expected to attend these forums in person, and I think many people had a hard time asking their supervisors or professors if they could take leave for such an event, when not many Koreans even understood what internet governance is about. So, for people to continue their journey, I think it would be really beneficial if more people knew what internet governance actually is. We have quite a long way to go before many people understand it, and then we can give them more opportunities to participate regardless of major or background, and then we will get to think about the environmental impact and sustainability. Yeah, I will stop now.
Tatiana Houndjo:
Thank you, Jaewon and Mohammad, for already putting so many things on the table. I think that, talking about the inclusion and participation of young people in the internet governance system, we need to think about a way to federate the many initiatives together. When we talk about the inclusion of youth and their engagement in this kind of topic and discussion, the question is: do they have access? Do they have information about this? Are they using the internet today? Jaewon, I'm so happy that you talked about the needs of young people, because a young person needs to be able to get past the stage of saying, "I don't have a job, I don't know what to eat." We are here for an hour having this discussion, but a young person who doesn't know what to eat today at lunch or at dinner, I'm sure, won't be interested in coming to this table to talk about something like that. That's one point: we in the internet governance ecosystem need to partner with initiatives that are making sure young people are able to sustain themselves. Another point is meaningful access. What you realize is that it is usually the same people coming every year to talk about the same things. The question is: how do we make sure that this information, this knowledge, these capacities and skills are spread beyond the usual network? How do we, as organizations and as part of civil society, make sure that people who are not yet part of these discussions, and people who are living with disabilities, feel the impact of the discussion?
How do we make sure that these people have a seat at the decision-making table and are part of the decision-making processes? At the Internet Society chapter, we had an idea, I think in 2018 or 2019, for a program whose basic purpose was to bring new people into the internet governance ecosystem and keep them contributing to the system, because we had gotten to the point where it was only the same twenty people coming every year: the same twenty attending, the same twenty organizing. So we ran this program, and it went on until 2022, and we're happy to see now that many more young people are interested in these discussions and this topic. That's one thing we need to think about, also as part of bridging the digital divide, because I'm sure that in many countries, as in Benin, there is a digital divide between people who have access to information and people who do not, and between people who have the skills to be online, to interact, and to use the internet efficiently, and people who do not have those skills. There is a real divide between both of these groups, so we should not only look at those who are on the platforms and have these capabilities, but also at those who are offline. Even when you look at internet access, you can see that access to policies and to information is out of reach for many people in our countries, and unfortunately there has not been a lot of support for them.
If you want to do something, in terms of funding and in terms of the education process, we need the support of governments; that's one thing. We also need the support of private companies and of internet service providers. So, talking about the digital inclusion of young people in the internet ecosystem, there is a need to federate the many initiatives together and make them sustainable. One thing is to launch initiatives, and another thing is to make sure that these initiatives are sustainable and can have an impact over the long term. Another point I want to talk about is the lack of meaningful data, which we need to focus on and take action on. Governments need to prioritize it, and private companies need to support organizations who are working with young people. The question is: how do we measure what an initiative is going to bring to the community, and what is going to happen to people and to society in general? So there is a need for data: we need the statistical agencies in every country to provide data that we can use in order to make sure that the projects we are working on are going the way we want them to go. At the Internet Society, we launched a project in 2020, Internet.bj, and this project has two different parts. The first part is to reconstruct the history of the internet in Benin, and the second part is to have a big platform communicating in real time with the different internet providers in Benin.
In Benin we have MTN, we have Moov, we have Celtiis, and we also have optical fiber providers like SBIN. The idea behind this project is to have a platform where we can see how the internet is working in different areas of the country: how many people are using it on a daily basis, how many people are using the internet in Cotonou, in Parakou, in Natitingou, and also how they are using the internet. Are they using mobile phones, or are they using optical fiber? These are the data we are trying to gather as part of this project. And yeah, I think the inclusion of young people, again, is not something we can tackle with just one initiative or one project; it has to be something that we think broadly about, that we think sustainably about, and we need to partner together, work together, and make sure that, one way or another, we are working toward the same
Pavel Farhan:
goal. Thank you. All right. Hi again, this is Pavel, for the record. I guess the benefit of going last is that Atif, Jaewon, and Tatiana have pretty much checked all the boxes on what I would like to talk about, but it makes my job easier. So, I am based in Thailand, but I'm originally from Bangladesh, so I'd like to spotlight a little bit the barriers that we face in Bangladesh: the barriers to digital access and participation for disadvantaged and underrepresented groups, especially the youth, which are multifaceted. These challenges manifest in several ways, but I'll keep it short due to time limitations. There are four barriers which I feel are significant to address. The first is limited infrastructure. In many rural areas of Bangladesh, the lack of proper internet infrastructure remains a significant barrier. People in these areas often struggle with slow or unreliable connections, or carrier services are reluctant to set up phone towers there, because they simply do not have the infrastructure to set them up in these remote areas. The second barrier, which is very important to address in the modern world, is affordability. Even though the internet is considered a basic human right now, the cost of internet access can be really prohibitive for Bangladeshis in general, and of course for Bangladeshi youth, particularly those in low-income and marginalized communities. And even though Bangladesh has introduced many affordable data plans and devices, they have only benefited a percentage of the population and have not really covered the broader side of accessibility. Thirdly, of course, we have to address digital literacy, because a lack of digital literacy and awareness is a significant challenge. Many individuals, especially in rural areas, lack the skills and knowledge to use the internet effectively.
You'd be surprised: there are still people who, forget the internet, have probably never seen a computer before. To them, it doesn't affect their lives in any way, but it is for us to go and make them aware that it should be affecting their lives. And finally, the language barrier. While Bengali, or I would say Bangla, is the primary language spoken in Bangladesh, not all of the content that is available online is in Bangla. Although there has been a lot of push for universal access in the last few years, the language barrier can actually limit access to information and services for those who aren't proficient in English. And if a youth or a marginalized community is okay with not learning English, what can we do, right? This is where we have to come up with successful strategies to overcome these barriers, and the Bangladesh government has been doing something similar. They have had an initiative for the past decade, or two decades actually, called Digital Bangladesh. Similar to what Atif mentioned about Digital India, this government-led program aims to address digital inclusion by providing various services and promoting digital literacy. And there's one particular project called InfoLadies: an initiative where trained women travel to rural areas on their bicycles with internet-equipped laptops to provide information and services to local communities. This initiative started back in 2013, I think, so it's been 10 years now, and it has played a crucial role in improving digital access and literacy, not just for marginalized communities but for youth who probably would never have been aware of how the internet affects them or how they can contribute to the internet community. Additionally, we also have mobile financial services, such as bKash.
And it has made it easier for people, even those who don't have a traditional bank account, to engage in digital transactions, showing them that just because you don't have a bank account doesn't mean you cannot use the internet for financial transactions. So this fosters economic inclusion as well. There have been other ongoing efforts to improve digital access in Bangladesh, and yes, challenges still persist and we have a long way to go. I think this is probably the third decade that Digital Bangladesh has been running, and we have made some significant strides in bridging the digital divide and promoting inclusivity. But this is still the beginning, and we have a long way to go. Back to you, Rashad.
Rashad Sanusi:
Thank you so much, Pavel, Tatiana, Jaewon, and Atif, for the great contributions. I really enjoyed hearing you. Now we go to the Q&A session. Also, if some people want to share ideas or insights, we are happy to hear them. You can raise your hand as well. Okay.
Anna:
We have one question from a community member who's watching us online, John, and it's a question to all panelists. He's saying, thank you so much for sharing your perspectives, and his question is: how can we streamline the cooperation between different sectors to advance the inclusion and participation of young people in internet governance? He notes that Atif and Pavel mentioned some successful initiatives where the private sector and government boosted participation and inclusion, and asks how we can advance that and make sure there is a true multi-stakeholder approach in these solutions.
Pavel Farhan:
I think I can just go ahead and answer this first, and then I can pass it on to Atif. From an academic perspective, educational institutions obviously play a significant role in reducing the digital divide and nurturing internet governance literacy and leadership among youth. The way I see it, we talk so much about digital literacy education and push so much for digital literacy, but are academic institutions doing enough? Yes, they are instrumental in equipping students with essential digital skills, but if you go and check any university, is there a specific course where they're teaching internet governance? Do the students actually know what internet governance is? They live in a world where internet governance affects so much of their daily lives, and they're on the internet 24/7, but how many of the skills do they have to navigate the online world and understand the implications of their digital actions? Through structured programs, I believe students can gain a much more proficient understanding of how they should conduct themselves online. We are lucky to live in a world where research and data are so advanced that academia contributes to our understanding of the digital divide through research and data. There are people designing and conducting surveys and research, which helps us understand and identify the gaps in internet access and usage, which in turn helps inform policymakers and organizations in structuring and making their decisions. So this is another thing: the research and data capabilities of academia feed into policy development. And finally, as we keep pushing, we talk about youth engagement. Universities cannot just offer a course; there have to be youth who, of their own accord, want to engage and talk about internet governance with their peers.
Academic institutions can provide a platform, and this can actually include hosting forums, clubs, and events related to digital inclusion and governance. And what this does is actually it fosters leadership and helps students understand how they should be advocating for their rights online. So, yeah, these are some of the points I believe can answer the question that has been asked. And I guess, Atif, if you can go ahead and speak a little bit more.
Mohammad Atif Aleem:
Yeah, I think I agree with Pavel. And from an academia point of view, he has, I mean, already stated what needs to be done to foster more participation in order to include people. So, from a private-sector perspective, I think the question was how more partnerships can be made to cater to inclusion and digital activities. So, I can say that it is very difficult for any private entity to do this on its own. Okay, it can take one initiative. So, even a company like Google, which is headquartered in the United States of America, in order to implement its Internet Saathi program, had to consult the government of India. And it had to collaborate with another company, Facebook, to deploy that program in around 3 lakh (300,000) villages. So, it’s not an easy task. Especially as young entrepreneurs, you should seek out the different modes of partnership that are available to you. And that can come through guidance from academia, as Pavel has stressed. And it can also come by taking active participation in the various internet governance schools that are out there. There’s a non-profit entity called the Internet Society. There’s the Internet Foundation, which every six months calls for people to submit project proposals. So, that also answers Joshua’s question on how to source funding. I think Tatiana can also give her inputs. But there are many calls from entities like NGOs or the Internet Society. There are hackathons, if you are in the IT industry, from many major IT software firms, where you can present your ideas and win some prize money to take your projects forward. So, there have been these initiatives and I think other people can also put… Yeah, we are running out of time. So,
Rashad Sanusi:
thank you, Atif. And thank you, everyone. I think for further discussion, we can stay in touch, and you can send your questions to us as well, so we can see how we can help. So, I want to thank Pavel, Tatiana, Jiwon and Atif and all the people who participated in this session. We learned a lot about how we can tackle digital inclusion; it is a complex issue, but we can do more for it. Thank you all for your participation, and I hope to see you at future engagements as well. Thank you and have a good day. Bye. Thank you so much.
Audience:
Bye.
Speakers

Anna: 139 words per minute, 679 words, 292 secs
Audience: 112 words per minute, 10 words, 5 secs
Jaewon Son: 160 words per minute, 1174 words, 440 secs
Mohammad Atif Aleem: 161 words per minute, 2349 words, 875 secs
Pavel Farhan: 153 words per minute, 1644 words, 643 secs
Rashad Sanusi: 122 words per minute, 1009 words, 497 secs
Tatiana Houndjo: 168 words per minute, 1720 words, 613 secs
Conversational AI in low income & resource settings | IGF 2023
Dino Cataldo Dell’Accio
This analysis discusses key points and arguments about AI applications in healthcare, the potential of AI and chatbots in low-resource settings, the concept of trust in AI and digital technologies, and the need to establish frameworks for evaluating the reliability and trustworthiness of AI solutions.
Firstly, the importance of user identification in AI applications in healthcare is emphasised. The use of facial recognition for digital identity is highlighted as an effective solution implemented for the United Nations Pension Fund. This demonstrates how advanced technologies like AI can be utilised to enhance security and streamline processes within healthcare systems.
Additionally, the potential of AI and chatbots in low-resource settings is acknowledged. The analysis suggests that these technologies have the ability to address resource limitations and reduce inequalities in healthcare access. To support this argument, a blockchain solution designed and implemented for the United Nations Pension Fund is mentioned. The use of blockchain technology can provide secure and transparent data management, enabling efficient delivery of healthcare services in low-resource settings.
The concept of trust is recognised as crucial in AI and digital technologies. It is argued that the public should have confidence in the solutions and entities that offer these technologies. The analysis highlights the importance of not burdening individuals with technological details, but rather fostering trust in the overall solution. Trust is seen as a vital factor in promoting widespread adoption and acceptance of AI and digital technologies.
Furthermore, the need to establish frameworks for evaluating the reliability and trustworthiness of AI solutions is emphasised. The analysis suggests that not all solutions have the same level of reliability, and there is a need to develop criteria for comparing and contrasting different AI solutions. This would enable the identification of trustworthy and reliable solutions that can be implemented effectively. The speaker believes that such frameworks will promote accountability and transparency in the AI industry.
In conclusion, this analysis underscores the importance of user identification, the potential of AI and chatbots to address resource limitations in low-resource settings, and the value of trust in fostering widespread adoption of digital technologies. It also highlights the necessity of establishing criteria for evaluating and selecting reliable AI solutions, promoting accountability and transparency in the industry.
Olabisi Ogunbase
Digital patient engagement is crucial for maintaining relationships with patients even after they leave the hospital. Platforms like WhatsApp play a vital role in this aspect. WhatsApp is a powerful digital tool that enables ongoing interaction between healthcare providers and patients. It allows doctors, nurses, dieticians, and social workers to provide guidance and answer patient questions. This continuous engagement helps prevent relapses and educates patients about their health conditions. WhatsApp also serves as a platform for passing on education and notices, and as a support system for patients to share ideas and support each other. However, there are some limitations with the WhatsApp platform, such as delays in response and lack of personalization. Implementing AI in healthcare communication, specifically conversational AI, could address these issues and provide real-time, appropriate responses. Collaboration and knowledge-sharing are essential for driving innovation in healthcare, particularly as technology continues to advance. By working together, we can improve digital patient engagement and achieve better healthcare outcomes.
Rajendra Pratap Gupta
Conversational AI is emerging as a promising solution to improve accessible healthcare in low-income and low-resource settings. A study showed that Conversational AI scored 81% in the MRCGP, surpassing human physicians who scored 72%. This highlights the potential of AI to enhance healthcare delivery and bridge gaps caused by the lack of qualified doctors and inadequate healthcare infrastructure. AI in healthcare is aligned with SDG 3 (Good Health and Well-being) and SDG 9 (Industry, Innovation, and Infrastructure).
However, there are concerns about awareness and implementation of Conversational AI in low-resource settings. Some digital health professionals are unfamiliar with its concept and potential applications. This lack of awareness might hinder successful implementation.
Rajendra Pratap Gupta supports using voice-based data through Conversational AI to increase the accuracy and volume of health data, leading to improved healthcare outcomes. Collaboration and a user-centric approach are crucial in AI implementation. Involvement of different sectors, including the private sector, is vital for sustainable business models. The WHO, ITU, and WIPO play significant roles in facilitating AI implementation.
Addressing the digital divide is important, as 2.6 billion people globally lack reliable internet access, hindering effective AI implementation. Efforts should be made to increase internet access in underserved areas.
Education in AI and robotics is necessary, with initiatives in place to develop courses for students and train frontline health workers. This will create a skilled workforce to utilize AI technologies effectively.
The debate on regulation in AI continues, with some advocating for guidelines over over-regulation to maintain flexibility and ethical standards while promoting innovation.
In conclusion, Conversational AI shows great potential in improving accessible healthcare in low-income and low-resource settings. It requires awareness, collaboration, and efforts to address the digital divide and provide education in AI and robotics. Finding the right balance between regulation and innovation is crucial. By addressing these issues, AI can play a significant role in advancing healthcare and achieving the Sustainable Development Goals.
Sameer Pujari
In this analysis, the speakers focus on the transformative potential of technology, specifically conversational artificial intelligence (AI), in addressing existing gaps in healthcare services. They assert that these gaps, particularly in low middle-income settings, can be effectively tackled through the implementation of technology. The argument put forward is that technology, especially conversational AI, serves as an enabler in bridging the healthcare divide.
One important observation made by the speakers is the need for a people-focused, collaborative, equitable, and sustainable approach when integrating technology in healthcare. They emphasize the importance of considering the specific needs of individuals and communities, as well as fostering collaboration between various stakeholders. In addition, they stress the importance of ensuring that the benefits of technology are accessible to all, regardless of socioeconomic status.
The World Health Organization (WHO) plays a crucial role in this conversation by providing guidance and support for the effective implementation of AI in healthcare. The speakers highlight WHO’s efforts in maximizing the value of AI in healthcare through initiatives such as the global collaboration with the International Telecommunication Union (ITU) and the World Intellectual Property Organization. These efforts aim to harness the potential of AI to improve global health outcomes.
Ethics and regulations emerge as important considerations in the implementation of AI in healthcare. The speakers stress the need for ethical approaches to AI development and deployment, ensuring that the technology is used in a responsible and beneficial manner. They also highlight the importance of regulations to provide guardrails and prevent potential misuse of AI. However, it is asserted that regulations should not stifle innovation but instead strike a balance between regulation and technological advancement.
Education and training play a significant role in achieving responsible AI implementation. The WHO offers courses on ethics and governance of AI to promote understanding and ethical approaches among developers, policymakers, and implementers. These courses aim to equip individuals with the necessary knowledge and skills to navigate the complex ethical considerations surrounding AI implementation.
In conclusion, the analysis underscores the potential of conversational AI in addressing healthcare gaps and improving global health outcomes. A people-focused, collaborative, equitable, and sustainable approach is deemed essential in effectively implementing technology in healthcare. The WHO’s guidance and support, along with the development of educational courses, ensure that AI is deployed ethically and responsibly. It is evident that harnessing the potential of AI requires a well-balanced approach that brings together technology, ethics, regulations, and education for the betterment of healthcare systems worldwide.
Mevish Vaishnav
Conversational AI has the potential to revolutionize the healthcare industry by analysing health conversations and generating valuable insights and decisions. This presents an incredible opportunity to gather and analyze health data from billions of people and clinicians, leading to more effective healthcare outcomes. Supporters argue that Conversational AI can be the starting point for generating health AI. By leveraging the power of Conversational AI, healthcare professionals can better understand patient needs and tailor treatment plans accordingly.
Conversational AI also addresses the lack of access to basic health information, particularly in rural areas. Many people living in remote or underserved locations struggle to access crucial information about their health. Conversational AI can bridge this gap by providing easy-to-understand and readily accessible information. Advocates argue that generative AI could eliminate the need for doctors to address basic health problems.
The potential of implementing Conversational AI and generative health AI is widely recognised, although no supporting facts were offered to elaborate on this stance.
Conversational AI is also seen as a powerful tool in patient engagement and health-related education. The effort required in typing and texting often hinders effective communication between healthcare providers and patients. However, Conversational AI streamlines this process by allowing patients to converse naturally, making them feel heard and fostering a better doctor-patient relationship.
Advocates propose the creation of a global generative health AI group under the stewardship of Dr. Gupta. This group would bring together stakeholders, regulators, policymakers, doctors, hospitals, and frontline health workers to set a direction for all involved. This initiative is supported by the belief that the United Nations, as the largest multi-stakeholder and multilateral body, is in a prime position to facilitate this collaboration. This would promote partnerships and support SDG3 (Good Health and Well-being) and SDG17 (Partnerships for the Goals).
The Academy of Digital Health Sciences is working on a report about generative health intelligence. This report aims to explore the role of generative health intelligence in shaping the future of healthcare. While further details about the report’s content or expected release date are not provided, it is expected to contribute to advancements in healthcare intelligence.
Training and deployment of generative AI in healthcare are emphasized as crucial. Understanding how generative AI works and developing the necessary skills are essential for effectively utilizing this technology. The positive sentiment towards this necessity stems from recognizing the potential benefits of generative AI in improving healthcare outcomes. However, no specific evidence is provided to further support this argument.
In conclusion, Conversational AI has the potential to transform healthcare by analyzing health conversations, delivering information in remote areas, enhancing patient engagement, and facilitating health-related education. The establishment of a global generative health AI group, the training and deployment of generative AI, and the ongoing work by the Academy of Digital Health Sciences highlight the need to fully harness the potential of this technology. Further supporting evidence and details would strengthen the arguments presented.
Shawnna Hoffman
During the discussion, the potential of conversational AI to bridge the healthcare gap was highlighted as a significant advantage. The ability of AI to provide 24/7 assistance and access to healthcare globally, through mobile phones, was emphasized. This can greatly benefit individuals in remote areas or those who may have limited access to healthcare services. The convenience and availability of AI-based healthcare assistance can help address health disparities and provide support to individuals in need.
The combination of AI with blockchain technology was also discussed as an efficient solution during crisis situations. It was mentioned that during the COVID-19 pandemic, an AI chatbot combined with blockchain technology helped locate over 10 billion pieces of personal protective equipment (PPE) within the first 24 hours. This demonstrates the potential of AI and blockchain to rapidly respond to critical needs and find effective solutions in times of crisis.
The importance of fact-checking AI and ensuring its accuracy was emphasized. Even though AI is probabilistic and not always correct, it is crucial to verify the information provided by AI systems. One of the speakers, the president of Guardrail Technologies, highlighted the need to put guardrails around AI and fact-check generative AI to ensure its reliability and accuracy. This point stresses the importance of being cautious and critical when relying on AI-generated information.
The discussion also raised awareness about the issue of internet access and connectivity for AI solutions to be effective. It was mentioned that 2.6 billion people globally lack internet access, which significantly hinders the overall success and reach of AI solutions like chatbots. Ensuring internet access for all individuals, especially those who currently lack it, is necessary to fully harness the benefits of AI and provide equitable access to its solutions.
A holistic approach that considers individual needs, even in remote locations, was emphasized. The experience from an IBM Watson project was shared, where access points were set up in various villages, allowing people to reach these points in half a day and gain access to medical information. This approach recognizes the importance of tailoring AI solutions to meet the specific needs of individuals regardless of their location or resources.
Lastly, the speakers acknowledged the complexity of implementing AI solutions on a wide scale. It was acknowledged that the challenge extends beyond just conversational AI and that the complexity of the problem makes it difficult to implement AI solutions effectively. This realistic perspective highlights the need for careful planning, research, and collaboration to overcome these implementation challenges.
In conclusion, the potential benefits of conversational AI in bridging the healthcare gap, providing 24/7 assistance, and access to healthcare globally through mobile phones were discussed. The combination of AI with blockchain technology was seen as an efficient solution during crisis situations. The importance of fact-checking AI and ensuring its accuracy, considering internet access and connectivity, adopting a holistic approach, and addressing the challenges of implementing AI solutions were all key points discussed during the session. Overall, the speakers expressed optimism about the potential of AI while also acknowledging the complexities and challenges that need to be addressed for its successful integration.
Sabin Dima
Artificial intelligence (AI) is widely recognised as a powerful tool that can replace certain skills, while still acknowledging the importance of human involvement. It is acknowledged that AI can outperform humans in certain tasks, offering greater efficiency and accuracy. Notably, humans.ai, led by the CEO and Founder, has achieved significant milestones in AI development, including creating the first AI counselor for a government and an AI capable of real-time conversations with 19 million Romanians. These accomplishments demonstrate the transformative potential of AI across various domains.
Data traceability and ethics are emphasised as critical considerations in AI development. The CEO’s firm has developed the first blockchain of artificial intelligence to ensure transparency and accountability in AI systems. Additionally, they have contributed to research papers on the ethical implications of AI, emphasising the need to address these concerns.
In the context of healthcare, the CEO argues for a bidirectional approach to AI, aiming to understand people’s problems and provide effective solutions. Emphasising human-like interaction, the CEO advocates for grasping individuals’ problems and urgency. They envision an open innovation platform that fosters collaboration and comprehensive problem-solving.
While technology itself is not the issue, optimising its usage is crucial. The CEO suggests that resources for experimenting with AI projects are readily available to everyone. The focus should be on tackling real-world challenges and driving innovation across sectors.
Furthermore, the CEO asserts that trust can be bolstered in healthcare through the implementation of AI solutions. For instance, the CEO references a project where they cloned a doctor’s voice to send audio messages to patients, enhancing patient care and building trust.
To better understand and regulate AI, the CEO proposes real-world experimentation. By implementing AI solutions in specific regions, regulators can gain insights and make informed decisions on regulations and policies.
The urgency for action and application of AI is evident throughout the discussion. The CEO highlights the readiness of technology and the availability of skilled professionals passionate about AI. Encouraging seizing the opportunities presented by AI rather than merely contemplating its potential is emphasised.
In the conversational AI domain, the CEO suggests making the technology more accessible to underserved populations in low-income areas. By developing efficient models that can run on mobile phones, conversational AI can bridge gaps in healthcare access.
Finally, AI is portrayed as a beneficial tool for employment, increasing productivity and reducing human error. The CEO suggests that AI can supervise performance and mitigate errors, potentially enabling employees to work fewer days while achieving greater results.
In conclusion, AI is a powerful tool capable of replacing certain skills but not humans. The CEO and their firm exemplify the transformative potential of AI across various domains. Ethical considerations, data traceability, bidirectional approaches in healthcare, effective technology utilization, trust-building, real-world experimentation, accessibility, and increased productivity are crucial aspects guiding the application and development of AI. The overall sentiment strongly favours embracing AI to drive positive change in multiple sectors.
Ashish Atreja
Generative AI and AI technologies have the potential to revolutionise the provision of medical care by overcoming the limitations of time and location, extending healthcare access to a larger number of people, irrespective of their physical location. The use of generative probabilistic models in combination with rule-based care plays a crucial role in bridging the gap between scientific treatments and patients’ understanding.
Addressing healthcare inequity requires collaboration and the appropriate use of technology. Inequities exist not only among patients but also among countries, states, and healthcare organisations. Through collaborative efforts and leveraging technology, healthcare can be democratised, ensuring equal access to quality care for everyone.
AI technologies can bridge the digital divide in healthcare. Existing care solutions have the potential to become global solutions if properly validated. Humans play a vital role as transformation agents in bridging this gap, working collectively across silos to ensure inclusivity in healthcare.
Prominent figure Ashish Atreja advocates for a global thought leadership group on generative AI in healthcare. He believes in the power of collective work and engaging with global partners to drive advancements in healthcare systems. Collaborating and sharing knowledge can contribute to the development and implementation of generative AI solutions worldwide.
Conversational AI has the potential to dispel healthcare fallacies by providing accurate and reliable information. However, it is crucial that the technology behind conversational AI is based on validated and trustworthy sources. The FDA has a tiered system for validating health-related technologies based on their potential risk, ensuring their reliability and safety.
To ensure the accuracy and effectiveness of conversational AI in healthcare, an automated or semi-automated governance framework is needed. Currently, there is no specific framework to regulate the validation of conversational AI in healthcare. Establishing such a framework would help maintain the accuracy and credibility of conversational AI, benefiting patients and healthcare providers.
In conclusion, generative AI and AI technologies have the potential to revolutionise healthcare provision, extending care to more people while overcoming limitations of time and location. Collaboration, inclusivity, and validation of technologies are crucial in addressing healthcare inequity and bridging the digital divide. Through collective work, the creation of a global thought leadership group, and the implementation of an effective governance framework, the potential of AI in healthcare can be fully realised, improving outcomes for patients worldwide.
Session transcript
Rajendra Pratap Gupta:
Hi, greetings from Kyoto, and good morning, good evening, and good afternoon, and for some late night. As I start this very important panel discussion on conversational AI in low resource settings and low income settings, let me first give you a perspective on this and how we built up this session. So while we were conceptualizing this very important topic of conversational AI, I did reach out to a lot of my friends who have long been in digital health, and I must put this through this forum that a few of them weren’t aware of this topic, which was a big surprise for me. So I think it makes this session all the more important and relevant, because conversational AI is basic digital health. I mean, this is something that we need for the fact that AI is all pervasive, is getting into every aspect of health care delivery, and more than that, what I call the 80-80-80 rule. 80% of the people don’t have access to health care or qualified doctors. 80% of the areas that we have do not have anything that they can call health care, and 80% of the problems people have are treatable by probably OTC medications or non-specialist doctors, and that’s where I think our role comes in very importantly. And if you have to talk about affordable and accessible health care, conversational AI is important. While I was serving the union health minister as advisor, I think my boss was very clear: let’s not force doctors to go to rural areas, because they have studied in urban cities for a better life, better conditions, and rural areas don’t provide that infrastructure. So even if they go there, what will they do? I mean, coming from that reality, from a country which has a large population of 1.4 billion, and knowing the effect of what most of these LMICs pass through, I must also relate one experience I had with one of the country heads of IGF who came to our booth and just asked me a question. We’ve been hearing a lot about generative AI. 
Will it solve our health care problems? And my immediate, instant response was: generative AI is based on data. We do not have that. What will it analyze? So if you’re having a very high expectation that generative AI will immediately solve problems, I’m sorry to say that there has to be a baseline of data, there has to be a baseline of clean data for generative AI to work on. So while there is hope and hype, there is a long journey ahead for all of us. With that said, I must also give you a very interesting example of conversational AI, which is actually chatbots, AI-based chatbots. So in my book, I do mention this example: there is a very, what do you call, highly respected exam that doctors aspire to pass. It’s called the MRCGP in the UK, Member of the Royal College of General Practitioners. The conversational AI chatbots scored 81% compared to human physicians, who scored 72%. So I think the evidence is around that there is a future for conversational AI. In fact, there is a present, if we deploy it very well, in what conversational AI can do. But what we need to create is awareness, because the so-called LinkedIn leaders of digital health didn’t actually know about it when I reached out to them. They’re all friends. But today, those whom you have on the screen are the actual leaders who understand, as are those who are sitting next to me, Dino and Shawnna. So what we are going to do today is to ask people their experience, their expertise and their expectations from conversational AI. And with me, I have Dino Cataldo Dell’Accio, who serves as the Chief Information Officer of the UN Joint Staff Pension Fund and leads the UN Digital Transformation Group. Besides, he has many accolades, but I will just point to the one that he got: the UN Secretary General’s Award for his work in applying blockchain technology to the digital certification of entitlement process for the UNJSPF beneficiaries and retirees. Mr. 
Sameer Pujari, who is among many hats he wears, leads the AI at WHO and is the Chair of AI for Health at WHO ITU focus group. But besides that, he has done a number of things, including Be Healthy, Be Mobile, which is the first mobile app from WHO for chronic diseases. We have Sabin Dima. I think we all in the world of AI and blockchain, I would personally have the highest hope from him given the fact that he’s the first person in the world to merge AI and blockchain together, the founder and CEO of humans.ai. And he’s an entrepreneur who started his first social media at the age of 16. And if you hear him, I bet you that it will change your perspective on what AI and blockchain can do. We have Ashish Atreja. He’s a doctor. He’s a gastroenterologist. He’s currently the CIO and the Chief Digital Health Officer of UC Davis Health. And he’s in pioneering work for digital health. And at least the reason I got him here, he had done phenomenal work during COVID with chatbots, the conversational AI. We have Mavish Vaishnav, the Group CEO of Digital Health Associates, who sits on various government committees for digital health. I’ve been a part of the UN initiative of the Innovation Working Group Asia, where she drafted the roadmap for telemedicine way back in 2013. We have Dr. Olubisi Ongobase, who is a pediatric doctor and quality improvement team lead and mentor. And I’ve seen her work at the World Health Organization. She’s phenomenal and fantastic work that she has done. So what I’m going to do is pass this to my expert panel for their opening remarks on what I have there to say about the conversational AI for low-income and low-resource settings. So I’m going to start with Mr. Dino Cataldo. Dino, over to you for your views on this topic.
Dino Cataldo Dell’Accio:
Thank you very much, Dr. Gupta, for inviting me to participate in this very relevant, very important discussion and topic. As you kindly introduced me, my background is in the practical implementation of technologies such as blockchain and biometrics, specifically facial recognition for digital identity, having designed and implemented a solution for the United Nations Pension Fund to support the proof of existence of 84,000 retirees and beneficiaries of the UN residing in 192 countries. So my initial thought in addressing the challenges that conversational AI and chatbots can present in low-resource settings is, and of course I admit my bias here, first and foremost the identification of the user. As we look at potential use cases, we cannot fail to appreciate the importance, especially in the healthcare sector, when and if there is a relationship between a patient and a system, using this term in a broad sense, that is intended to provide
services, that is intended to provide supported information, of the system having the capability to identify who the end user is. Because, as we can imagine, the response will need to be tailored and aligned with the specific needs and expectations of the end user. So here comes the concept of multiple technologies that, working together, can create a system and a solution that is ultimately able to address the needs of the end user. The proposition here, as we discussed in other panels, is that we have AI, which is a probabilistic technology, jointly working and functioning with blockchain, which is a deterministic technology. The two of them, in conjunction, can complement each other and provide that level of support to offer and confirm certainty about identity, and certainty and reliability about the data that the machine learning models ultimately use to elaborate the responses in the conversation. So I think we can start framing the conversation, the discussion, at least from my point of view, by looking at how the joint functioning of these technologies can ultimately create value and a secure solution for the end user. Thank you.

Rajendra Pratap Gupta:

Thank you, Dino. I think this is Dino's call to everyone: those of us who believe in leveraging AI for health, or for that matter any critical sector, please ensure that the probabilistic technologies have a denominator of deterministic technology. AI in isolation is probably going to create more distrust unless you start merging it with blockchain. This is where this panel clearly has top global experts who have done groundbreaking work in getting both these technologies working together. In health, we always say that whatever you do, the first thing the user regards it with is distrust. When you can say that this technology has a basis for ensuring identity and reliability, that changes, and that cannot happen without blockchain.
And this brings me to my next panelist, who, for the many years that I have known him, is today a man on a mission: the man who leads AI for WHO and the WHO-ITU collaboration. Beyond AI, he knows this space from his work in mobile health and standards, negotiating with maybe 194 countries to get people on board with this emerging technology. Sameer, I want to ask you: what has been your experience? What is your vision? What work are you doing in this area, and what can conversational AI deliver for the LMICs? Over to you.
Sameer Pujari:
Thank you, Rajendra, and thanks for hosting this forum. I think it's a very interesting discussion, especially from two angles. One is conversational AI, because the discussions have drifted very much towards just generative AI, and I think these are two different components that need to be discussed. The second is low- and middle-income settings; that's the key thing we'll discuss here. Let me step back and say, before I go into my own experience, that in these technology forums we often focus a lot on technology. I would urge everyone to take off that hat and think of people. It is very, very important that any discussion we have is focused on the people who are to get the benefit out of it. And these are not one set of people: these are the future generations we're looking at, the current population we're looking at, the care providers we're looking at. Everyone has a role and a stake in this area of work with AI and conversational technology. So technology has to be understood as an enabler; that's the first point. Now, the second point: is it really making an impact? What is the challenge, and how are we looking at it? Rajendra, you mentioned at the beginning of the session that there are gaps in health care services. Even today, in 2023, we are still seeing a massive gap in rural Africa, where women cannot be screened because screening for cervical cancer is very expensive. We are seeing massive gaps in Egypt in screening the diabetic population. And these problems exist not only because of gaps in access to health care, but because health care providers are disproportionately scarce. So technology provides that specific enabling factor, especially conversational AI. We have not even reached the stage of population health components or sexual and reproductive health areas; we are still missing those massive outreach components alongside the health care gaps.
Set aside health education and those things; they are further down the road. I think that's where the role of conversational AI is critical. It has been shown through science that in low- and middle-income country settings, with technology getting cheaper, these technologies have the potential to make a difference across different disease areas in a very effective manner. Our Director-General mentioned that very specifically at the July launch of the Global Initiative on AI for Health. However, we have to be very, very careful about four areas. First is equity, and that's where the main consideration comes into play: when you try to deploy technology in low- and middle-income country settings, the business value is much lower, and hence it is us, this forum, the civil society groups, the international development groups, who need to be conscious of working to ensure equity. The technology companies are not going to push for that. I think that's one thing everyone has to focus on as we look forward. The second is collaboration. It's extremely important that we work together across sectors: health, education, and different areas and domains of work. Third is a focus on sustainable business models. It's very exciting to launch a new product or a new project and go to the field, and 90% of the time I've seen it die because it doesn't have a sustainable business model. So that's a very important component. The fourth point is looking at how it benefits the user at the end of the table. That's the most important: how can you take this AI to the people? That's the discussion that has to be happening throughout. If we can focus on this people-centered approach, this focus on people, we can make an optimal impact with conversational AI in changing the healthcare domain, across the SDG indicators, not just the health indicators.
And that's the key that I would like to see this forum and its members, in their own roles, focus on. As to what you asked me earlier about what we're doing: WHO, together with ITU and the World Intellectual Property Organization, the heads of the three agencies, announced a global initiative on AI to bring all this work together and reduce the verticalization, so that we can work with WHO providing the guidance, standards, and policies, the facilitation side bringing all the groups together, and then actually helping countries implement at the member-state level through the right governance approaches. So at this stage, I'll summarize by saying that there is huge potential in the healthcare market that our member states are seeing, and they are asking WHO to work in that direction. However, as the UN body, we need to ensure that our member states do not get hit by private-sector business models, but that everyone, including the private sector, can benefit from the maximum value of AI in healthcare. Thank you. Back to you.
Rajendra Pratap Gupta:
Thanks, Sameer. It is really heartening to hear what most people refrain from saying: that we should not get trapped by private-sector business models. At the same time, we have some phenomenal people in the private sector, and on your point about people benefiting, we have next Mr. Sabin Dima, the founder and CEO of humans.ai. I really like his approach and the way he's building the AI-verse: he says you will be able to do anything you can think of with AI. It is a totally disruptive approach that Sabin has. Sabin, it's nice to have you; I know you are traveling. I would like to hear your views, and I would like you to speak for a minute about the work you have done in this field, how you are disrupting it, and what you see as the role of conversational AI in healthcare. Over to you.
Sabin Dima:
Hello, everyone. Thank you so much for this opportunity. I agree with Mr. Sameer that we need a human-driven approach. I'm Sabin Dima, the CEO and founder of humans.ai. We have been in the AI field for more than four years. In this new AI era, we are a company that already has some world premieres. We created the first AI counselor to a government. We created an AI that is able to have a conversation in real time with 19 million Romanians, and we use all of those opinions to train an AI. On the other side, the decision makers can have a conversation with one entity, one AI, as if they were talking with 19 million Romanians. We strongly believe that artificial intelligence is the greatest tool ever created, and in order to democratize it, we created an AI framework that makes it very easy to create a narrow AI. I don't think that AI is able to replace humans, but it is able to replace some skills, and we can help with that. We take two major aspects into consideration. One is the data, and for that we created the first blockchain of artificial intelligence, in order to have data traceability, to create what is called explainable AI, and to make sure that if I give an opinion to this governmental AI, the AI will be trained with my opinion as well, and no bad actor can delete that opinion. The other is ethics. We have published a lot of research papers on ethics in AI; the latest was presented at Imperial College London. Regarding AI in healthcare, AI is certainly going to democratize access to health, but I see a bidirectional approach. Usually we use conversational AI to get answers: we interact with the AI, we ask it questions. Everybody wants to solve people's problems, but I think we are not aware of those problems.
So we should engage in a conversation with people, in a human-like interaction like the one we're having right now, to understand what people's problems are and, probably most important, what the sense of urgency is. And for that, we're not asking governments to invest in infrastructure; we're not asking governments to invest in hospitals and so on. We need just an internet connection. In some cases, there are AI models so efficient that they can be encapsulated in a low-end tablet, and we can ship it to remote places. So I think we should build this bidirectional conversation: on one side, asking people so that we become aware of their problems, and on the other side, encapsulating different doctors' skills to be able to respond, and creating this open innovation platform, a living organism in which any startup can participate and bring different skills under the same core.
Rajendra Pratap Gupta:
Thank you, Sabin. Coming to Mevish. Mevish, you have been in this field, writing policies and roadmaps for telemedicine, and now this new field of conversational AI. Given that you are involved in academia, which is expected to show the roadmap for the future to those in this field, what is your view of conversational AI?
Mevish Vaishnav:
Thank you, Dr. Gupta. Hello, everyone. I'm Mevish Vaishnav, the Group Chief Operating Officer at Digital Health Associates and the Academy of Digital Health Sciences. I thank the DC Digital Health for this extremely important session. While we talk about generative AI and large language models, I would say that the basic and disruptive entry point for LLMs and for generative AI is conversational AI. Just imagine a scene where billions of people, patients and populations, are speaking about health issues and clinicians are addressing them. It would be a phenomenal opportunity to analyze these conversations and create generative health artificial intelligence, which would be different from general-purpose artificial intelligence, because health is very technical; it is clinical. So I see a great opportunity for conversational AI to be the starting point for generative health AI, which will over time largely eliminate the need to use doctors for basic health problems, because most people in rural settings, in semi-urban areas, or even in urban areas need basic information, and this can be handled by conversational AI driven by generative health AI; the two are dependent on each other. Without this data, we actually cannot do anything. So I see a phenomenal opportunity, and I think we should build upon this. Thank you.
Rajendra Pratap Gupta:
Thanks, Mevish. Building on what you said about generative health AI: when people start interacting over voice rather than texting or writing, which limits the ability of, I won't call them illiterate, but digitally illiterate populations who have not yet learned to write, and that is a major part of the population. In fact, yesterday on a panel we were noting that 2.6 billion people are still not connected to the internet, and with what Sabin was saying about shipping tablets to low-resource settings, just imagine: if people start talking, the quantum of data that comes out is going to be exponentially more than what we have today, because today you have to type, you have to text, and that is what gets captured for analysis. The moment you start analyzing voice-based data, it is going to be exponentially more than what we have. So I think accuracy will increase, and it will become much more worthwhile. At this point, I would like to bring in Ashish Atreja, someone who has actually done a lot of work on this during COVID and even earlier. Ashish, what has been your work in this field, how do you see this field shaping up, and what is the role of conversational AI? Over to you.
Ashish Atreja:
Dr. Gupta, it's a pleasure to be here, and thanks for having me. Greetings from California; it's 1 a.m. here, and I'm really excited about this. We just launched the largest network in the United States on generative AI in healthcare, called Valid AI. A very brief background about me: I did my medical school in India and then came to the U.S. to do public health and then informatics. I practiced as a physician for the first 10 years of my career, and I now work as an informaticist and technologist supporting technologies for University of California Health, as well as working globally on many things. I am still an adjunct professor at a medical school in India. I'm considered an app doctor, because I started building apps around 15 years ago, and these were mostly deterministic models: we took the rules from the guidelines. The biggest gap we see, very eloquently expressed by the previous speakers, is between the efficacy we see from medicine, what is possible today, for example, 99% of patients' blood pressure can be controlled with current medicine, and reality. The real gap, which Dr. Gupta mentioned, is 80-80-80. Many patients don't even get access to a doctor; they can't even drive to one. The doctors have a waiting list, and even if the doctor prescribes a medicine, they do not have time to explain how to take it and how to do other things like salt reduction. So there is a huge difference between the care patients actually get in their homes and what is possible. That is because most human medical care globally, whether in the United States, Africa, or India, has medicine and care locked into the same time and the same space as the physician. Everything has become physician-centered: you have to come into the same clinic or hospital to get care. What generative AI and AI can finally do is unlock care from time and space.
So you can provide care anywhere, and you do not need a physician. You can extend beyond one-to-one physician-centered care to what we call exponential one-to-many care. If I have to say the same things about blood pressure control, I can make myself into a conversational AI bot, now, with generative AI, within a matter of weeks, and I can deliver it not only to the people I see at University of California; I can deliver it across California, across the US, and really across the globe. So any solution we now make, if it is validated the right way, can immediately become a global solution. We are finally at the cusp of unlocking the biggest supply-demand issue in healthcare by democratizing it completely. And if you combine the rule-based part, the guidelines that provide rule-based care through text, with the generative, probabilistic part, you are unleashing the science of rule-based care together with the conversation that patients need. Rule-based care is the scientific way physicians work, but conversation is how patients receive care. That has always been a barrier to bridge, but now, with the combination of these two technologies, which we call hybrid AI, you can combine traditional physician-centered care with the patient-centered care that everyone globally needs today. So I'm really excited about this. We have all the US states now looking at this, and we really need to go with a problem-centered approach first and really look at equity. Inequity is not only among patients; it exists among countries, among states, and among healthcare organizations. If we do this the right way, through collaboration, which is really what I'm looking for here, we can finally make this the most inclusive, most democratic way of providing care globally, and go from a digital divide to a digital bridge. And I think that onus is on us, not on technology.
We humans are the transformation agents who must bridge the gap, and it is a big calling for us to leverage technology while putting our own DNA and purpose into bridging that gap.
Rajendra Pratap Gupta:
Thanks, Ashish. And it is very notable that you launched Valid AI, I think at HLTH in Vegas, which is going on in parallel as we sit here. I move to the next expert panelist, Dr. Olabisi. She is a pediatric doctor, and she has done phenomenal work using WhatsApp and other tools in underserved populations. Coming as a clinician, like you, Ashish, she has done phenomenal work. So, Dr. Olabisi, we would like to hear about your work and your suggestions. I think you have a presentation, so I'll ask the technical team here to allow you to share your slides briefly. Hello. Hello. We can hear you, Dr. Olabisi. Please go ahead.

Olabisi Ogunbase:

I want to thank you very much, Professor Gupta, for inviting me to this forum. So I'm a pediatrician.
Okay. Greetings to everybody. I work in a general hospital, a maternal and child centre. We see children when the mothers bring them to the hospital, and then they go, so we have no contact with them thereafter. So we thought about how to continue and ensure patient engagement: how do we maintain some form of interaction when our patients leave us, so that we can prevent relapses and so on? That was what got us thinking about what to do when patients leave us, and that is how we came to digital technology and to involving patients in their own care after they leave. We decided to use WhatsApp as the means by which we communicate with our patients and maintain that relationship with them, even when they leave us. Please give me a minute. Okay. I'll take my presentation in this outline: a brief introduction, definitions, objectives, what we actually do, advantages, and so on. For us, WhatsApp is a digital technology through which we use tools to maintain that relationship and engagement with our patients. Mobile phones are what we have, and mobile phones are what the patients also have, so that is the digital tool we're using. Patient engagement is how we involve patients in their own care, and digital means we're using electronic means to ensure that. When we started, our objectives were: how can we pass information to our patients? How can we pass them notices of what's going on in the hospital? How can we educate them beyond the little time we have? You can imagine that in developing countries there are so many patients, so you don't have much time to engage with them when they come. So what means can we use to pass education to the patient?
The forum also serves as a support system, because the mothers engage among themselves on the WhatsApp platform. They support one another, they ask questions, they share ideas. At those times we just stay like a fly on the wall; we don't say anything. But when they ask us questions, we come in and answer them. There are many advantages to this form of digital engagement using WhatsApp. For us, it optimizes efficiency and reduces unnecessary visits to the hospital: we can answer some questions, and they don't need to come in. It improves quality of life, patient safety, and health outcomes, because we are still engaging with them; we still have that contact and relationship. As of a few days ago, as you can see here, there were about 395 participants, and this is just one of the WhatsApp platforms; every clinic has its own dedicated WhatsApp platform. In the picture, we are talking about weight gain and their babies' weight increasing. The mothers send us pictures about different things concerning their children. Here is one saying, "the hair on my baby's head has gone off, what is happening?" Here is another, like, "what's happening to my baby's cord?" So they send pictures, they type questions, and sometimes they even send voice notes: "Doctor, listen to my baby's breathing. I'm not comfortable." They record the baby's breathing and send it to us. So we are able to listen to the breathing, read their text messages, and see the pictures they send. These all come from them. This slide shows the information we share: sometimes it's World Breastfeeding Week, World Pediatric Day, or World Hand Hygiene Day, and we use the forum to educate the mothers on the platform.
These are just examples of interactions that I picked up from the WhatsApp platform. They ask about immunizations, or they say, "my baby has a cough, Doctor, what do I do?" And you can see that we are able to interact with them: okay, come to the hospital, do this first aid, let me see you tomorrow. So we are able to book appointments and see them, and that improves their experience. These are pictures we also send to them: this fontanelle is normal; this is how to position the baby better when breastfeeding. The picture on the bottom right is a picture of a rash. They send us, "Doctor, look at what's on my baby's skin. What is that? Do I come to the hospital? What do I use?" Of course, we don't really prescribe on the platform, but we can educate, we can inform, we can say, okay, I need to see you in the hospital, please come at 9 or 10 o'clock. So it's a forum. And these are pictures of their babies that they send on the platform. When their babies are six months old, they say, "this is my baby, I've completed exclusive breastfeeding." They're excited, because we've talked about exclusive breastfeeding, so they send pictures of their babies. As I said before, they support one another: when a baby is one year or six months old and they send a picture, all the mothers congratulate them: "Oh, you've done well, you've breastfed exclusively." We all know here, as we talk about digital health, that breastfeeding is one of the childhood survival strategies, so it's a big thing for us. So, in conclusion, I've talked about how, at the Maternal and Child Centre (MCC) in Lagos, we've used the WhatsApp platform as one of the digital tools to engage with our patients even after they have left the hospital. The consultation shouldn't stop in the doctor's office.
As the last speaker said, care should continue beyond the doctor, so that we can prevent relapses, continue to educate, and so on. In conclusion, the key words are digital patient engagement, digital technology, and mobile health, using the smartphones that the doctors, the dieticians, and the nurses have. And this platform is not only doctors: everybody is there. The nurses are there, the dieticians are there, the social worker is there. If a question comes that concerns the nurse, she answers; if it concerns the pediatrician, I answer. Everybody is on that platform, and it is a really useful platform for us. So thank you very much for listening.
Rajendra Pratap Gupta:
Thank you, Dr. Olabisi. I think this convinces us of what is possible if you can use WhatsApp to bring such a change, and you get photographs from your mothers showing what the child looks like after six months. You pointed out that you don't prescribe over WhatsApp, but with what my friend Dino, who is sitting on my right, has been working on with blockchain, and with what Sabin is doing, I think the moment we are able to put identity within the system, the day is not far off when a prescription on WhatsApp may be legal as well. That is the day we should look forward to. Seeing your presentation and the work you have done, I would say that low-resource settings are the high-opportunity settings for conversational AI. And this brings me to my next panelist, Shawnna Hoffman. Shawnna has led global roles at IBM Watson and, before that, with Dell; she is revered in this field, and she is doing path-breaking work at Guardrail Technologies. Shawnna, over to you on what conversational AI can do, and what you would say about ring-fencing the negatives around conversational AI. Over to you.
Shawnna Hoffman:
Thank you so much, Dr. Gupta, for having me here today. And Dino, I love sharing the stage with you; you have so many great insights. I've been in artificial intelligence for almost 20 years now, and I have seen it at its best, and I have also seen it at its worst. When Watson won Jeopardy back in 2011, I knew that conversational AI had taken the front seat, so I joined IBM at that time and led a Watson practice for Watson Legal. When COVID-19 hit, I was chosen as one of the few to lead our COVID-19 solutions to the marketplace, and we had three. One thing I realized after leading an AI practice was that AI wasn't enough: we needed to be responsible, and that responsibility was tracked and traced through blockchain. So that was the combination of both. The three products we brought to the market within the first three weeks of the shutdown were really important. Remember when we couldn't find masks and gloves, and it was a real challenge to get PPE across the globe? We had an AI chatbot solution combined with blockchain to track and trace all of the materials. We found over 10 billion within the first 24 hours, connecting people all over the globe. I will say that one of the most amazing things about AI for healthcare is that individuals who often can't travel to a hospital or a doctor do often have mobile phones. So conversational AI is extremely important for reaching around the globe, so that individuals have an opportunity to get some form of healthcare. Maybe it's unusual, not traditional, but it answers those problems, as our previous speaker just said. And I love what you're doing to bring that, especially to mothers. I've got three kids of my own, and did I have a lot of questions when they were little! Every little cough makes you a little scared about what's happening with them.
Other solutions we've worked on include, of course, the supply chain, and, odd as it sounds to say, not that doctors are a supply, but during COVID-19 they were really lacking in so many areas. So we were able to move doctors around, again through our chatbot: doctors were able to chat to say, hey, we're available, we're happy to go anywhere in the globe, and we could connect them with the hospitals most in need. Again, a blockchain solution with AI. Conversational AI has such potential to bridge the healthcare gap, and I would point to five areas we have worked on throughout the years. But I have to say this before I even mention the five: AI has been around since 1956, and the greatest excitement I have ever seen has come just this past year. A system that used to cost my clients over $20 million to put in place, that was Watson, is now conversational AI that is free to the globe. So we're seeing a lot of hype and excitement, but do know that there are a lot of use cases from the 15 years that IBM Watson has been around; they've really solved a lot of these problems, so that is a good company to go back to with those questions. I don't work for them anymore, but many of us who have are very willing to share our experiences. So let me jump into the five. First, accessibility, which was mentioned by our previous speakers: reaching the remote and underserved populations that lack access to traditional healthcare. Again, many of them have access to mobile phones, although, as we discussed yesterday and as you noted here too, 2.6 billion people don't have access to the internet. We need to fix that to give them an opportunity to be part of this global health system. Second, I love the consistency of AI, so 24-7 availability is my second one.
It’s extremely important to be able to have doctors available, which we’ve done in the past. With Watson we did a lot, even remote surgeries. That kind of gets into robotics. Again, AI comprises over 90 different components; conversational AI is only one of them. You can do remote surgeries from one end of the world to the other, and so we saw some really amazing things. But again, that 24-7 availability with conversational AI is extremely important, and it is consistent. I will say, I’m the president of Guardrail Technologies. One of the reasons that we exist is to put guardrails around AI. AI, as Dino had mentioned, is a probabilistic model. It is not correct 100% of the time. Sometimes it’s even really incorrect. We’ve been working in the medical space in AI. I’ve worked a lot with various different hospital systems in the U.S. I just spoke at one about six weeks ago, and we dove in with 30 of their top physicians to figure out what we needed to do to answer the problem of the AI being wrong and the AI hallucinating. It can be very scary when it gives the wrong information; it could actually cause death. So we need to be careful. We have guardrails. We fact-check the generative AI; that’s part of our program. But make sure that you are fact-checking it, because it is going to be incorrect at times. Even the best systems out there, because the model is probabilistic, are not going to be 100% correct. There’s nothing wrong with that, but we just need to make sure we’re adding that extra layer that confirms we are fact-checking our information. Third, education: it’s a great educational tool, making sure, as you saw, that mothers know how to breastfeed their babies, what the different rashes look like. I love this one: language and cultural sensitivity is one of my top five, because AI can be customized to the local language, the local responses to things. It can be really cool. There’s some AI out there. 
I just was talking to one of our previous guests, and he mentioned that they have a movement program that he is in the midst of finishing up and working on a patent application for. As an individual moves, the AI can watch the movement and see what possible types of medical issues the individual has. There’s some really good work on language and cultural sensitivity, and then also, from there, being able to take that and say, okay, that’s a cultural thing, but this is just unusual, unique; they may have symptoms of other things. Again, very customizable to the individual. And then my last one, efficient triage, with which we can identify urgent medical issues, again 24-7, not having to wait for a doctor’s office to open. So thank you.
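The fact-checking layer described above, a guardrail that refuses to pass along generative-AI claims it cannot ground in a vetted source, can be sketched in a few lines. Everything here (the sample knowledge base, the word-overlap scoring, the 0.7 threshold, the function names) is a hypothetical illustration for this report, not Guardrail Technologies’ actual product.

```python
# Minimal sketch of a guardrail that fact-checks generative-AI output
# against a vetted medical knowledge base before it reaches a patient.
# Scoring here is naive word overlap; a real system would use retrieval
# and entailment models. All names and thresholds are illustrative.

def _tokens(text: str) -> set:
    """Lowercased words with trailing punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split() if w}

def grounding_score(claim: str, source: str) -> float:
    """Fraction of the claim's words that also appear in the trusted source."""
    claim_words = _tokens(claim)
    if not claim_words:
        return 0.0
    return len(claim_words & _tokens(source)) / len(claim_words)

def guardrail(answer: str, knowledge_base: list[str], threshold: float = 0.7):
    """Split an AI answer into sentences and flag any sentence that is not
    sufficiently grounded in at least one vetted source passage."""
    results = []
    for sentence in filter(None, (s.strip() for s in answer.split("."))):
        best = max(grounding_score(sentence, src) for src in knowledge_base)
        results.append((sentence, "verified" if best >= threshold else "needs review"))
    return results

kb = ["Complete the full course of antibiotics even if you feel better."]
checked = guardrail(
    "Complete the full course of antibiotics even if you feel better. "
    "Antibiotics also cure viral colds.",
    kb,
)
for sentence, status in checked:
    print(status, "->", sentence)
```

The point of the sketch is the extra layer itself: the probabilistic model’s output never goes out unreviewed, and anything it cannot support from the vetted corpus is routed to a human.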
Rajendra Pratap Gupta:
Thanks, Shawnna. It is very interesting to first hear from those who have been on the clinical side and have used it at scale. So there’s no doubt about the effectiveness. In fact, it’s about saving lives. As I said in the beginning, the three As: 80% have no access, 80% can’t afford it, and 80% have acute problems, which means they don’t always need to go to a doctor. These are all As: access, affordability, and acute. So the fourth A would be artificial intelligence, of course. But given the fact that we are DC Digital Health and we believe in tangible outcomes for what we discuss here, and that we have taken up the topic of conversational AI, I’ll go back to my expert panelists and ask: given the discussion we have had, if you had a clean slate, what would you recommend, Dino, in terms of our pathway for the next one year for this field? Thank you.
Dino Cataldo Dell’Accio:
Thank you very much for that question and also for that call to action. I think the previous sharing and comments were extremely relevant. I really liked the observation on human centricity, the distinction that was made about the gaps, how to bridge the digital divide, the concept of guardrails that Shawnna just mentioned. And so here again, I like to talk from personal experience. I’ve been working for the United Nations for 22 years, and my background is actually in auditing. For a large part of my career, I was the Chief IT Auditor at the United Nations before becoming the CIO of the UN Pension Fund. So I have a professional bias toward assurance, toward evidence. And I think that one of the implicit concepts that, if I may, all the speakers on this panel have touched upon, but we have not yet made explicit, is the concept of trust. In order to have attention to human centricity, in order to bridge the gap, in order to enable human beings to approach, to make use of, to be supported by these technologies, I think we also need to build a framework of trust, so that they don’t need to understand the distinction between a conversational versus a generative AI. They don’t need to understand the distinction between a blockchain and a distributed ledger technology. They don’t need to be bothered with those technological details that are often too complicated to explain and to verbalize. They need only to be able to trust the solution and the entities, whether private or public, that are offering the solution. So I believe it is incumbent upon us working in this field to come together and start building, bottom up and of course also top down, a framework of generally accepted criteria and principles that can be utilized to support the reliability and trustworthiness of these solutions and technologies, where and if they are indeed implemented in healthcare or, for that matter, in other areas of our society. 
So I think there is a need to start now, looking at the fact that, as we all recognize, this powerful technology can be used for good or for bad, and not all solutions have the same level of reliability. So there is a need to start having some sort of criteria that will enable us to compare and contrast, to make assessments, and then to provide a level of assurance. Ultimately, I think the human-centric approach calls for this: users deserve to know that what they are going to use is trustworthy and reliable.
Rajendra Pratap Gupta:
Thank you for raising this very important point of addressing the core issue of human centricity plus reliability. I think it is a twin opportunity and a twin challenge too. And this brings me to Sameer. Sameer, you lead all the AI initiatives at WHO, and WHO is the multilateral body that every government looks up to. I think there is now an excitement across the world for generative AI and AI for health. Everyone is waking up. So what is your advice and the roadmap for the next one year, or the action plan, if I were to call it that?
Sameer Pujari:
Thank you, Rajendra. And you rightly said all the member states are actually getting very excited about this work, from both the positive side and the negative side. By negative excitement I mean the scare, or the fear, of what damage it can do, and the positive excitement is the opportunity. So we see an unprecedented push from member states. Normally there would be a two-way discussion, but this time member states are actually coming to us and asking, and not just now, but since December last year, when ChatGPT actually started picking up speed. And that’s when it started off. So WHO, through this process, has put out a position in the June WHO Bulletin, where we have clearly articulated the value possibilities of generative and conversational AI. In summary, the one sentence that summarizes that article for everyone here is: WHO’s position is to be cautiously optimistic and apply the right safeguards. What we are saying is, we have to be cautious, we have to be optimistic, as long as we have the right safeguards. And when I say safeguards, it is the ethics approaches. Now, ethics is a very common word, almost a moral word to put out and discuss, but what matters is the application: the ethical use, development, and deployment of technology, or AI specifically. More importantly, because AI has more power than before, this is a critical part. And WHO has guidance, which it’s working with its countries to deploy. So it needs to be not a knee-jerk reaction but a more sustained governance approach for AI, because AI is here to stay. It’s already with us. It will make a difference in the way things go forward in terms of healthcare, in terms of development in general, in terms of education, agriculture. So I think what’s important is that we take a detailed, systematic, creative approach in these regards. Also, regulations. 
I mean, we don’t want regulations to again become a whip for our times over the developers. What we want to make sure is that regulations are there to safeguard and provide guardrails for the right technology and the right products to be deployed across the domain. And I think it is also humbling this time to see that it’s not just coming from the countries; it’s also coming from the developers, the industry, the private sector. Look at the recent discussions at the US Senate with all the CEOs, where there is a call for regulating AI through the governments and the UN. I was in Copenhagen just last week, where there was discussion at the UN High-Level Committee on Programmes on AI and regulations and governance. And there’s a huge push from the Secretary-General on putting this work together. So I think that’s where the world is going, and we will need to prioritize that, again keeping in mind that it has to be people-centered, not technology-centered. The regulations and the ethics should not be technology-controlled or technology-centered, but people-centered: how is it going to make an impact? And they have to be adaptable for different countries. As you mentioned, 194 countries, at different stages. And again, for the first time, we’re seeing a rather smaller gap in terms of preparedness between the high-income and the low- and middle-income country settings. There is probably some parity there, a similarity across the board. So I think it’s important to manage that. Use the power of collaboration; that’s what we’re leveraging through WHO, through ITU’s work, and a lot of the colleagues from ITU are there in the forum, to leverage what already exists. As WHO, we’re creating normative guidance, which is science-based and evidence-based, and deploying those science-based ethical and regulatory approaches. 
And my call for this community here, which has a mix of a lot of expertise, including many grassroots workers, is to ensure that the guidance being deployed, or the products being used, are not technology-centered. They have to be science-centered. And when I say that, the guidance, which is the content that is coming into it, should not be written by the developers. And this happens: developers pull content from Google. They are more focused on the application part of it than on the content. But as actual healthcare providers, our job is to make sure that the content is governed through the rightful mechanisms and that the process is right. Technology is the enabler, which is a massive, massive boon for the healthcare process. And if we can do that combination right, in an ethical and regulated fashion, pushing towards the right governance mechanisms, I think we will have a successful one year of AI. And I hope we can come back next year in this forum and say, we just talked about it in 2023, but in 2024 we are making an impact. Back to you, Rajendra.
Rajendra Pratap Gupta:
Thank you, Sameer. In this annual IGF, the 18th forum, we had the entire high-level panel constituted by the UN Secretary-General discussing AI at this meeting. One of the things I saw in this forum is that most of the sessions this time are on AI and generative AI, and around various guidelines. And you made this point of being cautiously optimistic, and also about regulations. But with a technology that is evolving, how can you regulate? Do you think AI will regulate itself, or will there just be guidelines that people should follow? And the work that Ashish, Mevish, and others are doing, will that be a good starting point? Because a guideline gives you a general direction and doesn’t stall innovation, whereas regulation, beyond a point, can become a hindrance to innovation. So do you think we should stick to guidelines rather than regulation for now, for the next two, three years?
Sameer Pujari:
It’s a great question, Rajendra. And I think there’s no blanket answer to it. Even the European Commission’s EU AI Act is looking at segregating the different ways we regulate products. So it depends a lot on the solution, on what kind of solution we’re talking about, and on the impact of that solution, to define whether we can work with guidelines or regulations are needed. And I think it’s very context-specific. So let me give you an example. Take tobacco control or diabetes prevention and management: it is prevention. There is a lot of content available. These are healthcare programs which have provided guidance but which don’t have the outreach. Such simple guideline-driven programs for health education, personally, I think can be very quickly distributed if there’s a small mechanism for testing that the right content is in there. There is a risk, on the other hand, that if you don’t control or regulate this content, it can do damage by providing misinformation, which is a big concern. So there are some products which I think can be loosely regulated, guideline-driven, but there are some specific areas, such as cancer screening products or diabetic retinopathy screening programs, where it needs to be regulated. Now, I get into this dialogue all the time about whether regulation is over-controlling innovation. And I think that’s the thin line we have to draw: how do we preserve the value? The member states, the countries, want to use the value. They have seen the problems they have and how technology can help. So I think the intentions this time are more focused on how we can maximize the value of technology. But at the same time, having that regulation is important, because without regulation there’s a massive risk of misappropriation and misuse. 
So I think the level of regulation and the control of regulation need to be properly adapted, charted, and defined, but it is important to make sure we are not drifting into an open sort of platform where anyone can do anything, especially in healthcare. And I ask people this question: would you say the same thing when it comes to your finances? Would you allow non-regulated digital financial models to work across the board? Would you be open to that? Health goes two domains further. Yet people are more worried about their money than their health, unfortunately. And that’s where the answer comes in. I think it is important to regulate rightfully, so we can benefit from the value of the technology opportunities while at the same time controlling, or safeguarding against, the damage it could cause in the long run.
Rajendra Pratap Gupta:
But Sameer, even after the Sarbanes-Oxley Act in the financial markets, we had the subprime crisis. We had banks collapse a few years back, and even this year the Silicon Valley Bank collapse. So even when we over-regulate, we still get these outcomes; wherever there’s money involved, there will be problems. You made a very interesting point at the very beginning, and I really appreciate your forthrightness: not to get into the trap of the private sector. But I think the experience of over-regulation hasn’t served the purpose. That has been the government’s way of putting it, saying we are pro-people, so we need to safeguard them, but it has safeguarded neither the people nor the organizations. At the end of the day, the sector bleeds. But I take your point. Coming to the point you made about people-centricity, about people having trust and ethics around it, I would go to Sabin, who is actually building at world scale. Sabin, what I understand from my experience of leading consumer-facing organizations is that trust is a matter of value. If I get value out of humans.ai, I will love it. If I get value and benefit out of the products and services you roll out in AI, generative AI, anything, that will create trust. So what would you say about conversational AI products, or healthcare in general, in terms of using AI to create that value and thereby create that trust? Because value is the precursor for trust, not the other way around.
Sabin Dima:
I’m 100% sure that the technology is here. So it’s not a problem of technology anymore. Even we, at this round table, have all the resources to start experimenting with a project. I believe a lot in learning by doing. I believe that if we, as a group, take on one use case together, we will help the regulators to better understand, and we can fill this gap between the real world and the regulation of the area. So I would choose the easiest win that we can get, probably in aftercare. For example, we have a project with a big pharma company. We saw that in our region there’s a huge dropout rate. People are not finishing their treatments: after three days, they’re feeling better and they stop taking their antibiotics. So what we are doing is cloning the doctors’ voices, because the doctor is the only authority in your life when you’re speaking about medical treatments. We are sending audio messages on WhatsApp with the voice of the doctor saying, hey, Sabin, I know that it’s day three and you’re feeling better, but it’s important to finish the… So what I’m saying is: if together we implement just one solution and choose one region, we will learn a lot from the real world about what our initial ideas were and what the real use case outputs really look like. So I’m willing to help with our technology, our team, and our expertise to create together a real-life use case in conversational AI for healthcare. And in one year from now, we will know more than we know now.
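The day-three adherence nudge described above boils down to a simple scheduling rule: for each patient still inside their course, queue the cloned-voice message on the day dropout typically begins. A minimal sketch of that logic, with the actual delivery step (voice cloning, WhatsApp) stubbed out and all patient names, dates, and the day-3 choice assumed for illustration:

```python
from datetime import date

# Hypothetical sketch of the adherence-reminder logic described in the
# session: patients tend to quit antibiotics around day 3, so a reminder
# in the doctor's voice is queued for exactly that day. Delivery is stubbed.

REMINDER_DAY = 3  # day of treatment when dropout typically starts

def due_for_reminder(patients: list[dict], today: date) -> list[dict]:
    """Return patients whose treatment reaches REMINDER_DAY today
    and whose course is not yet finished."""
    due = []
    for p in patients:
        day_of_treatment = (today - p["start"]).days + 1  # day 1 = start date
        if day_of_treatment == REMINDER_DAY and day_of_treatment <= p["course_days"]:
            due.append(p)
    return due

def send_voice_reminder(patient: dict) -> str:
    # Placeholder for the real step: a cloned doctor's voice message on WhatsApp.
    return (f"Hey {patient['name']}, I know it's day {REMINDER_DAY} and you're "
            f"feeling better, but it's important to finish your treatment.")

today = date(2023, 10, 12)
patients = [
    {"name": "Ana",  "start": date(2023, 10, 10), "course_days": 7},
    {"name": "Luca", "start": date(2023, 10, 5),  "course_days": 7},
]
for p in due_for_reminder(patients, today):
    print(send_voice_reminder(p))
```

Run daily, a rule like this is the whole intervention; the hard parts in practice are consent, the voice-cloning pipeline, and messaging-platform integration, none of which are shown here.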
Rajendra Pratap Gupta:
And Sabin, may I take the liberty of saying your message on your behalf? What you always say is: for AI, start now. Do something rather than just thinking about it.
Sabin Dima:
Exactly. The technology is here, we have all the skills. I see a lot of people passionate about the subject. So we need to start doing.
Rajendra Pratap Gupta:
That’s the best message. And I really remember the line you said last time, that when you heard yourself speak in Portuguese, you could actually see what phenomenal opportunities exist before us, and the project you’re doing where you clone doctors’ voices and convince patients to carry on with their treatment. So the fact is that conversational AI has multiple use cases. And one thing you should understand is that we carefully picked this panel, not because of friendship, but because of the complementary things brought to the table by thinkers, doers, and regulators who are critical to the success of conversational AI. That’s why we have blockchain, we have AI, we have WHO, we have UC Davis, we have the Academy of Digital Health Sciences, we have Shawnna. The beauty of this panel is that we should be able to get to something decisive which we can measure over the next one year. Mevish, coming to you: given that you run a couple of initiatives in digital health, what would your action plan be for the next year?
Mevish Vaishnav:
I believe conversational AI can actually serve as a powerful tool in patient engagement, educating people about the facts behind a particular health-related issue. As you rightly said, Dr. Gupta, imagine the effort that would go into typing and texting; conversing would actually leave an important and exponential impact. We all know the time a doctor spends with a patient is very short, but if we have conversational AI, patients will be happy that they have been heard. At the Academy of Digital Health Sciences, we are working on a report on generative health intelligence, and we will be releasing it soon, covering all these topics on the role generative health intelligence will play in shaping the future of healthcare intelligence. We will be happy to collaborate with you all. I would also like to say that, within the Dynamic Coalition on Digital Health, Dr. Gupta, you should take the stewardship in creating the global group, because the UN is the largest multi-stakeholder and multilateral body, and get everyone under one roof to form a global generative health AI group where leaders, regulators, and policymakers come together to give direction to all stakeholders, doctors, hospitals, and frontline health workers, so they understand how generative AI works, how to get trained on it, and how to deploy it. We already have a course on digital health at the Digital Health Academy, and you can visit the website to know more about it. Thank you.
Rajendra Pratap Gupta:
Thanks, Mevish. Ashish, over to you, after the grand initiative that you launched this week. What are the opportunities for stakeholders to work together? Because the worst thing that happens in health is that we all keep doing our work in silos. We rarely connect, let alone hear and listen and come together to act. When I took over as chairman of the Dynamic Coalition on Digital Health at the UN IGF, one of the things I have done over this last one year is to get all the people on the same wavelength, pick up a project, and deliver it. So every year, for all the Dynamic Coalitions that at least I chair, we come up with tangible outcomes. Given your leadership and your pioneering work, what do you suggest we should be doing in the next one year? We have heard the previous experts speak from positions of authority and influence.
Ashish Atreja:
Happy to. I think one of the critical things is that the onus is on us. There is a very famous map called the Gartner Hype Cycle, which shows what happens with every technology that comes along. There is the hype peak. Then there is a valley of disillusionment, a valley of death. And then there’s a second wave, which comes later on. Generative AI is now at the peak of that hype. But we all know there’s value. So what I would emphasize is that the transformation peak, the second peak, the slower one that comes after the valley of death, is the true peak. And that is when we as humans don’t just look at what technology can do, but actually start learning how to use it for the right use cases within our workflows in a trustworthy, scalable, scientific manner, so it’s repeatable, replicable, right? And that’s what science’s role is, right? It takes what one person may say and actually validates that approach across multiple different variations. So you can be fairly confident, for example, that if I give this blood pressure medicine, this will be the impact, because it has been a repeatable, replicable success. We need to do the same thing with AI: we need to have that lens, similar to what Sameer mentioned, and put that scientific, evidence-based lens on it. And then see, if Sabin is doing something great, can we replicate that across countries? And can we demystify that through a playbook? We call it an implementation science playbook. So through Valid AI, 30 health systems and health plans have come together. We have three global partners right now, in Israel, India, as well as in Canada. But our goal, and I love the suggestion Mevish mentioned, is creating this global thought leadership group on generative AI in healthcare. We would love to contribute our collective knowledge from the US, through Valid AI and the coalition for healthcare AI, into it, so we can all learn from each other faster and share each other’s best practices. 
And maybe the ecosystem should not only be scientific, but also take equal input from our key ecosystem partners, including startups, big technology, and pharma, so we hear from them. If we are to take a balanced approach, we shouldn’t necessarily err on the side of caution, but err on the side of optimism combined with caution, and have feedback from all quadrants.
Rajendra Pratap Gupta:
I totally agree with you, Ashish. I think it’s a great approach to make sure that the excitement is also backed by competence. And for that, everyone needs to work together. Sameer, Sabin, Mevish, and Dino have very carefully pointed out not only the challenges but also the technical solutions that exist. And I think the line Dino used sums up the challenge plus the opportunity: probabilistic plus deterministic, as simple as that. And both solutions exist at scale. He is sitting here having deployed, in how many countries is this, Dino, what you have done for the pensioners? 192. 192 countries. We have Sameer Pujari sitting here, 194 countries. We have Ashish Atreja, 15 systems. Mevish running a course globally. Dr. Olabisi, I’m going to come to her next. Everyone on this screen is an influencer at large, a doer, or both, which is a rare combination. And we know the Gartner Hype Cycle, but sometimes those historical rules and equations also get challenged. We should challenge the Gartner Hype Cycle and actually make it a hope-and-heal cycle. There is hope; let’s use it for healing. As simple as that. So Dr. Olabisi, you have heard all these people. You have used these technologies, and I was very impressed to see the six-month pictures of the babies. Given what you have heard, what do you need, sitting there in Lagos, from the people on the screen to take your work to the next level? What should we be doing? What should you be doing? Over to you.
Olabisi Ogunbase:
Thank you very much for that question. With this WhatsApp platform that we have with the mothers, I can see lots of gaps, because when the mothers send pictures or type their questions, it’s not real time. I might not see it at that point in time. But conversational AI is real time, and through machine learning, responses that are appropriate and relevant come to the patients immediately. Unlike with me; it might be hours before I see the message. So I can see the advantage of what we are doing, but I can also see a lot of gaps. And it’s not personalized; it’s open to everybody, the 300 or so patients on that platform. It’s not personalized, it’s not real time, and sometimes it’s not appropriate, because when they ask a question about a cough, I use the opportunity to just talk generally, so that everybody picks something up, everybody gains something. So I think that’s the next step for me. We have to move away from this platform, which seems so basic to me, and see how we can introduce AI into it and take it to the next level. Before our session, I was hearing about the Metaverse. We have to collaborate and take from what everybody has learned. We don’t have to reinvent the wheel. Technology has come far. We’re talking AI, we’re talking conversational AI. We need to collaborate and take this platform to the next level, because patient outcomes are important, quality of care is important, patient safety is important. And these are all issues that conversational AI will have an impact on. So this time next year, I don’t want to be talking about WhatsApp. I want to have gone to the next level. Thank you very much.
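The gap described here, hundreds of mothers in one group with no real-time or personalized response, is at its core a triage problem that a conversational layer could take over: escalate urgent symptoms to the clinician immediately and answer routine questions instantly with vetted education content. A deliberately simple keyword-based sketch (the keyword list and routing labels are illustrative assumptions, not a clinical tool):

```python
# Illustrative triage sketch: route incoming messages from a mothers'
# support group so urgent symptoms are escalated to the clinician right
# away, while routine questions get an instant educational auto-reply.
# The keyword list is an assumption for demonstration, not clinical guidance.

URGENT_KEYWORDS = {"convulsion", "seizure", "bleeding", "unconscious",
                   "not breathing", "high fever"}

def triage(message: str) -> str:
    """Return 'escalate' for messages matching urgent keywords,
    otherwise 'auto-reply' for an instant educational response."""
    text = message.lower()
    if any(k in text for k in URGENT_KEYWORDS):
        return "escalate"       # page the clinician immediately
    return "auto-reply"         # send vetted education content instantly

inbox = [
    "My baby has a high fever and is shaking",
    "What does this rash usually look like?",
]
routed = [(msg, triage(msg)) for msg in inbox]
for msg, route in routed:
    print(route, "->", msg)
```

A production system would use a proper medical NLU model rather than keywords, but even this skeleton shows the shift from a broadcast group to a personalized, real-time channel.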
Rajendra Pratap Gupta:
Thanks, Dr. Olabisi. And I assure you that one of the tangible outcomes we promise from this year’s panel on conversational AI is to make sure that next time you present how we helped you reach the next level. That’s a very big challenge. But if we’re not able to make a difference on the ground, we’re just a fancy organization, and we are not that. We actually mean results, and we will deliver. That’s why we have you here, given the work you’re doing in actual LMICs, as we would call them. And if we are able to make a difference to your work as a clinician, we will have succeeded in delivering, in walking our talk. Otherwise, it’s just a mere discussion, which we do not intend it to be. Coming to Shawnna: Shawnna has led a global project, and we are very impressed; I know that a few years back, the only project at scale for AI was IBM Watson. So Shawnna, you have experience, you have reflections. Given the journey of IBM Watson, and given what we are talking about now, what would be your guidance for this group on tangible outcomes for next year?
Shawnna Hoffman:
You know, my reflections, honestly, are that this is an extremely complex problem in general, and it doesn’t have to do with just conversational AI. As you stand back, look at all the different aspects that make the individual vulnerable. One of the concerns I have is something you brought up: the 2.6 billion people who don’t have access to the internet. We need to continue to move forward with conversational AI, but we also need to make sure that those 2.6 billion people get access to the internet and to reliable connectivity to the information. Because if we create all these chatbots and do all this amazing work and they can’t access it, then it’s really not going to do us much good, or make the kind of difference that I know you all want to make. That would probably be my main concern, the thing I would add beyond what the other speakers have mentioned. Because that complexity really does take it to a really tough level. And we need to look holistically at the individual and what their needs are so that they can get access, and at what we can do uniquely. One of the things we did with IBM Watson was to set it up in various villages, where everyone would come to one location. So there are opportunities where individuals don’t even need a cell phone: providing access where it’s walkable to them, within a few miles or even many miles, but at least within half a day, to be able to get to this remote medical information.
Rajendra Pratap Gupta:
Thank you, Shawnna. Now we will move to the questions that I see in the chat. What is the potential for training and learning best practices? On the training side, at least what I can mention is that we have courses on digital health at every level for doctors; there’s a postgraduate certificate course, and you can look up digitalacademy.health. We have courses for health professionals. What we are also coming up with, which is very interesting, is courses on AI and robotics for class eight students; we have tied up with IIT Delhi for this, because we need to educate people at the bottom, right from class eight onwards, where they start learning about it. And this is the elementary course. What we are also launching early next year is the frontline health workers’ course. If they’re not educated, we’re not going anywhere. So that’s on the training side. On the best practices side, I would put this question to Sameer Pujari, given that WHO is probably one of the best platforms to look at, or even the Dynamic Coalition on Digital Health could look at collaborating with Sameer on best practices for conversational AI. Sameer, over to you. Can you unmute Sameer Pujari, please?
Sameer Pujari:
Yep. Hi, sorry, there was a lapse in the network connection for some reason. But I just want to mention that on the training part, WHO has converted the guidance that it created last year, and there’s an OpenWHO course available on ethics and governance of AI. This is not just a theoretical course; it has a very practical checklist approach. I’ve put the link to the course in the chat; it has been taken by more than 17,000 people across 170 countries virtually. So I think that’s one of the solutions. Beyond that, we are coming up with a course specifically for developers, because it’s important for this community to understand what it means to take an ethical approach, and this course will be going live by the end of the year. We have similar courses coming up on the regulations side as well. These are targeted at developers, at policymakers, and at implementers, and there are checklists and application processes for each of them within the course materials. These are being used by academic institutions across the globe to train students on healthcare provision and AI. So these are some of the ways it is being done. But again, I keep reiterating: let’s not reinvent the wheel. Let’s join hands; there’s content available, and we can deploy it in as many ways as we can through the process. Back to you.
Rajendra Pratap Gupta:
Thanks, Sameer. Ashish, over to you. This is an interesting question: what is the role of conversational AI in dispelling superstitions and health fallacies?
Ashish Atreja:
That’s a great one. I think there is a rule that if you’re not intentional about something, then it’s not going to happen. We do know there are a lot of misperceptions in healthcare; we saw in COVID what happened. And if we just leave it to social media, WhatsApp, or others, there’s a lot of chance of things going viral which are not accurate. What we realized in COVID was that clinicians and researchers actually did not have much of a voice, because the most viral content was often the least trustworthy, not the trusted content from clinicians. So part of this comes back to the onus being on us to put science as the base, right? When technology solutions are created, and because technology is being democratized, anyone can, within a week, learn to use these tools to create a bot. That bot may not be validated, and many times it is not if it comes out so early. So we have to put some framework in place; one can call it guardrails. For very life-threatening things, we have to put very rigorous guardrails. The FDA, the Food and Drug Administration in the US, has a three-tier system: a life-threatening thing has to go through much more clinical evidence and multiple clinical trials; a moderate-risk thing goes through a certain level of review; and a very low-risk thing can go without a major clinical trial. We need to have some kind of framework like that. If it is educational content, can we even use generative AI to validate some of the content which comes out? If we create a generative AI not from large language models trained on the open internet, because then it will hallucinate, but instead build the large language model on Harrison’s medical textbook, which I got trained on? Can we train it on WHO practices, on VA practices, on open-domain content from the US, UK, developing countries, wherever it is, on textbooks?
Then we may actually have an automated or semi-automated way to check the accuracy of it, put in some delimiters, maybe backed by a human in the loop for critical things. So I think that framework is not here right now, but we need to go beyond, Dr. Gupta, as you mentioned, from traditional ways of regulating to maybe semi-automated, bot-based ways of regulating. I was at a security summit and gave a keynote there, and where it ended was: there are going to be more and more bots trying to hack information now. Right now, humans use bots to break into security and do the hacking; with generative AI, it is going to be bots doing it on their own. So we need to develop bots which are going to protect us from that. And the similar thing applies here: we may not be able to do this governance by humans alone. We have to go one-to-many, with automated governance backed by humans in the loop, to allow that.
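[Editorial illustration] The risk-tiered guardrail idea described above, route content by risk level and keep a human in the loop at the top tier, can be sketched roughly in code. This is a purely illustrative sketch: the tier names, keyword lists, and routing outcomes are hypothetical placeholders, not the FDA’s actual classification scheme or any deployed system.

```python
# Illustrative sketch of risk-tiered routing for chatbot health content.
# The tiers loosely mirror a three-level (high/moderate/low risk) scheme;
# keyword lists and routing labels are hypothetical placeholders.

HIGH_RISK_TERMS = {"dosage", "overdose", "chest pain", "suicide"}
MODERATE_RISK_TERMS = {"diagnosis", "prescription", "side effect"}

def risk_tier(text: str) -> str:
    """Classify draft content into a coarse risk tier."""
    lowered = text.lower()
    if any(term in lowered for term in HIGH_RISK_TERMS):
        return "high"       # requires human clinician review before release
    if any(term in lowered for term in MODERATE_RISK_TERMS):
        return "moderate"   # requires an automated check against trusted sources
    return "low"            # may be released with standard disclaimers

def route(text: str, review_queue: list) -> str:
    """Route content: high risk goes to the human-in-the-loop queue."""
    tier = risk_tier(text)
    if tier == "high":
        review_queue.append(text)
        return "held for clinician review"
    if tier == "moderate":
        return "sent to automated source check"
    return "released"

queue: list = []
print(route("General handwashing advice.", queue))          # → released
print(route("What dosage of paracetamol is safe?", queue))  # → held for clinician review
```

In practice the classifier would itself be a validated model rather than a keyword match, but the routing structure, automated checks for the middle tier and mandatory human review for the top tier, stays the same.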
Rajendra Pratap Gupta:
Thanks, Ashish. And I think the point that you raise is very important for the Dynamic Coalition for Digital Health at the UN’s IGF. One of the things I would add to what Mevish proposed is not only generative health AI, but also a generative health AI governance framework. I’m sure there are multiple, but we need to come out with something which is understandable and implementable over the next year or so. An interesting question that I would pose to Sabin: how can conversational AI technology be made more accessible to people in low-income areas who may have limited access to smartphones or the internet? I know that you did have a passing reference to this, Sabin, but would you like to add something on this?
Sabin Dima:
Yeah, at a minimum we need an internet connection if we want access to powerful models, but there are very efficient models that you can run not just on a tablet but even on a mobile phone. And I see something like the digital doctor of the village that encapsulates the knowledge of doctors from all around the world. Basically, you need one mobile phone for every village; that is the minimum resource everybody needs. I like another question: to what extent can conversational AI pose a threat to employment? I’m always saying, and I said it before, that AI is not going to take your job, but a human using AI will take your job for sure. Probably, using AI, employees will work only two or three days per week, and we will achieve ten times more results. At the same time, you know, human error is a big problem in healthcare in general, and AI can supervise this. So imagine doing your job with maybe 100 AI assistants helping you perform better. So I don’t see any threat to employment.
Rajendra Pratap Gupta:
And I will add to that: in the other dynamic coalition, on internet and jobs, we had a session yesterday on Project Create. Create stands for Collaborate to Realize Employment and Entrepreneurship for All through a Technology Ecosystem. In fact, we have created job maps for nine sectors, and we have talked about the conventional models of doing business versus the Create model. So let’s not fear technology; I think technology is better used for creating jobs than for taking away jobs, and Project Create is about that. So I would say look up this website, projectcreate.tech. We are releasing our framework on Project Create tomorrow afternoon at IGF. So the threat is not to jobs; the threat comes from lack of competence, I would put it that way. I would say upskill yourself, be competent; if you’re not competent, anyone can threaten you, not only AI. So please upskill continuously and cross-skill yourself; that’s important. There is no threat to you if you are updated and upskilled. If you are not, you certainly have one. Sabin, you’re trying to say something?
Sabin Dima:
I agree, I agree.
Rajendra Pratap Gupta:
Thank you. Let’s look at the other questions that are there. Where can we access the recording of this conference? It is on YouTube; IGF broadcasts the sessions on YouTube, so it’s available for people to watch. There is a comment: “I believe a youth-mediated initiative would help bridge the digital literacy gaps.” Yes, of course. Yesterday we were pleasantly surprised to have a digital health session by the youth tech envoy of the ITU, and she is keen to work with the DC Digital Health to address this big issue of youth involvement in digital health. There’s another comment, on Ashish’s point: “I am hoping science, which is evidence-based, validated, repeatable, with applicable outcomes and a transparent, ethical approach, can help build trust along with a great patient experience.” Yes, Ashish, I totally agree with you, and that is what I think this group should be working on: governance and outcomes. By the way, separately, with the international standards effort, we are working on outcome measures using technology. I think Dr. Enkasing from my team is going to make a presentation at the meeting in Arlington, I guess next month, on how to measure clinical outcomes of technology-driven initiatives. We are especially talking of digital therapeutics, which is being led by Health Parliament at the Bureau of Indian Standards, which represents India at the international standards body. So this was a great session. We are over our time, and I thank each one of you for taking the time, across different time zones, to enrich us on conversational AI and give us a pathway for next year. I also thank our technical team at IGF for making this session seamless for us. Thank you all so much. We will connect back in the mainframe, and hopefully next year we’ll come back with the tangible outcomes we discussed. The goal would be that Dr. Olabisi should benefit from all we talked about. That would be the goal for us.
Thank you so much. Thank you. Thank you very much. Thank you. Thank you. Thank you all. Thank you.
Speakers
Ashish Atreja
Speech speed
182 words per minute
Speech length
1816 words
Speech time
598 secs
Dino Cataldo Dell’Accio
Speech speed
133 words per minute
Speech length
681 words
Speech time
307 secs
Mevish Vaishnav
Speech speed
166 words per minute
Speech length
541 words
Speech time
195 secs
Olabisi Ogunbase
Speech speed
158 words per minute
Speech length
1580 words
Speech time
598 secs
Rajendra Pratap Gupta
Speech speed
185 words per minute
Speech length
5361 words
Speech time
1737 secs
Sabin Dima
Speech speed
161 words per minute
Speech length
1113 words
Speech time
414 secs
Sameer Pujari
Speech speed
190 words per minute
Speech length
2752 words
Speech time
871 secs
Shawnna Hoffman
Speech speed
194 words per minute
Speech length
1730 words
Speech time
535 secs
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81
Knowledge Graph of Debate
Session report
Full session report
Moderator – Daria Tsafrir
During the discussions, three main topics were examined in depth. The first topic focused on the concerns of the government regarding the protection and safety of critical infrastructures and supply chains. It was acknowledged that governments have a major role in ensuring the security of crucial infrastructures and supply chains, which are vital for the functioning of industries and economies. However, no specific supporting facts or evidence were provided to substantiate these concerns.
The second topic revolved around the risks of over-regulation and the dynamic nature of AI. Participants expressed the need to strike a balance between regulating AI to prevent potential negative consequences and allowing for its innovative and transformative potential. The dynamic nature of AI poses a challenge in terms of regulation, as it constantly evolves and adapts. Again, no supporting facts were provided to further illustrate these risks, but it was acknowledged as a valid concern.
The third topic that was discussed focused on cybersecurity challenges. It was highlighted that addressing these challenges requires collaboration within international forums and the possibility of establishing binding treaties. The need for such cooperation arises from the global nature of cyber threats and the shared responsibility in mitigating them. However, no supporting evidence or specific examples of cybersecurity challenges were referred to.
Throughout the discussions, all speakers maintained a neutral sentiment, meaning they did not express strong support or opposition to any particular viewpoint. This could indicate that the discussions were conducted in an objective manner, with an emphasis on highlighting different perspectives and concerns rather than taking a definitive stance.
Based on the analysis, it is evident that the discussions centered around key areas of government concerns, the risks associated with over-regulation of AI, and the need for international cooperation in addressing cybersecurity challenges. However, the absence of specific supporting facts or evidence detracts from the overall depth and credibility of the arguments presented.
Moderator 1
During his presentation, Abraham introduced himself and verified that he was audible. He provided a comprehensive overview of his background and experience, emphasising his expertise in the field. Abraham highlighted his various roles within the industry, acquiring a diverse set of skills and knowledge in the process.
Abraham also detailed his educational qualifications, underscoring his pertinent degrees and certifications. He explained how these qualifications have equipped him with a strong theoretical foundation, complemented by practical skills developed through hands-on experience.
In addition, Abraham outlined his past work experiences and accomplishments, showcasing specific successful projects and the positive outcomes they generated. He shared examples of challenges encountered during these projects and how he overcame them, displaying problem-solving abilities and resilience.
Regarding communication skills, Abraham mentioned his experience working with multicultural teams and effectively collaborating with individuals from diverse backgrounds. He emphasized his strong interpersonal skills, enabling him to cultivate robust relationships with clients and stakeholders throughout his professional journey.
Furthermore, Abraham mentioned his commitment to continuous professional development, expressing enthusiasm for keeping abreast of the latest industry trends and advancements. He attends relevant conferences, workshops, and seminars, actively engaging in professional networks to stay connected with industry experts.
In conclusion, Abraham presented himself as a highly experienced and qualified professional, highlighting his expertise through his extensive background, educational qualifications, and successful project achievements. He demonstrated effective communication, collaboration, and adaptability, crucial in a fast-paced, ever-evolving industry.
Gallia Daor
The Organisation for Economic Co-operation and Development (OECD) has played a significant role in guiding the development and deployment of artificial intelligence (AI). In 2019, the OECD became the first intergovernmental organization to adopt principles for trustworthy AI. These principles, which focus on the aspects of robustness, security, and safety, have since been adopted by 46 countries. They also serve as the basis for the G20 AI principles, highlighting their global relevance and influence.
The OECD’s emphasis on robustness, security, and safety in AI is crucial in ensuring the responsible development and use of AI technologies. To address the potential risks associated with AI systems, the OECD proposes a systematic risk management approach that spans the entire lifecycle of AI systems on a continuous basis. By adopting this approach, companies and organizations can effectively identify and mitigate risks at each phase of an AI system’s development and deployment.
To further support the responsible development and deployment of AI, the OECD has also published a framework for the classification of AI systems. This framework aids in establishing clear and consistent guidelines for categorising AI technologies, enabling stakeholders to better understand and evaluate the potential risks and benefits associated with different AI systems.
The OECD recognises that digital security, including cybersecurity and the protection against vulnerabilities, is a significant concern in the era of AI. To address this, the OECD has developed a comprehensive framework for digital security that encompasses various aspects such as risk management, national digital security strategies, market-level actions, and technical aspects, including vulnerability treatment. Moreover, the OECD hosts an annual event called the Global Forum on Digital Security, providing an opportunity for global stakeholders to discuss and address key issues related to digital security.
Interestingly, AI itself serves a dual role in digital security. While AI systems have the potential to become vulnerabilities, particularly through data poisoning and the malicious use of generative AI, they can also be utilised as tools for enhancing digital security. This highlights the need for robust security measures and responsible use of AI technologies to prevent malicious attacks while harnessing the potential benefits AI can provide in bolstering digital security efforts.
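[Editorial illustration] One very simplified way to picture the data-poisoning screening this dual role implies is to flag training examples whose label contradicts an otherwise unanimous neighborhood of near-duplicate inputs. The distance test, threshold, and toy dataset below are arbitrary illustrative choices for a sketch, not a production defense.

```python
# Naive illustration of one data-poisoning screen: flag training examples
# whose label disagrees with a unanimous set of near-duplicate neighbors.
# Real defenses are far more involved; eps and the flagging rule here are
# arbitrary placeholders.

from collections import Counter

def near(a: list, b: list, eps: float = 0.1) -> bool:
    """Treat two feature vectors as near-duplicates if every coordinate
    differs by less than eps."""
    return all(abs(x - y) < eps for x, y in zip(a, b))

def suspicious_indices(data: list) -> list:
    """Return indices of (features, label) pairs whose label contradicts
    all of their near-duplicate neighbors."""
    flagged = []
    for i, (xi, yi) in enumerate(data):
        neighbor_labels = [yj for j, (xj, yj) in enumerate(data)
                           if j != i and near(xi, xj)]
        if neighbor_labels:
            majority, count = Counter(neighbor_labels).most_common(1)[0]
            # flag only when the neighborhood is unanimous against this label
            if yi != majority and count == len(neighbor_labels):
                flagged.append(i)
    return flagged

dataset = [
    ([0.00, 1.00], 0),
    ([0.01, 1.02], 0),
    ([0.02, 0.99], 1),   # label flip among near-duplicates: likely poisoned
    ([5.00, 5.00], 1),   # isolated point: nothing to compare against
]
print(suspicious_indices(dataset))  # → [2]
```

The point of the sketch is only to show that the defensive use of AI and data analysis (here, a consistency check over the training set) targets exactly the vulnerability, poisoned labels, that an attacker would exploit.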
In addition to addressing risks and emphasising security, the OECD recognises the importance of international cooperation, regulation, and standardisation in the AI domain. The mapping of different standards, frameworks, and regulations can help stakeholders better understand their commonalities and develop practical guidance for the responsible development and deployment of AI technologies.
Intergovernmental organisations, such as the OECD, play a vital role in convening stakeholders and facilitating conversations on respective issues. By bringing together governments, industry experts, and other relevant actors, intergovernmental organisations enable collaboration and foster partnerships for addressing the challenges and opportunities presented by AI technologies.
Finally, the development of metrics and measurements is crucial for effectively addressing and evaluating the impact of AI technologies. The OECD is actively involved in the development of such metrics, with one notable example being the AI Incidents Monitor. This initiative aims to capture and analyse real-time data and incidents caused by AI systems, allowing for a better understanding of the challenges and risks associated with AI technologies.
In conclusion, the OECD has made significant contributions to the development and governance of AI technologies. Through the establishment of principles for trustworthy AI, the emphasis on risk management, the focus on digital security, the recognition of AI’s dual role in security, and the efforts towards international cooperation and metric development, the OECD is actively working towards ensuring the responsible and beneficial use of AI technologies on a global scale.
Asaf Wiener
The Israel Internet Association, represented by Asaf Wiener, serves as the country code top-level domain (ccTLD) manager for .il, the Israeli national TLD. As the manager of this important domain, the association plays a crucial role in overseeing internet activities in Israel.
Furthermore, the Israel Internet Association is the Israeli chapter of the Internet Society, demonstrating their commitment to promoting various aspects of the digital landscape. Specifically, they focus on digital inclusion, education, and cybersecurity within the country. These areas are of critical importance in today’s interconnected world, and the association strives to bridge the digital divide, ensure access to quality education, and enhance cybersecurity measures for Israeli citizens.
Dr. Asaf Wiener’s organization also works towards addressing digital gaps and advancing public initiatives. This highlights their dedication to narrowing the disparities in access and opportunities that exist in the digital realm. By engaging in various public initiatives, they aim to create a more equitable digital landscape for all.
Additionally, Dr. Asaf Wiener demonstrates a strong inclination towards public engagement and participation. He actively invites anyone interested in learning more about their activities to approach him for further details, indicating a desire to foster collaboration and partnerships in pursuit of their mission.
In conclusion, the Israel Internet Association, led by Asaf Wiener, fulfills the crucial role of ccTLD manager for .il and represents the Israeli chapter of the Internet Society. Their focus on digital inclusion, education, and cybersecurity, and their commitment to addressing digital gaps and engaging the public, highlight their dedication to advancing the digital landscape in Israel.
Abraham Zarouk
Abraham Zarouk is the Senior Vice President of Technology at the Israel National Cyber Directorate (INCD). In this role, he oversees the day-to-day operations of the Technology division, focusing on project implementation, IT operations, and support for national defense activities. Zarouk also plays a key role in preparing the INCD for the future by promoting innovation and establishing national labs for research and development.
The INCD places a strong emphasis on addressing weaknesses in artificial intelligence (AI). They examine vulnerabilities in AI algorithms, infrastructure, and data sets, and have established a dedicated national lab to enhance AI resilience. Through collaborations with industry leaders like Google, the INCD is actively promoting the use of AI-powered technologies and driving innovation in the field of cybersecurity.
In addition to their proactive approach, the INCD also acknowledges the potential threats posed by AI-based attackers. As the use of AI tools among attackers increases, the INCD recognizes the need to stay vigilant and develop strategies to counter these sophisticated attacks.
Overall, Abraham Zarouk’s role as the Senior Vice President of Technology at the INCD is crucial in ensuring smooth operations and driving the organization’s preparedness for future challenges. The INCD’s focus on addressing AI weaknesses, collaboration with industry partners, and recognition of potential AI-based threats highlights their commitment to cybersecurity excellence.
Daniel Loevenich
Germany is taking proactive measures to manage the risks associated with artificial intelligence (AI) within complex technical systems. The country is specifically focusing on the AI components or modules within these systems. This approach highlights Germany’s commitment to addressing the potential dangers and challenges that AI can present.
To further mitigate these risks, Germany is working on extending its existing cybersecurity conformity assessment infrastructure. This move aims to establish a robust framework to evaluate and ensure the conformity of AI technologies. The country is also striving to unify AI evaluation and conformity assessment according to the standards set by the EU’s AI Act. This step demonstrates Germany’s dedication to aligning its evaluation processes with international norms and regulations.
The implementation of the AI Act is deemed crucial for managing AI risks in Germany. This legislation, which the country is actively working towards, will play a vital role in addressing technical system risks across the entire supply chain of AI applications. By incorporating this act, Germany seeks to establish a comprehensive and effective framework for managing AI-related risks.
Furthermore, Germany is actively promoting the adoption of AI technologies, particularly among small and medium-sized enterprises (SMEs). The country recognizes the potential benefits that these technologies can bring and encourages businesses to embrace them. This approach highlights Germany’s openness to innovation and its efforts to support the growth of AI within its industries.
There is also support for international standardization in guiding the use of AI technologies. This standpoint suggests that by establishing global standards, individuals can have more control over how AI technologies are utilized. This commitment to international cooperation reinforces Germany’s desire to foster responsible and ethical AI practices.
It is important to acknowledge that AI technologies are heavily reliant on data, and their responsible usage ultimately rests on individuals. Germany recognizes the responsibility that comes with the use of AI systems and the need for individuals to exercise caution and ethics when handling data-driven technologies.
Another noteworthy observation is the call for the market to be the determining factor in deciding the use of AI-based systems. Germany suggests that market forces and customer preferences should dictate the direction of AI technology, promoting a more customer-centric approach to AI adoption.
Nevertheless, standardizing AI usage at a value-based level can be challenging due to the differences in societal values. The discrepancy in value-based governmental positions creates a complex landscape for consensus-building and establishing universal standards for AI application. Germany recognizes this challenge and the need for careful consideration of normative and ethical issues surrounding the use of AI technologies.
In conclusion, Germany is actively implementing AI risk management within complex technical systems, with a particular focus on AI components. The country is working towards unifying evaluation processes and conforming to international standards through the AI Act. Germany also promotes the adoption of AI technologies among SMEs and supports international collaboration in establishing standards for responsible AI usage. However, the challenge of aligning value-based norms and standards remains an ongoing concern for AI implementation.
Hiroshi Honjo
Hiroshi Honjo is the Chief Information Security Officer for NTT Data, a Japanese-based IT company with a global workforce of 230,000 employees. NTT Data is actively involved in numerous AI and generative AI projects for their clients. Honjo believes that AI governance guidelines are crucial for the company, covering important aspects like privacy, ethics, and technology. These guidelines promote responsible and ethical practices in AI development and usage.
In the realm of generative AI, Honjo highlights the significance of addressing cybersecurity intricacies, particularly in light of recent attacks on large language models. This underscores the importance of tackling cybersecurity issues within the context of generative AI.
One complex issue in handling data by generative AIs is determining the applicable law or regulation for cross-border data transfers. Similar to challenges faced by private companies managing multinational projects, NTT Data must navigate various regulations and ensure compliance with jurisdiction-specific requirements.
Honjo advocates for international harmonization of AI regulations, emphasizing that guidelines in G7 countries are insufficient. He supports the establishment of international standards that govern the development, use, and deployment of AI, aimed at promoting fairness and consistency in AI regulation.
Additionally, Honjo expresses his concern regarding uneven data protection regulations like the General Data Protection Regulation (GDPR). He acknowledges that differing data protection regulations across countries impose significant costs on businesses. To mitigate these challenges and ensure a level playing field for businesses operating in multiple jurisdictions, Honjo advocates for consistent and harmonized data protection measures.
In summary, Hiroshi Honjo, as the Chief Information Security Officer for NTT Data, emphasizes the necessity of AI governance guidelines, the need to address cybersecurity intricacies in generative AI, the complexity of cross-border data transfers, and the importance of international harmonization of AI regulations. His commitment to consistent data protection regulations reveals his dedication to reducing costs and promoting fairness within the industry.
Bushra Al-Blushi
Bushra Al-Blushi is an influential figure in the field of cybersecurity and currently serves as the Head of Research and Innovation at Dubai Electronic Security Center. She has made significant contributions to the industry through her leadership positions.
One of Al-Blushi’s notable achievements is the establishment of Dubai Cyber Innovation Park, which aims to promote innovation and collaboration in the field of cybersecurity. Her involvement in founding this park demonstrates her commitment to advancing the industry and creating opportunities for technological development.
Al-Blushi’s expertise is also recognized internationally, as she is an official UAE member of the World Economic Forum Global Future Council on Cyber Security. This highlights her contributions to global discussions and initiatives surrounding cybersecurity.
Furthermore, Al-Blushi’s extensive involvement in advisory boards, both nationally and internationally, reflects her broad knowledge and the trust placed in her expertise. These advisory roles enable her to shape policies and strategies in the field, further solidifying her thought leadership and influence.
In terms of AI risks, Al-Blushi advocates for a gradual and incremental approach to cybersecurity rules and regulations. She emphasizes the importance of identifying and mitigating potential risks posed by AI through appropriate controls and regulations.
Al-Blushi also highlights the significance of considering the deployment of AI models and how they impact security controls. She emphasizes the need to address the unique risks associated with AI in their development and implementation, ensuring that adequate security measures are in place.
Regarding policy and regulatory approaches, Al-Blushi supports a risk-based approach that strikes a balance between control and security issues. She collaborated with Dubai in 2018 to develop AI security ethics and guidelines, which remain applicable to generative AI today.
Al-Blushi emphasizes the need for global harmonization of AI regulations and standards. Currently, different countries have fragmented regulations, making compliance challenging for providers and consumers. Harmonization would simplify compliance and instill confidence in internationally recognized AI tools.
To achieve this, Al-Blushi suggests international collaboration and the establishment of an international certification or conformity assessment for AI. This would ensure that AI systems meet minimum security requirements and facilitate compliance for providers while enabling effective enforcement of industry standards by regulatory bodies.
In conclusion, Bushra Al-Blushi’s leadership and expertise in cybersecurity are evident through her various roles and initiatives. Her emphasis on gradual, incremental cybersecurity rules and regulations for AI reflects a balanced approach that prioritizes both innovation and security. Al-Blushi’s advocacy for global harmonization of AI regulations and the establishment of international certification schemes further underscores her commitment to promoting the secure and responsible use of AI technologies.
Session transcript
Moderator – Daria Tsafrir:
Thank you very much. Thank you. I think we’re ready to begin. Okay. Can we have our speakers on Zoom on the screen? Perfect. Can everyone turn on their cameras, please? Yeah, we see you now. Okay, here’s Daniel. Okay, so good morning, everyone, and welcome to our session on cybersecurity regulation in the age of AI. I’m Daria Tsafrir, currently a legal advisor at the Israel National Cyber Directorate, leading legal aspects of AI, cloud computing and international law. Unfortunately, due to the current situation in Israel, my colleagues and I were unable to attend the session on site. So our colleague, Dr. Wiener, who is already there, offered his help in moderating on site. So Asaf, let’s start and then get back to me.
Asaf Wiener:
Great. So my name is Dr. Asaf Wiener. I’m from the Israel Internet Association, which is the ccTLD manager of .il, the Israel national TLD. And also we are the Israeli chapter of the Internet Society, promoting digital inclusion, education and cybersecurity for citizens in Israel, among other things working on digital gaps and other initiatives for the public. So I’m not originally part of this panel, so I won’t take too much time to present myself. But I invite everyone who has any questions or wants more details about our activities at Internet Society IL to approach me after the session, and I’ll be happy to introduce myself and our work. And now let’s go back to the original participants of this panel.
Moderator – Daria Tsafrir:
Thank you. So let me ask you, let’s start by introducing yourselves. Let’s start with Dr. Al-Blushi.
Bushra Al-Blushi:
Hello. Good morning, everyone. It gives me great honor and pleasure to share the stage with the great panelists and with everyone here this morning. It’s 5 a.m. in Dubai now. My name is Bushra Al-Blushi. I’m the Head of Research and Innovation at the Dubai Electronic Security Center, and I’m also the Director General’s Senior Consultant at the center. Basically, it’s the center that sets the rules, regulations, and standards, and also monitors the cybersecurity posture here in the city of Dubai. I’m also the founder of the Dubai Cyber Innovation Park, which is an innovation arm of the Dubai Electronic Security Center. I’m the official UAE member in the World Economic Forum Global Future Council on Cybersecurity, and I’m also a member of many advisory boards nationally and internationally. Thank you. Mr. Zarouk?
Hiroshi Honjo:
Okay. So, Mr. Honjo, is he there? Yes. My name is Hiroshi Honjo. I think I’m the only one based in Tokyo, Japan, but I just came back from Germany, so I still have jet lag. I’m the Chief Information Security Officer for a Japanese-based IT company called NTT Data, with 230,000 employees globally. Japan is only a small part of the employees; we do business in more than 52 countries besides Japan. As a private company, we are running many AI and generative AI projects for our clients, so it’s a very hot topic. It’s a pleasure to talk with you. Thank you.
Moderator – Daria Tsafrir:
Ms. Gallia Daor?
Gallia Daor:
Good morning, everyone. My name is Gallia Daor. I’m a Policy Advisor in the OECD’s Digital Economy Policy Division in the Directorate for Science, Technology, and Innovation. Our division covers the breadth of digital issues, including artificial intelligence and digital security, but also measurement aspects, privacy, data governance, and many other issues. But today we’ll be focusing on AI and digital security. So I’ll stop here and I look forward
Moderator – Daria Tsafrir:
to the discussion. Thank you. Mr. Loevenich? Good morning, everyone. I’m Daniel Loevenich.
Daniel Loevenich:
I’m the AI and Data Standards Officer at the German Federal Office for Information Security, and I’m very much concerned with AI cybersecurity standards. Let me just stress that I appreciate sharing the stage with you, and congratulations on a great event so far. Thank you very much.
Moderator 1:
Yes, Abraham, I think we can hear you now. So if you could present yourself.
Abraham Zarouk:
Okay, hello everyone. My name is Abraham Zarouk. I’m the SVP of Technology at the INCD, the Israel National Cyber Directorate. I manage the technology division, so I am responsible for day-to-day operations, such as project implementation, IT operations, and providing support for national defense activities. I am also responsible for preparing the INCD for the future by creating R&D activities, promoting innovation, establishing a national lab, and building national-level solutions. I have eight kids and they always ask a lot of questions, so I already know how ChatGPT feels. Thank you.
Moderator – Daria Tsafrir:
Our session will have two rounds. One will be about the current state of affairs, and the second will deal with whether there is more to be done at the domestic and international levels. So let’s get into it. Now, we are all familiar with the cybersecurity regulation toolkit: breach notification, mandatory requirements for critical infrastructure, risk assessments, info sharing, et cetera. And the question is whether this current toolkit is sufficient to deal with threats to AI systems or to the data used by them. Our goal in this session is getting some insights into what governments can do better and where they shouldn’t act at all. Please note that when we talk about regulation, we mean it broadly: not only formal regulations but also government guidelines, incentives, and other such measures. So for everyone’s benefit, and so that we can be on the same page, let me turn to Mr. Zarouk and ask: can you please map out for us, from what you have learned, the different cybersecurity risks and vulnerabilities related to AI? Mr. Zarouk?
Abraham Zarouk:
Again, you hear me now?
Moderator – Daria Tsafrir:
Yes, now I can hear you.
Abraham Zarouk:
Thank you. The INCD focuses on three main domains when addressing AI. The first domain is protecting AI. AI-based models are increasingly being deployed in production in many critical systems across many sectors, but those systems are designed without security in mind and are vulnerable to attacks. Since the average AI engineer is not a security expert, and cybersecurity experts are not domain experts in AI, we need to find a way to establish and improve AI resiliency. The INCD approaches this issue from several angles. One is examining weaknesses in AI algorithms, infrastructure, data sets, and systems. This is done as an ongoing task. The INCD promotes R&D projects for testing AI models. Unlike attack surface management (ASM) in the IT world, in the AI world a tailored approach is needed for each algorithm. The INCD focuses on common libraries, models, and dedicated attacks. Another angle is building a robust risk model for AI. We attempt to define metrics and models to measure risk in AI algorithms; that is, to measure and test the robustness of AI as we do in other IT domains. A third angle is the national lab for AI resilience. The INCD has established a national lab which develops an online and offline platform for self-assessment of machine learning models, based on the risk model we develop. The national AI lab is a collaboration between the academic world, the government, and technology giants. The INCD collaborated with the cyber center at Ben-Gurion University, which is a leader in research, and with Google, which brings cloud knowledge in cyber protection and AI. A second significant domain is using AI for defense. Today most tools and products use some form of AI, some more and some less. If you don’t have an “AI inside” logo on your product and you don’t say “AI” three times a minute, no one will buy it.
We understand the power of AI and what it can offer, and as an ongoing effort we make sure our infrastructure and products support the latest AI-powered technology. The INCD, much like many other nations, is promoting innovation and the use of AI-powered technology, since we don’t want to fall behind when it comes to the technology. Our role as a regulator is mainly not to interfere, but to see where we can assist the market in order to promote the implementation and use of advanced technology. We use a variety of tools and capabilities to support our day-to-day operations. This includes tools to help researchers in their systems and cyber investigations, and various automations to assist in analysis and response to incidents, as part of our collaboration with Google in the Cyber Shield project. A smart chatbot for our national cyber call center, 119 (a reversed 9-1-1), provides better service to citizens, collects relevant contextual information, provides more focused responses, and supports additional languages. A new tool under development aims to help investigate network traffic captures in an easier, faster, and more human way. AI helps us scale and takes care of routine tasks, so in a time of war, AI allows us to direct manpower to critical tasks. We use AI to assist in mediation between the human and the machine. The last domain, but not the least, and maybe the most complex subject, which is currently in design, is defense against AI-enhanced, AI-based attackers. We see an increase in the use of various AI tools among attackers, and we understand that in the future we will see machines carrying out sophisticated attacks. We are currently in the process of designing a way to approach this threat scenario, which will probably be built from several components working together. In the future, we will see attacks and defense fully managed by AI, and the smarter, stronger, and faster player will win. Thank you.
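Mr. Zarouk describes a national lab platform for self-assessment of machine learning models against dedicated attacks. The code below is not INCD’s platform; it is a minimal, hypothetical sketch of one common robustness metric: how much a simple classifier’s accuracy drops under small FGSM-style adversarial perturbations of its inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two Gaussian clusters in 2-D.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Train a tiny logistic-regression model with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

def accuracy(X_eval):
    p = 1.0 / (1.0 + np.exp(-(X_eval @ w + b)))
    return float(((p > 0.5) == y).mean())

# FGSM-style perturbation: nudge every input against its correct label.
eps = 0.5
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad_x = (p - y)[:, None] * w[None, :]   # gradient of the loss w.r.t. the input
X_adv = X + eps * np.sign(grad_x)

clean_acc, adv_acc = accuracy(X), accuracy(X_adv)
robustness_score = adv_acc / clean_acc   # 1.0 would mean fully robust at this eps
print(f"clean={clean_acc:.2f} adversarial={adv_acc:.2f} score={robustness_score:.2f}")
```

A real self-assessment platform would of course run many attack families over production models; this sketch only illustrates the idea of scoring robustness as accuracy retained under attack.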
Moderator – Daria Tsafrir:
Thank you, Mr. Zarouk. Dr. Al-Blushi, I’m going to turn to you now. Based on your vast experience in your past and current work promoting innovation and shaping policy at both the domestic and global levels, what do you make of AI risks? How do you frame them from a cybersecurity regulation perspective?
Bushra Al-Blushi:
So I think in a city like Dubai, we are always at the forefront of technological transformation and revolution. Our role as a cybersecurity regulator is to enable those critical national infrastructures to use the new technologies, but to use them with the right controls around cybersecurity, and it doesn’t have to be perfect from the start. So it’s gradual, incremental cybersecurity rules and regulations that we work on together with the business developers, just to make sure that the business objectives are being met and security is also being considered. I will divide what I’m going to speak about into three main points. The first one is the security of the AI models themselves versus the security of the consumers of those AI models. When it comes to the security of the AI models and the developers of those models, the rules, the controls, the standards, and the policies are totally different from when I’m speaking about the consumers of those AI models. For me, when I’m talking about the AI model itself, AI at the end of the day is like any other software that we were using in the past, but what makes it different is the risk that it might generate, the way it has been deployed, and how it is being implemented. So for example, an AI model that is deployed in an IoT bulb shouldn’t have the same security controls as an AI model that is deployed in a connected vehicle, where any risk or any issue in that AI model might impact human lives. At the end of the day, it’s where that AI model is being deployed, how it is being used, and why it is being used that makes it different from any other software development tools that we were used to developing in the past. This is how the AI model itself became different from normal software development. Then the second point is the security of the AI consumers: those people, those government entities, the consumers of the AI themselves.
I think in our scenario, in our case, we are more worried about the consumers than the producers, because we have major players, as we can all see, specifically when it comes to generative AI, that are attracting lots of attention and lots of customers. So when it comes to the AI consumers themselves, I think we need to consider many elements: how that AI will be used, where it will be used, will it be used in critical national infrastructure, and what about the privacy of the data that is being used there? And then also, why am I using that AI model? So I can categorize it, as the previous speaker was saying, into three main areas of how we are using AI today: we might use it to protect, as cybersecurity professionals, in the new defensive methodologies that we are using; it can be used by malicious actors to harm; or, the third category, it can be used by normal users or by government entities, and in that case we will be worried about the privacy of the data being processed in the AI model. So I talked about AI security itself and about the consumers, and the last point is the policies, standards, and regulations that we need to put around the AI models. I think there have been lots of efforts globally and internationally, with the OECD AI principles, the NIST AI security standard, and the great bunch of policies that were issued recently in June by the EU. I think we are making progress towards having, let’s say, standards or specific policies around the security of AI. But as I said, at the end of the day, it’s like the previous software models that were being developed in the past.
So if we think about how we should deal with AI from the policy and regulatory point of view, I think we need to develop, first of all, the basic best practices and principles, like any normal software development life cycle: secure by design, supply chain security. Those basic principles should always be there. Then develop one layer on top, and that layer can be specific to the AI itself: how AI should be developed, maintained, and trusted. So, another layer which is specific to AI. And a third layer can be added because, as I said, at the end of the day it depends on where I’m going to use it. So it’s a sector-specific layer: we can add banking layer controls, transportation layer controls, medicine layer controls. This is the third layer, where we need to work with the business owners or the business sectors themselves in order to make sure that the third layer also contains enough controls to enable them to use AI in a safe manner. I strongly believe that a risk-based approach is the best approach we should all consider, because having too many controls will limit the usage of AI, and having too loose controls will take us into other security issues. In our case, for example, we developed AI security ethics and guidelines back in 2018 that are still applicable to generative AI. We are also developing an AI sandboxing mechanism for government entities to test and try AI solutions that they would like to implement at the city level. And we also have clear guidelines about data privacy. As most of the AI models now are hosted in the cloud, we have a clear model for how information can be handled in the cloud, and that includes AI models hosted in a cloud environment. So I don’t think we should reinvent the wheel; we should build on the basis of the things that have been there for a long time now.
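The three-layer, risk-based control model Dr. Al-Blushi outlines (baseline software-security practices, an AI-specific layer, and a sector-specific layer) can be sketched as a simple composition of control sets. All control names and sector labels below are invented for illustration and are not drawn from any real standard or from the Dubai framework.

```python
# Hypothetical control layers; every name here is illustrative only.
BASELINE = {"secure-by-design review", "supply-chain vetting", "patch management"}
AI_LAYER = {"training-data provenance", "model integrity checks", "adversarial testing"}
SECTOR_LAYERS = {
    "iot-lighting": {"firmware signing"},
    "connected-vehicle": {"fail-safe fallback", "real-time anomaly monitoring",
                          "functional-safety certification"},
}

def required_controls(sector: str) -> set[str]:
    """Compose the control set for an AI deployment in the given sector."""
    return BASELINE | AI_LAYER | SECTOR_LAYERS.get(sector, set())

# A bulb and a vehicle share the first two layers but differ in the third,
# mirroring the point that risk follows deployment context.
bulb = required_controls("iot-lighting")
car = required_controls("connected-vehicle")
print(len(bulb), len(car))
```

The design choice mirrors her argument: the first two layers are reused unchanged everywhere, and only the sector layer scales controls up or down with the risk of the deployment context.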
Moderator – Daria Tsafrir:
Thank you, Dr. Al-Blushi, you’ve raised some very important points. I’ll turn now to Mr. Honjo. Mr. Honjo, you’re representing the private sector. So from your organization’s point of view, how are you currently dealing with AI risks and cybersecurity?
Hiroshi Honjo:
Yes. So, pretty much close to what Dr. Al-Blushi said. As a private company, we state AI governance guidelines within the company, and that includes privacy, ethics, technology, everything. Basically, what we do for generative AI as a company: we do everything clients ask for. Many clients ask for, let’s say, application development, for instance, so we do automatic code generation using generative AI. That obviously includes a lot of problems, including IP, intellectual property issues, if you learn the code from whatever source, maybe including commercial code or non-open-source code. So privacy protection, well, intellectual property protection, is a very important thing for the company as well. Also, frameworks like the OECD or NIST AI frameworks help define the risks of an AI project. That went pretty well for defining the risks within the AI project. The thing is, although we state the risks within projects, it all comes down to what the purpose of the project is: whether it’s critical infrastructure, whether it’s banking transactions, or whether it’s more like what’s on the display here, a transcript. So it really depends on the risks there; all projects are not the same. As for privacy issues, a lot of the large language models on the market are learning data from somewhere, and you have to learn from a lot of big data. It’s not small data, it’s huge data, and the question is where does that data reside, and who owns the data? It’s basically like the cross-border data transfer issues: what’s the data source, what’s the use of the data. It’s more like international transfer, and the question is which laws or regulations will be applied to that data. That’s a bit like the cloud issues, same as the cloud issues, so there are no easy resolutions for that.
So basically, we have to deal with all the data around generative AI. A lot of the privacy protections, and whatever happens in cybersecurity, also apply to generative AI. Basically, when you talk about AI and security, or AI guidelines, or whatever you state within a private company, it really depends on, and includes, the data and privacy: when the data is compromised, the data source is compromised, or the result of the data is compromised, or any breaches happen within the large language model, which has been attacked a couple of times. So those are the kinds of lessons learned; cybersecurity also applies to part, not all, of the generative AI projects. As a private company, it’s not a single-company, single-country matter; we need to deal with multinational, multi-country projects that have to handle all the data privacy issues, and we also need to protect the models or the data where they reside. So it’s pretty much risk-based management. So it’s all about money. But due to the multinational projects, there are no easy resolutions. Still, with the guidelines and some of the lessons we apply from cybersecurity to generative AI, we are resolving some of the issues residing in generative AI projects. But as I said, we have to deal with a lot of different countries. So that’s our challenge right now: not the technology itself, but the cross-border, multinational differences in regulations. That’s the real challenge for a private company. I think I’ll stop here.
Moderator – Daria Tsafrir:
Thank you. That was very interesting. Ms. Daor, I will turn to you now. The OECD was the first, if I’m not mistaken, to publish clear principles for dealing with AI risks. Could you share with us the OECD’s policy from today’s point of view, with an emphasis on the robustness principle? And maybe a word on where we are headed.
Gallia Daor:
Sure. Thank you. So indeed, in 2019, the OECD was the first intergovernmental organization to adopt principles for artificial intelligence. These principles seek to describe what trustworthy AI is. They have five values-based principles that apply to all AI actors, and five recommendations for policymakers specifically. Within these principles, like you said, we have a principle that focuses on robustness, security, and safety, which provides that AI systems should be robust, secure, and safe throughout their lifecycle, which I think is a particularly meaningful aspect. The principles also note that a systematic risk management approach to each phase of the AI system lifecycle, on a continuous basis, is needed. So I think it gives the beginning of an indication of how we can apply a risk management approach in the context of AI. These principles have now been adopted by 46 countries and also serve as the basis for the G20 AI principles. And since their adoption in 2019, we’ve worked on providing tools and guidance for organizations and countries to implement them. We took three different types of actions. One focuses on the evidence base: we developed an online interactive platform called the OECD.AI Policy Observatory that has a database of national AI policies and strategies from over 70 countries, and also data, metrics, and trends on AI: AI investment, AI jobs and skills, AI research publications, and a lot of other information. We also work on gathering the expertise. So we have a very comprehensive network of AI experts, now with over 400 experts from a variety of countries and disciplines, that helps us take this work forward. And we also develop tools for implementation. So we have a catalog of tools for trustworthy AI. Sorry, I should say, we don’t develop the tools, but we compile them.
So we have this catalog where different organizations and countries can submit the tools that they have. We process that, and anybody can access it and see what is out there that can be used. And in that context sits our increasing focus on risk management and risk assessment in AI. Last year, we published a framework for the classification of AI systems. As others have noted, the risk is very context-based: in the abstract, we don’t know what risk a system may pose; it depends on how we use it, who uses it, and with what data. So this classification framework is really there to help us identify the specific risks in a specific context. We will also soon publish a mapping of different frameworks for risk assessment of AI, what they have in common, and the top-level guideposts that we see for risk assessment and management in AI. So that’s the main focus on AI here. But I do want to say a word about the OECD’s work on digital security, which is our term for cybersecurity in the economic and social context. We have an OECD framework for digital security that looks at four different aspects. It has the foundational level, which is the principles for digital security risk management: general principles and operational principles for how to do risk management in the digital security context. It also has a strategic level: how you take these principles as a country and use them to develop your national digital security strategy.
We have a market level: how we can work on misaligned incentives in the market, including information gaps, to make sure that both products and services are secure. In particular, as others have mentioned, AI is now increasingly used in the context of critical infrastructure and critical activities, so we have a recommendation on the digital security of critical activities. And the last level is a technical level, where we focus on vulnerability treatment, including protections for vulnerability researchers and good practices for vulnerability disclosure. And I think this leads, and maybe I’ll stop here, to what the others have said about the intersection between AI and digital security, which is really the heart of today’s conversation. Like the first intervention by Mr. Zarouk said, we see that we need to focus both on the digital security of AI systems, that is, what we need to do to make sure that AI systems are secure, in particular looking at vulnerabilities in the area of the data that is used, at data poisoning, and at how that can affect the outcomes of an AI system. But we also need to think about how AI systems may themselves be used to attack. Generative AI is maybe somewhat of a game-changer in this aspect too: we know, for example, that generative AI can be used to produce very credible content that can then be used at scale in phishing attacks. And there is also work that we have not yet done on how AI systems can be used to enhance digital security. I’ll say just one word on that: at the OECD we have the Global Forum on Digital Security for Prosperity, which is an annual event where we bring different stakeholders from a very large range of countries to talk about the hot topics in digital security.
And the event that we did earlier this year jointly with Japan focused exactly on the link between digital security and technologies, with AI obviously being one of the key focuses. That was exactly one of the themes of our discussion there. So I’ll stop here, but thank you.
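Ms. Daor’s point about data poisoning affecting the outcomes of an AI system can be made concrete with a toy example. This is a minimal illustration of my own, not OECD material: flipping a fraction of training labels shifts the decision threshold that a trivial one-dimensional classifier learns.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated 1-D classes: class 0 near 0, class 1 near 4.
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(4.0, 1.0, 500)])
y = np.array([0] * 500 + [1] * 500)

def fit_threshold(x, y):
    """Pick the cut point that minimizes training error (a 1-D 'model')."""
    candidates = np.sort(x)
    errors = [((x > t).astype(int) != y).mean() for t in candidates]
    return float(candidates[int(np.argmin(errors))])

clean_t = fit_threshold(x, y)

# Poison the training set: flip the labels of the 300 largest points to 0.
y_poisoned = y.copy()
y_poisoned[np.argsort(x)[-300:]] = 0
poisoned_t = fit_threshold(x, y_poisoned)

print(f"clean threshold ~{clean_t:.2f}, poisoned threshold ~{poisoned_t:.2f}")
```

On clean data the learned cut sits between the two classes; after poisoning, the error-minimizing cut moves far to the right, so the model now misclassifies genuine class-1 inputs, which is exactly the kind of outcome shift poisoning aims for.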
Moderator – Daria Tsafrir:
Thank you, Gallia. I can share with you that Israel has adopted the OECD principles into its guideline papers on AI. At the moment, the guidelines are non-legally binding, and the current demand is for sectoral regulators to examine the existing need for specific regulation in their field. But I imagine we will soon be looking into the AI Act as well. So now I’ll turn to Mr. Loevenich. Could you share with us Germany’s policy regarding cybersecurity and AI? How, in your opinion, will the AI Act affect Germany’s policy and regulation? How will you implement it into your legal system?
Daniel Loevenich:
Yeah, it’s a very difficult question. Challenging. Since the AI Act is, as you know, brand new. But indeed, we in Germany are very much concerned with the European perspective on AI. And just let me stress the fact that especially on the EU level, the Union and the standardization organizations, like CEN-CENELEC with JTC 21, as you know, do a great job on that. They very much focus on the ten issues addressed in the AI Act standardization request. And we in Germany are very much looking forward to implementing procedures and infrastructures based on our conformity assessment, and especially certification, infrastructures, to implement the technical basics for conformity assessment against these standards. But first of all, let me stress the fact that if we say AI risks are special risks to cybersecurity, then we always have in mind the technical system risks, like, for instance, a vehicle. And, especially for AI embedded in such a technical system, we address all these risks based on our experience with engineering and analysis of these technical systems. Or, in the case of a distributed IT system with a whole supply chain in the background, we have special AI components or modules, for instance cloud-based services, that play a key role for the whole supply chain. So we address the risks in terms of the whole supply chain of the application. And it’s very important to be aware that when we in Germany consider AI risks, we have to concentrate on these AI modules within those complex systems. We do that by mapping these application- or sector-based risks, which may of course be regulated by standards, down to technical requirements for the AI modules. And of course, we have a lot of stakeholders being responsible and competent to address these risks.
And they are responsible for implementing the special AI countermeasures, technical countermeasures, within their modules during the whole life cycle, as we heard from the speakers already. And this is where we concentrate, especially in Germany, but also in the EU. The overall issue is to build a uniform AI evaluation and conformity assessment framework, independently of who is responsible for implementing the countermeasures, and for their working effectively against the cybersecurity risks. And this is a European approach. It is the number one key political issue in the German AI standardization roadmap. So if you ask me what we do next: yes, on the basis of the existing cybersecurity conformity assessment infrastructure, like attestation, second-party or third-party evaluation, certification, and so on, we try to address these special AI risks as an extension to the existing frameworks, implementing the EU AI standardization requests. Does that answer your question, basically?
Moderator – Daria Tsafrir:
Thank you. Thank you so much. And you actually brought me directly to the second round of our session, which is what’s missing and what we can do better. As some of you mentioned already, one of our major concerns as a government is the protection of the safety and security of critical infrastructures and, as a result, supply chains. And recently, we are also looking into SMEs. So I have two questions, if you could address them shortly. One is: what should governments be doing in the regulatory space to improve the cybersecurity of AI systems? And when we talk about regulation, I think we need to address two subjects: we need to consider the risks of over-regulation, and we also need to ask whether AI is too dynamic for regulation. The second question is: how much of the challenge should be addressed within international forums, including maybe binding treaties? So if you could address these questions, and if you have an idea or advice for the future, I’ll be glad to hear it. I think we’ll keep the same order, so we’ll start with Dr. Al-Blushi, and then we can go on.
Bushra Al-Blushi:
Yeah. I think I will take it from the international perspective. As we can see today, in the current landscape many AI acts are being developed and issued by different countries, and it’s totally fragmented. It’s very difficult for both providers and consumers to adapt at the end of the day. Assume that I’m providing those services or AI models in 100 countries and there are 100 acts: which one should I comply with? Shouldn’t we harmonize, or shouldn’t we come up with at least minimum requirements for conformity assessment or for compliance, which would make it much easier for producers to comply? At the end of the day, it would also give consumers the confidence that this AI tool is internationally recognized by multiple countries. That fragmentation, as I said, makes it really difficult for both consumers and providers. International collaboration and harmonization of AI standardization and compliance requirements would address those challenges. Actually, this was one of the papers that we published last year with the World Economic Forum, calling for a harmonized international certification scheme for different things. AI was not part of it, but at least it addressed the idea of how harmonization should be done and what the minimum requirements are. I’m not saying it’s the full certification that a country should rely on, but at least it’s a minimum-requirements certification, or minimum-requirements conformity assessment, that makes it easier for providers to comply, and makes our role as regulators much easier than having different standards, different requirements, and different acts in different countries. In a nutshell, I think harmonization of international requirements is very important in order to move forward with the different AI acts that we have today.
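Dr. Al-Blushi’s “minimum requirements” idea can be sketched as a set computation: the harmonized baseline is whatever every jurisdiction’s rules have in common, which a provider could certify once. The jurisdictions and requirement names below are entirely hypothetical, invented only to illustrate the mechanics of her argument.

```python
# Hypothetical rule sets for three made-up jurisdictions.
JURISDICTION_RULES = {
    "A": {"risk assessment", "incident reporting", "human oversight", "data-quality checks"},
    "B": {"risk assessment", "incident reporting", "transparency notice"},
    "C": {"risk assessment", "human oversight", "incident reporting"},
}

# The harmonized baseline: requirements common to all jurisdictions.
baseline = set.intersection(*JURISDICTION_RULES.values())

def gap_analysis(implemented: set[str]) -> dict[str, set[str]]:
    """Per-jurisdiction requirements still missing after what a provider implements."""
    return {j: rules - implemented for j, rules in JURISDICTION_RULES.items()}

# A provider that certifies the baseline plus one extra control.
provider = baseline | {"transparency notice"}
print(sorted(baseline))
print(gap_analysis(provider))
```

The point of the sketch is that a shared baseline shrinks every per-country gap to the small sector- or country-specific remainder, rather than a provider re-proving 100 full rule sets.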
Moderator – Daria Tsafrir:
Thank you. Mr. Honjo.
Hiroshi Honjo:
Yeah, Dr. Al-Blushi, you said almost everything I wanted to say, but basically, as a private company, we need international harmonization for all the regulations. In his keynote speech at the IGF, our Japanese Prime Minister, Kishida-san, said there will be AI regulations and guidelines in the G7 countries. That’s OK, but it’s not enough; there are more countries. So we need at least minimum requirements, minimum harmonization, to run a business across multiple countries. So I’m kind of looking forward to that. But what I don’t want to happen is what happened with data protection, the GDPR: some countries have very strong regulations, other countries have very soft law, and for a private company that costs a lot. So I hope everything gets harmonized for AI. I’ll stop here.
Moderator – Daria Tsafrir:
Thank you. Ms. Daor?
Gallia Daor:
Thank you. Yeah, so I think we’ve heard a lot about the fragmentation issue, and obviously that’s a serious issue. I think it’s difficult to talk in the abstract about whether we should or shouldn’t have regulation, because these things are happening. So it’s also worth talking about what we do with this. And from the perspective of an international organization, I think we can talk about three roles of intergovernmental organizations and what they can do to help countries and organizations in this situation. One is mapping the different standards, frameworks, and regulations out there and trying to identify commonalities, perhaps minimum standards, and develop some sort of practical guidance from that. But I think another important role is the ability of intergovernmental organizations, and we see that here today, to convene the different stakeholders from the different countries and the different stakeholder groups to flag their issues and have that conversation. And perhaps a third aspect is to advance the metrics and measurement of some of these issues that are very challenging. So in the context of our work on AI, we are developing, and will launch next month, an AI incidents monitor that looks at real-time, live data to see what actual incidents AI systems cause in the world. And I think that’s maybe one step to advance that issue. Thank you.
Moderator – Daria Tsafrir:
Thank you. Mr. Lovnic?
Daniel Loevenich:
Yeah, we in Germany want to open markets to new technologies. We want people to be creative with AI technologies. We want SMEs to be on their way to use these technologies and even to develop new ideas with these technologies. So we really don’t want to prescribe things. We just want to recommend people and organizations to do special things. So basically the, and obviously the first and overall instrumentarium for this is international standardization so that people can decide on different issues and their own risks and requirements to use technologies in special ways. ways and not to use them or to misuse them in other ways. Please allow some remarks on that standardization issues, especially on the ISO level. My experience is they are a lot of people involved. Many of them are AI experts, but I can distinguish three schools of thought. Technical, sectoral means application specific in contradiction to the technical application agnostic of you and the normative and ethical things on the top. It’s nothing new. It’s three different aspects of AI technology since they are data driven. We have data in these systems and they are used as machine understandable data, not readable data, but understandable data. So people are very much responsible in using these technologies for specific purposes. Now then, if you have appropriate standards and speaking of harmonization, you can do this on the technical level, like ISO does, like Sansemelet does, like other people do. It’s very easy. If you come to application specific requirements, you can standardize that. In Europe, we have ATSI for instance, or ITU for the normative. for the health care sectors. Very effective. You can do that. And you can do it even on the application and sectoral-specific levels. You can do regulation if you want, but let the market do it. Let they decide this is use of our AI-based systems. 
And let the market and the customers decide, I want to use this technology in that way that is regulated by blah, blah, blah. The third school of thought or level is very much specific on value-based things. There are society and all these kind of organization and digital serenity and other aspects that play a key role in that. In the EU, for instance, you have 27 nations, if I’m right, with probably 27 different value-based governmental positions on that. So it’s very, very difficult. Our time is coming to an end. Yeah, I’m going to stop here. But this is the difficult part. Yeah, it was very interesting.
Moderator – Daria Tsafrir:
Yes, thank you. I did steal back our five minutes, I have to say. But well, anyway, time flies when you’re having fun. And our time is unfortunately up. So I would like to thank you all for participating. And I know some of you had to wake up very, very early in the morning. So I really appreciate your effort. It was very interesting and very enlightening. And I hope to see you soon, maybe on the follow-up session.
Speakers
Abraham Zarouk
Speech speed
103 words per minute
Speech length
819 words
Speech time
475 secs
Asaf Wiener
Speech speed
133 words per minute
Speech length
133 words
Speech time
60 secs
Bushra Al-Blushi
Speech speed
165 words per minute
Speech length
1650 words
Speech time
601 secs
Daniel Loevenich
Speech speed
98 words per minute
Speech length
1083 words
Speech time
664 secs
Gallia Daor
Speech speed
163 words per minute
Speech length
1509 words
Speech time
554 secs
Hiroshi Honjo
Speech speed
98 words per minute
Speech length
942 words
Speech time
578 secs
Moderator – Daria Tsafrir
Speech speed
143 words per minute
Speech length
1024 words
Speech time
429 secs
Moderator 1
Speech speed
164 words per minute
Speech length
17 words
Speech time
6 secs
