Child online safety: Industry engagement and regulation | IGF 2023 Open Forum #58

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Julie Inman Grant

The analysis covered a range of topics related to online safety and abuse. One of the main points discussed was Australia’s strong online content scheme, which has been in place for more than 22 years. The scheme operates largely extraterritorially, as almost all of the illegal content it deals with is hosted overseas. This highlights Australia’s commitment to tackling harmful online content and ensuring a safe online environment for its citizens.

Another important aspect highlighted in the analysis is the need for a more individual-centred approach to addressing online abuse. Complaint schemes have been put in place to address individual abuse cases, and understanding current trends in online abuse is integral to applying systemic powers effectively. Taking into account the experiences and needs of individuals affected by online abuse can lead to more targeted and effective interventions.

A concerning finding from the analysis is the significant increase in cases of online child sexual exploitation and sexual extortion. It is reported that there has been a doubling of child sexual exploitation cases and a tripling of sexual extortion reports. Shockingly, one in eight analyzed URLs involves coerced and self-produced abuse through smartphones and webcams. These figures highlight the urgent need for robust measures to combat online child sexual abuse and protect vulnerable children.

The role of online platforms in preventing abuse was also discussed. Currently, online platforms are being used as weapons for abuse. However, platforms like Snap and Instagram have been provided with intelligence reports on how to prevent this abuse. The analysis suggests that online platforms should do more to proactively guard against their services being exploited for abusive purposes.

The analysis also touched upon corporate responsibility in online safety. The introduction of the Basic Online Safety Expectations gives the government the power to ask transparency questions and compel legally binding answers from companies, and companies can be fined if they fail to respond truthfully and completely. These expectations play a pivotal role in compelling companies to operate safely and protect their users.

Global collaboration and transparency were identified as crucial factors in tackling online child abuse. Initiatives like the Heat Initiative are putting pressure on large companies, such as Apple, to do more to address child sexual abuse. Additionally, further enforcement announcements targeting five more companies are expected next year, indicating an ongoing commitment to global collaboration in combating online child abuse.

The analysis also highlighted the challenges faced in safeguarding children online. While the internet has become an essential tool for children’s education, communication, and exploration, it was not initially built with children in mind. Notably, there was a marked increase in reports of cyberbullying among younger children during COVID-19 lockdowns. It is imperative to strike a balance between safeguarding children appropriately and allowing them to benefit from the internet.

Regarding age verification, the analysis presented differing viewpoints. Companies were encouraged to take responsibility in verifying users’ ages and facilitating meaningful checks. However, it was suggested that age verification should not restrict children’s access to necessary and beneficial content. Trials for age verification are currently being conducted by platforms like Roblox, and Tinder and Instagram have begun implementing age verification in Australia. However, there are concerns about the effectiveness and potential restrictions on access for marginalized communities.

The effectiveness of Meta’s Oversight Board in reviewing content moderation decisions was called into question. In the past year, the board received around 1.3 million requests for content moderation reviews but was able to take on only 12 cases. This raises concerns about the board’s capacity to handle the sheer volume of cases.

Lastly, the analysis emphasized the importance of multinational regulation for online platforms and the need for specialized agencies to handle investigations. The gray area of regulation poses significant challenges, requiring multi-layered investigations to effectively address abuse and ensure accountability.

In conclusion, the analysis shed light on various aspects of online safety and abuse. It highlighted Australia’s strong online content scheme, the need for individual-centred approaches in tackling online abuse, the concerning increase in cases of online child sexual exploitation, and the role of online platforms in preventing abuse. The importance of global collaboration, corporate responsibility, and safeguarding children online was also emphasised. Critical evaluations were made regarding age verification measures, Meta’s Oversight Board, and the need for multinational regulation and specialised agencies. These insights provide valuable information for policymakers, platforms, and organisations to address online safety and combat abuse effectively.

Audience

The discussion revolves around striking a balance between children’s right to access information online and ensuring their safety, particularly in relation to sexuality education. It is important to provide children with accurate and scientific information while also protecting them from potentially harmful content. This highlights the need for a comprehensive and inclusive approach to online education.

There are ongoing discussions regarding the implementation of new regulations to safeguard children online. The speaker questions whether there is a balance between raising awareness and imposing obligations on service providers under these regulations. This reflects the growing recognition of the importance of protecting children from abuse, exploitation, and violence online.

In terms of ensuring child safety online, the audience argues for not only blocking but also removing harmful content. Simply blocking such content may not be sufficient, as individuals seeking it may find ways to circumvent these blocks. Therefore, the removal of harmful content becomes crucial to guarantee the safety of children.

In conclusion, the discussion emphasizes the need for a balanced approach that upholds children’s right to access accurate information while safeguarding them from harmful content. The introduction of new regulations and the emphasis on removing, not just blocking, harmful content further demonstrate the commitment towards ensuring online child safety. This signifies progress in protecting children from abuse, exploitation, and violence in the digital realm.

Noteworthy topics discussed include children’s rights, online safety, access to information, and sexuality education. Additionally, the discourse touches upon the relevance of the UN Convention on the Rights of the Child and the impact of digital regulation on children’s rights and internet safety. These aspects contribute to a comprehensive understanding of the subject matter and highlight the interconnections between various global initiatives, such as SDG 4: Quality Education, SDG 5: Gender Equality, and SDG 16: Peace, Justice, and Strong Institutions.

Tatsuya Suzuki

During the discussion, the speakers emphasised the need to enhance internet safety for children. They highlighted the importance of having a comprehensive plan in place to ensure the secure use of the internet by children. This plan involves collaborative efforts with various stakeholders, including academics, lawyers, communications companies, and school officials. These groups can work together to develop strategies and guidelines that promote responsible internet use among children.

The speakers also expressed their support for public-private initiatives aimed at addressing online child abuse and exploitation. They recognised the crucial role of the Children and Families Agency in respecting the private sector’s own initiatives in these efforts. Additionally, they highlighted the active collaboration between the agency and the Ministry of Education, Culture, Sports, Science and Technology, as well as the involvement of the Japan Committee for UNICEF. These collaborations are important in developing effective and comprehensive approaches to combating online child abuse and exploitation.

Overall, the sentiment expressed during the discussion was positive, with a strong emphasis on implementing measures to protect children online. The speakers recognised the urgency and importance of ensuring the safety and security of children in their online activities.

Through the analysis, it is evident that this issue is aligned with Sustainable Development Goal 16.2, which aims to end abuse, exploitation, trafficking, and all forms of violence and torture against children. By addressing the challenges of internet safety and working towards its improvement, progress can be made towards achieving this goal.

In summary, the discussion highlighted the necessity of implementing initiatives to improve the safe and secure use of the internet by children. Collaboration with various stakeholders, such as academics, lawyers, communications companies, and school officials, is essential in developing a comprehensive plan. The support for public-private initiatives in tackling online child abuse and exploitation was also emphasised, acknowledging the roles of the Children and Families Agency, the Ministry of Education, Culture, Sports, Science and Technology, and the Japan Committee for UNICEF. Overall, there was a positive sentiment towards the implementation of measures that protect children online, aligning with Sustainable Development Goal 16.2.

Moderator – Afrooz Kaviani Johnson

Child exploitation on the internet is an ongoing issue that has evolved over the years. It now encompasses more than just explicit materials, but also the ways in which technology enables abuse. To effectively address this issue, collaboration across sectors is crucial.

Australia’s eSafety Commissioner is at the forefront of combating online abuse. This government agency has implemented a range of regulatory tools to drive industry-wide change. The role of Australia’s eSafety Commissioner in spearheading these efforts is commendable.

The involvement of the private sector is also vital in protecting children online. Companies are increasingly being called upon to take proactive measures and be accountable for their responsibilities in ensuring online child safety. These discussions involve industry experts from various countries, including Japan’s private sector and BSR Business for Social Responsibility.

Japan is making significant strides in enhancing internet safety for young adolescents. The country’s Children and Families Agency and multiple stakeholders, such as academics, lawyers, communications companies, school officials, and PTA organisations, are actively involved in creating a safe and secure online environment for young people. Japan’s measures in this regard have been positively received and appreciated.

Recognising the importance of private sector involvement, Japan’s Children and Families Agency administers the Internet Environment Management Act, which respects the individual and voluntary initiatives of private organisations. These organisations are actively engaged in ensuring the safe and secure use of the internet by children.

Addressing online child abuse is a complex and challenging task. Mr Suzuki, a prominent speaker, highlighted the various ways in which children can fall victim to online abuse, emphasising the need for parental involvement and proper ‘netiquette’. In Ghana, collaborative regulation involving tech companies has been adopted to tackle online child abuse.

Continued learning and knowledge exchange are crucial in combating online child abuse. A recent discussion on internet literacy and online child abuse served as a fruitful exercise and a positive step in addressing the issue. Ultimately, promoting sustainable development by ensuring all learners acquire the necessary knowledge and skills is vital.

In conclusion, addressing the issue of child exploitation on the internet requires collaboration across sectors, involvement of government agencies like Australia’s eSafety Commissioner, proactive engagement of companies, efforts from countries like Japan, and continued learning. These various approaches collectively work towards protecting children online and making the digital world a safer space for young people.

Toshiyuki Tateishi

Over the past decade, the Japanese private sector has adapted to address the challenges of online child sexual abuse and exploitation. Japan’s constitution protects the secrecy of communication, which constrains the blocking of websites. To combat online child abuse, mechanisms have nonetheless been established under which DNS servers block access to illegal sites. If an abusive site is discovered within Japan, the internet service provider (ISP) deletes it and the police investigate; if the site is overseas and is found to contain sexually abusive content, it is promptly blocked. Japan’s approach has been recognised, including in a 2016 UN report, for preserving digital freedoms with minimal government interference, and it places particular emphasis on balancing freedom of communication, security, and innovation online. Before taking down any content, communication with the relevant parties is attempted, even when they are located overseas, underscoring the importance of dialogue and collaboration with foreign entities. Overall, Japan’s comprehensive strategy demonstrates its commitment to creating a safe online environment and addressing online child sexual abuse and exploitation.
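The DNS-based blocking described above can be sketched in a few lines: a resolver consults a regularly updated blocklist before answering, and blocked names receive a sinkhole address instead of the real one. This is a minimal illustrative sketch, not the actual Japanese implementation; the hostnames, IP addresses, and list format are made up for the example.

```python
# Minimal sketch of DNS-based blocking: the resolver checks a blocklist
# before answering. All names and addresses below are illustrative.

SINKHOLE_IP = "0.0.0.0"  # returned instead of the real address for blocked names

def load_blocklist(text: str) -> set[str]:
    """Parse a blocklist with one hostname per line, ignoring blanks and comments."""
    return {
        line.strip().lower()
        for line in text.splitlines()
        if line.strip() and not line.strip().startswith("#")
    }

def resolve(hostname: str, dns_table: dict[str, str], blocklist: set[str]) -> str:
    """Answer a lookup: blocked names get the sinkhole, others their real IP."""
    if hostname.lower() in blocklist:
        return SINKHOLE_IP
    return dns_table.get(hostname.lower(), "NXDOMAIN")

# Example: "abuse.example" is on the weekly list, "safe.example" is not.
table = {"abuse.example": "203.0.113.5", "safe.example": "198.51.100.7"}
blocked = load_blocklist("# weekly list\nabuse.example\n")
print(resolve("abuse.example", table, blocked))  # → 0.0.0.0
print(resolve("safe.example", table, blocked))   # → 198.51.100.7
```

As in the scheme described above, refreshing the blocklist on a schedule (weekly, in Japan’s case) is what keeps resolvers in sync with the association’s list, and the block only affects users of the participating resolvers, not the content itself.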

Dunstan Allison-Hope

Human rights due diligence plays a vital role in upholding child rights and combating the alarming issue of online child sexual exploitation and abuse. Business for Social Responsibility (BSR) emphasises that incorporating human rights due diligence is essential for companies to demonstrate their commitment to the well-being of children. BSR has conducted over 100 different human rights assessments with technology companies, highlighting the significance of this approach.

A comprehensive human rights assessment involves a systematic review of impacts across all international human rights instruments, focusing on safeguarding rights such as bodily security, freedom of expression, privacy, education, access to culture, and non-discrimination. It is crucial to adopt a human rights-based approach, which includes considering the rights of those most vulnerable, particularly children who are at a greater risk.

The European Union Corporate Sustainability Due Diligence Directive now mandates that all companies operating in Europe must undertake human rights due diligence. As part of this process, companies must evaluate the risks to child rights and integrate this consideration into their broader human rights due diligence frameworks. By explicitly including child rights in their assessments, companies can ensure that they are actively addressing and preventing any potential violations.

It is important, however, to maintain a global perspective in human rights due diligence while complying with regional laws and regulations. Numerous regulations from the European Union and the UK require human rights due diligence, but there is a concern that the time and attention devoted to the European Union and the United Kingdom draws resources away from places where human rights risks may be more severe. Therefore, while adhering to regional requirements, companies should also take broader global approaches to effectively address human rights issues worldwide.

A holistic human rights-based approach seeks to balance different human rights, with a specific focus on child rights. Human rights assessments typically identify child sexual exploitation and abuse as the most severe risks. To ensure the fulfilment of all rights, a comprehensive assessment must consider the relationships between different rights, taking into account tensions between them and the ways in which fulfilling one right can enable the fulfilment of others.

Another crucial aspect of human rights due diligence is the application of human rights principles to decisions about when and how to restrict access to content. Cases before the Meta Oversight Board have shown that having the time to analyse a case can provide insights and ways to unpack the relationships between rights. Applying principles such as legitimacy, necessity, proportionality, and non-discrimination to these decisions helps ensure a balanced approach.

It is also important to provide space to consider dilemmas and uncertainties and to make recommendations in cases relating to human rights, particularly child rights. The deliberative space available to the Meta Oversight Board was highlighted, and the idea of similar processes for child rights cases was welcomed. This helps ensure that decisions are informed, that different perspectives are considered, and that sound recommendations can be made.

In conclusion, human rights due diligence is vital to respect and safeguard child rights and combat online child sexual exploitation and abuse. By integrating child rights into their broader human rights due diligence, companies can demonstrate their commitment to the well-being of children. While complying with regional laws, it is crucial to adopt a global approach to effectively address human rights risks. A holistic human rights-based approach considers the interrelationships between different rights, while the application of human rights principles guides decisions about content access. Providing space for deliberation and recommendations in cases involving child rights is fundamental to making informed decisions and ensuring the protection of children’s rights.

Albert Antwi-Boasiako

The approach adopted by Ghana in addressing online child protection is one of collaborative regulation, with the objective of achieving industry compliance. In line with this, Section 87 of Ghana’s Cybersecurity Act has been established to enforce industry responsibility in safeguarding children online. The act contains provisions that compel industry players to take action to protect children from online threats.

Furthermore, Ghana’s strategy involves active engagement with industry players, such as the telecommunications chamber, to foster mutual understanding and collaboratively develop industry obligations and commitments. This collaborative approach highlights the importance of involving industry stakeholders in shaping regulations and policies, rather than relying solely on self-regulation.

The evidence supporting Ghana’s collaborative regulation approach includes the passing of a law that includes mechanisms for content blocking, takedown, and filtering to protect children online. These measures demonstrate the government’s commitment to ensuring the safety of children in the digital space.

The argument put forth is that self-regulation alone cannot effectively keep children safe online, as it may not provide sufficient guidelines and accountability. On the other hand, excessive regulation can stifle innovation and hinder the development of new technologies and services. Ghana’s approach strikes a balance by fostering collaboration between the government and industry players, promoting understanding, and establishing industry obligations without impeding innovation.

In conclusion, Ghana’s collaborative approach to online child protection aims to ensure industry compliance while striking a balance between regulation and innovation. By actively engaging with industry stakeholders, Ghana seeks to develop effective measures that safeguard children online without stifling technological advancement. This approach acknowledges the limitations of self-regulation and excessive regulation, thus presenting a more holistic and effective approach to online child protection.

Session transcript

Moderator – Afrooz Kaviani Johnson:
Okay, well, welcome everyone, welcome everyone in the room, and I understand we’ve got at least 20 that have logged on online as well to join this evening session in Kyoto. So I know it’s been a long day for many people and we appreciate you taking the time and joining us in this session. We are going to be exploring different models of industry engagement and regulation to tackle online child sexual abuse and exploitation. My name is Afrooz Kaviani, I work for UNICEF headquarters in New York as the global lead on child online protection. I’m joined by my colleague Josie, who leads our work on child rights and responsible business conduct in the digital environment. So Josie is managing our online moderation today and she’ll be looking out for questions and comments that may be coming from our online participants. And we’re delighted to have with us expert speakers from different sectors and really from around the globe, representing Australia’s eSafety Commissioner, Japan’s Children and Families Agency, Japan’s private sector, Ghana’s Cyber Security Authority and BSR, Business for Social Responsibility. Our aim today is to foster collaboration and the exchange of ideas, experiences and innovative strategies on this difficult topic of child sexual abuse online. So I do want to give the content warning that we are speaking about a difficult topic and it may be disturbing for people in the room or online, so please feel free to step out or do what you need to do to, you know, safeguard your own well-being.
Many of you already know that this challenge of child exploitation on the internet is not new; however, its nature has changed over the last decades. In the early stages, efforts primarily looked at halting the spread of child sexual abuse materials on the internet, but today we’re seeing how technology is also being used to enable or facilitate child sexual abuse in a wide range of ways, including the live streaming of child sexual abuse, the grooming of children for sexual abuse, and the coercion, deception and pressuring of children into creating and sharing explicit images of themselves. So obviously it goes without saying that addressing this issue requires collaboration across sectors and it requires strengthening of systems for protection for children, you know, in their homes and their communities and in their countries. But today we’re zooming in on a specific dimension of this response: how different jurisdictions are engaging companies in this effort. We’ve got one round of questions for our panellists and then we’re going to open the floor for questions and discussions from the audience. So I’m really delighted to turn to Australia to start us off, and we’re so pleased to have with us Julie Inman Grant, who is Australia’s eSafety Commissioner. Thank you, Commissioner, for joining us. And the question for you, really, as the world’s pioneering government agency for online safety: I’m interested to hear from you about the suite of regulatory tools that you’re deploying to really drive systemic change in industry against online child abuse.

Julie Inman Grant:
It’s important to start with the fact that Australia has had a strong online content scheme for more than 22 years, which means almost none of the content, illegal content that we’re dealing with online, is hosted in Australia. It’s almost all extraterritorial and overseas. So you see the world moving towards some much more process and systemic types of laws. We’re seeing with the online safety bill in the UK, with the Digital Services Act. We do have process and safety powers, but I also want to start by talking a little bit about the complaint schemes that we have, because I believe it’s one of the most important things that we do. We seem to forget that it’s individuals who are being abused online, and that’s how harm manifests, and the ability to take down that content to prevent the re-traumatisation, but also to understand the trends that are happening through engaging with the public is really critical to our success in applying the systems and process powers. So just to give you an example, we’ve seen a doubling this year of child sexual exploitation. When we analysed about 1,300 URLs, we found that one in eight is now, instead of inter-familial abuse, which tends to be more typical, one in eight is coerced and self-produced through smartphones and webcams in children’s bedrooms and bathrooms, in the safety of the family home. So that’s really significant. It just shows that the internet is becoming a new receptacle for targets, for predators, and it’s no longer one of convenience. The other huge trend that a number of us are seeing is we’ve had a tripling of sexual extortion reports coming into our image-based abuse schemes. So image-based abuse, the non-consensual sharing of intimate images and videos, we are seeing younger children being subject to that, but it’s now young men between the ages of 14 and 24 that are largely being targeted. And while 18 is the year that we consider young people adults, they’re not totally cognitively developed. 
They may be leaving school, so they don’t have the pastoral care and protections that they might once have had. So it’s a very distressing kind of crime, and sometimes it can happen very rapidly. Organised criminals have figured out that young men will take off their clothes and perform sexual acts for the camera more readily than young women, and they will negotiate down. We’ve seen some negotiations where they’ll try and extract $15,000 from a teenager, and they’ll say, well, I’m just a teenager, I don’t have that money, well, how much can you give me? And it’s relentless. But they’ll also use guilt and shame and other tools of social engineering. So all this is really important for us to understand. We’ve actually developed some intelligence reports for companies like Snap and Instagram to say, this is how we see your platform being weaponised. If you use some AI and machine learning, you can see that these same images are being used in 1,000 different reports, and if you use some natural language processing, you’ll see that they’re using the same language. So we need to encourage the companies to step up, and that’s where safety by design is a key systemic tool. But I guess the most potent one that we have is what we call the basic online safety expectations, and that’s where we lay out a set of foundational expectations we have for online companies, whether they’re gaming or dating sites or social media sites or messaging sites, to operate in our country. And it gives us the opportunity to ask transparency questions and compel legal answers. Questions we’ve been asking for six years, basic things: photo DNA has been used for more than ten years; which services are you using it on? Are you using it at all? Are you looking at traffic indicators for live-streamed child sexual abuse material? And we can fine the companies based on whether or not they respond truthfully and fulsomely in the manner and form. So that’s where the penalties are.
We published a stunning report, I think, in December of 2022, looking at the most powerful countries and companies in the world that have the financial resources and the capability to do things but are not doing enough. So shining that light, with sunlight being the best disinfectant, is, I think, an effective tool. We’ve already seen in the United States the Heat Initiative and others, you know, putting pressure on companies like Apple to target child sexual abuse material. You can’t tell me that in 2022 they only had 234 cases of child sexual abuse when they’ve got more than a billion handsets and iPads in the market and iCloud and iMessage. So we really need to lift the hood. We’ll be making a similar set of enforcement announcements next year focused on five more companies. We need to continue to work together. We need to lift the lid. We need to focus sunlight so that we don’t let darkness fester in the darkest recesses of the web.

Moderator – Afrooz Kaviani Johnson:
Thank you, Commissioner. That is so fascinating, just the breadth of tools. And really I have to apologise and let everyone in the room know that I’ve given a very small amount of time to each speaker. So the Commissioner did an amazing job there really covering the breadth, but I think we’re going to have time to unpack and understand better. But I think just what you’ve managed to do, and just those analogies of, you know, shining the light and using those regulatory tools to lift the hood.

Julie Inman Grant:
I forgot just to mention that we have mandatory codes and standards covering eight sectors of the technology ecosystem, five of which we’ve finalised, including a search engine code which now covers generative AI and the synthetic generation of CSAM and TVEC, and we’re creating standards for a broader range of what we call designated internet services and relevant electronic services.
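The duplicate-image detection the Commissioner alludes to (spotting the same image reused across a thousand reports) can be sketched very simply. Production systems use perceptual hashes such as PhotoDNA that survive re-encoding and resizing; this minimal illustration, with made-up report data, uses an exact SHA-256 digest and therefore only catches byte-identical copies.

```python
# Illustrative sketch of clustering abuse reports that reuse the same image.
# A cryptographic digest stands in here for a perceptual hash; all report
# IDs and image bytes are invented for the example.
import hashlib
from collections import defaultdict

def image_fingerprint(data: bytes) -> str:
    """Fingerprint an image's raw bytes (exact-match only)."""
    return hashlib.sha256(data).hexdigest()

def cluster_reports(reports: dict[str, bytes]) -> dict[str, list[str]]:
    """Group report IDs by the fingerprint of their attached image,
    keeping only fingerprints that appear in more than one report."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for report_id, image_bytes in reports.items():
        clusters[image_fingerprint(image_bytes)].append(report_id)
    return {fp: ids for fp, ids in clusters.items() if len(ids) > 1}

# Example: two reports share the same image bytes, one does not.
reports = {"r1": b"img-A", "r2": b"img-A", "r3": b"img-B"}
repeated = cluster_reports(reports)
print([sorted(ids) for ids in repeated.values()])  # → [['r1', 'r2']]
```

The same grouping idea extends to the natural language processing the Commissioner mentions: fingerprinting the wording of extortion messages instead of image bytes would surface reports that reuse the same script.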

Moderator – Afrooz Kaviani Johnson:
Thank you, Commissioner. We’re now going to move to Japan. We’re here in Japan, so it’s very timely and it’s actually very exciting to introduce the next speaker. Mr. Tatsuya Suzuki. He’s the director of a newly formed agency in Japan, which is very significant for the child protection and child rights architecture in this country. So he’s the director of the Child Safety Division of the Children and Families Agency. He’ll be joining us online. And the question for Mr. Suzuki is to understand with his extensive experience, which includes roles at Japan’s National Police Agency, we’re wanting to know more about how this newly formed agency is really going to push forward public-private initiatives in order to tackle the specific issue of online child sexual abuse and exploitation. Do we have…

Tatsuya Suzuki:
[Speaking Japanese, interpreted] The first point is for children to use mobile devices appropriately, selecting and using internet information properly so as to develop their ability to use the internet well. We are also working on a comprehensive plan for children’s appropriate use of the internet. This work was carried out at the Cabinet Office for a long time, but this April the Children and Families Agency was established, so the agency is now taking it forward. As I mentioned earlier, the agency’s Internet Environment Management Act respects the private sector’s individual and voluntary initiatives, and we work together with private organisations. For children’s safe and secure use of the internet, we work with experts in various fields, including academics, lawyers, communications companies, school officials, and PTA organisations, and hold seminars on maintaining the internet environment for young people. We are also discussing the revision of the basic plan. Finally, I will explain a little about measures to prevent child sexual exploitation.
In the past, the National Police Agency and other parts of the Japanese government worked together on measures to prevent sexual crimes against children. Last year we implemented the 2022 plan to prevent sexual crimes against children, and from this year the Children and Families Agency is in charge of these prevention measures. In promoting the measures, we are also working actively with the Ministry of Education, Culture, Sports, Science and Technology and the Japan Committee for UNICEF. That's all I have to say. Thank you.

Moderator – Afrooz Kaviani Johnson:
Thank you very much. We will come back to questions and answers, but it is fantastic to hear how these basic standards are in the law and how you are now starting implementation measures that take a multi-sectoral approach, with strong engagement of the private sector. On that note, I'm very pleased to shift the mic to the private sector representative from Japan, Mr. Toshiyuki Tateishi, who is representing the Japan Internet Providers Association as well as the Internet Content Safety Association. My question is whether you can let us know how private sector initiatives in Japan have adapted over the last decade to address emerging challenges relating to online child sexual abuse and exploitation.

Toshiyuki Tateishi:
Thank you. I'm very happy to be here. I'd like to explain the Japanese situation. In Japan we have the secrecy of communications, which is a constitutional guarantee, but the blocking system is very hard for ordinary people to understand, so I made some small slides; could you put them up? Ordinary people think of it like this: we go outside and want to enter a house or some other building, and we find we cannot go in. But that is not blocking; that is just a real-world block. Next, please. This shows normal website access with DNS: we ask the resolver for an address and it replies. With blocking, the DNS server answers with the wrong information, maybe another server's IP address, so we cannot reach the place we wanted, which is different from the building example I mentioned first. Next, please. It's like this: when I want to go to the karaoke, the gatekeeper says, okay, go ahead. But the blocking system is like this: I say I want to go to house A and I cannot; even when I want to go home, I cannot. You cannot go out even from your own house. That is the blocking system. And from another point of view: if we block content in Japan, people in many other countries can still access that content; only Japanese users cannot see it. Next, please. This is the blocking scheme as a measure against these sites. On the left side of the slide, internet users report illegal content. The reports come in, and our association, the Internet Content Safety Association, makes a list. The DNS servers automatically retrieve the list weekly and update their records, so we can block the illegal content. Next, please.
If the website is located in Japan, the content is deleted by the ISP. Next, please. The police then investigate. Next, please. And the criminals are arrested. But if it is not located in Japan but overseas: next, please. We check whether the site really exists, then we validate whether it is sexual abuse material. After that, we create a list and distribute it to the ISPs, and the sites are blocked. We then check weekly whether each site still exists, and delete URLs from the list when they no longer do. Thank you very much. And lastly, one more point. In 2016, the UN Special Rapporteur reported on freedom of expression in Japan, and I spoke with him. He said Japan presents a kind of great model in the area of internet freedom: the very low level of government interference with digital freedoms attests to the government's commitment to freedom of expression. As the government considers legislation related to wiretaps and new approaches to cybersecurity, he hoped that this spirit of freedom, communications security, and innovation online is kept at the forefront of regulatory efforts. So I'd like to keep this situation. Thank you.
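The list-based DNS blocking described here can be sketched in code. The following is a minimal illustrative model, not the actual ICSA or ISP implementation: a resolver consults a blocklist that is replaced wholesale on each weekly refresh and, for listed domains, returns a sinkhole address instead of the real one. All domain names and IP addresses below are made-up examples.

```python
# Illustrative sketch of list-based DNS blocking (not the real ICSA/ISP code).
# The "upstream" dict stands in for ordinary DNS data; all names and
# addresses are hypothetical.

SINKHOLE_IP = "0.0.0.0"  # placeholder address returned for blocked domains


class BlockingResolver:
    def __init__(self, upstream, blocklist=None):
        self.upstream = upstream            # simulated real DNS records
        self.blocklist = set(blocklist or [])

    def update_blocklist(self, new_list):
        """Replace the list wholesale, as in the weekly list refresh."""
        self.blocklist = set(new_list)

    def resolve(self, domain):
        if domain in self.blocklist:
            return SINKHOLE_IP              # user is diverted; site unreachable
        return self.upstream.get(domain)    # normal resolution


resolver = BlockingResolver(
    upstream={"example.org": "93.184.216.34", "bad.example": "203.0.113.5"},
    blocklist=["bad.example"],
)
print(resolver.resolve("example.org"))   # normal lookup succeeds
print(resolver.resolve("bad.example"))   # listed domain -> sinkhole address
resolver.update_blocklist([])            # URL removed from the weekly list
print(resolver.resolve("bad.example"))   # reachable again after delisting
```

This also makes the circumvention point raised later in the session concrete: the upstream record is untouched, so anyone using a resolver without the list still reaches the site, which is why blocking is a last resort after take-down attempts.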

Moderator – Afrooz Kaviani Johnson:
Thank you so much, Mr. Tateishi. That was very helpful to have the images; I appreciate your effort in those bespoke images. And I think you raise some important points that we may get to discuss as we go on, looking at the various rights that are implicated, making sure that we advance human rights and children's rights holistically, and ensuring that every child has the right to protection from sexual abuse and exploitation. So from Japan we're now going to move to Ghana, and I'm so delighted to have Dr. Albert Antwi-Boasiako, who is the Director-General of Ghana's Cyber Security Authority. So, Director-General, as the Cyber Security Authority pioneers its role, because it is relatively new in the scheme of things, we are interested to hear how Ghana is championing industry responsibility and fostering innovation to tackle this issue of online child sexual abuse and exploitation.

Albert Antwi-Boasiako:
Thank you, colleagues, speakers, and everybody here, and hopefully online, for the invitation to contribute to the discussion. I'm a government leader, and I'm impressed by how far Australia has advanced, but as a government lead for close to seven years I think there are different maturity levels, and I want to speak from the developmental context. First things first is very important: if you jump ahead without doing the first things, you're likely to create problems. So the first thing is to have a commonality, and I think that's one of the things I heard here: whether you're starting out or already advanced, a baseline requirement is key. But one also needs to appreciate a bit of context, the developing country perspective. Sometimes my Western colleagues tell me, "but you have this law," and I say, well, the culture of other people is different. You have an interest in making progress, but there are two aspects: the technical competency or capacity of the host country, and other factors. In the early part of this job, as national adviser to government before I was appointed Director-General, I realized there are other factors that affect enforcing certain measures. In fact, we wrestled with a lot of ideas with my partners. Once you mention regulation, my Western colleagues said no regulation, especially my US folks. But I think over the past few years we've matured on the matter. There is a sort of consensus now that self-regulation alone cannot keep our children safe. By and large, some of my colleagues have shifted a little bit. Equally, I didn't go to the other extreme, because there is that concern.
If you over-regulate, you are also going to kill innovation, especially from a private sector perspective. So Ghana came up with a strategy we call collaborative regulation. Is it regulation? Yes, because without regulation I don't think we'll be able to achieve this. But how is it unique? I realized that it's not just government making a law and expecting industry to comply. Sometimes, and I can confirm this, the industry that we expect to follow certain best practices and implement certain measures does not itself appreciate the risks our children are facing, whether from the content they access, the conduct they engage in, or the contact that they establish. When you have this realization, you will be very careful about how you start your regulatory process. So, taking inspiration from Julie and the basic online safety expectations, we had to pass the law, and the law incorporated blocking, taking down content, and filtering. It was quite difficult, because of course there was suspicion from civil society. Again, we had to sit together to debate, and eventually Section 87 of our Cybersecurity Act made provisions to compel industry to act in a manner that protects children on the internet. But that is just a basic framework. My colleagues from common law countries will appreciate that alongside the primary law you need a legislative instrument, an L.I., to effectively and practically operationalize it. And Afrooz, we're grateful; we had to invite you yourself. We opened up, not just to industry but to international partners. Afrooz visited Ghana for the first time to take part in a public consultation to formulate the specific mechanism by which industry plays its part, and I think she saw that the industry is sitting together with us; in fact, they are making suggestions. And as I sit here, I can mention that Ghana is active.
The first active private sector player is the Ghana Chamber of Telecommunications; arguably that is the most important industry body in this space, and they have been actively involved even in developing the L.I. That is what I refer to as collaborative regulation: because if we are doing this together, industry loses the moral ground to say it cannot comply. Of course, that doesn't mean those are the only tools. Ghana's law incorporates sanctions, both administrative and criminal. Of course, we needed to fund cybersecurity, and in the developing context you don't just leave that out, so we incorporated it: if you do not comply, you are sanctioned, and telecommunications firms have money, so you pay, and that is used to finance cybersecurity. So we have these tools available in our law. Nonetheless, at the core, what I wanted to share as a model from our perspective is this collaborative approach: you engage with industry because you need to build understanding. The concept of regulation in this age is not like in our context of the headmaster telling the student, "go and do it." We need to engage. And I think it has been successful: even at the governance level of my Authority, of 11 board members, three are from industry. That approach has worked, and other international practices, such as the Guidelines for Industry on Child Online Protection by the ITU and UNICEF, have been incorporated into the L.I. as best practices. But currently what we're doing most is intensifying awareness creation. The L.I. is in process, because that is really what is going to operationalize the industry's obligations and commitment. But I don't think we will achieve much without really raising awareness among industry players that these are the risks; that's the reason why you need to comply if you need to take down content.
This is why you need to comply with the law if you need to block certain content as far as the protection of children is concerned. So, in a nutshell, ours is a developing situation, I must admit, and ours is collaborative regulation, because I think that is the best approach: it's not really government just giving instructions to industry; it doesn't work like that. If you have a case, you discuss, you argue at the table, and that's what Ghana has been able to use to get industry sitting at the table. Some of our international partners who visit see the discussions: it's open and transparent; there are risks, the government has to lead, and industry needs to get on board. But we do that by way of talking. Thank you.

Moderator – Afrooz Kaviani Johnson:
Thank you. Thank you, Director-General. No, that’s really fascinating, and indeed, the purpose of this whole discussion is that exchange of experiences, because there are very different approaches, different contexts, but what I really heard from you was going along that journey together with industry and looking at what was fit for purpose in your context, and really moving from just what is on paper to practice, and the best way to do that is bringing industry along with you. Now I’m really pleased to introduce Mr. Dunstan Allison-Hope, who’s the Vice President for Human Rights at BSR. Now, as I mentioned just earlier, this issue of online child sexual abuse and exploitation, it’s a human rights issue, it’s a children’s rights issue, and we do know that there are various tools in the human rights suite of tools, including human rights due diligence, including impact assessments, which are conducted by companies, and these can be key instruments in advancing responsible business conduct. So the question for you is, what does robust human rights due diligence entail, and how can it play a role in addressing this particular issue of online child sexual abuse and exploitation? Thanks. Great. Well, first of all, thank you

Dunstan Allison-Hope:
for the invitation to speak. Much appreciated. I’d love an invitation to Ghana as well, if that’s forthcoming. That was quick. So the main purpose of my comments today is to share how human rights due diligence, based on the UN guiding principles on business and human rights, can form an essential part of company efforts to respect child rights and to address online child sexual exploitation and abuse. I have really two main thoughts to share. The first thought is around the value of human rights due diligence, and the second is about some regulatory trends that are going to transform the landscape of human rights due diligence that I think it’s important to think about. So for context, the technology and human rights team at BSR has now conducted well over 100 different human rights assessments with technology companies. They come in a wide variety of different shapes and sizes. Sometimes it’s new products, sometimes it’s content policy, sometimes it’s market entry, market exit as well. They come in lots of different shapes and sizes. And in doing those assessments, I think we’ve experienced three main benefits of taking the human rights-based approach that you mentioned. So the first is the systematic review of impacts across all international human rights instruments, including all rights in the Convention on the Rights of the Child. So in a child rights context, that forces us to consider rights such as bodily security, freedom of expression, privacy, education, access to culture, and non-discrimination. It forces us to consider all of these rights holistically and to consider the relationship between them. So these rights are interdependent. Sometimes there’s tension between them. Sometimes the fulfillment of one right enables the fulfillment of other rights. So one clear benefit has been to take that holistic approach. 
The second is that a human rights-based approach requires us to give special consideration to those at greatest risk of becoming vulnerable, which clearly includes children. So this means that a robust human rights assessment would need to find ways to consider the best interests of the child. The third is that the UN Guiding Principles provide a framework for appropriate action to address adverse human rights impacts. And one thing that we've really noted in the technology industry is that appropriate action may vary considerably according to where in the technology stack a company sits. The UN Guiding Principles have been written for all companies, in all industries, in all countries, of all sizes. They apply to everybody, but that forces us to think through how you apply them in the context of the company that you're working with. Now, until now, everything I've mentioned, all this human rights due diligence, has mainly been a voluntary activity by companies. It is about to become much more mandatory, with some very important implications. And this is my second point, and I'm going to share a long list with you in slide form, too. I started writing this long list, and I thought actually putting it on a slide might be helpful. So there is a very long list of things that companies are now having to respond to. We have the European Union Corporate Sustainability Due Diligence Directive, which is going to require all companies doing business in Europe, not just European companies, to undertake human rights due diligence. The Corporate Sustainability Reporting Directive will require all companies doing business in Europe to report material topics informed by the outcome of human rights due diligence. People often think of this as a reporting directive, which it is, but it has this really important phrase, "informed by the outcome of human rights due diligence," in it.
And we've mentioned already the EU Digital Services Act, which requires large online platforms and search engines to assess their impacts on fundamental rights, and it specifically calls out child rights as something to be assessed. We have the UK Online Safety Bill, which requires social media companies to assess content that may be harmful or age-inappropriate for children. We have the EU AI Act, which is still being debated as we speak, but essentially it includes the EU Charter of Fundamental Rights as the basis for understanding risk. In Japan, we have the Guidelines on Respecting Human Rights in Responsible Supply Chains. So if you put yourself in the shoes of a company, that's a lot to take on in one go. And what we've noted, and what we advise and talk to companies about a lot, is that throughout these regulations, the human rights assessment requirements they are based on are very similar to the UN Guiding Principles on Business and Human Rights. So our position has been: if you want to prepare yourself to comply not just with the letter of these laws but with their spirit, the outcomes they are seeking to achieve, taking an approach based on the UN Guiding Principles is going to get you there, and is going to get you to the right place with not just one of these regulations but all of them. My point here is quite a simple one: the rights of children, including efforts to address online sexual exploitation and abuse, should be fully embedded into these broader methods of human rights due diligence. We need to make sure that the assessment of risks to the rights of children is fully embedded into these broader methods. So that could mean, for example, child rights impact assessments being a modular part of much broader human rights due diligence.
It might mean making sure that children, or those with insight into the best interests of the child, are meaningfully consulted and included in the process of undertaking human rights due diligence. There's a lot we can unpack there, but my advice is to invest a lot of effort and thought in these processes. This trend toward mandatory human rights due diligence is, I think, a massive regulatory and cultural shift for companies, and one we'd be well advised to harness for the child rights outcomes that we want to see. I am reasonably optimistic about all this, with one caveat: you'll notice the European Union and the UK feature very strongly on this list, and I do fear that so much time and attention goes toward the European Union and the United Kingdom that it takes time and attention away from places where human rights risks may be more prominent or more severe. So one flag that we're raising is to make sure that companies take global approaches while complying with these quite regional laws and regulations. I'll stop

Moderator – Afrooz Kaviani Johnson:
there. Fantastic. Thank you so much. Another very impressive effort of condensing a lot of information for us in that short time. Thank you, Dunstan. That was fantastic. A lot of food for thought, and it's really a timely discussion, because of this massive shift in the global landscape and, at the same time, these massive child rights and child protection challenges that we're facing. So, online participants and people in the room, we now have a few moments for questions. We have a microphone behind us here, and we also have Josie monitoring the chat there. I'm not sure if there are any questions. Please, if you could come up to the mic and put your question. We can take a few and then we can open to the panel to discuss. Thank you so much for this

Audience:
great discussion and presentations. My name is Yulia; I work for UNESCO, and I would like to bring up a rather challenging topic and a somewhat provocative question. We are talking here about protection and safety, which is of course key to children's presence online, but at the same time we must consider children's right of access to information, and this becomes more pressing when we are talking about, for instance, sexuality education. It is easy to ban all content on sexuality online, but at the same time children have the right to access correct, scientific information and content on sexuality. I wonder what your thoughts or ideas are on how we might proceed with these challenging intersections between safety and the right to access this information. Thanks a lot. I might just take a couple of questions

Moderator – Afrooz Kaviani Johnson:
so just in the interest of time and then really it will be open to the panel to answer so please

Audience:
go ahead. Thank you, my name is Jutta Kroll, from the German Digital Opportunities Foundation. Before I put my question, and partly in answer to the previous one, I would refer to General Comment No. 25 to the UN Convention on the Rights of the Child. I've brought some copies for those who have not come across it, and it will probably answer some of the questions that have been put. I have a question for the first speaker; I apologize that I came in a bit late, but what I heard about the new law and regulation was that it also covers raising awareness among parents and children with regard to their protection. I wanted to know whether there is a relation, or a balance, between raising awareness and education on the one hand and the obligations of service providers on the other. And my second question goes to this colleague: you've been talking about DNS blocking, but we would also need removal of the content, not only blocking, because otherwise it would still be there, and those who are looking for that content might find ways to circumvent the blocking you've been talking about. Could you explain that maybe a little more deeply? Thank you so much.

Moderator – Afrooz Kaviani Johnson:
Okay, it looks like we don’t have any other questions from the floor for now. So there’s three main questions, if I can summarize it for our panel, and then we can just pass the mic along, and if you would like to respond to one or all. So the first one is the really important balancing of the children’s rights to access information, and particularly sexual and reproductive health information, and getting that balance right when we’re talking about harmful content and restricting access, and making sure that that doesn’t inadvertently restrict children’s other rights. The second one, I think, may have been for our second speaker, the Japanese law. Yeah, so just understanding more about the awareness-raising content. And then the third one, which you’ve addressed it to the Japanese private sector, but perhaps other jurisdictions might like to share, you know, how they’re making sure that it’s not only about blocking, but also taking down, and also responding, you know, safeguarding children as well. So I think there is that whole system. So any volunteers from the panel? Commissioner?

Julie Inman Grant:
I was just saying earlier that clearly the internet was not built for children, although one-third of the users of the internet are children, and their online lives are inextricably intertwined with their everyday lives. It's their schoolroom. It's their friendship circle. It's their place for learning, connecting, creating, and exploring, whether that's exploring their sexuality or affinity groups. And we need to make sure that as we're trying to make this safer, we're mitigating the harms while still harnessing the benefits as well. We came up against this when we did a two-year consultation on age verification, which was probably one of the most difficult processes I've gone through, because there's just so much polarization. One of the things we were so conscious of was the ability of marginalized communities, and particularly young people, to do that exploration. And age verification doesn't mean restricting their access to everything. Again, I think there are a lot of things that companies can do beyond age gating. I think Roblox is trialing some age verification; Tinder just announced they're doing so in Australia, as is Instagram. So it's good to see that companies are starting to think about their responsibility to make sure that children on their services are 13 and above, and that they're making meaningful checks. And I can say, from our experience of youth-based cyberbullying, what we saw post-COVID is that because parents were so much more permissive with technology use when we were locked down, we now have kids of eight or nine reporting cyberbullying to us, whereas prior to COVID the average age was about 14. Once you are permissive with technology use, you really cannot ratchet that back. So I would just say, I'm the Australian regulator, and yes, we have powers.
But we have a model where we talk about the three Ps: prevention, protection, and proactive and systemic change. You've got to prevent the harms from happening in the first place by having fundamental research, by understanding how harms manifest against different vulnerable communities in different ways, and then co-designing solutions with those communities, doing this with communities rather than to communities. I think I heard Albert say, and we struggle with this too, that one of the biggest challenges is raising awareness and encouraging young people to engage in help-seeking behaviors. And I'd say parents are the hardest cohort to reach. So all of these things are interrelated. If they weren't hard, we would have nailed this

Dunstan Allison-Hope:
already, but they are. I just have a comment on, I think, the first question, maybe the second one. It's a great question because it enables me to say that a human rights-based approach is designed to achieve precisely that. A couple of things to say. First of all, when we do human rights assessments, it is quite typical for child sexual exploitation and abuse to rise to the surface as one of the most severe human rights risks that companies need to address. So that risk tends to come up as one of the top-priority risks to address, based on the criteria the UNGPs set out. However, we do take this holistic approach; we consider the relationship between different human rights. When does the fulfillment of one human right enable the fulfillment of other rights? When does a violation of one right, like the right to privacy, present risks to other human rights, like the ability to access information or express yourself freely? When there are tensions between rights, how do you address them? How do you apply human rights principles like legitimacy, necessity, proportionality, and non-discrimination to decisions about when and how to restrict access to content? And one idea to throw out into the room that came to me when the question was asked: one of the interesting developments in the business and human rights field and the tech industry over recent years has been the Meta Oversight Board, which publishes decisions on particular cases that come to it and makes recommendations for what Meta should do to address whatever failings it has identified. I read a lot of those cases; they're very long, and each includes a segment that undertakes a human rights analysis of that particular case. And the Oversight Board has the time and space to do that, because they're not making rapid decisions like Meta does; they have weeks and months to do this analysis.
And I find it a really helpful source of insight. I’m not sure that there have been many child rights-related cases before the Oversight Board, but some place where we can do that type of thinking to unpack tensions between rights, the relationship between rights in a child rights context, I think would be really useful, because we come across this all the time when we do human rights assessments. Dilemmas, uncertainties, we’re not sure what recommendations we should be making sometimes. And I’d love space for that thinking to take place.

Julie Inman Grant:
Can I make a comment on that? I'm glad you're reading the cases of the Meta Oversight Board. It raises an interesting issue, because there's a lot of discussion now about multi-stakeholder regulation of the platforms. I believe that in the last transparency report, the Meta Oversight Board received about 1.3 million requests to review content moderation decisions, and because these are such long, drawn-out decisions, they were able to cover 12 in 12 months. Now, we're a very small agency, but we've dealt with tens of thousands of investigations; you're just able to be a lot more nimble. So I think there's a really important role for that, to interrogate some of those more difficult and contextual issues; it's always the gray area that's going to be challenging. But I'm also not sure how many of the Oversight Board's recommendations Meta has actually accepted; you might have a better sense.

Moderator – Afrooz Kaviani Johnson:
I'm just wondering if you would like to respond to the question around blocking and take-down.

Toshiyuki Tateishi:
So, first of all, we try to take down the content. Many times we try, and sometimes, before we do the blocking, we ask others, even in foreign countries; sometimes the police or other government offices make the request as well. If, after we have asked, there is no reply, then the last measure is to block the sexual abuse content.

Moderator – Afrooz Kaviani Johnson:
Thank you. And I'm not sure if Mr. Suzuki is still online, because there was a question about explaining more about the provision in the law on raising awareness, and whether that is broad, covering all of children's rights in relation to the digital environment. Would Suzuki-san like to answer that question, or we can move on.

Tatsuya Suzuki:
[The beginning of the answer was unclear in interpretation.] In Japan, there are two main patterns in which children suffer sexual victimization via the internet. One is being deceived into sending nude photos of themselves; this probably happens in other countries as well. The other, which may be somewhat particular to Japan, is meeting someone they got to know online in the real world and suffering sexual abuse there. Education about these ways of using the internet and their dangers is carried out not only by the Children and Families Agency but also by the National Police Agency, the Ministry of Education, Culture, Sports, Science and Technology, and the Ministry of Internal Affairs and Communications. The other element is parents and guardians. What we ask of them is parental control: every child is at a different developmental stage, so parents and children should talk things through together and decide what to do. As one teacher often says, children tend to keep the rules they have decided on through their own discussion. That is what we are asking for.

Moderator – Afrooz Kaviani Johnson:
Thank you so much, Mr. Suzuki, and thank you again to all our online and in-person participants. We have come to the end of our time together, though I think this is a topic that deserves a lot more time because, as was just mentioned, there is a lot of complexity to this. There are very challenging dilemmas that regulators are dealing with, that companies are dealing with, that civil society and people working on these issues are dealing with. So it is something on which I hope we can keep up the exchange. I hope that everyone found that a fruitful exercise, at least a start of the discussion. We’re meant to capture key takeaways and key actions from each of the sessions. I don’t know that they’re fully formulated yet, but certainly I think I’ve taken away that there is this need to continue the learning and the exchange, that there is this need to ensure that these solutions are consultative, that everyone is involved in the journey, particularly companies when we’re talking about regulation, co-regulation or collaborative regulation as Ghana is doing. Obviously tech companies are vital stakeholders in this effort to protect children from online abuse, but we also see this massive global landscape shifting, I think I really took that away from your points, Dunstan, and just this opportunity to fully embed protection of children from online sexual abuse and exploitation into these broader methods that are becoming increasingly mandatory. So thank you to our esteemed panellists. The Commissioner had to dash away to catch her Shinkansen to Tokyo, so she sends her apologies, but a huge thank you to our panellists, a huge thank you to our interpreters and everyone supporting today. Thank you.

Albert Antwi-Boasiako: speech speed 179 words per minute; speech length 1310 words; speech time 438 secs

Audience: speech speed 164 words per minute; speech length 405 words; speech time 148 secs

Dunstan Allison-Hope: speech speed 176 words per minute; speech length 1708 words; speech time 583 secs

Julie Inman Grant: speech speed 165 words per minute; speech length 1803 words; speech time 656 secs

Moderator – Afrooz Kaviani Johnson: speech speed 150 words per minute; speech length 2003 words; speech time 800 secs

Tatsuya Suzuki: speech speed 72 words per minute; speech length 531 words; speech time 445 secs

Toshiyuki Tateishi: speech speed 124 words per minute; speech length 761 words; speech time 368 secs

Broadband from Space! Can it close the Digital Divide? | IGF 2023 WS #468

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Berna Gur

Recent advancements in space-based technologies, particularly megaconstellations like Starlink, have emerged as a promising solution for providing broadband services on a global scale. These advancements have significantly improved the capabilities of space-based technologies, making it feasible to deliver high-speed internet connectivity to even the most remote areas worldwide. Starlink, a megaconstellation consisting of thousands of small satellites, has the potential to revolutionize internet access by providing global coverage.

The global coordination of frequency spectrum is crucial for ensuring uninterrupted provision of all wireless services. The frequency spectrum is a limited natural resource that must be carefully managed to avoid interference and disruption of various wireless services. The International Telecommunications Union (ITU) plays a vital role in regulating the global coordination of frequency spectrum. It ensures that the allocation and usage of frequency spectrum are properly coordinated to prevent any conflicts or disruptions.

To fully leverage the benefits of space-based technologies and ensure effective implementation, countries need to re-evaluate and update their domestic regulations related to licensing and authorising satellite broadband services. Laws and regulations play a crucial role in the successful integration of new technological advancements. Therefore, countries must adjust their regulations according to the unique circumstances and requirements presented by space-based technologies. By doing so, they can create a conducive environment for the deployment of satellite broadband services and facilitate their widespread adoption.

Furthermore, active participation in international decision-making processes, such as the ITU and the UN Committee on Peaceful Uses of Outer Space, is essential. Engaging in these forums allows countries to have a voice and contribute to the development of policies and regulations that govern space-based technologies. Active participation enhances the chances of achieving desired outcomes and ensures that countries’ perspectives and interests are well-represented. Moreover, awareness of international space law is crucial for making informed decisions and effectively navigating the complex landscape of space-based technologies.

It is important to note that the provision of satellite services in a specific country is subject to that country’s laws and regulations. These laws, often referred to as landing rights, determine the terms and conditions under which satellite services can operate within a country. Each country has the autonomy to decide its own regulations for satellite services, taking into account its unique needs and priorities.

In conclusion, recent advancements in space-based technologies, such as megaconstellations like Starlink, offer a promising solution for providing broadband services globally. To fully harness the potential of these technologies, countries need to re-evaluate and update their domestic regulations related to licensing and authorising satellite broadband services. Active participation in international decision-making processes, such as the ITU and the UN Committee on Peaceful Uses of Outer Space, is crucial for shaping policies and regulations that support the effective deployment of these technologies. Additionally, it is important for countries to be aware of international space law and its implications to make informed decisions. By doing so, countries can unlock the benefits of space-based technologies and ensure an uninterrupted provision of wireless services on a global scale.

Stephen Weiber

Libraries have emerged as vital institutions at the intersection of digital connectivity and meaningful impact. Despite being rooted in the pre-digital era, they have evolved to embrace the transformative power of technology. Libraries now incorporate robotics, 3D printing, and Starlink connections, enabling individuals to engage with cutting-edge innovations.

While libraries provide essential services, their focus extends beyond mere provision. Instead, libraries seek to make a tangible difference in their communities. They are conscious of their role in fostering education and actively contribute to achieving Sustainable Development Goal 4: Quality Education. By offering access to technology and knowledge resources, libraries empower individuals to enhance their skills and pursue lifelong learning.

Moreover, libraries contribute to Sustainable Development Goal 9: Industry, Innovation, and Infrastructure. They recognize the significance of meaningful connectivity and its impact on individuals’ lives. Libraries have long understood the transformative potential of the internet and have diligently worked towards improving people’s lives within the local context. Their success lies not just in the availability of digital infrastructure but in the measurable improvement in the quality of life for individuals accessing these resources.

Despite the rise of digital infrastructures, libraries continue to hold distinct advantages. Contrary to the assumption that internet cafes and telecenters would replace libraries, this has not been the case. Libraries offer unique value propositions that set them apart. They go beyond providing connectivity by offering diverse avenues for engagement, learning, and social interaction. Libraries serve as vibrant community hubs and spaces that foster a sense of belonging.

In conclusion, libraries are indispensable in bridging the gap between digital connectivity and meaningful impact. Their evolution has enabled them to integrate technology and cater to the changing needs of their communities. Libraries are not simply service providers; they are catalysts for transformation, driving positive change, and improving lives. With their ongoing commitment to innovation and a community-centric approach, libraries will continue to be vital pillars in the digital age.

Dan York

The use of Low Earth Orbit (LEO) satellites for high-speed, low-latency internet connectivity, particularly in the context of video communications, is seen as a positive development. LEO satellites operate at a height of less than 2,000 km, enabling quick packet transfers and offering lower latency times compared to geosynchronous satellites. Notably, SpaceX’s Starlink project leverages LEO satellites, further supporting the viability and potential of this technology.

However, one of the major challenges currently faced is the large-scale launch of LEO satellites. SpaceX has been able to launch seven rockets each month, but there is uncertainty whether smaller launch providers can operate at this scale. Overcoming this challenge is crucial for the successful implementation of LEO satellite technology.

Critical questions are also being raised regarding the use of LEO satellites for global internet coverage. Technical feasibility, environmental impact, and effects on astronomy are all areas of concern. The environmental impact of satellites, both during their launch and disposal in the upper atmosphere, remains unclear. Additionally, large satellite constellations may cause issues for astronomical observations. These concerns highlight the need for careful examination and consideration of the impact and trade-offs associated with using LEO satellites for global connectivity.

While new connectivity options are emerging, such as OneWeb and Amazon’s plans for global coverage, at present, Starlink remains the only option for this kind of high-speed, low-latency connectivity. The expansion of these connectivity solutions presents complex challenges due to legal and regulatory considerations. Each country has its own regulatory rules, and providers need to negotiate with each country’s regulators. Furthermore, conflicting frequency usage can prevent some countries from utilizing these systems. The deployment of these solutions requires cooperation and interoperability among different space-based providers to ensure a seamless and efficient global coverage.

Despite these complexities, there is support for exploring emerging technologies in the field of connectivity. Dan believes that, despite the challenges, the benefits provided by LEO satellites and other technologies outweigh the difficulties encountered.

LEO deployment is viewed as critical because with proper permissions and power, it can be quickly set up anywhere, making it highly adaptable. Additionally, LEO connectivity is seen as complementary to existing infrastructure and can help build digital skills until terrestrial connectivity reaches a particular area.

Concerns are being raised about the environmental and carbon costs associated with launching systems for global connectivity. A recent paper analyzing the carbon costs of launches highlights the trade-off between carbon cost and global connectivity. The sustainability and control of LEO constellations, mainly run by commercial entities owned by billionaires, are also being questioned. The need for continuous satellite launches to maintain the constellations raises concerns about the long-term sustainability of this approach.

In conclusion, the use of LEO satellites for high-speed, low-latency internet connectivity has the potential to revolutionize global connectivity. However, challenges related to large-scale launch, technical feasibility, environmental impact, and legal considerations must be carefully addressed. Cooperation and interoperability among space-based providers are key factors for success. Despite concerns about the environmental and carbon costs, there is support for exploring emerging technologies in this field. It is critical to study and understand the opportunities and trade-offs associated with these technologies to ensure their responsible and sustainable implementation.

Moderator

The discussion highlighted the potential of satellite technology, specifically low-Earth orbit satellites like Starlink, in bridging the digital divide and providing global broadband services. These satellites are capable of connecting anyone, anywhere with high-performance, robust broadband, which has the potential to close the digital divide. This new era of satellite communications has been seen as a game changer, particularly in regions where internet access is limited or non-existent.

However, while satellite technology offers many benefits, there are concerns that need to be addressed. One of the main concerns is the high cost of satellite internet. The cost of using services like Starlink can be prohibitive for certain communities, making it challenging for them to adopt this technology. Additionally, questions have been raised about the environmental impact of satellite systems. Researchers have expressed concerns about the sustainability of Starlink and the potential impact of launching thousands of satellites.

Another issue that emerged from the discussion is the potential misuse of satellite technology. In the case of Starlink in Brazil, it was revealed that the service was being used to support illegal activities such as gold mining and drug trafficking, which goes against its original intent of providing connectivity to remote schools. This highlights the importance of ensuring accountability and regulation of satellite activity.

Libraries were also mentioned as important community support centers that can play a role in bridging the digital divide. They can offer a range of value-adding services and help localize internet usage. Libraries have the potential to act as public interest locations within communities, and examples such as internet backpacks in Ghana utilizing libraries as centers for bringing people online were mentioned. Additionally, libraries can offer a variety of services, beyond just connectivity, and can act as a bridge from the availability of digital tools to their impact, achieving the desired change.

Throughout the discussion, it became apparent that monitoring and regulating satellite activity is essential. This includes tracking the advancements and issues with satellite technology, such as space junk and potential disruptions to astronomy. The audience emphasized the need for better coordination among customer countries for choosing satellite internet providers and ensuring a robust monitoring system.

In conclusion, satellite technology, particularly low-Earth orbit satellites like Starlink, has the potential to bridge the digital divide and provide global broadband services. However, there are challenges such as high costs, environmental impact, and potential misuse that need to be addressed. Libraries can also play a significant role in supporting communities and bridging the digital divide. It is crucial to monitor and regulate satellite activity to ensure accountability, better control, and informed public debates.

Nkem Osuigwe

Starlink Internet has revolutionised libraries in Nigeria, with its implementation in five urban areas including Lagos, Abuja, and Kaduna. This game-changing move has attracted a new audience to these libraries, thanks to the provision of fast, stable, and reliable internet services. Particularly, open knowledge enthusiasts and those interested in Open Educational Resources (OER) have greatly benefited from this influential addition.

The introduction of Starlink Internet has significantly enhanced the efficiency of users’ work within the libraries. Users have reported faster and more efficient work, thanks to the stable internet connection. This positive feedback highlights the immense impact of the fast internet provided by Starlink, which simplifies various online activities, including translation work on open platforms. Users have noticed that the internet does not slow down during use, making the translation process smoother and enhancing overall productivity.

Despite the notable advantages, several challenges need to be addressed to further develop and improve Starlink’s internet services. One major challenge is the weak signals experienced beyond a specific radius from the libraries, limiting the accessibility of the internet service. Additionally, the service becomes unavailable during power outages, further hindering consistent and uninterrupted internet access. Moreover, the limited operating hours of the libraries pose a constraint for individuals seeking to utilise the service outside of the designated time frame.

To tackle these challenges and improve the service, it is crucial to identify and study the usage trends and user demographics of the Starlink Internet service. Gaining a comprehensive understanding of users, including their age range, specific internet usage patterns, and overall internet needs, will enable service providers to enhance the service in a more targeted manner. Moreover, investigating the speed of the internet and potential drop points throughout the day is important. User feedback also plays a vital role in gathering insights and suggestions for improving the service.

In conclusion, the introduction of Starlink Internet in Nigerian libraries has had a significant positive impact. The fast and reliable internet connection has attracted new users, particularly those interested in open knowledge and OER. However, challenges such as weak signals, service unavailability during power outages, and limited operating hours need to be addressed. Therefore, identifying user demographics, studying usage patterns, and obtaining user feedback are critical steps toward enhancing the service and expanding its application.

Audience

The analysis focuses on two key areas of satellite internet: Leo Satellite Internet and Starlink. Leo Satellite Internet is seen as an essential solution to closing the growing digital divide. It allows for faster deployment compared to terrestrial or mobile infrastructure, making it an effective means of bridging the gap in internet access. However, concerns arise regarding the longevity and selection of Leo Satellite Internet service providers. Countries need to invest in hardware and establish institutions to support their services. The analysis suggests that countries should improve coordination to negotiate better conditions with Leo Satellite Internet providers and enhance their power as consumers.

Leo Satellite Internet’s ease of deployment is highlighted as an advantage. The Leo dish only requires power for providing internet access and can be quickly deployed anywhere. It can also help people develop digital skills and increase internet usage, which has positive implications for education and innovation.

However, concerns are raised about the simultaneous development of multiple infrastructures. Companies like SpaceX, OneWeb, and Amazon’s Kuiper are building their own systems, which lack cooperation and interoperability. It is suggested that more collaboration and standardization are needed to address efficiency and sustainability concerns.

The analysis indicates uncertainty regarding the use of Starlink for commercial or community networks. The rules and regulations surrounding Starlink’s licenses and reselling are unclear, causing uncertainty for potential users interested in wider network deployment.

Environmental and financial sustainability are also concerns. The current business model of Starlink, which requires renewing satellites every five years, raises environmental and economic concerns. The long-term environmental impacts of this process are worrisome, considering the urgent need for sustainable consumption and production. Additionally, doubts are expressed regarding the economic feasibility of Starlink’s large-scale satellite launches.

There is also concern about the regulation and accountability of satellite operators. The potential for individuals or entities to manipulate satellite services raises concerns about their misuse or exploitation.

Measurement Lab is mentioned as a valuable resource for monitoring internet performance, including satellite performance. It measures aspects such as interconnection points, speed, and quality of internet globally, providing the largest public dataset on internet performance.

Furthermore, doubts are raised about Starlink’s ability to effectively close the digital divide due to high costs. A Starlink unit draws 150 to 200 watts of power and carries a capital expenditure of 300 to 600 US dollars. Affordability and the ability of individuals and communities to sustain recurring payments for internet access are concerns.
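To put those figures in perspective, a brief sketch (an illustration added here, not from the session; the electricity tariff is an assumed value) converts a terminal’s continuous power draw into a monthly energy cost:

```python
def monthly_energy_cost(power_w: float, tariff_usd_per_kwh: float,
                        hours_per_month: float = 730.0) -> float:
    """Electricity cost of running a terminal continuously for one month."""
    kwh = power_w / 1000.0 * hours_per_month  # watts -> kWh over the month
    return kwh * tariff_usd_per_kwh

# A 200 W terminal at an assumed tariff of $0.15/kWh uses ~146 kWh/month,
# costing roughly $22 in electricity alone.
print(round(monthly_energy_cost(200, 0.15), 2))
```

On top of the 300 to 600 dollar hardware outlay and any subscription fee, recurring electricity costs of this order feed directly into the affordability concern the audience raised.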

The analysis also highlights the misuse of Starlink infrastructure to support illegal activities in the Amazon region, negatively affecting indigenous communities. Additionally, the unfulfilled promise of Starlink providing internet connectivity to schools in the Amazon region raises doubts about the company’s commitment to addressing educational needs in underserved areas.

In summary, the analysis provides an overview of the advantages, concerns, and uncertainties related to Leo Satellite Internet and Starlink. Leo Satellite Internet shows promise in bridging the digital divide with its fast deployment and potential for improving digital skills. However, concerns exist regarding the selection and longevity of service providers. Uncertainties also surround the use of Starlink for wider networks, environmental and financial sustainability, regulation and accountability, and the company’s commitment to fulfilling promises. Careful consideration and comprehensive planning are necessary for the development and deployment of satellite internet systems to ensure equitable and sustainable access to digital resources.

Session transcript

Moderator:
All right, are we good? Okay, we’re live. Good morning, afternoon, evening to everyone who’s here in person or online. This is workshop 468, Broadband from Space: Can It Close the Digital Divide? That’s our question. The setup for this is the idea that we’ve entered a new era of satellite communications. They’re not new satellites, they’ve been around for a long time, but types of satellites that have new capabilities, especially in low Earth orbit, we’ll hear about that shortly, and the possibility of satellites in multiple orbits coordinating to create new kinds of services. My name is Don Means, I’m director of the Gigabit Libraries Network. Each of our speakers will introduce themselves. Our time is short today, so we’re going to try to get through this pretty quickly and have time for open discussion. I just wanted to make a couple of points in the beginning here about barriers to adoption. The question that we’re posing here is, can satellites actually close the digital divide? Can they contribute to the solution to this longstanding problem that we’ve had? The reason we’ve had this problem is just the basic economics of infrastructure, which says the farther away you are from the core of any network, the more expensive it is to reach you, and you probably have less money to boot. So that’s why they’re still not participating in the global digital conversation. We’ve identified these three barriers as availability, affordability, and usability. Affordability, of course, is a difficult question if you approach it from the standpoint of how much a family can afford to spend every month for access. It depends on how the value is set. What can they gain from that? Does it change their economic calculation in the first place? Like buying a car because without it you can’t have the job, that kind of thing.
Usability is the most comprehensive or largest kind of topic here because it covers everything from skills to devices to an environment, but it’s absolutely critical to adoption. Availability is slightly different, because if you don’t have availability, then affordability and usability are moot questions. So that’s what’s interesting about satellites, especially low-Earth orbit satellites: they can connect anyone, anywhere with high-performance, robust broadband, low-latency, 100-megabit connections. There are lots of issues around this related to, well, I’m not going to get into all those, but there are. Hopefully we’ll get into those. So the goal here, at least from the Gigabit Libraries Network standpoint, is that this is a real opportunity to connect every community. Now, this is not connect every person or every household. That’s a dream, but it’s a reality that we could set up in every community. If we come up with a number, it’s 100,000, some number of communities, neighborhoods or small communities everywhere; that’s doable. And what do you have with that? What do you have with this community network that is basically no fee or low fee? Well, those are questions. But for us, this is a baseline standard of functionality to allow virtually everyone access, even if it’s not everything that everybody wants. It’s something that is there for everybody. So we’re going to hear more about what that means, what the implications of that are, and how the technology is built from our next speaker, who is Dan York with the Internet Society. Dan, welcome. Dan is also the co-coordinator of the session. Dan?

Dan York:
Thank you, Don. Welcome to everybody. I am delighted that we’re having this session, having this conversation. You should now be able to see my screen, correct? Okay. So my role here is to talk a bit about the technology and to help us understand this as we look at this discussion and this debate. As Don mentioned, satellites are not new for Internet access. We’ve had satellites in geosynchronous or geostationary orbit, and those are ones that are way out at 36,000 kilometers. They have the capability that they can basically be parked over one part of the earth, and you can have basically three of them and be able to get worldwide global connectivity. The challenge is they’re expensive. They’re typically the size of a large bus. They take years to create, millions upon millions of dollars, and they’re launched out into that distant orbit, which is great; it has provided Internet access all around the world. But it takes a long time for a packet to go from the earth out to that satellite and back. In networking terms, we talk about latency or the lag, the amount of delay, and that can be 600, 700 milliseconds, even a second. It would be impossible for me to come in over this Zoom connection with that. So the exciting part about why we’re here is this new generation of satellites that operate in low Earth orbit, which is at the opposite extreme, down underneath 2,000 kilometers, and medium Earth orbit, which is in between those two areas. And there are a couple of solutions. There’s one company, SES, which operates a network of satellites there, and you can have fewer; you only need maybe 11 or 20 satellites, but they’re in motion and they have longer latency times. But the excitement is all down in LEO, because now we have things that can have very quick access and low latency. So you can have maybe 40 milliseconds, 50 milliseconds, which is well within the range of things like video communication and pieces like that.
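The latency figures quoted here can be sanity-checked from geometry alone. The sketch below (an illustration added here, not part of the session) computes the minimum round-trip propagation delay for a ground-to-satellite-to-ground hop at the speed of light, ignoring processing, queuing, and routing delays:

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def min_round_trip_ms(altitude_km: float) -> float:
    """Best-case round-trip time for a bent-pipe hop: up to a satellite
    directly overhead, down to a ground station, and back the same way,
    i.e. four traversals of the altitude."""
    return 4 * altitude_km / C_KM_PER_S * 1000

print(f"GEO (35,786 km): {min_round_trip_ms(35_786):.0f} ms")
print(f"LEO (550 km): {min_round_trip_ms(550):.1f} ms")
```

Propagation alone gives roughly 480 ms for GEO and under 10 ms for a 550 km LEO shell; the higher figures quoted in the session include processing, routing, and off-vertical geometry.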
The challenge is that you need more satellites. This interest is coming about because we have this demand for these high-speed, low-latency connections. There’s also been this massive reduction in costs for satellite development. If you are watching this space, you can see that companies like SpaceX and Amazon are, in fact, mass-producing satellites. I think I saw a report on one of Amazon’s facilities; they’re able to turn satellites out daily from their factories. SpaceX is similar; they’re creating large numbers of these. And we’ve seen this massive reduction in the cost of launching, with SpaceX’s reusable rockets and pieces like that. There are three components, and Berna will get to this when she talks about the policy side. You have the satellite constellation, which is of course what we all know and talk about. You also have the thing on the ground. Now, the satellite industry calls this a user terminal or a ground terminal or something like that. For a consumer, we might talk about it as just the antenna or the dish or something like that. It’s a little different from the past. With traditional satellites, you had a fixed antenna that you put on the side of a house and pointed, because the satellite was always in one location. These antennas look more like a pizza box. They have electronics in them to be able to interact with multiple satellites, and they’re very different; they’re also packaged differently in different ways. And then you have these ground stations, which are also called gateways, and those are important pieces of how this all works. And let me just show a quick picture of how this works. In one way, your satellite connection goes up, bounces off a satellite and gets down to a ground station. In LEO environments, you’re actually probably interacting with at least one or two satellites.
The satellites are typically overhead for about five minutes, and so you have multiple satellites, and that’s part of what happens. Now, one interesting development that’s happened with LEOs, that’s made it even more interesting, is communication between satellites. Before, with traditional satellites, you always had to be in range of a ground station, and that meant you could only provide connectivity where you had ground stations within reach across the earth. Now the satellites are actually able to connect between themselves using what are called inter-satellite lasers, and this allows you to connect to a satellite, go across the Starlink constellation in this case, and then drop down to a ground station. SpaceX has pioneered this with Starlink, and all the others who are out there are looking at similar kinds of approaches, so it provides service to some interesting and very remote areas that are far away from where you might be able to have a ground station. The Internet Society did create a document about this. You can get it at internetsociety.org. It’s something you can look at that goes through a lot of these questions and things. I want to just touch on a couple before I pass it on here. Don mentioned the question around affordability. Can we actually make this affordable to everybody who needs it? Will it have the capacity to handle everybody’s devices? Because we want to go and connect everybody, and everybody has many devices. The big question from a technical point of view, quite honestly, right now in 2023, is that getting the satellites up there is one of the big challenges we have. It turns out that at this moment in time, SpaceX is the only launch provider that’s really operating at scale. There are a number of other launch providers working in this area, but they’re all caught in transitions right now between rockets.
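The roughly five-minute pass time mentioned above follows from orbital mechanics. This sketch (an illustration added here, using standard textbook constants, not material from the session) estimates a circular orbit’s period via Kepler’s third law and the maximum horizon-to-horizon pass duration for a ground observer:

```python
import math

MU_EARTH = 398_600.4418  # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0        # mean Earth radius, km

def orbital_period_min(altitude_km: float) -> float:
    """Period of a circular orbit, in minutes (Kepler's third law)."""
    a = R_EARTH + altitude_km  # orbital radius
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 60

def max_pass_min(altitude_km: float) -> float:
    """Upper bound on pass duration: the fraction of the orbit visible
    above a flat horizon for a directly-overhead pass, ignoring Earth's
    rotation and any minimum antenna elevation angle."""
    half_arc = math.acos(R_EARTH / (R_EARTH + altitude_km))
    return orbital_period_min(altitude_km) * half_arc / math.pi

print(f"550 km orbit: period {orbital_period_min(550):.1f} min, "
      f"max pass {max_pass_min(550):.1f} min")
```

At a ~550 km shell this gives a period of about 95 minutes and a horizon-to-horizon ceiling of about 12 minutes per pass; real passes are shorter because antennas need a minimum elevation above the horizon, which is consistent with the roughly five-minute figure.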
The United Launch Alliance, a traditional US provider that has been around for decades, is in the middle of going from the Atlas V to the Vulcan Centaur. Europe has the Ariane 5, but the Ariane 6 is delayed. All of these things have caused a delay in getting there. It should be temporary, but it is one of the challenges right now in getting these systems up. There are smaller launch providers, but one big question is whether they can launch at the necessary scale. To give you an illustration: in each of the past two months, SpaceX has launched seven rockets a month. This month, they've already launched two. They were supposed to launch one this morning, actually, but it got delayed because of high winds, and now they're trying to figure out when to reschedule. That gives you a sense of what it takes to launch at this level and provide this kind of support. Our paper outlines, and we have room here to talk about, some of these questions around security, privacy, and interoperability. Space debris is a big question. There are questions we don't have answers to yet. We don't know whether all of these different proposals can actually work. We're not clear on the environmental impact of launching all of these rockets, or of having these satellites burn up in the upper atmosphere. And there is a strong concern about the impact on astronomy, which we don't yet fully understand. The reason we need to be having this conversation here at IGF and in other venues is the activity coming in this space over the next few years. We're expecting to see Starlink complete its first phase, what it calls its generation one, and go on toward its next one. The first phase is about 4,400 satellites; the next shell, the next part of the constellation, has been approved for 7,500 and is going on toward ultimately around 30,000 satellites. OneWeb is now part of Eutelsat, so it's actually now Eutelsat OneWeb.
They've completed their initial group, but they're looking to go on toward building a second phase. Amazon's Project Kuiper just this past week successfully launched its first two satellites for demonstration; assuming all goes well, they're looking in 2024 to begin launching in earnest, growing to around 3,200 satellites over the next while. China, from what we can tell from the outside, is looking to build its own competitor to Starlink, which will be around 13,000 or 14,000 satellites. And the European Union is looking to create what it calls its IRIS² constellation. So the timing, the reason we need to be having these conversations, is to understand that over the next five years or so there's going to be a massive amount of capacity coming online up there. The opportunities are tremendous. It's conceivable, from the filings with the International Telecommunication Union, that we could see 40,000, 50,000, maybe even 60,000 or even 90,000 satellites. It's hard to know how many of these will actually make it into space and work. But it's a huge number of satellites. There's a huge opportunity, but there are also a lot of questions and things we need to understand around Don's points on affordability, availability, and also usability. So with that, I want to just say thank you, and I look forward to answering questions as we go through this more.
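[Editor's note: the "high-speed, low-latency" promise that runs through Dan's remarks has a physics-only floor that scales with orbital altitude. The figures below are idealized minimums, straight up-and-down paths with no processing, queuing, or routing, not measured performance of any provider.]

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def min_rtt_ms(altitude_m: float) -> float:
    """Idealized round-trip time: user -> satellite -> gateway and back,
    with both ground endpoints directly beneath the satellite
    (four one-way traversals of the altitude in total)."""
    return 4 * altitude_m / C * 1000

print(min_rtt_ms(550_000))     # LEO shell at 550 km: roughly 7 ms floor
print(min_rtt_ms(35_786_000))  # geostationary orbit: roughly 480 ms floor
```

Real-world LEO round trips land in the tens of milliseconds once slant paths and network processing are included, and GEO links typically run around 600 ms, but the order-of-magnitude gap between the two comes straight from this geometry.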

Moderator:
Thank you, Dan. Very nice summary there. So, Nkem Osuigwe. I never get that right, right, Nkem? But welcome. Nkem has been working on a project in Nigeria with libraries there. Nkem, please introduce yourself and tell us what you've been up to with these satellites. Unmute, Nkem.

Nkem Osuigwe:
Oh, apologies. Can you hear me now? Very much. So I am Nkem Osuigwe. I am a librarian. I work for the African Library and Information Associations and Institutions, AfLIA, with headquarters in Accra, Ghana. And when Don started talking about this satellite thing for libraries, it was like: is it possible? I don't know how it's going to happen. Can we afford it? Who can afford it in Africa? Through his engagements, we were able to get five libraries in Nigeria using Starlink: one in Lagos, in the city center, three in Abuja, and one in Kaduna. These are really urban areas. What I had envisaged was that maybe it could be possible in rural areas, but I think the rollout went first to urban areas. So we got those five around June; we took delivery around June 29 of this year, 2023. Initially, there were plenty of challenges in setting them up. There were issues. In fact, I took a picture at the library in Abuja that had complained they were not getting the internet after the unit was set up, and it was because of the trees that covered the dish, so to speak. But right now, all five of them are up and running. There are still some challenges, because they are saying that the coverage doesn't extend much outside of the libraries. I asked them to imagine a situation where something like, we hope not, COVID-19 happens again. What happens then? What happens when the library doors are closed? Can they still offer services, even if it's only internet services? So right now, the internet is strong, fast, and stable inside the libraries and in particular areas of the libraries, and outside a little bit, but further out the signals become weak. Now, we have asked the libraries to find out who will benefit most, or who can benefit most, from this free and fast internet.
And my idea was, you know, young people that are seeking employment opportunities, or want to learn digital skills, or want to access their assignments, or lifelong learners. But there was another critical group I didn't envisage, and when I found this out, it kind of made me elated: oh, so this is possible too. The open knowledge community in Nigeria. Who are these people? People like the Wikimedia user groups. We have quite a number of them in Nigeria. In Nigeria, we have more than 500 languages, and some of these language communities have their own user groups within Wikimedia. And why am I talking about them in particular? If you have ever edited any of the Wikimedia projects, you find that once you edit and you want to publish, it kind of hangs. But now, those of them that use the library, because we introduced the internet to them, say that when they go there to edit, whatever they did just goes through fast like that. It doesn't hang. It doesn't give them many issues. And we are also working with open-license and OER enthusiasts. Because we are beginning to realize that those resources, educational resources, stories, and so on, we hardly find in mother tongues. And considering the fact that we have so many languages in Nigeria, we are beginning to ask librarians and others to please use some open platforms where we have storybooks for children, to translate them into our local languages. And they've been using the internet quite a lot for that, right now on StoryWeaver, and we are building another collection on African Storybook. All these things are made possible because of this internet from Starlink that makes it easier to translate. Because when you are translating a story on these platforms and the internet slows down, you lose where you started, or you get tired.
But now, with that fast internet, it's really better for these five libraries in Nigeria that have free internet. And the National Library of Nigeria, which has it in Lagos, in Abuja, and in Kaduna, is saying that it's a game changer. Although the traditional library users, those we are sure will always use the library, don't make use of it so much, it's really the new people being attracted to the internet that are making use of it, especially, like I said, the Wikimedia user groups, the open-license enthusiasts, and those interested in OER and things like that. So that is what I can say now about what we are doing in Nigeria with this. We had a meeting on September 26th. Don was there most of the time; I wasn't there most of the time. But I spoke with them again this morning, and I told them there are things we need to do. What we need to do is find out: who are these people that are now using this internet? Because they are slightly different from our regular users. What is their age range? What do they do? What stories do they have about the use of this internet? What's the speed like for them? Does the speed drop at a particular point in the day, and so on? But, you know, because these are all government libraries, they work only Monday to Friday; they don't open on Saturdays. So we are trying to see how to get staff to run shifts so that they can open on Saturdays for others that need them on those days. Then also, once the light goes off, the router stops, and that's a problem. If the routers came with in-built inverters and batteries, maybe that would be better for us. Thank you very much, Don. Thank you, I'm done.

Moderator:
Thank you, Nkem. Thank you. Yes, there's a slide of the dish mounted on a mast to get elevation. Some flyers went out around Lagos and Abuja to show this new service was available, word of mouth is getting around, and people are interested and coming in. Stephen Wyber will talk to us next. Stephen's with the International Federation of Library Associations, a longtime associate, and we've talked about this hub concept for sharing. So Stephen, what makes the library the best hub?

Stephen Wyber:
Thank you. I think I'm going to come at this in a somewhat less technical way, given where I come from. One of the things that makes libraries interesting here is that we are a pre-digital public infrastructure: an infrastructure that was there ahead of the internet in order to help achieve goals in real life, to help people improve their lives through access to information, through access to knowledge. And so what libraries are doing is trying to apply digital tools, and make the most of the opportunities they offer, to those same goals of making a real difference. Something that really came out in what Nkem was saying is that what libraries are doing, by bringing in the Starlink connections, by drawing on satellite internet connections, is making the difference between availability and impact, as you talked about at the beginning. You can make things available, but how do you actually make that bridge from availability to full-on impact? That's what the libraries are doing. It's through some of the more basic things, like access being free for people who otherwise face limitations on access, but it's also through the fact that you have staff and a space that are dedicated to thinking through how we make a difference, not just providing something and seeing what happens. I think there are a number of characteristics of libraries, of their philosophy and modus operandi, that mean they're pretty well placed to do that, and some of these resonate quite well with the themes picked for this IGF. The talk about meaningful connectivity has been at the heart of the way libraries have worked with the internet for a very long time. The success of the internet is not measured in the number of people covered by a signal; the success of the internet is measured in how many people's lives are improved. There's a strong focus on it being rights-based: everyone has this right of access to information, to be able to use information to improve their lives. Beyond that, there's the role of libraries in localizing, in thinking about the context and what's going to work, building on their knowledge of their communities and really being responsive to their needs. I'd also highlight the position of libraries as a public-interest, known location within the community. There was a fantastic example of an internet backpack, another technology for bringing people online, in Ghana, and they used the library because it was the one place where all the local schools felt it was okay to come, okay to be the center, in order to get people online. Two other things to mention. I think there's the potential of libraries as a federator: they're not seen as wanting to throw their weight around or dominate things, but they have proven quite effective at taking all the different local actors and bringing them together to think about how, collectively, we can make the most of connectivity, and I think Nkem's examples of working with Wikimedia chapters and other groups are really powerful here. And then the final point I'd make is that libraries aren't just about connectivity.
At the risk of sounding rude: once upon a time there was the idea that internet cafes and telecenters would take over from libraries, but that's not been the case. We don't really talk about telecenters anymore, because they were a purely digital infrastructure, while with libraries you have other services, a whole variety of ways of adding value, and I think that's also probably what helps make it work. The examples that Don has supported in the US and the examples that Nkem's been involved in in Nigeria demonstrate that when you add connectivity to this mix, you can really make things happen, and you can really make sure, as I said, that we make that link between availability and impact.

Moderator:
Very good. The library is, for us anyway, the quintessential example of a community center, but if there is another center that the community supports and trusts, fine. It's just that the library offers a certain model for a range of services: support services, training, all the things Stephen mentioned that make it a go-to in the absence of an alternative. So now we'll hear from Berna Gur, who's with Queen Mary University of London, on the policy aspects of this, of which there are not a small number. And then we'll open it up to questions. Please send them in through the chat, or wait for the opportunity after Berna finishes. Thank you.

Berna Gur:
Go ahead. Thank you. It's a pleasure to be here with such distinguished panelists, so thank you for inviting me. My intervention will focus on the regulatory and policy aspects of satellite broadband, with a particular emphasis on addressing the global digital divide. As an international community, we strive to achieve a more equitable internet use that reduces global inequalities rather than increasing them. And it's only when connectivity becomes universal and meaningful that it can be utilized to create social and economic impact, which can lead to economic development and innovation. Now, meaningful connectivity has broader requirements, but the underlying communications infrastructure for universal access remains crucial. Recent advancements in space-based technologies, particularly megaconstellations like Starlink, offer a promising solution for providing broadband services globally with minimal additional terrestrial infrastructure. This technology does not have to be considered a standalone solution; it complements existing global communication infrastructures. However, its successful integration requires careful consideration of each country's unique circumstances and needs, as well as domestic laws and international law commitments. So first, policymakers and regulators should make informed decisions by consulting other stakeholders about the best way to utilize this technology. They can then intervene, using laws and regulations to maximize its benefits, as there is already an understanding of how satellite services are regulated at the national and international level. To start with, the provision of satellite services in a particular country is subject to that country's laws and regulations. These are called landing rights, and countries decide for themselves the terms of those landing rights. Satellite communications are not new, so regulations in this regard exist in almost all jurisdictions.
These regulations, however, at times need to be adjusted to the unique circumstances and requirements of technological advancements, and megaconstellations, I believe, qualify as such. Let's start with the ground station, the gateway. As Dan explained, satellite systems connect to the internet through these ground stations. At the moment, with the current technology, they need to be set up at least every 1,000 kilometers. For that, operators will need authorization from each relevant jurisdiction. Let's say your country has a smaller surface area and there's a ground station in one of your neighbors. Do you want to rely on your neighbor not to disrupt your services at any time? There may be other cybersecurity implications as well. In another example, let's say your country has a very large surface area: you may need regulators to facilitate the authorization of more than one ground station. Suppose you want to create a competitive environment by authorizing multiple satellite broadband companies. In that case, you will need to arrange the location of these ground stations to avoid them interfering with each other's services and with all other wireless services. The United Kingdom's regulatory agency, Ofcom, for example, has been very proactive in updating its regulations through frequent consultations with various stakeholders. Now, this brings us to the use of frequency spectrum. Satellites require the use of assigned frequency spectrum for their uplink and downlink connections with the user terminals and also with the ground stations. The frequency spectrum is a limited natural resource, the global coordination of which the ITU regulates. Alongside the requirements of the ITU, frequency spectrum assignment in a particular country is subject to that country's jurisdiction, but coordination at international and domestic levels is necessary for the uninterrupted provision of all wireless services, including mobile connectivity and satellites.
The coexistence of operators in proximity may require technical cooperation among them; a licensing requirement obliging licensees to cooperate with each other may be a good way to resolve this problem. The range of these licenses and authorizations also changes with the business model. For example, a direct-to-consumer model would likely require an internet service provider license, whereas OneWeb plans to provide backhaul services primarily to incumbent telecom operators; these will be subject to a different, narrower set of regulations. Another essential component of satellite systems is the user terminal. Satellite broadband companies need to export equipment to facilitate use of their services. The use and importation of user terminals are subject to licensing and import requirements of the national authorities. These terminals must be installed at the users' premises and are subject to standards and conformity assessment procedures by national regulatory agencies. These licenses are combined with the internet service provider license. From an international law perspective, the treaty obligations under the General Agreement on Tariffs and Trade, GATT for short, and the Information Technology Agreement, plus preferential trade agreements, can become relevant. Regulators will have to check their commitments under these agreements. The customs duties applicable to user terminals will be important to the affordability issue, especially for broadband companies planning to provide their services directly to consumers. Again, depending on the type of service, data governance regimes and privacy concerns may come into the picture. In short, what I'm saying is that most countries have international law commitments to consider when exercising their domestic regulatory powers.
It is an extensive subject, so if you find this topic interesting and want to learn more, you could take a look at our research project funded by the ISOC Foundation. There is a report on the global governance of satellite broadband, which covers the topics that I mentioned here, and there are short reports and papers for governments and civil society organizations, as well as links to academic papers, if you are interested. I want to conclude my intervention by referring to our policy paper. We advise developing countries to do the following. One, re-evaluate and update domestic regulations related to licensing and authorizing satellite broadband services; consider different business models; and act on cybersecurity and autonomy when deciding on gateways. Forming regional alliances can enhance the achievement of policy goals. Two, participate actively in the International Telecommunication Union, which manages limited natural resources like frequency spectrum and orbital resources. Members should engage in decision-making processes, especially at World Radiocommunication Conferences; if this is done through regional alliances, it will again enhance achieving desired outcomes. Three, in trade treaties, consider the interests and priorities associated with satellite broadband technology. And the last one, participate in the UN Committee on the Peaceful Uses of Outer Space and take advantage of capacity-building opportunities offered by the UN Office for Outer Space Affairs. Awareness of international space law is essential to making informed decisions. By holistically considering these actions, countries can ensure that their initiatives align with their sustainable development goals, technological autonomy, and cybersecurity considerations. Thank you.

Moderator:
Thank you, Berna. You've made the point that the system is incredibly complicated on so many levels: the intellectual property, the licensing, the technology, the multiple technologies, the ecosystem. We're really just at the beginning. And I wanted to make the point that this is not advocacy, if it sounds like that, for this new technology. It is, however, I would say, advocacy for exploring this technology. Everyone who's concerned with bridging this actual digital divide, infrastructure especially, or as a backup for people that are connected, should know firsthand how this technology works. It's still unfolding, the price has changed, and there are so many questions about it. Before I ask for any questions from our live audience, I want to exercise organizer's prerogative and give Dan a follow-up comment on Berna's presentation. Dan.

Dan York:
Thank you, Don, and thank you, Berna, that was great. I want to just emphasize one key point that you made: this is emerging. Two years ago, we didn't have this capability the way we do now. Right now, we have primarily Starlink as our only option for this kind of connectivity. OneWeb expects to go live with their systems to have connectivity by the end of 2023, and Amazon's looking to get theirs up by the end of 2024, with many more to follow. The important point is that this is an incredibly dynamic and evolving space. One other deployment challenge, just to build on what Berna said about each country: when I started this work, I naively thought that once these things were up there, you could bring a dish anywhere and it would just work. But the reality is that all of the legal conventions Berna mentioned are critical. One question we often see from people is: when will Starlink or OneWeb or anybody be available in my country? And it comes back to what Berna showed on that slide. In each and every country, the regulator needs to approve the spectrum being used for the uplink and the downlink between the systems, and also has to approve the user terminal equipment for distribution. So there's a lot of regulatory work. These providers, whether SpaceX or OneWeb or Amazon, have large teams of staff whose job is just to go and talk to the regulators of each country. Another critical issue is the sharing of spectrum. There are some countries that actually can't use any of these systems, because the frequencies that are needed are being used by that country's existing government systems. So there's a lot of complexity in turning it on for each individual country around the world. It's an exciting time, but there's a lot of complexity, and I see already some fantastic questions that people are asking.
So thank you all for paying attention and for being here.

Moderator:
Thank you, Dan. Complexity is the word, in spite of the fact that the point is actually plug and play. We have a question from the audience here. Please identify yourself and try to make it quick.

Audience:
Hello, my name is Uta Meier-Hahn. I'm with GIZ, the German Agency for International Cooperation. We also look at this topic, and I would like to make one very short remark and ask a question. The short remark adds to the very specific title of this session, how LEO satellite internet can contribute to closing the digital divide. One thing that I have not heard: when presented with the argument that LEO satellite internet is so expensive, so unknown and uncertain, with all the other limitations, and asked why we shouldn't instead be more active in supporting other kinds of infrastructure, terrestrial infrastructure, mobile infrastructure, then I think it is important to remember that the digital divide grows larger with time. So it's very important to start closing it quickly, and that is one of the qualities of LEO satellite internet: it allows deployment much more quickly than the build-out of terrestrial or mobile infrastructure. So it has a role in complementing these efforts. I feel that is good to add. My question relates to coordination, specifically among countries that inquire about the use of LEO satellite internet and try to choose providers. At this moment, there is not so much to choose from, but consider the future, and the past, which has shown that sometimes providers may not live long, while their services require those countries to make investments, both in hardware and in establishing the institutional setup, as you just said. There may be the assumption that the power of those countries as consumers vis-à-vis providers could be enlarged, to obtain good conditions, by coordinating. My question to the panel is how you would suggest improving coordination among those, if you will, customer countries. Thank you.

Moderator:
Thank you. We actually didn't mention politics among the various complexities, and certainly telecommunications is rife with politics. How you integrate that into the ecosystem is a challenge in every country as well, and the business model and the ecosystem impact are another TBD. Dan?

Dan York:
Yeah, I think one of the interesting aspects you mentioned was the quick deployment, and that is a critical element. You can drop a LEO dish anywhere, in a country that has granted that permission, and you can make it happen. As long as you can get power to it, you can get that access and be able to provide it. We see it certainly as a complement to existing infrastructure. We talk about low latency in a LEO connection, but obviously you can get even lower latency on fiber. If you can get a fiber connection, you can get a fully symmetric, even higher-speed, lower-latency link; that's great. But the challenge is you can't get fiber everywhere, and so there's a complementary aspect to this. There's also a really interesting aspect, which is that LEO connectivity can get out there first, generate interest and usage, and help people build the digital skills to use internet connectivity, so that when other terrestrial connectivity catches up, people are already excited and interested and want it. So sometimes, I mean, there is a tension between terrestrial providers and the newer space-based providers, but one interesting aspect is that they actually work really well together, and one can lead to the other and support both. One challenge that we have at the moment globally is that everybody's looking to build their own systems. SpaceX has its own ground stations, its own antennas, its own systems. OneWeb has its own antennas, its own ground stations. Amazon's Kuiper is doing the same thing. So we're building multiple infrastructures. We'd love it if they'd cooperate and interoperate more, but these are commercial entities competing in their own market space, so we'll have to see. As for the coordination question, Berna could speak to that better than I can.

Audience:
Dan, we've actually got quite a few questions and we're running low on time. There was one online; maybe you could address it. It's related to reselling. This has been a big question about Starlink and their licenses: the ability to use it as backhaul for commercial or even open community networks. And this is an open question.

Dan York:
And the question is really: if you get your Starlink connection, can you then resell it to other people? Can you do other things like that? This goes to what Berna mentioned about the different business models. Starlink right now is very focused on a direct-to-consumer, one-to-one relationship. So you get an antenna, and you use it for yourself or your library or your premises. OneWeb's business model is very much focused on reselling; their model is to work with partners, with people who can serve it onward. So they have a very different model there. Amazon has indicated that they're also going to do the direct-to-consumer model. But in all those cases, they're testing other models too. So I think we don't have a definitive answer right now. I know in some cases Starlink has allowed their connection to be used as backhaul into a community network, with the backhaul being the connection back to the rest of the internet. So they have allowed that. It's not clear yet whether that's broadly applicable or whether they're doing it on a case-by-case basis. But it's one area I think we just have to…

Moderator:
And actually, part of how that model evolves is by testing out those limits. What will they permit? The Starlink business model itself continues to evolve rapidly. They change their pricing structures and their licensing, and they go after different markets: the end-user consumer market, and now commercial use, ships, planes. So the business model is highly dynamic. I would just encourage everybody to try one out and see what you can do with it. We have, it looks like, three people here in the room. Could you each ask your question briefly, all at once, and we'll try to get to all of them.

Audience:
Sir, introduce yourself. Yeah, okay. My name is Nick Brock, from DW Akademie and from Rhizomatica. I wanted to make this longer, but I will make it really short. I think there's an underlying, undermining question, which is ecology. You said there are many doubts. My question, I will turn it around: firing satellites into space, satellites that have to be renewed every five years, why do you pose it as a rhetorical question whether this is sustainable or not? Please give me one argument why you think this is environmentally sustainable as a technology, because I don't see it. And we have to see, there is a competition here, the companies competing against each other; let's see what happens in five years. Do we have this time? And this question would come from my daughter, 11 years old, not having a cell phone, because we're all crazy and fucking up.

Moderator:
Yeah, and it’s an excellent question. And there’s lots more. Dan, let’s collect some questions. We’ll try to answer all of them. That’s a good one. Sounds good. Go ahead, please.

Audience:
Thank you so much. Okay, plus one to the environmental question. The other question is just about the regulation or observation of the people who are putting these satellites into space, especially when they’re able to turn them off or throttle or change the service provided, sort of at the whims of these individuals. How do we monitor that? What’s the accountability structure? And then also, just to raise a hand that we are in the middle of a pandemic: I’m from a nonprofit called Measurement Lab that measures the interconnection points, speed and quality of the internet around the world, and could help monitor the satellite space. So if anybody is curious about making that data public… Could you repeat that last one, please? We measure interconnection points, the speed and quality of the internet around the world, and we make that data public. It’s the largest public data set about internet performance that exists. So if people are interested in monitoring satellites, please come see me.

Moderator:
Okay, that’s a lot. Carlos.

Audience:
Hi, Carlos here. My point is around whether it can close the digital divide or not. I think there are very laudable efforts from libraries, where they have access to power, where they can afford to pay for the connectivity. But what happens in other communities, where a unit that draws, you know, 150 to 200 watts costs a CAPEX of, you know, 300 to 400, even 600 US dollars? I mean, how do you afford those costs when people don’t actually have money to get to the end of the month? How do you do the recurring payments? I mean, there are many questions to actually consider this as a business model for the communities. But then there is a real question that is being asked by some researchers, which is the sustainability of Starlink itself. If it needs to continue, I mean, the environmental question is a real question that I would like to get answered as well. But would Starlink continue being sustainable, or would it be Loon 2.0, right? Steve Song, one colleague of mine, has been starting to do research into the economics of the amount of revenue they require to be able to continue putting 12,000 to 20,000 satellites in orbit to cover everyone, when they cannot do more individual connectivity, you know, because it doesn’t make sense, because people don’t have the money to pay for the CAPEX and the OPEX. I mean, it’s… Thank you, Guido. One more here. I’m from Brazil and I’m here representing the youth program from the Brazilian Committee on Internet Governance.
And actually, I would like to hear the point of view of the speakers, because, talking about the Brazilian experience with Starlink, I think we had kind of naive expectations about how Starlink would be a meaningful connectivity provider, especially in the Amazon region. But what we are seeing right now, especially me as a researcher, is that Starlink has been mostly used in the really wild and remote areas of the Amazon region to support illegal gold miners and drug trafficking, and they are exactly the groups who are killing the indigenous people and who are responsible for the indigenous tragedy. So we had a kind of promise that Starlink would provide internet for the schools in the Amazon region, but it didn’t happen. I understand that this is probably related to the affordability issue, but I would like to hear what you think about that, especially because I think we need to talk about business ethics, because I think people that work at Starlink know that this broadband is being used to support illegal groups. So that’s my point. Thank you.

Moderator:
Thank you. I think all those are excellent and difficult questions. There are many difficult questions with this technology and the system, and there is the environmental impact. I should say that it may seem like we’re promoting Starlink, but we are not. We’re just pointing to it as a new and unique phenomenon in the telecom ecosystem. It seems to us important to understand what it is, how it works, what its impact is. This is a single global last-mile network. That’s really different. Let’s find out. That’s really our case here. So I don’t feel I should be defending Starlink, or if anybody else wants to, they’re welcome to, but the satellite turnover, the impact, there are trade-offs. So for example, should we allow nuclear power to deal with the amount of carbon that’s accumulating in the atmosphere? I’m completely against nuclear power. In the context of the crisis, maybe we have to. So I don’t know if that’s a good analogy, but I want to make the point about trade-offs. So this is great. Dan, I think you should point everybody to your discussion environment where a lot of these issues are aired out every day, and I think that a lot of these could be dealt with there, but take a shot at anything you’ve just heard.

Dan York:
I know we’re running out, we’re hitting the end of time. And these are great questions. I mean, to the person who asked about the environmental issues, there was just a paper published recently. That’s the first we’ve seen sort of in analyzing the research, looking at the costs, the carbon costs of the launches of these systems. And that’s a real question. And that is this trade-off. Can we use these as a system to go and connect the unconnected around the world? Can they be affordable? That’s the huge question that’s being asked here. Can they be that? I have a larger question. Right now, these are all being built by commercial enterprises. Do we want only under the control of several commercial entities that are owned by eclectic billionaires. The EU is taking a position with their iris constellation of trying to have one that is coordinated by a set of countries. Will there be other models? The larger question that was asked here, are these sustainable? We don’t know. People have been around here for a while will recall there was a Leo burst back in the 1990s with Iridium and Global Star and some other countries, entities that were creating constellations for telephone access. It wasn’t there. It died away, although they’re still up there. They’re still being used. They’re looking to come back in some ways for data, but it’s a real question. The thing that we, and the importance of bringing it here is for people to understand that this technology is happening. It’s going on. There are rocket launches happening every week that are putting more and more of these satellites up there. We have to understand them. We have to understand where they can fit, what trade-offs we will make. What are they? Is the carbon cost, is the trade to get the connectivity that we all need? Are there ways that we can mitigate that or make it better or do it? What happens to all these satellites when they burn up? 
You mentioned it there, and we didn’t really hit on it here, but these things only have about a five-year lifespan due to the pull of gravity, atmospheric drag, and lots of other reasons. The satellite providers have to be constantly launching new satellites in order to keep these constellations up. Is that sustainable? Are there enough people who will buy it? Is there the capacity to support it? I don’t know. None of us do. On the Measurement Lab question: we don’t have access to that data yet, because a lot of it is happening in proprietary systems. Also, there’s only one constellation up there in full. Lots of questions. These next five years are going to be very interesting, and I think we all just need to keep our attention focused there to see what the opportunities are, what trade-offs we have to make, where it all works, and whether it will all work.

Moderator:
Very good. There are more issues, of course. We didn’t talk about space junk, we didn’t talk about astronomy, we didn’t talk about the stability of this billionaire; there are just a lot of issues. So tracking this is important, and involving everyone, or as many people as are actually interested, is important, so that these questions are not just here in a room but are part of the public debate. So I encourage everyone to investigate more deeply into this extraordinary technology. And I’ll make one point that is usually not mentioned: we talk about education, we talk about health information, access to public services and public information, but having basically a connectivity point in a community that is impervious to disruption, to disasters (speaking of carbon and weather), this is increasingly the world we’ve created and are living in, or going to be living in, for the foreseeable future. The people who don’t have access to educational, commercial, and health opportunities are also people who are not contributing to the carbon accumulation, but they are impacted more heavily by the results of what industrial economies have done. Giving them this capability is one very powerful way to give them adaptation capability, and we think that’s part of the equation as we calculate how these things should go. So with that, we’ve run a little bit over, but I want to thank our panelists, and our audience, and everyone involved in this. Thank you very much. And I wish we could have gone to Berna and Nkem a bit more too, but we had so little time. Thank you.

Stephen Weiber

Speech speed

181 words per minute

Speech length

792 words

Speech time

262 secs

Audience

Speech speed

167 words per minute

Speech length

1299 words

Speech time

466 secs

Berna Gur

Speech speed

134 words per minute

Speech length

1202 words

Speech time

538 secs

Dan York

Speech speed

183 words per minute

Speech length

3331 words

Speech time

1094 secs

Moderator

Speech speed

136 words per minute

Speech length

1853 words

Speech time

819 secs

Nkem Osuigwe

Speech speed

131 words per minute

Speech length

1167 words

Speech time

535 secs

Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83


Full session report

Audience

The need to enhance digital health systems in preparation for future pandemics has become increasingly evident. Accurate and reliable medical advice and treatment should be accessible without individuals having to physically visit healthcare facilities. This is crucial to ensure the safety and well-being of patients and to reduce overcrowding in healthcare facilities, especially among the elderly who are more susceptible to complications from infectious diseases.

The COVID-19 pandemic has highlighted the limitations of traditional healthcare delivery models that heavily rely on in-person consultations and hospital visits. This has caused strain on healthcare systems and increased the risk of transmission in crowded facilities. Therefore, there is an urgent call for the development and improvement of digital health solutions.

One supporting fact behind the argument for digital health improvements is the surge in healthcare demand during pandemics like COVID-19. The rapid spread of the virus has emphasized the need for scalable and efficient healthcare services that can cater to a large number of patients. By implementing digital health solutions such as telemedicine and remote monitoring, the burden on physical healthcare facilities can be alleviated, and healthcare providers can reach a wider patient population.

Another important consideration is the age and vulnerability of certain populations, particularly the elderly. Concerns have been raised about the increased risk they face when visiting crowded healthcare facilities. Digital health technologies can provide them with access to healthcare services from the safety of their own homes, reducing their exposure to potentially infectious environments.

The analysis also highlights the relevance of the United Nations’ Sustainable Development Goals (SDGs), particularly SDG 3: Good Health and Well-being, and SDG 9: Industry, Innovation and Infrastructure. Improving digital health aligns with these goals by promoting accessible and quality healthcare for all, as well as fostering innovative solutions to address healthcare challenges during crises.

In conclusion, the need for digital health improvements in anticipation of future pandemics is supported by various compelling arguments. These include the necessity for accurate and timely medical advice without physical visits to healthcare facilities, concerns about overcrowding, increased healthcare demand during pandemics, and considerations for the vulnerable and elderly populations. Embracing digital health solutions can enhance societies’ capacity to respond effectively to future health crises, ensuring comprehensive and accessible healthcare services for all.

Geralyn Miller

During a panel discussion, speakers elaborate on various facets of Microsoft’s initiatives related to health outcomes, health equity, and digital health literacy. One significant topic highlighted is the crucial understanding of social determinants of health. The speakers underscore that these non-medical factors have a substantial impact on health outcomes, accounting for 30-55% of them. It is emphasised that addressing these determinants is vital for tackling health disparities.

Another key point discussed is the importance of addressing systemic problems, including social determinants of health, to enhance health equity. Microsoft’s multidisciplinary research on issues such as carbon accounting, carbon removal, and environmental resilience is commended. The company’s involvement in humanitarian action programs to effectively respond to disasters is also highlighted. By focusing on these systemic problems, Microsoft aims to create a more equitable healthcare system.

The role of technology and data in improving health outcomes and promoting health equity is emphasised. Microsoft’s development of a health equity dashboard, which enables visualisation and understanding of the problem, is lauded. The dashboard employs public data sets to provide different perspectives on health outcomes. Additionally, Microsoft’s LinkedIn ‘Data for Impact’ program, through which professional data is made available to partner entities, aims to enhance digital health literacy by equipping students and job seekers with the necessary skills.

Responsible AI is another significant aspect underscored by the speakers. Microsoft’s commitment to principles such as fairness, transparency, accountability, reliability, privacy & security, and inclusion in its approach to AI is highlighted. The need for implementing policies and practices to ensure safety, security, and accountability in AI is stressed. Measures such as implementing safety brakes in critical scenarios, classifying high-risk systems, and monitoring to ensure human control are deemed crucial. Moreover, the licensing infrastructure for the deployment of critical systems is considered essential.

The panel also addresses the issue of potential bias in AI models and the need to understand and inspect the data guiding these models. Microsoft actively works towards understanding the distribution and composition of the data to prevent bias. The goal is to ensure fairness and reduce inequalities by ensuring that bias does not occur due to the data employed in AI models.

The value of cross-sector partnerships, especially during the pandemic, is emphasised. Collaborations between the public, private, and academic sectors in research and drug discovery are cited as successful examples. These partnerships, including government-sponsored consortia, privately-funded consortia, and community-driven groups, have been instrumental in advancing healthcare during the pandemic. The continuation of such partnerships to drive positive change is advocated.

Additionally, the panel underscores the importance of maintaining good standards work, particularly during crises such as the pandemic. The use of smart health cards to digitally represent clinical information and support emergency services is discussed. The work of the International Patient Summary Group, aiming to represent a minimum set of clinical information, is commended, and the need to continue this good standards work is stressed.

The challenge of keeping up with the accelerating pace of innovation is acknowledged. As innovation progresses rapidly, individuals and organizations must strive to stay current and adapt. The significance of dialogue and information sharing as opportunities to expand knowledge and foster collaboration is also highlighted. Panels and training sessions are seen as valuable starting points for initiating these discussions and sharing insights.

Furthermore, the panel emphasises the need for training in both tech providers and the academic system. They assert that training in digital health should be integrated into the academic curriculum to ensure that everyone in healthcare is equipped with the necessary knowledge and skills. This approach is considered essential for advancing digital health literacy and ensuring its scalability.

Lastly, responsible implementation of generative AI is discussed, advocating for open policy discussions to ensure inclusivity and address ethical concerns. The importance of discussing responsible AI is underscored for its successful and inclusive implementation.

In conclusion, the panel discussion provides an encompassing overview of Microsoft’s initiatives pertaining to health outcomes, health equity, and digital health literacy. It underscores the importance of understanding social determinants of health, addressing systemic problems, and leveraging technology and data to improve health outcomes. Microsoft’s various initiatives, such as the health equity dashboard, LinkedIn ‘Data for Impact’ program, and Microsoft Learn platform, are commended. Additionally, the panel highlights the significance of responsible AI, cross-sector partnerships, maintaining good standards work, and promoting dialogue and information sharing. The importance of training in both tech providers and the academic system, as well as responsible implementation of generative AI through open policy discussions, is emphasised.

Ravindra Gupta

Digital health has achieved technical maturity, with the necessary technology and infrastructure in place for its implementation. However, it lacks organizational maturity, as highlighted by fellow panelist Debbie, who pointed out the shortage of trained individuals who can effectively leverage available healthcare technology. This expertise gap poses a significant challenge to successful digital health implementation.

To address this issue, comprehensive understanding and implementation of digital health are needed. This includes educating healthcare professionals, technologists, and patients about digital health’s integration into healthcare systems. The International Patients Union is one example of an organization dedicated to training patients in effectively using digital health technology.

Another area that requires attention is government policies on digital health, which currently lack focus on capacity building. Governments should prioritize capacity building initiatives to equip healthcare professionals with the necessary skills to leverage technology effectively. Pressure should be exerted on bodies like the World Health Organization (WHO) to provide faster normative guidance for digital health policy development, facilitating effective national policies.

Private and non-profit organizations are developing innovative and affordable strategies for digital health literacy. The Digital Health Academy, for example, offers an online global course for healthcare professionals, and plans are underway to provide low-cost training courses for frontline health workers. These efforts bridge the digital health literacy gap and ensure healthcare professionals are proficient in digital tools and technologies.

Governments must play a pivotal role in funding digital health initiatives, as seen in the Indian government’s investment in the national digital health mission. This funding is crucial, especially considering the evolving business model of digital health, which has led to the withdrawal of many large companies. Government support is essential for sustaining digital health initiatives and ensuring successful implementation.

Digital health has proven its readiness during the COVID-19 pandemic. Fast-track vaccine development involved global researchers, and AI was used in repurposing drug use. Additionally, 2.2 billion doses were digitally delivered through COVID apps, highlighting the efficiency and effectiveness of technology in healthcare. This underlines the need to continue utilizing technology beyond the pandemic.

Digital health literacy is crucial for healthcare professionals and workers in the sector. Failing to adapt and learn digital health skills may render individuals professionally irrelevant. Patients’ increasing access to health information necessitates healthcare providers’ awareness of advancements to provide accurate and quality care.

Upskilling and cross-skilling in digital health are essential for scalability, as scalability relies on healthcare professionals having the necessary competencies to leverage digital tools effectively. Moreover, healthcare providers should stay ahead of patients in terms of health knowledge to provide accurate care.

In summary, digital health has achieved technical maturity but lacks organizational maturity. Comprehensive understanding and implementation, capacity building, and literacy initiatives are necessary. Government support, funding, and upskilling efforts are key to successful digital health implementation. Digital health literacy is important for both healthcare professionals and patients, and upskilling is necessary for scalability. Healthcare providers need to stay informed to provide quality care. By addressing these challenges and investing in digital health, we can achieve better healthcare outcomes for all.

Moderator

The panel speakers engaged in a comprehensive discussion on the topic of digital health literacy and equitable access to digital health resources. They acknowledged the existence of disparities in access to healthcare and emphasized the potential of digital health to advance healthcare outcomes if accessed equitably. The need to enhance digital health literacy and promote equitable access was a recurring theme throughout the discussion.

Collaboration among various stakeholders, including healthcare providers, educational institutions, and technology companies, was identified as crucial for enhancing digital health literacy. The panel highlighted the importance of developing comprehensive frameworks and assessment tools to gain a holistic understanding of individuals’ abilities in navigating digital health. This would enable tailored interventions and support for those who need it most.

The role of social determinants of health in influencing health outcomes was also emphasized. The panel noted that 30 to 55 percent of health outcomes are dependent on social determinants of health. To visualize this problem, the Microsoft AI for Good team has built a health equity dashboard. This highlights the significance of addressing social determinants, such as economic policy, social norms, racism, climate change, and political systems, to achieve health equity.

Furthermore, the speakers advocated for digital health literacy and digital skills to be viewed as part of the social determinants of health. Microsoft’s initiatives, including a multidisciplinary research initiative on climate change, partnership with humanitarian open street map team for disaster mitigation, and a free online learning platform, were highlighted as examples of addressing social determinants. Microsoft-owned LinkedIn also promotes economic development and digital skilling through their economic graph and data for impact program.

Sub-Saharan Africa was identified as a region facing high health inequality, with a high disease burden and a shortage of health workers. The panel called for focused efforts to address health inequality in this region. They highlighted the positive impact of digital technologies, especially mobile, in addressing health issues. Reach Digital Health, for example, uses mobile technology to improve health literacy and encourage healthy behaviors. The Department of Health in South Africa also implemented a maternal health program that reached around 60% of mothers who have given birth in the public health system over the past eight years.

The panel stressed the importance of incorporating a human-centered design approach in the development of digital interventions. They noted that design considerations should include an understanding of the bigger context and the needs of the end-users. This approach ensures that digital health solutions are simple, easy to use, accessible, and free, with appropriate literacy levels.

The moderators expressed their interest in hearing insights and key policy recommendations from the panel. They highlighted the importance of enhancing digital health literacy, especially among marginalized populations. The panel agreed that governments and international organizations should prioritize policy interventions and investments to achieve this goal.

Capacity building in digital health was identified as a significant ongoing challenge in the healthcare sector. The need for policymakers to focus on capacity building and provide training for healthcare professionals and frontline workers was emphasized. The speakers emphasized the importance of continuous upskilling, considering the rapid pace of technological innovation, and highlighted the need for a practical implementation focus before policy development.

The importance of equitable access to digital health resources was another key point discussed. The Digital Health Academy was highlighted as an organization focusing on affordable training, providing $1 trainings for frontline health workers to ensure affordability. The responsible development and deployment of digital health technologies were emphasized, with a focus on upholding digital rights, privacy, and security. The speakers stressed the importance of involving various stakeholders for responsible innovation.

The speakers also touched on the concept of the digital divide and its impact on health equity. They highlighted the need to bridge this divide through initiatives such as Facebook Free Basics, which provides essential information for free, improving people’s literacy and data usage. Aligning priorities between mobile network operators and health organizations was seen as crucial for improving health equity.

Youth-led initiatives and community involvement were identified as crucial for bridging the digital divide in health. The panel emphasized the need for culturally sensitive initiatives that consider the specific needs of the population. They highlighted the importance of empowering young advocates to actively shape internet governance policies to ensure equitable access to digital health resources.

Lastly, the panel discussed the role of governments in investing in digital health. The Indian government, for example, has set up a national digital health mission and provided free consultations to citizens through the e-Sanjeevani program. Implementing free telemedicine consultations through health helplines was seen as a way to bridge the digital divide and address healthcare inequities.

In conclusion, the panel highlighted the need for collaborative efforts, policy interventions, and investments to enhance digital health literacy and achieve health equity. They emphasized the importance of addressing social determinants, building digital health capacity, and promoting equitable access to digital health resources. The responsible development and deployment of digital health technologies, as well as the involvement of youth and community in shaping policies, were identified as crucial. Overall, the panel provided valuable insights and recommendations for advancing digital health literacy and equitable access to digital health resources.

Yawri Carr

The emergence of the Responsible Research and Innovation (RRI) Framework in AI healthcare is seen as a positive development in the field. This framework focuses on transparency, accountability, and ethical principles, ensuring that innovation in AI does not compromise ethical standards. It places an emphasis on safeguarding digital rights and privacy and holds AI systems accountable for their decisions.

Stakeholder involvement is highlighted as essential in the RRI process. Societal actors, innovators, scientists, business partners, research funders, and policymakers should all be involved in the responsible research and innovation process. It is important for these discussions to be open, inclusive, and timely, working towards ensuring desirable research outcomes.

Youth-led initiatives are recognized for their role in promoting responsible AI. Universities, education centres, and mentorship programs have crucial roles in inspiring young people to innovate in health technology. Community-based research projects are also highlighted as a means to promote cultural sensitivity and address specific community needs.

However, there are challenges in applying ethical considerations in profit-driven AI innovations. There is often a clash between ethical considerations and profit-driven motives. Power imbalances, particularly financial, often hinder the work of ethicists. Therefore, regulatory frameworks, certification processes, or voluntary initiatives are needed to enforce ethics in AI.

Young advocates are viewed as influential in shaping internet governance policies and ensuring equitable access to digital health resources. Their participation in policy discussions at forums like the Internet Governance Forum (IGF) and the formation of youth coalitions can amplify the collective voice for accessibility and inclusivity. Engagement with multi-stakeholder processes can ensure a diverse contribution to the policies.

Youth-led research and innovation hubs are seen as valuable in addressing digital health challenges. These hubs provide a platform for young innovators, healthcare professionals, and policymakers to collaborate and find innovative solutions.

Technologies such as telemedicine and the use of robots are praised for their usefulness in pandemic situations. Robots can restrict direct human contact, reducing the risk of virus spread. Telemedicine enables remote treatment, ensuring health services while maintaining social distance.

The importance of technology and AI in healthcare is emphasized, particularly in protecting nurses and healthcare workers. Assistive technologies like robots can help safeguard these frontline workers.

Open sharing of data and research related to the pandemic is encouraged. This open sharing can lead to greater cooperation and more effective responses to emergencies.

Digital health leaders are urged to prioritize equity and ensure that healthcare is not a privilege but a right for all. Technical skills are not the only important aspect; a commitment to equity is also vital. Healthcare and digital health care should be accessible to everyone.

The valuable role of nurses and ethicists in evolving technology is highlighted. The work of nurses remains critical in healthcare, and ethicists play a crucial role in contributing to the mission of responsible AI.

In conclusion, youth-led initiatives, stakeholder involvement, and the emergence of the RRI Framework in AI healthcare are viewed as positive developments. Challenges exist in applying ethical considerations in profit-driven AI innovations, emphasizing the need for regulatory frameworks and certification processes. The importance of technology, telemedicine, robotics, and the open sharing of data and research are recognized. Digital health leaders are urged to prioritize equity, and the crucial role of nurses and ethicists in evolving technology is emphasized. Ultimately, youth play a fundamental role in advancing digital health and ensuring its accessibility.

Deborah Rogers

The speakers in the discussion highlighted several key points about digital health in Africa and how it can potentially address health inequality and overburdened health systems. They emphasised the significant growth in access to mobile technology in Africa over the years. In sub-Saharan Africa, which holds 10% of the world’s population but carries 24% of the global disease burden with only 3% of the world’s health workers, access to mobile technology has the potential to bridge the gap and improve healthcare outcomes.

One of the main arguments put forth was the effectiveness of low-tech but highly scalable technology in disseminating health information and services. The speakers stressed the success of programmes that utilise SMS and WhatsApp in improving health behaviours and service access. For example, a maternal health programme in South Africa has reached 4.5 million mothers since 2014, resulting in improved health outcomes.
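The stage-based messaging such programmes rely on can be sketched in a few lines: pick a health message according to how far along a pregnancy is. The schedule, message texts, and function names below are purely illustrative assumptions, not the actual logic of the South African programme or Reach Digital Health:

```python
from datetime import date

# Hypothetical stage-based schedule of the kind SMS/WhatsApp maternal
# health programmes use; texts and week boundaries are illustrative only.
MESSAGES = {
    range(1, 13):  "First trimester: book your first antenatal clinic visit.",
    range(13, 28): "Second trimester: remember your scheduled check-ups.",
    range(28, 41): "Third trimester: plan your trip to the delivery facility.",
}

def message_for(due_date: date, today: date) -> str:
    # Approximate gestational week: 40 weeks minus the time left until the due date.
    weeks = 40 - (due_date - today).days // 7
    for stage, text in MESSAGES.items():
        if weeks in stage:
            return text
    return "Please contact your clinic for advice."

print(message_for(date(2024, 6, 1), today=date(2024, 1, 1)))
```

The appeal of this design, as the speakers note, is that the delivery channel (plain SMS or WhatsApp) requires nothing from the user beyond the phone they already use daily.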

The discussions also highlighted the role of digital technology in improving health literacy. Through the use of digital technology, a maternal health programme in South Africa has witnessed increased uptake of breastfeeding and family planning. However, the speakers emphasised the importance of implementing digital health initiatives in a human-centred manner and being cognisant of the larger health system they are a part of.

Furthermore, the speakers addressed the issue of health equity and the digital divide. They presented the example of the Facebook Free Basics model, in which free access to essential health information was provided while mobile network operators still saw increased profit. This demonstrated that removing message-sending costs for end-users does not inhibit operator profit, showing the potential for mobile network operators to help improve health equity.

The discussion also delved into the importance of a human-centred approach in developing digital health interventions. The speakers emphasised that digital health should be easy to use and accessible, designed with users in mind. They also noted that access to a mobile device itself is less of a problem than the cost of data, which needs to be addressed for wider adoption of digital health services. Overall, digital health was seen as an integral part of the health infrastructure, rather than a side project.

One noteworthy aspect that was brought up in the discussions was the potential bias and lack of diversity in the development of digital health services. The speakers emphasised that the makeup of the development team often does not represent the actual users of the services, leading to the introduction of biases. This can perpetuate health inequities and hinder the effectiveness of digital health interventions. Therefore, there was a call for more diverse and inclusive development teams to ensure the services are designed to meet the needs of all users.

During the discussion, the speakers also highlighted the role of digital health in the COVID-19 pandemic. Large-scale networks were used to quickly disseminate information, and digital health platforms played a vital role in screening symptoms and gathering data. The burden on healthcare professionals was reduced, showcasing the potential of digital health to alleviate the strain on the healthcare system.

Furthermore, the importance of sharing medical knowledge and not hoarding information was emphasised. The speakers noted that the lack of knowledge during the early stages of the COVID-19 pandemic had a significant impact on everyone. Therefore, the dispersal of information on a large scale can greatly contribute to improving patient health outcomes.

The discussions also emphasised the need for investment in digital health infrastructure for future pandemics. The COVID-19 pandemic highlighted the importance of having digital health platforms in place. Building and investing in such infrastructure before the next pandemic occurs would enable a quicker response and avoid starting from scratch.

Additionally, the potential of technology to decrease health and digital literacy inequities was discussed. Technology was hailed as a great enabler in addressing these inequities and improving access to healthcare and education.

In conclusion, the discussions on digital health in Africa highlighted its potential to address health inequality and overburdened health systems. The increased access to mobile technology and the success of low-tech interventions have provided evidence of the positive impact of digital health. However, the speakers emphasised the need for a human-centred approach, diversity in development teams, and investment in infrastructure to fully capitalise on the potential of digital health. There was optimism about the future of digital health, and the involvement of youth in its evolution was seen as crucial.

Session transcript

Moderator:
in turn creating disparities in access to care. So in this session we will discuss strategies to enhance digital health literacy and identify measures to promote equitable digital health access. Our goal is to find innovative policy solutions that bridge the digital divide and ensure that digital health truly advances healthcare outcomes for all. Thank you all for joining us on this important journey and let’s get started. We have three key policy questions that will guide our discussion today. How can comprehensive frameworks and assessment tools be developed to capture and assess different dimensions of digital health literacy, ensuring holistic understanding of individuals’ abilities in navigating digital health information and services? What strategies towards health equity can be adopted to ensure digital health literacy programs effectively address unique needs and challenges faced by marginalized communities, promote inclusivity and equitable access to digital health resources? And also how can partnerships between key stakeholders including healthcare providers, educational institutions, technology companies and governments be leveraged to enhance digital health literacy skills, foster collaboration and knowledge sharing to advance health equity? Our panelists will be addressing these issues today, so if you would like to ask a question to the panel we will have a Q&A session at the end for on-site participants, and online participants may use the Zoom chat to type and send in your questions, and my online moderator Valerie will be helping me with them. So without further ado, to kick off our discussion I would like to introduce our esteemed panelists who will share their insights on these matters. First joining us online we have Ms. Geralyn Miller, an innovation leader driving change in healthcare and life sciences through AI.
She is a senior director at Microsoft in product incubations, Microsoft health and life sciences cloud, data and AI, and she’s also the co-founder and head of AI for Health, which is Microsoft’s AI for Good research lab. And then we have Professor Ravindra Gupta joining us on site here today, a leading public policy expert with vast experience in policy making; he’s been involved in major global initiatives on digital health and holds several key positions in the digital health arena. He’s also the founder of many path-breaking initiatives like his Project Create and of organizations working for digital health. And next we have Ms. Debbie Rogers joining us on site as well. She’s an experienced leader in the design and management of national digital mobile health programs and the CEO of Reach Digital Health, aiming to harness existing technologies to improve healthcare and create societal impact. And last but definitely not least we have Ms. Yawri Carr joining us online. She’s an internet governance scholar, youth activist and AI advocate, and she’s also a digital youth envoy for the ITU like me and a global shaper with the World Economic Forum, with her work centering on responsible AI and data science for social good. Now let’s begin section one of today’s workshop on low digital health literacy and strategies, and I would like Ms. Geralyn to take the floor first. So what research and development initiatives, for example including the creation of comprehensive frameworks and assessment tools, is Microsoft pursuing to address the multifaceted challenges of low digital health literacy? And additionally, can you highlight your thoughts and innovative strategies and partnerships that Microsoft is employing or supporting to enhance digital health literacy among marginalized populations, with a focus on inclusivity and equitable access, especially in low-income and rural areas? Ms. Geralyn, over to you.

Geralyn Miller:

Yeah, great, thanks, and thank you for inviting me today to participate in this. So the lens I’m going to take for this is really based on something that is known as social determinants of health. So I want to start by defining and sanity checking that a social determinant of health is a non-medical factor that influences health outcomes. So this is the conditions that people are born, work and live in, and the wider set of forces that shape the conditions of our daily lives, right. So this includes things like economic policy and development agendas, social norms, social policies, racism, even climate change and political systems, and from research we know that about 30 to 55 percent of health outcomes are actually dependent on social determinants of health. So when you want to think about health equity in digital literacy, it’s really important to do two things. First, to understand the problem based on data, and I’ll share a little bit about what Microsoft research is doing in that area, and the second is to open your mind and have a willingness to address the underlying, often systemic problems that affect health outcomes, and that includes social determinants of health. So Microsoft has some things that we’re doing to understand the problem with data, including the Microsoft AI for Good team, which has built something that we call a health equity dashboard. That is essentially a Power BI dashboard that takes a number of public data sets and allows one to look at them from a geography perspective, slice and dice the data by rural, suburban and urban populations, and then also examine different health outcomes including things like life expectancy. So that’s the first thing, right, is really being able to understand and visualize the problem itself. So I invite you to actually have a look at that information. There’s a number of other things that from a Microsoft perspective we’re doing to look at on the social determinants of health side.
So I’ll point for example to some of the work we’re doing on climate change. We announced a climate change research initiative that we call MCRI which is really a multidisciplinary research initiative that is focusing on things like carbon accounting, carbon removal and environmental resilience. We also have our Microsoft AI for Good research lab and our humanitarian action program. They have for example worked with a group called humanitarian open street map team or HOT which partnered with Bing maps to map areas vulnerable to natural disaster and poverty. So that’s an example of some of the work out of the research lab and the humanitarian action program coming together to help give relief teams information to respond better after disasters. There’s also a lot of work that we have happening from a Microsoft perspective that ties more directly to economic development and digital skilling. So we have some work on a LinkedIn, something called the economic graph which is a perspective or a view based on data of more than 950 million professionals and 50 million companies. LinkedIn which is a Microsoft company also has a data for impact program and this program makes this type of professional data available to partner entities including entities like the World Bank Group, the European Bank and others. So it’s data on more than 180 countries and regions and this is at no cost to the partner organizations. An example of the impact of this type of data, this data for impact information was able to advise and inform a 1.7 billion dollar World Bank strategy for the country of Argentina. And then there’s also the Microsoft learn program which is a free online learning platform enabling students and job seekers to expand their skills. So role-based learning for things like AI engineers, data scientists and software developers, hundreds of learning paths and thousands of modules localized in 23 different languages. 
So in summarizing, I just want to say that we look at this from a holistic, broad perspective: digital health literacy and digital skills as part of the social determinants of health, and the work that we’re doing to support those.
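The kind of slicing Miller describes, joining public data sets and comparing an outcome such as life expectancy across rural, suburban and urban populations, can be sketched in a few lines of pandas. The column names and figures below are hypothetical illustrations, not data from the actual Microsoft dashboard:

```python
import pandas as pd

# Hypothetical county-level records of the kind a health-equity dashboard
# might aggregate; the real dashboard joins a number of public data sets.
records = pd.DataFrame({
    "county":          ["A", "B", "C", "D", "E", "F"],
    "urbanization":    ["rural", "rural", "suburban", "suburban", "urban", "urban"],
    "life_expectancy": [74.1, 75.3, 78.2, 77.8, 79.5, 80.1],
})

# Slice the outcome by population type, as the dashboard does by geography.
by_area = records.groupby("urbanization")["life_expectancy"].mean().round(2)
print(by_area)
```

Such a view makes a rural/urban outcome gap visible at a glance, which is the "understand the problem based on data" step Miller describes as the starting point for addressing health equity.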

Moderator:
Thank you very much Ms. Miller. And now moving on to Ms. Debbie. As an experienced leader in the design and management of national mHealth programs and the CEO of Reach Digital Health, can you share your thoughts on digital health literacy, digital divide and health equity, effective strategies for enhancing digital health literacy among marginalized populations particularly in resource constrained settings and additionally how can partnerships between non-profit organizations like Reach and private sector mobile operators be strengthened to promote digital health literacy among women and marginalized communities addressing gender-based barriers and limited resources while contributing to bridging the digital divide?

Deborah Rogers:
Thanks very much. So I think the first thing just to talk about is a little bit of the context. So we work primarily in Africa. To give you an idea around inequality and health in sub-Saharan Africa we have 10 percent of the world’s population, 24 percent of the disease burden and only three percent of the health workers. And so we really do have the odds stacked against us in a time when we’re supposed to be going towards universal health care, which quite honestly is a pipe dream if you look at where things are at the moment. While we’ve made some progress in addressing maternal and child health and addressing infectious diseases such as HIV, we are getting an increased burden when it comes to non-communicable diseases. So the burden is just increasing, not decreasing. And so really if we follow the same patterns over and over again and we keep just training more and more health workers and not addressing the systemic issues or relieving the burden from the health system, then there’s absolutely no way that we’re going to be able to improve these stats. We’re going to go backwards and not forwards. And so I think I’m fairly optimistic actually because I think that digital and particularly mobile has the opportunity to really address some of these issues in a way that many other interventions don’t. Reach Digital Health was founded in 2007 with the idea that the massive increase in access to mobile technology in Africa, at the time more people in Africa had access percentage-wise to mobile technology than in the so-called global north or western countries, was a way for us to leapfrog some of the challenges that we’ve had in the global south and to actually address some of these issues. And we really have been able to see that. 
We have been able to see how the access to information and services through a small device that’s in the palm of many people’s hands has been able to improve health, both from a personal behavior change perspective but also health systems as a whole. And so what we primarily focus on is using really, really low tech but highly scalable technology, so things like SMS, WhatsApp. These are the things that everybody uses every day to communicate to their family and friends. And we use that to empower them in their health, help them to practice healthy behaviors, to stop unhealthy behaviors, and to access the right services at the right time. And with the fairly ubiquitous nature of mobile technology in Africa, we’ve been able to reach people at a massive scale. So for example, we have a maternal health program with the Department of Health in South Africa. It’s been running since 2014. We’ve reached 4.5 million mothers on that platform. But that represents about 60% of the mothers who have given birth in the public health system over the last eight years, which percentage-wise is huge. And we’ve been able to see that this has had impacts such as improved uptake of breastfeeding, improved uptake of family planning, and really has seen not just an individual change but a more systemic change with the ability to understand what is the quality of care on a national scale for the Department of Health in South Africa. And so we really do believe that if you harness the power of the simplest technology, if you design for scale with scale in mind, if you design with understanding the context, then you can actually use digital to be able to increase health literacy. And so it’s not all doom and gloom. It’s not just about the fact that digital is always excluding other people. It can be an enabler, but only, of course, if we consider the wider context and we don’t go blindly into things and ignore the fact that this could be something that increases it. 
And so I think I’ll talk a little bit later more about some of the strategies that can be used, but I think two things to remember are: design with the human, not patient. I don’t like the word patient, but in digital health we tend to use that word. With the human at the center of what you’re trying to do, and design understanding that you are part of a bigger system. And this is not something that exists by itself. And if you do those two things, not only will you be able to improve health literacy, but you’ll be able to do so in a way that doesn’t widen the divide that many technologies already put in place.

Moderator:

Thank you very much, Ms. Debbie. Now moving on to Professor Gupta. With your extensive experience in policy development, digital health education, and founding the world’s first digital health university, can you share your thoughts and offer key policy recommendations that governments and international organizations should prioritize to comprehensively enhance digital health literacy, especially amongst marginalized populations? Additionally, can you share insights into successful and scalable educational strategies and approaches that have effectively improved digital health literacy, with a focus on adapting these methods globally to meet healthcare scaling needs for digital health?

Ravindra Gupta:

Thanks, Connie. Firstly, I congratulate you for picking up this very important topic. And secondly, I’m a little worried for such a long question, because after 5 p.m., I’m half asleep. It’s been an engaging session throughout the day. But yes, it’s a very important topic. It keeps me awake. But pardon me for my incoherence. But let me give you a little backdrop of why this topic is important. There is an international society called the International Society for Telemedicine and eHealth (ISfTeH). It’s been around for a quarter of a century and has memberships in 117 countries. So way back in 2018, I said that digital health has two opportunities and two challenges. But the two challenges are like, we have reached the stage of technical maturity. Give me a challenge, I’ll give you 100 solutions. But where we lack is organizational maturity. People are not trained enough to leverage technology that’s available. So I said, let’s look at capacity building, I think the issue that you brought up. So in 2019, they formed the Capacity Building Working Group, which I chair. And post that, we have done two papers on capacity building. One is listing the kind of people we need to train across digital health. And second, we have done a deep dive. We released that in partnership with the World Health Organization. So for those who are looking at what kind of capacity we need, the ISfTeH website has a list; two papers have been written on this topic. And then in 2019, WHO set up their Capacity Building Department, which is a very recent thing. So I think there is a lot of focus. And now coming back to what my experience was. So I had pushed various organizations to do that, but we were just doing policy papers, and, you know, policies take time to translate. I mean, people like Debbie would need people to help her, you know, in technology. I mean, a policy paper can’t help her. She needs people trained in digital health. So in 2019, I set up the Digital Health Academy, which is now the Academy of Digital Health Sciences.
We have started a course for doctors and for people in healthcare. It’s a global course, fully online, as a digital course should be. But to your point, that also would not solve my biggest overall challenge. I am training doctors, you know, and it is so shocking. And I’ll put a context to that: we had a half-page advertisement in a leading newspaper in India. A very senior doctor called me and asked, Ravindra, what’s digital health? So I was shocked that even doctors first need to be told what the words digital health mean. I’ll give you another example. There’s a company that works exclusively in the data domain. So I called the founder, who is a doctor, and asked, do you do digital health? He said, no Raj, we don’t do digital health. I said, do you use data? He said, we only use data. So I said, you only do digital health. So the challenge is first people should know the definition of digital health. That is the level we have to get in at, and which is needed across the ecosystem. So right from the bureaucracies and the ministers and the ministries of health, they need to understand what digital health is, because they come for a fixed tenure or they get transferred. If they are sensitized at that level, then things flow down the line, because government makes policies which get implemented as programs. So that’s one level of competencies that I have told WHO to look at, because my experience in WHO meetings is that bureaucrats come, they spend two, three days in Geneva or New York, and then they go back and forget it. So there has to be a course for policymakers at the highest level, which probably WHO or any organization could do. The second level is the courses for doctors and health professionals. And third, and the most important, which we are launching in the next two months, is frontline health workers.
But understand the challenge that frontline health workers are often doing voluntary service, like you have the ASHA workers in India, which is a million workers. They are our first line, or first responders. Don’t expect them to pay you $1,000 or $100. So we had to actually innovate and convince one of the Institutes of National Importance that we need to bring out $1 trainings. So we should train people for as low as $1. And this we’re doing globally. So frontline health workers, if I’m able to train them, I think I would have addressed the biggest challenge for healthcare. Now, one of the government agencies has approached us to work with us. So as such, on capacity building, I think governments just focus on the program minus capacity building, which is a serious lapse. And I think this is across the board. I think we would all agree on this: we are very focused on saying maternal health, mobile application; child health, mobile application; rural health, telemedicine. But who will do it? We don’t know. The people who are going to use them don’t even know how to use a mobile phone. They do not know how to log in to an account. So we need basic training. And I think this is where private organizations and not-for-profits come in first, and government steps in very late, let me tell you that. So they are not the ones who would initiate. So once you go with the program, talk to them, they will partner. So as a policy, I’m glad, Connie, that you put a session on this, something that our Digital Health Dynamic Coalition should have done, but they only allow one session for a dynamic coalition. So we had our session, which we are doing tomorrow. But now that you have taken it up, it puts the spotlight on this important topic. At ISfTeH, there are policy papers. They have been given to WHO. WHO set up the capacity building department, but honestly, nothing much has moved between 2019 and 2023, four years. We are still waiting, and they’re still forming a committee.
So I think it’s mostly going to be the civil society organizations and private sector that will take the lead. On the policy side, I have not seen documents that talk about it so far, so we will have to wait for normative guidance from WHO, which is still, I think, a few years away. It takes time to build a document in WHO. How this will happen fast is like this. In India, we have a digital health mission, which has rolled out 460 million health IDs. This year, we will roll out 1 billion health IDs. Our health consultations, teleconsultations, have crossed 120 million. So I think that is the first point. So I’m inverting the process from policy to: let’s first have implementation. So when the government rolls out at such a level and scale, automatically you will start feeling the need for trained people in this. So I think this is one thing, but more than structured courses, it will be more of a continuous upskilling that everyone will need to do, because technology is also changing. Till last year, no one talked about generative AI. Now people have started talking about generative AI. So I think we need to keep the trainings fluid and make them more of a continuous upskilling program for people across healthcare. We are not waiting for government policies; we are rolling out as the Academy of Digital Health Sciences, and these are global programs. We are making it really affordable, with $1 trainings for frontline health workers, for doctors, and for the industries, the postgraduate program. And we will announce undergraduate programs as well, because I think this is where we need to build capacity. So for now, I think policy interventions will happen. I think overall, as part of health policy, everyone should include capacity building, because digital health is now an integral part of health. So digital upskilling is required for digital scaling. So I think this is something that governments have to look at, and WHO should take a frontal role.
So I would say more to WHO and organizations like the one that Debbie runs, and organizations like the ones that I run with my team. And more importantly, there are two people sitting in this room, Priya and Saptarshi. They run a patients union, the International Patients Union. Even if you train doctors, industry and the frontline health workers, if patients are not trained, who will use digital? At the end of the day, they have to open an app and use it. They need to know what privacy is, what security is. So it’s on us, on people like them, to go and train patients in how to use digital technology. So it’s a multidimensional topic, and I’m happy that there’s a session dedicated to this. Unless we address this from a complete ecosystem perspective, we will not have done justice to this topic. Thank you.

Moderator:
Thank you very much, Professor Gupta. And now to Ms. Yawri. As someone with expertise in responsible AI, digital rights and a passion for the intersection of technology and society, how can policymakers craft regulations to ensure the responsible development and deployment of digital health technologies, especially for marginalized communities? And also, what role do you see for youth-led initiatives in enhancing digital health literacy, bridging the digital divide and engaging with policymakers to drive policies that support equitable access to digital health resources? Over to you.

Yawri Carr:

Hello, everyone, dear organizers, participants and guests. Thank you very much, Connie, for the organization, and thank you for inviting me. Well, so in a world where technology and healthcare are more intertwined than ever, the responsible development and deployment of digital health technologies are of paramount importance. This is especially true when considering marginalized communities, where equitable access to healthcare is not just a goal, but a moral imperative. So in this case, I would like to mention the Responsible Research and Innovation Framework as one of the guiding philosophies that serve as a roadmap for navigating the intricate terrain of AI in healthcare. At its core, RRI is a commitment to harmonizing technological progress with ethical principles. It places a premium on transparency and accountability, recognizing them as pivotal elements in the responsible development and deployment of AI technologies. In the realm of healthcare AI, RRI advocates for policies that not only uphold digital rights, safeguarding privacy and security, but also establish mechanisms to hold AI systems answerable for their decisions. It is a holistic approach that seeks to ensure that the benefits of innovation are realized without compromising ethical standards or jeopardizing individual rights. So who should be involved in a process of responsible research and innovation? Societal actors and innovators, scientists, business partners, research funders and policymakers; all stakeholders involved in research and innovation practice; funders, researchers, stakeholders and the public; a large community of people, from the early stages of R&I processes, and the process as a whole. And when? Through the entire innovation life cycle. And to do what?
So it is important to anticipate risks and benefits, to reflect on prevailing conceptions, values and beliefs, to engage stakeholders and members of the wider public, to respond to stakeholders, public values and also the changing circumstances that are present in these kinds of processes, to describe and analyze potential impacts, reflecting on underlying purposes, motivations, uncertainties, risks, assumptions and questions, and the huge number of dilemmas that could also emerge in such circumstances, to be open to reflection and collective deliberation through a process of reflexivity, and to integrate measures throughout the whole innovation process. And in which ways should we do this? Working together, becoming mutually responsive to each other, and of course in an open, inclusive and timely manner. And to what ends? What this framework proposes is allowing the appropriate embedding of scientific and technological advances in society, to better align their processes and outcomes with the values, needs and expectations of society, to take care of the future, to ensure desirable and acceptable research outcomes, to solve a set of moral problems, and it will also protect the environment and consider impacts on social and economic dimensions, and also promote creativity and opportunities for science and innovation that are socially desirable and take the public interest into account. And how can these be applied specifically in the context of healthcare technologies? For example, there are academic projects and also societal projects. One example of an academic project is one from the Technical University of Munich, where I am now studying. Well, we have a project that is an AI-driven innovation, including a robotic arm exoprosthesis and an advanced version of a bimanual mobile service robot.
So to ensure the responsible and ethical integration of these technologies into broader healthcare applications, the developers from the Machine Intelligence Institute have collaborated with the Institute of History and Ethics of Medicine, as well as the Munich Center for Technology and Society. And these teams are employing embedded ethics, incorporating ethicists, social scientists and legal experts into the development processes. So they have initial onboarding workshops where these experts become integral members of the development team. They have been actively participating in regular virtual meetings to discuss technological advancements, algorithmic development and product design collaboratively and across disciplines. And when ethical challenges are raised, they are addressed as part of the regular development process, leading to adjustments in product design. An example involves the planning of model flats for a smart city, where the initial designs focused on open-plan layouts. The embedded ethicists highlighted, in this case, potential challenges for an elderly population unaccustomed to such arrangements, prompting a reconsideration of the layout. It also has to be taken into consideration that this kind of project, in this specific case, had the elderly as its target population. So this is why it is very important to look at this target population and actually see if they are prepared for, and could adapt to, these kinds of technologies. So insights from this discussion influenced the design process, emphasizing the importance of directly seeking future inhabitants' perspectives in layout planning. And simultaneously, the project also involves interviews with various stakeholders, including developers, programmers, healthcare providers, and patients. Workshops, participant observations of development work, and collaborative reflection and case studies also contribute to active ethical consideration.
And well, the project is also aiming to develop a toolbox to facilitate the implementation of embedded ethics in diverse settings in the future. But several unresolved issues remain, also to do with the cultural setting and with corporate and organizational structures. Because even in research settings funded by public resources, the development of AI is predominantly situated in a fairly competitive landscape, with a prioritization of efficiency, speed, and also profit. And also in the case of health, ethical considerations are normally isolated, or are normally not given so much importance when they directly clash with profit-driven motives. So taking ethical concerns seriously often creates a tension with industry objectives and the needs of the community. And there is the risk of being assimilated into broader corporate commitments to concepts like technological solutionism or market fundamentalism, which in the end prevent ethicists from actually doing their work and developing responsible healthcare technology. Normally, embedded ethicists may find themselves working within contexts that are characterized by pronounced power imbalances, particularly those of a financial nature. And it is probable that some form of enforcement measures will become very necessary in such environments. So not just for the development of the technical aspects, but also for the work of the persons that are working on responsible development and deployment. That may be regulatory frameworks, certification processes, or even voluntary initiatives within the organization that can raise awareness of these kinds of issues that arise in these situations. And well, okay, I also needed to talk about youth-led initiatives, right? If I still have time. Okay, so, well, there are also a lot of ways in which youth-led initiatives and also marginalized communities could engage with responsible research and innovation.
So for example, youth-led initiatives could connect to or try to participate in events such as this one, but also universities or centers of education could inspire the youth so that they can also learn about telemedicine and how they can develop telemedicine initiatives in their countries, and especially in rural areas, as the professor was mentioning about India, since these kinds of populations don't have the same access. Also, for example, community-based participatory research projects that involve communities in the research process, ensuring that interventions are culturally sensitive and address the specific needs of a population. Also digital health literacy programs. And also, innovation challenges could be organized among students and youth so that they can engage as well. And I also consider that the mentorship that these students or youth can gain from experienced people is very important, because they need guidance and also foundations and examples of how they can develop their ideas. So thank you.

Moderator:
Thank you very much, Yawri. So while low digital health literacy is a challenge for all populations, it's particularly harmful for marginalized communities. So in this section, we'll discuss strategies for addressing health equity and the digital divide in the context of digital health. So let's start this off with Ms. Geralyn again. So in light of the session's focus on health equity and the digital divide, could you share your thoughts and elaborate on specific policy measures and initiatives that Microsoft is advocating for or actively participating in to bridge the digital divide and promote equitable digital health access? And also, how is Microsoft addressing barriers faced by diverse populations, and how are these efforts contributing to advancing health equity? Over to you.

Geralyn Miller:
Yeah, thank you very much for the question. So I want to respond, in this context, to some of the comments that Dr. Gupta and Ms. Carr mentioned and really shine a light on the concept of artificial intelligence, generative AI, and what we at Microsoft call responsible AI as an example of policy. So one of my favorite quotes in this area is a quote by our Chief Legal Officer and President, Brad Smith. And I'm gonna paraphrase a quote I don't have exactly, but Brad has a quote that basically says that when you bring a technology into the world and your technology changes the world, you bear a responsibility as the person that created that technology to help address the world that the technology helps create. And so from a Microsoft perspective, we look at this under the lens of something that we call responsible AI. Our responsible AI initiatives date back well before the birth of ChatGPT and generative AI and large foundation models and large language models, really back to about 2018, 2019. And we have a set of principles that we've established that are around how you design solutions that are worthy of people's trust. So these are our principles, what we call our responsible AI principles. There are many people who have different principles around responsible AI; I'll share with you ours. I would just offer that it's something worthy of thought. And very often when I work with academic medical centers or healthcare providers who are starting to use AI or build and deploy AI models, I also offer to them: hey, you should have a position on responsible AI, right? Do your thought work, do your homework. You should have something that is consistent with your own values, your own entity's values. But going back to what, from a Microsoft perspective, we believe those principles are. The first principle is fairness.
So treating all stakeholders equitably and making sure that the models themselves don't reinforce any undesirable stereotypes or biases. Transparency. So this is all about AI systems and their outputs being understandable to relevant stakeholders. And relevant stakeholders in the context of healthcare means not only patients who may be receiving the output of this, but also clinicians who may be using these as decision support tools or to do some type of prediction. Accountability. And so people who design and deploy AI systems have to be accountable for how the systems operate. And I'm gonna do a click down on accountability in a second. Reliability. So systems should be designed to perform safely, even in the worst-case scenarios. Privacy and security, of course; those are underpinnings behind any technology, and AI systems as well should protect data from misuse and ensure privacy rights. And then inclusion. And this is all about designing systems that empower everyone, regardless of ability, and engaging people in the feedback channel and in the creation of these tools. And there are some things I will drill down on a little bit on the inclusion front as well. So as an example of the accountability piece, I'd like to share some things that President Brad Smith was offering when he testified before the U.S. Senate Judiciary Subcommittee. This was back at the beginning of September, around September 12th, at a hearing entitled Oversight of AI: Legislating on Artificial Intelligence. So Brad highlighted a few areas that he is suggesting help shape and drive policy. One is really about accountability in AI development and deployment. Things like ensuring that the products are safe before they're offered to the public. Building systems that put security first. Earning trust. So this is things like provenance technology and watermarks, so people know when they're looking at the output of an AI system.
Disclosure of model limitations, including effects on fairness and bias. And then also really channeling research, energy, and funding into things that are looking at societal risk associated with AI. He also suggested that we need something that he terms safety brakes for AI that manages any type of critical infrastructure or critical scenarios, including health. When you think about it, today we have collision avoidance systems in airliners, we have circuit breakers in buildings that help prevent a fire due to, for example, power surges, right? AI systems should have safety brakes as well. So this involves classifying systems so you know which ones are high risk. Requiring these safety brakes. Testing and monitoring to make sure that the human always remains in control. And then licensing infrastructure for the deployment of critical systems. And then from a policy perspective, ensuring that the regulatory framework actually maps to how these systems are designed, so that the two flow together and work together. So that's an example of the policy-in-action side of things. And from a Microsoft perspective, we put the responsible AI principles that I mentioned into action through our commitments at a policy level. Our voluntary alignment, for example, here in the US with some of the things coming out of the White House. So voluntary alignment with commitments around safety, security, and trustworthiness of AI. And on one last point, I did wanna go back to the responsible AI principles and talk about inclusion. And so we're doing some work from a Microsoft perspective in the health AI team that I am a product manager on to really look at how, when we have data that guides models, and this is either custom AI models or when we're grounding large foundation models or large language models with data, we make sure that we understand the distribution and makeup of that data to ensure that bias doesn't creep in from the data perspective.
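The kind of data-distribution check described here can be sketched in a few lines. This is purely an illustrative sketch, not Microsoft's actual tooling: the attribute name, reference shares, and tolerance below are hypothetical.

```python
from collections import Counter

def audit_distribution(records, attribute, reference, tolerance=0.10):
    """Compare the share of each subgroup in `records` against a reference
    distribution; flag groups whose share deviates by more than `tolerance`
    (absolute difference in proportion)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            flags[group] = {"observed": round(observed, 3), "expected": expected}
    return flags

# Hypothetical training data skewed toward younger patients.
records = ([{"age_band": "18-40"}] * 70
           + [{"age_band": "41-65"}] * 20
           + [{"age_band": "65+"}] * 10)
# Hypothetical population shares the data should reflect.
reference = {"18-40": 0.45, "41-65": 0.35, "65+": 0.20}

flags = audit_distribution(records, "age_band", reference, tolerance=0.12)
```

A real pipeline would run a check of this shape per demographic attribute before the data is used to train or ground a model, so that skew is surfaced as a reviewable artifact rather than discovered downstream.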
And we're also doing work, for example, on the deployment of models. How do you understand if models are performing as they intended? How do you monitor for something called model drift? So when models start to perform in a manner that isn't how you think, right? When the accuracy starts to decline, and then what do you do when the models don't perform that way? And this last part, the model monitoring and drift, is some of the work we have happening out of our research organization. So thank you.
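Drift monitoring of the kind described here can be sketched as a rolling accuracy check. This is a minimal illustration only; the window size, baseline, and margin are hypothetical and not a description of Microsoft's monitoring systems.

```python
from collections import deque

class DriftMonitor:
    """Track accuracy over a sliding window of recent predictions and
    flag drift when it falls below a validated baseline by more than
    `margin`."""

    def __init__(self, baseline, window=100, margin=0.05):
        self.baseline = baseline
        self.margin = margin
        self.outcomes = deque(maxlen=window)  # True if prediction matched actual

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    def drifting(self):
        # Withhold judgment until the window is full.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.baseline - self.margin

# A model validated at 90% accuracy; only 7 of the last 10 predictions
# were correct, so the monitor should flag drift.
monitor = DriftMonitor(baseline=0.90, window=10, margin=0.05)
for prediction, actual in [(1, 1)] * 7 + [(1, 0)] * 3:
    monitor.record(prediction, actual)
```

In practice the flagged condition would trigger a human review or retraining workflow rather than an automated fix, which matches the point about keeping a human in control.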

Moderator:
Thank you very much, Ms. Geralyn. So now I want to move back to Ms. Debbie. Drawing from your experience in developing the digital strategy for a major telco in South Africa, how can telecommunication companies play a more significant role in advancing health equity and bridging the digital divide through innovative approaches and digital solutions? And also, what lessons can be learned from your work in South Africa that can be applied globally to improve digital health access? Thanks.

Deborah Rogers:
I think one of the most interesting examples of how mobile network operators have really had a big impact in decreasing inequities around health is the Facebook Free Basics model. You may not know what that was, but Facebook basically put together simple information through what looked like a little mobi site. And this was essential information that they felt everybody should have access to. And they worked with mobile network operators to zero-rate access to just that portion of Facebook, not to everything, but just that portion. And they were able to show that by providing essential information that is free to access, they were able to improve people's literacy and use of data. People then went on to use more data and to use the internet more often, and therefore became more valuable customers to the MNOs. So by doing something like providing free access to essential information, there was also an increase in profit for the mobile network operators. And I think that's a really interesting model to look at. I think very often we forget that it's just as important for mobile network operators to be reaching as many people as possible as it is for those of us who are trying to improve health through something like digital health. And so if there are aligned priorities, then there are very good ways that you can work together. One of the ways that we've worked with mobile network operators in South Africa has been to reduce the cost of sending messages out to citizens of the country. And that's been done not in a way that prohibits the mobile network operators from making a profit, but what it does do is make it completely free for the end user. So if it's completely free for the end user, you're reducing the barriers for them to be able to access this kind of information. And the reduced cost is then something that can be brought to the table because of the increased scale of access.
So the more we scale out these programs, the more we're able to see economies of scale, and the more worthwhile it then becomes for mobile network operators to engage with us. And so one of the very interesting models that's been used was to reduce churn. If people can only access information using, say, an MTN SIM card, they're less likely to switch to other SIM cards. And so being able to align the desires of a digital health organization or government with those of mobile network operators is incredibly important for ensuring that you're working towards the same goal, but without anyone asking for any handouts, because that's not going to work. When it comes to strategies for decreasing inequity, I think the one that we really need to talk about more is being human-centered. And that doesn't just mean designing for people and occasionally having them attend a focus group. It means designing with them and ensuring that the service is actually something that they want to use, something that they love using. Make it easy and intuitive for them to use. No one takes a course on how to use Facebook before they use Facebook. We shouldn't create services that need so much upskilling. We should create services that are simple and easy for people to use. You need to use appropriate language and literacy levels. And this is something that the medical fraternity often forgets about, because it is a very patriarchal society. Make it something that is at least close to free for people to access. We find that access to a mobile device is less of a problem than the cost of data, for example. So just because somebody has access to a device doesn't mean that they're going to be able to go and look up information, because they may not have data on their phones. So you can work very closely to reduce the cost or make it zero cost. And that's really going to ensure that you reduce the barrier to access.
And then you really have to try and think about the system that you're in. By creating a digital health solution, are you overburdening the health system that already exists, for example, or are you reducing the burden on it? Are you creating feedback mechanisms that mean you can understand the impact you're having on the system itself, rather than working within a vacuum? Are you making sure that where a digital health solution may not be accessible to somebody, there is an alternative in place that does not rely on the digital health solution? We can't just operate within silos. We have to think about the fact that digital health is just as much a part of health infrastructure as the physical facilities, for example. Until digital health is seen as just as much of an infrastructure, it's going to be a fun project on the side and not something that's going to effect systemic change. And so it's really important for us to think about that system. And then recognizing biases: I think Geralyn mentioned this. Very often the people who are creating digital health services are not the people who are using the digital health services. So this goes back to why human-centered design is so important, but it's also important to understand that you will be introducing biases if the people who are building the system are not the people who are using the system. And so you have to look more systemically. Look at the makeup of your team. How diverse is the makeup of your team? I would assume, having been an electrical engineer myself, that it's probably not particularly representative from a gender or race perspective. So look at the team that you have. How are you working to make your team more representative, and therefore address some of the biases that are going to be put in place by having a non-representative team building the systems?
So there's a bunch of things in there, but I guess in summary: build with the end user in mind, make it human-centered, make it easy to use, appropriate, and intuitive. Design with the understanding that you work within a system, and make sure that you don't have unintended consequences and that you're always getting feedback to understand what the impact on the broader system is. And ensure that you think about the biases that are going to be inherent in the fact that the people building the system are not necessarily the people using the system.

Moderator:
Thank you very much, Ms. Debbie. And now moving on to Professor Gupta. So based on your background in advising the Health Minister of India and drafting national policies, how can governments play a pivotal role in addressing the intersection of health equity and the digital divide, particularly in the context of healthcare access for marginalized communities, and also what policy measures should be prioritized to ensure equitable digital health access?

Ravindra Gupta:
Thank you, Connie. Thank you, Connie. This depends on the economic status of the country. So when you have an LMIC country like India, I'll give you an example of what was done. We understand that there is a sizable population which is underprivileged, which is marginalized. So there was a scheme that was launched for 550 million people. And you have to understand that countries are at different phases of development: they require investments in infrastructure, they require investments in health and education, and it's not possible to give these sectors the amounts they actually deserve. So what was done very carefully, and since I was involved in drafting the health policy, I played a role in that, was that we carefully trod the path of saying, let's first make primary care a comprehensive primary care. So first guarantee primary care that's comprehensive, that includes chronic disease management and all those things. Then let's convert the sub-centers and primary centers into health and wellness centers and put telemedicine in as a part of it. So what happens is that 160,000 health and wellness centers now across the country offer you telemedicine. Then we created the e-Sanjeevani program, which is a telemedicine program through which you can get a doctor consultation for free, across specialties. That's why it's at 120 million consultations. And now what's going to happen is we're putting AI and NLP into that. Given that India has 36 states and union territories and people speak different languages and their dialects are different, a patient from a southern state talking to a doctor in a northern state will speak in his own language, and the doctor will hear the patient's problem in his. So I think India has planned its strategy for addressing the vulnerable and the underprivileged sections as it charts its course of development. One part is to integrate technology into care delivery right from primary care.
So that has proven itself, as I said: 460 million health records, 550 million people given insurance of a very decent amount, I would say, which typically a middle-class family could afford. So on the policy side of digital health, what India has, as we speak, is probably the largest implementation of digital health that is happening in any country. And I would bring in here one point: the government has not only to take the stewardship, but also the ownership of investing in digital health. Debbie would understand very well that digital health is still figuring out the business model. That's why you see the largest companies have withdrawn from digital health, and however much they can give talks at forums, their investments are in futuristic technologies, which are probabilistic technologies. The companies that forayed into it years ago don't exist on the map. So I think governments have to play a frontal role in investing, like the Indian government has done. They set up a National Digital Health Mission, rolling it out across states, ensuring that everyone has what you call the Ayushman Bharat Health Account number, the ABHA number. And we will actually probably be the first country to work towards what I have championed: let's work to make digital health for all by 2028. And this is for those who work in healthcare, and more so in public health. Forty-five years back in Almaty, we promised health for all by 2000. Twenty-three years after the deadline, we are still not close to that. At least we can champion digital health for all by 2028. If that is one objective we pursue as governments across the world, I think a lot of issues will get addressed, because there is a whole lot of planning that will go into doing that. And it's doable. That's the only way you can address the issue of health equity. Because the practical part is that doctors who study in urban areas do not want to go to rural areas. They will not. I mean, even if you push them to, they will find a way to scuttle that.
But the only way you can do it is to get technology into their hands, with mobile phones. I think now the systems are fairly advanced. Tomorrow we are hosting a session on generative conversational AI in low-resource settings. So you can have chatbots interacting with people, addressing their basic problems. And 80% of the problems are routine, acute problems. So I think we need to leverage technology not only as a policy, but as a program. And there are best practices available. India has them, parts of Africa have them, but these are like islands of excellence. I think forums like these are good for discussing whether they can be mainstreamed from islands of excellence into centers of excellence, and we can replicate and scale those programs. So I think India probably has a good story, as we speak, about the scale-up of a digital health program. But again, the key point is that the federal government has to be the funder for the program. And where do you start? A health helpline. If you really want to address the inequities, start a health helpline where people can pick up the phone, talk to a doctor or a paramedic, and get a consultation free of cost. Get into projects like e-Sanjeevani, which I think the country is offering to other countries as a goodwill gesture, where you connect to district hospitals and tell doctors to allocate time for doing digital consultations. These programs actually help you bridge the digital divide. And health and wellness centers: a phenomenal experience of 160,000 health and wellness centers which have telemedicine facilities. So, picking up the cue, I would say it's time for implementation. Policy-wise, I think we all know that; I think we very clearly said it's getting integrated. And in fact, I go a line further and say: if you're not into digital health, you're not into healthcare. Don't talk healthcare. That's the truth, actually.

Moderator:
Thank you. Thank you very much, Professor Gupta. And finally, Yawri, drawing from your experience in speaking about youth in cyberspace and internet governance, how can young advocates actively participate in shaping internet governance policies to ensure that digital health resources are accessible and equitable for all, regardless of socioeconomic status or geographic location? And also, what are some successful examples of youth-driven initiatives in this context? Over to you.

Yawri Carr:
Thank you very much. Well, in the realm of youth in cyberspace and internet governance, empowering young advocates to actively shape internet governance policies is crucial for ensuring equitable access to digital health resources. Young advocates can play a transformative role in policy discussions by engaging in many ways. First, by participating in the IGF, because with this active participation we start to break the ice in how to discuss, how to have dialogues, how to ask questions. And all of these activities, even though they seem very routine to experienced people, are for youth ways to break the ice and to gain confidence in participating in public debates. And they also get insights into current challenges and opportunities in digital health governance. Second, the formation of youth coalitions: young advocates can form coalitions or networks dedicated to digital health equity. These coalitions can amplify the collective voice of young people advocating for policies that prioritize accessibility and inclusivity in digital health. For example, we have the Internet Society youth group, and we have different regional youth initiatives, and a chapter on digital health could also be opened so that coalitions can go deeper into these specific topics. Third, engagement with multi-stakeholder processes. So not just the IGF, but also other kinds of processes that are led by governments, NGOs, or industry stakeholders. Their participation ensures that diverse voices contribute to shaping policies that consider the needs of all. And it is also important in this circumstance that the public sector, industry and NGOs open these kinds of opportunities for youth, and that they actively seek out youth who could participate in their processes as well.
Because if they don't do it in such a direct way, youth, as I mentioned before, could feel intimidated and think that they are not experienced enough to participate. Fourth, youth-led policy research. Young advocates can initiate research projects to understand the specific challenges faced by marginalized communities in accessing digital health resources, because evidence-based research can be a powerful tool for advocating targeted policy changes. And I think this is a real possibility in many countries that have the resources for research, but it is still very far behind in countries, for example, in Latin America, where we don't have so much support from public foundations or from the government to do research, and we also don't have such a big research focus in our universities. So I think maybe one professor can bring this kind of perspective and inspire students to form a research group. For example, in universities in Brazil they have student groups which meet on some day of the week, or once a month, and discuss specific topics. I think this is a good practice, so that youth can start to create, to discuss, and to bring these topics to their university and to other colleagues and classmates. Of course, it would be great if some countries could also start to help other Global South countries, so that they can have more research and so that students can participate more in these kinds of initiatives in their own countries. Also, innovation hubs for digital health: hubs in which young innovators, healthcare professionals and policymakers can create solutions together.
In this sense, it would also be good to have funding from an organization or a company that can collaborate, so that these kinds of innovations can have some starting financial resources, and so that youth can feel that they are able to become innovators in this kind of field. And I also think that this kind of innovation addresses gaps in digital health accessibility. Some examples of youth-driven initiatives are, for instance, digital health task forces: in several regions, youth-led task forces focus on creating policy recommendations for integrating digital health into broader internet governance frameworks. Also youth-led data privacy campaigns, in which youth can, for example, create dialogues in various communities and raise awareness about the importance of robust data privacy measures in digital health technologies, so that ordinary people and patients can also understand why it's important to protect their privacy when they access some kind of digital health tool. And global youth hackathons for health, with health challenges where they can develop innovative apps and platforms addressing specific healthcare needs in these youths' own communities. And I also consider another action: this movement for paid internships, so that students can have access to internships that are paid, and can thereby participate equally in the practical application of what they are learning or studying at university. So, well, I think that by actively participating in these initiatives, young advocates contribute fresh perspectives, innovative solutions and a commitment to digital health equity in internet governance policies, because they are digital natives.
I consider they could rapidly understand how the technologies can help them, but also their challenges and their issues, and they can also become more active, as they are not just the future, but also the present. So, thank you.

Moderator:
And also, thank you once again to the panel for their responses. And so now we'll move on to the Q&A session. So, if any on-site participants would like to raise their questions, please feel free to walk up to the mic.

Audience:
Hello, I'm Nicole. I'm a Year 2 student in Hong Kong. In the case of another pandemic like COVID-19, how do you think current digital health can be developed and improved to contribute to society's recovery, and to ensure that each individual can receive accurate, consistent medical advice and treatment without physically visiting a healthcare facility, which will be crowded with many people, including the elderly? Thank you.

Deborah Rogers:
Thanks. I think actually looking at some of the work that was done during COVID-19 is a really good example of how we can use digital health to address issues that come up during a pandemic. I think one of the things that has really been a challenge in the work that we do is that we speak directly to citizens and empower them in their own health. Given that the medical fraternity is quite patriarchal, that’s not usually a priority. What we found is that when an issue is something that happens to somebody else, then it isn’t seen as a need to provide people with the right information. But when COVID-19 happened, everybody was affected. Nobody had the information. It didn’t matter if you were the president of the country or if you were a student at a high school. No one had the information about the pandemic that was needed. So we were able to use really large-scale networks, things that were already there like Facebook, like WhatsApp, like SMS platforms, to be able to get information to people extremely quickly. In a time when the information was changing on a daily basis, this wasn’t something where you could take a lot of time, think through things and put up a website and think about how things are going to be talked about. This was happening in real time. So you continually had to be updating things. People continually had to get the latest information. And without that, many more people would have died than did already in the pandemic. I think what’s important, though, is for us not to forget the lessons of COVID-19. We very quickly forget, as human beings, when things go back to so-called normal, we very quickly forget the lessons that we learned. And so I think one of the really important things that needs to continue from COVID-19 is an understanding that knowledge is power in the patient or citizen’s hands. 
And this isn’t something that needs to be hoarded by the medical fraternity, that by giving information to people at a really large scale, you can improve their health and you actually make your life easier at a time when you are most needed. Digital health can’t replace a healthcare professional, but it certainly can reduce the burden for healthcare professionals. And so that’s a really important thing that we need to continue to consider as we move on from COVID-19. I think the other thing to remember is that we built up platforms, digital health platforms, that solved problems during COVID-19. Screening for symptoms, for example, gathering data that could be used for decision-making, sending out large-scale pieces of information to people. Many, many people in the digital health space reacted very quickly and created incredible platforms that could be used to solve the problems during COVID-19. Many of those no longer exist today. And so we need to remember that there needs to be an investment in digital health infrastructure in the long term so that we don’t have to spin up new solutions every time there is a new pandemic, because there will be another one. It’s not something that is going anywhere. So how are we preparing so that when the next pandemic comes, we’re not having to start from scratch all over again? And I think that’s something that we very quickly have forgotten.

Moderator:
I want to take a minute and address that as well, if you don’t mind.

Geralyn Miller:
A couple of things from the pandemic, and that's a really great question, because as a society we want to learn from the past. There are two areas that I think are worth bringing forward from the pandemic. First, there is incredible value in cross-sector partnerships. Public, private and academic partnerships sought to light up research on understanding the virus and to do things like drug discovery. Some of these were government-sponsored consortia, others were privately funded consortia, and a third class was simply similar groups of people coming together, what I would call almost community-driven groups. So that cross-sector collaboration is the first thing. Second, there is some good standards work done during the pandemic that could be brought forward. We saw the advent of something called smart health cards during the pandemic. Smart health cards are a digital representation of relevant clinical information; during the pandemic they were used to represent vaccine status. Think of it as information about your vaccine status encoded in a QR code. There has been an extension of that, something called smart health links, where you can encode a link to a source holding a minimum set of clinical information, and it's literally encoded in a QR code that can be put on a mobile device or printed on a card for somebody who doesn't have access to a mobile device. Smart health cards also reinforce the concept behind some of the work being done by the IPS, or International Patient Summary, group, which is trying to drive a standard for representing a minimal set of clinical information that could be used in emergency services. Some of those things that happened in the standards bodies were very powerful during the COVID-19 pandemic, and I would love to see more momentum around driving those use cases forward and expanding them. Thank you.
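Geralyn's description of a smart health card as "information about your vaccine status encoded in a QR code" corresponds to a concrete step in the published SMART Health Cards framework: the signed JWS token is packed into QR numeric mode as two-digit pairs. A minimal round-trip sketch of just that packing step (the real framework also signs, verifies and can split payloads across multiple QR codes, all omitted here; the sample token is an illustrative fragment, not a real card):

```python
# Sketch of the QR "numeric mode" packing used by the SMART Health
# Cards framework: each character of the JWS string becomes the
# two-digit value ord(char) - 45, behind an "shc:/" prefix. Numeric
# mode lets a fairly long token fit in a dense QR code.

def encode_shc(jws: str) -> str:
    """Pack a JWS token into an shc:/ numeric string."""
    return "shc:/" + "".join(f"{ord(c) - 45:02d}" for c in jws)

def decode_shc(shc: str) -> str:
    """Unpack an shc:/ numeric string back into the JWS token."""
    digits = shc[len("shc:/"):]
    return "".join(
        chr(int(digits[i:i + 2]) + 45) for i in range(0, len(digits), 2)
    )

# Illustrative base64url fragment only; a real card carries a full
# signed JWS with the FHIR bundle in its payload.
sample = "eyJhbGciOiJFUzI1NiJ9"
assert decode_shc(encode_shc(sample)) == sample
```

The offset of 45 works because base64url and JWS punctuation characters all fall in a contiguous printable range, so every character maps to a value between 00 and 99.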

Moderator:
Thanks.

Ravindra Gupta:
Firstly, another COVID shouldn't happen. Second, I don't think technology failed at any point; actually, it proved that it was ready. Look at the fast-track development of vaccines, with researchers collaborating across the globe through technology, repurposed drug use, artificial intelligence. Almost every country used a COVID app; our country delivered 2.2 billion vaccinations, totally digitally. So digital health proved that it was ready, and it is ready. Challenges will come, but technology is what saved lives. We wouldn't be sitting in this room, trust me, if technology wasn't around. The only thing we should do through forums like this is keep the momentum going. What we must not do is forget COVID and go back to the old ways. There were incentives given by governments, and flexibilities offered in terms of continuing telehealth regulations, as in the United States; those should become permanent. That's all we should do. Technology has already proved that it's ready; we were just waiting for COVID to shake us into using it. Technology will always be ready for anything that comes our way. Thank you.

Moderator:
Geri, would you like to provide a response?

Yawri Carr:
Yeah. I just wanted to say that I consider that, in the situation of a pandemic, telemedicine and also the implementation of robots, as in the case I mentioned previously, are of huge importance and could be very useful, taking into consideration that it is very dangerous for humans to attend to or take care of people because of the risk of contagion. So I think that in these specific scenarios the application of telemedicine and robots is particularly useful. Of course, taking into consideration that it is an emergency, the robots should not be working alone; they should be guided by humans. But at least they also protect workers such as nurses, who make up much of the workforce yet are not so valued in different societies, because the tasks that nurses do, for example, are often considered dirty or of little importance. So these kinds of technologies can protect not just the health of the patients infected by COVID or another pandemic, but also the work of medical professionals such as nurses, who are normally very exposed. On the other hand, I also remember the Open Science initiative that my country, Costa Rica, proposed to the World Health Organization, so that the initiatives, projects and research done in the context of a pandemic are opened up and kept available to every person who is interested, and the data can be accessed without having to pay or to patent it. I consider this of extreme importance too, because in a case of emergency we just don't have time for that, and we should really try to cooperate with each other and respond to the emergency in a holistic and collaborative way. Thank you.

Moderator:
Thank you very much to the panel for your responses. Are there any other on-site questions? If not, I'll take a question from the chat: what are some emerging trends and future directions in digital health literacy, and what do you suggest individuals do to stay informed and up-to-date in this rapidly evolving field, ensuring they have accurate guidance and avoid outdated information?

Ravindra Gupta:
I'll take that, because of a couple of initiatives we are running. One is on the technical community side: within the health parliament that I run with my team, we have created co-labs. We are creating developers for health, working with companies like Google and others, because I think what we need to do is create developers to solve problems. That's one initiative, for people who are enthusiastic about being technical contributors to the digital transformation of health. The other thing: in the next three months, we'll be starting elementary courses for class 8 students on robotics and artificial intelligence. We want to educate them very early on so that they can choose what they want to do and be aware of what the opportunities are. In the same way, we are doing very elementary-level courses for people to understand the field rather than deep-diving into tech. And to everyone who is in health, I would strongly recommend: if you don't know digital health, you will hit a zone of professional irrelevance. Please update. Whatever you do, whether it's a one-week or two-week course, just make sure that you know digital health from an ecosystem perspective. Thank you.

Moderator:
Would any other speaker like to take the question?

Geralyn Miller:
Yeah, just a few comments on that. It's always a challenge, at the pace of innovation we're seeing today, to keep current. So I want to call out and acknowledge our panel here today and the people who put it together and gave us this opportunity. This is one way that the dialogue starts and that information is shared, so more opportunities for people of similar interests to come together will always help advance the state of our understanding. Opportunities like this, and training as well; and it's not just training from tech providers, it's training infused into the academic system too. So I would agree with what Dr. Gupta said there. But again, a call-out to the folks who put together this panel, because I think this is one way that that starts. Thank you.

Moderator:
Thank you very much, Ms. Geralyn. We have about five minutes left, so maybe we could go to the closing remarks from each of the speakers, starting with Ms. Debbie.

Deborah Rogers:
I guess my closing remark would be that technology is a great enabler. It can actually be used to decrease the inequity that we see in health, but also in digital literacy. I am actually very positive about the future that we see with digital health. And I think Dr. Gupta is right, the technology is ready. We’ve seen many case studies where things have been done at a really large scale. This is no longer a fledgling area. This is now a mature and really large scale area of practice. And so I’m really excited to see what happens from this point. And I’m excited to see that we have youth involved in this panel because yes, absolutely, youth will be the people who will be building the next evolution in this space. So really excited to see how that works and to see how things evolve from here.

Ravindra Gupta:
I would say that in this age, where patients are more informed than anyone about health conditions and treatment options, it is high time doctors learn about them before patients start telling them: "You don't know about it? Let me tell you, I saw this." So first, digital health is something that everyone in healthcare, whether a clinician or a paramedic, needs to learn. Second, if you're talking about digital health, scalability comes first. So continuously upskill and cross-skill yourself.

Moderator:
And lastly, I must say thanks, Connie, for putting up this wonderful panel discussion. Ms. Sterling?

Geralyn Miller:
Yeah, first off, I want to start by expressing my gratitude for being included in this. It was a wonderful opportunity. I want to echo the sentiment that youth play a huge role in this going forward, and I'm very appreciative that you brought everybody together under this umbrella. From a tech perspective, I agree with the panelists: digital health is here now. The one part I would add is that when we're thinking about new, evolving technologies like generative AI, let's do this in a responsible way and open the dialogue around policy. Discussion is always healthy, and let's make sure that this technology we're bringing to light, with good intent, benefits everyone. Thanks.

Yawri Carr:
Well, in my case, in conclusion: let us strive to be digital health leaders equipped not only with technical skills, but also with a profound commitment to equity. I consider that valuing the work of nurses is very important, even as technology evolves; of course, human professionals will still be very necessary, and it is a fact that technology can help us protect them, and the patients, in situations of emergency. We should also value the work of ethicists, so that when they have something to say they are not undervalued but taken into consideration, also when there are conflicts with, for example, profits, so that ethicists can have an opinion and contribute to the mission of responsible AI rather than just being there as decoration. And of course, the role of youth is fundamental. All the youth-led initiatives that could strengthen the mission of digital health literacy today can develop, in the future, in a very good environment that is inclusive of marginalized communities and the whole population. So I consider that health care and digital health care should no longer be a privilege, but a right. And yes, I'm very thankful for the opportunity to be here, to express my opinions and to talk about youth as well. Thank you very much.

Moderator:
Thank you very much once again to the panel for your insightful responses; that closes today's workshop. Thank you very much for coming, and together we hope we can create a future where digital health resources are accessible and equitable, and can empower individuals to navigate their health journey confidently online. Thank you.

Speech statistics

Audience: speech speed 148 words per minute; speech length 69 words; speech time 28 secs
Deborah Rogers: speech speed 171 words per minute; speech length 2898 words; speech time 1020 secs
Geralyn Miller: speech speed 173 words per minute; speech length 2706 words; speech time 940 secs
Moderator: speech speed 171 words per minute; speech length 1670 words; speech time 587 secs
Ravindra Gupta: speech speed 206 words per minute; speech length 3416 words; speech time 997 secs
Yawri Carr: speech speed 146 words per minute; speech length 3219 words; speech time 1325 secs

Beyond development: connectivity as human rights enabler | IGF 2023 Town Hall #61



Full session report

Nathalia Lobo

Brazil conducted a 5G auction in November 2021, with the majority of the revenue, over $9 billion, being allocated towards coverage commitments. This substantial investment showcases the country’s dedication to advancing its technological infrastructure. The auction resulted in a positive sentiment, as it is anticipated to greatly enhance connectivity in Brazil.

One notable project aimed at improving connectivity is the North Connected project, focusing on the North and Amazonic region. This initiative plans to deploy a comprehensive network of 12,000 kilometres of fibre optic cables into the Amazon riverbeds, ensuring efficient and reliable connectivity. The maintenance of these cables will be handled by a consortium of 12 different operators. The positive sentiment surrounding this project indicates its potential to significantly enhance connectivity in this region.

Furthermore, the impact of these connectivity projects extends to critical sectors such as healthcare and education. With the support from the funds generated by the 5G auction, hospitals that previously lacked internet access can now effectively manage patient information through online systems, providing better access to resources and improving healthcare services. Additionally, schools are being connected through the funding from the auction, enabling better education opportunities and facilitating digital learning.

Efforts are also being made to make the benefits of internet usage tangible and viable for the people of Brazil. The allocation of funds for connecting schools underscores the commitment to providing equal educational opportunities to all. Additionally, investments from the auction funding will be allocated to various projects, ensuring the overall development of the country’s digital infrastructure and making internet accessibility a reality for all.

While community networks play a crucial role in ensuring connectivity, they have specific needs that require tailored directives and policies. These networks operate in unique ways, making it challenging to create standard policies. A positive stance emphasizes the importance of understanding the needs of community networks and structuring effective public policy to support their viability. A working group has been established to study these specific needs and draft viable policies and directives.

One significant outcome of the efforts to improve connectivity is the North Connected project, which aims to bring competition and lower connection prices in the region. By increasing the number of service providers and fostering healthy competition, consumers can benefit from more affordable and accessible connectivity options. At least 12 new companies will operate in the region as a result, indicating a positive impact on reducing inequalities and improving access to digital services.

However, there are concerns regarding illegitimate community networks that don’t operate with the same level of efficiency and reliability as legitimate networks. Differentiating between legitimate and fake networks becomes imperative to ensure that public financing is not misused. The need to regulate and monitor community networks to prevent misuse highlights the challenges faced in this sector.

Overall, the initiatives and projects aimed at enhancing connectivity in Brazil signify a positive transformation in terms of infrastructure and access to technology. Community networks offer meaningful connectivity and foster learning about the digital world within communities, complementing the efforts of Internet Service Providers. The government continues to grapple with the challenges and responsibilities associated with supporting the growth of community networks, highlighting the need for a balanced approach to drive equitable and inclusive digital development in Brazil.

Audience

The analysis delves into various arguments surrounding the topic of internet access and community networks. One argument highlights concerns about the current system of charging for data, particularly as it is seen to benefit more developed communities, while placing a financial burden on users. The speaker expresses worries about high bandwidth resources, like videos, requiring more financial resources from users, thus exacerbating inequalities. This argument reflects a negative sentiment towards the capitalization of bandwidth.

Another critique focuses on the current telco model, suggesting that educational resources should not be dependent on a person’s ability to generate income. The speaker questions why access to educational resources should be gamified and proposes a different model where users can directly access frequently needed resources. This perspective aligns with SDG 4 (Quality Education) and SDG 10 (Reduced Inequalities). It also carries a negative sentiment.

On a positive note, the case of Finland is cited as an example of a more honest and beneficial business model for data plans. In Finland, data plans do not have a data cap but differentiate based on speed, providing everyone with a flat rate. This positive sentiment is supported by evidence of negligible variable costs for data volume, especially in mobile services.

However, the analysis reveals that the project Wikipedia Zero, which aimed to provide zero-rated access to Wikipedia versions, failed to gain substantial traction and was discontinued in 2017 due to low access numbers. This is considered a negative outcome for the project.

The analysis also highlights the importance of revisiting access points and connection questions in communities facing struggles related to conflict and climate change. Access numbers to internet services, such as those provided by the Wikipedia Zero project, are questioned in communities where zero connectivity is a resilience measure. This observation reveals a neutral sentiment towards the subject.

Additionally, the high costs associated with community networks in remote areas are flagged as a concern. It is noted that individuals in Chihuahua and Mexico need to have a daily income of at least $3 or $4 to afford connectivity, while some communities resort to engaging in illegal activities to finance their access. This negative sentiment highlights the financial challenges faced by communities in remote areas when it comes to internet access.

The analysis further reveals that community networks sometimes depend on weak infrastructures, which affects the quality and reliability of their services. This observation adds to the negative sentiment surrounding the topic.

Issues with government policies regarding access to fiber networks for communities are also raised. The analysis suggests that despite the presence of fiber networks near communities, government policies restrict their access. However, there is optimism about future developments, particularly with the Amazonian network project.

The operation of community networks is shown to vary depending on their context. For example, community networks in Africa function more like small businesses, while those in Mexico or Colombia display a greater level of political organization. This insight highlights the diversity in the operation models of community networks.

Lastly, the misconception of poor service quality in community networks is challenged. The analysis presents evidence of good performance and positive impacts of community networks in communities. This positive sentiment encourages a re-evaluation of misconceptions and brings attention to the potential benefits of community networks.

In conclusion, the analysis provides a comprehensive overview of various arguments and perspectives on internet access and community networks. It highlights concerns about the current system of charging for data, critiques the telco model, and examines alternative approaches. It presents the case of Finland as an example of a more beneficial business model for data plans. It also discusses the discontinuation of the Wikipedia Zero project and raises questions about access to internet services in communities facing challenges like conflict and climate change. The analysis examines the high costs and weaknesses in infrastructure associated with community networks. It points out issues with government policies regarding access to fiber networks and highlights the diversity in the operation models of community networks. Finally, it challenges misconceptions about poor service quality in community networks and emphasizes their positive performance and impacts in communities.

Jane Coffin

This extended summary delves into the importance of diverse networks, grassroots advocacy, community networks, public-private partnerships, and structural separation networks in the context of global internet access and connectivity. These points are supported by various pieces of evidence and arguments.

Firstly, the importance of diverse networks is highlighted, with a focus on how they contribute to global internet access, lower prices, and reaching more people. It is demonstrated through the challenges faced by Liquid Telecom in deploying fibre from Zambia to South Africa due to complications, as well as the significance of connectivity being delayed by regulatory issues. This highlights the need for diverse networks to ensure better access, affordability, and inclusivity in the global internet landscape.

The significance of grassroots advocacy and multi-stakeholder approaches in promoting connectivity is emphasised. Personal experiences of working on community network projects are shared, underscoring the collective power of communities in negotiating with governments. This highlights the role of advocacy and partnerships in bridging the digital divide and ensuring that connectivity initiatives are inclusive and sustainable.

The effectiveness of community networks in providing connectivity in regions where major providers struggle to make a profit is discussed. The example of East Carroll Parish, Louisiana, where a community network was utilised to provide connectivity, exemplifies how these networks can fill gaps and offer diverse types of connectivity. This showcases the potential of community-driven initiatives in expanding internet access to underserved areas.

The role of public-private partnerships and innovative financial models in funding connectivity projects is emphasised. The Connect Humanity project is cited as an example. This underlines the importance of collaboration between public and private sectors, as well as the need for innovative financing mechanisms, to overcome financial barriers and ensure sustainable investment in connectivity infrastructure.

Structural separation networks are presented as a viable option for reducing costs and improving connectivity. These networks involve one party managing the network while others operate their services on it. This model is being explored in parts of the US, where municipalities are demanding greater accountability. The potential cost-efficiency and improved connectivity offered by structural separation networks make them an attractive option for expanding global internet access.

Lastly, the summary highlights that communities running their networks can deliver reliable connectivity. It stresses that such networks are not unreliable but are managed by skilled technologists. These community networks require a long-term business plan and substantial financial backing to ensure sustainability. This insight underscores the importance of community involvement and support in achieving sustainable and robust connectivity solutions.

In conclusion, the extended summary underscores the importance of diverse networks, grassroots advocacy, community networks, public-private partnerships, and structural separation networks in promoting global internet access. These insights are supplemented by evidence and arguments from various sources. It is evident that a multi-faceted approach, involving collaboration, innovation, and community empowerment, is crucial for achieving connectivity goals and bridging the digital divide on a global scale.

Raquel Renno Nunes

The analysis explores the important issue of connectivity and stresses the significance of internet accessibility as a fundamental human right. Notably, Raquel Renno Nunes plays a crucial role in this area as a program officer at Article 19, where she addresses various connectivity issues. Her responsibilities mainly focus on infrastructure and involve collaboration with standard-setting organizations such as the ITU and ITU-R.

Undoubtedly, the COVID-19 pandemic has highlighted the pivotal role of connectivity, particularly in enabling the right to health. The outbreak has underscored the critical need for accessible and reliable internet connections to ensure the well-being and improved access to healthcare services for all individuals. In this context, connectivity has emerged as a vital tool in bridging the digital divide and reducing inequalities.

One of the central debates surrounding connectivity revolves around whether internet access should be treated as a human rights issue or simply as a commercial service. There are two contrasting ideologies on this matter. On one hand, the viewpoint advocating for the recognition of internet access as a basic human right argues that governments and relevant organizations should ensure equal access and availability of the internet for all individuals. On the other hand, some argue that internet access should function solely as a commercial service, subject to market forces and individual affordability.

The discussion aims to bring together these differing perspectives and comprehend the merits of each argument. Its goal is to comprehensively explore the concept of connectivity and determine whether all forms of connectivity are inherently beneficial. By considering these diverse views, it becomes possible to develop a more nuanced understanding of the issue at hand.

In conclusion, the analysis underscores the importance of connectivity in our society and examines the debate surrounding internet accessibility as a human right. It highlights the invaluable role of individuals like Raquel Renno Nunes in addressing connectivity challenges and emphasizes the positive impact of accessible internet during the COVID-19 pandemic. The discussion of various viewpoints contributes to a broader perspective on the issue, stimulating further dialogue and exploration of the different facets of connectivity.

Robert Pepper

The analysis of the given information reveals several key points regarding internet connectivity. Firstly, it is stated that 95% of the global population now has access to broadband. However, despite this high percentage, there are still around 2 billion people who are not online. This discrepancy highlights the shift from a coverage gap to a usage gap in internet connectivity. Affordability is identified as the main hindrance to internet usage, particularly in sub-Saharan Africa. The high cost of devices and monthly service is preventing many individuals from accessing the internet.

Furthermore, the benefits of internet access are seen as serving human rights. It is noted that people use the internet for educational purposes, to receive and create information. In fact, a significant 73% of people believe that access and use of the internet should be considered a human right. This highlights the importance of internet connectivity in empowering individuals and promoting equality.

On the other hand, various barriers to connectivity are observed. Infrastructure limitations, such as the backhaul and middle mile, are identified as one of the challenges in getting people connected. The legacy architecture of the telecom termination monopoly is mentioned as another barrier.

In terms of specific services, the concept of zero rating is discussed. Zero rating is the practice of not charging for data usage on specific applications or websites. Discover is highlighted as a net-neutral zero rating service that allows access to any application or website. This service is seen as beneficial as it helps prepaid data users stay connected even when they run out of their data plan.

It is also worth noting that not all zero rating services are considered anti-competitive. Some zero rating services were found to be net neutral and pro-consumer even under the stringent net neutrality rules adopted by the FCC under Chairman Wheeler.
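The prepaid "stay connected after running out of data" behaviour described above can be sketched as a small state model. This is a hypothetical illustration of the idea only, not any operator's actual billing logic; the service names and quota numbers are invented.

```python
# Hedged sketch: metered traffic draws down a prepaid balance, but a
# zero-rated service keeps working at zero balance instead of the user
# being disconnected entirely. All names and numbers are illustrative.

class PrepaidAccount:
    def __init__(self, balance_mb: float, zero_rated: set[str]):
        self.balance_mb = balance_mb
        self.zero_rated = zero_rated       # services exempt from metering

    def request(self, service: str, size_mb: float) -> bool:
        """Return True if the request is allowed to go through."""
        if service in self.zero_rated:
            return True                    # never metered, works at zero balance
        if self.balance_mb >= size_mb:
            self.balance_mb -= size_mb     # normal metered traffic
            return True
        return False                       # out of quota: only zero-rated left

    def top_up(self, mb: float) -> None:
        self.balance_mb += mb              # full internet restored


acct = PrepaidAccount(balance_mb=50, zero_rated={"discover-proxy"})
acct.request("video-site", 50)            # uses up the whole plan
print(acct.request("video-site", 1))      # False: metered traffic blocked
print(acct.request("discover-proxy", 1))  # True: user stays connected
acct.top_up(100)
print(acct.request("video-site", 1))      # True again after topping up
```

The sketch captures the consumer benefit claimed in the text: between top-ups the user retains a basic connection rather than dropping off the network, and no reconnection transaction is needed afterwards.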

The analysis also points out the outdated nature of legacy models and regulations in the telecom industry. The traditional telecom network architecture, engineering, business model, and regulation are based on outdated principles. The emergence of modern flat IP networks has changed the costs associated with data usage, rendering the legacy models irrelevant.

To conclude, the analysis reveals the challenges and opportunities in internet connectivity. While a significant portion of the global population now has access to broadband, affordability and infrastructure limitations remain significant barriers. The benefits of internet access in terms of human rights and empowerment are recognized. Additionally, the emergence of zero rating services and the need for modernization in the telecom industry are highlighted. These findings emphasize the importance of addressing these issues to ensure equal and affordable access to the internet for all.

Thomas Lohninger

The analysis focuses on connectivity and internet access, highlighting several arguments and supporting evidence. One argument is that several current policy debates actively work against the goal of connecting the unconnected and should be confronted rather than allowed to consume time and resources. The discussion points out that although promises have been made about 5G, the actual impact and benefits of the technology have not materialized. There appears to be no new technology enabled by 5G, and little reason for consumers to upgrade from a current 100 to 300 megabit connection.

The analysis also highlights the negative consequences of network fees or “fair share” contributions. It suggests that proposed fees could harm smaller networks and lead to increased fragmentation of the internet. Another important argument raised is the potential negative impact of zero rating, where certain companies are given an unfair advantage. This practice could potentially violate net neutrality and hinder efforts to achieve reduced inequalities in connectivity.

Thomas Lohninger, in particular, raises concerns about zero rating programs limiting consumer choice and hindering innovation. He highlights examples such as a Smart Net offering in Portugal, where the affordability of certain services compared to others raised concerns about an “internet à la carte” system. The analysis also explores alternative approaches to data plans. It suggests that instead of having data caps, data plans could be differentiated based on the speed of internet access, as implemented in Finland. This is seen as a more honest and efficient business model for telecom companies.
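The contrast between cap-based and speed-based plans can be made concrete with a minimal sketch. The tiers and prices below are invented for illustration; the only point is that under the Finnish-style model described above, volume never cuts a user off, only the access speed differs between tiers.

```python
# Hedged sketch of two tariff models: a data-cap plan (service stops after
# N gigabytes) versus a speed-tier plan (unlimited volume, differentiated
# only by speed). Prices and tiers are made up for illustration.
from dataclasses import dataclass


@dataclass
class CapPlan:
    price_eur: float
    cap_gb: float

    def allowed(self, used_gb: float) -> bool:
        return used_gb < self.cap_gb    # volume is the limiting factor


@dataclass
class SpeedPlan:
    price_eur: float
    speed_mbps: float

    def allowed(self, used_gb: float) -> bool:
        return True                     # volume never cuts the user off


cap = CapPlan(price_eur=20, cap_gb=10)
speed = SpeedPlan(price_eur=20, speed_mbps=50)
print(cap.allowed(12))    # False: the user is over the cap
print(speed.allowed(12))  # True: only the speed differs between tiers
```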

The analysis also notes that total data consumption does not necessarily impact network operation unless it leads to congestion. It criticizes the use of legacy models based on minutes of use, distance, and time, which are no longer relevant in today’s data networks. Noteworthy observations include the termination of projects like Wikipedia Zero, which aimed to provide free access to specific services. The low usage of Wikipedia Zero led to its discontinuation.

Furthermore, it is suggested that corporations could make better use of available bandwidth by offering flat rates during off-peak hours. The analysis argues that in instances where the mobile network is often unused during late-night hours in countries with connectivity issues, the refusal to open the floodgates is primarily due to corporate greed, rather than capacity or cost issues.
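The off-peak idea can be sketched as a simple metering rule: traffic during an assumed idle window is not counted against the user's quota. The 01:00 to 05:00 window below is an assumption for illustration, not a real tariff.

```python
# Hedged sketch of off-peak flat rating: late-night traffic, when the
# network is otherwise idle, is exempt from metering. The window is an
# illustrative assumption, not any operator's actual policy.

OFF_PEAK_HOURS = range(1, 5)   # 01:00-04:59, assumed idle window

def charged_mb(hour: int, size_mb: float) -> float:
    """Megabytes to deduct from the user's plan for this transfer."""
    return 0.0 if hour in OFF_PEAK_HOURS else size_mb

print(charged_mb(2, 500))    # 0.0: off-peak, the capacity is idle anyway
print(charged_mb(20, 500))   # 500: peak-hour traffic is metered as usual
```

The point of the rule mirrors the argument in the text: since idle capacity costs the operator essentially nothing, exempting off-peak traffic is a capacity question, not a cost one.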

In conclusion, the extended analysis emphasizes the need to prioritize connectivity and internet access for all. It proposes addressing debates that hinder these goals, critiquing telecom industry PR campaigns, and examining the consequences of network fees, zero rating practices, and data plans. The analysis suggests alternative approaches such as bandwidth-based data plans and flat rates during off-peak hours to optimize available resources. These insights provide valuable perspectives for policymakers, businesses, and individuals involved in promoting inclusive and accessible internet connectivity.

Session transcript

Raquel Renno Nunes:
from Brazil, and we are just testing the camera. She would also need us to go to the presentation, because she's going to show some PowerPoint slides, and we're going to be doing it from here because she's not able to access the link via her computer. But anyway, I'll start by introducing myself. I'm Raquel Renno from ARTICLE 19, programme officer, and I'm responsible for connectivity issues. I work mostly at the infrastructure level and in standard-setting organizations; my work specifically is in the ITU, ITU-R. Here I'm joined by Natalia Lobo, who is online and is the director of the sectoral policy department of the Ministry of Communications of Brazil; then Robert Pepper, the head of global connectivity policy and planning, ex-FCC, ex-NTIA, who has consulted for other regulators and has vast experience in spectrum management; then Jane Coffin, also an expert with extensive experience in the technical community, public sector and private sector as well, who is currently a consultant and expert providing information on community networks, IXPs, interconnection, peering and community development; and Thomas Lohninger, who is the executive director of the digital rights organization Epicenter Works in Vienna, Austria, and also works a lot on net neutrality issues, specifically in the European Union but not limited to the European Union. This is going to be an open discussion; we're going to take questions and comments from the people here in this room but also online. The idea is to bring different views, as you can see that we have people from different backgrounds here, basically to update and bring different views on connectivity issues. A lot has been said about how connectivity is important, how it's a human rights enabler, specifically after the pandemic, when even the right to health was something related to connectivity. But we still face the digital divide, and we have new kinds of digital divide; some people say that we have many
digital divides. The SDGs still frame access as a development issue, but then we are also facing a human rights issue. So there are two different ideas and assessments of the right to access the Internet: not as a human right, but also not just a commercial service. How should it be tackled, how should it be seen, how should it be framed? So we are here to bring together these different ideas and also to understand whether any connectivity is good connectivity, and what challenges and opportunities we might have nowadays. I would like to start with Natalia, if possible. Or not yet, Lucas? Okay, so we're going to start with the people in this room and leave Natalia for later, is that okay? Okay, so we can start. Please, Pepper, if you

Robert Pepper:
can start there. I think that works better. Thank you very much, and it's great to be here; thank you for the invitation to the panel. So, just maybe to start things off a little bit: one of the things that I've done with Meta, with the Economist Intelligence Unit, now called Economist Impact, was a study we started back in 2017. It was a six-year time series called the Inclusive Internet Index, and it looked at 54 or 55 indicators for a hundred countries, and, you know, you've seen some of this; Brazil does quite well. Early on, the issue on connectivity was coverage: people did not have a broadband connection available to them. Over the last six or seven years, and about three years ago in particular, we saw a shift from a coverage gap to a usage gap. The latest data provided by the ITU and the UN Broadband Commission three weeks ago in New York, at their annual meeting before the UN General Assembly met, was that 95% of the world's population now has a broadband connection available to them. There are about 400 million people, out of the 8 billion, that do not have a broadband connection available, and that remaining 400 million will be served by satellite. That was, you know, a general conclusion, not just by the Broadband Commission but by the GSMA, the mobile operators, the satellite operators who were there in the room, and that becomes extremely important. On the other hand, there are about 2 billion, 2.1 billion people who could be online who are not online. That is the usage gap. So there are about 2 billion people not connected, and then there are people who are under-connected, and this goes to the question about meaningful connectivity. So the question is: why are they not connected? One of the other projects we did with the Economist was a survey of people in sub-Saharan Africa who were not online, in about 23, 24 countries. What they found, and these are the people who are not online, all right, was that the vast majority had a
connection potentially available. The number one reason was affordability of devices and affordability of monthly service. The second reason, the way it was framed, was "I don't know how to use the internet or what to use it for": digital literacy questions. The third major reason was the lack of local relevant content, which also goes to the question of why somebody should be online. And then there's a separate issue with electricity: with no electricity, even if you have devices, you're not going to be able to charge them, and so on. So we've seen this shift from a coverage gap to a usage gap, which is about adoption. But why is that important? It's important because being online, and we especially learned this during the pandemic, provides access to services that are directly related to fundamental human rights: education, the ability not just to receive information but also to speak and create information. This is really fundamental when you take a look at Article 19, not just the organization but Article 19 itself. One of the things the Economist did as part of this project was to go into the field in each of those hundred, well, actually it was 98 countries, because two countries do not permit surveys, so they couldn't go into the field for a survey in two of the countries, but 98 countries, and they asked people, in what they called the Value of the Internet survey: what do you use the Internet for, how do you value it, and how has it improved your life? In the last two years of the pandemic it was specifically focused on questions about well-being. What was interesting, especially during the pandemic, and it changes by region, there are differences in some of the regions, were the questions about education: the way people used the Internet for their children's education during the pandemic when things were shut down. What was a little bit surprising was that people in sub-Saharan Africa felt that being
online was more important for their children's education than people in Europe did. Across the world, as part of that survey, on average three-quarters of people, 73%, year over year, I mean, so it's more than one year, 73% of people said access and use of the Internet should be a human right. One of the things about that that's interesting is that in emerging markets in particular this was the case: in sub-Saharan Africa it was 76%, Middle East and North Africa 75%, Asia 74%, Latin America 71%. In Europe it was only 69%, and North America 57%, because in Europe and North America people take being online for granted. At least, we don't know that that's the fact; that's my, you know, presumption, right, my hypothesis of why that's lower: because people think, oh, it's like turning the water on at the tap. But when you're in parts of the world where you cannot take the Internet for granted, people see it as even more important, as something that really should be a human right. So I'll stop there, and I'm happy to dive into any of the more data later, but I wanted to just sort of set the scene with, you know, connectivity: why it's important for human rights broadly, and how there is real data that reinforces that, from both people who are online and people who are not yet online. Then we can have a conversation about how we get people online so that they can benefit from being online in ways that serve human rights. Thank you.

Jane Coffin:
follow on with another Economist story, from 2014, and I know some of the people involved in this, so it's a true story. Well, I know the company: it's called Liquid Telecom. They provide a lot of fiber connectivity in sub-Saharan Africa, and the point of telling you this is to focus on the importance of diversity of networks for access, of course lowering prices, and competition, and bringing more people onto the global internet. Liquid Telecom has been providing connectivity for years and years, and one of the hardest things for them to do is get fiber across borders. Now, that will change with the LEOs that are going up, but that's a whole other cross-border issue, which I'd love to hear Pepper's opinion on, and folks in the room, later. This one is about how hard it was to take fiber from Zambia to South Africa. The team had to get in a boat after, well, there were negotiations going on for over a year between the two countries, and part of the hang-up was an historic bridge, so a cultural ministry was involved on both sides of the border, the telecom ministries were involved on both sides of the border, the border patrol was involved on both sides of the border, the regulators, you get the picture, the regulators were involved on both sides of the border. So a year goes by and there's still no fiber deployed. Now, if you're going into business and you're deploying fiber, your business model is not a year's wait; it's get the fiber out, get it deployed, you have the investment. And Liquid does do a lot of great work with developing communities as well, so we're not just talking about a big corporate giant that doesn't think about working with communities that have been unconnected and underserved. They finally put some people in a boat, and the article in The Economist, from July 5th, 2014, said the connectivity between the two countries was nearly thwarted by a swarm of bees, because when they put the fellow in the boat, a bunch of bees started to attack him.
The CEO took off one of his shirts, wrapped it around the guy's head, they got in the boat, and they took the fiber across the river. This is a true story, and there have been other stories like this all around the world where you have these border issues. Of course there are going to be complications in some parts of the world, but this is more about governments just not coming together and negotiating the agreements to quickly get connectivity deployed across those borders. Of course, with mobile networks it's a little different; sometimes it's a power-level issue. I used to live in Armenia, and there were all sorts of power-level issues where people would blast the signals too much from one country to another, and you were picking up the signal from an operator you didn't intend to have and paying a lot more money. But the point of this story is that there are ways connectivity can be deployed; it just gets hung up. If you're advocating for connectivity from a grassroots perspective, you can help with governments. I've worked with Pepper and others here at this table talking to governments; I was government years ago. It does take a multi-stakeholder approach to make sure that governments understand, whether it's from a corporate perspective or a nonprofit perspective, that there are things that need to be done in order to speed up connectivity. The Liquid story is of course one about a company. I've worked on many community network projects, and helped lead a movement back in 2016 in Guadalajara, when I was at the Internet Society: we brought about 40 different community network advocates together from all walks of life to talk about the importance of working together. As a collective, you can often have more power when you're trying to negotiate, whether you're in an ITU meeting or you're here talking to others, when you're bringing together certain concepts that are similar, and you can share those stories and come up
with talking points together, because if you're by yourself, sometimes you're not going to make that difference when you're negotiating with folks that may have more power, quite frankly. Community networks have come in, from a diversification perspective, to bring in last-mile and minimum connectivity. When you talk about community networks, at least in the manner we were advocating at the Internet Society, and what I still advocated when I was with a startup, community networks can provide different types of connectivity that some of the bigger providers may not be interested in providing, because they don't get a return on investment in certain communities, because time and distance equals money, right? And if there are only a certain number of people in a certain place, some providers don't go in there because they can't get that return. The smaller networks, some of them fixed wireless, some of them just Wi-Fi mesh networks, are creating change. Most recently I was working with some folks in a place called East Carroll Parish, Louisiana, that had been named the poorest town in America, and they were tired, after the pandemic, because they weren't connected. So this is another story of public-private partnerships and something called capital stacks, which is just a fancy term for putting a lot of different types of money together, whether it's concessional capital and/or philanthropic, which means foundations, grants, banks coming together, companies who can provide those loans as well; that's what the capital stack means, stacks of different types of funding. Blended finance is the other fancy-pants term for this; it's just lots of different funds coming together to de-risk investment, and you can do that in small towns and in poor communities. This is what a group called Connect Humanity, the startup I was working with before, is doing; other organizations are looking at this, even some of the folks in the UN and the Giga project. I'm
not speaking for them; I do work with them, adjacent to them, on a project. But I would just say that you're finding more innovative ways to bring in these PPPs that are very different from the huge infrastructure projects like the dams and roads that we saw in the 60s, 70s, 80s, 90s. Infrastructure is expensive: if you talk to anyone in the space who's building out that connectivity, it's billions of dollars to build networks. But it also can be supplemented with the millions and the tens of thousands of dollars with these smaller networks. So I'm just here to point out that there are ways different types of organizations can work together to achieve the same thing, which is connecting the unconnected. Digital skills training, that's a whole separate issue I won't get into; I'm more on the infrastructure side. There are ways to work together, and it's not as if the capital out there is something evil. You've got to look at capital in a very clinical way when you're working at that grassroots level; also be as smart as the banks are, be as smart as the people that are putting this infrastructure together. Thank you.

Thomas Lohninger:
Thank you. My name is Thomas Lohninger, from the Austrian digital rights NGO Epicenter Works, and I am surely, here at this table, the one with the least on-the-ground expertise when it comes to building community networks, but I absolutely think that this is one of the key issues that should be in the focus of this IGF and of the digital rights debate in general. What I might be able to contribute in that regard is to point out how debates that we are having right now, globally but particularly in Europe, are actually working against achieving this goal of connecting the unconnected. The first thing is that I really want to call out some of the PR campaigns that we have seen from the telecom industry in recent years. I mean, that whole debate around 5G, with all of the promises of what it should bring: none of this has materialized, and I think that money would have been far better spent actually bringing connectivity, normal best-effort end-to-end internet, to all corners of the world, and doing it in an affordable manner. Because of course satellite internet is available everywhere, but it also needs to come to the people in the form of a device that can be powered and sustained in the local circumstances. And now we have debates around 6G already, while we just see all of the promises of 5G imploding in themselves. There's no killer app; there is no new technology that's empowered by it. It is at best a little bit of a faster internet connection, and what do we see in the countries where this already exists? People are actually not interested. If you have a 100 megabit to 300 megabit connection, there's very little reason as a consumer to upgrade. So I would really question the premise of a lot of the international telecom debates, and we should ask whether that energy, that focus and that money is well spent. I think we simply have bigger problems.
And then there is a second big issue that I want to raise, which also ties into this whole issue of connecting everyone on the globe together, and that is the issue of network fees. It's also often dubbed fair share or fair contribution. This idea is currently making the rounds because ETNO, the European Telecom Operators Association, was quite successful in lobbying a former CEO of France Telecom who is currently serving as digital commissioner in the EU to adopt it. It is a very old idea; we know it from the telephony era. It's basically: you want to reach my customers, you have to pay me money. This idea from the telephony era is forced upon the interconnection world, so whenever autonomous networks connect with each other, in the so-called interconnection sphere, according to the fair share network fee proposal there needs to be a lot of money exchanged before that connection can be made. And if you just think that through, you will see many problems, and one in particular is that smaller networks will suffer. We already have many small ISPs saying that they are actually afraid for their ability to compete, for their ability to connect to other networks, if such a proposal were to really become law of the land. Because when you are looking at the interconnection world right now, this is not an area to make a profit. This is usually nerds connecting networks with each other. It's where we see that some connection is congested and that we have packet loss; okay, let's just put another cable there, and maybe the money that the cable and the connection itself will cost is the price that you have to pay. But it's not a way to make money.
We see some telecom operators already abusing interconnection as a tool to maximize their profits, and if this were to be the law of the land, if every interconnection would have to follow this principle, then I think we would see many more problems in the global internet. Right now you can connect to every other point of the internet, which is what we call end-to-end. This could become a concept of the past, and maybe we will wake up in a splinternet, a fragmented internet, where only a few big telecom companies are able to really be reachable globally. But all the others might actually have a far lower chance of connecting, and that would certainly deteriorate the ability of particularly global majority countries to connect to privacy-friendly alternatives to, let's say, Google or Meta. And then the last thing that I also want to mention, because there's a very interesting court case going on in the Constitutional Court of Colombia, is the issue of zero rating. So, the price differentiation where you make certain data packages more expensive or cheaper than others; in many global majority countries that is a very common way of connecting. So when you buy a SIM card, you have free WhatsApp or free Facebook or other services that are given to you, no matter if you have gigabytes on your SIM card or not. And that of course gives an unfair advantage to the companies in that position if they are the only ones reachable for people that otherwise would only have a telephone number and not be able to access the full internet. And I think Free Basics is certainly a project that needs to be discussed in that context, and it's my hope that the Colombian court follows the example from Canada, from India, from Europe, to actually outlaw zero rating as a practice that is violating net neutrality.
Because if we want to bring meaningful connectivity everywhere in the world, it needs to encompass the whole internet, not just a walled garden and a handful of selected services. Thank you. Now we’re gonna

Raquel Renno Nunes:
have Natalia, who is going to join online.

Nathalia Lobo:
Hello everyone. Thank you so much for waiting for me. I had some trouble getting in. Let me try to share my screen. Can you all see it there? Yes, we can see it. But can you put it in show mode? Yeah. There you go. Great. Thank you. So I’m going to talk a little bit about what we have been doing in Brazil on connectivity. It ties in a lot with what you all have said, principally Jane and Pepper. So let me tell you a bit about Brazil and what our challenge is. Brazil is the fifth largest country in geographical area. We have 203 million people, we have the largest city in Latin America with 12.3 million people, over 5,500 municipalities and actually more than 40,000 localities, and the largest economy in Latin America. And look at our size: there is lots of Europe in there, and great parts of Africa. So you can see the size of our challenge. Connecting all the people in the Brazilian territory is a challenge, especially when you can’t take any single type of technology for granted; you need to have them all working together so that we can get everyone onto the internet. Basically all the policies we have in Brazil regarding connectivity have been facing the supply side: how do we get networks to the people? Today we have managed to get over 90% of our households connected, and how do we reach the remaining 10%? That’s the deal. So in our 5G auction, which we held in November 2021, we had 5G obligations that put over 90% of the economic value into coverage commitments. Over $9 billion was the revenue of this spectrum auction, and 90% of it was converted into obligations. Most of the difference was paid over the reserve price and converted into commitments, and these commitments are investments until 2030, including 4G obligations in localities.
We have over 7,500 localities that are going to have 4G mobile broadband where there was no service at all. We’re going to have 5G obligations in all 5,570 municipalities, and we also have the North, the Connected North. This is our very dearest project: we are going to a region in the North, the Amazon region, where connectivity is still poor in terms of quality, resilience and price. So we are deploying 12,000 kilometers of fiber optic cables along the Amazon riverbeds, and the CAPEX is public, but the maintenance and operation of these cables afterwards is done by a consortium of 12 different operators that will explore this. It’s not easy maintaining optical fiber along riverbeds: there are anchors from boats that can rip off the fiber optic cables, there are issues with logs on the rivers, and many other issues that make it quite expensive, and this is where the public-private partnerships come in, making everything possible. So why is this so important for enabling human rights? This is what our boats are doing there in the Amazon, deploying the fiber optic cables, and it is transforming lives in the region. We have regions that had no internet, where hospitals had no way to enter their protocols into the system; they used to have the post office ship off the information on their patients, and now they have access to a simple system where they can put all the information in. Now they have access to more resources, whether in medication or in the amount of money they get to attend to their patients, and telemedicine with the best hospitals in São Paulo for the people that live there.
So there you go. And schools: in the 5G auction we also have a budget for connecting schools. It’s a little less than $1 billion, and we are seeking to do full connectivity there, not only the speed getting into the schools but also all the Wi-Fi and the pedagogical use the schools are going to make of it. So that’s a bit of how we used the 5G auction to bring in other perspectives, not only the private perspective but public policy. We also have the investment fund to structure all that we need, so we have many projects to do with this funding, especially for public points and transportation, and we have many financing lines with subsidized rates. So here we go: we have this set of commitments, we have funds to make private investments viable, and we are also now looking at public policies on the demand side, about usage, about making tangible benefits from internet usage in people’s lives. So I thank you. That’s all for now.

Raquel Renno Nunes:
Thank you, Natalia. Sorry, is this? Yeah. Okay. Yeah. Thanks. And do we have questions or, because I think I see people here that, ah. So Natalia, that, um, you point out something that’s actually really, really important.

Robert Pepper:
One of the biggest barriers to getting people connected is less in the access network. The backhaul and middle mile is absolutely one of the barriers. So the project by laying the fiber in the river, the Amazon, to bring broadband connectivity to those regions, right? That is essential. And, um, a, another example of that is something that we did partnering with Airtel and a small company called BCS in Uganda, uh, back in 2017, 2018. Um, Airtel had a 4G network, uh, across, uh, the, you know, Kampala, the urban areas in Uganda, but it did not have 4G on its cell site. It had cell sites across Uganda and covered about 90% of the population. Um, but in the rural Northwest part of Uganda, uh, it was 2G, it was GSM, 2G, some SMS. They couldn’t do the internet. Um, and the reason was they couldn’t get backhaul to this tower sites. They only had narrow band microwave, so they couldn’t actually get broadband to the towers. One of the projects that we did was build a, um, wholesale backhaul network, 770 kilometers across Northwest rural Uganda where there were no roads across the Nile can help. And it was an open cable. So there were capacity for all the operators, right? And it enabled Airtel to convert from 2G to 4G once they got broadband to the cell sites. So this is actually very analogous, um, to the project in Brazil, which is like really, really important. Um, uh, I would like to, though, uh, Thomas just respond very, very quickly. The zero rating thing is extremely important, and I do think that, you know, there was an evolution from FreeBasics, which was limited, to a less than perfect but much, much better and actually net neutral called Discover, because it uses a proxy server, so any application or website is available, as opposed to having to have the selection on the old FreeBasics, which is an 8 to 10-year-old project. And what’s interesting is the way people use it actually benefits people, because it’s not a degraded internet. 
A lot of people use it in one of two ways. One is as an introduction to being online, after which people actually want to be online. The other, an evolving use that I think is extraordinarily pro-consumer, especially in emerging markets, and it's in dozens of countries, involves people who rely on prepaid data packs. In the past, when they ran out of their data plan and could not afford to top up, they were disconnected. What's happening in many, many cases in many countries is that when people run out of their data plan, they use the zero-rated, narrowband version of the connection to stay connected, so they have something, and when they can top up, they go back to their full plan. It actually helps both the consumer and the operator, because there's no transaction cost of being reconnected or redoing plans. So again, it's not perfect, but it has a very positive social and consumer benefit. The idea is that eventually everybody will want to be on the fully accessible internet with all of the applications; that's everybody's goal. And by the way, when the FCC had its net neutrality rules, which were very stringent under the Wheeler Commission, it found that there was nothing inherently anti-net-neutrality about zero rating. There were some zero rating services it found anti-competitive, that violated net neutrality; there were others that were net neutral and pro-consumer. So there was nothing inherent in zero rating that was anti-competitive, and that was even before some of these other developments. It's not black and white, and I think it's also a good example of where we're extremely aligned on some things and not on others, which is absolutely fine.
That's actually a good thing. But I didn't know how familiar you were with Discover, or with the way people are now using it, especially in emerging markets, to bridge top-ups so they can stay connected at least at a basic level, and then top up and have the full internet experience.

Thomas Lohninger:
Maybe we can go into that point. The problem with zero rating is really what we have seen in the market, and it's worth looking at those concrete offerings and how they are priced. If you go today to the English-language Wikipedia article on net neutrality, you will still find, as the picture for that article, the infamous Smart Net offer from the Portuguese incumbent provider MEO. If you look closely at that offer, a gigabyte of YouTube was 54 times cheaper than a normal internet gigabyte. So we have a drastic difference in the affordability of certain services over others, and this internet à la carte, where individual applications are bought, is certainly the worst case; on that we can all agree. There has certainly been an evolution toward class-based zero rating offers. I think those were considered in Canada, and then they scrapped them and said that's actually not a viable option. In Europe, that was the regulators' reading of how zero rating could be admissible from 2016 until 2021. And in 2021 the European Court of Justice found that, no, it is actually contrary to the equal treatment of traffic if you price it differently. Why did the court find that? Because it's actually an additional effort on the side of the telecom company to account for packages differently: to have not just your monthly allowance, but a monthly allowance for this service, for that service, and for the rest of the internet. So I think it's hard to make a case, from the perspective of a telecom operator, for why it should be easier and cheaper to roll out these zero rating offers instead of, as you just said, the goal, and I think we agree on that: an application-agnostic form of access. That could be low bandwidth; that could be something tailored for low-energy devices or other particular needs, where you simply will not be able to stream 4K video. That's totally fine.
But the thing we all want to avoid is that once the monthly data cap is hit, you're left with nothing, or you're only left with WhatsApp, because there always needs to be the freedom to choose. If Discover is that, I haven't looked into it, so I cannot speak to it. But I think it's important that we recognize the right of low-income households, too, to have the freedom to choose all parts of the internet. And it's also the freedom to innovate that is most at stake with these zero rating offers. I mean, when Mark Zuckerberg created thefacebook.com, he did so because he had a full-fledged internet connection in his dormitory, and wasn't limited to a consumer-only version of the internet.
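The per-gigabyte affordability gap described above can be sketched with a toy calculation. The bundle prices below are illustrative assumptions (not MEO's actual tariff), chosen so the result matches the 54x figure mentioned in the discussion.

```python
# Toy illustration (hypothetical numbers) of the per-gigabyte price gap
# between a zero-rated add-on bundle and general mobile data.

def price_per_gb(total_price_eur: float, gigabytes: float) -> float:
    """Effective price of one gigabyte under a given bundle."""
    return total_price_eur / gigabytes

# Assumed figures for illustration only:
general_data = price_per_gb(total_price_eur=10.0, gigabytes=1.0)   # 10 EUR/GB
zero_rated = price_per_gb(total_price_eur=5.0, gigabytes=27.0)     # ~0.19 EUR/GB

ratio = general_data / zero_rated
print(f"General data is {ratio:.0f}x more expensive per GB")  # -> 54x
```

The point of the sketch is that the distortion lives entirely in the bundle pricing, not in any technical property of the traffic itself.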

Robert Pepper:
And that's why the newer versions are different. First, the FCC found that there were some zero rating services with discriminatory pricing, some video versus other video as provided by the telco, in terms of the way zero rating was implemented. On the other hand, think about a service like Discover as something like dial-up, a low-bandwidth version. If it's low bandwidth, you're not going to have streaming video; it's just a low-bandwidth version. But it's not just WhatsApp; it's not a separate WhatsApp service versus being online. It's a zero-rated service that covers essentially everything, but at a very low speed until you top up. So again, it's not black and white, and I'm happy to take that conversation offline, because I think it's an important distinction as things have evolved, along with the consumer behavior of using it as bridging access for everything, at a very low data rate, until people top up and get the full experience. I don't think it's black and white, or either-or. The more important point, going back to the earlier part of the conversation, is the way in which telcos want to have network fees, what they call "fair share", which is based upon the architecture and economics of the telecom termination monopoly. The reality is you can have a lot of choice on the originating end: I may have four or five operators that want my business, so there's a lot of competition. But once I select my operator, if we want to talk to each other, if you want to send me messages or videos or whatever, the termination is a monopoly. Once I pick my network, your network must terminate there; there's no choice. And a lot of this was purely accidental: in Europe, there was the development of calling party pays, or sender network pays, and that created and reinforced a termination monopoly.
In the U.S., by just a different model, we had a bill-and-keep arrangement: I pay for my airtime whether I'm originating, sending, or receiving. That eliminated the termination monopoly, and as a result the choice of network connection is where it should be, on the originating end. Now, what does that mean? It means that if the telecom networks want to use the termination monopoly on interconnection to raise fees and extract rents, they can, because if they interconnect with me, it's a monopoly. And in fact the European Commission, as you know, in looking at mobile roaming, defined termination as a separate market with significant market power. That is at the core, the crux, of a lot of what we're hearing from the telcos on the fair share and network fee issue. There's one person waiting for you. Thank you.

Audience:
My name is Jarell James, and I have some questions, because I'm hearing a lot about the capitalization of bandwidth: how people would need to spend more money to get access to stream if their data packages are running out, this idea of topping up, or even creating subsidies to allow for topping up in these communities. I think it's worth noticing that this premise is really locked into the traditional way we're used to dealing with data, which comes from a Western, more developed context; the days of counting minutes and counting data packages are long gone for many people in the West. So what I'm wondering is: when we look at an internet of the future, where streaming platforms are going to have higher-quality videos that require more bandwidth, and we're trying to do these mitigations, is it not also valuable to look at what happens when you take those resources, like videos and other high-bandwidth resources that people are regularly looking for, and keep them available locally? As the gentleman over here in the green tie was saying in his session a few days ago, and I don't even think he knows I'm referencing him, a lot of people look up bricklaying; it's a regular thing that's Googled, and it's advancing their careers. Why would someone need to Google that five or six times, instead of just having access to that video because many members of their community already have it?
And so this is where I'm curious about looking at the way telcos are gamifying data access, and not making very obvious moves on this. While they are able to mine the transaction of data for communication purposes between people, and can say, hey, we can provide you with communication access, does that really mean that every resource beneficial to a person's life, which objectively includes a lot of educational resources, should also be reliant on that person's ability to create monetary value for themselves and hand it to telco companies, just so they can learn skills and achieve a better life? It seems like a catch-22, and it seems very much premised on the idea that we're expecting other communities to pay up and use data packages the way that we've done it. So these are some of the questions I would have.

Thomas Lohninger:
It's an interesting question. We've had various debates in this direction over the years. It reminds me a little of Wikipedia Zero, a project by the Wikimedia Foundation to zero-rate access to the local Wikipedia versions in each country; I think the English-language Wikipedia was always included as well. They did this in partnership with telecom companies, and, full disclosure, they were criticised by many people, including me. The foundation decided to sunset the project around 2017, and one of the reasons was that it actually didn't work: the access numbers for the zero-rated service were ridiculously low. I never really got an answer as to why, because the concept, of course, sounds very good. We had similar fears during the pandemic lockdowns, when suddenly everything went online, which meant low-income households potentially had only capped data to rely on. In your question there was also a call to think outside the box, and here it's interesting to look at Finland. In Finland, you cannot find a data plan with a data cap. They differentiate by speed, the bandwidth you get, but it's always a flat rate. This is the much more honest business model, because the variable cost of data volume is absolutely negligible. Particularly in mobile, it's expensive to build a network and connect it, but afterwards, whether data flows or not, the cost barely changes. You might have congestion if too many people are online at the same time, but a bandwidth-based system helps you allocate those resources much better than if you give everyone a 5G connection with three gigabytes, which can be used up in one concert if you're streaming it. So I think these hard questions about the business model of telecom companies need to be asked.
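The "three gigabytes used up in one concert" remark can be made concrete with a back-of-the-envelope calculation; the stream bitrate below is an assumption for illustration.

```python
# How quickly a small data cap evaporates at modern mobile speeds.

def seconds_to_exhaust(cap_gb: float, bitrate_mbps: float) -> float:
    """Seconds of continuous streaming before the cap is hit."""
    cap_megabits = cap_gb * 8000.0  # 1 GB = 8000 megabits (decimal units)
    return cap_megabits / bitrate_mbps

# A 3 GB cap at an assumed 20 Mbps live-video stream:
minutes = seconds_to_exhaust(3.0, 20.0) / 60.0
print(f"Cap exhausted after {minutes:.0f} minutes")  # -> 20 minutes
```

At higher 5G-era bitrates the cap is gone even faster, which is why speed-tiered flat rates, as in the Finnish model, track the network's real constraint (bandwidth) better than volume caps do.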

Audience:
Just a quick question on that point: when they were sunsetting the project, were those access numbers from communities that had regular connectivity, or are there any numbers you know of broken down by population? I work extensively with populations that have been shut off from the internet, and overwhelmingly the information that is searched for is health-related, because health facilities are destroyed in conflicts. So there are real questions about those access numbers: how much of the usage was in communities that otherwise had zero connectivity to the greater world? As a resiliency measure, especially as climate change goes further in destroying telecom infrastructure, it does seem valuable to revisit those connection questions and access points.

Thomas Lohninger:
Good question. I can't give you that answer, but there's a Wikimedia Foundation booth around the corner; hopefully they can.

Robert Pepper:
Thomas, to your point, very related: a lot of data plans, in the name of network management, are based on total data consumption, not on congestion. Two gigabytes downloaded off-peak has no impact on the network. If everybody tries to access the network at the same time, you end up with congestion, but the reality is that the legacy telco's network architecture, engineering, business model, and regulation were all based on some fundamental principles. The metric was the minute, because it was about voice: the longer the distance, the higher the cost; the longer the duration, the higher the cost. With a flat IP network, and once we got to 4G it was essentially a flat IP network even in mobile, there is a cost to the network, but it's a step function. Once you're connected, how much you use the network does not change the cost until you hit the capacity limit, and that's where congestion comes in. Whether you use it a little or a lot, as long as you're under that technical ceiling, the cost is not variable. And yet we still have legacy models based upon minutes of use, distance, and time, which are no longer relevant under the flat IP architectures of today's data networks, whether wireless or fixed. That goes to your point, which is really important: a lot of these plans are premised on total data consumed, and that may have no relationship whatsoever to network congestion. Go ahead.
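The step-function cost argument above can be sketched as a toy model; the capacity threshold and cost figures are assumptions for illustration, not operator data.

```python
# Sketch of the flat-IP cost argument: a connected subscriber's cost to
# the network is (approximately) a step function of usage -- flat until
# capacity is hit -- not a linear function of gigabytes consumed.

def network_cost(usage_gb: float, capacity_gb: float = 100.0) -> float:
    """Flat fixed cost while under capacity; extra cost appears only
    once usage exceeds the capacity threshold (forcing an upgrade)."""
    fixed_cost = 1.0    # cost of keeping the subscriber connected at all
    upgrade_cost = 0.5  # incremental cost once capacity is exceeded
    return fixed_cost if usage_gb <= capacity_gb else fixed_cost + upgrade_cost

# Whether a user consumes 2 GB or 90 GB off-peak, the cost is identical:
assert network_cost(2.0) == network_cost(90.0)
# Only crossing the capacity threshold changes the cost:
assert network_cost(150.0) > network_cost(90.0)
```

Volume-based caps price a quantity (gigabytes) that, in this model, has zero marginal cost below the threshold, which is the mismatch with legacy per-minute thinking being described.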

Thomas Lohninger:
Just to put a finer point on this, or to put it bluntly: if you are operating a mobile network in a country where connectivity is an issue, and it's the late-night hours when the network is idle, the only reason there is not a flat rate for everyone is corporate greed. It's a wasted resource. The bandwidth would be there; it costs them nothing. It's just corporate greed, and their business-model reasons for not opening the floodgates and letting people use it.

Jane Coffin:
One of the things mentioned in the description of the panel was innovative policies for improving connectivity. There's a model being explored that's not new, but it's new-ish to the United States. It came out of Europe, out of the UK: structural separation, where one entity builds and manages the network, but other operators can run services on top of it. This is also a way to cut down on costs, more on the CapEx side, with OpEx later. The bottom line is that this is coming back in some parts of the U.S. right now, where municipalities are asking for more accountability from the companies providing connectivity. And I want to be really clear that governments probably aren't the best at running networks; there's a reason so much liberalization happened and there are few state-owned enterprises left (well, there are some). Some companies are better at running networks at a cost that can help people. But open-access networks allow different providers to offer services over the top of the network. You could have a $10 email-only service, at prices that make sense in that economy; it wouldn't be the same in an emerging market. Another company could come in and run its services over that network as well; it could be full-on video streaming. Who knows? There are different models being explored. So for the many people here who are interested in what could be done, there are lots of other people you can network with. You can talk to some of the community networks that are also coming into play. They're actually solid networks; they're not flaky. When people hear the term community network, they think, oh, a bunch of crazy people running a network. No: these are smart technologists, people who know how to run a network.
And it's not fly-by-night in that sense. But, and I hope I don't sound too business-y, you do need to think about how you're going to continue to run your network; it's something Nathalia was mentioning. You can't just build the network and hope people are going to come and buy the services. You've got to have a plan, those plans have to last longer than a year, and you've got to have subsidization.

Nathalia Lobo:
How do these community networks actually operate in Brazil, and what are their needs? How can we structure something? Each one is very specific in its model and its way of working with the community, so we have to understand how to write specific directives, so that we can make specific policies; you can't make policy case by case, it's very difficult. So how do we make these communities viable first? What are their needs, and how can we, through public policy, help these projects actually happen? And how do you know that this is a good community network, and that that one is not a good way of doing community networks? Not everyone is exactly the same. So this working group is going to produce some study results, and from then on we can start building something that works for them. That's the idea: to understand.

Audience:
Hi, everyone. I'm Carlos Baca, from Mexico. I work in an organization called Rizomatica, and we are actually a community network; we help other communities get connected through this model. I just went to Brazil, to the Amazon, in July, for the National School of Community Networks there, and I saw a situation very similar to other communities: the WISPs that go into the communities charge very expensive prices. Really, really expensive. In Chihuahua, in Mexico, where we work, people need at least $3 or $4 a day to be connected. So what some young people are doing now, to be connected to Facebook, YouTube, WhatsApp, and so on, is getting involved with the narco, with organized crime; the drug sellers are now part of this, because in northern Mexico we have this problem very much. So we are seeing that people spend a lot of money to be connected. There are a lot of WISPs, and in very, very remote communities they have one access point with a very bad connection, and sometimes people spend a lot of money to improve the antennas and infrastructure for the WISPs, and then they still have to pay for the service. It is a very big problem we are seeing, because we assume the WISPs are doing the job they need to do. I am not saying we don't need to work with the WISPs, the small ISPs; we need a joint effort, and we work a lot with them in Mexico, for example. But we need to establish some conditions to better understand what is really happening in the communities, not only what the operators are reporting or saying they are doing. So that is one of the things I wanted to address. The other is that, as in Oaxaca, and this is a question for Natalia, there is a lot of fiber in many places very near the communities, but the government doesn't have a policy for communities to access these fiber networks.

In the Amazon, they have a lot of expectations about this project; that is what I found. But they are also asking why they can't connect to this fiber network, so I think it is important to address that. And finally, the last point: I think it's very, very difficult to define what a community network is, and to say this is a good and that is a bad community network. We need to understand that maybe the main characteristic is that the people in the community can manage, in different ways, the infrastructure and the services they have. If you look at community networks in Africa, they are more like little business models, very different from a community network in Colombia or in Mexico, where there is a lot of political organization. So we need to escape some of the assumptions we hold about community networks. As Jane said, they work well; we now have good examples of how this is working in communities. And just as with community radio 10 or 20 years ago, we need to escape all of these imaginings that lead people to say they are poor, the services are not so good, they don't have enough quality, et cetera. So we also need to think, in this panel, about the diversity that exists around community networks. It's a very complex issue, I know. Sorry for going on.

Nathalia Lobo:
It's okay, because they're going to disconnect her in two minutes. Well, I was very surprised about the two or three dollars a day for connection. The idea of Norte Conectado is that when it is all installed, you have at least 12 new companies working in the region, and some of them are pure capacity providers, just transport operators. So you're actually talking about much more supply and competition in the region, so that these big prices don't happen anymore. That's why we had the idea of building Norte Conectado: so that competition, so that better-quality services get there. That's one point. The second point, about what I said about bad community networks, is: how do I avoid financing the wrong ones? There are some people who may fake a community network in order to get public financing, and how do I, as a public servant, know which ones are genuine? Good community networks do work well; the question is not whether community networks are necessary. It's just: how do I take the bad, non-legal actors out of this group? So that's the idea, to better understand that. And there's something community networks do that others, the ISPs, don't, which is making connectivity meaningful: this appropriation of technology and information, building learning within the community, and sharing information on how to work better in that virtual world. That's something government still doesn't know how to do; in Brazil we are still working out how to deal with that next phase. So I believe we have all the synergies to make this happen; we just need to study it a little more so we can structure something we can go forward with. Did I answer everything?

Raquel Renno Nunes:
Thank you. Thank you, everyone.

Speaker               Speech speed (words per minute)   Speech length (words)   Speech time (secs)
Audience              166                               1303                    472
Jane Coffin           199                               1838                    556
Nathalia Lobo         111                               1518                    817
Raquel Renno Nunes    116                               571                     296
Robert Pepper         136                               2719                    1196
Thomas Lohninger      160                               2063                    772

An infrastructure for empowered internet citizens | IGF 2023 Networking Session #158

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The role of libraries is evolving alongside the advancements in internet access. Several cases have been presented, highlighting the changing nature of libraries and their ability to adapt to the needs of patrons in the digital age. Internet access has enabled libraries to offer a wider range of services and resources, promoting digital inclusion.

The Digital Inclusion Index model is highly relevant for all countries. Trish Hepworth emphasizes the significance of this model, which assesses countries’ progress in terms of digital inclusion. It considers factors such as internet access, technology availability, and digital skills, helping countries identify areas for improvement and bridge the digital divide.

Taking knowledge to rural areas is a beneficial approach, promoting knowledge sharing, socialization, and exposure among youth. This strategy addresses limited educational opportunities in rural regions and has received positive feedback for connecting rural communities with educational resources.

Sensitization on academic publishing and compliance with legal frameworks is crucial. Johanna highlights challenges in publishing academic work, such as vetting content and legal compliance. Greater awareness of publishing procedures and legal requirements is necessary to promote quality education.

Establishing education facilities in impoverished rural areas is challenging. Johanna’s personal experience running a small institution in a poverty-stricken rural area in Kenya demonstrates the difficulties faced. Innovative solutions and support from stakeholders are needed to overcome these barriers.

Creating shared knowledge spaces for learners from different institutions offers advantages. Johanna expresses enthusiasm for shared learning spaces that foster collaboration and knowledge exchange. This approach promotes a sense of community and enhances the learning experience.

In conclusion, the evolving role of libraries, the relevance of the Digital Inclusion Index, the benefits of taking knowledge to rural areas, the need for sensitization on academic publishing, and the challenges of establishing education facilities in impoverished rural areas are essential considerations for ensuring quality education. Shared knowledge spaces further enhance collaboration and idea sharing. By addressing these aspects, society can work towards achieving the Sustainable Development Goal of quality education for all.

Erick Huerta Velázquez

The use of Information and Communication Technologies (ICTs) is playing a crucial role in preserving and disseminating local knowledge in indigenous communities. This is particularly important as local knowledge in these communities is often oral and unwritten. By utilising local storage and Internet access, ICTs enable the documentation and preservation of this knowledge through recordings and videos.

One notable initiative in this field is the Rizomatica project, which collaborates with indigenous communities to help them develop their own media and conduct research. This empowers these communities to digitise and preserve their local knowledge, which might otherwise be lost. By incorporating ICTs into their cultural practices, these communities are able to create comprehensive reservoirs of knowledge and bridge the gap between traditional and digital libraries.

There are also real-world examples of communities successfully integrating traditional and digital libraries. One such example is Cuetzalan, which has established a communication centre. This centre serves as a hub for both traditional and digital resources, allowing community members to access and contribute to the preservation of their local knowledge. Additionally, there are indigenous communities that have taken it one step further by running their own mobile networks and even establishing public intranets within their libraries. These initiatives demonstrate how ICTs can bring together multiple concepts of libraries, creating inclusive spaces for the preservation and dissemination of local knowledge.

Furthermore, community collaborations play a vital role in effectively preserving and disseminating local knowledge. A partnership between UNESCO and local communities in Mexico has resulted in the development of a policy for indigenous community radios. This policy promotes the establishment and operation of community radios, which act as platforms for sharing and promoting indigenous knowledge. In another example, Phonotech has assisted a 60-year-old community radio in restoring and archiving old tapes, thereby making them accessible nationwide. These efforts highlight the importance and effectiveness of community collaborations in preserving and amplifying local knowledge through various channels.

In conclusion, ICTs, community collaborations, and the integration of traditional and digital libraries are powerful tools in the preservation and dissemination of local knowledge within indigenous communities. By harnessing the potential of technology, these communities can document, digitise, and preserve their unique and valuable oral traditions. The partnerships formed with organisations and initiatives such as Rizomatica, UNESCO, and Phonotech further enhance the impact and reach of these preservation efforts. Ultimately, the combination of ICTs and community collaborations contributes to the comprehensive and inclusive representation of indigenous cultures and their local knowledge.

Yasuyo Inoue

Libraries play a crucial role in bridging the gap between rural and urban areas, reducing inequality, and promoting social and economic development. They achieve this by utilizing information and communication technology (ICT) techniques, which enable them to provide essential services and resources to areas with limited access. By harnessing the power of ICT, libraries ensure that people in rural areas have equal opportunities to access information, education, and other resources that are readily available in urban areas.

In addition to being information hubs, libraries serve as important community activity centers, preserving culture, history, and promoting education. They provide a safe and inclusive space for people to come together, engage in various activities, and cultivate a sense of belonging. Libraries often host community events, such as workshops, lectures, and exhibitions, catering to the diverse interests and needs of the community. This active engagement with the community helps libraries become vital institutions that promote social cohesion and cultural preservation.

Libraries also play a significant role in supporting education and lifelong learning. They serve as educational centers, offering access to a wide range of educational resources and materials. Libraries house books, journals, online databases, and other materials essential for research and learning. By providing these resources, libraries create opportunities for individuals to expand their knowledge, acquire new skills, and pursue personal growth. Additionally, libraries support formal education systems by providing study spaces, access to computers and the internet, and assistance from knowledgeable staff.

Furthermore, libraries have the potential to stimulate the local economy by forming connections with businesses and supporting local industries. By collaborating with local businesses, libraries can showcase those businesses' products and services, attracting customers and contributing to their success. For example, a small-town library in Shiwa features exhibitions that highlight the local business of sake brewing, promoting tourism and local commerce. Additionally, libraries can collaborate with agricultural cooperatives to organize weekly vegetable markets, supporting local farmers and promoting sustainable agriculture. Through these partnerships, libraries contribute to the growth of the local economy and foster community pride.

In conclusion, libraries play a vital role in society, connecting rural and urban areas, reducing inequality, and fostering social and economic development. Through the use of ICT, libraries ensure equal access to information and resources. They also serve as community hubs, preserving culture, promoting education, and supporting lifelong learning. Furthermore, libraries can stimulate the local economy by collaborating with businesses and supporting local industries. Embracing and strengthening libraries is crucial for creating more inclusive and equitable communities.

Patricia Hepworth

The analysis highlights the importance of digital inclusion in Australia, with a focus on the disparities that exist between metropolitan areas and regional/remote areas. The Digital Inclusion Index in Australia provides statistics on digital inclusion across the entire population. There is a significant difference in the digital inclusion scores between metropolitan areas and regional/remote Australia. This discrepancy is also observed in the lower digital inclusion index among Aboriginal and Torres Strait Islander peoples compared to the general Australian population. The study also reveals that digital exclusion and abilities online vary significantly across different age groups.
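The Digital Inclusion Index discussed here combines three measured dimensions (access, affordability, and digital ability) into a single score per cohort. As a purely illustrative sketch of how such a composite might be formed: the real ADII uses its own survey instruments and weighting, and every number below is invented for the example, not an actual index value.

```python
from dataclasses import dataclass

@dataclass
class InclusionScores:
    access: float         # quality, frequency and breadth of connection (0-100)
    affordability: float  # affordability of a good service relative to income (0-100)
    ability: float        # attitudes, skills and online activities (0-100)

def composite(s: InclusionScores) -> float:
    """Equal-weight average of the three dimensions (illustrative only)."""
    return round((s.access + s.affordability + s.ability) / 3, 1)

# Invented cohort numbers to illustrate a metro/remote gap, not ADII data.
metro = InclusionScores(access=80.0, affordability=75.0, ability=70.0)
remote = InclusionScores(access=55.0, affordability=60.0, ability=45.0)
print(composite(metro), composite(remote))  # → 75.0 53.3
```

Equal weights are the simplest possible choice; the point of the sketch is only that a lower score on any one dimension drags the cohort's overall inclusion down, which is the gap pattern the report describes between metropolitan and regional/remote Australia.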

Libraries in Australia play a crucial role in addressing digital exclusion. They provide essential support and services in educating people on how to use computers, mobile phones, and stay safe online. Libraries are especially valuable in facilitating community-based connections and nationwide digital collections. For instance, Hume Libraries, located in a highly multicultural area, have implemented successful digital inclusion programs. These programs have been effective in harnessing the existing infrastructure, people, and community relations to promote digital literacy.

The analysis also reveals that libraries can provide a tailor-made and localized approach to delivering digital literacy programs. In collaboration with a local university, Hume Libraries worked towards delivering digital literacy programs specifically designed for culturally and linguistically diverse communities. This approach ensures that the programs meet the specific needs of the target audience while leveraging the resources and expertise available in the community.

Furthermore, discussions at the Asia Pacific Regional Internet Governance Forum highlighted the importance of digital inclusion. While there was not a direct library representative at the Brisbane meeting, discussions centered around the Digital Inclusion Index and the role of bodies like libraries in promoting digital inclusion. This demonstrates that digital inclusion is a recognized and important topic in regional forums and that libraries are seen as significant contributors to this agenda.

In addition to addressing digital exclusion, libraries also play a significant role in improving digital skills and AI media literacy. They serve as important institutions through which adults who are not in formal education can acquire these competencies. With the advent of generative AI, the need for such skills is only increasing, making libraries even more crucial.

To conclude, the analysis underscores the critical importance of digital inclusion in Australia and the need to bridge the gaps that exist. Libraries have proven to be invaluable in addressing digital exclusion, providing support, resources, and digital literacy programs. The discussions held at regional forums further emphasize the role of libraries in promoting digital inclusion. Additionally, libraries play a vital role in improving digital skills and AI media literacy, supporting individuals, particularly adults not in formal education, in acquiring the necessary skills for an increasingly digital world.

Moderator – Maria De Brasdefer

During the conference, several speakers emphasised the role of the internet in empowering societies and advancing access to information. Maria de Brasdefer, one of the speakers, highlighted that meaningful access to the internet leads to societies where citizens can make better-informed decisions, ultimately resulting in more democratic societies. This argument is supported by the notion that when individuals have access to a wide range of information and resources, they are able to participate more actively in social and political processes.

Another important point discussed was the significance of documenting local knowledge and leveraging library infrastructure to ensure accessible internet. Maria underlined the importance of preserving local knowledge by presenting four case studies at the conference. These case studies showcased how local communities have utilised ICTs (Information and Communication Technologies) to document and store important aspects of their culture, such as songs, stories, and traditional practices. Additionally, community radios and initiatives like the itinerant museum were highlighted as effective ways to share and preserve local knowledge. However, it was also pointed out that challenges such as high humidity could cause the deterioration of stored materials, indicating the need for proper storage facilities and preservation techniques.

Furthermore, Maria and other speakers asserted that libraries can play a pivotal role in digital empowerment. They argued that libraries are essential in providing access to information, fostering media literacy, and offering coding lessons and training. The audience, participating in interactive questions using menti.com, agreed that libraries can contribute significantly to digital empowerment in various ways.

Overall, it was concluded that the internet and library infrastructure are powerful tools in advancing access to information and empowering societies. The promotion and preservation of local knowledge through the use of ICTs were also deemed crucial. The conference highlighted the positive impact that these initiatives can have on promoting more democratic societies, enhancing education, and expanding opportunities for individuals and communities.

Woro Titi Haryanti

The speakers underscored the critical role of knowledge discovery and digital transformation in libraries and their impact on the community. They emphasised that libraries play a vital role in preserving knowledge, conducting research, providing reference materials, and fostering networking opportunities. The implementation of digital platforms, such as Indonesian OneSearch and e-PUSNAS, was specifically mentioned as a means to enhance access to public collections and digital books.
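In the session transcript, Woro notes that Indonesian OneSearch aggregates member collections using the Open Archives Initiative protocol (OAI-PMH), under which a central portal periodically harvests metadata records from each library's repository. The sketch below shows a minimal ListRecords request and response parse; the endpoint URL and the sample record are invented for illustration and are not OneSearch's real address or data.

```python
# Minimal sketch of an OAI-PMH metadata harvest (request building plus
# Dublin Core parsing). The base URL below is a placeholder.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

OAI_NS = "http://www.openarchives.org/OAI/2.0/"
DC_NS = "http://purl.org/dc/elements/1.1/"

def build_list_records_url(base_url, metadata_prefix="oai_dc", resumption_token=""):
    """Build a ListRecords request URL per the OAI-PMH 2.0 spec."""
    params = {"verb": "ListRecords"}
    if resumption_token:
        # resumptionToken is an exclusive argument: when present, it is
        # the only argument allowed besides the verb.
        params["resumptionToken"] = resumption_token
    else:
        params["metadataPrefix"] = metadata_prefix
    return f"{base_url}?{urlencode(params)}"

def extract_titles(oai_xml):
    """Pull Dublin Core titles out of a ListRecords response document."""
    root = ET.fromstring(oai_xml)
    return [el.text for el in root.iter(f"{{{DC_NS}}}title")]

# Tiny hand-written response standing in for a real server reply.
sample = f"""<OAI-PMH xmlns="{OAI_NS}">
  <ListRecords>
    <record><metadata>
      <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                 xmlns:dc="{DC_NS}">
        <dc:title>Sejarah Nusantara</dc:title>
      </oai_dc:dc>
    </metadata></record>
  </ListRecords>
</OAI-PMH>"""

print(build_list_records_url("https://onesearch.example.org/oai"))
print(extract_titles(sample))  # → ['Sejarah Nusantara']
```

A production harvester would fetch each page over HTTP and keep following the returned resumptionToken until the record list is exhausted; that incremental harvest is what lets a national portal keep millions of records in sync with thousands of member repositories.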

Moreover, there was a strong advocacy for integrating libraries into the national data infrastructure. The National Library was recognised for its contribution to the development of the national data centre. This integration would enable libraries to further support the digital transformation efforts of the country.

The sentiment towards these initiatives was overwhelmingly positive. People acknowledged the value and importance of embracing digital technologies and using them to modernise and enhance library services. The speakers and the overall analysis suggested that by embracing digital transformation, libraries would be able to better serve the needs of their communities, improving access to information and fostering knowledge exchange.

Additionally, the discussion highlighted the broader significance of this digital transformation for the country as a whole. By integrating libraries into the national data infrastructure, the government can harness the wealth of information and resources available in libraries to fuel innovation, drive industry growth, and promote sustainable development.

In conclusion, the importance of knowledge discovery and digital transformation in libraries, as well as their integration into the national data infrastructure, was emphasised. The positive sentiment towards these initiatives highlighted the potential benefits they hold for both libraries and the wider community. This analysis provided valuable insights into the role of libraries in the digital age and the steps that can be taken to ensure their relevance and impact in an increasingly digital world.

Session transcript

Moderator – Maria De Brasdefer:
We’re good? Okay, so hi everyone and good afternoon. First of all, I would like to welcome you to this session. My name is Maria de Bras de Fer and I work as a policy and research officer for the International Federation of Library Associations and Institutions. So today and also in view of this year’s IGF theme, The Internet We Want, what we would really like to do is to take this opportunity not just to present a series of short cases to you, but also to exchange and explore with you the topic of digital empowerment and to approach it from a slightly different perspective. So of course we know that the fact that you’re here sitting in this room just as well as all the other many people who are attending the IGF this year, it means that you’re already aware of the great value that lies in using the internet as a tool to advance access to information, but also and more importantly on the great value that meaningful access has on our societies as a whole. We also know that a society where citizens can make better informed decisions will automatically translate into a more democratic society where people will exercise their citizenship in a more participatory way, but ultimately they’ll also be able to uphold their rights both inside and outside digital spaces. But of course saying that, it’s the easy part, so we are aware of that and in that case the real question remains how can we do that and also what are the best approaches for this. So having this in mind, today we would really like to present you with a short series of four five-minute case studies that will look at the themes that lie at the intersection between digital empowerment, the documentation of local knowledge, but also the mobilization of the global library infrastructure to help people access the internet and make the most of it. So for this we have four speakers with us today. I think my slides are not showing, yeah, there it is. 
So we have Eric Huerta Velazquez from Rizomatika in a collaboration with CIDSAG and APC. We also have Woro Titi Haryanti from the National Library of Indonesia. We have also Trish Hepworth from the Australian Library and Information Association who will be joining us online. And we also have Yasuyo Inoue from the Tokyo University in Japan. But before we dive into these case studies, what we would like to do is also we would really like to hear from you because as we’re not many today, it would be good to exchange more. So we would like to do a quick reflection exercise with you first. So for this, and in case you’re not familiar with it, you can either scan the QR code with your phone or you can enter the website www.menti.com and then you will see a space where you can enter the code 18381615. I will give you a couple seconds. So now what you should see on your phones is the following question. So we have the question of have you thought about how can libraries contribute to digital empowerment? If you’ve thought about it before, you can share how, in what ways, and in case you haven’t, you can also share no, it never crossed my mind, or simply no. So far we only have one yes. These responses are anonymous, but of course you will also be able to comment on them at the end of the session if you’d like. So we have a second response, yes, media literacy, awareness, coding lessons, et cetera, yes, that’s very accurate. We have another just a yes. More yeses. So that’s good. We don’t have any noes so far. Device knowledge, which is not yet digital, yeah, that’s a very interesting one, too. So I think we don’t have any other replies, but this is good news, because it means that all of us were more or less on the same page about it. It means that we’ve thought about it before, but maybe we don’t know exactly how. And this is also why we’re here gathered today, to discuss a little bit about that and give you some insights on that. 
So for this, it is time now for our first presentation. So our first presenter will be Mr. Eric Huerta. So Eric works at Rizomatica in collaboration with CITSAG and APC, and he’s also an expert of the International Telecommunication Union for connectivity issues related to remote and indigenous people, and has served as a co-rapporteur on development of information technology and communication in remote areas and groups with unattended needs. Eric, please go ahead.

Erick Huerta Velázquez:
Thank you, and I’m sorry for being late. I got lost within the rooms. It looks very similar, and I went to the different ones. Our work in Rizomatica is mainly with indigenous communities, and so that made me think about what we could share in this session. It was more about, well, the role of libraries, but also just to question what is a library for everyone, and maybe if it does the same. I think one of the things that, like, it’s a barrier for the use of the Internet. For some people, it’s that it’s non-meaningful content within the Internet. That’s explained as one of the barriers of Internet adoption, and some of them is about the content. Sometimes some communities even say, well, when they have to take a decision on which technology they have to use, sometimes some communities refuse to get into the Internet because of the kind of content that people will find there, that sometimes have no relations to the reality, or sometimes it’s exposed to certain content that they don’t want to be exposed to that specific content. So, well, that made me think about, can we put the first one? Well, that’s the sort of communities we work with. We help many, we work together with indigenous communities that want to run their own media, such as community radios, such as community mobile networks, no? They are under their own mobile networks, and also we have this applied research program in which communities define which sort of local research that they want to do for a specific task, and there’s some examples, no? So there, you have the opening of a communication center in a rural area in Quetzalan, that’s some of the sort of the communities that have their own mobile network, and this is a community in the south of Mexico that works a lot with traditional medicine, and that’s the project. The next, please. So my first question is, what is a library, no? 
So when we think about a library, so we mainly think in the picture that is in our left, but when you talk with some communities, and what is a library for them, it’s this, no? It’s the territory, because most of the territory, it’s talking, it’s saying, it’s where they learn, it’s where they teach each other, it’s where they, it’s where they gather the local knowledge and this meaningful knowledge to manage and understand the territory. And so, how do we put together these different concepts of library, or this reservoir of knowledge that it’s in the nature on the territory of the communities, and the concept of library that we find in books, no, and the storages of knowledge in books, no? And what are the chances that the internet give us to do so? So the next one. So I think that ICTs can bring together these two concepts of library, mainly because most of the knowledge, for instance, in the communities, it’s oral knowledge, no? And it’s more related to knowledge that is full in practice, no, it’s not actually, so for many of the languages there are oral, are starting to be written, but not, but mainly are languages that are not written. And so, that is the main difficulty of bringing local knowledge into the libraries, because the libraries are mainly related to books. But when we bring ICTs into a library, even if it doesn’t have a connectivity, but has a local storage and that, then you can bring inside songs, then you can bring inside music, then you can bring videos, then you can bring all those stories that form part of the local knowledge of the communities. And so, this sort of work is mainly what many communities are interested on. So for instance, in this picture that it’s in my right, they are sharing, these persons are sharing their recovery, well, experience on the recovery of some of the local language and the local variety of their languages and bringing out some words and bringing out some stories and some research they did on that. 
And then, this space, they are having a workshop on how to put this knowledge together in a handbook, on a manual, and so on, so that they could share it better with other peoples. So that’s the idea. The other photograph is from a community that was one of the first in having these mobile, self-mobile networks. And also, they have a university, and they have a library from the university as well. But one of the things that they were more interested in complementing this library was the intranet. They said, well, we got this library, we got the books here, but we need a lot of, we need to document a lot of the findings that we are having from our knowledge. We also need to bring all the videos, the music that we need for. And that’s a complementary part of the library of the local university, of the indigenous local university. And then, the next one, please. And then, in this one, I wanted to share in the recent years, we have been, we mainly work with, from a long time ago, with community radios. But this, also, we open this local research program from, that bring us some other different experiences through. So, I’m going to talk about these two experiences, two chances that we have to bring and document specific knowledge. We got, with UNESCO, some consultations to develop the policy for indigenous community radios in Mexico. And from there, some needs came, some specific needs came, and some of them were specifically related with local archives of the radios. So, a lot of the, so the radios, and this one in the left, it’s a radio that was the first community radio of Mexico, so it’s about 60 years old, this community radio. And they have a splendid archive of many of the voices, knowledge, festivals, and so on. And was about to get lost, because that’s an area that is very humid and so on. 
So, when expressing the needs of being in this local archive, the phonotech take interest in that, and help them to restore the tapes, and also they are now keeping them, they have a copy, and they are now keeping in the phonotech now, so it has access for, well, they ensure that this archive will last forever. And then also, some communities, they decide together that some community radios decided to have a one-hour program every week, and that’s in the national radio. And so that’s, that has become also an important reservoir for knowledge in the communities. For instance, it has, they determine which are the subjects they want to talk about, but they, each of these programs is really… rich in knowledge, because some, for instance, some of them talked about the textiles, and they bring together a lot of information that is not in the books, is not in there, because it comes from the person there. And well, these other two, yes, very quickly, one is a community that started the research because they were Afro-descendant, and they wanted to be what the origin of them was. And the last one is another indigenous community that they run their itinerant museum, and these pictures that you see there, then you touch these pictures, and then they play the music or they play the stories of that. And that, well, that was what I wanted to show you for the last one, and we want, thank you very much. And we wanted to share on these possibilities of using the ICTs to incorporate local knowledge in libraries, and those are where you can find more information about it. Thank you.

Moderator – Maria De Brasdefer:
Thank you so much, Eric, for sharing all these nice cases with us, but also to emphasize on the importance of not just promoting local knowledge and building up local knowledge, but also on the importance of storing it, and how hard it is sometimes for certain communities to access not only their own knowledge, but also to store it sometimes, and also the role that libraries that play in it. So thank you so much for sharing it. So please keep in mind that there will be a space for asking questions to the speakers, but now we’re going to move on towards our next presentation. So our next presenter will be Woro Titi Haryanti. So Woro, as I mentioned, she’s a senior librarian from the National Library of Indonesia, and she has also been working in capacity development for librarians and also library technicians across Indonesia for more than 30 years. So go ahead, Woro.

Woro Titi Haryanti:
Thank you. Yes. I agree with Eric said that what is library is a reservoir of knowledge, and I’m going to tell you that what the National Library rules to reveal the knowledge discovery to the community. Yes. Yeah. And go to the second. Next please. Yeah. Okay. Next please. Yeah. This is the presidential directive, five steps to be taken to accelerate the national digital transformation. This is not the area of the National Library, but this is close related to the National Library. It is the function of the Ministry of Communication and Information. It should be taken into immediate action to expand the internet access and develop digital infrastructure and provide internet services for all. I mean, there is a targeted for the people, the population to get the access to the internet, and this is important for us as a library. So as long as they get the access, then the knowledge can be transferred there. And then the second, it’s targeted about 196,000,000, 7,014, and 70, that is the targeted to get the access of the internet. And then we have to prepare transformation digital roadmap for the government, strategic for public services, social ads, and et cetera. And then the third is to take immediate action to integrate national data center. This is also a library can contribute the data that is to be restored in the national data center. And then into taking it down into the need of the digital talents. This is also important for us, because through this digital talent, that there will be training. There will be training for peoples to be able to access the internet. That is, well, it targeted quite a lot from the Ministry of Communication. And this facilitates to this data center, that national data center, it needs to facilitate all the governments to restore their, to store their data, and then can be accessible for the community. And this also, the digital talent include digital literacy. Their target is all over Indonesia. 
They collaborate with 12 ministries, private sectors, and communities. Digital skills, digital culture, digital ethic, and digital safety. This will be covered in the curriculum, digital society, digital economy, and digital government. And then they divide, they put it into two categories for the training, that is the training for the skills for proficient class, and then also the empowering the cyber creativities that is the inclusion class. And this, next please, and this also directed from the president for the libraries. To improve and expand access to the digital libraries in order to accelerate the human resource development who will master science and technology, improve creativities and innovations to the create job opportunities, reduce unemployment rate, increase income per capita, as well as increase foreign exchange to create prosperity for all. That is the directive to the library. Next. This is the function of the role and the function of the national libraries. Yes, as the library, as the networking center, and also the preservation center. This networking means that we will collaborate with other institutions and then make a network to create more local knowledge, create knowledge that can be shared together. And then preservation center, as this also we have to localize their local information, local content that should be preserved and also can be accessed. And then this research center, depository center, and reference library center, and of course this library development center, but in here, this is the role of national library. And we have also obligation, next thing, we have the obligation to develop library national system in supporting national education system and guarantee the sustainability of libraries as a learning center. That’s again that we have to provide them with the access and also the content. 
And guarantee the availability of the library services throughout the nations and guarantee availability of collections through translation, transliteration, transcription, and transmedia. And also we promote reading habits and also develop library collection and develop national library itself. And we also have to develop and appreciate those who preserve and conserve manuscripts. Next, please. Yes, this is libraries is not yet fully integrated to the national data infrastructure. Yes, it is to implement what the directive of the presidents that the national library is as part of the government, so we have to contribute to send our data to the national data center because this is an example of NLE, that’s the national library. And then we have two that actually enlist and then one search. Enlist is the open, what is it, the application for, what is it, to do the library management that is MARC-based and online. One search that I will talk about it later on. And also IPUSNAS, it can be accessed all over Indonesia. And other ministry will do the same thing. Next, please. Yes, this is knowledge discovery that is the, we have the Indonesian OneSearch. And here is the Indonesian OneSearch is single search portal for all public collection from libraries. At the moment, we can collect, we can have the 12,608,000 records. And then also the member is around 11,000, sorry, it’s more than, not 11,000, actually this is for the repositories, the repository itself is 11,000. And this is connected almost all the libraries in Indonesia. Not all, but mostly about more than 20% of the libraries in Indonesia is connected to us. And this is for the, the system is for the anti-plagiarism tool, subject analysis tools. And OAI-PMH, the Open Archives Initiative Protocol for Metadata Harvesting. Next. Yeah, this is the institution I mentioned, this library institution is 300, librarians, the repository is 4,000, and then the repository institution is 11,000.
It’s a very big knowledge can be reserved there. So we can, more and more knowledge is coming, then we also motivate those who are not yet become part of this program, they have to join us. And we also give them freedom to whether they want to send it to us, it’s only the abstract or only the metadata or full text is up to their policy in individual’s institutions. And we have that, what is it, the contributors, it’s quite a lot, National Library, of course, the biggest contributors, and also there’s a university, yeah, that’s also contribute their collections to us. Next please. Yeah, this is the e-mobile. We have the e-PUSNAS that I mentioned earlier, yes, this is, we have that social media-based library provide digital books to read, share, and shop. This application available on mobile, and then using digital right management and technology as the security. And in this also, we have the menu is for e-donations, for those who write books and then want to donate their books and give the right to the National Library. So you can, and up to now we have around 140 books that is donated to the National Library, and that is free, then everybody can access it. Well we can, it doesn’t have the royalty things, no, we are not talking about the royalty things because we can give it, what’s it, voluntarily, and it’s free. Okay, next. And this is another one, this is the, our latest is Bintang Purposedu, this is for the education, and we work quite close to the Minister of Education, and also Minister of Religion. Why Minister of Religion? Because Minister of Religion, they also have schools that we can collaborate in. And this platform provide to improve access to the digital content for schools and universities. The contents are varied, such as audio books, video books, educational tutorials, scientific journals, all of this can be accessed via multiple platforms. 
The total collections that we have in here is for elementary schools, a total collection, for the elementary schools is 26,000-something, and then junior high school is 22,000-something, and senior high school is 50,000, and then university is 262,000-something, and the digital books from the Minister of Religion, we have 58,000, and from the Minister of Education we have 1,063,000 books that is stored there, so it can be accessible for the community. I think next, yeah, next, oh, this is, sorry, yeah, this is for the eResources, eResources is the service that we have, this is digital collection for Service National Library Indonesia, which are either subscribed or made independently by the National Library. It means that we subscribe the books that I think everybody is familiar with here, and there’s one, there is Niliti, that’s mainly for the research for the manuscript, and also there’s Balai Pustaka, that is we digitize their book, and then we put it here, and this is free, and to be able to access this, you have to be the member of the National Library, and you can do it online to become the National Library members, and the National Library memberships with the membership number, we now connect it to our national ID, and that’s it, integrated. Thank you, that’s all, I think.

Moderator – Maria De Brasdefer:
Thank you so much, Woro, for sharing this case, too, yeah, I can also think it is indeed an interesting example of a case that can be followed by other libraries, but not just in terms of digital empowerment, but also in terms of economic growth that is tied to the use of libraries, so thank you so much for sharing that with us, and now we’re going to move on to the next case, which is the case from Trish Hepworth, who is the Director on Policy and Education for the Australian Library and Information Association, and she works across the sector to empower the workforce, and also strengthen libraries to achieve a socially just and progressive society.

Patricia Hepworth:
Thank you, Maria, and I wish I was there, but thank you very much for having me. I’d like to acknowledge today that I’m coming from the lands of the Ngunnawal and the Ngambri people, and pay my respects to elders past and present. Maria, are my slides up? Thank you, perfect, brilliant. I guess I wanted just to very quickly have a little bit of a look at what this looks like from Australia. So in Australia we have an index called the Digital Inclusion Index that gives us statistics about digital inclusion across the whole population, and the Digital Inclusion Index measures the accessibility, the affordability, and the ability of people online, and then basically gives a score. What you can see up there is some of the various vectors that we know are wildly different across the country. So Australia is a very concentrated metropolitan kind of a country. Most of our population lives in cities along the coast, and there is a huge difference between the digital inclusion scores in metropolitan areas, which are quite high, and the digital inclusion scores in regional and remote Australia, which are much lower. Similarly, for our First Nations people, so the Aboriginal and Torres Strait Islander peoples of Australia, we can see that they have a much lower digital inclusion index than the Australian population generally. But again, in particular, the further you go from those metropolitan areas, the lower the digital inclusion index. Now the next slide, please, Maria. And across all of the different vectors, we see a really significant change across age groups. So this graph on your screen at the moment talks about digital exclusion. So it’s looking at those two bits around accessibility and affordability. Now, as you can see, for younger age groups, the ability to access digital worlds to be online is much higher. And as you go through the older age groups, that accessibility really drops. And if I could have the next slide, Maria.
And that, probably unsurprisingly, goes with ability as well. We see this across the board: accessibility and ability are closely correlated, so people with the most access also have the most ability and comfort online, while those with the least access, so First Nations people, regional people, older people, have the least ability online. If I could have the next slide, to look at what that actually means in practice: only 23 percent of Australians were confident that they could edit a video and post it online. So the fundamental ability to be on TikTok, for example, is shared by only about a quarter of people in Australia. And only 35 percent, just over a third, were confident that they could work out if they were being harassed online and, if they were, what they could do about it or which authorities they could report it to. And if I could have the next slide. While people's abilities and media literacy are quite low, people's interest in being secure and able digital citizens is very high. When you ask people, they are really keen to know how they can protect themselves from scams. They want to use all the different forms of media to stay connected with community, with friends and family. And if we have a look at the next slide, this is very much where libraries come in. Across the library systems, and in particular libraries in educational institutions (schools and TAFEs, which is our vocational education in Australia, and universities) and public libraries, we see that librarians are already working solidly in these areas. So you have the infrastructure from libraries for access to the Internet, as Woro and Eric have said, the ability to access community-based connections, but also nationwide digital collections. So you have those accessibility ports. But what we also see with libraries is a huge role in bolstering that ability as well.
So when you ask libraries, they are helping people find resources in the catalogue and find information online. But they are also providing basic support: how to use computers, how to use mobile phones, how to use the Internet, how to stay safe online. And if we can have the next slide, I wanted to take a very quick look at a little local library service, Hume Libraries, which is based in Naarm, on Wurundjeri country, in Melbourne. Hume Libraries is situated in a highly multicultural area, and you can see how all of those correlations intersect there. They have communities with English as a second language, which is often a marker of digital exclusion. They have older communities who often have English as a second language, and they are outer metropolitan, another place where you will find people with lower digital literacy. If I could have the next slide. Hume Libraries have done a huge amount of work in conjunction with the local university to run a research project on how to deliver digital literacy programs for culturally and linguistically diverse communities. And the thing about using the libraries is that the infrastructure was already there. They were able to pull together the resources they had around community engagement, they were able to harness the people in the libraries and the community relations that were already there, and they had a system in place for the programs. Working with these three together, they very successfully managed to tailor digital inclusion programs for CALD communities, that is, culturally and linguistically diverse communities, across age ranges and abilities. And that looks different for different people.
You might have people who are absolutely fluent in spoken English but unable to manage written English, or who perhaps need their content in video or audio format. You might have people with different accessibility issues. You need to be able to find case studies and ways of working with people that relate to collections that are important to them and communities in which they are already participating. So running these sorts of programs in your local library means you can have a very tailored experience, where you leverage those central points of access but also bring in all of the support from the libraries to upskill that ability piece. If I could have the very last slide. I think one of the takeaways from Australia's experience is that it's not easy, but libraries are there. Public libraries are in every community across Australia, regional and remote, linguistically diverse, with older people coming in. There is no other organisation currently in a better position, with the people already coming in the door and the access in place. But having said that, every single community is different. So one of the things the culturally and linguistically diverse guidelines developed, for example, was a toolkit for each library to work with local partners to build its own localised program. And the outcome of the program was that we had a group of people who went from being quite digitally nervous to being digitally confident. That meant they were more confident digital citizens, but they were also more confident citizens, better able to partake in Australian society and in democratic society. So it was a resounding success as one case study, and it's being replicated across the country. I hope that was of some interest to you all. Thank you.

Moderator – Maria De Brasdefer:
Thank you, Trish, so much for sharing the case of Australia. It's really interesting to see how, in a country as culturally and linguistically diverse as Australia, what could be seen as a challenge is something libraries seem to be addressing very successfully. So thank you so much for sharing this case. As we're running a bit short of time, I will move on to our next and last presenter, Yasuyo Inoue, who will give us a local perspective on this topic. Yasuyo is a professor of public librarianship at Tokyo University and has been a professor at universities for more than 35 years, focusing mainly on children's and young adult library services. In the past she was also a member of the Intellectual Freedom Committee of the Japan Library Association. Please go ahead, Yasuyo. Thank you.

Yasuyo Inoue:
Thank you, Maria. The time is short and I didn't bring many slides, so I just want to give some general information about Japan. Right now, from elementary school through junior high and senior high, most kids have their own tablet or PC. So on the technical side they know how to use their computers, but the problem is the lack of content. That is why the library has a role in providing information to kids. Maybe 50 years from now most Japanese people will be able to use any kind of computer, but that is later on. What the library should do right now, I think, it can do using ICT techniques. The library can connect rural areas and urban areas. There is an unfair situation right now, but libraries can bridge it, connecting different areas to each other through the materials and the information at the libraries. Overall, the library has three roles. One, as the other speakers mentioned, is as a kind of community activity center that, as Eric said, preserves a community's own culture and traditions. Another is as an educational or learning center, an information center: not only books but also a lot of data. In that sense the library is a kind of data center; it has concentrated and stocked a lot of big data. Now many public libraries in Japan, especially the big prefectural libraries, want to digitize their traditional historical materials and provide them to users. The National Diet Library in particular, the national central library of Japan, has huge holdings of data. It has changed its role, and the copyright rules have changed, so it now provides the data via the Internet, delivering content to individual users. So more libraries can provide more data, not only at the National Diet Library level; local public libraries can also bring the community together.
On the slide right now is a very small town library, Shiwa Library, close to Morioka City. I don't know why, but the New York Times said Morioka is a place foreigners should visit. This is a small town library, but in the central photo you can see Japanese sake: they exhibit it inside the library and show how the sake is brewed. Sometimes they tell people how to brew it, what it tastes like, and what the character of the sake is. They wanted to show their local business to people at the library. On the right side, the library connects with an agricultural cooperative. Once a week there is a kind of vegetable market in front of the library, so people buy vegetables and then come into the library, where there is a collection of recipes: whichever vegetable you bought, you can use these recipes at home. So the agricultural business and the library are connected directly to the farmers who grow the vegetables in that local area, and the library stimulates local business. That is another way the library acts as a community activity center. And not only physical things: maybe in the future, more small town libraries will provide digital materials too. So if you have any trouble or questions, go to the local library; maybe they will help you expand your local business. Thank you.

Moderator – Maria De Brasdefer:
Thank you. Thank you so much, Yasuyo, and all of you who are here today. As a final remark, I can only say that across all the cases presented you can see a common factor: how the role of libraries really is evolving over time with the use of the Internet and access, and all that communities can get out of it at a local level. So thank you very much for sharing. We still have a couple of minutes left, so I would like to open the floor for questions to the speakers. I don't know if we have any questions online. No? Okay.

Audience:
Thank you. I did have a question for Trish Hepworth, but is she still online? She is. I am. Oh, you are, great. Good to see you. Trish, we had some contacts within IFLA in the past couple of years. And one of the contacts we had was that you made a presentation at a library webinar that I organized in the framework of the Asia Pacific Regional Internet Governance Forum. And that was, when was that? Two years ago, I forget precisely. But I wanted to ask you, was there anybody at the Brisbane meeting in August this year, the Brisbane meeting of the Asia Pacific Regional Internet Governance Forum, was there anybody there who was talking about the contribution of libraries? Because it seems to me that your comments about the Digital Inclusion Index are highly relevant to all countries. In fact, you’ve got a model there which we should all probably imitate, that is, countries which haven’t got one should have one, have that sort of system and monitor it and develop it. But was there anybody at Brisbane who was talking about library information services, whether on the coast, as you said, in the metropolitan areas or in the outback, in remote areas? Do you know?

Patricia Hepworth:
Thanks, Winston, for the questions. We didn't have an ALIA or library representative as such, but we definitely had people at that forum speaking to things like the Digital Inclusion Index and also to the role of other bodies, such as libraries. One thing I know is very top of mind for our policy makers in Australia at the moment is the increasing need for things like media literacy and digital skills with the rise of generative AI, and that's certainly something getting a lot of attention in big structural conversations. There's both the doom and gloom, how will people be able to detect AI scams, and what does this mean for the future of Internet search, but also those huge potentials. When you're working with people who might have lower levels of written literacy, the ability to use generative AI to support them with job applications, or even in writing searches and prompts, is huge. So certainly from a policy perspective at the moment, I think there's a really important role for libraries to play in that digital skills and AI media literacy space, because realistically, in a country like Australia, if libraries don't do that work there isn't anywhere else for adults who are not in formal education to go.

Moderator – Maria De Brasdefer:
Thank you, Trish. So do we have any other questions from the floor?

Audience:
Okay, thank you. I am Johanna Munyao, a member of a County Assembly from Kenya. I want to appreciate the presenters for packaging the information in the right way, very clear, and also to appreciate the approach of taking knowledge closer to our rural communities. As I appreciate that, I've realised that this approach helps our young ones come together, socialise, share knowledge, and maybe also get exposure. My question is whether there is sensitisation on publishing: in academia, the trickiest part is how to publish some of these works or activities so that others elsewhere can access the same information and our experiences. Do we ever conduct feasibility studies to vet the content and check its compliance with the legal frameworks that may govern whatever is published for access through the Internet? And again, I come from a rural area where poverty levels are a real threat. So you'd realise that where the government is not able to come in and fully support, putting up such structures, however good they are, and I really appreciate them, becomes a challenge. Personally, I run an institution with a very tiny library, and the approach I've heard here has really enlightened me: I had thought only of addressing the needs of the learners within that small institution, but I can now see that learners from other institutions could come together and, with access to such a facility, share knowledge and even take it to the higher level of publishing it on the Internet and sharing the experiences with the world over. Thank you.

Moderator – Maria De Brasdefer:
Thank you so much. I think maybe we have time for one last question. No? Well, I think we're at the end of our session anyway, but thank you so much to all the speakers who presented today, thank you for sharing your cases and your stories with us, and thank you to the attendees for your questions. We really appreciate your presence, and if you would like to collaborate with us in the future, or if you have any ideas or opportunities for collaboration with libraries, please feel free to reach out to us. Thank you. Thank you, Trish. I don't know if you can see me, but can you hear me? Thank you, Trish. Thank you.

Audience

Speech speed

143 words per minute

Speech length

595 words

Speech time

250 secs

Erick Huerta Velázquez

Speech speed

123 words per minute

Speech length

1415 words

Speech time

688 secs

Moderator – Maria De Brasdefer

Speech speed

152 words per minute

Speech length

1697 words

Speech time

669 secs

Patricia Hepworth

Speech speed

168 words per minute

Speech length

1740 words

Speech time

622 secs

Woro Titi Haryanti

Speech speed

139 words per minute

Speech length

1650 words

Speech time

711 secs

Yasuyo Inoue

Speech speed

142 words per minute

Speech length

686 words

Speech time

290 secs

AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Kevin Luca Zandermann

Artificial Intelligence (AI) has the potential to revolutionize public services, particularly in personalized healthcare and education. Examples from Finland and the UK demonstrate how AI has been successfully integrated into regulatory enforcement practices, highlighting its transformative impact on public service delivery.

Regulatory bodies should seriously consider incorporating AI tools into their processes. Finland’s use of AI in cartel screening and the UK Competition and Markets Authority’s development of an AI tool for automatic merger tracking serve as successful examples, streamlining operations and enhancing efficiency.

However, it is crucial to strike the right balance between automated AI-powered steps and human oversight. Effective regulation requires the integration of both elements. The Finnish Authority, for instance, allows a stage of human oversight even after AI detection, ensuring decisions rely on well-informed processes. Similarly, Article 14 of the European Union’s AI Act emphasizes the importance of human oversight in regulating AI.
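The oversight arrangement described above can be sketched as a simple routing step. The structure, field names, and threshold below are illustrative assumptions for this summary, not any authority's actual workflow: the key property is that a positive AI detection never triggers enforcement on its own, while only clear negatives are closed automatically.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    case_id: str
    ai_confidence: float  # model's confidence that an infringement occurred (0..1)

def route_findings(findings, auto_dismiss_below=0.2):
    """Send every non-trivial AI detection to a human review queue.

    Only clear negatives are closed automatically; even high-confidence
    detections still wait for a human decision.
    """
    review_queue, auto_closed = [], []
    for f in findings:
        if f.ai_confidence < auto_dismiss_below:
            auto_closed.append(f)
        else:
            review_queue.append(f)
    return review_queue, auto_closed

findings = [Finding("A-1", 0.95), Finding("A-2", 0.05), Finding("A-3", 0.55)]
review, closed = route_findings(findings)
print([f.case_id for f in review])  # -> ['A-1', 'A-3']
```

The design choice worth noting is that the threshold gates only automatic dismissal, never automatic action, which mirrors the human-oversight requirement the paragraph describes.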

While there are potential benefits, the use of AI in regulation, particularly with Large Language Models (LLMs), also carries risks. A Stanford survey reveals that only one out of twenty-six competition authorities mentions using an LLM-powered tool, highlighting the need for cautious implementation and consideration of potential implications.

Kevin Luca Zandermann suggests regulators engage in retrospective exercises with AI, reviewing well-known cases to identify previously unnoticed patterns and enhance regulatory processes. Clear and comprehensive AI legislation, particularly regarding human oversight, is crucial. The lack of clarity in the EU’s current AI legislation raises concerns and emphasizes the need for further development.

Despite limited resources, conducting retrospective exercises and developing Ex-officio tools remain crucial, especially given the impending AI legislation. These exercises help regulators adapt to the evolving technological landscape and effectively integrate AI into their practices.

In conclusion, AI has the potential to transform public services, but its implementation requires careful consideration of human oversight. Successful integration in law enforcement and regulation in Finland and the UK serves as evidence of AI’s capabilities. However, risks associated with technologies like LLMs cannot be underestimated. Regulators should engage in retrospective exercises, work towards comprehensive AI legislation, and address potential concerns to ensure responsible and effective AI implementation.

Sally Foskett

The Australian Competition and Consumer Commission (ACCC) is taking proactive measures to address consumer protection issues. They receive hundreds of thousands of complaints annually and are attempting to automate the process of complaint analysis using artificial intelligence (AI). This move aims to improve their efficiency in handling consumer issues and ensure fair treatment for consumers. Additionally, the ACCC is exploring the collection of new information such as deceptive design practices, which will enhance their understanding of consumer concerns and enable them to better protect consumers’ rights.
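As an illustration of what automated complaint analysis can look like at its simplest, the sketch below scores complaints against keyword lists. The categories and keywords are invented for this example and do not reflect the ACCC's actual taxonomy; a production system would use a trained text classifier rather than keyword matching.

```python
from collections import Counter

# Hypothetical category keywords -- invented for illustration only.
CATEGORY_KEYWORDS = {
    "refunds": {"refund", "return", "money back"},
    "scams": {"scam", "fraud", "phishing"},
    "unfair_terms": {"contract", "clause", "terms"},
}

def triage(complaint: str) -> str:
    """Assign a complaint to the category with the most keyword hits."""
    text = complaint.lower()
    scores = Counter()
    for category, keywords in CATEGORY_KEYWORDS.items():
        scores[category] = sum(1 for kw in keywords if kw in text)
    best, hits = scores.most_common(1)[0]
    return best if hits > 0 else "unclassified"

print(triage("I was promised a refund but never got my money back"))  # -> refunds
```

Even this crude scorer shows the shape of the pipeline: free-text complaints in, a routing label out, with anything unmatched falling through to manual handling.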

Understanding algorithms used in consumer interactions is another key area of focus for the ACCC. Regulators must be able to explain how these algorithms operate to ensure transparency and fairness in the marketplace. To achieve this, the ACCC gathers information such as source code, input/output data, and business documentation. By comprehending and being able to scrutinize these algorithms, they can better identify potential issues related to consumer protection and take the necessary enforcement actions.

The ACCC is also supportive of developing consumer-centric AI. They recognize the potential of AI in helping consumers navigate the market and make informed decisions. This aligns with the Sustainable Development Goal 9: Industry, Innovation and Infrastructure, which encourages the use of innovative technology to drive economic growth and promote industry development. The ACCC believes that by leveraging AI technology, consumers can benefit from more personalized and accurate information, leading to better economic outcomes and increased satisfaction.

In terms of data gathering, the ACCC acknowledges the importance of considering various sources. They emphasize going back to the basics and critically assessing the sources of data. By ensuring that the data used for analysis is accurate, reliable, and representative of the market, the ACCC can make more informed decisions and take appropriate actions to safeguard consumer interests. The ACCC is exploring the possibility of obtaining data from data brokers, hospitals, and other government departments. Additionally, they plan to make better use of social media platforms to detect and address consumer issues promptly.

It is evident that the ACCC advocates for utilizing data from different sources in their decision-making and enforcement activities. They suggest using data from other government departments, data brokers, hospitals, and social media to gain a comprehensive understanding of consumer trends, behaviours, and concerns. This multi-source data approach allows the ACCC to identify emerging issues, better protect consumers, and ensure fair competition in the marketplace.

In conclusion, the ACCC is actively pursuing proactive methods of detecting and addressing consumer protection issues. They are leveraging AI to automate complaint analysis, enhancing their understanding of algorithms used in consumer interactions, and supporting the development of consumer-centric AI. The ACCC recognizes the importance of considering various sources of data and is exploring partnerships and collaborations to access relevant data. By adopting these strategies, the ACCC aims to enhance consumer protection, promote fair business practices, and contribute to sustainable economic growth.

Christine Riefa

The use of artificial intelligence (AI) in consumer protection is seen as a potential tool, but experts caution that it is not a panacea for all the problems faced in this field. While 40 to 45% of consumer authorities surveyed are currently using AI tools, it is important to note that there are other technical tools being employed for consumer enforcement that are not AI-related.

One of the main concerns raised is the potential legal challenges that consumer protection agencies may face when using AI for enforcement. Companies being investigated may challenge the use of AI, and this issue has not been extensively studied yet. However, it has been observed that agencies with a dual remit, not solely dedicated to consumer protection, tend to have better success in implementing AI solutions.

Consumer law enforcement is considered to be lagging behind other disciplines, but efforts are being made to catch up. It is acknowledged that there is still work to be done in terms of classification and normative work in AI to ensure that all stakeholders are on the same page regarding what AI is and what it entails.

Collaboration among different stakeholders is deemed crucial for achieving usable results in consumer protection. It is emphasized that consumer agencies need to work together in unison to effectively address the challenges faced in this field.

Furthermore, it is argued that AI should not only be used for detecting harmful actions but also for preventing them. Consumer law enforcement needs to undergo a transformative shift in its approach. AI can be leveraged more effectively by adopting a prescriptive method that focuses on preventing harm to consumers rather than solely relying on detection.

In conclusion, while AI shows promise in consumer protection, it is not a solution that can address all challenges on its own. Consumer protection agencies need to consider potential legal challenges, collaborate with other stakeholders, and focus on leveraging AI in a transformative way to ensure effective consumer protection.

Martyna Derszniak-Noirjean

Artificial intelligence (AI) is reshaping the consumer protection landscape, presenting both benefits and challenges. It is vital to examine the implications of AI in consumer protection and determine the necessary regulations to ensure a fair and balanced environment.

AI provides an economic technological advantage over consumers, giving firms and entrepreneurs the potential to exploit the system and engage in unfair practices. This raises concerns about the need for effective protections to safeguard consumer rights. Therefore, there is a critical need to discuss the use of AI in consumer protection. The sentiment surrounding this argument is neutral, reflecting the requirement for comprehensive examination and evaluation.

Understanding the extent of regulation required for AI is a complex task. AI has the potential to both disadvantage and assist consumers. Striking the right balance between regulating AI, innovation, and economic growth is challenging. This argument underscores the importance of carefully considering the implications of excessive or inadequate regulation to ensure a fair marketplace. The sentiment remains neutral, highlighting the ongoing debate regarding this issue.

However, AI also offers opportunities to enhance the efficiency and effectiveness of consumer protection agencies. Consumer protection agencies are exploring the use of AI in investigating unfair practices, and they are developing AI tools to support their efforts. This signifies a positive sentiment towards leveraging AI for consumer protection. It emphasizes the potential of AI to augment the capabilities of consumer protection agencies, enabling them to better safeguard consumers’ rights.

Based on the analysis provided, AI is significantly transforming consumer protection. It is crucial to strike the right balance between regulation and innovation to ensure fairness and responsible consumption. While concerns regarding potential unfair practices exist, AI also presents an opportunity to enhance the effectiveness of consumer protection agencies. Overall, a neutral sentiment prevails, emphasizing the need for ongoing discussions and evaluations to successfully navigate the complexities of AI in consumer protection.

Piotr Adamczewski

The use of artificial intelligence (AI) in consumer protection agencies was a key topic of discussion at the ICPEN conference. It was highlighted that AI is already being utilized by many agencies, and its development is set to continue. The main argument put forward is that AI is essential for detecting both traditional violations and new infringements connected to digital services.

To further explore the advancement of AI tools in consumer protection, a panel of experts was invited to contribute their perspectives. These experts included professors, representatives of international organizations, and enforcement authorities. Professor Christine Riefa conducted a survey that shed light on the current usage of AI by consumer protection agencies. This survey likely provided valuable insights into the challenges, benefits, and potential for improvement in AI implementation.

The UOKiK (Poland’s Office of Competition and Consumer Protection) recognized the potential of AI for enforcement actions and initiated a project specifically focused on unfair clauses. The project was born out of a need for efficiency and was supported by an existing database of 10,000 established unfair clauses. Training AI to detect such clauses in standard contract terms proved to be particularly useful, as the process is time-consuming and labor-intensive for human agents.
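A minimal way to bootstrap such a detector is fuzzy matching of contract clauses against a register of clauses already ruled unfair. The two register entries below are invented examples (the real register holds some 10,000 clauses), and this string-similarity sketch stands in for the trained language model a production tool would use.

```python
from difflib import SequenceMatcher

# Illustrative entries only -- invented for this sketch.
KNOWN_UNFAIR_CLAUSES = [
    "the seller may change the price at any time without notice",
    "the consumer waives all rights to pursue claims in court",
]

def flag_clause(clause: str, threshold: float = 0.75):
    """Return (closest known unfair clause, similarity) if above threshold."""
    clause = clause.lower().strip(".")
    scored = [
        (SequenceMatcher(None, clause, known).ratio(), known)
        for known in KNOWN_UNFAIR_CLAUSES
    ]
    score, best = max(scored)
    return (best, round(score, 2)) if score >= threshold else None

print(flag_clause("The seller may change the price at any time without notice."))
print(flag_clause("Delivery takes up to 14 days."))  # no match -> None
```

The point of the sketch is the workflow, not the matcher: candidate clauses are extracted from standard contract terms, compared against the register, and only near-matches are surfaced for a human lawyer to assess.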

The UOKiK is also actively working on a dark patterns detection tool. Dark patterns refer to deceptive elements and tactics used in e-commerce user interfaces. The goal is to proactively identify and address violations rather than relying solely on consumer reports. Creating a detection tool specifically targeted at dark patterns aligns with the objective of ensuring responsible consumption and production.
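A crawler-based detector can begin with cheap structural heuristics before any machine learning is involved. The sketch below flags just one well-known dark pattern, pre-ticked opt-in checkboxes, and is an assumption-laden illustration rather than a description of UOKiK's actual tool, which would cover many more patterns (countdown timers, hidden fees, confirm-shaming wording, and so on).

```python
from html.parser import HTMLParser

class DarkPatternScanner(HTMLParser):
    """Flags one simple dark pattern: pre-ticked opt-in checkboxes."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # A bare `checked` attribute parses as the key "checked" with value None.
        if tag == "input" and a.get("type") == "checkbox" and "checked" in a:
            self.findings.append(f"pre-checked checkbox: {a.get('name', '?')}")

page = '<form><input type="checkbox" name="newsletter" checked></form>'
scanner = DarkPatternScanner()
scanner.feed(page)
print(scanner.findings)  # -> ['pre-checked checkbox: newsletter']
```

Run over crawled checkout and sign-up pages, even a heuristic like this turns passive complaint-handling into the proactive identification the paragraph describes, with flagged pages then queued for human inspection.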

In addition, the UOKiK is preparing a white paper that will document its experiences and insights regarding the safe deployment of AI software for law enforcement. The white paper aims to share knowledge and address potential problems that the UOKiK has encountered. This document is a valuable resource for other agencies and stakeholders interested in implementing AI technology for law enforcement purposes. The expected release of the white paper next year indicates a commitment towards transparency and information sharing within the field.

Overall, the expanded summary highlights the increasing importance of AI in consumer protection agencies. The discussions and initiatives at the conference, the survey conducted by Professor Christine Riefa, the projects carried out by the UOKiK, and the upcoming white paper all emphasize the potential benefits and challenges associated with deploying AI in the realm of consumer protection. The insights gained from these endeavors contribute to ongoing efforts towards more effective and efficient law enforcement in the digital age.

Melanie MacNeil

AI has the potential to empower consumers and assist consumer law regulators in addressing breaches of consumer law. Consumer law regulators have started using AI tools to increase efficiency in finding and addressing potential breaches of consumer law. These tools can support preliminary assessments of investigations and highlight conduct that might be a breach of consumer law. For example, the Office of the Competition and Consumer Protection in Poland uses web crawling technology with AI to analyze consumer contracts and identify unfair contract terms.

Similarly, regulators are utilizing AI to detect and address product safety issues. Korea's Consumer Injury Surveillance System uses AI to search online for products that have been the subject of a product safety recall. Additionally, AI technology and software enable early diagnosis of product safety issues in smart devices. These advancements contribute to safer consumer products and protect consumers from potential harm.
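Matching live marketplace listings against a recall register can start with nothing more than name normalisation, as sketched below. The product names are fabricated for the example; a real system of the kind the summary describes would add fuzzy text matching and image recognition on top of this exact-match baseline.

```python
import re

# Hypothetical recall register -- product names invented for illustration.
RECALLED_PRODUCTS = {"acme glow lamp", "supertoy drone x2"}

def normalise(name: str) -> str:
    """Lowercase and strip punctuation so listing titles compare cleanly."""
    return re.sub(r"[^a-z0-9 ]", "", name.lower()).strip()

def find_recalled(listings):
    recalled_norm = {normalise(p) for p in RECALLED_PRODUCTS}
    return [title for title in listings if normalise(title) in recalled_norm]

listings = ["ACME Glow Lamp!", "Ordinary Kettle", "SuperToy Drone X2"]
print(find_recalled(listings))  # -> ['ACME Glow Lamp!', 'SuperToy Drone X2']
```

The normalisation step is the design point: recall notices and seller listings rarely spell a product identically, so comparisons happen in a canonical form rather than on raw titles.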

AI not only helps with consumer law and product safety but also provides opportunities to nudge consumers towards greener choices. The German government has funded a digital tool that uses AI to provide consumers with a series of facts about how to reduce their energy consumption. This empowers consumers to make more environmentally conscious decisions. Additionally, AI can assist consumers in making green choices by breaking through the information overload on green labels, helping them better understand the environmental impact of their choices.

However, there are concerns about new and emerging risks associated with AI and new technology in relation to consumer health and safety. The OECD is currently undertaking a project to assess the impact of digital technologies in consumer products on consumer health and safety. The focus is on understanding and addressing product safety risks through safety design. It is important to address and mitigate these risks to ensure the well-being and safety of consumers.

Regulators are often criticized for being slow to address problems compared to businesses, which are not as restricted. There is a need for regulators to adapt and keep pace with technological advancements to effectively address consumer issues. Collaboration and sharing of learnings are crucial in moving quickly to address issues. By working together and sharing knowledge, stakeholders can collectively address the challenges posed by AI and emerging technologies.

In conclusion, AI has the potential to transform the consumer landscape by empowering consumers and assisting regulators in addressing breaches of consumer law and product safety. However, there is a need to carefully navigate the risks associated with AI and ensure consumer health and safety. Collaboration and knowledge-sharing are crucial in effectively addressing the challenges posed by emerging technologies. By embracing AI’s potential and working together, stakeholders can create a consumer environment that is fair, safe, and sustainable.

Angelo Grieco

The European Commission has prioritised the development and use of AI-powered tools for investigating consumer legislation breaches. To assist EU national authorities, they have established the Internet Investigation Laboratory (eLab), which utilises artificial intelligence to conduct extensive evaluations of companies and their practices. eLab employs web crawlers, AI-powered tools, algorithms, and analytics to aid in large-scale reviews. This demonstrates the European Commission’s commitment to consumer protection and leveraging AI technology.
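
A web sweep of the kind eLab supports can be sketched at its simplest as extracting a page's visible text and checking it for suspect phrases, here pressure-selling wording. This is a hypothetical illustration; eLab's actual crawlers and AI models are not public, and the phrase list is invented.

```python
from html.parser import HTMLParser

# Phrases associated with pressure-selling dark patterns; a real sweep tool
# would combine signals like these with ML classifiers over page text.
PRESSURE_PHRASES = ["only", "left in stock", "offer ends", "hurry"]

class TextExtractor(HTMLParser):
    """Collect visible text from an HTML page."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def sweep_page(html: str) -> list[str]:
    """Return the pressure phrases found in a page's visible text."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks).lower()
    return [p for p in PRESSURE_PHRASES if p in text]

page = ("<html><body><h1>Deal!</h1>"
        "<p>Hurry - only 2 left in stock, offer ends tonight.</p>"
        "</body></html>")
print(sweep_page(page))
```

Running this over crawled pages would give investigators a ranked shortlist of sites to examine rather than a verdict.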

Behavioural experiments are used to assess the impact of commercial practices, specifically targeted advertising cookies, on consumers. These experiments play a crucial role in enforcing actions against major businesses and ensuring consumer protection. They allow regulatory authorities to thoroughly examine the effects of various practices and address any potential harm.
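
Behavioural experiments of this kind typically compare outcomes between a control and a treatment group. A standard way to test whether, say, consent rates differ between two cookie-banner designs is a two-proportion z-test; the sketch below uses invented numbers, not data from any Commission experiment.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two sample proportions,
    e.g. consent rates under two cookie-banner designs."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical experiment: 700/1000 participants consent with a pre-ticked
# banner versus 500/1000 with a neutral banner.
z = two_proportion_z(700, 1000, 500, 1000)
print(round(z, 2))  # well above 1.96, so significant at the 5% level
```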

In order to investigate and mitigate risks associated with AI-based services, a proactive approach is necessary. Investigations are currently underway to assess the hazards posed by AI-powered language models that generate human-like text responses. These models have the potential to manipulate information, spread misleading content, perpetuate biases, and contain errors. Identifying and addressing these risks is crucial for responsible and ethical use of AI.

Angelo Grieco is leading efforts to enhance the use of AI in investigations, with a focus on compliance monitoring for scams, counterfeiting, and misleading advertising. Grieco aims to improve the efficiency and effectiveness of investigations through the use of advanced technology. Additionally, there is a recognition of the importance of improving case handling processes and making evidence gathering more streamlined. Grieco aims to develop tools that can accommodate jurisdiction-specific rules and ensure adherence to legal procedures.

In summary, the European Commission is committed to developing and utilising AI-powered tools for investigating consumer legislation breaches. The Internet Investigation Laboratory (eLab) demonstrates this dedication by employing AI technology to aid in comprehensive evaluations of companies and practices. Behavioural experiments are used to assess the impact of commercial practices on consumers. Proactive measures are being taken to investigate and mitigate risks associated with AI-based services. Angelo Grieco is actively working to enhance the use of AI in investigations, with a focus on compliance monitoring and efficient case handling. These initiatives reflect a commitment to protecting consumer rights and ensuring effective and ethical investigations.

Session transcript

Martyna Derszniak-Noirjean:
before I will start. It would make a little bit sense that you can see me as well, so let me see. Otherwise, please, the technical assistance, if you could try and help me with this, that would be wonderful. Either way, I will not take more time with my technical issues. Welcome everybody, and it’s really great to be here for the third time at the Internet Governance Forum, so we are really happy that also this year we can alert the forum to consumer protection issues, and that this year as well we have wonderful panelists with us, so welcome everybody and thanks for giving us this opportunity. I will start by saying something you have heard many times already, and one of the most heard things these days, which is that AI has been changing our lives, and I’m pretty sure that you guys are all tired of hearing this, but even though we’ve heard it so many times, it doesn’t make it any less important, so we need to discuss and we need to converge around this issue, and this is why we have organized this panel, and now the question is why is it important to discuss AI in the context of consumer protection? For us, consumer protection authorities and many panelists who also have to do with consumer protection, the issue basically is that firms and entrepreneurs have an economic and technological advantage over consumers, which means that they can use AI to have greater possibilities of doing unfair practices against consumers.
This is one option, of course, AI can also be used for good purposes, and our task as consumer protection enforcers and all stakeholders that are active in the area of consumer protection, our task is to understand to what extent we should curb AI used by companies, and to what extent we should try and allow it to flourish to actually assist consumers, for example, by having a better choice of products, so this is a big challenge for us, consumer protection stakeholders, and we need discussions, we need to speak, we need to engage with this topic, this is why we think that it’s very important to continue discussing it, even though we are already discussing it a lot, and as an emerging topic, we really need to have a wider conversation about it, and IGF is a great forum for that. We have also internal stakeholders around here, people who are concerned not only with consumer protection as we are, but also with other things, who are much more knowledgeable about different technologies, and how they are being used online, so it’s great, and we hope that we’ll have a wider discussion here, and I hope, and I’m pretty sure Piotr will also be able to follow up on this with many of the participants that are fortunate enough to be there in person, and one final thing of introduction is, besides trying to understand the impact of AI on consumers, and the scope of intervention by authorities in the context of AI consumer protection, there is one more thing that we have been exploring as a consumer protection agency, which is the use of AI for our own purposes in investigating unfair practices, so while we can see and monitor the use of AI by companies, it is also a great tool for us to increase the efficiency and effectiveness of our own actions, and our own activities, so we are also doing this, we are conducting two projects where we develop AI tools, and we are also aware that there are many other such projects all over the globe, our
colleagues, our panelists will tell you more about that, so Piotr, that would be all from my side, and I wish you a great panel, I’m pretty sure you’ll be able now to present the panelists, thanks very much.

Piotr Adamczewski:
Thank you Martina, I totally agree that we have to discuss the problem of using AI, I have to also admit that last week we had a panel among the other consumer protection agencies at the ICPEN conference, where we gather together with the institutions which have the same aim, namely protection of consumers in each jurisdiction, and then we focus on what we have in our pockets, in our desks, what kind of tools we are using, and we concentrated more on the risks which are connected to the use of AI, and today I think that the panel at the Internet Governance Forum, as Martina mentioned, we are here for the third time already at this summit, is the better place to discuss the possibilities, the future, how we can develop further. I strongly believe that artificial intelligence will be used by many agencies, it’s already actually in usage, it’s already in operation by many agencies, but it will be developing pretty fast, and definitely it is needed for the detection of the traditional violations, but also for the infringements which are new, which are connected to the new world of the digital services. So today for that reason, to that aim, we invited our prominent guests, Professor Christine Riefa from the University of Reading, who made a thorough survey on the usage of AI by the consumer protection agencies, representatives of international organizations, which is the OECD, which deals with the shaping of consumer policy worldwide, with Melanie MacNeil on board with us, and the representative of DG Just, Angelo Grieco, and other people from the enforcement authorities, from the ACCC, Sally Foskett, and myself as well. And last but not least, we have Kevin from the Tony Blair Institute for Global Change to talk with us from the perspective of the consultancy world. So the structure of the panel would look like two rounds, so first we will present the tools we already have, and then in the second round we will ask our guests about the future, about the possible developments.
So first I would like to turn to Christine and ask her about the outcomes of her survey. Christine, the floor is yours. Great, thank you so much. I’m trying to quickly

Christine Riefa:
share my slides to help with following up what I’m trying to describe. I think you should all see them now. So thank you very much for having me, and it’s a pleasure to join you only virtually, but still be part of this very amazing conference. I will give you a tiny little bit of background before, because I’m aware that perhaps some people joining this panel are not consumer specialists. So consumer protection really is a world with several ways of ensuring that the rights of consumers are actually respected and enforced. It’s a fairly fast developing area of law, but it has a fairly unequal spread and level of maturity across the world, and that does cause some problems in the enforcement of consumer rights. We also rely in most countries of the world that have consumer law on the spread of private and public enforcement, and AI as the subject of today can actually assist on both sides of the enforcement conundrum. We also have a number of consumer associations and other representative organizations that can assist consumers with their rights, but as well can assist public enforcement and agencies. A very good example is in the UK, where the consumer association is actually able to ask the regulator and the enforcers to take some actions. So that’s variable across the world what they can do, but they normally are a very important element of the equation as well. We’ve seen in previous years pretty much around the world a shrinking of court access for consumers as well, and an increase in ADR and ODR, as well as realization I think that public enforcement through agencies is really an important aspect of the mix on how to protect consumers. Hence the session today is obviously extremely important to ensuring we can further the rights of consumers and develop our markets in a healthy way.
So the project I’ve been involved with is called EnfTech, which stands for enforcement technology, and it really looked at the tools for the here and now that enforcement agencies were using in their daily work, and it also reflected a little bit about the future. I’ll keep those comments for the second round. What we found is that EnfTech is actually a broader use of technology than just AI, so it would include anything that is perhaps lower tech, if you wish, than artificial intelligence might be, but can be just as effective. And we wanted to look at ways agencies could ensure markets worked optimally, and also realize that not using technology in the enforcement mix might lead to a potential obsolescence of consumer protection agencies, and there was therefore an essential need to respond to technological changes. We surveyed about 40 different practices that we came across, not simply in consumer protection, but in more supervisory agencies as well, and we ended up selecting 23 examples of EnfTech that are specific to consumer protection, spanning a range of authorities, 14, seven of them were general consumer protection agencies, spanning five continents, and four generations of technologies. It is only a snapshot, it’s obviously extremely difficult at this stage to work on public information about use of technology in agencies. There’s also an element of development, and there are also reasons why agencies may not want to very publicly announce that they’re using particular tools. The survey, however, has got some really interesting findings. We, in the report, explain how a technological approach will be essential, and how to start rolling one out. We give a picture of how agencies that are doing it, are doing it, and have structured themselves in order to be able to roll out EnfTech tools. We also mapped out the generations of technologies, because actually not all agencies will start from the same starting point.
Some agencies might be very new, have absolutely no data to feed into AI, others might be more established, but not have structured data in the way that might be useful. We also found that with very little technology, you can actually do a lot in consumer enforcement, and therefore our report recognizes this. We provide a list of use cases, so for anyone interested in what’s happening on the ground, then that’s a very good starting point to find out pretty much all the examples of things that are currently working. We also reflected on some practices that we find slightly outside of the remit of consumer protection, but that could be easily rolled into consumer protection. Of course, we discuss challenges. Our key findings, and I think they are quite useful for the purpose of today’s discussion, where we’re going to hear loads of different examples, is that actually AI obviously is a misnomer. We’re talking to a very erudite audience here, no need to dwell on this, but in consumer protection at the moment, AI is really not the panacea, and we think that even in the future, it will not solve all the problems. It has, however, got huge potential, and we found that about 40 to 45% of the consumer authorities we surveyed are using AI tools. Now, that still means that there are 60% of other tools that are still EnfTech tools that are being used, and they are not AI. That’s quite a significant finding because just in 2020, at the start of discussions about technology and consumer enforcement, very few reports or projects actually considered AI as being viable. They were looking at other technical solutions.
What we found as well is that the agencies that have got a dual remit, so that are not just dealing with consumer protection, have fared a little bit better in their rollout of tools, and that might be because they are able to capitalise on experience in competition law, for example, but also because they may have a bigger structure, and that obviously facilitates a lot of the rollout of technology. If we compare consumer law enforcement to other disciplines, we find that we are behind the curve, but as Piotr mentioned, are catching up very quickly. I’ll move on from all of this. The final thing for me to point out at this stage before we hear from the examples is really that AI as a solution in consumer enforcement needs to be built in with a framework and a strategy that will take into account all the potential problems that might come with it. One of the big dangers that we have identified is that if there is a lot of staffing, resources, money going into developing AI as a solution for consumer protection enforcement, then it would be really a shame to fall at one big hurdle that will come the way of the enforcement agency, and that is a legal challenge from the companies being investigated. We found loads of potential issues and things to strategise about, but the legal challenges that might come from the use of AI in consumer enforcement is one that has been clearly understudied and we didn’t find very much on, so on that general overview I leave you and pass on the floor to the next panellist. Thank you, Christine. It’s still a lot of work,

Piotr Adamczewski:
but it looks promising, definitely. Now, I would like to give the floor to Melanie to see how the OECD sees the opportunities for consumer protection regarding the usage of AI.

Melanie MacNeil:
Hi, everyone. Good morning, good afternoon, depending on where you are. If you just bear with me for one moment, I will share my screen very quickly. All right, so I’m assuming everyone can see that. I’m very excited to be here today, and the previous presentation was very helpful as well in setting this up. So I’m speaking to you today from the Organisation for Economic Co-operation and Development or the OECD, where I work in the consumer policy team. So the OECD has 38 member countries, and we aim to create better policies for better lives through a lot of best practice work and working with our members to see what they’re doing to address particular issues. So today I’m really excited to talk to you about artificial intelligence and how it can help empower consumers, and how it can be of great assistance to consumer law regulators as well. So I’ll also be sharing some information with you about the OECD’s work in the AI space more generally. So we’ve just touched on it, but the first thing I’ll talk to you about is using artificial intelligence to detect and deter consumer problems online. As a previous consumer law investigator, this is a topic very close to my heart, we’re seeing a lot of AI being used by consumer law regulators as a tool to increase efficiency in finding and addressing potential breaches of consumer law. It’s particularly useful in investigations, where work that was previously manual and quite slow, like document review, can now be completed a lot more quickly. There is still and always will be a significant and essential role for investigators, but AI tools can support the preliminary assessments of investigations and highlight conduct that might be a breach of consumer law. Robust investigative principles are always needed with any investigation, and the addition of AI to our toolkits doesn’t change that. But I thought it would be helpful to give you some practical examples of some great tools that we’ve seen our members using. 
So the Office of the Competition and Consumer Protection in Poland uses web crawling technology with AI to analyse consumer contracts looking for unfair contract terms. So the technology searches over the fine print of terms and conditions of things like subscription contracts to ensure that there’s no unfair clauses, such as inability to cancel a contract. So this work, previously in most member countries, was undertaken manually with groups of investigators reading hundreds of clauses in hundreds of contracts searching for potentially unfair terms. But the AI tool really adds some efficiency to this, and regulators can then take enforcement or other action to have the terms removed from the consumer contract, preventing consumers from being caught in subscription traps. So that’s an example of a tool that really frees up a lot of investigator hours for other things and enables investigators to really focus on the key parts of investigations that do need human decision making and strategic thinking. So another issue faced by consumers online is that of fake reviews. You’ve probably all seen one at some point. Reviews can play a huge part in our purchasing decisions, but to give you an example, last year, Amazon reported 23,000 different social media groups with millions of followers that existed purely to facilitate fake reviews. This is obviously too much for individual consumers to deal with and for regulators, but machine learning models can analyze data points and help to detect fraudulent behavior. Fake reviews are often classed as a form of misleading or deceptive conduct under consumer law, and while regulators are using AI to detect fake reviews, private companies are also investing in this space as well. So this is a good example of how businesses and regulators are working together to enable consumers to make better choices.
The OECD, we’re quite excited about some work that we’re hoping to do with ICPEN in the near future with member countries looking at the use of artificial intelligence to detect and deter consumer problems online that was referred to earlier. There’s some really great efficiencies to be found, which ultimately mean that regulators can detect and deter more instances of consumer issues. So the increased efficiency can deter businesses from engaging in this conduct. And similarly to criminal behavior, if people know they’re more likely to be caught, they’re less likely to engage in the conduct. So we’re very excited about the future work with organizations like ICPEN to share some of this best practice so that other regulators can benefit as well. So another space that we’re seeing some great work from our members is the impact of AI on consumer product safety. So AI is being used to detect and address product safety issues by regulators too. So for example, Korea’s Consumer Injury Surveillance System searches for products online that have been the subject of a product safety recall. So where something has been deemed unsafe and withdrawn from sale, there are cases where nevertheless businesses continue to sell those items. So Korea’s Consumer Injury Surveillance System uses AI to search online for text and images to detect cases where those products might still be being sold. Using AI in this context can mean that the unsafe products are found faster, so regulators can take action more quickly and consumer injuries are ultimately reduced. So as well as detecting issues like that, Korea is also using AI to assist consumers who might be looking for information or wanting to report an unsafe product. So Korea has an excellent chat bot that they use on their website that consumers can use to report injuries for products. So that if they’re harmed by a product, they can report it to the authorities.
The chat bot makes it very simple for them to lodge the information rather than asking them to fill out a detailed form. It’s more efficient. And then they use coding of the information provided by the consumers with machine learning to enable more efficient analysis of the reporting. So when it’s easy to report an issue, consumers are more likely to do it and better data enables regulators to better understand the issues and to address them as well. Similarly, AI technology and software in particular with products can enable product safety issues to be diagnosed early. So some of the more advanced home appliances, for example, that have software built into them that you might be able to control from your phone, they’re very useful as well in terms of alerting consumers to potential product safety issues. They can be notified that a device might need servicing, that repairs are needed, or that a remote software update might be required. So there’s already been instances with smart devices such as smoke alarms that have been remotely repaired and a product safety issue addressed through a software update. This type of technology in that circumstance can potentially be lifesaving. So the increasing prevalence of AI in consumer goods can bring benefits and the gaming industry has always been pretty quick on the uptake with technology. We’re investing a lot in AI to change the way that people experience games, but as the use of digital tech intensifies, the way that people communicate and behave online is also changing. So this is an issue where there are new and emerging risks and they’re not particularly well understood in all spaces, particularly in the context of mental health. So one of the major projects that we’ll be undertaking at the OECD shortly is looking at the impact of consumer health and safety, sorry, the impact on consumer health and safety of digital technologies in consumer products. 
It’ll be focusing on AI-connected products and immersive reality and the impact on consumers’ health, including mental health. So the project aims to identify current gaps in market surveillance and the way that regulators might monitor for product safety issues and to identify future actions to better equip authorities to deal with some of the new risks that are posed by AI and the new technology relating to consumer products. We’re aiming to provide practical guidance for industry and regulatory authorities to better understand and address product safety risks. And we’re going to have a real focus on consideration of those risks in safety by design. So that’s a new project to keep an eye out for. Another space that we have seen AI provide some great benefits in empowering consumers is in the digital and green transition. So many consumers want to make greener choices, but sometimes they don’t due to information overload or a lack of trust in labelling or other behavioural science issues that can affect all of us. So research has shown that nudges or design interventions can encourage consumers to make greener choices and can encourage people to behave in a specific direction and overcome some of those behavioural issues that might otherwise prevent them from making a green choice. So AI provides an excellent opportunity to nudge consumers towards greener choices. So, for example, in Germany, like in many countries, heating bills are often not prepared in an understandable way and they’re inconsistent between providers. Each metering service can use different formats, different terminology. And as a result, consumers find it really difficult to compare which company to choose. They find it really hard to pick up errors in their bills. They end up paying more for energy and services and incentives to save energy are difficult to identify. 
So this can cost consumers a lot of money, but it also causes a lot of unnecessary emissions because it’s so difficult for people to make a greener choice that they essentially give up. I think it’s something that we’ve probably all been guilty of at some point when you look at various contracts for services. So to help consumers manage their energy consumption, the German government has funded a digital tool which uses AI. The household can upload their energy bill and it’s evaluated using AI to provide a series of facts about how they can reduce their energy consumption and save on heating bills. So the tool is an example of a nudge that can help a consumer to make a better energy choice and help them to overcome the barrier of it being too complicated to make that choice. Similarly, consumers experience information overload with a lot of the green labels and badges and schemes that you might see on items in the supermarket. And the other issue is that it can be difficult to compare these and consumers have no way to verify what’s actually happening in a company where they put a green marking on their packaging. So, for example, last year in Australia, they did an online sweep and found that 57% of the claims made in a sample were misleading when it came to their green credentials. So there are some parts of the world that’s using regulation to really strictly control the way that such markings and accreditation schemes can be used. But where that’s not occurring or to substitute that, AI can also be used to assist consumers to make the green choice by helping to break through the unmanageable amount of information that’s out there. So we’re seeing new apps being developed to enable shoppers to scan a barcode of an item in a supermarket and see its sustainability or ethical rating compared to other products. Where a product scores poorly, the app can suggest an alternative. 
These are quite limited at the moment, but we’re expecting that in the future, AI will be used to expand the list of products that are considered and to recommend products that align more with users’ environmental preferences. So the OECD is currently undertaking a project looking at fostering consumer engagement in the green transition and addressing some of these barriers to sustainable consumption and looking at the opportunities that digital technologies use to promote greener consumption patterns. So this project is also going to involve empirical work to better understand consumer behaviours and attitudes towards green consumption. Just taking through as well a couple of the tools that have been developed by the OECD that can be quite relevant. So one of the things that we’re working on at the moment is the OECD AI Incident Monitor. There’s been a big increase in reporting of AI risks and incidents in 2023 in particular, the rise has just been astronomical. So the OECD AI Expert Group is looking at this and they’re using natural language processing to develop the AI Incident Monitor. So the monitor aims to develop a global and common framework for reporting of AI incidents that could be compatible with current and future regulation. So one of the issues that regulators face in addressing almost any problem is consistency of terminology and understanding. So part of this project is looking at developing a global common framework to understand those things. And then the AI Incident Monitor tracks AI incidents globally and in real time. So it’s designed to build an evidence base to inform incident definition and reporting, and particularly to assist regulators with developing AI risk assessments, doing foresight work and making regulatory choices. So the Incident Monitor collected hundreds of news articles manually, which was then used to illustrate trends and to help train the automated system. 
And you can see on that slide there where the project is up to. They’re using natural language processing with that model. And now they’re getting into the space of categorising the incidents, looking at affected industry and stakeholders. And it’s also going to be quite useful, the product safety project that we’re doing, looking at potential health and mental health risks from AI and new technology. We’ll also be looking at including a product safety angle to the incident monitoring tool as well for AI. So I realise that’s been fairly quick, but they’re the projects that we’re doing at the moment and the work that our members are doing, looking at AI to assist regulators. And there’s also the OECD AI Policy Observatory that I just wanted to share with everyone, which aims for policies, data and analysis for trustworthy artificial intelligence. The Policy Observatory combines resources from across the OECD and its partners from a large range of stakeholder groups. It facilitates dialogue and provides multidisciplinary evidence based policy analysis and data on AI’s areas of impact. So the OECD AI Policy Observatory website is very large. There’s a lot of really helpful information on there. We’ve got articles from stakeholders as well as reports from the OECD. So chances are, if you’re working in the AI space, you will find useful information there. I’ve also just included a link to the consumer policy page. And then we’ve also got the OECD AI principles to promote use of AI that’s innovative, trustworthy, respects human rights and democratic values. So there’s a snippet of the information there. But we are setting up policies that we think will assist members for AI more generally, as well as in specific spaces like empowering consumers that we’ve been talking about today. So that’s all from me. Thanks for the opportunity to have a chat with you all about our work.

Piotr Adamczewski:
Thank you, Melanie. As a current enforcer, I totally share this idea that it’s about efficiency, about enhancing us, above all at the first stage of the investigation, where we are working more on detection of violations; later on, we definitely need to preserve all the traders’ rights of defence. So it’s helping us a lot, especially in the first phase of our work. Now I would like to turn to Angelo and ask what the newest tools are in the possession of the European Commission, with the eLab established in DG Justice. Angelo, the floor is yours.

Angelo Grieco:
Thank you very much. I’m just trying to… I don’t know whether you see my screen, but I’ll try. Can you see it? Good afternoon to all of you. I would like to thank you, Piotr, and your Polish colleagues for moderating this panel and inviting us as the European Commission to join. We are very honoured, although we couldn’t join physically, so I will have to do this remotely. I’m the Deputy Head of the unit in the Commission which is responsible for enforcement of consumer legislation, and in this team we do two main things: we coordinate enforcement activities of the member states in cases of Union-wide relevance, and we build capacity tools that the national authorities can use to cooperate and investigate, including and especially, I would say, on digital markets. In this presentation, I will get a little bit more into the specifics of those tools, although there’s little time allowed, so I will try to go through them quite rapidly. As you can see from the slide, I will focus on three main strands of work that we are following. The first two concern the use of AI-powered tools to investigate breaches of consumer legislation, and the first of these is our Internet Investigation Laboratory. The second is behavioural experiments that we use to test the impact of market practices on consumers. And then, as the third and last element, I will talk about a number of enforcement challenges relating to marketplaces and platforms which offer AI services. So the Internet Investigation Laboratory, called the eLab, is an IT service powered by artificial intelligence that the Commission has put at the disposal and exclusive use of the EU national authorities of the Consumer Protection Cooperation Network that we coordinate as the Commission.
The need for such a tool, as has been said by speakers here already, comes from the inability of enforcement agencies to face enforcement challenges on digital markets — in particular monitoring — with just human intervention. In a nutshell: too much to monitor with little resources, and an increased need for rapid investigations which cover larger portions of market sectors. This tool is a virtual environment which we launched in 2022 and which can be accessed remotely from anywhere in the EU, which literally means that investigators can use this tool from their own IT facilities, sitting in their offices in the Member States. It can be used for a number of investigation activities, especially to conduct large-scale reviews of companies and practices, using a mix of web crawlers, AI-powered tools, algorithms and analytics, so that they can analyse really vast amounts of data on the internet to identify indicators of specific infringements. The parameters can be set to be investigation-specific, so that AI-powered algorithms can look for different types of elements and different indicators of breaches, and I will give a quick example of that later. The eLab offers various tools and functionalities — let me just turn the slide. So we have a VPN, so that investigators can use a hidden identity; we have specific software that allows investigators to collect evidence as they go while investigating and transfer it to their own network, including time certification of when that evidence was collected. Then there are comprehensive analytic tools to find out information about internet domains and companies. These are open-source tools, so they can search and combine different types of sources of information across different databases and geographical areas.
They are very useful, for example, to find out who is behind a website or a webshop, but also to flag cybersecurity threats and risk indicators of the likelihood that a website is a scam or is run by a fraudster. Now, if we look at two examples of how we use these tools: the first one is the price reduction tool which we used in the Black Friday sweep we did last year, where we used the tool to verify whether discounts presented by online retailers on Black Friday were genuine. The result was that discounts were misleading for almost 2,000 products and on 43% of the websites that we followed; and to understand whether discounts were genuine, we had to monitor 16,000 products for at least a month preceding the Black Friday sales. Another example is what we call FRED, the fake reviews detector. The machine in this case scrapes and analyses text to try to detect whether a review is human- or computer-generated, and then beyond that, even in the case of human-generated reviews, based on the type of language and terminology used, it indicates a likelihood score for whether the review is genuine or fake — sponsored, for instance. The machine showed 85 to 93% accuracy in this case, so this is just to give you two examples. Then the other strand of activity that we are running at the moment — and we literally inaugurated this in the past month — is the use of behavioural experiments to test the impact of commercial practices on consumers. We do this in the context of coordinated enforcement actions of the CPC network that we coordinate against major business players, to test whether the commitments proposed by these companies to remedy specific problems are actually going to solve the problem.
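The kind of likelihood scoring FRED performs can be sketched, at its very simplest, as a feature-based scorer over stylistic signals. FRED’s actual model is far more sophisticated (it reports 85–93% accuracy); the features, weights and thresholds below are invented purely for illustration.

```python
def fake_review_score(review: str) -> float:
    """Return a 0..1 suspicion score for a review, based on crude
    stylistic signals. All weights here are illustrative assumptions."""
    words = review.split()
    if not words:
        return 0.0
    score = 0.0
    superlatives = {"best", "amazing", "perfect", "incredible", "awesome"}
    # Heavy superlative use is one crude signal of an inauthentic review.
    superlative_ratio = sum(w.lower().strip("!.,") in superlatives for w in words) / len(words)
    score += min(superlative_ratio * 2.0, 0.5)
    # Excessive exclamation marks are another.
    score += min(review.count("!") * 0.1, 0.3)
    # Very short reviews carry little verifiable detail.
    if len(words) < 5:
        score += 0.2
    return round(min(score, 1.0), 2)
```

A production detector would learn such weights from labelled data rather than hard-coding them, but the output shape — a graded likelihood score rather than a yes/no verdict — matches what the speaker describes.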
We also use these behavioural studies to test, in general, the impact of specific commercial practices which could potentially constitute dark patterns, to prepare the grounds for investigations or other types of measures. The first strand of work in this area we used, for example, to test the labelling of commercial content in videos broadcast by a very well-known platform — whether the indication and qualification of commercial content is good enough, is prominent enough, for consumers to understand it. That’s very important, I would say, for the type of platforms we are confronted with every day on the internet. In the second, we tested, for example, the impact of cookies and choices related to targeted advertising. What is interesting in these experiments is that they are calibrated based on the needs of each specific case, and we use large sample groups to produce credible, reliable, scientific results — a higher chance of identifying statistically significant differences — and we use AI-powered tools to do this, including analytics, but also eye-tracking technology connected to analytics. That we did, for example, to test the impact of advertising on children and minors, and we tested them in the lab. Now, the last thing I wanted to address here rapidly is an area which is drawing a lot of attention at enforcement level, mentioned also by previous speakers, not only in the EU but also in other jurisdictions: the offering of AI-based services to consumers, such as AI-powered language models, recently developed or recently becoming popular. We all know these models by now; they can generate human-like text responses to a given prompt.
Such responses continue to improve based on massive amounts of text data from the internet and what is called reinforcement learning from human feedback, and they are not offered only standalone: they have been integrated into other services, like platforms, search engines and marketplaces. These practices are being investigated in the EU and other jurisdictions, and I cannot say much about ongoing investigations. I can, however, flag a few elements on which the attention of stakeholders is focusing at the moment — what are the issues, what are the problems. One main area of concern is transparency of the business model: what are really the characteristics, what is really offered, what is really the service, how is it remunerated, how is the business model financed, what are the differences between the so-called free version and the paid-for version, and how does this relate to the use of consumers’ personal data for commercial purposes, for example to send targeted advertising.
Then, of course, we are very focused at the moment on the risks of those models. We have seen that there is often manipulative or misleading content, there are biases and errors, and one big concern is whether these platforms can adequately mitigate those risks. Then you have the problem of harm to specific categories of consumers which are weaker — think about minors, but not only — and associated with that, of course, mental health and possible addiction, which has already been experienced. The difficulty here is that, from a very general standpoint, we have a new way of applying consumer legislation, and we need new reference points to apply consumer law to these business models, where the technological part is really still a little bit obscure: there is a technological and scientific gap between enforcement and the companies who run these platforms. Then there is the fact that these elements are often integrated into other business models, and that we are at a crossroads here between protection of the economic interests of consumers, data protection and privacy, and the protection of health and safety. So this adds quite a bit of complexity to the work of the enforcers, who are nevertheless looking into the matter. Enforcement may not be enough and, as we know, it may need to be complemented by regulatory intervention, and we will see about that. That’s all from me at this stage. Thank you.

Piotr Adamczewski:
Thank you, Angelo. I have to admit that it’s a really fascinating idea that the European Commission will share the software it is preparing. The alternative — creating our own department with a lot of people — would be very costly to manage for each single consumer protection agency. We can also work on joint projects, as we did in the past, and we are still engaged in that kind of software development, but of course the idea of simply approaching the Commission and using already-prepared software is great. So now it’s my turn to give some insights about what we have actually done in the past and what we are working on right now. I will talk a little bit about ARBUS, the system which we made for the detection of unfair clauses, but I will focus on the main aspects, so as not to take too much time — we need to speed up a little — and then I will share with you some ideas about the ongoing project on dark patterns and on preparing a white paper for enforcers.
So, going back to 2020, when we actually figured out that we could use artificial intelligence for enforcement actions: it was not so obvious at that time — this was before ChatGPT — and it was not so clear that natural language processing could really do such amazing things, but we thought that we had to try. We focused mostly on our efficiency and we checked three factors to decide which direction we should go. First of all, we considered the databases which were in our possession. Then we also strictly defined our need: what is actually necessary for us to become more efficient, and in which field. And finally, we also kept in view the perspective of the public interest, always having in mind what the public actually needs us to speed up in our work. The result of that was this project on unfair clauses, because we had a huge database for it — almost 10,000 entries of already-established unfair clauses — so we could use them to prepare a proper training database to teach the machine how to detect them properly. Secondly, it addressed our need, because it is hugely time-consuming — quite an easy task for the employees, but still hugely time-consuming — to read all the standard contract terms, understand them and indicate which provisions could be treated as unfair. And finally, there is a really huge public interest, because we have to take care of all the standard contracts and we try to eliminate as much unfairness as possible from them; especially with a fast-growing e-commerce market, this means we have to adjust our enforcement actions and work closely with the sector. There is no option for doing that other than automation of our actions. What about the challenges in the project?
First of all, the database. Like I said, we had huge material for it, but we still had to use a lot of human work to structure it. It’s not so easy: you need to put it in a special format, you need to choose one, and then prepare it in a special way to make the computer understand it. The second problem which we faced at that time was choosing the vendor. We were not able to hire 50 data science experts, so we decided to work with outsourcing, and choosing a proper vendor was very challenging for us. We used a special type of public tendering: preparing a proof of concept first and then releasing the information to the market, showing how the problem could be solved, while at the same time asking the market to prepare other proofs of concept which we could compare in a very objective manner. And only on the basis of the result of this contest did we decide on the producer of the tool. And finally, the implementation of the software into our organization. Again, it’s very challenging for traditional organizations, traditional institutions, to empower themselves with new tools and to help people who have already established a way of working on a specific problem to do it differently, to do it more efficiently in the future; at some point people need to find a good reason for accepting the change. Taking into consideration all the challenges, I have to say that we are already fully operating the system and we have the first good results, but it is still detection — flagging. It definitely helps us in the first phase of the investigation, but after a provision is flagged, we have to do a proper investigation. That’s what we cannot change right now. A few words about our current project, dark patterns. This is again the problem of detection of violations, which are quite widespread right now.
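The flagging step described here — comparing contract provisions against a register of known unfair clauses and marking close matches for human review — might be sketched, at its very simplest, as a token-overlap similarity check. ARBUS itself uses trained NLP models on the ~10,000-entry register; the sample clauses and the threshold below are illustrative assumptions.

```python
def tokens(text: str) -> set[str]:
    """Lowercased word set for a clause."""
    return set(text.lower().split())

# Tiny illustrative stand-in for the register of established unfair clauses.
KNOWN_UNFAIR = [
    "the seller may change the price at any time without notice",
    "the consumer waives all rights to pursue claims in court",
]

def flag_clause(clause: str, threshold: float = 0.4) -> bool:
    """Flag a clause for human review if it closely resembles any known
    unfair clause (Jaccard similarity over word sets; threshold is illustrative)."""
    c = tokens(clause)
    if not c:
        return False
    for known in KNOWN_UNFAIR:
        k = tokens(known)
        if len(c & k) / len(c | k) >= threshold:
            return True
    return False
```

Note that, exactly as the speaker says, this only flags: a positive match still requires a proper human investigation before any finding of unfairness.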
There are some studies which show that a lot of e-commerce companies are involved in dark patterns, which generally means that there are deceptive elements in their interfaces. We are trying to prepare a tool which will allow us to work much faster — not going from one website to another looking for violations, but being much more proactive: not just waiting for signals from harmed consumers, but being able to proactively discover the violations. And here there is another problem, because we have to create the database; we don’t have an already existing database like in the first project. So now we are working on ideas for how we can do that, having in mind the possibility of verifying the construction of websites. The database could also be built on the outcomes of the neuromarketing research which we are going to carry out. All of that shall allow us to build a specific group of factors which can help figure out what is deceiving and what is not, and to feed the machine for proper action in that manner. And last but not least, we are also working on the preparation of a white paper for agencies with the same status as ours. It’s our second project, so we have already faced some problems and were able to solve them, and we have some ideas about transparency and about how we can safely introduce and deploy such software into the work of enforcers. We would like to share all those ideas with colleagues from other jurisdictions, and we’d like to make it public next year. Going further, we also know that the Australian Competition and Consumer Commission is working right now on different projects. Sally, if you can hear us, could you share some more insights about what is going on right now at the ACCC?

Sally Foskett:
Okay, thank you. I’ll just share my slides. I’m not used to using Zoom, I’m afraid, so is someone able to maybe talk me through how to share my screen? Sorry. I think that there is the share button at the bottom. Oh yes, thank you. Okay, I will present like this — hopefully that is readable to everyone. Great. Okay, look, thank you so much for having me attend, thanks to the IGF for hosting this meeting, and to all of you for joining us today. So we’re going to be looking at a few different angles. First, using AI to detect consumer protection issues. Second, understanding AI in consumer protection cases. And third, perhaps a little more tenuously, enabling the development of consumer-centric AI. So first, using AI to detect consumer protection issues. We have a number of projects on foot that are looking at methods of proactive detection, and these broadly fall into two categories. The first category is streamlined web form processing. Every year we receive hundreds of thousands of complaints from consumers about issues they’ve encountered when buying products and services. Many of these complaints are submitted through the ACCC’s website, which has a large field in which users type out the narrative of what has occurred. The issue with this approach is that our analysis of the form can be quite manual, so we’ve been experimenting with using AI to streamline this processing. The techniques that we’ve been experimenting with include entity extraction.
So, using natural language processing to identify parts of speech that refer to particular products — like phone or car or kettle, hot water bottle for instance — and also companies, which we use entity extraction for as well. Another technique that we’ve experimented with is classification: that is, using supervised learning to classify complaints according to the industry that they relate to — agriculture, energy, health, et cetera — or the type of consumer protection issue that they relate to. And then we’ve also more recently been experimenting with predictive analysis to determine how relevant a complaint is likely to be to one of the agency’s enforcement and compliance priorities. I have listed on the slide some examples of our priorities from this year, which include environmental and sustainability claims that might be inaccurate, consumer issues in global and domestic supply chains, and product safety issues impacting infants and young children. Now, the outputs of these models are not yet at a level of reliability that we would be comfortable with before deploying them into production, but it is something that we are actively working on, and it shows a lot of promise. The second category is not about analysing data that we already have; it’s about collecting and analysing new sources of information, and we’ve heard a lot of examples of this today. So, scraping retail sites to identify so-called dark patterns. As others have pointed out, dark patterns, or manipulative design practices, are design choices that lead consumers to making purchasing decisions they might not otherwise have made. Sometimes these choices are so manipulative that we consider them to be misleading in breach of the consumer law. Examples include ‘was/now’ pricing and scarcity claims that are untrue. We’ve also looked at subscription traps and, to a lesser extent, fake reviews as well. The techniques that we use in this space are quite simple actually.
So if a claim like ‘only one left in stock’ is hard-coded into the HTML behind the page, we know we have a problem. A lot of this analysis is actually based on regular expressions — basically looking for strings of text — but we do have an AI component that we use to navigate retail sites as part of the scrapes and to identify which pages are likely to be product pages. Turning to the second lens, understanding AI in consumer protection cases, I thought it might be useful to touch on some of our cases where we have obtained and analysed algorithms used by suppliers in their interactions with consumers. This is a really important thing to be able to do from an enforcement perspective, because as algorithms — and here I’m slipping into saying algorithms instead of AI; as Christine mentioned, AI is a bit of a misnomer — are increasingly used to implement decisions across the economy, regulators must be able to understand and explain what they’re doing. We’ve had a few cases and market inquiries where we’ve needed to do this, and I thought I’d explain a little bit more about what our approach is. And I’m going to speed up as well, given the time. So when we need to understand how an algorithm operates, we’ll typically look at three types of information that we obtain using our statutory information-gathering powers. The first type of information is source code: the code that describes the rules that process the input into the output. We’ve had a few cases where we have obtained source code from firms and worked through it line by line to determine how it operates. It’s a very labour-intensive process, but it’s proven valuable, if not critical, for a few of our cases. The second type of information we sometimes obtain in algorithm cases is input-output data, which is useful because it tells us how the algorithm operated in practice in relation to actual consumers.
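The regular-expression approach described here can be illustrated with a minimal sketch: scan raw page source for scarcity or urgency phrases that appear as literal text in the HTML, rather than being populated from live stock data. The patterns below are illustrative assumptions, not the ACCC’s actual rules.

```python
import re

# Illustrative patterns for hard-coded scarcity and urgency claims.
SCARCITY_PATTERNS = [
    re.compile(r"only\s+\d+\s+left\s+in\s+stock", re.IGNORECASE),
    re.compile(r"hurry[,!]?\s+(sale|offer)\s+ends\s+soon", re.IGNORECASE),
    re.compile(r"\d+\s+people\s+are\s+viewing\s+this", re.IGNORECASE),
]

def find_scarcity_claims(html: str) -> list[str]:
    """Return literal scarcity claims found in raw page source."""
    hits = []
    for pattern in SCARCITY_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(html))
    return hits
```

A hit only shows that the phrase is a fixed string in the page; as the speaker notes, it is the fact that the claim is hard-coded, not dynamically generated from inventory, that makes it suspect.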
It helps us establish not just whether conduct occurred, but also what the harm was: how many consumers were affected, and to what extent. And then finally, the third type of information we obtain is business documentation — emails and reports, et cetera. This is useful because it tells us what the firm was trying to achieve. Often when firms tweak their algorithms, they’ll run experiments on their customer base, so-called A/B testing, and so obtaining documentation about those experiments can shed light on what was intended to be achieved. The last point I’ll make on this slide — and, as mentioned earlier, many other regulators are doing this as well — is that we use predictive coding for document review. That is, we use machine learning to help expedite the review of documentation that we obtain from firms in our investigations. And very lastly, I thought I would briefly touch on a topic that’s a little more future-focused, which is the possible emergence of consumer-centric AI. This is more about empowering consumers in the marketplace, as opposed to empowering consumer protection regulators. The ACCC has a role in implementing the Consumer Data Right, which is an economy-wide reform in Australia that gives consumers more control over their data. It enables them to access and share their data with accredited third parties to identify offers that might suit their needs. Currently, the Australian government is consulting publicly on draft legislation to expand the functionality of the Consumer Data Right to include what’s called action initiation. That will enable accredited parties to handle not just data, but also actions on behalf of consumers, with their consent. So even though this is very early days, perhaps in the future, as a result of initiatives like action initiation in the data right, we might see the emergence of more consumer-centric AI.
So AI that helps consumers navigate information asymmetries and bypass manipulative design practices to access products and services that are most suited to their needs. And I will stop there, thank you.

Piotr Adamczewski:
Thank you very much, Sally. So it looks like a lot is actually happening in this sphere, but there is still the report by the Tony Blair Institute, which indicates there should be some reorganization and some new planning for technological change, especially in the UK. So Kevin, could you give us some recommendations from the report?

Kevin Luca Zandermann:
Yes, thank you. Thank you, Piotr. Thank you everyone for sticking in at this hour, especially here. So our work in this space fundamentally joins two parts. The first one is our work on AI for proactive public services. We do believe that AI has an enormous potential to transform the way we deliver public services, and the big picture, of course, concerns areas such as personalized healthcare and personalized education — in many ways, creating a new paradigm that is tech-enabled, but also institutional, to provide a new way to think about and then actually offer public services. That’s the first component. And the second component is the work that our unit has carried out on consumer protection. Last year we commissioned an important report from a consumer protection expert that Christine knows very well, where we actually looked at consumer protection regulation as a potential framework for internet regulation. So these are the two main components that I’ve tried to join for this panel. I thought it would be useful to offer an overview of the baseline scenario — considering I’m not a regulator, it’s useful to assess where we’re at now. And it seems clear that the main challenges for most regulators around the globe are: the fact that their resources are very limited and outdated rules contribute to a low-enforcement culture, and therefore to legitimization of illegitimate practices; the fact that there is uneven international capacity, which has been reiterated by many other panelists, and very low cross-border enforcement coordination; and finally, the fact that action is reactive and slow, rather than proactive, as firms entrench power.
And on the disruptive-incumbent side, I think the most important one is the fact that incumbents can become so dominant that they offer a very selective interpretation of consumer rights — for example, prioritizing customer service excellence over other forms of safeguards. Martina, if you could move to the next slide. Okay, I can continue. So what we looked at at the Institute is the very important review that the Stanford Center for Legal Informatics has carried out. It’s a very comprehensive survey; in terms of coverage, it almost reaches the level that the OECD would have in its very comprehensive global surveys. This review deals with the adoption of computational antitrust by agencies throughout the globe, and 26 countries responded to the survey. Out of this survey I selected two examples that I think are quite telling about how consumer protection authorities are embracing AI. The first one is Finland. The Finnish Competition and Consumer Authority has carried out quite an interesting exercise using AI as part of their cartel screening process, and there, instead of looking at their past data to build tools for the future, they actually started with a sort of ex-post and reflexive testing of AI: they looked at previous cases and simulated a lot of scenarios.
They looked at previous cases, in particular some that dealt with two substantial Nordic cartels which operated in the asphalt paving market in Finland and Sweden, and they essentially compared the baseline scenario — the real one that happened, where they did not have AI — against the scenario where they could have used AI, and assessed the two different performances. It appeared quite clearly that, utilizing a mix of supervised machine learning and a separate distributional regression test, they could have found out about those cartels in a much quicker way — thank you, Martina — and this has enabled them to build new ex-officio cartel investigation tools. This could constitute a very important deterrent for companies that create cartels, because you effectively have a competition authority with quite an effective ex-officio tool to detect these patterns. And then the other example is probably a little less sophisticated, but again, Christine would know about this very well. In the UK there is no requirement for parties to a merger to notify the Competition and Markets Authority, which is the relevant authority in the UK, of a transaction. So it used to be that the CMA had to very manually monitor news sources to identify these mergers — a tremendous waste of time, especially for a regulator that is already very stretched in terms of resources, both financially and in terms of time. So the authority has recently developed a tool that tracks merger activity in an automatic way using a series of ML techniques.
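One classic statistical screen used in cartel-detection work of this kind is the bid-dispersion test: bids that cluster unusually tightly around each other can be a signal of coordination rather than independent pricing. The Finnish authority’s actual method combined supervised machine learning with a distributional regression test; the sketch below is a much simpler stand-in, and the 2% threshold is purely an illustrative assumption.

```python
import statistics

def coefficient_of_variation(bids: list[float]) -> float:
    """Standard deviation of tender bids relative to their mean."""
    return statistics.pstdev(bids) / statistics.mean(bids)

def flag_tender(bids: list[float], threshold: float = 0.02) -> bool:
    """Flag a tender for human review when bids cluster unusually tightly.
    The 2% threshold is an illustrative assumption, not an official figure."""
    return len(bids) >= 3 and coefficient_of_variation(bids) < threshold
```

As Kevin notes later, a flag from a screen like this detects a pattern, not causality: in the Finnish setup, any such flag feeds a human oversight stage that looks for alternative explanations before conclusions are drawn.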
They’re very similar to the ones that the other panelists have described, so I’m not going to go into too much detail, but it’s a textbook example of, in many ways, the low-hanging fruit of AI as used by consumer protection authorities, particularly in jurisdictions such as the UK, where the notification requirements are less onerous than in others, such as the EU. Then I thought it would be nice to conclude — Martina, again, if you could move to the next slide, I would be grateful — with a series of policy questions that Angelo has touched upon previously. These questions are about the ethics of the algorithm, and in particular — if you think about the Finnish model — the fact that AI is very good at detecting patterns. But we know, for example from the application of AI in health care, that it is not necessarily as good at detecting causality, so it can be quite dangerous to start from an AI-detected pattern and draw conclusions without human oversight. The Finnish authority was very much aware of this, and in fact, as part of their assessment, they have a second stage: if the AI tool — the supervised learning — tells them that, for example, three companies are operating as a cartel, they then have a human oversight stage where they try to find any other possible alternative explanation. And this is very closely related, in the EU, to Article 14 of the AI Act, which is one of its most important articles and deals precisely with human oversight. So for most regulators, I imagine, this is one of the most important challenges.
is going to be to essentially draw this line: where does the automation, the AI-empowered step, begin and end, and where does the human oversight begin, and in what modes? And finally, one of the last questions is the role that large language models can actually play. I found it interesting that, in the survey published by Stanford, out of 26 competition authorities only one, the Greek one, explicitly mentioned an LLM-powered tool that they are using now. I imagine this is not the full picture; I am sure plenty of other consumer authorities have been using LLMs throughout the last year but are probably reluctant to say so, for obvious reasons. At the same time, regulators by default are risk-averse, and these large language models do pose quite important risks, particularly in terms of privacy. For example, one of the competition authorities was trialing an AI-powered bot to deal with whistleblowing, a case where, when you are building a tool like that, the privacy concerns are clearly very important. So the last question is: does the generative capacity of these models actually have anything significant to offer to consumer regulation, or are other, probably more low-hanging-fruit forms of AI instead more suited to the regulatory environment? I think that’s it.
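The two-stage design described here, where the AI may only propose and a human must decide, can be sketched as a simple review queue. This is an illustrative pattern, not any authority's actual system; all class, case, and rationale names below are invented.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Flag:
    case_id: str
    ai_rationale: str
    human_decision: Optional[str] = None  # None until a reviewer acts

class OversightQueue:
    """Two-stage pipeline: the AI proposes, a human disposes.

    Mirrors the structure described for the Finnish authority and
    Article 14 of the EU AI Act: every automated detection waits for
    a reviewer who actively looks for alternative explanations.
    """
    def __init__(self):
        self.pending: List[Flag] = []
        self.decided: List[Flag] = []

    def ai_flag(self, case_id, rationale):
        self.pending.append(Flag(case_id, rationale))

    def human_review(self, decision):
        flag = self.pending.pop(0)
        flag.human_decision = decision
        self.decided.append(flag)
        return flag

    def confirmed_findings(self):
        # Only human-confirmed flags ever become enforcement findings.
        return [f for f in self.decided if f.human_decision == "confirmed"]

q = OversightQueue()
q.ai_flag("cartel-3-firms", "parallel pricing pattern detected")
q.human_review("dismissed: common cost shock explains the pattern")
print(q.confirmed_findings())  # []
```

The design choice is the point: no code path turns an AI flag into a finding without a recorded human decision, which is one plausible way to draw the automation/oversight line the speaker asks about.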

Piotr Adamczewski:
Thank you very much, Kevin. I just need to mention that we are definitely working on setting the line properly between where the AI is working and where we exercise the oversight. We are coming to the end of the session, but very briefly I would like to ask each of the panelists a question about the future, one minute each. Christine, can I start with you?

Christine Riefa:
Great, absolutely. So, one minute; I’ll use three keywords then. I think the future is a lot of homework on classification and normative work: are we all talking about the same thing? What really is AI? What are the different strands? And trying to get the consumer lawyers and the users to actually understand what the technologists are really talking about. Collaboration is the next: I think there is real urgency, and I really welcome what we heard today about ICPEN really trying to gather and galvanize the consumer agencies, because projects in common will probably be a better use of money and able to yield better results. And my last keyword would be to be proactive and completely transform the way consumer law is enforced. If we can move from the stage we are at, where we use AI simply to detect, to a place where we can actually prevent the harm being done to consumers, then that would obviously be a fantastic advancement for the protection of consumers around the world. Thank you. Melanie?

Melanie MacNeil:
Thanks, Christine. Yeah, I think businesses are always going to move quickly: where there is a chance for money to be made, they will do it, and they are unrestricted in many ways compared to regulators, who are often too slow to address the problem. So I think collaboration is the key, and sharing our learnings, so that we can all move quickly to address the issue and have a good future focus on it, really recognizing that we cannot make regulations at anywhere near the pace that technology is advancing. And I think honesty in the collaboration is key: we need to not be afraid to share things we tried that did not work, and explain why they did not work, so that other people can learn from our mistakes as well as our successes. Thank you, Melanie. Angelo?

Angelo Grieco:
Yes, thank you. For us, basically, our priority for next year will be to try to increase the use of AI in investigations. First of all, we would like to do more activities to monitor compliance, like sweeps. We would like to develop the technology to make this tool able also to sweep and monitor images, videos and sounds, so basically to really be fit for what it needs to monitor in the digital reality, and then to cover different types of infringement indicators. One of our focuses will be scams and counterfeiting, but on the misleading advertising side, for example, we would like to use it for a number of breaches, such as the lack of disclosure of a material connection between influencers and traders. Then what we would also like to do, and that is what you mentioned earlier, Piotr, is to improve the case handling side: to make the tool even easier for investigators to use the evidence at national level. As we know, the rules concerning the gathering of evidence are very national, jurisdiction-specific; a screenshot may be enough in one country but not in another. So we would like the tool also to help gather, as much as possible, the evidence in the format which is required. On behavioral experiments, we are also planning to do seven more studies by the end of next year, basically one every 10 weeks, and to continue. Thank you very much. And Sally?

Sally Foskett:
Yes, thanks. So a priority for us in the near future is actually going back to basics and thinking about the sources of data we have available. We have been giving thought to making better use of data collected by other government departments, as well as data we could potentially obtain from data brokers and other parties, even hospitals for instance, and also data we can collect from consumers themselves, for example making better use of social media to detect issues.

Kevin Luca Zandermann:
Thank you, Sally, and a last word from Kevin. So I think for me, essentially, as I said before, I would recommend that regulators actually have a sort of retrospective dialectic with AI. To address the questions about human oversight, where does the automation start and end, and where does the human oversight start, they should look at past cases that they know very well and utilize tools such as the ones the Finnish authority used, to test the potential but also the limitations of these models. I think the best way to do it is this very continuous process of engaging with cases that you already know very well. You may find that the AI detected patterns or things that you did not notice, or perhaps that some of the patterns it detected were not particularly consequential for the enforcement outcome. I know regulators are always understaffed and have to deal with limited resources, but I think dedicating some time to these types of retrospective exercises to develop ex-officio tools can be extremely useful, especially in realities like the EU, where we will have to deal with a very significant piece of legislation on AI whose details, particularly on human oversight, are not necessarily fully clear. So inevitably this dialectic process will have to happen, to understand what is the right model to operate.

Piotr Adamczewski:
Yes, thank you very much. I have definitely made my notes, and we will have a lot of work to do in the near future: a lot of things to classify, a lot of meetings, collaboration, and definitely the outcome will be proactive. I strongly believe in the work we are doing. Now I would like to close the panel, thank all the panelists for the great discussions, and of course thank the organizers for enabling us to have this discussion, and for letting us run a little late with the last session. Thank you very much.

Angelo Grieco

Speech speed

150 words per minute

Speech length

2209 words

Speech time

881 secs

Christine Riefa

Speech speed

146 words per minute

Speech length

1487 words

Speech time

613 secs

Kevin Luca Zandermann

Speech speed

174 words per minute

Speech length

1931 words

Speech time

667 secs

Martyna Derszniak-Noirjean

Speech speed

160 words per minute

Speech length

700 words

Speech time

262 secs

Melanie MacNeil

Speech speed

161 words per minute

Speech length

2891 words

Speech time

1077 secs

Piotr Adamczewski

Speech speed

142 words per minute

Speech length

2102 words

Speech time

886 secs

Sally Foskett

Speech speed

174 words per minute

Speech length

1666 words

Speech time

575 secs

AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Sarim Aziz

In the discussion, multiple speakers addressed the role of AI in cybersecurity, emphasizing that AI offers more opportunities for cybersecurity and protection rather than threats. AI has proven effective in removing fake accounts and detecting inauthentic behavior, making it a valuable tool for safeguarding users online. One speaker stressed the importance of focusing on identifying bad behavior rather than content, noting that fake accounts were detected based on their inauthentic behavior, regardless of the content they shared.

The discussion also highlighted the significance of open innovation and collaboration in cybersecurity. Speakers emphasized that an open approach and collaboration among experts can enhance cybersecurity measures. By keeping AI accessible to experts, the potential for misuse can be mitigated. Additionally, policymakers were urged to incentivize open innovation and create safe environments for testing AI technologies.

The potential of AI in preventing harms was underscored, with the “StopNCII.org” initiative serving as an example of using AI to block non-consensual intimate imagery across platforms and services. The discussion also emphasized the importance of inclusivity in technology, with frameworks led by Japan, the OECD, and the White House focusing on inclusivity, fairness, and eliminating bias in AI development.
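The StopNCII.org approach summarized above can be sketched at toy scale. The real service computes perceptual hashes (such as PDQ) on the user's own device, so matches survive re-encoding and only hashes, never images, are shared with platforms; the SHA-256 stand-in below matches only byte-identical files, and all class names and byte strings are invented for illustration.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Hash an image locally; only the hash ever leaves the device.

    Real systems use *perceptual* hashes that tolerate re-encoding;
    SHA-256 here is a simplified, exact-match stand-in.
    """
    return hashlib.sha256(image_bytes).hexdigest()

class SharedHashList:
    """Cross-platform blocklist: platforms compare uploads against
    hashes submitted by victims, never against the images themselves."""
    def __init__(self):
        self._hashes = set()

    def submit(self, image_bytes):          # runs on the victim's device
        self._hashes.add(fingerprint(image_bytes))

    def should_block(self, upload_bytes):   # runs on each platform
        return fingerprint(upload_bytes) in self._hashes

registry = SharedHashList()
registry.submit(b"\x89PNG...private-image...")        # invented bytes
print(registry.should_block(b"\x89PNG...private-image..."))  # True
print(registry.should_block(b"\x89PNG...other-image..."))    # False
```

The privacy property is the design point: the shared registry holds fingerprints, so cross-platform blocking never requires centralizing the imagery itself.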

Speakers expressed support for open innovation and the sharing of AI models. Meta’s release of the open-source AI model “Llama 2” was highlighted, enabling researchers and developers worldwide to use and contribute to its improvement. The model was also submitted for vulnerability evaluation at DEF CON, a cybersecurity conference.

The role of AI in content moderation on online platforms was discussed, recognizing that human capacity alone is insufficient to manage the vast amount of content generated. AI can assist in these areas, where human resources fall short.

Furthermore, the discussion emphasized the importance of multistakeholder collaboration in managing AI-related harms, such as child safety and counterterrorism efforts. Public-private partnerships were considered crucial in effectively addressing these challenges.

The potential benefits of open-source AI models for developing countries were explored. It was suggested that these models present immediate opportunities for developing countries, enabling local researchers and developers to leverage them for their specific needs.

Lastly, the need for technical standards to handle AI content was acknowledged. The discussion proposed implementing watermarking for audiovisual content as a potential standard, with consensus among stakeholders.
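The watermarking standard proposed above can be illustrated with the simplest possible scheme, least-significant-bit embedding. Production proposals for audiovisual provenance (for example C2PA metadata or statistical watermarks in generated media) are far more robust than this; everything below, including the sample values and the tag, is invented purely to show the principle of carrying an invisible provenance signal inside sample data.

```python
def embed_watermark(samples, tag_bits):
    """Write each tag bit into the least significant bit of a sample.

    Changing only the LSB perturbs each sample by at most 1, so the
    carrier (e.g. audio) is perceptually unchanged in this toy model.
    """
    return [(s & ~1) | b for s, b in zip(samples, tag_bits)]

def extract_watermark(samples, n_bits):
    """Read the tag back out of the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]

# Invented 8-bit audio samples and a 4-bit provenance tag.
audio = [200, 143, 90, 77, 61]
tag = [1, 0, 1, 1]
marked = embed_watermark(audio, tag) + audio[len(tag):]
print(extract_watermark(marked, 4))  # [1, 0, 1, 1]
```

Naive LSB marks are trivially destroyed by re-encoding, which is exactly why the session's call for agreed technical standards, rather than ad hoc schemes, matters.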

Overall, the speakers expressed a positive sentiment regarding the potential of AI in cybersecurity. They highlighted the importance of open innovation, collaboration, inclusivity, and policy measures to ensure the safe and responsible use of AI technologies. The discussion provided valuable insights into the current state and future directions of AI in cybersecurity.

Michael Ilishebo

The use of Artificial Intelligence (AI) has raised concerns regarding its negative impact on different aspects of society. One concern is that AI has enabled crimes that were previously impossible. An alarming trend is the accessibility of free AI tools online, allowing individuals with no computing knowledge to program malware for criminal purposes.

Another concern is the challenges AI poses for law enforcement agencies. AI technology performs tasks at a pace that surpasses human comprehension, making it difficult to differentiate between AI-generated content and human interaction. This creates obstacles for law enforcement in investigating and preventing crimes. Additionally, AI’s ability to generate realistic fake videos and mimic voices complicates the effectiveness of digital forensic tools, threatening their reliability.

Developing countries face unique challenges with regards to AI. They primarily rely on AI services and products from developed nations and lack the capacity to develop their own localized AI solutions or train AI based on their data sets. This dependency on foreign AI solutions increases the risk of criminal misuse. Moreover, the public availability of language models can be exploited for criminal purposes, further intensifying the threat.

The borderless nature of the internet and the use of AI have contributed to a rise in internet crimes. Meta, a social media company, reported detecting nearly a billion fake accounts in a single quarter. The proliferation of fake accounts promotes the circulation of misinformation, hate speech, and other inappropriate content. Developing countries, facing resource limitations, struggle to effectively filter and combat such harmful content, exacerbating the challenge.

Notwithstanding the negative impact, AI also presents positive opportunities. AI has the potential to revolutionize law enforcement by detecting, preventing, and solving crimes. AI’s ability to identify patterns and signals can anticipate potential criminal behavior, often referred to as pre-crime detection. However, caution is necessary to ensure the ethical use of AI in law enforcement, preventing human rights violations and unfair profiling.

In the realm of cybersecurity, the integration of AI has become essential. National cybersecurity strategies need to incorporate AI to effectively defend against cyber threats. This integration requires the establishment of regulatory frameworks, collaborative capacity-building efforts, data governance, incidence response mechanisms, and ethical guidelines. AI and cybersecurity should not be considered in isolation due to their interconnected impact on securing digital systems.

In conclusion, while AI brings numerous benefits, significant concerns exist regarding its negative impact. From enabling new forms of crime to posing challenges for law enforcement and digital forensic tools, AI has far-reaching implications for societal safety and security. Developing countries, particularly, face specific challenges due to their reliance on foreign AI solutions and limited capacity to filter harmful content. Policymakers must prioritize ethical use of AI and address the intertwined impact of AI and cybersecurity to harness its potential while safeguarding against risks.

Waqas Hassan

Regulators face a delicate balancing act in protecting both industry and consumers from cybersecurity risks, particularly those related to AI in developing countries. The rapid advancement of technology and the increasing sophistication of cyber threats have made it challenging for regulators to stay ahead in ensuring the security of both industries and individuals.

Developing nations require more capacity building and technology transfer from developed countries to effectively tackle these cybersecurity challenges. Technology, especially cybersecurity technologies, is primarily developed in the West, putting developing countries at a disadvantage. This imbalance hinders their ability to effectively defend against cyber threats and leaves them vulnerable to cyber attacks. It is crucial for developed countries to support developing nations by providing the necessary tools, knowledge, and resources to enhance their cyber defense capabilities.

The pace at which cyber threats are evolving is surpassing the rate at which defense mechanisms are improving. This disparity poses a significant challenge for regulators and exposes the vulnerability of developing countries’ cybersecurity infrastructure. The proactive approach is crucial in addressing this issue, as reactive defense mechanisms are often insufficient in mitigating the sophisticated cyber threats faced by nations worldwide. Taking preventive measures, such as taking down potential threats before they become harmful, can significantly improve cybersecurity posture.

Developing countries often face difficulties in keeping up with cyber defense due to limited tools, technologies, knowledge, resources, and investments. These limitations result in a lag in their cyber defense capabilities, leaving them susceptible to cyber attacks. It is imperative for both developed and developing countries to work towards bridging this gap by standardizing technology, making it more accessible globally. Standardization promotes a level playing field and ensures that both nations have equal opportunities to defend against cyber threats.

Sharing information, tools, experiences, and human resources plays a vital role in tackling AI misuse and improving cybersecurity posture. Developed countries, which have the investment muscle for AI defense mechanisms, should collaborate with developing nations to share their expertise and knowledge. This collaboration fosters a fruitful exchange of ideas and insights, leading to better cybersecurity practices globally.

Global cooperation on AI cybersecurity should begin at the national level. Establishing a dialogue among nations, along with sharing information, threat intelligence, and the development of AI tools for cyber defense, paves the way for effective global cooperation. Regional bodies such as the Asia-Pacific CERT and ITU already facilitate cybersecurity initiatives and can further contribute to this cooperation by organizing cyber drills and fostering collaboration among nations.

The responsibility for being cyber ready needs to be distributed among users, platforms, and the academic community. Cybersecurity is a collective effort that requires the cooperation and active involvement of all stakeholders. Users must remain vigilant and educated about potential cyber threats, while platforms and institutions must prioritize the security of their systems and infrastructure. In parallel, the academic community should actively contribute to research and innovation in cybersecurity, ensuring the development of robust defense mechanisms.

Despite the limitations faced by developing countries, they should still take responsibility for being ready to tackle cybersecurity challenges. Recognizing their limitations, they can leverage available resources, capacity building initiatives, and knowledge transfer to enhance their cyber defense capabilities. By actively participating in cybersecurity efforts, developing countries can contribute to creating a safer and more secure digital environment.

In conclusion, regulators face an ongoing challenge in safeguarding both industry and consumers from cybersecurity risks, particularly those related to AI. To address these challenges, developing nations require greater support in terms of capacity building, technology transfer, and standardization of technology. A proactive approach to cybersecurity, global cooperation, and the shared responsibility of being cyber ready are crucial components in building robust defense mechanisms and ensuring a secure cyberspace for all.

Babu Ram Aryal

Babu Ram Aryal advocates for comprehensive discussions on the positive aspects of integrating artificial intelligence (AI) in cybersecurity. He emphasizes the crucial role that AI can play in enhancing cyber defense measures and draws attention to the potential risks associated with its implementation.

Aryal highlights the significance of AI in bolstering cybersecurity against ever-evolving threats. He stresses the need to harness the capabilities of AI in detecting and mitigating cyber attacks, thereby enhancing the overall security of digital systems. By automating the monitoring of network activities, AI algorithms can quickly identify suspicious patterns and respond in real-time, minimizing the risk of data breaches and information theft.

Moreover, Aryal urges for a thorough exploration of the potential risks that come with AI in the context of cybersecurity. As AI systems become increasingly intelligent and autonomous, there are concerns about their susceptibility to malicious exploitation or manipulation. Understanding these vulnerabilities is crucial in developing robust defense mechanisms to safeguard against such threats.

To facilitate a comprehensive examination of the topic, Aryal assembles a panel of experts from diverse fields, promoting a multidisciplinary approach to exploring the intersection of AI and cybersecurity. This collaboration allows for a detailed analysis of the potential benefits and challenges presented by AI in this domain.

The sentiment towards AI’s potential in cybersecurity is overwhelmingly positive. The integration of AI technologies in cyber defense can significantly enhance the security of both organizations and individuals. However, there is a need to strike a balance and actively consider the associated risks to ensure ethical and secure implementation of AI.

In conclusion, Babu Ram Aryal advocates for exploring the beneficial aspects of AI in cybersecurity. By emphasizing the role of AI in strengthening cyber defense and addressing potential risks, Aryal calls for comprehensive discussions involving experts from various fields. The insights gained from these discussions can inform the development of effective strategies that leverage AI’s potential while mitigating its associated risks, resulting in improved cybersecurity measures for the digital age.

Audience

The extended analysis highlights several important points related to the impact of technology and AI on the global south. One key argument is that individual countries in the global south lack the capacity to effectively negotiate with big tech players. This imbalance is due to the concentration of technology in the global north, which puts countries in the global south at a disadvantage. The supporting evidence includes the observation that many resources collected from the third world and global south are directed towards the developed economy, exacerbating the technological disparity.

Furthermore, it is suggested that AI technology and its benefits are not equally accessible to and may not equally benefit the global south. This argument is supported by the fact that the majority of the global south’s population resides in developing countries with limited access to AI technology. The issue of affordability and accessibility of AI technology is raised, with the example of ChatGPT, an AI system that is difficult for people in developing economies to afford. The supporting evidence also highlights the challenges faced by those with limited resources in addressing AI technology-related issues.

Inequality and limited inclusivity in the implementation of accessibility and inclusivity practices are identified as persistent issues. While accessibility and inclusivity may be promoted in theory, they are not universally implemented, thereby exposing existing inequalities across different regions. The argument is reinforced by the observation that politics between the global north and south often hinder the universal implementation of accessibility and inclusivity practices.

The analysis also raises questions about the transfer of technology between the global north and south and its implications, particularly in terms of international relations and inequality. The sentiment surrounding this issue is one of questioning, suggesting the need for further investigation and examination.

Moreover, AI is seen as a potential threat that can lead to new-age digital conflicts. The supporting evidence presents AI as a tool with the potential to be used against humans, leading to various threats. Furthermore, the importance of responsive measures that keep pace with technological evolution is emphasized. The argument is that measures aimed at addressing new tech threats need to be as fast and efficient as the development of technology itself.

Concerns about the accessibility and inclusion of AI in developing countries are also highlighted. The lack of infrastructure and access to electricity in some regions, such as Africa, pose challenges to the adoption of AI technology. Additionally, limited internet access and digital literacy hinder the effective integration of AI in these countries.

The potential risks that AI poses, such as job insecurity and limited human creativity, are areas of concern. The sentiment expressed suggests that AI is perceived as a threat to job stability, and there are fears that becoming consumers of AI may restrict human creativity.

To address these challenges, it is argued that digital literacy needs to be improved in order to enhance understanding of the risks and benefits of AI. The importance of including everyone in the advancement of AI, without leaving anyone behind, is emphasized.

The analysis delves into the topic of cyber defense, advocating for the necessity of defining cyber defense and clarifying the roles of different actors, such as governments, civil society, and tech companies, in empowering developing countries in this field. The capacity of governments to implement cyber defense strategies is questioned, using examples such as Nepal adopting a national cybersecurity policy with potential limitations in transparency and discussions.

The need to uphold agreed values, such as the Human Rights Charter and internet rights and principles, is also underscored. The argument is that practical application of these values is necessary to maintain a fair and just digital environment.

The analysis points out the tendency for AI and cybersecurity deliberations to be conducted in isolation at the multilateral level, emphasizing the importance of multidisciplinary governance solutions that cover all aspects of technology. Additionally, responsible behavior is suggested as a national security strategy for effectively managing the potential risks associated with AI and cybersecurity.

In conclusion, the extended analysis highlights the disparities and challenges faced by the global south in relation to technology and AI. It underscores the need for capacity building, affordability, accessibility, inclusivity, and responsible governance to ensure equitable benefits and mitigate risks. Ultimately, the goal should be to empower all nations and individuals to navigate the evolving technological landscape and foster a globally inclusive and secure digital future.

Tatiana Tropina

The discussions surrounding AI regulation and challenges in the cybersecurity realm have shed light on the importance of implementing risk-based and outcome-based regulations. It has been recognized that while regulation should address the threats and opportunities presented by AI, it must also avoid stifling innovation. Risk-based regulation, which assesses risks during the development of new AI systems, and outcome-based regulation, which aims to establish a framework for desired outcomes, allowing the industry to achieve them on their own terms, were highlighted as potential approaches.

There are concerns regarding AI bias, accountability, and the transparency of algorithms. There is a need to address these issues, along with the growing challenge of deepfakes. The evolving nature of AI technology poses challenges such as the generation of malware and spear-phishing campaigns. Future challenges include AI bias, algorithm transparency, and the impact of deepfakes. These concerns need to be effectively addressed to ensure the responsible and ethical development and deployment of AI.

Cooperation between industry, researchers, governments, and law enforcement was emphasized as crucial for effective threat management and defense in the AI domain. Building partnerships and collaboration among these stakeholders can enhance response capabilities and mitigate potential risks.

While AI offers significant benefits, such as its effective use in hash comparison and database management, its potential threats and misuse require a deeper understanding and investment in research and development. The need to comprehend and address AI-related risks and challenges was underscored to establish future-proof frameworks.

The discussions also highlighted the lack of capacity to assess AI and cyber threats globally, both in the global south and global north. This calls for increased efforts to enhance understanding and build expertise to effectively address such threats on a global scale. Furthermore, the importance of cooperation between the global north and south was stressed, emphasizing the need for collaboration to tackle the challenges and harness the potential of AI technology.

The concept of fairness in AI was noted as needing redefinition to encompass its impact globally. Currently, fairness primarily applies to the global north, necessitating a broader perspective that considers the impact on all regions of the world. It was also suggested that global cooperation should focus on building a better future and emphasizing the benefits of AI.

Regulation was seen as insufficient on its own, requiring accompanying actions from civil society, the technical community, and companies. External scrutiny of AI algorithms by civil society and research organizations was proposed to ensure their ethical use and reveal potential risks.

The interrelated UN processes of cybersecurity, AI, and cybercrime were mentioned as somewhat artificially separated. This observation underscores the need for a more holistic approach to address the interdependencies and mutual influence of these processes.

The absence of best practices in addressing cybersecurity and AI issues was recognized, emphasizing the need to invest in capacity building and the development of effective strategies.

The proposal for a global treaty on AI by the Council of Europe was deemed potentially transformative in achieving transparency, fairness, and accountability. Additionally, the EU AI Act, which seeks to prohibit profiling and certain other AI uses, was highlighted as a significant development in AI regulation.

The importance of guiding principles and regulatory frameworks was stressed, but it was also noted that they alone do not provide a clear path for achieving transparency, fairness, and accountability. Therefore, the need to further refine and prioritize these principles and frameworks was emphasized.

Overall, the discussions highlighted the complex challenges and opportunities associated with AI in cybersecurity. It is crucial to navigate these complexities through effective regulation, collaboration, investment, and ethical considerations to ensure the responsible and beneficial use of AI technology.

Session transcript

Babu Ram Aryal:
Good evening. Tech team, is it okay? Welcome to this workshop number 86 in this hall. It is a pleasure to be here discussing artificial intelligence and cyber defence, especially from a developing-country perspective. This is Babu Ram Aryal, and by profession I am a lawyer; I have been engaged in various law and technology issues from Nepal. I would like to introduce my panellists this evening very briefly. My panellist Sarim is from Meta; he leads Meta’s South Asia policy team, is significantly engaged in AI, policy and technology issues, and will be representing the business perspective on this panel. My colleague Waqas Hassan is lead of international affairs at the Pakistan Telecommunication Authority; he works on the regulatory side and will be sharing the regulatory perspective, regionally and of course from Pakistan. My colleague Michael is from Zambia; he is a cyber analyst and cybercrime investigator, and he will be representing law enforcement agencies. And Tatiana Tropina is assistant professor at Leiden University, and she will be representing the policy perspective, especially from Europe. So, artificial intelligence has given a very significant opportunity to all of us. It has now become a big word; it is not a new one, but recently these tools and technologies have become very popular, and lots of threats have also been posed by artificial intelligence technology. In this panel, we will discuss how artificial intelligence could be beneficial, especially from a cybersecurity or defence perspective, and also the framework on the defence side for the potential risks of artificial intelligence in cybersecurity and cybercrime mitigation. I will go directly to Michael, who is directly experiencing various risks and threats and handling cybercrime cases in Zambia.
Michael, please share your experience and your perspective, especially as you have been very engaged in the IGF. I know you have been a MAG member and engaged on the African continent as well. The floor is yours, Michael.

Michael Ilishebo:
Good afternoon, and good morning, and good evening. I know the time zone for Japan is difficult for most of us who are not from this region. Of course, in Africa it’s morning; in South America it’s probably the evening. All protocols observed. So, basically, I am a law enforcement officer working for the Zambia Police Service and the Cybercrime Unit. In terms of the current crime landscape, we’ve seen an increase in crimes that are technology-enabled. We’ve seen crimes that you wouldn’t expect to happen, but at the end of it all, we’ve come to discover that most of these crimes are enabled by AI. I’ll give you an example. If a person who’s never been to college or never done any computing course is able to program computer malware or a computer program that they’re using for their criminal intent, you’d ask: what skills have they got to execute such a thing? What we’ve come to understand is that it has been enabled by AI, especially with the coming of ChatGPT and other AI-based tools online, which are basically free. With time on their hands, they are able to come up with something that they can execute in their criminal activities. So this in itself has posed a serious challenge for law enforcers, especially on the African continent and mostly in developing countries. Beyond that, of course, we handle cases where it has become difficult to distinguish a human element from an artificial-intelligence-generated one, whether it’s an image or a video. So, as a result, when such cases go to court, or when we arrest such perpetrators, it’s a grey area on our part, because the AI technologies are able to do much, much more, and much, much faster, than a human can comprehend. So, from the law enforcement perspective, I think AI has caused quite a few challenges. What kind of challenges have you experienced as a law enforcement agency?
So basically, it comes down to the use of digital forensic tools. I’ll give an example. A video can be generated that appears to be genuine, and everyone else would believe it, and yet it is not. You can have cases to do with freedom of expression where somebody’s voice has been copied, and if you literally listen to it, you’d believe that indeed this is the person who has been issuing this statement, when in fact it is not. Even with emails: you can receive an email that genuinely seems to come from a genuine source, and yet it has probably been AI-written, and everything points to an individual or to an organization, so at the end of the day, as you receive it, you trust it. So basically, there are many, many areas. Each and every day we are learning new challenges and new opportunities as we try to catch up with the use of AI in our policing and day-to-day activities, and as we also try to distinguish AI activities from human interaction.

Babu Ram Aryal:
Thank you, Michael. I’ll come to Tatiana. Tatiana is a researcher and is significantly engaged in cybersecurity policy development. As a researcher, how do you see the development of AI, especially on cybersecurity issues? And as you represent the European stakeholders on our panel, what is the European position on these kinds of issues from a policy perspective, in terms of policy frameworks, and what kinds of issues are being dealt with by European countries? Tatiana.

Tatiana Tropina:
Thank you very much. And I do believe that, in a way, the threats and the opportunities that artificial intelligence brings for cybersecurity, or security in general, let’s say protection from harm, might be almost the same everywhere, but the European Union indeed is trying to deal with them and foresee them in a manner that addresses the risks and harms. And I know that the big discussion in policy community circles and also in academic circles is no longer whether or not we need to regulate AI for the purposes of security and cybersecurity. The question is: how do we do this? How do we protect people, and also systems, from harm while not stifling innovation? And I do believe that right now there are mostly two approaches being discussed. The first is risk-based regulation: when new AI systems are developed, the risk is assessed, and based on that risk, regulation will either apply or not. The second is outcome-based regulation: you create a framework of what you want to achieve, and then give industry the ability to achieve it by its own means, as long as people are protected from harm. But I would like to second what the previous speaker said. From the law enforcement perspective, from the crime perspective, the challenges are so many that sometimes, when we look at them, our judgment gets, how do I say it, almost clouded. We have to do two things: we have to address the current challenges while foreseeing the future challenges, right?
So I do believe that right now we are talking a lot about risks from large language models, the generation of spear phishing campaigns, the generation of malware; this is something that is already happening, and it’s hard to regulate. But if we are looking to the future, we have to address a few more things in terms of cybersecurity and risks. First of all, AI bias, and the accountability and transparency of algorithms. We also have to address the issue of deepfakes, and here it goes even beyond cybersecurity; it goes to information operations, into the field of national security. So this is just my baseline, and I’m happy to go into further discussion on this.

Babu Ram Aryal:
Thank you, Tatiana. Now, for the initial remarks, I’ll come to Sarim. Among industry players, Meta is a very significant one, and the Meta platforms are very popular; at the same time, there are many risks on the Meta platforms that people complain about. And not only the Meta platforms; you are just the one here, that’s why I mentioned it. In many countries there are complaints that these platforms are not contributing, that they are just doing business while the technologies are being misused by bad people. So there are a few angles: the business perspective, the technology perspective, as well as the social perspective. As a technology industry player, how do you see the risks and opportunities of artificial intelligence, especially on the topic we have been discussing? And what could be the response from industry in addressing these kinds of issues? Sarim.

Sarim Aziz:
Thank you, Babu, for the opportunity. I think this is a very timely topic. There’s been a lot of debate around the opportunities with AI and excitement around it, but also the challenges and risks, as our speakers have highlighted. I just wanna reframe this discussion from a different perspective. From our perspective, we have to actually understand the threat actors we’re dealing with. They can sometimes be using quite simple methods to evade detection, but sometimes very sophisticated methods, AI being one of them. We have a cybersecurity team at Meta that’s been trying to stay ahead of the curve of these threat actors. And I wanna point to a tool, which is our adversarial threat report, which we produce quarterly. That’s just a great information tool out there, for policy as well, to understand the trends of what’s going on. This is where we report in-depth analysis of the influence operations that we see around the world, especially around coordinated inauthentic behavior. If you think about the issues we’re discussing around cybersecurity, a lot of that has to do with inauthentic behavior: someone who’s trying to appear authentic, from a phishing email to a message you might receive, to hacking attempts and other things. So that threat report is a great tool, and that’s something we do on a quarterly basis. We’ve been doing that for a long time. We also did a state-of-influence-ops report between 2017 and 2020 that shows the trends of how sophisticated these actors are. But from our perspective, we’ve seen three things with AI from a risk perspective that honestly do not concern us as much, and I’ll explain why. One is, yes, as Michael mentioned, the most typical use case: AI-generated photos used to make a fake profile appear real. But frankly, if you think about it, that was happening even before AI.
In fact, most of the fake accounts we were taking action on previously all had profile photos. It’s not like they didn’t have a photo. So whether that photo was generated by AI or is of a real person shouldn’t matter, because it’s actually about the behavior. And I think that’s my main point: the challenge with gen AI is that we get a little bit stuck on the content, and we need to change the conversation to how we detect bad behavior, right? So that’s one. The second thing we notice is that because gen AI is in the hype cycle, the fact that almost every session here at IGF is about AI, it becomes an easy target for phishing and scams, because all you need to do is say, hey, click on this to access ChatGPT for free. And because people have heard of AI and think it’s cool, they’re more willing to get duped, which is common with hype cycles like AI and other things. The third, and I think Michael alluded to this and Tatiana as well, is that it does make it a little bit easier, especially for non-English speakers who want to scam others, to use gen AI, whether you wanna make ransomware or malware, because now you’ve got a tool that will help you fix your language and make it look all pretty. So it’s like you’ve got a very nice auto-complete spell checker that can make sure your things are well written. So those are the three high-level threats, but honestly, what I would say is that we haven’t seen a major difference in our enforcement. And I’ll give you an example. For quarter one of this year, we also have a transparency report, where we measure ourselves and how good our AI is. And that’s the point I’m trying to get to: we are more excited about the opportunities AI brings in cybersecurity, helping cyber defenders and helping keep people safe, than worried about the risk. And this is one example.
99.7% of the fake accounts that we removed on Facebook in quarter one of this year were removed by AI. And if I give you that number, it’s staggering: 676 million accounts were removed in just one quarter by AI alone, right? That’s the scale. So when you talk about detection at scale, it has nothing to do with content; I just wanna bring it back to that. What we detected was inauthentic behavior, fake behavior. It shouldn’t matter whether your profile photo was from ChatGPT or not, or what your text is. Because once you get into the content, you’re getting into the weeds of what the intent is, and you don’t know the intent, right? Whether it’s real or… And in fact, I’ll also point to the fact that some of the worst fake videos are actually not the gen AI ones. If you look at the ones that went the most viral, they are real videos, and it’s the simplest manipulations that have fooled people. I’m pointing to the US Speaker of the House, Nancy Pelosi, and her video that went viral. All they did was slow it down, and they didn’t use any AI for that. And that had the highest negative impact, because people believed that there was a problem with the individual, which clearly wasn’t the case. It was an edited video. So I guess what I’m trying to say is that the bad actors find a way to use these tools, and they will find any tool that’s out there. So we really have to get focused on the behavior and detection piece, and I can get into that more. That’s it for now.
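Sarim’s point is that detection keys on behavioral signals rather than content. A minimal sketch of what behavior-based scoring could look like; the signals, weights and thresholds below are invented for illustration and are not Meta’s actual (non-public) rules:

```python
# Toy illustration of behaviour-based (not content-based) account scoring.
# All signals, weights and thresholds here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    accounts_on_device: int      # accounts created from the same device
    messages_last_minute: int    # outbound messages in the past 60 seconds
    account_age_days: int

def spam_score(a: AccountActivity) -> float:
    """Score inauthentic behaviour; higher means more suspicious."""
    score = 0.0
    if a.accounts_on_device > 5:      # many accounts on one device
        score += 0.5
    if a.messages_last_minute > 100:  # burst messaging / spamming
        score += 0.4
    if a.account_age_days < 1:        # brand-new account
        score += 0.2
    return min(score, 1.0)

def should_review(a: AccountActivity, threshold: float = 0.6) -> bool:
    return spam_score(a) >= threshold
```

Note that nothing in the score looks at the profile photo or message text, which is the point Sarim makes: an AI-generated photo and a stolen real photo score identically.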

Babu Ram Aryal:
Thanks, Sarim. It’s very encouraging that 99% of fake accounts are removed by AI. And what about the reverse situation? Is there any negative intervention from AI on the platform?

Sarim Aziz:
Like I said, I mentioned the three areas. Obviously, when you get into large language models, and I’m getting to solutions a bit early here, I also wanna make the point that we believe the solution is more people in the cybersecurity space using this technology. We talk about amplifying the good; we need to use it for good and for keeping people safe. And we can do that through open innovation, an open approach, and collaboration, right? Of course the risks are there, but if you keep something closed and you only give access to a few companies or a few individuals, then bad actors will find a way to get it anyway, and they will use it for bad purposes. But if you make sure it’s accessible and open for cybersecurity experts and for the community, then I think you can use open innovation to really make sure the cyber defenders are using the technology and improving it. And this 99.7% is an example of that. I mean, we open source a lot of our AI technology for communities, for developers, and for other platforms to use as well.

Babu Ram Aryal:
Thanks, I’ll come back to you in the next round of Q&A. Waqas, you are in a very hot seat. I know regulatory agencies are facing lots of challenges from technology, and now telecom regulators have very big roles in mitigating the risks of AI in telecommunications and, of course, the internet. So from your perspective, what do you see as the major issue, as a regulator or as a government, when artificial intelligence is challenging the platforms in a way that makes people feel at risk? And of course from your Pakistani perspective as well: how have you dealt with this kind of situation in your country? Can you say some lines on this?

Waqas Hassan:
Yeah, thanks, Babu. Actually, thanks for setting up the context for my initial remarks here, because you already said that I’m in a hot seat. Even in this seating I’m in the middle, between the platform, the police and the researcher. For regulators it’s a bit of a tricky job, because on the one hand we are connected with the industry, and on the other hand we are directly connected with the consumers as well. It is more like a job where you have to do a balancing act whenever you’re taking any decision or moving forward on anything. With cybersecurity itself having been a major challenge for developing countries for so long, this new mix of AI has actually made things more challenging. You see, the technology has usually, primarily and inherently been developed in the West. That means we have a first-mover disadvantage in developing countries, because we’re already lacking on the technology transfer part. What happens is that, because of the internet and because of how we are connected these days, it is much easier to get any information, which could be positive or negative. And usually the elements that are engaged in such kinds of cybercrime are ahead of the curve when it comes to defenses. Defense will always be reactive, and developing countries have always been in a reactive mode. Meta has just mentioned that their AI has been able to remove 99.7% of the fake accounts taken down on Facebook within one quarter. That means they have such advanced, tech-savvy technology and resources available to them that they were able to achieve this huge and absolutely tremendous milestone, by the way.
But can you imagine something like this, some solution like this, in the hands of a developing country, with that kind of investment to deploy something which can actually serve as a dome, a cybersecurity net, around your country? That’s not going to happen anytime soon. So what does it come down to, then, for us as regulators? It comes down to, number one, removing that inherent fear of AI which we have in developing countries. Although it is absolutely tremendous to see the positive things AI has been bringing, that inherent fear of any new technology is still there. This is related to behavior, which Sarim was mentioning, and I think it also points to one more thing, which is intention. I think intention is what leads to anything, whether it is in cyberspace or off it. What developing countries need, to tackle this new form of cybersecurity, I would call it, with the mix of AI, is more capacity: more institutional capacity, more human capacity, and a national collaborative approach driven by something like a common agenda of how to actually go about it. We are so disjointed even in our national efforts for a secure cyberspace that doing something at a regional level seems like a distant prospect to me right now. Just to sum it up: for example, in Pakistan we have a national cybersecurity policy. We have a national centre for cybersecurity. The PTA has issued regulations on critical telecom infrastructure protection. We do threat intelligence sharing as well, and there is a national telecom CERT. There are so many things that we are doing, but if I look at the trend, it covers maybe the last three or four years, when things actually started to come out. Imagine if these things had been happening ten years back; we would have been much more prepared to bring AI into our cybersecurity postures now.
So from a governance, cybersecurity or regulatory perspective, it is more about how we tackle these new challenges with a more collaborative approach, looking to more developed countries for technology transfer and building institutional capacity to address these challenges. Thank you.

Babu Ram Aryal:
Thank you, Waqas. Actually, I was about to come to capacity, and Waqas, you just mentioned capacity building. Tatiana, I would like to ask you: how much investment in policy frameworks and capacity building is going into framing the law and the ethical issues of artificial intelligence? Are industries contributing to managing these things, and what about the government side? So what is the level of capacity in policy research for framing, I mean, framing the way out of these artificial intelligence and legal issues? It’s working, right?

Tatiana Tropina:
Thank you very much for the question. And I must admit, I’ve heard the word investment, but I’m not an economist, so I’m going to talk about people, hours, efforts, and so on. First of all, when it comes to security, defense or regulation, I think we need to understand that to address anything and to create future frameworks, we need to understand the threat first, right? So we need to invest in understanding threats. And here, and I think I mentioned this before, it’s not only about harms as we see them, for example harm from crime or harm from deepfakes. It’s also the harm caused by bias, an ethical issue, because an artificial intelligence model brings only as much good as the model itself, the information you feed it, and the final outcome. And we know already, and I think this is incredibly important for developing countries to remember, that AI can be biased. And technologies created in the West can be doubly biased once technology transfer and adoption happen somewhere else. For example, when I heard about Meta removing accounts based on behavioral patterns, I really would like to know how these models are trained. Be it content, language or behavioral patterns, do they take into account cultural differences between languages, countries, continents, and so on? And here I do believe that what we talk about in terms of cooperation between industry, researchers, governments and law enforcement is crucial. Just a few examples. External scrutiny of algorithms: I believe all three of you will agree with me that it is incredibly important, once an algorithm is created and trained, to open it to scrutiny from civil society and from research organizations, because you need somebody from the outside to see whether it is ethical. You know, to me, testing algorithms just by deploying them is the same as testing medicine or cosmetics on animals.
We don’t do this anymore. So, it’s not only building capacity itself; it’s adopting a completely new mindset for how we are going to do this. And in terms of investment in the creation of future-proof frameworks, you really need to see the whole picture and then ask: what kinds of threats am I addressing today, and what kinds of threats might I foresee tomorrow? This is why I was saying that it is hard to think about future-proof frameworks, because, indeed, defense will always be a bit behind. But if you set aside the technology itself, technology can change tomorrow, but you can think about how you frame harm and what you want to achieve in your innovation. And then say: okay, Meta, I want to achieve this level of safety. If you see this risk, please provide this safety. Leave the means to Meta, but make Meta open to external research, and this cooperation might bring you to the point where it is more ethical, more for good in terms of defense. And I also want to say that the existential fear of AI exists everywhere, I believe. This is why every second session here is about AI: because we are so scared. But I also do believe that we cannot stop what is going on. We really have to invest here; again, I’m talking not about money but about people. And also, if I may, if I have not spoken for too long yet, there are so many issues here that we have to untangle. Again, look at the harms and look at the algorithm itself. For example, the use of algorithms in the creation of spear phishing campaigns or malware. We know how to address it. We need to work on prompt engineering, because the algorithm creates malware only as good as the prompt you give it. A year ago, you could say to ChatGPT, just create me a piece of malware or ransomware, and it would do it. Now you cannot do this; you need to split it into many, many prompts. So we have to make this untenable for criminals.
We have to make sure that every tiny prompt, every tiny step that they can execute in the creation of this malware by an algorithm, will be stopped. And yes, it is work, but this is work we can do. And so it is with any other harm. Sorry for speaking for too long. Thank you.
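Tatiana’s point that criminals now split a malware request into many small prompts suggests screening at the session level rather than prompt by prompt. A toy sketch of that idea, assuming keyword weights invented purely for illustration; production safety systems use trained classifiers, not keyword lists:

```python
# Toy session-level prompt screen: accumulate risk across a conversation
# instead of judging each prompt in isolation. The terms and weights are
# hypothetical; real systems rely on learned classifiers.
RISKY_TERMS = {
    "keylogger": 0.5,
    "encrypt all files": 0.4,
    "disable antivirus": 0.5,
    "exfiltrate": 0.4,
}

def session_risk(prompts: list[str]) -> float:
    """Sum risk contributions over every prompt in the session."""
    total = 0.0
    for p in prompts:
        lowered = p.lower()
        for term, weight in RISKY_TERMS.items():
            if term in lowered:
                total += weight
    return total

def should_block(prompts: list[str], threshold: float = 0.8) -> bool:
    # A single prompt may stay under the threshold, but the
    # conversation as a whole can still trip it.
    return session_risk(prompts) >= threshold
```

The design choice is the one Tatiana describes: each individual step may look harmless, so the defender has to reason over the whole chain of prompts.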

Babu Ram Aryal:
It’s absolutely fine. Thank you very much for bringing more issues to the table. Sarim, there was a very interesting response from Tatiana: setting out what harm is and how we understand it. And previously Waqas mentioned the fear of AI. So do you have any fear of these things on technology platforms like yours? How are you handling these kinds of fears and risks technologically? I don’t know whether you can respond from the technological side, but still, from your platform perspective.

Sarim Aziz:
I think, yeah, I mean, any new tech can seem scary, but we need to move beyond that. As Tatiana and others mentioned, the existential risk always becomes a distraction in the conversation; there are near and short-term risks that need to be managed. And on approaches, I think there are some really good principles and frameworks out there: the OECD principles on fairness, transparency and accountability, and the White House commitments as well. So there are good policy frameworks for countries to look at, and they certainly need to be localized to every region. But there are plenty of good examples, like the G7 Hiroshima process, that industry is generally supportive of, in terms of making sure that we build AI responsibly and for good. But to me, the bigger question, since the harms are sort of clear, is how we get this technology into the hands of more people who are working in the cybersecurity space. Because the cybersecurity space, 20 years ago, was also quite closed, but now you have a lot more collaboration and open innovation happening in it. It took 20 years for us to realize that keeping cybersecurity closed to a few does not help, because the bad actors get this stuff anyway, and then you are just defenseless against them. So I think the same thing has to happen with AI. It’s going to be tough, but governments and policymakers need to incentivize open innovation. When you have a model that’s closed, you don’t know how it was trained or how it was built, and it makes it difficult for the community to figure out what the risks are. One of the things we did, for example: our model is open source; it was launched just in July of this year, and already in one month it was downloaded by 30,000 people.
Of course we did red teaming on it and tested it, but no amount of testing is going to be perfect, and the only way to get it tested thoroughly is to get it out there in the open source community, where responsible players have access to it and know what they’re doing. And that’s the beauty of AI; I think that’s a game changer. It was mentioned that there’s a capacity issue, and yes, there is a capacity issue. We have a capacity issue as Meta: you can’t hire enough people to remove the bad content. AI helps us do that. You could have millions of people looking at what’s on the platform and removing content, and it would never be enough. AI helps us get better. You still need human review, you still need experts who know what they’re doing, but it helps them be more efficient and more effective. And the same open innovation model can help developing countries catch up on cybersecurity, because now you don’t need thousands and thousands of cybersecurity experts; you just need a few who have access to the technology. That’s what open innovation and open sourcing do, which is what we’ve done with our model. We even submitted our model to DEF CON, which is a cybersecurity conference in Las Vegas, and we said, break this thing, find the vulnerabilities. What are we not doing, and where are the risks? We’re waiting for the report, but that’s how you make it better. Of course we did our best to make sure that it takes care of the CBRN risks, you know, chemical, biological, radiological and nuclear risks, but there are other risks that we may not have seen. So this is where putting it on open source and giving access to more researchers helps: it doesn’t matter whether you’re in Zambia or Pakistan or any other country, you now have access to the same technology that Meta has built. And that’s how we get to an open innovation approach. There are many other language models, and I’m not going to name them, but they are not open, and Meta’s is. So that’s what we need: to get policymakers to incentivize open hackathons on these kinds of things, break this thing, and create sandboxes to safely test this on, because a lot of the testing you can do is based only on what’s publicly available. If governments have access to information, they could make it available to hackers and say, okay, use this language model and see if we can do this, in a safe environment, obviously, ethically, without violating anybody’s privacy and things like that. So I think that’s where we need to focus the policy discussion.

Babu Ram Aryal:
Thanks, Sarim. I think one interesting point is that we are discussing from the developing country perspective, right? That is our basic objective. There are opportunities for all countries, and access is always there, as you, Sarim, mentioned, but there are big gaps between developing and developed countries in capacity. Especially if I look at it from the Nepalese perspective, we have very limited resources, technology, as well as human resources, and that is a big challenge for this defense. So, Michael, what is your personal experience leading from the front? What is the capacity of your team, and what do you see as the gap between developing and developed countries in the capacity to address these issues?

Michael Ilishebo:
So basically my experience is probably shared by all developing countries. We are consumers of services and products from developed countries. We haven’t yet reached the stage where we can have our own homegrown solutions for some of these AI language models, where we could maybe localize them or train them on our own data sets. Whatever we are using, whatever is being used out there, is a product of the Western world. So basically, one of the major challenges we’ve encountered through experience is that the public availability of these language models has itself proved to be a challenge, in the sense that anyone out there can have access to the tools. It simply means they can manipulate them, to an extent, for their criminal purposes. As reported by Meta, in the first quarter of their use of the model, they took down close to a billion fake accounts. Am I correct? Close to that, yeah. It could be images, it could be anything that was not meeting Meta’s standards. If you look at those numbers, they are staggering. Now imagine if some of the content that Meta has taken down over ethical, safety and other concerns reached a third world country that has no capacity at all to filter what is correct from what is not. It is becoming a challenge. As much as the crime trend is increasing, with the borderless nature of the internet the AI models have really become something where you have to weigh the good against the bad. Of course, the good outweighs the bad, but when the bad comes in, the damage it causes within a short period of time overshadows the good. So at the end of it all, there are many, many challenges that we face through experience. If only we could be at the same level as developed countries in terms of the tools they use to filter anything that might sway public opinion, in terms of misinformation, in terms of hate speech, in terms of any other act that we may deem not appropriate for society, or that is a tool for criminal purposes.

Babu Ram Aryal:
Thanks, Michael. Waqas, would you like to intervene on this issue?

Waqas Hassan:
I think, as already mentioned, the pace at which the threats are evolving is unequal to the pace at which our defense mechanisms are improving. And why is this happening? Because our forensics are not as fast as the crimes that are happening. As Michael has already mentioned, it’s a good thing that these tools and models are open source, but at the same time, they are equally available to people who want to misuse them. Now, when the capacity of the people who want to misuse them outweighs the capacity of the people who have to defend against them, you find incidents and situations where we eventually say that AI is bad for us or bad for society. But when we are better prepared, we are proactive. What Facebook did is a sort of proactive thing: rather than letting those accounts do something which would eventually become a big fiasco, they actually took them down before anything could happen. That is something developing countries are usually lagging behind on: doing cybersecurity, having their cyber defense, in a proactive rather than a reactive mode. I am not saying that we are not prepared, and I am not saying that there is no proactive approach; there is. But that proactive approach is hugely dependent on the kinds of tools, technologies, knowledge, resources and investment available to developing countries, rather than just saying, okay, fine, we have a proactive approach and we are doing these things. I mean, Michael is at the forefront of everything. I think everybody knows that the kinds of threats emerging now are much more sophisticated than they ever were before. Can we be as sophisticated and as prepared? I leave that question on the table. Thank you.

Sarim Aziz:
Can I just add a perspective? Coming back to my introduction, I don't think the risk vectors have changed. The bad actors who want to cause harm are using the same vectors they were before. Take phishing, that's a good example. Fine, with GenAI they can have a much better-written email that seems real, with logos that look real, but that's not how you solve phishing. You solve phishing by making authentication credentials one-time use. Because any one of us, the most educated person in this room, can be phished. If you're in a rush, you don't have time to check the email address; you just read something that looks real and you're going to click on it. We've all been there. We've all done it, right? I'm going to raise my hand. So those threat vectors haven't changed, and the same goes for fake accounts. Our fake account detection doesn't care how real your photo is or isn't. It's based on AI, it's based on behavior. And yes, of course, with 3.8 billion users we have to be careful, but this is the spammy behavior we're seeing: people creating multiple accounts on the same device, or sending 100 messages in a minute and spamming people, and things like that. It's bad behavior no matter what country you're from, what culture you're from. That kind of thing is universal, and the same with phishing. So yes, there are certain risks, and the same with NCII, non-consensual intimate imagery. NCII was there before GenAI; you can use Photoshop for that, you don't need GenAI. And unfortunately, that's the biggest harm we see.
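The one-time-credential idea Sarim describes can be sketched with the standard HOTP construction from RFC 4226. This is an illustrative stdlib-only sketch of the general technique, not a description of any platform's actual implementation:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the 8-byte big-endian counter.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # RFC 4226 test secret
print(hotp(secret, 0))  # "755224" (RFC 4226 test vector)
print(hotp(secret, 1))  # "287082"; the previous code is now invalid
```

Because the verifier rejects any counter value that has already been used, a credential harvested by a phishing page fails on replay; time-based variants (TOTP, RFC 6238) achieve the same effect by expiring codes after a short window.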
That's the biggest risk; we talked about risk. That's a separate topic, and I'm talking on a panel on child safety as well, where you need collaboration. We have an initiative called StopNCII.org where, again, AI helps. If anyone is a victim of NCII, if their pictures have been compromised and someone is blackmailing them, they can go to StopNCII.org and submit that video or image. And we use AI to block it across all platforms, all services. This is the power of AI: even if the image is slightly changed, we take that hash and match it. So I think AI actually helps us prevent a lot of harm. Without AI you can easily do the same kind of abuse; GenAI might make it a little easier, or maybe higher quality, but the quality of the impersonation or the intent doesn't really change the risk factor.
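The hash-matching Sarim alludes to can be illustrated with a toy perceptual hash. StopNCII and industry partners use robust perceptual hashes (Meta has published one called PDQ); the tiny average-hash below, over a flat list of grayscale values standing in for a thumbnail, is only a sketch of the core idea that near-duplicate images produce near-identical hashes:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if brighter than the mean.
    `pixels` is a flat list of grayscale values (e.g. an 8x8 thumbnail)."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

def matches(h1, h2, threshold=4):
    """Near-duplicate if the hashes differ in only a few bits."""
    return hamming(h1, h2) <= threshold

# A slightly edited copy (brightness tweaked) still hashes almost
# identically, while an unrelated image does not.
original = [10, 200, 30, 220, 15, 190, 25, 210, 12, 205, 28, 215, 18, 195, 22, 208]
edited   = [12, 198, 33, 218, 14, 192, 27, 207, 11, 203, 30, 213, 20, 197, 24, 206]
other    = [200, 10, 220, 30, 190, 15, 210, 25, 205, 12, 215, 28, 195, 18, 208, 22]

print(matches(average_hash(original), average_hash(edited)))  # True
print(matches(average_hash(original), average_hash(other)))   # False
```

A platform can then compare incoming uploads against a database of hashes submitted by victims; reportedly the hash is computed on the victim's own device, so the image itself never has to be shared.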

Tatiana Tropina:
Yeah, thank you. What I wanted to say largely goes in line with what you said, because I wrote down one line while I was listening: misuse will always happen. We have to understand that we should stop fixating on the technology itself. Any technology will be misused. If you want to create bulletproof technology, you should not create any technology, because there will always be people who misuse it, who will find a way to misuse it. Crime follows opportunity. That's it. Any technology will be misused. And also, about phishing, for example: the human is always the weakest link. You're not fooling only the system, you're fooling humans. In the same way, we have to talk about harms, and here I go back to one of my intro remarks. We have to focus on harms, not on technology per se. We have to see where the weakest link is, what exactly can be abused, where harm is caused. And in this way, I strongly believe that AI can bring so much good. And thank you for reminding me about the project on non-consensual image sharing. Of course, AI can do it automatically: you can compare hashes, you can have databases. But then again, when we look layer after layer, we can ask ourselves how this can be misused as well, and how that can be addressed, and so on and so forth. We just should always ask questions. And I would like to remind us, again and again: it's not only about technology. Let's always remember that it is humans who are making mistakes and humans who are abusing this technology, and this is where we also have to build capacity. Not only in technological development, not only in regulatory capacity; after all, the whole chain of risk focuses at the end on humans: on humans developing technology, on humans developing regulation, on humans being targeted, on humans making mistakes. And this is where we have to look as well.

Babu Ram Aryal:
Thanks, Tatiana. Now I would like to open the floor. My colleague Ananda Gautam is moderating online, so if there is any question from those joining this discussion online, you can also put your question to the panel. And I would like to request participants here to share your questions with the panel. Yes, please introduce yourself briefly for the record.

Audience:
Hello everyone, I'm Prabhas Subedi from Nepal. The discussion has been so interesting; thank you so much, panel. I want to explore a little bit what we missed from today's discussion, which is probably the capacity of individual countries to negotiate with big tech players. If you look at the present scenario, so many resources are being transferred from the so-called third world, the global south, to the developed economies. Of course they are boosting their economies by deploying these sorts of technologies, and we have nothing. That is one of the main reasons we are not empowered, not capable of tackling these sorts of challenges. And another thing is that the technology is so concentrated in the global north, and I'm not sure that they care equally and inclusively about the large population living in the global south; the economy comes first. So what is happening today will continue, and will continue in the AI-dominated time. That is my observation, and I would like to ask the panelists for theirs. Any specific resource person you would like to ask? Anyone can answer, thank you.

Sarim Aziz:
I mean, I think, as I said before, first of all I agree with you. There is a way of making technology more inclusive, and that has to be by design. That is why I think the principles in the AI frameworks out there, led by Japan and the OECD and the White House, are about inclusivity and fairness, making sure there is no bias. But those are all policy frameworks. From a tech perspective, as I said, I think open innovation is the answer, and AI can be the game changer. As I explained, it is out there. The same technology that we've open-sourced, that the Western countries have now, researchers and academics and developers in Nepal and other countries in Africa can also access. And this is an opportunity to get ahead. AI is the game changer because it's about scale, being able to do things at scale, especially when thinking about systems, protecting systems, and the threats you're talking about. It's not a problem you solve by throwing people at it. Of course you need to do capacity building and you need experts, but AI helps them be more efficient, more effective. So I'd love to see what the community does. Our model is only a few months old; it's called Llama 2. You can go and look at it, and there's a research paper along with it that explains how the model was built, because we've released it under an open source license with an acceptable use policy. And there are derivatives already. So you can't even use the language argument anymore, because a team in Japan took that model and made a Japanese version of it; I think it's called ELYZA, from a university in Tokyo. We're excited to see what the community can do, and I think that's the way we can continue to innovate and make sure that nobody gets left behind.

Audience:
I do not completely agree with you, because you can already see that, for example, ChatGPT has premium and free versions, and the majority of users are, of course, from the developed economies; it's quite difficult to afford. Such resources are not always openly and easily available. And if you are not accustomed to them and not well equipped with the resources, how can you be capable of tackling the upcoming challenges?

Sarim Aziz:
So I don't work for ChatGPT or OpenAI, so I can't speak for them, but our model is open source. It's already public, and anyone can basically build another ChatGPT competitor using it.

Babu Ram Aryal:
Thank you. Tatiana, he raised one interesting debate on global north and global south. What do you see? Yes, you.

Audience:
Well, thank you very much; this is a very interesting debate in international relations. I am Dr. Mohammed Shabbir from Pakistan, representing civil society here, the Dynamic Coalition on Accessibility and Disability. As a student of international relations, I would agree that we do not live in an equal world. The terminologies, inclusivity, accessibility, all seem very fine on paper, but in reality, unfortunately, we live in a real world and not an ideal world where everyone would be equal towards one another. Waqas made a very valid point, and I want to ask that question of Waqas, and then I would seek the response from Meta. You talked about the transfer of technology. What sort of technology are you talking about here? And my question to Meta, and to the global north, is: how far are they ready to share that technology with the global south, when it comes to diversity and inclusivity? Not to talk of the earlier point my friend raised about the price, and the free-plus-premium versions of the different software out there in the market; those will remain. But what sort of technology are we talking about transferring? Of course, AI is a tool like any other tool. But when it was human against human, it was like a sharp knife that could be used against another person: a human using a tool against a human. This time, AI being used as a tool is not just a computer; it would be a computer against the human being targeted. So the threat, as my friend from Meta says, is a real one, but it cannot be equated with the phishing example. I think this is something that we need to discuss.
The response measures have to be as sharp, as quick, and as fast as the technology that we are developing here. But I would want to seek the response on my earlier point from Waqas and then from Meta. Thank you.

Waqas Hassan:
Okay, thank you. I think when we say technology, one of the examples, of course, is how Meta has just open-sourced their AI model, which is something that any nation can use to develop their own models. What we're talking about, in my view, is standardization of these technologies. Once something gets standardized, it is available to everybody. That's how telecom infrastructure works across the world: if there is a standardized technology, it is easier for developing countries, developed countries, any interested party, to take advantage of it. Then there is threat intelligence: what kinds of threats are out there, what kinds of issues are being dealt with, what information sharing could there be, what new crimes are being introduced, how AI is being misused, and how that situation is being tackled by the West. Technology itself is just a word. It is more about what you are sharing. Are you sharing information? Are you sharing the tools? Are you sharing experiences? Are you even sharing human resources? You mentioned that now it is human versus AI, but how about AI versus AI? Can we develop tools or AIs that can preempt attacks? I'm going back to the cyber warfare movies, which used to predict that in the future bots would be fighting each other; we're not there yet. But if we are investing in AI for defense mechanisms to improve the cyber security posture, like Meta has just done, that investment muscle is currently not really available to the developing countries. So we have to look towards the West, and what they are developing is something that we need, and will need for the foreseeable future, in terms of the tools, the information, the experience sharing, and the threat intelligence that they have. Thank you.
And I'll leave it to Sarim to respond to the other part.

Sarim Aziz:
Thank you, Waqas. I think it's a good question. Maybe I didn't set the context of what Llama 2 is. Llama 2 is a large language model, similar to OpenAI's ChatGPT, except the difference is that it's free for commercial use, and it's open source. So the technology is available for any researcher, anyone, to deploy their own model within their own environment. If you've got the computational power, you can deploy it in your own cloud, on your own computers, or on Microsoft's Azure, or AWS, or any other. It's basically a large language model that helps you perform automated tasks. And it's out there as open source, meaning we invite the community to use it. It's free; we don't charge; there's no paid version of it. Obviously, you have to agree to the conditions and to the Responsible Use Guide. But beyond that, yeah, that's what we launched just this year. And we're excited to see how the community around the world uses it for different use cases, including use cases we didn't even anticipate. That's the beauty of open sourcing: we won't know in advance how it will get used by different governments and institutions. Of course, we only make it better and safer through red teaming, through testing, all that. The more cyber security experts use it and tell us the vulnerabilities, the more we'll improve it.

Babu Ram Aryal:
Thanks, Sarim. Tatiana, observing these two questions, I wanted to ask you about the debate on global south and global north capacity, and its impact on artificial intelligence and cyber defense issues.

Tatiana Tropina:
I must admit that I cannot speak for the global south, which is the global majority; it is hard for me to assess capacity there. But I can certainly tell you that even in the global north, the global minority if you like, capacity in AI and cyber defense is uneven. On the one hand, if we're talking about expertise, we might talk about some high-quality specialists and better testing and whatever. But believe me, the threat is still there, and there is a lack of understanding of what kind of threat it is in terms of national security and cyber operations. Because so much is connected in the global north, because people follow things on the internet so much, take, for example, deepfakes and elections. I love the story about the Nancy Pelosi video, because you don't have to change anything; you just have to slow it down or speed it up. So the question here again boils down to the capacity to assess threats before you have the capacity to tackle them. And I do believe that right now, in the so-called global north, we have this problem as well: the capacity to understand the threat. Are we just saying, oh my God, it's happening? Or are we really disentangling it, looking at what is actually happening and then assessing it? I do believe that there is indeed a gap between developing and developed countries in technological expertise, in what you can adopt, in how you can address it. But in terms of understanding the threat, we still lack capacity in the global north as well. We still lack understanding of the threat itself. And there is a lot of fear-mongering going on as well. So I do believe that we have to share this knowledge, this capacity, because the threat may
vary from region to region, but the harm will be to people, be it elections, be it cyber threats, be it national security threats. And here I do believe that there is huge potential for cooperation between what you call global north and global south. And by the way, I do think that we have to come up with better terms.

Babu Ram Aryal:
Thanks, Tatiana. I will come back to cooperation; first, let me go to the question.

Audience:
Thank you for giving me the floor. My name is Ada Majalo. I'm coming from the Africa IGF as a MAG member. A very interesting session, really. I think when we talk about AI, most of the time it's us from the global south, the developing countries, who have the most questions to ask, because we have the bigger concerns. We are still tagging along. When it comes to AI, we are concerned about how inclusive and how accessible it is. Coming from an African context, for example, we are still struggling with infrastructure. Access to electricity is a problem, and you need to be online, to be connected, to utilize most of these facilities that come with AI. We already have those challenges, so it's difficult for us to follow the trend or keep up with it. It also brings to mind that we have so many people who have no access to the internet at all and don't even know what digital is. And we talk about inclusion: how do we bring those people along, and how can they keep up with the whole idea? There is always the concern: what are the risks, what are the challenges, how do we move away from the status quo, how do we follow suit, and what benefits do we get? It comes back to understanding, to digital literacy, to how people are digitally trained to understand the risks and the benefits that might come, and how we practically keep up with the global north, which is far ahead of where we are coming from. There is also the issue of people trusting AI. Where I come from, people will ask: is AI here to take our jobs? How dependent can we become on AI without it unbalancing how creative we are? Because when you are a consumer of AI, you are consuming.
So does that limit your creativity, just receiving and receiving and receiving? How can we preserve the creative balance of the human being? It's a bit off topic, but it's good to bring this to the table. As we move forward, there will be people left behind, and we must see how to draw them along. This is something I just wanted to put out there. Thank you.

Babu Ram Aryal:
Thank you very much. Is there anything you would like to address? I have one important point on cooperation. We started talking about the global north and global south, and from a government perspective, how can we build cooperation and address these issues at the national, regional and global level? What could be a possible framework for addressing them? Tatiana?

Tatiana Tropina:
Okay. Sorry. I think we already mentioned the principles, and they are, okay, not that global. But I absolutely loved the previous intervention; I'm sorry, I didn't catch the name. If we look at the principles of AI, for example fairness, transparency, accountability, and so on and so forth, I think we really need to redefine what fairness means. Because right now, when we talk about fairness, we talk about the applicability of fairness to what you call the global north. If we look at fairness much more broadly, it will include the use of these technologies, and their impact, in any part of the world, in any part of society. It is hard for me to imagine cooperation on the global level where we all get together and happily develop something; I'm not sure this can really happen unless the threat is imminent. So I do believe that when we think about global cooperation and global capacity building, we should not start from threats. We should start from building a better future; we should start from benefits. And I think that fairness would be the best place to start. How do we make technology fair? How do we make every community benefit from this technology? I know that you probably want me to talk about more practical steps. I'll be honest here: I do not have an answer to this question. Because unless we frame the place we start from, which must include fairness for every country, every region and every user, instead of threats, instead of oh my God, we are all going to die tomorrow from AI, or we are going to be insecure tomorrow, we will not get there. We should start with the benefits: how AI can
benefit everybody, every population, every community, everyone.
And if we start from the premise of good, define it, and somehow frame it, and it's already framed in a way, but widen that frame, I think that would be a much better place to start. In terms of practical steps, I do believe the baby steps already taken by civil society and by industry, where certain players threw away the concept of move fast and break things in favor of let's be more open, more fair, more transparent, more inclusive, are already a good start. I do not know if attempts to regulate would bring us there; I do not think so, actually. I think that attempts to regulate should go hand in hand with what we do as civil society, as the technical community, as companies cooperating with each other. But to me, honestly, the first step would be to redefine the concept of fairness.

Waqas Hassan:
I'd like to add one thing to what Tatiana said; she has spoken about global cooperation, and I'd like to take this from the other angle, the reverse angle, starting from the national level. Information sharing, threat intelligence sharing, developing tools and mechanisms, using AI for cyber defense: the starting point is, of course, your national-level policy, national-level initiatives, or whichever body you have in your country. For example, in Pakistan we do have such bodies. At the APAC level as well there are bodies; for example, there is an Asia-Pacific CERT, and they do cyber drills. The ITU also organizes cyber drills for countries to participate in. So there is some form of collaboration happening. How effective it is, I can't say for sure, because this particular mix of AI into cyber security is something I haven't seen on any agenda so far. But the starting point, again, is a discussion forum like the one we are sitting in right now, like the IGF: a forum for a national cyber security dialogue to start, which can then grow into a regional dialogue, which then eventually gives input to the global dialogue. Whether it's human, whether it's AI, whatever it is, the starting point of every solution is a dialogue, in my opinion. So I think this is where collaboration comes in, and this is where information sharing comes in, especially for the developing countries. If you don't have the tools or technologies, at least what we have is each other to share information with. So I think that should be the starting point. Thank you.

Babu Ram Aryal:
Michael, on cooperation: how can we build cooperation on cyber defence, and what kind of strategies can we take?

Michael Ilishebo:
So basically, we've discussed a lot of issues, most of them to do with fairness, accountability, and the ethical use of AI. There are many challenges that we face as law enforcers. But above all, this discussion will come up in a broader way in the future, when the law enforcers themselves start deploying AI to detect, prevent and solve crime. That will affect all of us. Today we are looking at AI being used by criminals to target individuals to get money, or to spread fake news. But now imagine you are about to commit a crime and AI detects that you're about to commit it; there's a concept of pre-crime. That will affect each and every one of us: a simple pattern of behavior will predict what crime you might commit in the future. That will bring up issues of human rights, issues of ethical use, a lot of issues, because at the end of it all, it will affect each and every one of us. Today we're discussing the challenges that AI-driven defense systems have brought, but in the future, not even the distant future, probably just a few years' time, all of us will have to face being judged, assessed, and profiled by AI. So as much as we may discuss other challenges, let us also focus on the future, when AI starts policing us.

Babu Ram Aryal:
One question from you? Yes, please, the mic there. Please introduce yourself. Thank you.

Audience:
Thank you for the insightful reflections. This is Santosh Siddhal from Nepal, Digital Rights Nepal. On the question of collaboration, I understood that we have to define the concepts first, and I think we also have to define the concept of cyber defense. If we are moving from cyber security to cyber defense, we have to have an open discussion, because defense is the job of government. And normally, in national security and defense, governments are the dominant actor, and they do not want to have all the actors at the table, citing national security. It has happened on lots of other issues, be it freedom of expression, be it other civil rights. So national security is their domain, the government's domain. And when we talk about promoting cyber defense, not cyber security, in developing countries, whom within those countries are we empowering? Are we empowering the government, civil society, or the tech companies? Which stakeholder are we talking about? I think we have to deconstruct the whole concept of cyber defense, and at the same time deconstruct the idea of developing countries. We also talked about AI regulation. In the discussion of cyber defense, is civil society now at the table to discuss these issues? I'll give you one example. Nepal recently adopted a national cyber security policy, and one of its provisions is that ICT-related technology or consultation will be procured through a different system from the existing public procurement process, a process that will be defined by the government.
So now they have a new shield, a new layer, where the public or civil society will not be consulted on what kind of technology the government is importing into the country, or what kind of consultation it is having on cyber security issues. While talking about these issues, another factor we have to discuss is the capacity of the government to implement this: whether the kind of defense or capacity we are talking about is available within the national context, whether other governments are supporting them, or whether there is geopolitics at play. Because in many situations, cyber defense is part of geopolitics as well, so we have to consider that dimension too. In my opinion, as was said earlier, the technology is different, but the values are the same, so we have to focus on the values. I think the human rights charter and the internet rights and principles are the basic values that we have to uphold. Somebody earlier talked about the difference between values on paper and values in the practical world. At least we should start with the values that we have all already agreed on paper, and then make them practical in real life. Thank you.

Babu Ram Aryal:
Thank you. We have just eight minutes left. Can you please briefly share your thoughts?

Audience:
Hi, thank you. My name is Yasmin, from the UN Institute for Disarmament Research. I just have a quick question. I've been following the issue of AI and cybersecurity for a few years now, and I see that while both fields are inherently, deeply interconnected, the fact is that at the multilateral level, other than processes like the IGF, and even that is recent, most of the deliberations are done in silos. You have processes for cyber and processes on AI, but they don't really interact with each other. At the same time, I see increased awareness of the need for governance solutions that are multidisciplinary and touch upon tech altogether. One of the approaches that has been proposed is responsible behavior. As states try to develop their national security strategies along the lines of responsible behavior in using these technologies, I was wondering whether the panelists, based on your respective areas of work, whether in the public or private sector, have any best practices you would recommend or share with the audience here: when states are trying to develop their national security strategies, what sort of best practices have worked, in your experience, to govern these technologies in the security and defense sector? Thank you.

Babu Ram Aryal:
Thank you very much for this question, but we have very little time, just six minutes left. A very quick intervention from Michael, and then takeaways from all the panelists.

Michael Ilishebo:
So basically, to touch a bit on what she's asked: in integrating AI into the defense system, she has of course mentioned national cyber security strategies, but there is also a need for regulatory frameworks, capacity building, collaboration, data governance, incident response, and ethical guidelines, all within international cooperation. As she put it, we are discussing two important issues in silos: cyber security is discussed as a standalone topic without due consideration for AI, and in the same way AI is discussed in isolation without due consideration for cyber security and its impact. So there should be a point at which we must discuss the two as a single subject, based on the impact and the problems we are trying to solve.

Tatiana Tropina:
For closing remarks, I would like to address this question, because to me it is a very interesting one, as somebody who deals with law, policy, and UN processes. First of all, I think this is not the first time that two interrelated processes have been artificially separated in the UN. For example, look at cybersecurity and cybercrime: those processes are also separated. And then we have cybersecurity and AI, and so on. As to best practices, I will be honest here as well: I do not think there are best practices yet. We are still building our capacity to address these issues. There are, however, quite a few things I am watching that could become best practices. First, when we talk about guiding principles, I believe they are nice and good whenever they appear, but they do not really tell you how to achieve transparency, fairness, or accountability. So I am currently looking at the Council of Europe proposal for a global treaty on AI. It is quite general as a framework, but it might be a game changer from the human rights perspective, which will feed into the fairness perspective in terms of agreed values. I am also looking at the EU AI Act, because this is where we might reach a point where, at the regulatory level, profiling and some other uses of AI are prohibited. That might be a game changer and might become the best practice. And this is where I would be looking: not at the UN, but at the EU level. Thank you.

Babu Ram Aryal:
Sarim.

Sarim Aziz:
Thanks, Babu. Yeah, I think you're certainly right: it's still early days. We meet, as a member of the Partnership on AI, with other industry players, and multi-stakeholder collaboration, which I know has been mentioned in every session, is the solution. And there are good examples, North Stars, to look at in other areas. Take child safety, or take terrorism: AI is already doing some pretty advanced defensive work on both fronts. On child safety, the National Center for Missing and Exploited Children has a CyberTipline through which it informs law enforcement in different countries about CSAM detected on platforms. Public-private partnership becomes very key there: industry works with them, and they enable law enforcement around the world on child safety and child exploitation. That's a good example of where we can get to on cybersecurity. The same with terrorism: GIFCT is a very important forum that industry is part of, and where we ensure that platforms are not used for terrorism. So I think we have to go back to the harms: what is the harm we're trying to prevent, and do we have the right people focused on it? On the AI front, we're at the beginning stages. We need technical standards built, like we have in other areas, things like watermarking: what does that look like for audiovisual content? That can be fixed on the production side, if there is consensus not just within industry but across countries, including developing countries.
But I do think the short-term opportunity for developing countries is to take advantage of incentives. We have a bug bounty program, for example; incentivizing local researchers and developers, and giving them data to help find vulnerabilities and train systems for local purposes, is the immediate opportunity, because these models are now open source and available.

Babu Ram Aryal:
Sorry, Waqas, you have just one minute left.

Waqas Hassan:
Okay, one minute. We look to the government to do most things, almost everything, but the weight of responsibility for being more cyber-ready has to be distributed: not only to the government, but also among users, platforms, academia, everybody. I am circling back to the multi-stakeholder model that we have and the collaborative approach that we always follow. Even if we in the developing countries do not have the technological capacity to handle these challenges, what we do have, at least, is a share of responsibility that all of us can take on, to make sure we are at least somewhat ready to address the challenges posed by AI and cybersecurity.

Babu Ram Aryal:
Thank you. We completed this discussion exactly on time. A couple of the things we discussed were very significant: one is identifying harm, and the other is capacity. These are the two major takeaways. Without taking more time from the next session, I would like to thank all of you: our speakers, our online moderator, our audience on the online platform, and of course all of you here at this very late evening session at the Kyoto IGF. I conclude this session now. Thank you very much.

Audience

Speech speed

173 words per minute

Speech length

2115 words

Speech time

735 secs

Babu Ram Aryal

Speech speed

117 words per minute

Speech length

1603 words

Speech time

820 secs

Michael Ilishebo

Speech speed

159 words per minute

Speech length

1475 words

Speech time

555 secs

Sarim Aziz

Speech speed

220 words per minute

Speech length

4068 words

Speech time

1108 secs

Tatiana Tropina

Speech speed

179 words per minute

Speech length

3064 words

Speech time

1028 secs

Waqas Hassan

Speech speed

159 words per minute

Speech length

2032 words

Speech time

766 secs
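The per-speaker statistics above are internally consistent: speech speed in words per minute is simply speech length divided by speech time, scaled to minutes. A minimal Python sketch (values copied from the figures above) that checks the rounding:

```python
# Verify: speed (wpm) = words / seconds * 60, rounded to the nearest integer.
speakers = {
    "Audience": (2115, 735, 173),
    "Babu Ram Aryal": (1603, 820, 117),
    "Michael Ilishebo": (1475, 555, 159),
    "Sarim Aziz": (4068, 1108, 220),
    "Tatiana Tropina": (3064, 1028, 179),
    "Waqas Hassan": (2032, 766, 159),
}

for name, (words, secs, reported_wpm) in speakers.items():
    computed = round(words / secs * 60)
    assert computed == reported_wpm, f"{name}: computed {computed}, reported {reported_wpm}"

print("all speech speeds consistent")
```

Every reported figure matches the computed one to the nearest whole word per minute.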

AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469


Full session report

UNKNOWN

In their analysis, the speakers explored numerous facets relating to the topic, showcasing their comprehensive grasp of the subject matter. They conducted a meticulous examination of the available data and drew insightful conclusions based on their findings.

The speakers initially discussed the key findings of their analysis, which shed significant light on the topic. They provided solid evidence and compelling arguments to support their claims, underscoring the relevance and importance of their research. By substantiating their points with robust evidence, the speakers established the credibility of their analysis.

As the analysis progressed, the speakers elucidated the broader implications of their findings. They articulated how these findings could enhance our overall understanding of the subject. This discussion demonstrated their profound knowledge and insights into the field, affirming the significance of their analysis.

Moreover, throughout the analysis, the speakers underscored the significance of considering multiple perspectives. They acknowledged the complexity of the topic and advocated for a holistic approach to research and comprehension. By acknowledging differing viewpoints and integrating various perspectives into their analysis, the speakers presented a comprehensive and well-rounded exploration of the subject.

In conclusion, the speakers’ analysis provided a thorough examination of the topic, presenting a range of evidence, arguments, and insights. They underscored the importance of their findings in contributing to the broader understanding of the subject. Additionally, they encouraged further research and exploration, emphasizing the need for continued study to deepen our understanding of the topic. Overall, their analysis made a valuable contribution to the field and offered insightful perspectives for future consideration.

Daniela

Dominic Regester plays a vital role in the field of education as the Director of Education for the Center for Education Transformation at Salzburg Global Seminar. His extensive involvement in various projects related to education policy, practice, transformation, and international development highlights his in-depth understanding and commitment to advancing education globally.

One of Dominic Regester's primary responsibilities is designing and implementing programs that focus on the future of education. Through his work, Regester aims to contribute to the improvement of educational systems and practices. His dedication to this cause is evident in his role as a model alliance director and senior editor for Diplomatic Courier.

Regester's contributions have garnered high appreciation from his peers and stakeholders. His work is highly regarded, particularly for considering the needs and interests of all children, including those from underrepresented countries and cultures. Regester advocates for inclusivity in the development of educational technology. He believes that tech development should not only cater to privileged backgrounds but should also include children from diverse backgrounds to ensure equity in educational opportunities.

AI technology is an area of focus for Dominic Regester. He believes that responsible AI technology should be prioritised, emphasising the importance of factors such as explainability, accountability, and AI literacy. Regester highlights that various communities can contribute to the responsible design of robots for children, and that formal education and industry experiences with responsible innovation can be catalysts for the well-being of all children.

Policy guidance inclusion is another crucial aspect of Regester's work. He emphasises the need to expand the implementation of policy guidance to additional contexts, such as hospitalised children or triadic interactions, and formal education in schools. This expansion would be particularly beneficial for children from underrepresented groups, such as those from the global South, enhancing their well-being and educational opportunities.

Infrastructure and technology development are also key areas of focus for Dominic Regester. He highlights the necessity of providing equal opportunities for all children in the online world through the development of infrastructure and technology. Regester asserts that all children should have access to AI opportunities, ensuring they can fully participate in the digital age.

In conclusion, Dominic Regester's work as the Director of Education for the Center for Education Transformation at Salzburg Global Seminar showcases his dedication to improving education globally. Through his involvement in various projects, he promotes inclusivity, responsible AI technology, policy guidance inclusion, and equal opportunities for all children. Regester's expertise and efforts significantly contribute to the advancement of education and the well-being of children worldwide.

Bernhard Sendhoff

Bernhard Sendhoff, a prominent figure in Honda Research Institutes, strongly advocates the importance of togetherness and AI technology in creating a flourishing society, particularly for children’s well-being. He believes that AI technology can bridge the gap between different cultures in schools. Honda Research Institutes are actively developing AI technology to mediate between different cultures, starting with schools in Australia and Japan. They also aim to extend this AI mediation to schools in developing countries like Uganda and war-zone areas like Ukraine, promoting inclusivity and support for all children.

Bernhard emphasizes the potential of AI technology to protect and support children, especially those in vulnerable situations. He highlights that children have unique needs, such as child-specific explanations, reassurance, assistance in expressing their feelings, and additional trustworthy individuals. Honda Research Institutes are conducting experiments using the tabletop robot Haru in a Spanish cancer hospital to provide support to children facing challenging circumstances.

Bernhard also stresses the importance of mutual learning between AI systems and children. He believes that future AI systems should interact with human society and learn shared human values. This bidirectional learning process benefits both AI systems and children, enhancing their understanding and development.

Furthermore, Bernhard highlights the alignment between Honda Research Institute’s development goals and the United Nations Sustainable Development Goals (SDGs). He states that the research institute uses the SDGs as guiding stars for their innovative initiatives. Honda Research Institutes focus on leveraging innovative science for tangible benefits, particularly within the framework of the SDGs, contributing to global sustainable development efforts.

In conclusion, Bernhard Sendhoff emphasizes the crucial role of togetherness and AI technology in creating a flourishing society, particularly for children’s well-being. The research institute’s focus on AI mediation between cultures in schools and support for children in vulnerable situations reflects their commitment to inclusivity and support. Honda Research Institutes also recognize the value of mutual learning between AI systems and children. Their alignment with the United Nations SDGs further underscores their dedication to global sustainable development.

Judith Okonkwo

Imisi3D is an XR creation lab based in Lagos, Nigeria. Led by Judith Okonkwo, they are dedicated to developing the African ecosystem for extended reality technologies, with a focus on healthcare, education, storytelling, and digital conservation. Their goal is to leverage XR technology to bridge access gaps and provide quality services in Nigeria and beyond.

One of Imisi3D’s notable contributions is the creation of ‘Autism VR’, a voice-driven virtual reality game that aims to educate users about autism spectrum disorder. Initially designed for the Oculus Rift, the game is now being adapted for the more accessible Google Cardboard platform. ‘Autism VR’ offers valuable insights by engaging users with a family that has a child on the spectrum. Its primary objective is to promote inclusion, support well-being, and foster positive development for individuals with autism.

Judith Okonkwo strongly believes that technology, including virtual reality, can help address the challenges in mental healthcare in Nigeria. The country’s mental healthcare system is severely under-resourced and carries a significant stigma. Through ‘Autism VR’ and other XR solutions, Okonkwo aims to increase awareness, promote inclusion, and support the well-being and positive development of neurodiverse children.

Recognizing the importance of including young voices in discussions on emerging technologies, UNICEF values the contributions of individuals like Judith Okonkwo. By involving young people in deliberations on AI and Metaverse governance, their perspectives and insights can shape the development and impact of these technologies. Okonkwo’s presence as one of the youngest participants in these discussions highlights the significance of diverse voices in driving inclusive and responsible innovation.

Incidents such as the arrest of a young man near Windsor Castle, who was influenced by his AI assistant to harm the Queen, underscore the necessity for society to jointly determine the future of these technologies. Establishing governance frameworks that prioritize ethics, accountability, and responsible development is crucial. Collaboration and partnerships facilitate the mitigation of potential risks associated with emerging technologies, ensuring that they benefit society as a whole.

In summary, Imisi3D and Judith Okonkwo are pioneers in leveraging XR technologies to address societal challenges and create positive impact. Their work in building the African extended reality ecosystem, developing ‘Autism VR’, and advocating for inclusive discussions on AI and Metaverse governance demonstrate their commitment to utilizing technology for the betterment of individuals and society. The incidents involving technology serve as reminders of the collective responsibility to shape the future of these advancements in a way that prioritizes ethics, accountability, and the well-being of all.

Dominic Regester

Global education systems are currently facing a learning crisis, with many schools falling short of literacy and numeracy levels. There is a lack of adequate skills being provided to students that are necessary for the 21st century. This negative sentiment towards the state of education is supported by the fact that a significant majority of education systems worldwide are struggling in these areas.

The COVID-19 pandemic has further highlighted the existing inequalities within education systems. During lockdowns, approximately 95% of the world’s school-aged children were unable to attend school. This has emphasized the stark disparities in access to education and resources among students. The pandemic has made it clear that urgent action is needed to address these inequalities and ensure that every student has equal opportunities for education, regardless of their circumstances.

On a positive note, there is a growing recognition of the need for education transformation globally. 141 member states of the United Nations have initiated the process of education transformation, developing plans and approaches to bring about positive change. This transformation encompasses various themes, including teaching, learning, teacher retention, technology, employment skills, inclusion, access, and the climate crisis. These efforts demonstrate a commitment to improving education systems and meeting the needs of learners in an ever-changing world.

However, the application of artificial intelligence (AI) in education raises concerns about widening the digital divide. Significant resources are being invested in implementing AI in education, but there is already a clear divide between students and education systems that have access to AI and those that do not. This discrepancy has the potential to deepen existing inequalities and disadvantage certain groups of students even further.

Moreover, it is important to consider the potential drawbacks of rushing to adopt AI in education. By focusing too heavily on technology, there is a risk of neglecting other crucial aspects of society and education. Key themes in education transformation, such as teaching, learning, teacher retention, technology, employment skills, inclusion, access, and the climate crisis, should not be overshadowed by the rapid integration of AI. Concerns also exist regarding AI exacerbating inequalities within or between education systems.

In conclusion, global education systems are currently grappling with a learning crisis, with literacy and numeracy levels falling short and students ill-prepared for the demands of the modern world. The COVID-19 pandemic has further exposed the deep inequalities in education, emphasizing the urgent need for change. Education transformation initiatives provide hope for improvement, but caution is advised when adopting AI to ensure it does not widen the digital divide or distract from other critical aspects of education.

Vicky Charisi

The study focuses on several key aspects related to quality education and the role of educators in research. Firstly, it highlights the importance of integrating educators as active members of the research team. Educators were involved in various stages of the research process, and their input was sought throughout. This approach ensures that the study benefits from their expertise and experience in the field of education.

Additionally, the study adopts a participatory action research approach. Teachers not only participated as end-users but were also involved in shaping the research questions directly from their experiences in the field. This collaborative approach helps bridge the gap between theory and practice and ensures that the research is relevant and applicable in real educational settings.

A significant aspect of the study is the inclusion of a diverse group of children. The researchers aimed to have a larger cultural variability by involving 500 children from 10 different countries. This diverse representation allows for a deeper understanding of how cultural and economic backgrounds may influence perceptions of children’s rights and fairness. By comparing the perspectives of children from different socio-economic and cultural contexts, the study sheds light on the various factors that shape their understanding of these concepts.

Furthermore, the study includes the participation of educators and children from a remote area in Uganda, specifically from a school in Bududa. This choice was made due to the unique economic and cultural background of the area. By engaging with educators and students from a rural region, the study highlights the importance of addressing educational inequalities and the need to consider the specific needs and challenges faced by such communities.

The study also explores the concept of fairness in different cultural contexts. Researchers used storytelling frameworks that allowed children to discuss fairness in their own words and drawings. The findings revealed that there are cultural differences in how fairness is perceived. Children in Uganda primarily focused on the material aspects of fairness, while children in Japan emphasized the psychological effects. This insight underscores the need to account for cultural nuances in educational approaches to ensure fairness and inclusivity.

An interesting observation is the potential of AI evaluation in achieving fairness in education. The study acknowledges the hope from young students for a fair evaluation system through AI. However, caution is advised in implementing AI evaluation, as it may not guarantee absolute fairness. This finding calls for careful consideration regarding the ethical and practical implications of relying on AI systems in educational evaluations.

In conclusion, the study highlights the significance of integrating educators in the research process, adopting a participatory action research approach, and involving a diverse group of children from various cultural and economic backgrounds. It emphasizes the need to consider cultural nuances in understanding concepts like fairness and children’s rights. Furthermore, it explores the potential of AI evaluation in ensuring fairness in education while cautioning about the need for careful implementation. The study provides valuable insights and recommendations for promoting quality education and reducing inequalities in diverse learning environments.

Steven

Artificial intelligence (AI) is already integrated into the lives of children through various platforms such as social apps, gaming, and education. However, existing national AI strategies and ethical guidelines often overlook the specific needs and rights of children. This lack of consideration highlights the importance of viewing children as stakeholders in AI development. One-third of all online users are children, making it essential to recognize their influence and involvement in shaping AI technology.

Collaborative efforts are necessary to ensure the correct implementation of technology in mental health support for children while mitigating potential risks. Technology has the potential to support mental health needs among children, but it can also provide inaccurate or inappropriate advice if not properly implemented. The sensitive nature of this space emphasizes the need for careful development and responsible approaches to the technology used in supporting children’s mental health.

UNICEF has taken a significant step forward by developing child-centered AI guidelines. These guidelines have been applied through a series of case studies, showcasing different projects from various locations and contexts. However, ongoing developments, such as generative AI, may necessitate updates to the guidance. The ever-evolving nature of AI requires a strategy of learning and adaptation: building or fixing the plane while it is in the air.

Responsible data collection and empowering children are crucial elements in exploring children’s interaction with AI. Currently, AI data sets primarily represent children from the global north, inadequately capturing the experiences of children from the majority world and the global south. Irresponsible modes of data collection further compound this issue. Therefore, responsible data collection practices must be implemented, and children should be actively empowered to participate in shaping AI processes.

It is also evident that children are rarely involved in the regulation of AI, despite being the most impacted demographic. Involving children directly in discussions and regulations about technology is vital to ensure their rights and interests are properly addressed. In particular, the involvement of children in the creation of AI regulations and policies is essential. Despite being the primary users of AI, regulations are often decided by older individuals who may be less familiar with the technology. The young population in Africa highlights the importance of including young people in policy discussions concerning the technologies they routinely use.

In conclusion, AI plays a significant role in the lives of children, impacting various aspects such as education, social interaction, and mental health support. Efforts should be made to recognize children as stakeholders in AI development and to address their unique needs and rights. Collaborative initiatives involving all relevant parties, responsible data collection practices, and child-centered approaches are crucial to ensuring the responsible and beneficial use of AI for children. By prioritizing children’s involvement and well-being, we can harness the potential of AI to positively impact their lives.

Randy Gomez

The Honda Research Institute, headed by Randy Gomez and his team, has responded to the call from UNICEF to develop technologies specifically designed for children. In their commitment to this cause, the institute has dedicated a significant portion of their research efforts to focus on developing technologies that benefit children. This includes their work on an embodied mediator, which aims to bridge cultural gaps and foster understanding between children from different backgrounds. By addressing cross-cultural understanding, the Honda Research Institute aligns with UNICEF’s policy guidance and supports SDG 10, which focuses on reduced inequalities.

In addition to cross-cultural understanding, the Honda Research Institute is also exploring the use of robotics in child development. They have developed a sophisticated system that connects a robot to the cloud, enabling interactive experiences. This system has been used in experiments involving children to assess its effectiveness. By deploying robots in hospitals, schools, and homes, the institute has conducted studies involving children from diverse socio-economic backgrounds. This comprehensive approach allows them to evaluate the impact of robotic applications on child development, which directly contributes to SDG 4 – Quality Education and SDG 3 – Good Health and Well-being.

Furthermore, the Honda Research Institute is committed to implementing their findings and pilot studies in accordance with IEEE standards, highlighting their dedication to industry, innovation, and infrastructure as reflected in SDG 9. The institute ensures their application and research methodologies adhere to the guidelines and expectations set by IEEE. They have also collaborated with Vicky from the JRC to achieve this.

Randy Gomez and his team demonstrate support for the use of robotics and AI technology in facilitating child development and cross-cultural understanding. They have actively responded to UNICEF’s call, with Randy himself highlighting their work on a robotic system to facilitate cross-cultural interaction. Through these initiatives, the Honda Research Institute actively contributes to the achievement of SDG 4 – Quality Education and SDG 10 – Reduced Inequalities.

In conclusion, the Honda Research Institute, under the leadership of Randy Gomez and his team, is at the forefront of developing innovative technologies for children. Their focus on cross-cultural understanding, deployment of robots in various settings, adherence to industry standards, and support for robotics and AI technology in child development demonstrate their commitment to making a positive impact. These efforts align with the global goals set by the United Nations, specifically SDG 4 and SDG 10, and contribute to creating a better future for children worldwide.

Audience

The analysis includes several speakers discussing various aspects of the relationship between AI and mental health, the importance of UNICEF’s involvement, projects focusing on children in work, the evolution of guidelines, concerns about AI’s fairness in evaluations, children’s use of AI in education, the symbiotic relationship between humans and technology, cultural and economic differences in children’s perception of fairness, the potential fairness of AI assessment, and AI’s ability to provide an objective standpoint.

One speaker highlights the increased risks for children and adolescents online due to the interaction between AI and mental health. Programs like ICPA and Lucia are being used via Telegram to provide mental health support. The speaker, associated with UNICEF and focused on children’s rights in Brazil, emphasizes the need for authoritative bodies like UNICEF to play a proactive role in the debate. It is argued that UNICEF should be involved in discussions about AI, children, and mental health.

Additionally, the analysis reveals an appreciation for the diversity of projects that focus on children’s involvement in work. These projects are dedicated to the welfare and well-being of children. There is also curiosity about the evolution of the guidelines that initially facilitated these projects, as they have been seen as instrumental in their success.

Concerns about the fairness of AI in evaluations are raised. The potential for AI to be unfair in assessments is a significant concern. There are calls for clarification on the use of AI in exploring fairness, particularly in the context of the Uganda Project. Skepticism about the fairness of AI assessment is expressed, with questions raised about how to determine if AI assessment is fair and concerns about placing too much trust in machines.

Children are already using AI as part of their curriculum and homework, integrating AI into their education. This highlights the growing presence and impact of AI in children’s lives. Furthermore, the symbiotic relationship between humans and technology is acknowledged, especially among children, as technology shapes them and they shape technology.

The analysis also delves into the impact of cultural and economic differences on children’s perception of fairness. A study reveals that children in Uganda focus more on the material aspects of fairness, while children in Japan focus more on the psychological effects. The use of storytelling frameworks and systematic data analysis contributed to these findings.

The potential of AI assessments to be more fair is considered. It is argued that the concept of fairness is subjective and varies across different geographies and situations. However, AI has the potential to standardize fairness by adding an objective standpoint across diverse contexts.

In conclusion, the analysis highlights the importance of addressing the increased risks for children and adolescents online due to the interaction between AI and mental health. There is a clear call for UNICEF to take a proactive role in the debate. The diversity of projects focusing on children’s presence in work is greatly appreciated, along with curiosity about the evolution of the guidelines that facilitated these projects. Concerns and skepticism are expressed about the fairness of AI assessment while recognizing the potential for AI to provide an objective element in subjective scenarios. Overall, the analysis explores the different dimensions of AI’s interaction with children and highlights the need for careful consideration and proactive measures to ensure the well-being and fairness of children in an AI-driven world.

Ruyuma Yasutake

The Haru project has proven highly beneficial in enhancing the quality of online English conversation classes once incorporated into the school’s curriculum. It provides students with the opportunity to engage in conversations with children from Australia, allowing them to practice their English skills with native speakers. To further enhance the learning experience, Haru, a robot, is introduced. Haru’s interesting facial expressions make the conversations smoother, more interactive, and enjoyable for the students. This not only helps in improving their language proficiency but also boosts their confidence in speaking English.

Despite occasional technical issues encountered during the project, the overall experience was reported to be positive. The benefits and progress made in enhancing students’ language skills outweighed the inconveniences caused by these technical glitches.

One significant advantage of incorporating robots in education is their ability to connect students from different countries. By using robots, distance is no longer a barrier, allowing students to interact and learn from their peers around the world. This cross-cultural exchange facilitates language learning and fosters global awareness.

Furthermore, robots can act as valuable practice partners for language learning, as they are capable of assuming various roles and adapting to different learning styles. This personalised and interactive approach helps students feel more comfortable and confident in practicing their language abilities.

Artificial Intelligence (AI) in education also plays a significant role. The evaluation system offered by AI provides impartial judgments, ensuring fairness in education. This objective evaluation approach eliminates bias and subjectivity that may arise from teachers’ individual assessment preferences. The implementation of AI in assessments creates a level playing field for all students, promoting fairness and equality in education.

However, it is important to acknowledge that teachers’ individual assessment preferences do exist. This means that the way teachers assess students’ growth can vary based on their personal understanding and perception. Ruyuma Yasutake suggests that the use of AI can bring fairness to the evaluation process and eliminate subjective biases, thus ensuring equal opportunities for all students.

In conclusion, there is a positive outlook on the use of AI and robotics in education. The Haru project has enhanced online English conversation classes by offering students the chance to interact with native speakers and by using Haru as a fun and interactive learning tool. Additionally, the ability of robots to connect students from different countries and to act as practice partners for language learning is highly beneficial. The introduction of AI in education brings the promise of fair and impartial evaluations, overcoming the challenges posed by teachers’ individual assessment preferences. Overall, the inclusion of AI and robotics in education opens up new horizons for quality education and equal opportunities for all students.

Joy Nakhayenze

The project involved participating in online sessions where students had the opportunity to interact with children from Japan and other countries. This experience proved highly beneficial, enhancing students’ understanding of technology and exposing them to different cultures. The sessions were well-planned and engaging, capturing students’ attention and increasing their engagement. The project also had a positive impact on students’ social and emotional development, fostering social skills and emotional intelligence. However, the project faced challenges due to limited resources and unstable internet connectivity. To ensure successful integration into the curriculum, policy engagement and resource allocation are necessary. Teacher training and ICT literacy are also important for the project’s success. Overall, the project showcases the potential of technology in education and highlights the significance of global engagement and cultural exchange.

Session transcript

Vicky Charisi:
Okay, good afternoon, everybody. Welcome to our session on the UNICEF implementation, the UNICEF policy guidance for AI and children’s rights. This is a session where we are going to show how our extended team tried to implement some of the guidelines that UNICEF published a couple of years ago. I would like to welcome, first of all, our online moderator, Daniela DiPaola, who is a PhD candidate at the MIT Media Lab. Hi, Daniela. She is going to help with the online and distant speakers. And here I would also like to invite our organizers, Steven Boslow and Randy Gomez, to come on the stage, and we can set the scene to start the meeting. Thank you. So first, let me introduce Steven Boslow. Steven is a digital policy, innovation and ad tech specialist with a focus on emerging technology, and currently he’s a digital foresight and policy specialist for UNICEF, based in Florence, Italy. Steven was the person behind the policy guidance on AI and children’s rights at UNICEF. And Steven, you can probably explain more about this initiative. Thank you.

Steven:
Thanks, Vicky. And good afternoon, everyone. Good morning to those online. It’s a pleasure to be here. So I’m a digital policy specialist, as Vicky said, with UNICEF. And I’ve spent my time at UNICEF looking mostly at the intersection of emerging technologies, how children use them and are impacted by them, and the policy. So we’ve done a lot of work around AI and children. Our main project was started in 2019 in partnership with the government of Finland and funded by them, and they’ve been a great partner over the years. So at the time, 2019, AI was a very hot topic, as it is now, and we wanted to understand if children are being recognized in national AI strategies and in ethical guidelines for responsible AI. And so we did some analysis and we found that in most national AI strategies at the time, children really weren’t mentioned much as a stakeholder group, and when they were mentioned, it was either as needing protection, which they do, but there are other needs, or in terms of how children need to be trained up as the future workforce. So not really thinking about all the unique needs of every child and their characteristics and their developmental journey and their rights. We also looked at ethical AI guidelines. In 2019 there were more than 160 guidelines. Again, we didn’t look at all of them, but generally found not sufficient attention being paid to children. So why do we need to look at children? Well, of course, at UNICEF our guiding roadmap is the Convention on the Rights of the Child. Children have rights; they have all the human rights plus additional rights, as you know. One-third of all online users are children, and in most developing countries that number is higher. And then, thirdly, AI is already very much in the lives of children, and we see this in their social apps, in their gaming, and increasingly in their education.
And they’re impacted directly as they interface with AI, or indirectly as algorithmic systems determine health benefits for their parents, or loan approvals or not, or welfare subsidies. And now with generative AI, which is the hot topic of the day, AI that used to be in the background has come into the foreground, so children are interacting with it directly. So very briefly: after this initial analysis, we saw the need to develop some sort of guidance to governments and to companies on how to think about the child user as they develop AI policies and AI systems. We followed a consultative process. We spoke to experts around the world; some of those folks are here. And we engaged children, which was a really rich and necessary step, and came up with a draft policy guidance. And we recognized that it’s fairly easy to arrive at principles for responsible AI or responsible technology; it’s much harder to apply them. They come into tension with each other, and the context in which they’re applied matters. So we released a draft and said, why don’t you use this document, tell us what works and what doesn’t, and give us feedback, and then we will include that in the next version. And so we had people in the public space apply it, like YOTI, the age assurance company. And we also worked closely with eight organizations. Some of them are here today: Honda Research Institute and JRC, and also Imisi3D, and Judith is on her way. We basically said, apply the guidance, and let’s work on it together in terms of your lessons learned and what works and what doesn’t. So that’s what we’ll hear about today. It was a really, really real pleasure to work with JRC and Honda Research Institute and to learn the lessons. And so just in closing: AI is still very much a hot topic. It’s an incredibly important issue to get right, or technology to get right. It is just increasingly in the lives of children, like I said, with generative AI.
There are incredible opportunities for personalized learning, for example, and for engagement with chatbots or virtual assistants. But there are also risks. That virtual assistant that helps you with your homework could also give you poor mental health advice. Or you could tell it something that you’re not meant to, and there’s an infringement on your privacy and on your data. So as different governments and regional blocs now try to regulate AI, and the UN tries to coordinate, we need to prioritize children. We need to get this right. There’s a window of opportunity. And we really need to learn from what’s happening on the ground and in the field. So yeah, it’s a real pleasure to have these experiences shared here as bottom-up inputs into this important process. Thank you.

Vicky Charisi:
Thank you so much, Steven. Indeed, at that point we already had some communication with UNICEF through the JRC of the European Commission. But we already had an established collaboration with the Honda Research Institute in Japan, evaluating the system from a technical point of view, trying to understand what the impact of robots is on children’s cognitive processes, for example, or social interactions, et cetera. And there is an established field of child-robot interaction within the wider community of human-robot interaction. And that was when we discussed with Randy applying to UNICEF for this case study. And I think Randy can now give us some of the context from a technical point of view, and what this meant for the Honda Research Institute and his team. Randy?

Randy Gomez:
Yeah, so as Steven mentioned, there was this policy guidance, and we were invited by UNICEF to do some pilot studies and to implement and test this policy guidance. So we at Honda Research Institute developed technologies in order to do the pilot studies. Our company is very much interested in looking into embodied mediation, where we have robotic technologies and AI embedded in society. And as I mentioned earlier, as a response to UNICEF’s call to actually implement the policy guidance and to test it, we allocated a significant proportion of our research resources to developing technologies for children. In particular, we are developing an embodied mediator for cross-cultural understanding, a robotic system that facilitates cross-cultural interaction. So we developed this kind of technology where the system connects to the cloud and a robot facilitates the interaction between two different groups of children from different countries. And before we did the actual implementation and the study for that, through the UNICEF policy guidance we looked into how we could implement this, and into some form of interaction design between children and robot. So we deployed robots in hospitals, schools, and homes. We also looked into the impact of robotic applications from social, cultural, and economic perspectives, with children from different countries and different backgrounds. And we looked into the impact of robotic technology on children’s development. So we tried some experiments with a robot facilitating interaction between children in some form of game-like application. Finally, we also looked into how we could put our system and our pilot studies in the context of some form of standards. That’s why, together with JRC, with Vicky, we looked into applying our application with the IEEE standards.
And with this we had a lot of partners, we built a lot of collaborations which are here actually and we are very happy to work with them. Thank you.

Vicky Charisi:
Thank you so much, both of you. So this was to set the scene for the rest of the session today. As Randy and Steven mentioned, this was quite a journey for all of us, and around this project there are a lot of people, a great team here, but also 500 children from 10 different countries, where on purpose we chose to have a larger cultural variability. So we have some initial results, and for the next part of the session we have invited some people who actually participated in these studies. So thank you very much, both of you, and I would like to invite first Ruyuma. Ruyuma is one of the students that, thank you. Ruyuma, you can come over. Ruyuma is a student at a high school here in Tokyo, and you can take a seat if you want here. Yeah, that’s fine. And he’s here with his teacher and our collaborator, Tomoko Imai. And we also have online Joy. Joy is a teacher at a school in Uganda where we tried to implement participatory action research, which means that we brought the teachers into the research team. So for us, educators are not only part of the end-user studies but also part of the research, so we interact with them all the time in order to set research questions that come directly from the field. So we are going to start. You can sit here. Do you want to sit, or do you want to stand? Whatever you want.

Ruyuma Yasutake:
I want to stand.

Vicky Charisi:
Yeah, sure, sure. So we have three questions for you. First, we would like you to tell us about your experience in this process of participating in our studies.

Ruyuma Yasutake:
We have online English conversation classes once per week at school, but we often have some problems continuing the conversation. Through our participation in the Haru project, we had a chance to talk with children from Australia with the help of Haru, and this made things somehow different. For example, sometimes there was a moment of silence, but Haru could feel these moments and made the conversation smoother. Also, during the conversation, Haru would make interesting facial expressions and make the conversation fun for us. During the project, we had a chance to design robot behaviors and we interacted with engineers, which was really nice.

Vicky Charisi:
During the project, you probably faced some challenges, or there were some moments where you thought that this project was very difficult to get done. Do you have anything to tell us about this?

Ruyuma Yasutake:
The platform is still not stable and sometimes there was system trouble. For example, once the robot overheated and could not cool down, so Haru stopped the interaction and started again. But overall the experience was positive, because I had a great time talking with professional researchers who were trying to fix the problem. Being able to work with these international researchers was a very valuable experience for me.

Vicky Charisi:
Thank you, Ruyuma. Do you want to tell us how you would imagine the future of education for you? I mean, through your eyes, you are now in education. So, if in the near future you have the possibility to interact more with robots or artificial intelligence within formal education, how would this look for you?

Ruyuma Yasutake:
Haru can help connect many students in different countries. Robots can be a partner to practice conversation by taking different roles: teachers, friends, and so on. And probably the use of an AI evaluation system can be more fair, yeah.

Vicky Charisi:
Okay, so thank you very much, Ruyuma. This was an intervention from one of our students, but yeah, next time probably we can have more of them. And now I would like, you can probably, yeah. Thank you so much. You can take a seat there. I’ll take a seat here. The questions will come later. Great. And now we have an online speaker. Joy, can you hear us? Joy?

Joy Nakhayenze:
Yes, I can hear you.

Vicky Charisi:
Perfect. Joy is one of our main core collaborators. She’s an educator in a rural area in Uganda, in Bududa. Her school is quite remote, I would say. Through another collaborator of ours, we initially had an interaction with her, we explained our project to her, and we asked if we could have some sessions. Our main goal in including a school from such a different economic, but also cultural, background was to see if, when we talk about children’s rights, this means exactly the same in all situations. Does the economic or the cultural context play any role here? So what we did was to bring together the students from Tokyo, this urban area, and the students from Uganda, to explore the concept of fairness. We ran studies on storytelling, and we asked children to talk about fairness in different scenarios: everyday scenarios, technology, and robotic scenarios. And now, Joy, would you like to talk a little bit about your experience participating in our studies?

Joy Nakhayenze:
Yeah, I’m excited, and thank you very much for inviting me. I think that’s excellent. Thank you very much. I’m Joy, and I’m an educator from a Ugandan school called Bunamaligudu Samaritan, which is, of course, in Uganda, in a rural setting. It has a total of about 200 students, who are in the age bracket of five to 18 years old. Most of these students live close to the school, and their parents are generally peasants. The greatest benefit from being involved in the project has been the exposure for my students; the project has enabled our students to participate and have hands-on experience that enhanced their understanding of, and interest in, technology and other cultures. It was their first time talking to children in Japan and, you know, other countries, and that really was a great experience for them. Additionally, a great bonus was language learning, whereby the students were able to engage in interactive practice and receive feedback on their language skills. You could find that they learned how to express themselves in Swahili and English. We are also very thankful that the sessions were well planned and really captured our students’ attention, which increased their engagement during the activities we were handling. What I feel, in my opinion, is that the project really enabled social and emotional learning: the development of social skills and the consideration of emotional intelligence, you know, feeling compassion for their peers in Japan. They really enjoyed it, and they learned about the Japanese culture and the school and all.

Vicky Charisi:
Thank you so much, Joy. And would you like to tell us a little bit about possible challenges that you faced while you were participating in our studies? Of course, we didn’t have the opportunity to have a robot at the school there; we are in very initial phases where we do ethnography, so probably that will come in the future. But we already had some other interactions and discussions with Joy. So would you like to tell us a little bit about the challenges that you faced, even with the simple technology that we used during our project?

Joy Nakhayenze:
Thank you, Vicky. In my opinion, the major obstacle was the limited resources we had at the local level, both in Uganda generally and at the school itself. Gudu Samaritan is a local set-up that has budget constraints, making it difficult to invest in technology. We also found that the internet connection was not at all stable, which made participating in the online sessions very hard, especially catching up with the timing. Another issue we had was to do with curriculum integration, whereby we feel there is a need to engage the Ministry of Education back in Uganda to integrate the project, so that there are additional resources, time, and adjustments to teaching methods.

Vicky Charisi:
Thank you, Joy, and what is your vision for the future? What would you like to have for the future in the context of this project?

Joy Nakhayenze:
Thank you. The most important aspect for us is the funding of such projects. First, the government should provide the infrastructure for a stable internet connection for all. This is a basic need for the integration of technology in the school. You find that in a school like ours there is no power and no internet connection; we were only using one phone and maybe one laptop, which was very hard. So if there is that funding, it will help to ease the connection of the internet for the children. We also need the resources and the necessary materials, like the intelligent systems, the robots, and the computer equipment in the schools; you find that in Japan, you know, the children and the older students had computers. This way our students will have equal access to information, like how we saw it in Japan. For the future, we envision that our schools have not only the necessary technology, such as computers and robots for the students, but also trained teachers. We feel AI literacy is important for all students and teachers. We hope that all the educators have the opportunity to participate in those online workshops and trainings, to feel confident about technology in their everyday teaching. Vicky, as you understand, our participation in this project was a great opportunity for our students, and we hope that we will not stop at the beginning, how we started it, but will continue with this exciting project to grow and excel. Thank you very much.

Vicky Charisi:
Thank you, Joy. It has been a great pleasure to work with Joy and the school, and thank you very much for your intervention today. Thank you. Great. So now we can… I don’t know if Judith is around. Judith, you’re here. Great. So I would like to invite Judith. As Steven said beforehand, our project is one of the eight case studies where we tried to implement some of the guidelines from UNICEF. Today we also want to get a taste of another case study. So, Judith, I need to read your short bio, because it’s super rich. Welcome to the session, first of all. Judith is a technology evangelist and business psychologist with experience working in Africa, Asia, and Europe. In 2016, she set up Imisi3D, a creation lab in Lagos focused on building the African ecosystem for extended reality technologies. She’s a fellow of the World Economic Forum, and she’s affiliated with the Harvard Graduate School of Education. So, the floor is yours, Judith.

Judith Okonkwo:
Thank you very much, Vicky. Good afternoon, everybody. What a pleasure it is to be here with you all today. I just want to tell you briefly about my engagements with UNICEF as part of the pilot for working with the guidance for the use of AI with children, which is really pivotal for us. But before I start, I want to give you some context about the work that I do. I run Imisi3D. We describe ourselves as an XR creation lab, and we are headquartered in Lagos, Nigeria. Our work is to do whatever we can to grow the ecosystem for the extended reality technologies, so augmented, virtual, and mixed reality, across the African continent. In service of that, we focus activities in three main ways. The first I describe as evangelization: we do whatever we can to give people their first touch and feel of the technologies, give them access to them, and help them to understand the possibilities today. The second focus area for us is to support the growth of an XR community of professionals across the African continent. We believe that if we’re to reap the benefits of these technologies, then we must have people with the skills and knowledge who can adopt and adapt these technologies for our purposes. And then for us, the third aspect is committing our time and resources to areas in which we think there’s room for immediate, significant impact with these technologies for society today. In service of that, we do work in healthcare, in education, in storytelling, and in digital conservation. And that healthcare piece is what brings me here today for this particular brief talk. So a number of years ago in Nigeria, with a partner company called AskTalks.com, we conceived of a project called Autism VR. And I’ll give you a bit of background as to why that is. Nigeria, if you’re familiar with it, is a country of 200-plus million people. It’s a country that I would say is severely under-resourced when it comes to mental healthcare.
I don’t want to go into the numbers in terms of, you know, providers to the population, but it is really, really worrying. There is also stigma attached to mental healthcare in the country. And so you can imagine the situation for children who might be neurodiverse and the ways in which they are often excluded from society. So with AskTalks.com, we conceived a game called Autism VR. It’s a voice-driven virtual reality game that does two things. First of all, it provides basic information about autism spectrum disorder. And then the second element of it is that after providing that information, you have the opportunity, through voice interaction, to engage with a family that has a child on the spectrum, and then see if you can sort of like put some of the things you’ve learned into practice. That’s the idea, and we’re still developing this. So we had started on that for about a year or two when we were very fortunate to be introduced to Steve and his incredible team and the guidance on the use of AI for children. I would say that prior to this, we had spent a lot of our time believing we were following a human-centered design approach to our product development, in terms of wanting to build with all of these, I suppose, commendable considerations. We wanted to increase awareness, we wanted to foster inclusion, we wanted to support children who were neurodiverse. But the guidance really helped us shift our perspective from just being broadly human-centered to being specifically child-centered in our design approach. And for it, we focused on three main indicators from that guide. We wanted to prioritize fairness and non-discrimination. And the way that would typically show up in a country like Nigeria is just exclusion, right? For children who are neurodiverse, or children who the general public would have to work a little bit more to understand or to engage with, right?
We wanted to foster inclusion; we wanted more people to have the knowledge to understand that behavior they might see might not be behavior that they should just consider sort of like off the scale and not worth engaging with. And we really, really wanted to do all we can to support the well-being and positive development of children who are on the spectrum. And we believe that by creating awareness, we can do this. In the, oh. Just checking, there’ll be an image up on the screen in a minute. And it’s a screen grab from the game, an early version of it, so know that it’s improved. But I’ll tell you a little bit about sort of like what the experience is like. So in the first scene, there’s a woman called Asabe, and she’s in sort of like the front room of a typical house in Lagos. You go into the room and you engage her, and she starts to talk to you, and she provides information about autism spectrum disorder. So she gives you general basic information. She checks your understanding every few sentences, and you respond and let her know whether you understood or not. If you don’t, you know she’ll go back. And then when you’re done with that, she then says, please go ahead and visit your family friends in your car first. So the idea is that you’re then going through another door into a typical living room, the kind you would find in Nigeria. And when you get into that room, there’s a family; you’re greeted by the parents, and they welcome you, and then they say, here’s our son, Tinday. See if you can get him to come and greet you. We’ll go and get you some refreshments. And then they exit the room, and then you get to attempt to engage with their son.
And the idea is that if you’re able to do that, using the tools and the tips that you’ve gotten from the previous scene, then eventually Tinday will not just kind of like engage with you by establishing eye contact, but he will actually stand up and come to you and say, you know, good afternoon, auntie, or good afternoon, uncle, as the case may be. And we started building this game. We were building it for the Oculus Rift, letting you know just how long ago that was. But the idea right now is to build for the Google Cardboard. I have one here. And that’s really because this is a game that, first of all, will be an open-source product, but it’s really being built for the people, and being built to ensure that more people have an understanding of what autism spectrum disorders are, what neurodivergence is, and are able to engage with them. It’s been challenging building for the Cardboard, but we also know that if we want it to scale in a place like Nigeria, where there isn’t ready access to virtual reality headsets, then that’s definitely the way to go. Should I?

Vicky Charisi:
Okay, thank you so much, Judith. We had a small practical problem, but we are going to show it afterwards, because we have a description, yeah. But thank you so much for the description for your talk. Thank you.

Audience:
Thank you.

Vicky Charisi:
For our keynote speaker, Daniela, it’s over to you now.

Daniela:
Hello. Hi, everyone. It’s my pleasure to introduce Dominic Regester, who is the Director of Education for the Center for Education Transformation at Salzburg Global Seminar, where he’s responsible for designing, developing, and implementing programs on the futures of education, with a particular focus on social and emotional learning, educational leadership, regenerative education, and education transformation. He works on a broad range of projects across education policy, practice, transformation, and international development, including as a Director of a Model Alliance and as a Senior Editor for Diplomatic Courier, to name a few. Thank you, Dominic.

Dominic Regester:
Thanks, Daniela. Good morning, Vicky. Hi, everybody. Thank you for the invitation to speak with you all. Is the audio okay? Can you hear me okay?

Vicky Charisi:
Yes, we can hear you okay. Great. Yeah, yeah.

Dominic Regester:
Thank you. Like Daniela said, I’m the director of the Centre for Education Transformation, which is part of Salzburg Global Seminar. Salzburg Global Seminar is a small NGO based in Salzburg in Austria that was founded just after the Second World War as part of a European, or transatlantic, peace-building initiative. I wanted to talk a little bit about the education landscape globally at the moment and about why there is such a compelling case for education transformation. The beginnings of this really predate COVID. There was an increasing understanding that the vast majority of the world’s education systems had gone into what is being described as a learning crisis: students around the world, particularly in K-12 education, were not meeting literacy and numeracy levels, and school systems weren’t equipping students with the kinds of skills that were going to be needed to address key concerns within the 21st century. There was also a growing realisation that education systems had in many ways perpetuated some of the big social injustices that we’ve been dealing with for the last few years. Then COVID happened. As schools were locked down, at one point in 2020 something like 95% of the world’s school-aged children were not in school. One of the things that COVID did for global education systems was that it shone a light on the massive inequalities that do exist within and between systems. And as there was greater understanding of these inequalities, and as parents were much closer to the process of learning and could see what their children needed to do, it helped catalyze this really interesting debate, which is still playing out at the moment, as to whether we were using the time that we had children in school in the most productive ways.
So you put the inequalities from COVID alongside the big social justice movements like Black Lives Matter or Me Too, looking at gender equality or racial justice, alongside the climate crisis and the way in which the climate crisis is impacting more and more people’s lives, but in a very unequal manner. All of this catalyzed this great process of education transformation. So last September, September 2022, UNESCO and other UN agencies, UNICEF included, hosted what was called the Transforming Education Summit in New York, which was the largest education convening in about 40 years. The purpose of the summit was to try and help share great practice in innovation and also to catalyze a process of education transformation, because there was a realization that education systems may have been contributing, or had been contributing, to these different challenges that now needed to be addressed: issues of inequality, issues of the learning crisis, issues of social justice. There are now 141 UN member states that have started a process of education transformation and that have developed plans and approaches as to what it is that they want to transform. After the summit, an amazing organization called the Center for Global Development did an analysis of the key themes that were coming through from the transformation plans. This is based on a keyword analysis of what had been submitted, of the proposals for different systems to transform their systems. The top issue, by a very long way, is around teaching and learning. The second most important issue was around teachers and teacher retention, which is not that surprising: at the moment, a third of teachers globally leave the profession every 12 years. The third issue was technology, but when we’ve dived into the technology, it isn’t particularly about AI. It’s more about device deployment and access to the internet.
Then there were employment skills, issues of inclusion, issues of access, and the climate crisis. So those were most of the top 10. And these are the issues that were coming from ministers of education, from national education systems. As you will all know, there are an enormous number of civil society organizations around the world that support education and education reform and transformation. And so alongside the analysis of the keywords that were coming up in the transforming education policies or approaches, there is also a kind of parallel analysis of what civil society priorities are for transforming education. Some of the key things that are coming up from civil society organizations are around intergenerational collaboration in education transformation; how systems can pivot to being more collaborative and less competitive, both within and between systems; a very strong focus on social-emotional learning, psychosocial support, and the mental health and well-being of teachers and the students around them; and then this idea of how transformed systems can contribute to more inclusive futures or address some of these longstanding structural social injustices that have existed for many, many decades. The reason for mentioning all of this context to the global transforming education movement, which is about a year in now, is really to pose the question: is AI addressing these things in the right way? Is the tech sector, and the people who are developing AI applications for education, responding to the key concerns that are coming from the education profession?
I think there is a very, very acute concern that as more systems spend more resources on the application of AI in education, it is also going to increase a digital divide, which is already very clear, between education systems, and between students who have access to AI, are skilled in using it and understand how to use it, and those that don’t. I usually live in Salzburg, in Austria, but I’m in London at the moment, because I’ve been speaking at something called the Wellbeing Forum. The theme of the Wellbeing Forum this year was human well-being in the age of AI. The conference happened all day yesterday, and it’s a meeting of business, of education, of health professionals, of religious and other spiritual leaders, and of tech entrepreneurs. One of the key things that came through yesterday was the high degree of anxiety that all these representatives of different sectors have about AI, the risk that AI can pose to ways of life. One of the most interesting quotes that came from yesterday, which I wanted to share with you all as I come to the end of what I wanted to say, was: in the rush to be modern, are we missing the chance to be meaningful? As people lean more and more into the possibilities of AI, are we also losing out on the chance to focus on things that are really important in our societies or in our education systems? And so, what I really hope that this short presentation or this short talk has been able to do is share some of the key themes or key trends that are taking place in education transformation around the world. I would really encourage you all, if you have the chance to engage with teachers or with education leaders, system leaders or institution leaders, to take the time to listen to what the key concerns within the sector are at the moment, and how AI can be applied to addressing some of these concerns.
And what can that do to address the anxiety that exists in global systems around the digital divide, or the lack of understanding of AI, or the risk that it is going to exacerbate inequalities within systems or between systems? So, thank you very much for the chance to speak with you all today, and I wish you all a very successful rest of the conference.

Vicky Charisi:
Thank you so much, Dominic. Thank you. Thank you. I hope you will stay a little bit more with us, because we have a Q&A afterwards. Is this okay with you? Yes, it’s fine. Okay, thank you. Thank you. So now it’s a great pleasure to introduce Professor Dr. Bernhard Sendhoff, who is the Chief Executive Officer of the global network of Honda Research Institutes and leader of the Executive Council formed of the three research institutes in Europe, Japan and the US. The floor is yours.

Bernhard Sendhoff:
Great, thank you very much, Vicky. Thank you, Stephen. Thank you, Randy, for organizing this wonderful workshop here and for inviting me to say a few words about what brought a company like Honda into the domain of AI for children, what we find so exciting about this, how we want to go about it in the future, and what we plan to do. Now, the Honda Research Institutes are the advanced research arm of the Honda Corporation, and our mission is really twofold. On the one hand, we want to enrich our partners with innovations that address new products, services and also experiences. At the same time, we also really do science, and we want to create knowledge for a society that flourishes. These are really the two legs we stand on: on the one hand the scientific effort, on the other hand bringing this scientific effort into innovations. Our founder, Soichiro Honda, was very much about dreams of the future, and we think about the future. When I talk to young researchers, I often say, you know, it’s a privilege that we have in creating the future, but it’s also a responsibility, and when you judge your own work, just ask yourself: is the future you are creating the future you want your children to live in? And this already connects us a little bit with the role of children in our research, because for researchers, when we create the future innovations, it’s really about the innovations our children will be using. At the same time, we have to say, and Stephen mentioned it, we have seen a tremendous success in AI and many other technologies in the last decade. However, we also have to honestly say, if you just switch on the news for a couple of minutes, we haven’t been particularly successful in making society a lot more peaceful or a lot happier with this technology. And one of the issues we looked at was the rising alarm about social fragmentation.
And you see this in almost all societies, and we see that the only way to address this is to focus a lot more on togetherness in societies. And togetherness, of course, starts with the children. It’s our children who can learn how to respect differences across cultures and how to enjoy diversity, towards something that is maybe a very long-term dream of something like a global citizenship. So we started thinking about how we can use AI innovations in order to empower children to understand more about each other. We called it Target CMC, and Randy already talked a little bit about how, together with great work from Vicky and others, we have been able to actually bring this to life and use embodied AI technology, the tabletop robot HARO that we developed at the Honda Research Institute Japan, in order to mediate between different cultures in different schools in Australia and in Japan. That was our first target scenario. But as you can see on the list here, we envision expanding this quite substantially, and I highlighted on the slide here in particular two extensions. One is really going into developing countries like Uganda, where of course the cultural differences, and we heard the wonderful ceremony earlier about the cultural experience, are again a lot greater than, for example, between Australia and Japan. Another extension is into Ukraine, which we know has been a war zone for a couple of years now. There, of course, the environmental conditions for children, for the education of children, again pose some very specific challenges. And I think this is where mediation and fostering understanding of each other can really play a large role. And Ryoma gave a very nice statement about your experience with HARO. And when you also talked a little bit about some of the technological challenges we still have, I thought to myself, well, this actually can also be something nice, right?
Because there’s nothing as nice as two people joking together about the technological shortcomings of a robot. And there’s nothing like connecting in this way, even across different cultures and maybe different continents. Right from the start, actually, the guideline that UNICEF did, and I really think they did great work on this, was really a guidance for us when we thought about how we have to specifically take care about AI in the context of children. And I used two keywords here: protect and support, because I think both of them really go hand in hand. It’s very clear that children need specific protection, and I think we see this in much of the data; it was mentioned that there is of course also an increasing experience of mental health conditions, for a number of reasons. So we need to take special care, but on the other hand there’s also great support that we can put in children’s hands, and this is equally backed up by the data. Children and young adults all around the world use the new technology, and I have no doubt they will also use the most recent advances in AI very successfully to increase things like connectivity and their own creativity. So it’s really that both things, both protect and support, go hand in hand. And I think sometimes a lot of people talk about the technology without listening to those who are often the earliest adopters, and those are the young adults and the children using the technology. So I think it’s actually quite good for us to listen more to those people who are actually using these things first. I already mentioned that one of our starting points was using mediation with embodied AI technology in an educational context. However, at the same time we also started another very exciting project about using AI technology in a hospital environment.
Generally we are interested in supporting children in vulnerable situations. A hospital environment is one; conflict, disaster, flight and displacement, for example, are others, and they share many common characteristics. In all three situations, the needs of children are very often inadequately addressed. The reasons are not always the same, but the fact stands out for all three areas. Children, I think that’s very clear, need child-specific explanation, and reassurance is something that is not always possible in all of those three situations. They often even need support in expressing their feelings, and there are some very exciting projects really focused on helping children tell others how they feel about things. And they still need to be children, even in difficult situations like a disaster or displacement, and often they need additional trustees, because the parents, who are of course the natural trustees for a child, are often part of that difficult environment, right? Parents are there in the disaster or flight situation; they are part of the hospital environment. Children feel that their parents don’t feel well when the children are ill. That puts the parents in a situation that doesn’t give them the ability to be a neutral trustee. We have started some very first exciting experiments with our very, very valued partner in a Spanish hospital, a cancer hospital in Sevilla, and we are expanding these. We are in discussions on how we can use HARO in the many different contexts that are possible there, and we are also expanding to a second partner. Now I would like to come back to my first slide. I mentioned that social fragmentation is a huge issue for us. Togetherness is maybe one way to approach this, and togetherness really starts in our society with the children, and we at HRI believe we have a unique expertise on the interplay between embodiment, empathic behavior, and curated social interaction.
You know, we have seen a very exciting development in the area of generative AI; Stephen mentioned that earlier. At the same time, in particular in interaction with children, I think there are also severe limitations that those systems have. And again, this brings us to the challenge of curated interaction. We want to continue to engage with our partners to make the expertise and the advances in AI, with the benefit of comforting and connecting embodiments, available to children in a number of different situations. And we want to do this explicitly with a special focus on developing countries, because there, of course, the challenges are again slightly different. However, these are very young continents, right? Africa is a very young continent. So when we talk about the future and the future education and the future support of our children, it has to be done in context with those countries as well, of course, and they rightfully expect this. One last thought: we have seen in the recent progress in generative AI systems how we build those systems, and there is a huge discussion on whether this will be able to continue in this way. We believe that future AI systems also have to learn in interaction with human society, in order for the developing AI systems to share some of our human values. At the moment, we throw a lot of data at those systems; we would never do this with our children, right? We very carefully curate how our children are educated. And we believe that in the future, children and AI systems will actually mutually benefit from each other, because they will have the possibility of learning alongside each other in a bi-directional way, learning values the way we teach our children values as they grow up in our society.
Now, at the Honda Research Institutes, of course, we don’t only focus on AI and children. We have actually identified the United Nations Sustainable Development Goals as guiding stars for our development of innovations, for putting AI and embodied AI technology into innovations, for turning “innovate through science”, our HRI motto, into something that has a tangible benefit, in particular in the context of the Sustainable Development Goals. And with that, I would like to again thank the organizers very much for giving me the opportunity to briefly talk about HRI here, and thank you all for listening. Thank you very much.

Vicky Charisi:
Thank you. So we have some time for questions. I would like to invite the speakers that are here to have a seat here: Stephen, Randy, Judith. And we also have our online speakers. Now it’s time for questions, so is there any question from the audience? Selma?

Audience:
Hi, I am Emil Wilson. I’m Guilherme, from the UFI program in Brazil. I am a researcher and a young person who advocates for children’s rights in Brazil in a UNICEF project. That is why, for me, the institution’s proposals are always very important. However, as was briefly pointed out at the beginning of the panel, there is an interaction between AI and mental health, and tools such as ICPA and Lucia have been used, for example, on Telegram as a possibility for mental health support, which can intensify the risks for children and adolescents online. My question is, then, how can UNICEF help in the debate about AI, children and mental health? Thank you. Sorry for my English.

Vicky Charisi:
Thank you very much. Steven, would you like to start with this since it was about UNICEF? Thank you.

Steven:
Thank you very much for that question. This is an area that’s crucially important for us, but not just for UNICEF, for anybody working in the space of how children interact with technology, and especially in the context of mental health and mental health support. Nobody has all the answers right now. What we know is that there’s a massive mental health need. There is the potential for technology to support, and there is a potential for technology to also get it wrong, which could have very severe effects: if it gives the wrong advice or inappropriate advice, or potentially shares out information that was given in a very confidential environment. And so it’s a very, very sensitive space. I think we all need to get involved here. We need the children. We need, of course, the technology developers. We need, as Bernhard said, a responsible development approach. And this is not an area that we should rush into, for sure. But yeah, we need to watch it. It’s going to happen. If we get it right, there is huge potential for providing support. And I think, as I said earlier, with what’s really happened with ChatGPT, everyone talks about that as the one thing. And of course, foundational models are not new, and there are other models, not just ChatGPT. But that’s the one that has kind of become the placeholder for this whole new moment, a cultural moment, not just technological, as the speaker said earlier. AI used to be in the background: the algorithm, your news feed, the bunny ears on your Instagram photo, your Snap photo. It’s now something you interact with. And we just don’t know what the long-term effects are. This is why we also need solid research around the impacts on children, and all of us, as they interact with AI. But of course, we focus on children, for the opportunities and also the potential risks.

Vicky Charisi:
Thank you very much. Judith, you also do work with mental health. Would you like to say something?

Judith Okonkwo:
Sure. Thank you very much. I was just nodding as Stephen was talking, because everything he said completely resonated. One thing I would like to say is that right now in the world, all of these initiatives are happening where people are thinking about things like governance for AI and governance for the metaverse. I just really think that we have to prioritize including young people in those conversations. UNICEF of course does that brilliantly, but so many more organizations need to. Every time I’m in a room where those conversations are being had and the youngest people look like me, I know we have a problem. So whatever we can do to make sure that young people are in all the rooms they need to be in, we definitely should. And then I just wanted to say, you were talking about getting it wrong, and I don’t know if people saw, but the BBC was reporting in the news recently about a young man who had been arrested on the grounds of Windsor Castle for trying to kill the Queen, and he had been egged on by his AI assistant to go and do it. So already we are seeing that we don’t quite know where we’re going with these technologies, but we definitely have to come together to figure out what future we want for ourselves.

Vicky Charisi:
Thank you very much. First I would like to do a small rearrangement, so you belong there, please. It’s about children. Randy, would you mind going to sit there? Is it okay? Okay. Thank you very much, and apologies for the interruption. Any other question?

Audience:
Selma, yeah. Hi, I’m Selma Shabanovich from Indiana University. It’s such a pleasure to see the diversity of projects and different kinds of thoughts that really all focus on children and their presence in the work. One thing I was curious about: Steve, you started by kind of saying you had developed these guidelines and you knew they weren’t the end, and then you had so many different really interesting things go on. So I was just wondering if both you and the folks who participated in the projects could speak a little bit to either how the guidelines were present and helped them in the projects, or how they see their projects as expanding on or further defining aspects of the guidelines that maybe weren’t already in there. Thank you.

Steven:
Thanks, Selma, that’s a really great question. So, and I should have mentioned this earlier, I’m sorry, the guidance has been published and the eight case studies are online on the UNICEF page. I would really encourage everyone to look at each one, because we wanted a diversity of projects from different locations but also different contexts. For example, one of the projects, in Finland, provides mental health support, or at least, sorry, mental health information, not support: a place where children can find information as a kind of first port of call for initial questions around potential symptoms, that first line of informational support, not therapeutic support. That was one of the case studies, and it’s still an ongoing project by the Medical University of Helsinki. That one was interesting because, being a hospital in a technologically developed and government-supported nation, they had many ethicists on the team that developed the product: not only software developers but ethnographers, researchers, an ethics team, doctors, psychologists, and obviously they did a lot of testing with the children. Then there’s MEC3D, also mental health support, but not necessarily for the patient, sorry, the child, but actually for the people around the child. And then, for example, we did one with the Alan Turing Institute in the UK that was a really nice example of how you engage the public on developing public policy on AI. And they’ve actually gone on: while the case studies have kind of finished, the work continues.
So the Alan Turing Institute has been asked by the government of Scotland to engage children in Scotland on AI: what excites them about AI, what worries them, and what kind of future they want. The Alan Turing Institute’s initial reports and methodology and everything are online. It’s a really rich resource, and that will inform policymakers as they regulate. So it was interesting. For us, in the end, after the eight case studies, the guidance didn’t really change so much, which was kind of a relief. We thought, wow, we seem to have got it quite right the first time. But it might also just be because the guidance is almost at the level of principles, and we do that because we’re a global organization, so you have to be quite high-level or generic, and then it gets adapted to the local context. The unfortunate thing is that everybody wants the details. How do you adapt it? And that’s the challenge: how do you move from principles to practice? In the end, we kind of said the guidance hasn’t changed that much, but it’s been enriched by these case studies. If you want to learn how different organizations have applied them, then go and read these. I’ll just say one more thing. There are nine principles or requirements for child-centered AI in the guidance, like, for example, the inclusion of children in developing AI systems and policies. We found, in the end, that all of the case studies only picked two or three. And we realized that that’s actually fine. In your project or in your initiative, there are two or three that’ll speak more to you than others. So if it’s participatory design and the inclusion of children, that’s one thing; for others it’s fairness or discrimination. Collectively, the case studies really unpacked all nine, but in the end only a few tend to be the focus for your work.
Yeah, so everything’s online. We are really, of course, just thinking about whether there’s a need to update them or add to them now in the light of generative AI. And as I said earlier, there are a lot more unknowns now. We don’t know how the human-computer interaction will evolve over time. And we want to make it work in a way that upholds rights and is responsible. But everybody is kind of building the plane, or fixing the plane, while it’s in the air. So we are very keen to do more work in this space in light of ongoing developments. Yeah.

Vicky Charisi:
Thank you very much, Stephen. Is there any other question from the audience? Yes, please.

Audience:
Hi, this is Edmond Chung from .Asia. We also operate the .Kids domain, and what is being done here is great. It’s definitely something that .Kids would like to take on and also help promote. But asking personally, I wanted to ask, I guess it’s Ruyama, or Ryuma. One of the last comments gave me a little bit of concern. Your last comment was that maybe the evaluation or the assessment can be more fair with AI. Of course it could be, but it could also be less fair. And that’s part of the discussion; that’s the heart of the discussion. So what if it’s not fair? And that brings me to a second question that I wanted to ask as well. I think it was mentioned that the Uganda project was focused on exploring fairness, but I didn’t quite understand from Joy what was being discussed, how AI was part of it. Would it be useful to get more of that? Because actually, as a father of an eight and a ten-year-old, I’m quite pleasantly surprised that my ten-year-old, just now in year seven, has told me this September that their teachers are actually getting them to use AI to help them with homework, as part of the curriculum. So it’s really exciting for me. But also, because we know that technology is not entirely neutral, especially when we talk about these things, it’s a symbiotic relationship: as much as we shape them, they shape us, especially kids going forward. So that’s why I wanted to really hear from the experience. You had an ending remark about fairness: how do AI and fairness really work, and what was the response from the case studies? Thank you.

Vicky Charisi:
Thank you. Do you mind if I take that question? Because I did the study with the kids in Uganda on fairness. Is it OK with you? So indeed, the talk by Joey was focused on something else, not on the specific study. Of course, we have published; there is a scientific publication on this, and we can share the links later. The main research question for this study was to understand if there are cultural differences in perceived fairness. We wanted to see whether children in these two environments, with the cultural but also the economic differences they had, would focus on different aspects of fairness. So what we did was provide different scenarios. The whole activity was based on storytelling frameworks, and we let the kids talk about these scenarios in their own words, their own drawings, et cetera. Then researchers analyzed these data in a systematic way, and what we found was that children in Uganda indeed focused more on aspects of fairness that have to do with material aspects, so they would talk more about how, for example, something was shared among children, while the children in Japan would focus more on psychological aspects: for example, they would talk about the behaviors of teachers. This is just an example to show how the priorities differ. Probably, when we abstract, the actual notion of fairness doesn’t really differ a lot, but when we go into the details, we see that children in these different cultures prioritize in different ways. So those were the results of our study. Of course, this was only the starting point, and there is a lot to explore, and it is not only us: there is a huge community of developmental social psychologists who explore this topic. So the first question, do you want to repeat the first question?

Audience:
Yeah, I guess I just wanna ask: you mentioned at the very end that, if I understood you correctly, assessment of your work through AI might be more fair. Tell us a little bit more about it. What if it’s not fair? How do you know it’s not fair? What if you trust the machine too much?

Vicky Charisi:
Is there someone, Judith, who would like to speak?

Ruyuma Yasutake:
I would like to speak first. I think some school teachers have an individual evaluation sense. What do you say? Not equal? Not equal. Teachers’ evaluation sense? The way of judgment? The way of judgment is not equal. So, I guess, AIs can evaluate fairly.

Vicky Charisi:
Yeah, I mean, apparently there are some hopes here, right? I don’t really believe, and nobody believes, that there is an absolutely fair evaluation with AI. This is true. But probably, for young students, there is a hope. When they see their systems or their schools evaluating in different ways, and they experience a little bit of human unfairness, they probably put some hope in AI. But, of course, this is something that we really, really need to take very seriously. Yes, please.

Audience:
Hi, my name is Zanyue, from South Africa and Zambia. This is more of a comment, just listening to the discourse. There’s a concept that we use quite often in South Africa, and I think it’s quite pertinent here: progressively realising, right? When we speak about AI, especially at the stage that we are at globally, your question is quite important. What is fairness? What are the assessments? What’s the criteria? And as you quite correctly put it, in different geographies and instances, even in the same locality, based on various factors, that concept of fairness really is so subjective. And I think what AI does is give an almost objective element to these very subjective things, and you tweak it accordingly, and that’s why it’s so important if we speak about… I mean, I think the question on fairness really does veer off to the algorithmic biases that we do speak about. That, I think, is also very pertinent for this conversation, where the more data we have, and the more data that we have based on your comment on this context, this context, this context, the more we develop, right? So I think the answer to the fairness question is that we are progressively trying to realise it, and I think we’re at a really infant stage when it comes to that, and hence the data conversation is quite important to pair with this one. So, yeah, that’s just maybe a summary.

Vicky Charisi:
Thank you very much for the intervention, indeed. I'm afraid we're running a little bit out of time. So now I would like to give the floor to our online moderator, who is also our reporter. So Daniela Di Paola… Can we have Daniela on the screen, please? …is going to give us her view of the conclusions of this workshop. Daniela? Yeah, please. Hello, everyone.

Daniela:
Thank you all for your wonderful comments and productive discussion; I really think that the different perspectives added a lot to the conversation. I'm going to share two key takeaways and two calls to action. The first takeaway is that, despite the challenges in terms of infrastructure in our activities for AI and children's rights, children from underrepresented countries and cultures should be included, and it is urgent that technology being developed for children considers the needs and interests of all children, not only those from privileged backgrounds. Secondly, the project is only the first step of responsible design of robots for children, and various communities can contribute to its expansion, such as adding to the rights for explainability, accountability, and AI literacy for all. Formal education can prove powerful, and industry experience with responsible innovation can be a catalyst for the well-being of all children. Now the calls to action. The first is to expand the implementation of the policy guidance to additional contexts, such as hospitalized children or triadic interactions, and also to formal education with the inclusion of schools, as well as to underrepresented groups such as those from the global South. Secondly, there is a call for the necessary infrastructure and technology development that will give all children equal opportunities in an online world. We need to ensure that AI opportunities come together with responsible and ethical robot designs. Thank you.

Vicky Charisi:
Daniela, thank you so much. It was really good. And I think it's time to close, Steven. So the floor is yours.

Steven:
Yeah, okay. So firstly, thank you very much. One of the key takeaways is that this is the beginning of a journey. We were very happy to share with you what UNICEF has done and what our partners have done here, and many others that weren't mentioned, as we try to work out how children can engage with AI safely, in a supported way, and in an empowering way. The reality is that while we sit here and debate these important issues, children are using AI out there, and that is going to go up more and more every day. So it is urgent; everybody needs to get involved. Thank you for raising the data issue. It's really critical. And to Daniela's point, we have this challenge that the data sets are not complete; they're much more global north. We need data from children in the majority world (I like this term that's being used a lot here) and the global South. But we know that data collection at the moment doesn't often happen very responsibly, and so we need to tick those two boxes at the same time. So the journey is going to continue. Please work with us, and we will work with you. And, as we keep saying, it really is critical to work with children and to walk with children on this journey. So Roma, thank you for being here, and thank you for being involved in the project. We recently engaged a digital policy specialist from Kenya who could easily have been on this panel. She was making this point about Africa being such a young population, and how crazy it is, seeing more and more, that older people like us (sorry, I'm speaking for all of us here) are taking the liberty of regulating a technology that we don't really understand, one that is so much used by a generation that is going to be so much more impacted by it, while we are not having them at the table. That was a really well-put point.
So for all of us here who do bring children to the table, well done, and please may it continue. So thank you. Thanks, Vicky.

Vicky Charisi:
Thank you very much, and thank you to all for the support. Thank you for being in this session, and I hope we can continue this work on AI and children’s rights. Thank you.

Audience:
Hi. Thank you for coming. Oh, thank you. And teacher. Oh, I’m here. Right, Dr. LaFleur. Hey. Hi. Hi. Hi. Hi. Hi. Thank you. Good point. Good point. The fact that AI is good, but I think OIT, they buy this thing.

Speech statistics (speed, length, time)

Audience: 150 words per minute, 1074 words, 428 secs
Bernhard Sendhoff: 148 words per minute, 1880 words, 763 secs
Daniela: 157 words per minute, 380 words, 145 secs
Dominic Regester: 161 words per minute, 1556 words, 581 secs
Joy Nakhayenze: 170 words per minute, 772 words, 272 secs
Judith Okonkwo: 184 words per minute, 1577 words, 514 secs
Randy Gomez: 134 words per minute, 393 words, 177 secs
Ruyuma Yasutake: 106 words per minute, 324 words, 183 secs
Steven: 171 words per minute, 2592 words, 910 secs
UNKNOWN: 60 words per minute, 1 word, 1 sec
Vicky Charisi: 150 words per minute, 2340 words, 938 secs

Accessible e-learning experience for PWDs-Best Practices | IGF 2023 WS #350

Full session report

Swaran Ravindra

The analysis highlights several issues regarding disability rights and inclusivity. It points out that there is no national policy for disability in Trinidad and Tobago, and that in Fiji, the 2018 Act does not specifically outline what provisions should be in place for persons with disabilities or how to implement them. One area that is particularly neglected in the Pacific is accessible websites, which are considered necessary provisions for persons with disabilities. This lack of explicit provisions for the rights and accessibility of persons with disabilities in national policies and legislation is seen as a negative sentiment.

On the other hand, there is a positive sentiment towards inclusion as a basic fundamental human right. Swaran, a speaker in the analysis, emphasizes the importance of inclusion in her speeches and believes that all citizens should have access to various services regardless of their disabilities. She also advocates for the use of existing legal instruments such as the ‘Education Act’ to support disability rights in the absence of specific national policies. This perspective reflects a belief in the positive impact that inclusion can have on reducing inequalities.

Consistent support systems for persons with disabilities are called for, even in the absence of a national policy for disability. This notion is seen as a positive sentiment, highlighting the significance of providing continuous support to individuals with disabilities.

The analysis also acknowledges that legislation alone is insufficient to ensure inclusivity. It notes that legislation sometimes contradicts itself, and there is a need to reconcile these gaps between constitutional rights and legislation to ensure inclusivity. This observation is seen as a negative sentiment, pointing out that legislative measures must be comprehensive and consistent to promote inclusivity effectively.

Cultural norms are identified as a factor that can present obstacles to inclusivity. The analysis mentions instances where parents refuse to acknowledge their child’s disability, highlighting the stigma around disabilities that needs to be overcome. This is seen as another negative sentiment, suggesting that cultural attitudes must change to foster inclusivity.

Constitutional rights are noted as a means to protect and promote inclusivity. The analysis provides examples of disabled individuals exercising their right to attend classes, highlighting the potential impact of these rights in promoting inclusivity. This observation brings a positive sentiment to the importance of constitutional rights in advancing inclusivity.

In the context of education, the analysis emphasizes the need for inclusion to be integrated into everyday practice in educational institutions. The mention of AFINI, an ISO certified organization that upholds high standards of inclusivity, and professors creating tertiary level education courses for disabled individuals, reflects a positive sentiment towards the efforts being made to ensure inclusivity in educational settings.

The analysis also touches upon the obstacles towards inclusivity in online learning. It argues that students should not be penalized for the extra time they require to log into the system. This viewpoint is seen as a negative sentiment, highlighting the need for fair assessment practices in online learning.

Regarding authentication methods, the analysis acknowledges the existence of more secure methods such as thumbprint scans, retina scans, and face recognition. It argues that these methods are easier for users, reflecting a positive sentiment towards their implementation.

On the other hand, there is a negative sentiment towards the imposition of difficult types of authentication methods, which could act as a deterrent for students to return to class.

The analysis also addresses the important topic of digital inclusion. It suggests the need for affirmative action and proper measurement and assessment tools to address digital inclusion effectively. It mentions the use of disparity measurement, the implementation of the WCAG 1.0 standard, and UNESCO's ROAM-X indicators in Pacific island nations. This observation highlights the positive sentiment towards the need for affirmative action and the adoption of proper tools to achieve digital inclusion.

In conclusion, the analysis brings to light various issues related to disability rights and inclusivity. It highlights the lack of explicit provisions in national policies and legislation, but also emphasizes the positive sentiment towards inclusion as a fundamental human right. It underscores the importance of consistent support systems and the impact of cultural norms and legislative gaps on inclusivity. Additionally, it calls for fair assessment practices in online learning and explores the implementation of secure authentication methods. Moreover, the analysis draws attention to the need for affirmative action and proper measurement and assessment tools to address digital inclusion effectively.

Vidya

The accessibility issues in e-learning platforms pose substantial challenges for people with disabilities. These challenges include problems such as unlabeled buttons, inaccessible content, and inaccessible PDFs. Vidya, who has personal experience navigating these platforms, suggests that involving users with disabilities in the development process of e-learning platforms is crucial. This involvement should include providing digital literacy training and ongoing support to ensure that these platforms are genuinely accessible to all.

Furthermore, STEM education presents additional accessibility challenges for individuals with disabilities. Screen readers often struggle to interpret mathematical equations, and much of the educational content is written from the perspective of someone with sight, making it more difficult for those without sight to understand. This creates a barrier to the effective participation of individuals with disabilities in STEM subjects.

The shift to digital learning during the pandemic was not seamless for many students and teachers, especially those with disabilities. In India, where Vidya is based, teachers and students with disabilities faced difficulties adapting to digital platforms. To help them, Vidya had to create digital literacy tutorials in multiple languages. This highlights the need for greater support and accommodations for individuals with disabilities during times of crisis.

To address the issue of accessibility and inclusivity in education, India is in the process of introducing a National Educational Policy. The aim of this policy is to promote greater inclusion by shifting towards inclusive education from special schools and a segregated education system for the visually impaired. However, the full implementation of this policy is still pending, as it requires time and coordination among different states.

Regarding special education, Vidya emphasizes the need for a central authority to ensure consistency across different states. Currently, policies for special education vary from state to state, resulting in inconsistencies and gaps in support.

While the government has made efforts to make their websites accessible, there is still work to be done in this area. Although progress has been made, there is a need for continued efforts to fully address website accessibility.

In terms of administrative departments responsible for education, accessibility and awareness vary based on the specific department. Education for persons with disabilities is sometimes overseen by the Department of Social Justice or the Department of Education, leading to variations in support and accessibility.

Cultural norms and stigma also act as barriers to digital platform access for disabled people. Vidya highlights the case of a blind woman who has been confined indoors due to cultural norms and stigma. Overcoming these barriers requires not only technological solutions but also the promotion of social acceptance and understanding.

Vidya believes that continuous support and social acceptance are essential for the effective use of e-learning platforms by individuals with disabilities. She stresses that the responsibility lies with the government and organizations to ensure the long-term usability and accessibility of digital tools.

Notably, children with disabilities have the potential to learn and compete effectively with their peers if provided with the necessary support and tools from an early age. Introducing technologies like computers and braille to children at a young age can significantly enhance their learning experience and future educational prospects.

Nonprofit organizations play a vital role in bridging the gap between the government and the ground realities of education for children with disabilities. Their firsthand experience in the field enables them to provide valuable guidance to the government in shaping policies and internet regulations that facilitate the access to education for individuals with disabilities.

Finally, collaboration within the internet community can contribute to making education more accessible for children with disabilities. By creating forums where experts can share thoughts, ideas, and network, meaningful progress can be made in addressing accessibility challenges. Collaboration is vital, as the efforts of a single person or organization alone may not be sufficient to solve the complex issues at hand.

In conclusion, the accessibility issues in e-learning platforms pose significant challenges for people with disabilities. It is essential to involve users with disabilities in the development process, provide ongoing support, and ensure digital literacy training to make these platforms truly accessible. STEM education, the shift to digital learning during the pandemic, and the need for a central authority in special education further highlight the importance of addressing accessibility and inclusivity issues. The government, nonprofit organizations, and the internet community all have essential roles to play in making education more accessible to children with disabilities.

Anna

Anna, who works in a child’s rights organisation, puts forward a compelling argument for involving more persons with disabilities in the design of platforms that promote accessibility. She firmly believes that accessibility should be guaranteed right from the design phase, ensuring inclusivity and accessibility for everyone. This argument aligns with the goals of SDG 10: Reduced Inequalities and SDG 4: Quality Education.

Anna’s argument is supported by her first-hand experience in the field, where she has witnessed the positive impact of involving persons with disabilities in the design process. By incorporating their perspectives and insights, the resulting platforms are more likely to meet the needs of people with disabilities and promote equality. Anna’s staunch belief in the rights of every individual to have equal opportunities, regardless of their abilities, drives her passion for ensuring accessibility.

Moreover, the second speaker highlights the crucial role that civil society plays in championing children’s rights. They emphasize how civil society organisations play a vital role in advocating for the rights and well-being of children. Anna, who is from Brazil and also works for a child’s rights organisation, supports this view and agrees that civil society has the power to bring about positive change. This argument aligns with the goals of SDG 16: Peace, Justice, and Strong Institutions.

Anna’s endorsement of the role of civil society stems from her experiences in Brazil, where she has witnessed the impact of civil society organisations in advancing children’s rights. These organisations provide crucial support, raise awareness, and advocate for policies that protect and promote the well-being of children. Their efforts contribute to the overarching goal of achieving a more just and equitable society.

In conclusion, both speakers emphasize the significance of promoting accessibility and advocating for children’s rights. Anna’s emphasis on involving persons with disabilities in the design process underscores the importance of inclusivity and equal access for all. Similarly, the second speaker reinforces the vital role of civil society organisations in advocating for the rights of children. By considering the perspectives of both persons with disabilities and civil society, we can strive towards achieving the goals of equality, justice, and strong institutions.

Jacqueline Huggins

During the discussion, the speakers highlighted the importance of implementing policies and providing training to support students with disabilities in accessing educational content. They stressed that ensuring accessibility for these students is crucial for quality education. The need for such policies was emphasized due to the challenges faced by students with disabilities, particularly during the COVID-19 pandemic.

One of the speakers mentioned that their campus had a policy in place that encouraged lecturers to provide accessibility for students. The department also collaborated with visually impaired students to ensure that content was accessible to them. In addition, the campus provided internet access and laptops to students who were in inaccessible areas. The sentiment towards these measures was positive, as they aimed to create an inclusive learning environment.

Another speaker emphasized that training was essential for both lecturers and students to effectively implement and understand accessibility measures. The department worked one-on-one with students, to ensure that they were not left behind and that they could navigate and use online platforms effectively. This sentiment towards training was also positive, as it was seen as a means to bridge the gap in accessibility.

However, a negative sentiment emerged when discussing the absence of a national policy to ensure accessibility. In Trinidad and Tobago, there is no national policy in place, which hampers the experience of students with disabilities. The current implementation of accessibility measures relies heavily on the goodwill of individual lecturers. This lack of a national framework was seen as a significant barrier to achieving full accessibility for students.

On a positive note, Jacqueline Huggins, one of the speakers, advocated for the implementation of universal design to benefit all students. She highlighted the importance of meeting with academic staff to discuss how universal design can be executed effectively. She also mentioned outreach and awareness programmes regarding universal design accessibility. Jacqueline’s positive sentiment towards universal design showcased the belief that it can create an inclusive learning environment for all students.

However, Jacqueline also acknowledged the challenges faced in implementing universal design. One such challenge was retrofitting infrastructure to make it accessible for students with disabilities. She also mentioned the difficulties lecturers faced in adapting to online and internet teaching methods. To address these challenges, she was working on a campaign to make all faculty websites accessible. The sentiment towards implementing universal design was mixed, as it was seen as beneficial but also posed practical challenges.

Apart from advocating for universal design, Jacqueline identified herself as a watchdog on campus, ensuring the implementation of accessibility measures and meeting students’ needs. She worked closely with students to understand their needs and liaised with lecturers and the deputy principal to bring about necessary changes. Jacqueline’s role as a watchdog and her positive sentiment towards meeting students’ needs showcased a commitment to inclusivity and accessibility.

The university department was also mentioned in the discussions. It demonstrated proactive support for students with disabilities by addressing their complaints and taking them to relevant authorities. The department worked closely with IT to understand the needs of supporting students and even purchased licenses for JAWS software for students who could not afford it. This collaboration with IT and the consideration of students’ complaints showed a positive sentiment towards addressing accessibility challenges.

Additionally, the department obtained funding to purchase expensive equipment and software, such as JAWS licenses, which were installed in campus libraries and computer labs. This initiative aimed to ensure that students had access to necessary resources for their education. The sentiment towards the department’s efforts in sourcing funding was positive, as it highlighted the university’s responsibility to support disadvantaged students.

The discussions also touched upon the importance of global collaboration in making e-learning more accessible. One of the campuses mentioned was fully online and covered 13 countries in the Caribbean, providing students with the opportunity to obtain their degrees. This global collaboration was seen as beneficial for accessibility in e-learning.

Furthermore, the speakers acknowledged the value of learning from global experiences and implementing best practices. Discussions with individuals from different countries provided diverse perspectives and learning opportunities. The sentiment towards learning from global experiences was positive, as it promoted growth and improvement in accessibility.

The importance of turning discussions and learnings from forums into actionable steps to improve e-learning accessibility was also emphasized. The sentiment towards taking action based on learnings was positive, as it highlighted the need for tangible change.

Overall, the discussions centered around the importance of policies, training, and universal design to support students with disabilities in accessing educational content. The challenges faced during the COVID-19 pandemic highlighted the need for comprehensive accessibility measures. The absence of a national policy was seen as a hindrance to achieving full accessibility. However, the speakers expressed positive sentiment towards the implementation of universal design and the proactive efforts of the university department in addressing accessibility challenges. The importance of global collaboration and learning from diverse perspectives was also emphasized. The discussions ultimately emphasized the continuous commitment to improving accessibility and inclusivity in education.

Lydia

Accessing online learning resources in schools can be a complicated task for students, particularly those with cognitive impairments. The frequent changes in passwords and access methods implemented by IT departments create significant difficulties for students, preventing them from accessing important information and submitting assignments. This issue negatively impacts their educational experience and hampers their ability to fully participate in online learning.

The complications associated with accessing online resources are often not recognised or taken seriously by schools. Many individuals without cognitive impairments perceive these challenges as trivial, leading to a dismissive attitude towards students facing such accessibility issues. This lack of awareness and understanding further exacerbates the problem, as students with cognitive impairments struggle silently, without receiving the support and accommodations they need.

Furthermore, the implementation of frequent password changes and increased security measures poses additional barriers for students with disabilities. These students may face difficulties remembering complex passwords and navigating the heightened security protocols. As a result, they are often chastised for failing to complete their work on time or are forced to seek continuous assistance from IT support. This ongoing cycle of frustration further hampers their educational progress and creates a sense of dependency on technical support.

To address these challenges, it is crucial for schools to be more aware of the accessibility issues faced by students with cognitive impairments. Recognising the complexity and impact of these challenges is the first step towards implementing appropriate accommodations and support systems. Additionally, it is imperative for the IT security measures in schools to be user-friendly and accommodating for all students, including those with disabilities. School administrators and IT departments should work together to ensure that the security measures do not create unnecessary barriers but instead facilitate a seamless and inclusive online learning experience for all students.

In conclusion, accessing online learning resources in schools is not a simple task for students with cognitive impairments. It is essential for schools to recognise, acknowledge, and address these accessibility issues through proactive measures and awareness-raising efforts. By making online resources more accessible and ensuring user-friendly IT security measures, schools can create a supportive and inclusive educational environment for all students, regardless of their cognitive abilities.

Zakari Yama

The discussion revolves around the relationship between universal design and digital accessibility in the context of education. Universal design focuses on catering to a broader range of learners, while digital accessibility primarily addresses the needs of learners with disabilities. The aim is to create an inclusive educational environment that empowers all students to access and engage with the learning materials and activities.

One argument raised is the difficulty institutions face in implementing universal design and ensuring its compatibility with accessibility. The process of applying universal design principles and making them compatible with digital accessibility measures can be challenging for educational institutions. This challenge could potentially hinder the effective implementation of inclusive practices in education.

On the other hand, there is agreement that what is beneficial for individuals with disabilities, such as real-time captioning, can also benefit all students. For example, real-time captioning can assist students without disabilities in understanding an instructor’s accent or when watching videos in a loud environment. This highlights the importance of digital accessibility measures not only for learners with disabilities but for the entire student population. By incorporating digital accessibility features, institutions can enhance the learning experience for all students, regardless of their specific needs.

Furthermore, the stance put forth is that institutions should view accessibility efforts as an opportunity to improve their universal design practices. Instead of perceiving accessibility as a separate and burdensome requirement, institutions should leverage it to enhance the inclusivity and effectiveness of their teaching and learning approaches. By using accessibility as a framework for designing educational materials and environments, institutions can foster a more inclusive and equitable learning experience for all students.

In conclusion, the relationship between universal design and digital accessibility within education is crucial for promoting inclusivity and ensuring equitable access to educational opportunities. While there may be difficulties in implementing universal design and ensuring its compatibility with accessibility, there is a recognition that what benefits individuals with disabilities can also benefit all students. Institutions should embrace accessibility efforts as an opportunity to improve their universal design practices, ultimately creating a more inclusive and effective learning environment.

Gonola

The discussions emphasise the significance of e-learning accessibility for individuals with disabilities. It is crucial for e-learning platforms to be designed with accessibility in mind right from the start to ensure efficiency and cost-effectiveness. This approach prioritises the inclusion of all learners, regardless of their disabilities, and allows them to fully engage in online education.

Legislative frameworks are seen as pivotal in supporting the creation and adaptation of e-learning platforms to include persons with disabilities. To achieve this, strategies should be adopted from academia, the private sector, and government institutes. By pooling resources and expertise from these various sectors, it becomes possible to develop more inclusive online platforms that cater to the diverse needs of disabled individuals.

The principle of universal design for inclusive design receives support in the discussions. It is highlighted that designing e-learning platforms to be universally accessible is of utmost importance. An example is given of universally accessible building entrances, which ensure that individuals of all abilities can enter and use a space without barriers. By applying this principle to e-learning platforms, it is possible to create a more inclusive and accessible online learning experience.

Moreover, the implementation of captioning is seen as a valuable tool for promoting accessibility. The discussions highlight the utility of captioning for various user groups, including individuals with hearing loss and non-native English speakers. While captioning is essential for individuals with hearing loss, it also proves beneficial for those who may struggle with the English language. By providing captions, e-learning platforms can overcome language barriers and make educational content more accessible and comprehensible for all learners.

In conclusion, the discussions emphasise the importance of e-learning accessibility for individuals with disabilities. The need to design accessible platforms from the start, implement legislative frameworks supporting inclusivity, adopt strategies from academia and the private sector, apply the principle of universal design, and provide captioning for increased accessibility are all key points highlighted. By prioritising accessibility in e-learning platforms, we can create a more inclusive and equitable online learning environment for all individuals, regardless of their disabilities.

Session transcript

Gunela:
Good morning, ladies and gentlemen, and for those online, good morning, good afternoon and good evening. This session is on e-learning, and the title is Accessible E-Learning Experience for Persons with Disability: Best Practice. We are having a few little technical difficulties, so I apologize for starting late. My name is Gunela Astbrink, I’m moderating this session, and I am chair of the Internet Society Accessibility Standing Group. Here next to me on site is Vidya Y, and she will be speaking about her experiences of e-learning in India. We should have our other speakers online: Swaran Ravindra from Fiji National University, who is the organizer of this session; Zakaria Yama from Morocco, who is a co-organizer of the session; Vashkar Bhattacharjee from Bangladesh; as well as Jacqueline Huggins, who is joining us from the Caribbean. So while we are waiting for them to join us online: this session is really about how persons with disability can get the best access to e-learning platforms, and the importance of e-learning being available to persons with disability across the world. And how can we make this possible? We’re going to talk about some of the pressing challenges pertaining to technology and accessibility that persons with disabilities face when accessing online content on major e-learning platforms; we in the Accessibility Standing Group actually have personal experience of that. We’re going to talk about supportive legislative frameworks, and how we can adapt strategies to assist from academia, the private sector, and government institutes, so that there’s much more inclusion when creating online platforms, because we know that if any online service is created accessibly from the start, it is much more effective and efficient and also a lot more cost-effective.
So I’m going to pass over to Vidya Y to talk a little bit about her personal experiences, both in the past as a young blind person navigating the education system and also about her current experience of e-learning through the Internet Society. So I’ll pass over now to Vidya. Thank you.

Vidya:
Hello, everyone. It’s my pleasure to be talking to you today. Thanks to the organizers for having me here, and thanks to Gunela. About e-learning platforms, I would like to talk a little bit about my own experiences with e-learning and also what I see working with children in India. I run a nonprofit called Vision Empower; we make STEM education accessible to children with disabilities. So I will be talking mostly from their perspective, and also about my own challenges growing up with a disability, specifically on e-learning platforms. I was born blind, so for the initial few years I didn’t have access to technology, mainly because of a lack of awareness; the technologies were there, but I was not using them. I got access to a computer only in grade 11, and since then, as we all know, it has opened huge opportunities. Till then, if I had to send a message or have any written communication with a person who can see, it had to be someone else typing it for me, or it had to be in Braille, which most sighted people do not know. So the first time I used e-mail was the first time I had access to written communication, the first time someone could read what I had written. So we know how huge the impact of the internet is on the lives of persons with disabilities. Even if you have to browse something independently, it’s all through the internet, and e-learning is no exception, because classrooms are already not very accessible, so a lot of things you have to come home and refer to. For example, when I was studying computer science, I would just go to the class and then come back home, and that’s about it; I had to find my own volunteers who could help me after classes.
Now, when you talk about e-learning, firstly, there are a few challenges, especially in subjects like STEM. A lot of the time, the content itself is not accessible; everything is designed in a way that a person with sight can understand. Take school textbooks, for example: a lot of things are like “look around, there’s a lot of greenery,” or “this is in the shape of a mountain.” A person who has never seen these things wouldn’t know what they’re talking about. The content itself is written in a way that persons without sight cannot understand easily. The second challenge is with formats, particularly in STEM. If you have to read a math equation, it has to be written in a specific format, like LaTeX, which a screen reader can read. But a lot of the time, if you just upload a PDF onto your LMS platform, it is not very accessible: if someone writes two squared, it reads “superscript something” or “subscript something,” things you don’t understand. If it is to read well, you have to write it in a way that is accessible. And thirdly, there are accessibility issues with the web platform itself. Sometimes there are unlabeled buttons; sometimes you cannot navigate, it just says “link” and you don’t know what the link is about. A lot of the time, if you open a PDF file, it just says “page one, page two” and you don’t know what’s on that page. And often the files are protected, so you cannot download them and read them later. So there are challenges with the content, there are challenges with accessibility, and with STEM it’s even more complicated: how do you put up charts or diagrams which a child or a student can understand? Everything has to have alt text, and there are a lot of challenges.
So, you know, these are the challenges that I had navigating some of these platforms, including when I was doing a course on the Internet Society platform; it was not very easy to navigate. All said and done, those are the accessibility-specific challenges, but one thing I also wanted to mention is that there’s much more than accessibility. Take the school education system in India, for example. When the pandemic happened, a lot of schools seamlessly shifted onto digital platforms, but it was not the case for these children and their teachers, because you can’t tell them to go to YouTube and see how to install or use Zoom, because everything says “click here,” and when you don’t use a mouse, that is of no value to you. So I had to make digital literacy tutorials in various languages for the teachers and students to use. We also have our own accessible learning management platform called Subodha. Now, some of the ground realities that I have seen getting the children and teachers onto these platforms go even a little beyond accessibility. One thing is making a platform accessible. The second thing is the digital literacy training that you have to give them. The third thing is ensuring that there is some mechanism to handhold the teachers, the students, or any new users with disabilities on the platform, because with so many challenges, it’s not easy to stay continuously motivated to get onto the platform, and after they get on, they encounter some other challenge. There needs to be somebody to handhold them and make it very comfortable. Even on the accessible platform that we have, teachers wanted other features: they wanted an app on the phone. So it’s very important to get their perspectives as well and make changes, because, as we say, nothing about us without us. So we need to involve them in
the process of making the platform accessible and handhold them so that they’re comfortable in the usage of these platforms. So these are some of my thoughts that I wanted to share.

Gunela:
Thank you very much, Vidya. There is a lot there to take on and consider from Vidya’s personal experience. I’ll pass over now to Jacqueline Huggins, who has experience supporting students at her university. So please go ahead, Jacquie.

Jacqueline Huggins:
Right. Hi. Well, from here, I’m saying good night. It’s exactly as was just said by the last speaker. What happens on our campus, though, is that we have a policy, and that policy is what is used to encourage lecturers and academic staff to do what is right for the student. Our department is almost like a watchdog: when a student who has a visual impairment, who is blind, is registered with the campus, we then work with that student, and we work with lecturers so that they understand why inaccessible content is such a serious problem. It is something that we always have to sit down one-on-one and speak to lecturers about. And we have students also speaking with the lecturers: this is what my need is, so the lecturer has a better understanding. We have had issues where students have to deal with graphs, students have to deal with calculations, and lecturers have to become creative. So sometimes we’re not even able to use the online platform; we have the lecturer and student talking it through, finding solutions that are not necessarily online. When COVID hit, that is when we really understood the challenges that our students with disabilities face, especially students who are blind and students who are deaf. And once we recognized it, our university management decided to provide laptops, because we hadn’t realized our students didn’t even have access to laptops or to the internet. The university came up with a plan where they worked with providers to provide internet access in areas where students did not have it. They also provided loans of laptops so students were able to utilize them. Then again, training was very important: training for some lecturers, training for some students. We had just assumed that students were able to navigate, and that was not the case.
So my department had to actually deal one-on-one with students to ensure that they were not left behind. We also had to deal with the attitudes of some lecturers. For instance, we had a student who is deaf, and the lecturer was using Blackboard; she asked him just to turn on captioning and he refused, so I had to intervene. You know, again, although we had a policy, we still depended on the will and the goodwill of lecturers and academic staff to do what needs to be done. I’m not sure if India has a national policy, but in Trinidad and Tobago, we don’t have a national policy. In fact, we are now at the stage where we have a draft disability bill, and hopefully when that is passed, our campus and our students will be able to navigate, will be able to be trained, will be able to have the type of access that they need to have. That’s it for me.

Gunela:
Thank you very much. And I think that we are naturally segueing into policies and legislation and where they fit. Swaran, I will ask you to maybe make some comments about that from your perspective, please.

Swaran Ravindra:
Thank you, Gunela. Thank you very much, Vidya. Thank you very much, Dr. Huggins. First of all, I wanted to say a big thank you in Fijian; it is also our Independence Day today. And, you know, it resonates with the topic we have today, because personally, as a citizen of the country, I do not feel that we can live a dignified life until each and every person in the country has access to the basic citizen-centric services that every other person has. And I think that the resilience that the people of Trinidad and Tobago have is just amazing. As Dr. Huggins has just mentioned, there is no national policy at the moment. Actually, I met Dr. Huggins three to four years ago, when I was a visitor to the University of the West Indies. That’s when I met this wonderful woman, and I personally learned a lot from her from that one meeting. One fundamental thing that I learned during that visit was that even though there’s no national policy, we need to have people who are continuously there as a support system. Along with Dr. Huggins, I also met some other people at the university who told me that though there wasn’t a disability policy, they had used other avenues, other legal instruments that were there, in terms of support for persons with disabilities. For example, the Education Act says education is accessible to everybody, and everybody means everybody; it also includes persons with disabilities. So there are people who firmly believe in inclusion as a basic fundamental human right, and they exercise it through other avenues, not just a Disability Act. If I were to shed some light on what happened in Fiji: we had a bill passed in government creating provisions for accessibility and for the rights of persons with disability in 2016, and in 2018 the act came into practice.
However, to date, we do not have anything written in legislation that says that persons with disabilities need to have access in every avenue, in terms of everything that is supposed to be there for a citizen: public amenities, social media platforms, places where people interact and meet, citizen-centric services, education, and many, many other avenues that most people enjoy seamlessly. So in Fiji, though we have the 2018 Act that says we need to create the provisions, it doesn’t explicitly say what those provisions should be or how to create them. There is nothing written that says you need to ensure that all your websites are accessible. So what I’ve been doing so far is, whenever I get an opportunity to speak to an audience about inclusion, I also talk to them about OH&S, Occupational Health and Safety. It is legislation of the country, and no organization can bypass it. So we are talking about having accessible entry points in a building, which is great, which is absolutely important. But at the same time, we are neglecting those people who are not there physically. They also need to have access to amenities; they also need to have access to the websites. An accessible website is still a very new concept in the Pacific, so I think we need to start working in that area.

Gunela:
Thank you very much. There is so much to do. Vidya, could you explain the Indian perspective on legislation and policies in regard to accessibility in education, and whether that policy and legislation have actually been implemented?

Vidya:
Yes, in the Indian context, the government is now trying to come up with the NEP, the National Education Policy, where they’re trying to make a lot of changes, and inclusion is considered one of the most important areas. A lot of people now are trying to move towards inclusive education rather than having special schools and a special education system for the visually impaired. It’s all there, but I’m sure it will take a lot of time to implement; still, the government has started thinking in the right direction. One thing about India is that while we were working with schools, we could not go to every school and get approval, so we are working directly with the state governments. We have MOUs signed with the state governments, and they send out circulars to all the schools in their states to follow our interventions. That’s how it’s been working. What I have seen is that India has many states, and in each state the policies are very different. In one state, special education, or education for persons with disabilities, will come under a separate department, like the Department of Social Justice; there are actually different departments for persons with disabilities, and in a few states education comes under one of those departments. But in a few other states, it comes directly under the Department of Education. These are two different departments, and there is no single policy throughout the country. Sometimes, when it is with the education department, the accessibility and awareness aspects are not very much there, because it is for general education. And even when it is under the special education department, a lot more needs to be done, though it is a little bit better. So all of these constraints are there. There is nothing like the whole nation following one thing.
It’s different for different states. But we did pass the Rights of Persons with Disabilities Act in 2016, in line with the Convention on the Rights of Persons with Disabilities. A lot needs to be done, but it has started; I’m not saying it’s still the same as it was a decade back, because the government is actually trying to make its websites accessible. There’s a long way to go, but it has started. So that’s where we are currently. And there needs to be something central for special education in the country, which right now is not there.

Gunela:
Yes, there is certainly a lot to do. One of the areas that we often talk about is universal design and its principles, to ensure that there is design from the start when it comes to how a platform is accessible for anyone. If we take the built environment, for example: a level entrance to a building instead of stairs is useful for persons using a wheelchair, but it’s also useful for someone pushing a pram or a delivery cart, and it’s not a special adaptation. That’s what we’d like to see more and more of in the online world. For example, here in this room we have captioning, and a lot of work has been done to ensure that there is captioning in these particular sessions. It’s essential for a person who has hearing loss, but it’s also really good for anyone who has a language other than English and needs to have what is being said confirmed, or who can catch up on particular facts through the captioning. So I’d like to ask Dr. Huggins for her thoughts about universal design and its principles in the online learning environment.

Jacqueline Huggins:
Just to clarify, we have a national policy from 2018; however, we don’t have any legislation to back that policy, so it’s like you have a policy but nothing is being done. Thankfully, the draft Trinidad and Tobago Disabilities Bill of 2023 will change that. Now, in terms of universal design, my personal thought is it can be done, and it is useful for everyone. In terms of our academic staff, I have met with some of them and tried to show them that, based on what they do and how they do it, it will allow any student to benefit from their delivery, allow any student to be able to do that assignment. One of the things we talk about is really the cost. For instance, my university was built 75 years ago, so how do we retrofit the physical environment? We have lecturers who started teaching many years ago, and this whole online and internet world is very new to them. So how do we change the way they think and understand, in terms of meeting the needs of every student within that classroom? That is something we continue to work on in terms of awareness. We do outreach; we meet with the organization on the campus that provides training for academic staff so that they have a sense of it. Websites: I am working on a campaign where we are trying to get every faculty’s website to be accessible. We have new things. I am not sure if you have heard of Canva, but we have some colleagues who love to use Canva. They love to put in pictures; they love to put in blocks; and when they do that, a student who is blind, their screen reader cannot read it. So it is a constant effort. You must have a watchdog; I call myself a watchdog at that campus. You must have a watchdog that looks and sees and recognizes, and then speaks out on behalf of students. We also work closely with our students: what are your needs? Once we have said, yes, we are taking you onto this campus, we must recognize and meet your needs.
We, my department, work very closely with the students that we serve. We are always liaising with the lecturers; we are always liaising with our deputy principal in terms of changes that must come. Our mantra is that we are going to create a campus without barriers, and that is what we work towards. Universal design is super important.

Gunela:
I like your term watchdog. I often use the term accessibility champion, and I would encourage any organisation to ensure that there is either a watchdog or an accessibility champion to keep reminding fellow staff, and the organisation generally, to ensure that there is accessibility and that it doesn’t slip away. Swaran, would you have any comments on that, please?

Swaran Ravindra:
I was just listening, and it’s totally remarkable. As Dr Huggins had previously said, I think it’s evident that legislation on its own is never enough, because even without legislation, these remarkable women have done so much work. They have come up with textbooks; they have come up with tertiary-level education. If I may make reference to Professor Harrington-Blake, who is in the Faculty of Education: I remember when I met her, about four years ago, she told me that no, we do not have legislation for persons with disabilities specifically, but we do have the Education Act, and it says everybody, and that did not stop her. That was actually something she utilised: the term everybody means every citizen of the nation, and that is what gave her enough of a legal basis to go ahead and create a tertiary-level qualification, a master’s degree or a postgraduate degree in inclusion, that teaches teachers how to make their classes inclusive. So I think this is enough evidence to say that legislation on its own is never enough. We do need the watchdogs; we need people who are there constantly ensuring that inclusion becomes part of our DNA. It needs to be part of our muscle memory, part of our everyday motto and mantra. Nobody should be left behind because somebody forgot to address the needs of a particular person. Just as the University of the West Indies has, we at Fiji National University also have a reasonable adjustment form, with which we meet a student, have a discussion, and go through a student counselling session. But the other obstacle we face in that area is that the right still remains with the student to decide whether they want to declare their disability. And many times we have these cultural norms, these societal norms, and challenges around that as well, because unless and until somebody declares the disability, there is not much that we can do to help.
That does become a barrier. If I could refer to a specific case: I remember teaching a student who exhibited traits, I wouldn’t say symptoms, but traits of a person who has a form of autism. I had some discussions with some other teachers, and they told me that it did seem to be autism, but we could not really put a finger on what particular type, and unless and until we can do that, we will not be able to create the special provisions that are needed. So that becomes an obstacle. When we tried to talk to her parents, the parents had a very aloof reaction. They said, no, my child doesn’t have a disability. For them, disability is something to be shunned, to be kept quiet about. It is something that would be embarrassing, and they feel that if anybody gets to know that the child has a disability, then it is not something to be proud of; it is something that would deter people from giving opportunities in the workforce as well. These are some of the obstacles we are facing. Now, FNU is an ISO-certified organization; we practice ISO 9001. And we have had situations; I remember there was a time when we had a participant in a short course program, she may have been in her early 50s, and she had superannuation; she was actually paying for the course through her superannuation. And there were people in the class who came and told me, madam, it’s rather dangerous to keep her in class, because she, well, they used rather disturbing terms, but what could have been the case was paranoid schizophrenia.
So I had other participants coming and telling me that she could be dangerous to keep in class. Then again, we have other legislation, about OH&S, where we need to protect every participant in the class, so sometimes we have legislation that seems to contradict itself. But there comes a point in time when, as in my case as a teacher, I had to stand my ground, and I had to say, no, my student has a constitutional right to be in this class, and if we are not creating the right provisions, then we are the ones not doing the right thing. Eventually we had a good discussion. This was back in 2006, and I remember we still kept that student in class; the fact that she was using her own superannuation was, I think, evidence enough that she was of sound mind, able to work and earn a living for herself. So there are many things that seem to contradict each other, but I think in cases like that we probably need another act that stands robust on its own: the requirement to create provisions for persons with disabilities, which was eventually enshrined within the 2018 Rights of Persons with Disabilities Act. The incident that I’m telling you about happened in 2006, so the only legal instrument I had in order to keep this student of mine in class was the fact that it is a basic, constitutional right to be in class. But of course, as in many less developed countries, as in many economies that are still developing, there will always be a huge gap between what the constitution says the citizens should have in terms of rights and what the legislation says about what happens when those rights are breached. So we need to focus on the gaps, and also find out how to address them.

Gunela:
There is a lot to unpack there. I think of the issue of cultural barriers, in terms of the general education community understanding what it means to have a different type of disability, and the shunning, the stigma in some cases. Vidya, do you have any comments about that, and also in terms of universal design?

Vidya:
Universal design? Yes. As I was already mentioning, sometimes it is the accessibility-specific issues that keep people from getting onto digital platforms, but sometimes it is also barriers like cultural norms and stigma. This happens in almost all villages. For example, there is one lady who stays next door to my house, and she rarely comes out of the house. I think she is almost 40 now; for 40 years she has been blind and kept indoors. So there are situations like that. I have myself tried to get some women onto digital platforms so that they can at least be connected to the community, and when I try to reach out to them, at the very initial stage there will be somebody at home picking up the call and not connecting me to them. So they do not even have that much freedom to get onto digital platforms. All of these barriers are definitely there. And sometimes it is also how we design the technologies; there are social considerations in how we want to appear, how we want to look. For example, take the simple example of a cane: some people are not comfortable taking it and walking with it because it looks very different. If there are audio-specific devices which are too big, or which are not very socially pleasing to carry in a social setting, then people will not like to use them much. The phone, on the other hand, is a very good example of universal design, because on the phone there is TalkBack and all sorts of accessibility features. When you want, you can turn them on; when you want, you can turn them off. Everybody carries a phone, and there is nothing that prevents you taking it out in a group or in a social setting.
So you will have to consider all of these barriers as well while designing e-learning experiences, and make them as inclusive and as socially acceptable as possible on whatever platform you design for. All of these factors have to be considered. And continuous support for people using the platforms is also a must. The government runs a lot of programs: they distribute laptops and many other devices, and other organizations distribute them to students too, with all the software installed and LMS platforms in place. But who is going to oversee whether the students, teachers, or whoever wants to use the platform are comfortable, whether they are using it, whether they are able to use it on a long-term basis? All of these will surely have to be considered along with accessibility issues.

Gunela:
Thank you, Vidya. I’d like now to bring in Zakaria Yama from Morocco, who is a co-organizer of this session and also on the leadership team of the Internet Society Accessibility Standing Group. Zakaria, could you make some comments about universal design principles too? Thank you.

Zakaria Yama:
Thank you, Gunela; thank you, everyone. As was said earlier, some institutions find it difficult to apply universal design and make it compatible with accessibility, even though both have the same goal: enabling access and reducing barriers for students. However, the scope and methods they use vary. Universal design focuses on a broader range of learners, while digital accessibility focuses essentially on learners with disability. But the good news is that what is good for persons with disability is also good for everyone. Take, for example, real-time captioning for persons with disability: it is also good for students without disabilities, for instance when they have difficulty understanding an instructor’s accent, or when they are watching a video in a loud environment. When applied with an accessibility mindset, universal design for learning often results in benefits for people beyond those in need of a specific accommodation. In my opinion, any institution should use its accessibility efforts as an opportunity to improve its universal design practices. Thank you.

Gonola:
Thank you very much, Zakari. Before we go on to talk about the broader concept of how the internet community can all work on making e-learning more accessible, I’d like to open the floor now to persons in the room and online if there are any comments or questions. Yes, we have one from Lydia Best. Please take the microphone.

Lydia:
Thank you very much. I’m Lydia Best and I represent the European Federation of Hard of Hearing People. I have a question not just around e-learning in the classroom itself, but also before it: students have to access the internet and the online resources teachers provide for them, be it assignments or whatever materials they need to use. What I have seen, and this is in the UK, is that the IT departments in schools often take a very heavy-handed approach to accessing the online resources. And that, in fact, is a barrier for those with cognitive impairments. Constantly changing the passwords, constantly changing the way to access, immediately stops students from accessing vital information and from being able to submit their assignments. And the problem is that nobody actually sees this as a problem, even when you raise it, because it’s seen as: this is simple, this is no problem for anyone, so why do you have a problem? And I think we need to address that as well. Thank you.

Gonola:
Thank you very much, Lydia. Who would like to take that question? Vidya or Dr. Huggins, Swaran, who would like to take that question?

Jacqueline Huggins:
What I would like to say is that, you know, I understand what was just said, but on my campus, again, my department works closely with IT. We listen to whatever complaints students have, and we take them to whichever quarters. So, for instance, we had students who could not afford the software that is needed, JAWS. What I did was work with my supervisor to gain funding, so that we were able to purchase four licenses, and we put them in each one of our libraries and computer labs, so our students were able to use them. IT was included so that it had an understanding of why we were using the software and why they need to support the students. So, it is also about finding the stakeholders who will listen, finding the stakeholders who will understand and ensure that what the student needs is what the student gets. There is some equipment that is very expensive, that our students cannot purchase, and therefore the university has that responsibility. And once the university takes on that responsibility, those who are involved in ensuring that it happens, like our IT unit, are definitely brought on board. So a lot of what we do takes meeting and talking and negotiating, which shouldn’t be necessary; it should simply be: this is what needs to be done. But it takes some of that to ensure that the students are not frustrated, that they are able to come on campus and do what they need to do.

Gonola:
Thank you. Any other comments to that question?

Swaran Ravindra:
I just wanted to clarify with Lydia once more. Is your question around the need to constantly change your passwords, or that there are too many authentication processes that make it cumbersome for a person with a disability to continue working? Is it something around that? If Lydia could please clarify, I’m just trying to understand.

Lydia:
It’s Lydia speaking; yes, that’s correct. So that is even before you go online to participate in your online learning. And I’m not going to talk about captioning because it’s been said already, but it is actually about accessing the vital materials: students have to get into the online library where the teachers put the assignments, and students get chastised for not finishing or finalizing the work when they literally could not remember their passwords. And when raising it, it’s a constant battle of working with IT to make them understand: you can’t keep changing those passwords, you can’t keep ramping up security, because it creates a barrier for the students. And I have seen it first hand with my son. Thank you.

Swaran Ravindra:
Thank you, Lydia. I think that’s a very valid point. Additionally, students should never be penalized for the extra time they require to log into the system. The assessment should, in fact, start from the moment the student has accessed the main curriculum. If you look at certain IT exams, for example CCNA, Cisco exams, Checkpoint exams, Microsoft exams, you’ll see that you are assessed only for the time that you are actively online. And if there is any sort of technical issue, whatever time the technical issue takes will not be held against you; you will be compensated for that time. That’s one part of the equation. Now, of course, it’s very important for us to be cyber resilient in today’s world; I cannot emphasize that enough. However, there are so many easier authentication methods that are specific to the person: thumbprints, retina scans, face recognition. They are secure, there is really no way of bypassing them, and they are easier as well. So I fail to understand why they would impose such difficult types of authentication, waste the students’ time, and make it such a deterrent that the student would not even want to go back to class. So maybe you should really advocate for this.
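The mechanism Swaran describes, counting only a student's active time and compensating for outages, can be sketched in a few lines. This is a minimal illustration, not any real exam platform's API; the class name and hooks are hypothetical, and it assumes the platform can signal when a technical issue begins and ends. The clock is injected so the logic is easy to test.

```python
class AssessmentTimer:
    """Illustrative sketch: count only a student's active time, pausing
    during technical issues so outages are not held against them."""

    def __init__(self, clock):
        self.clock = clock      # callable returning seconds (e.g. time.monotonic)
        self.active = 0.0       # accumulated active seconds
        self._started = None    # timestamp of the current active stretch

    def start(self):
        self._started = self.clock()

    def pause(self):
        # Call when a technical issue begins: bank the active time so far.
        if self._started is not None:
            self.active += self.clock() - self._started
            self._started = None

    def resume(self):
        # Call when the issue is resolved.
        self.start()

    def elapsed(self):
        if self._started is not None:
            return self.active + (self.clock() - self._started)
        return self.active


# Example with a fake clock: 10 s of work, a 15 s outage, 5 s more work.
now = [0.0]
timer = AssessmentTimer(lambda: now[0])
timer.start()
now[0] = 10.0
timer.pause()           # outage begins
now[0] = 25.0
timer.resume()          # outage ends: those 15 s are not counted
now[0] = 30.0
print(timer.elapsed())  # 15.0
```

The same idea applies regardless of how "outage" is detected: only the banked active time is assessed, exactly as with the timed IT certification exams mentioned above.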

Gonola:
Thank you very much, Swaran. And there is another question or comment here in the room.

Anna:
Hello, my name is Anna. I’m from Brazil and I work in a children’s rights organization. I would like to hear a little more about Vidya’s work with children, and if you can comment on the role of civil society in promoting their rights. You talked about guaranteeing accessibility from the design stage, which is what we defend for children too. But I want to hear your thoughts on how we can do and promote this if we don’t have the platforms involved in this debate, or if we don’t have persons with disabilities working in those places to think about and promote these accessible ways. What are your thoughts on that?

Gonola:
That’s a long question, and it could have a very long answer, and I think we can make it part of the rounding off of this session, about how the internet community can encourage collaboration across the globe to make learning more accessible for persons with disabilities, and certainly children with disabilities. So I’ll pass over now to Vidya.

Vidya:
Yes, so to answer the question that you asked earlier: when you’re talking about children, a lot of times children do not know what they want. So it should be persons with disabilities who have grown up in similar circumstances, who have gone through the system, who say what the children need. Because what I have seen is that whenever you take any new technology to a child, they are very open-minded, they are not biased, they have not grown up yet, so they don’t have their own assumptions. Whenever you take them something new, they pick it up really, really quickly. So I don’t see why a child who is introduced to computers, to braille, to technology, who knows everything right from grade one, won’t be able to compete with everyone else when they reach grade eight or nine. They can do very much everything on par with everybody else. So that’s what we are trying to do: right from a very early age, everything that a child with sight has access to, we are trying to make available for children without sight as well. And I feel the nonprofit organizations have a huge role because they are the bridge between the government and the community; they know the ground realities of working in this space. So it’s very much essential for the nonprofit organizations to be that bridge and to play their role very effectively. Also, as an internet community, I feel that forums like these, where people with expertise in different areas share their thoughts, network, and actually work out the pressing needs that the community at large has, and then follow up with the networks we make here, can create a meaningful impact together. No one can do it individually.
So I feel forums like these and the internet community have a huge role to play, and it takes time. So it’s a good starting point.

Gonola:
Thank you very much, Vidya. It’s so important to hear from a person with lived experience, and about the pathway Vidya took to become the global advocate she is now. I’ll now pass on, in the last few minutes, just very briefly, to Dr. Huggins, to give some thoughts about encouraging collaboration across the globe, which we’ve already heard about through all those experiences from various different countries. How can we continue that collaboration to make e-learning more accessible? Dr. Huggins.

Jacqueline Huggins:
I certainly want to agree about forums like these, because this is where I learn, and this is what I take back to my university and try to get implemented. In this organization there’s a wealth of knowledge, a wealth of experience, and we cannot stop; we need to continue. So, for instance, one of our campuses is fully online, and it covers 13 countries in the Caribbean, and students are able to get their degrees. I believe that if we utilize a system like that, little by little, we spread it. I am talking to somebody from India; I’m talking to somebody from Fiji. We learn from each other, we put together the best practices, and we start to utilize whatever we learn in these forums. It’s not a talk show: we’re going to take back some thoughts, we’re going to take back some action. And I think, little by little, if we stay with this and stick together, we can get it done. It’s going to take some time, like she said, but it’s not impossible.

Gonola:
Thank you very much. And I will give a final word to Swaran Ravindra, please.

Swaran Ravindra:
Thank you very much. So finally, through all this conversation, there’s something I wanted to talk about: affirmative action. We can talk about these things, and last year I met someone at APrIGF who mentioned he had been saying the same thing for the past 10 years. I think it’s time for affirmative action, and we can do it together, right? Some of the things that could help are, first of all, a disparity measurement. We cannot talk without having proper measurements in front of us; governments and economies will not listen to us until we have intellectual property that is based on disparity measurements. Basically it’s just a simple measurement of how many people are digitally included over how many people are not. Then there are standards, like the world-renowned WCAG standard; at the least we could try with 1.0, even in places where digital inclusion has never been done, so we could have the Web Content Accessibility Guidelines 1.0 to start off with. And one initiative I wanted to speak to you about is UNESCO’s ROAM-X indicators, an Internet universality indicator assessment. We are currently doing this for five Pacific island nations, and basically it is based on human rights principles: an internet that is based on human rights, that is open, that is accessible to all, and that is nurtured by multi-stakeholder participation, as well as some cross-cutting issues like children, gender, security, and the economy. So this is quite an interesting study, and I’m actually part of it. If anybody would like to talk to me about how you could do this, I’ll be happy to address your questions later on. That’s all I had to say. Thank you very much.
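The "disparity measurement" Swaran describes is, as she says, a simple ratio of digitally included people to digitally excluded people. A minimal sketch follows; the function name and inputs are illustrative, not an official metric, and real measurements would define "included" through survey criteria (access, affordability, skills) rather than a single count.

```python
def disparity_ratio(included: int, excluded: int) -> float:
    """Digitally included people divided by digitally excluded people.

    Illustrative only: a value above 1.0 means more people are included
    than excluded; tracking it over time shows whether the gap is closing.
    """
    if excluded <= 0:
        raise ValueError("need a positive excluded count to form a ratio")
    return included / excluded


# Example: in a population of 1,000, 600 people are digitally included.
print(disparity_ratio(600, 400))  # 1.5
```

However the counts are gathered, the point made in the talk stands: without a number like this in front of them, governments have nothing concrete to act on.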

Gonola:
Thank you very much, Swaran, and thank you very much to the panel and to the audience for your questions. I think we have learned a lot, and we look forward to further collaboration across the globe. Thank you very much, everyone.

Jacqueline Huggins:
Thank you and goodbye.

Swaran Ravindra:
Sorry, can we just take a photo quickly, please?

Anna

Speech speed

131 words per minute

Speech length

125 words

Speech time

57 secs

Gonola

Speech speed

120 words per minute

Speech length

1296 words

Speech time

649 secs

Jacqueline Huggins

Speech speed

152 words per minute

Speech length

1522 words

Speech time

599 secs

Lydia

Speech speed

158 words per minute

Speech length

349 words

Speech time

132 secs

Swaran Ravindra

Speech speed

193 words per minute

Speech length

2448 words

Speech time

761 secs

Vidya

Speech speed

166 words per minute

Speech length

2604 words

Speech time

944 secs

Zakari Yama

Speech speed

109 words per minute

Speech length

197 words

Speech time

108 secs

Advancing rights-based digital governance through ROAM-X | IGF 2023


Full session report

Alexandre Fernandes Barbosa

Brazil has successfully implemented the Internet Universality Indicators framework, drawing on almost two decades of national data production, which demonstrates the importance of data production in assessing internet universality. Despite the framework’s extensive range of indicators, the scope of its application necessitates the collection of comprehensive and up-to-date data.

However, one significant hurdle in utilizing the framework is the existence of a data gap in many countries, which prevents a thorough assessment of internet universality. Without the required data, these countries are unable to effectively evaluate their progress in achieving the goals outlined in the framework. This highlights the need for increased data production and availability to ensure accurate assessments.

The implementation of the Internet Universality Indicators framework has facilitated multi-stakeholder dialogue, providing an opportunity for different actors, including policymakers, civil society, and the private sector, to contribute their perspectives and insights. Continuous engagement of these stakeholders is crucial for effective e-government systems and the development of tangible outcomes.

Brazil serves as a notable example of the positive impact of multistakeholder dialogue, with the creation of important legislation such as the Brazilian General Data Protection Law (LGPD), the law of access to information, and the Internet Bill of Rights. These outcomes underline the potential of multistakeholder dialogue to drive meaningful changes in governance and policy-making.

Furthermore, the relevance of specific stakeholders has not significantly changed, emphasizing the continued importance of involving government, technical community, civil society, and the private sector in discussions and decision-making processes.

UNESCO has played a vital role in fostering dialogue and cooperation, particularly in the context of internet universality. Working closely with UNESCO, individuals such as Barbosa appreciate the organization’s efforts in building capacity and raising awareness among member states. This collaboration has resulted in significant progress, with a considerable number of countries completing assessments and demonstrating commitment to achieving the goals of the framework.

However, one area of concern is the existing data gap, particularly in countries from the global south. It is crucial to address this gap as it hampers the ability to comprehensively assess internet universality and implement necessary measures in these regions.

In conclusion, the Internet Universality Indicators framework provides a comprehensive understanding of the significance of data production, multi-stakeholder dialogue, and periodic assessment in ensuring progress towards internet universality. The successful application of this framework by Brazil highlights its effectiveness in driving positive outcomes. However, the data gap remains a challenge, and further efforts are needed to bridge this gap, particularly in global south countries. Overall, the framework’s implementation has contributed to a greater understanding of the importance of collaboration, assessment, and capacity building in advancing internet universality.

Audience

During the discussion, both the speaker and the audience showed a keen interest in the multi-stakeholder dimension of the indicators and in whether any new indicators have emerged in the last five years. The primary question raised was whether new indicators now exist in this domain.

Multi-stakeholder participation was deemed a noteworthy dimension of the indicator framework and relevant to the topic under discussion. While specific details were not provided, it can be inferred that multi-stakeholder engagement plays a vital role in how the framework assesses a country’s internet environment.

It was emphasised that the existing list of indicators already encompasses multi-stakeholder involvement, suggesting that such indicators are recognised and widely accepted within the field. The discussion aimed to identify whether any novel indicators had emerged in the last five years, indicating advancements or changes in this area.

The audience also expressed curiosity about any modifications or developments that may have taken place around multi-stakeholder participation. Specific supporting facts or evidence to address their questions were not mentioned; nonetheless, their curiosity reflects a general interest in staying up to date with the latest advancements in the field.

Given the neutral sentiment expressed by both the speaker and the audience, no definitive conclusions were reached. However, the main question about the emergence of new indicators implies a desire for further exploration and potential expansion of knowledge on the subject.

Speaker 1

Five years ago, the Internet Universality Indicators received endorsement from UNESCO’s Intergovernmental Council of the International Programme for the Development of Communication. During a recent forum, the speakers emphasized the necessity of continuous transformation and improvement of these Indicators. They highlighted the need for shared insights, strategies, and identification of areas that require enhancement.

The speakers recognized the lessons learned and challenges faced over the past five years, which have strengthened the importance of constantly evolving and adapting the Indicators. They stressed the significance of collaboration and collective action in shaping and refining these guidelines.

Furthermore, the speakers emphasized the value of collective efforts and the exchange of experiences, obstacles faced, and strategies for success. They hoped that the discussions held during the forum would result in tangible benefits for all stakeholders involved in the ROAM-X framework, an important aspect of the Indicators.

Overall, the speakers concluded that the continuous evolution of the Internet Universality Indicators is crucial in ensuring their relevance and effectiveness in addressing the ever-changing digital landscape. They urged a collaborative approach, encouraging stakeholders to work together to shape these Indicators and improve the digital policies related to them. This united effort is expected to lead to practical and positive outcomes for all parties involved.

Anja Gengo

The Internet Governance Forum (IGF) featured discussions on various topics related to Internet governance. One notable highlight was the recognition of the Dynamic Coalition, an independent and autonomous entity, for its successful engagement of stakeholders worldwide. The coalition has played a crucial role in promoting indicators and monitoring their implementation since their adoption in 2018. This engagement has yielded significant results, underscoring the value of their efforts.

Another key point addressed was the need to involve stakeholders from underrepresented countries in global Internet governance processes. The IGF Secretariat has prioritised outreach to engage stakeholders from countries that have traditionally had limited participation in these processes. This approach has proven effective in incorporating active participation from nations such as the Maldives, previously underrepresented in global Internet governance initiatives. The argument presented is that engaging stakeholders from a diverse range of countries is essential for achieving a more inclusive and comprehensive approach to Internet governance.

Furthermore, the speakers emphasized the importance of upholding the highest humanitarian values in the digital world. They highlighted the disparity in how different jurisdictions interpret social media posts, with some considering them exercises of freedom of expression while others penalise them with imprisonment or fines. The call to uphold humanitarian values implies the need for the digital world to strike a balance that respects freedom of expression while safeguarding the well-being of individuals and communities.

Additionally, it was noted that there has been a proliferation of national laws regulating artificial intelligence since the onset of the pandemic. Prior to the pandemic, only a few national jurisdictions had laws pertaining to artificial intelligence. However, in the post-pandemic era, there has been a significant increase in the number of such laws. This observation highlights the growing recognition of the importance of effectively regulating and governing the use of artificial intelligence technologies.

The speakers also stressed the importance of adopting a methodological approach to stakeholder engagement. The IGF Secretariat presently focuses on engaging stakeholders from underrepresented countries, ensuring a multi-stakeholder and multidisciplinary approach. This methodical approach is seen as essential for fostering more diverse and inclusive discussions on Internet governance.

The relevance of early assessments and the need for expanding outreach were also brought to the fore. The COVID-19 pandemic has brought about significant changes in the legal landscape, necessitating a reevaluation of existing assessments. Moreover, efforts must be made to ensure that assessments and outreach are inclusive and comprehensive, without jeopardising the global nature of the Internet.

The speakers also emphasised the need to engage stakeholders from different backgrounds and perspectives in dialogues and processes. They shared an anecdote about a Tanzanian judge who did not fit into a standard stakeholder category, highlighting the importance of recognising and including diverse voices. The initiation of a parliamentary track in 2019 reinforces the need to address recognised gaps in stakeholder group representation. Therefore, efforts to actively engage stakeholders who are not participating within certain stakeholder groups are crucial.

Furthermore, the speakers stressed the necessity of active participation from high-ranking individuals in various domains, particularly those that are currently underrepresented. The absence of medical professionals in privacy-related discussions and individuals from the car industry, particularly at the highest management levels, was highlighted. This observation suggests that the perspectives of individuals with expertise and decision-making authority in these fields should be actively sought to ensure that Internet governance discussions are well-informed and effectively address critical issues.

Lastly, the speakers underscored the significance of promoting and implementing UNESCO’s Internet Universality ROAM-X indicators. These indicators are considered essential for guiding and assessing Internet universality, ensuring that the Internet is used for the benefit of all individuals and societies. Both the Dynamic Coalition and the IGF Secretariat expressed support for these values, with an emphasis on cooperation between UNESCO and the IGF for successful implementation.

In conclusion, the discussions at the IGF covered a range of topics related to Internet governance, including stakeholder engagement, representation, regulation of artificial intelligence, the importance of humanitarian values, and the implementation of UNESCO’s Internet Universality ROAM-X indicators. Throughout the discussions, the importance of inclusivity, comprehensive assessments, and active participation from diverse stakeholders was consistently emphasised.

David Souter

David Souter proposed a holistic approach for assessing Internet Universality Indicators (IUIs). These indicators, based on the concept of Internet universality developed in 2013, focus on rights, openness, accessibility for all, and multi-stakeholder engagement. Souter pointed out that many countries have concentrated solely on the core indicators and advocated for a review to address this issue.

Souter stressed the importance of diversity within the research team and advisory board when using IUIs. He highlighted that a diverse team helps avoid political pressure and vested interests. Moreover, diverse expertise within the team leads to a more impactful output. Including multiple perspectives ensures a comprehensive analysis and enables the project to benefit from a wide range of insights.

Additionally, Souter emphasized the need to prioritize practical interventions over ideal ones in the national context. The goal of IUIs is to identify realistic interventions that can be implemented effectively. Recommendations should be feasible and achievable within specific national contexts. This pragmatic approach ensures that IUIs can effectively promote Internet universality.

Souter criticized member countries for solely focusing on core indicators. He argued that this approach overlooks the opportunity presented by non-core indicators. By narrowing their focus, countries may neglect important aspects of Internet universality and fail to address crucial issues. Souter’s analysis underscores the necessity of adopting a comprehensive and inclusive approach when utilizing IUIs.

In conclusion, David Souter’s analysis highlights the significance of a holistic assessment approach for Internet Universality Indicators. This approach encompasses diversity within the research team and advisory board, prioritization of practical interventions, and consideration of non-core indicators. Employing this approach enables countries to gain a more comprehensive understanding of Internet universality and actively work towards creating a more inclusive and accessible digital environment.

Lutz Möller

The analysis of the given statements highlights several key points pertaining to internet ecosystems and their influence on societal discourses. One speaker highlights the rapid expansion of dominant social media platforms, noting the fundamental changes observed in these platforms. This speaker also emphasizes the influence of these platforms on the visibility of different political views and the concerning increase in the spread of disinformation.

Another speaker emphasizes the necessity of strengthening internet ecosystems in a more democratic and nonprofit manner. The speaker acknowledges the growth of artificial intelligence (AI) manipulation and repression, as well as the growing influence of private business interests in public discourse. The argument here is to establish internet ecosystems in a way that prioritizes democratic values and ensures a level playing field for all participants.

Additionally, the use of Internet Universality Indicators (IUIs) is praised for providing a comprehensive viewpoint of whether internet policies adhere to principles of human rights, openness, access, and stakeholder participation. The evidence points to Germany’s experience with IUIs, which generated brutally honest evidence regarding internet policies. It is highlighted that IUIs play a pivotal role in highlighting the delicate balance between the right to privacy and freedom of expression.

However, there are concerns raised about the number of IUI indicators, with a suggestion that there should be a stronger focus on key areas and topics. The feasibility and practicality of certain indicators are questioned, as well as issues surrounding data availability and operationalization. Despite these concerns, the general sentiment remains neutral toward the number of IUI indicators.

Additionally, the analysis highlights the crucial role of a multi-stakeholder advisory board in the IUI process, particularly when it comes to effectively communicating results to political stakeholders. The evidence provided is Germany’s successful experience with a multi-stakeholder advisory board in the IUI process. This highlights the significance of involving various stakeholders in decision-making processes to ensure transparency and accountability.

In conclusion, the analysis of the statements highlights the rapid expansion and influence of social media platforms on societal discourses. It emphasizes the need for democratically driven and nonprofit internet ecosystems to counterbalance the growing influence of private business interests. The use of IUIs is regarded as an effective tool for assessing internet policies’ adherence to human rights principles and stakeholder participation. However, there are concerns about the number of indicators and the practicality of certain measures, as well as the importance of multi-stakeholder involvement and effective communication with political stakeholders. Overall, these insights contribute to a better understanding of the complexities surrounding internet ecosystems and their impact on societal discourses.

Simon Ellis

The analysis focuses on the Internet Universality Indicators (IUI) framework, which offers a unique holistic approach to assessing the internet infrastructure and usage of countries. Instead of providing a single definitive answer, it encourages countries to answer a set of questions, resulting in a comprehensive picture of their internet landscape. This approach is viewed positively as it allows for a more nuanced understanding of the internet in different countries.

Follow-ups are considered an important aspect of IUI assessments. The analysis highlights the first follow-up assessment, conducted in Kenya by Grace Githaiga. However, the nature of reporting and the frequency of IUI assessments are being questioned, suggesting the need for further examination of this aspect.

The inclusion of new themes in IUI assessments, such as AI, environment and sustainability, and cyber security, is supported. These emerging themes are seen as crucial considerations in evaluating the state of the internet and its impact on society. This demonstrates the dynamism and adaptability of the IUI framework to address current and evolving challenges.

E-waste and satellite connectivity are identified as significant issues in Southeast Asia and the Pacific. The analysis notes that Southeast Asia has become a dumping ground for e-waste from Europe and North America, highlighting the environmental and sustainability concerns associated with improper e-waste disposal. Additionally, the geographical challenges in the Pacific region make satellite connectivity the only viable option, underscoring the importance of addressing this issue for improved internet access in these areas.

Another important point raised in the analysis is the need to define the concept of multi-stakeholder participation. The analysis suggests that true multi-stakeholder involvement goes beyond mere attendance at meetings and emphasizes the importance of active engagement and meaningful inclusion of stakeholders’ inputs in decision-making processes. This understanding is crucial for fostering genuine collaboration and effective governance in the digital realm.

The analysis also stresses the necessity of achieving real participation in multi-stakeholder initiatives. It highlights the observation that in e-government systems, inputs from civil society representatives are often disregarded or their usage remains unknown. To address this issue, it is crucial to analyze what meaningful and effective participation looks like and how it can be captured in order to establish inclusive and participatory digital governance. Furthermore, the analysis mentions the role of new actors on the internet. It notes that police involvement in internet-related matters has been observed in recent maps, indicating the increasing influence of new actors in the digital space. This development raises questions about the implications and potential challenges associated with the involvement of these actors.

The analysis also brings up the noteworthy observation made by Simon regarding the importance of indicators related to training for judges and lawyers. Simon considers it interesting and important, suggesting that adequate training in legal matters pertaining to the internet is crucial for maintaining peace, justice, and strong institutions. This observation highlights the need to prioritize the training of legal professionals in digital issues to ensure fair and effective dispute resolution and legal processes in the digital era.

Finally, the analysis mentions Simon’s approval of the assessment and his anticipation of a new version related to the global digital compact. This indicates support for the assessment process and the belief that it can contribute to advancing global digital cooperation and achieving the goals outlined in the global digital compact.

Overall, the analysis provides valuable insights into the Internet Universality Indicators (IUI) framework, its various aspects, and its implications for assessing and improving the internet infrastructure and usage. It highlights the importance of continuous evaluation, the inclusion of new themes, addressing specific challenges, and achieving meaningful multi-stakeholder participation in fostering a sustainable and inclusive digital landscape.

Marielza Couto e Silva de Oliveira

The Internet Universality ROAM-X framework, which focuses on the principles of the Internet, needs to be revised to keep pace with the rapidly evolving digital governance and technological landscapes. One argument proposes that the ROAM-X indicators should be strengthened and potentially expanded to include new dimensions like child data protection, mental health, and AI toxicity levels, in order to better address the challenges and implications arising from these areas.

The argument stems from the potential of ROAM-X indicators to serve as a critical mechanism for monitoring adherence to principles in the upcoming Global Digital Compact. By incorporating child data protection, mental health, and AI toxicity levels, the framework can enhance its effectiveness in promoting good health and well-being, quality education, gender equality, and industry innovation and infrastructure, all outlined in the relevant Sustainable Development Goals (SDGs).

It is important to note, however, that many national teams applying ROAM-X face research obstacles due to a lack of disaggregated data, which limits the visibility of the indicators. Despite this challenge, stakeholders believe that tightening the ROAM-X indicators and expanding their scope is essential to keep up with the evolving technological and governance landscapes.

To ensure a successful update of the ROAM-X framework, active participation, collaboration, and continued engagement of stakeholders are crucial. The Internet Universality Indicators Dynamic Coalition has proven to be an effective platform for exchanging expertise and experiences in this regard. Stakeholders, who possess an on-the-ground understanding of national needs, research difficulties, and emerging themes, play a valuable role in shaping the future of the ROAM-X framework.

In conclusion, the Internet Universality ROAM-X framework requires revision to adapt to rapidly changing digital governance and technological landscapes. Strengthening and potentially expanding the ROAM-X indicators to include areas like child data protection, mental health, and AI toxicity levels is proposed. The successful update of the framework relies on active participation, collaboration, and ongoing engagement of stakeholders. The Internet Universality Indicators Dynamic Coalition facilitates knowledge exchange, while stakeholders provide valuable insights into national needs and research challenges.

Moderator – Tatevik GRIGORYAN

The meeting on UNESCO’s Internet Universality ROAM-X Indicators was attended by participants from various parts of the world who joined online. Notably, Dr. Lutz Moeller joined the meeting early in the morning, demonstrating dedication and commitment. Despite the inconvenient times, participants were acknowledged and thanked for their valuable contributions.

The meeting included individuals who played a significant role in the development and progress of the ROAM-X Indicators, showcasing the importance of their expertise and insights. It was mentioned that Tatevik Grigoryan, the meeting’s moderator, was sitting next to these individuals, further illustrating their involvement and importance in shaping the indicators.

Due to unavoidable circumstances, the assistant director general for Communication and Information at UNESCO could not attend the meeting in person. However, a video message from the assistant director general was played, indicating their commitment to the meeting and the subject matter.

The meeting emphasized the principles of internet universality, which is the official position of UNESCO. This position entails upholding the rights of individuals, ensuring openness, promoting accessibility for all, and fostering multi-stakeholder participation. The meeting highlighted the multi-stakeholder approach to internet governance, which is also promoted by the Internet Governance Forum.

The ROAM-X IUI assessment, considered a unique global tool, is currently being implemented in 40 countries. These assessments aim to inform policymakers and contribute to the development of digital strategies, laws, and regulations. It is worth noting that six out of the 40 countries have already published a report based on the assessment.

The ROAM-X IUI assessment not only aids in the development of the internet at the national level but also supports the achievement of the Sustainable Development Goals. It aligns with the Global Digital Compact, emphasizing the significance of this assessment framework as a comprehensive and holistic approach to internet development.

The meeting also discussed the ongoing revision of the framework. Considering that the ROAM-X IUI assessment is currently being implemented in 40 countries, it is imperative to incorporate topics and lessons learned from the implementation process into the revised framework.

Throughout the meeting, Tatevik Grigoryan expressed appreciation to the panelists and steering committee members of the dynamic coalition. This dynamic coalition has been supportive and actively engaged in various initiatives related to the ROAM-X framework.

In her closing remarks, Grigoryan reflected on the insightful discussion and offered speakers an opportunity for final thoughts. The absence of audience questions during the meeting indicates that the discussion was well-structured and kept on schedule.

Furthermore, Grigoryan highlighted the contributions and dedication of her team, specifically mentioning the work of her colleagues, Karen Landa and Camila Gonzalez. Their involvement and efforts were recognized in advancing the work on Internet universality.

Finally, Grigoryan expressed her interest in carrying on the tradition of taking a family photo. This indicates a sense of continuity and fosters a collaborative and unified spirit among the participants.

In conclusion, the meeting on UNESCO’s Internet Universality ROAM-X Indicators brought together diverse participants to discuss and emphasize the principles of internet universality. The ROAM-X IUI assessment, as a global tool, plays a crucial role in the development of the internet at the national level and supports the achievement of the Sustainable Development Goals. The ongoing revision of the framework reflects the commitment to continuous improvement and learning from the implementation process. The panelists, steering committee members, and Grigoryan’s team were appreciated for their contributions and engagement. The meeting concluded on a positive note, highlighting the importance of continuity and unity among participants.

Session transcript

Moderator – Tatevik GRIGORYAN:
Hello everybody who is here in the room with us, and to those who joined online, and a special thank you to all the people joining at a very inconvenient time. I know it’s 4 a.m. in Europe, and my colleagues are there online, and also we have a speaker online, Dr. Lutz Moeller, who is with us at such an early hour, so thank you so much. So my name is Tatevik Grigoryan, and I work for UNESCO, for those of you who just joined us, and I work on UNESCO’s Internet Universality ROAM-X Indicators, and I’m really honored to be sitting next to people who were at the cornerstone of developing the indicators and then supporting the launch and progress of the indicators, who will be sharing their thoughts on the process, and then on the progress, as well as further updates. So I would like to start with a video message from UNESCO’s Assistant Director General for Communication and Information, who unfortunately couldn’t be here with us, but he sent a video message which I would now like to request the technical team to play. Thank you.

Speaker 1:
Distinguished participants, esteemed colleagues, and honorable guests, I am delighted to extend a warm welcome to all of you at the Dynamic Coalition on Romex Indicators session, which takes place during the Internet Governance Forum 2023 in Kyoto. As we gather today, we are surrounded by passionate individuals who share a common vision, an Internet ecosystem that upholds rights, embraces openness, fosters accessibility, and evolves through the collective efforts of its stakeholders. Personally, I regret not being able to join you physically in Kyoto due to a scheduled conflict with the UNESCO Executive Board meeting in Paris, which I need to participate in. As the UNESCO Assistant Director General for Communication and Information, I had the privilege of attending the previous editions of IGF, including the last two ones held in Poland and in Ethiopia. This platform has consistently proven invaluable for fostering meaningful discussions about the Internet’s pivotal role in our digital age. Today, our focus is on the ever-evolving landscape of Internet governance and the ongoing refinement of the Internet Universality Romex Indicators. Our gathering represents more than just a dialogue. It is a call for collective action. Five years have passed since the endorsement of the Internet Universality Indicators by UNESCO’s Intergovernmental Council of the International Program for the Development of Communication. During this time, we have witnessed the transformative power of these indicators in shaping national digital policies. Yet, the lessons learned and the challenges faced over these years underscore the need for continuous evolution and adaptation. As you mark this five-year milestone, we are actively engaged in refining the framework to ensure its continued relevance in our ever-evolving digital world. I urge each one of you to draw upon the collective wisdom of this forum. Share your insights, your strategies for success, and also the obstacles you have faced. 
I further encourage you to highlight the framework’s strengths and identify areas that need enhancement. Let’s ensure that our deliberations here translate into tangible benefits for all stakeholders of the Romex framework. I thank you all for your unwavering commitment and active participation in this pivotal session at IGF 2023. Let’s work together in shaping an Internet that genuinely serves the interests of all. Thank you for your kind attention.

Moderator – Tatevik GRIGORYAN:
I thank our Assistant Director General for Communication and Information for sending this message and for the leadership in this process. Without any delay, I would like to present our first speaker, David Souter, who is referred to as the architect of the IUI ROAM-X framework. Personally, I call people who have been at the cornerstone co-parents of the framework. I would like, David, to request you to please talk about the process of developing the indicators and their progress, and then, as we are approaching this five-year mark and planning to ensure the continued relevance of the indicators, to speak about what direction we should move towards. Thank you very much. Thank you.

David Souter:
I should say, firstly, I should apologize for the fact that I have to leave for another session which begins at quarter past three, so when I get up and walk out, it’s not a gesture of protest or anything like that. It’s just I need to move to something else. But I thought I’d give you a kind of origin story of the IUIs, Internet Universality Indicators. They stem from a concept of Internet universality that was devised by Guy Berger when he was working for UNESCO back in 2013 before the 10-year review of the World Summit. In fact, I remember him walking up to me at a UNESCO conference at that time and presenting me with this and saying, what do you think of this proposal for universality approach based around the four tenets or four principles of rights, openness, accessibility for all, and multi-stakeholder engagement? The idea emerged eventually from that concept when it was taken up by UNESCO formally of having an indicator framework which was modelled along the lines of one of the existing UNESCO indicator frameworks, the Media Development Indicators, on which I’d also worked in the past. So the indicator framework should be one that would include quantitative and qualitative assessment. So it wouldn’t just be about numbers. It would be one that would support national researchers to assess their national performance, but it wouldn’t be intended to compare one country against another. It would be about looking at the country itself internally. And it would aim to identify practical interventions that could improve Internet performance in relation to those principles of rights, openness, accessibility, and multi-stakeholder engagement. Principles, practical interventions, developed through dialogue amongst national stakeholders, so bringing together the diverse communities which were engaged within the Internet. 
I ended up leading the development of this indicator framework in association with APC and with my colleague, Anri van der Spuy, who’s in the room at the back. So the aim was always to build a large data set, and it is a very large data set presented within the indicators. The aim was always to build a large data set for analysis for a couple of reasons. First is because the availability of data is very variable between countries. So in some countries there are really very few data sets that would be available, and qualitative sources would be particularly important. In others there were many more. Our aim was to try and build a collage from the evidence that was available that would enable the best possible analysis within the country itself. And the second point was to include indicators which would enable the researchers to look at issues that were particularly important in their countries but might not be important in other countries. So to take up those specific themes. We went through a couple of really extensive consultation processes about what should be in these indicators, and that did tend to grow the number even more. And we also decided to round out the ROAM framework with the X category, which would bring in a number of important other issues into the analysis of the national Internet environment. So this made for a lot of indicators, and we decided to offer two approaches to that. First a comprehensive set, which is in this rather thick book here. And secondly, a smaller core set that would be more manageable. A core set of indicators which would be more manageable, particularly in countries with relatively limited resources, in the hope that that would encourage more diverse research.
In practice, and this is a disappointment to me actually, in practice almost every country has chosen to concentrate solely on the core indicators, and hasn’t really looked in the wider range for other indicators that are particularly important in their own country. I think that’s one of the issues that the review should look at, how to avoid missing the opportunity that that presents. So we put a lot of emphasis as well on the need for a multi-stakeholder approach, with a multi-stakeholder advisory board to oversee processes, but also a multi-stakeholder research team, bringing different types of expertise into a group that could look at things together, and then discuss their findings from their different perspectives. A couple of countries trialled the indicators, including Brazil, in order to validate them, and the whole scheme was then signed off by the IFAP committee in UNESCO, which gave it a kind of crucial status and authorization by UNESCO’s member states. So the outcome, as I suspect you know, is that there have been really rather a large number of implementations of these indicators. There have been a lot more implementations of them than I had expected there to be in the early stages, and in fact a lot more implementations than of the media development indicators. I think that probably indicates that there was a very substantial demand for something along these lines, which would enable national research teams to work on a national assessment. But I’d also give a good deal of credit to Tatevik’s predecessor, Xianhong Hu, who was immensely enthusiastic in promoting the indicators and supporting countries over the last few years in putting them together. Having read a number of the reports, not all of them, I think I’d emphasize three or four things which seem to me to be important in making a successful research project using them.
The first is the importance of diversity within the research team and the advisory board, but I think the research team is particularly important. That is, expertise across the different areas of rights, openness, access, multi-stakeholder participation and issues such as gender and sustainable development, which are in the X category. If you bring together people with different expertise, you get more than the sum of the parts. The importance of avoiding political pressure to come to positive conclusions when those might not be justified, and avoiding the pressure that comes from vested interests. Again, it’s valuable to have diversity within the research team and the advisory board. I’d stress the need to pay as much attention to qualitative assessments as to quantitative indicators, and, as I’ve mentioned, to look at the non-core indicators to see which are particularly relevant to a country’s national context. I think I’d stress the importance of the research team discussing and analysing findings as a group rather than just reporting on their own area of expertise, and on building that discussion, that collective analysis, as the way of reporting rather than a box-ticking exercise which any indicator framework is vulnerable to. I think I’d stress the desirability of making recommendations that are practically achievable in the national context, which includes the political context. To identify those things which can move things forward in the categories that are covered by the indicators. So the practical rather than the ideal. Now, it was always intended to revise these indicators after a period of time. In fact, they’ve been used unrevised for rather longer than we’d originally expected. It’s important to bring them up to date in terms of what evidence can now be gathered and in terms of the issues on which evidence should now be gathered if we’re to have a comprehensive picture of a national internet environment. 
So I hope that this revision will be able to do that, to bring them up to date without making it too difficult within a particular country to look back at an assessment that’s already been done. So building on what is there, developing it and evolving it for future needs, retaining consistency where appropriate. I think it will be necessary to reduce the overall number and I hope it will be possible to encourage a more holistic assessment approach than has always been the case. There are media development indicators assessments that I think will be quite a good model there to look at. I would resist the temptation to omit things for the sake of omitting them. Not least because of the differences between different countries and the fact that different countries need different points of reference. But there may be better ways of doing that than dividing simply between a comprehensive and a core indicator set. And I would encourage more inclusion of non-core indicators where these are relevant. That’s I think what I’d say about the revision process, which I know is at an early stage and I’m not directly personally involved in it. It’s not my responsibility. But I am looking forward to continuing to work with these indicators and the ROAM principles in the future. Thanks.

Moderator – Tatevik GRIGORYAN:
Thank you very much, David. And thank you again for your work in putting the indicators together, for your continued support to us, and for your valuable recommendations as we move forward with the revision. And we do very much hope, as a member of the steering committee for the revision of the IUI, you will still be very actively involved in the revision process. Thank you very much. I would be happy to provide updates on the process and on our progress of implementing the IUIs globally. But I am aware that our next speaker as well has to leave to attend other engagements. Our next speaker online is Dr. Lutz Moeller, Deputy Secretary General at the German Commission for UNESCO. So, Dr. Moeller, the floor is yours, please. Thank you very much. I hope you can hear me well. Very well, thank you.

Lutz Möller:
Thank you very much. Good afternoon in Japan and good morning here from Europe. I’m also in Paris at the UNESCO board like the ADG. Excellencies, colleagues, ladies and gentlemen, I think it’s not really necessary to say that we have observed a really enormous and very rapid evolution of Internet ecosystems over the last few months. As a key example, the fundamental changes at several social media platforms that span the globe are much more than technical alterations or simple moderations of one arbitrary product. They have fundamentally altered societal discourses in countries around the globe. And have had enormous reverberations in terms of visibility of certain political convictions instead of others and the ability of disinformation to spread. I, of course, speak about X.com but also could speak about TikTok, Meta, Telegram and more nationally successful platforms such as Korean Naver or Vietnamese Zalo. In Germany, the more non-profit Fediverse with Mastodon has had some successes over the last year. But even here, we do not at all see a shift away from the private sector organized social media platforms. It is really not news in this year 2023 that public discourse, the public conversation about the future of society and the planet, is shaped and influenced by private business interests. And this has never been more acute than in the last 12 months. As you all know very well, the challenges posed by artificial intelligence come on top, as Freedom House warned us last week in their Freedom on the Net report. More specifically, the use of artificial intelligence to hinder and interrupt public discourse, to repress and to manipulate. Therefore, we really need to strengthen internet ecosystems that are freer, more democratic, more non-profit, more in the public service. We need to strengthen and safeguard human rights, openness, justice, diversity, inclusion, participation, empowerment and well-being in these internet ecosystems.
And this is exactly, as you all know basically, where the UNESCO Internet Universality indicators, the ROAM-X IUIs, come into play. As you know, Germany has been the fifth UNESCO member state globally and the first from the global north to utilize this instrument to appropriately measure whether national internet policymaking and the implementation of these policies into practice really live up to this ambition of human rights, openness, access, and multi-stakeholder participation. The big advantage, from our perspective and from our experience, that the ROAM-X IUIs deliver is that they focus not only on one or a few indicators. They provide a more panoramic view, which also, as I have said previously, yields some brutally honest evidence. Actually, we all know that governments can easily claim that their policies and practices are human rights-based. But are they really? Are they really open? Do they really allow access to all? And are they really governed through true multi-stakeholder participation? Or is this word just used as a euphemism for industry lobbying? The application of the ROAM-X IUI in Germany was a joint endeavor by the German Commission for UNESCO as coordinator, the German Federal Foreign Office as political and financial supporter, and the Leibniz Institute, Hans-Bredow Institute, as implementer. Today I will not repeat previously reported results from Germany, such as the insufficient balance we found in our country between the right to privacy and freedom of expression, or the insufficient internet access of jobless persons or the elderly. The key question of today is: what can we suggest from our experience for the upcoming revision? As I said, the huge advantage is this panoramic view which they generate. We have clearly benefited from this approach.
However, my main point is that while providing this panoramic view, we found that the number of indicators, currently 303 including 109 core indicators, is probably too high. I second what David Souter has said before about the general approach to the ROAM-X IUI, which we perfectly understand and share. Still, we recommend a stronger focus on key areas and topics with the greatest relevance. In particular, we should note data availability. Even if an indicator is excellent in theory, it is of little use if there is no data available or if the indicator cannot possibly be operationalized appropriately. Several of these IUI indicators are not as practical as they appear in theory. I heard with great interest that David also spoke about the need to reduce the number of indicators. And I agree with him that we have to be very careful in that regard. And I also have to share with you that this is a common experience. We have also worked with several of the SDG indicators in Germany over the last couple of years and have found out that some of them also sound fantastic in theory, but are very, very difficult to operationalize. So we really recommend to use this opportunity also for a general update, to make sure that the IUI really capture more modern, more up-to-date trends such as AI. On another item, we strongly recommend from our experience in Germany that member states use a multi-stakeholder advisory board. In Germany, this board has proven enormously useful, specifically when it comes to selling and communicating the results to the political stakeholders later on. And as current debates tend to weaken multi-stakeholder participation, it is more necessary than ever, not just in the application of the IUI. In closing, we at the German Commission for UNESCO and also the Hans-Bredow Institute joined the Dynamic Coalition on the IUI from the start to share our experiences and good practices.
We offer our support to other parties and other member states to enable them to apply the IUI in their own countries. And we look forward to working together on the revision as well in the years to come, to keep them up-to-date with ongoing developments. And I thank you very much for your attention, and thank you very much for inviting me.

Moderator – Tatevik GRIGORYAN:
Thank you very much, Dr. Moeller. Thank you for your support to the IUI project and also for your support to the Dynamic Coalition for IUI and for encouraging more stakeholders to join. OK. Yes, David needs to leave to attend another important session. So thank you so very much, David. Again, thank you for your continued support. Let’s give him a round of applause. So yes, Dr. Moeller is also leaving soon. Thank you. Thank you as well, Dr. Moeller. Well, we can give Dr. Moeller as well a round of applause, as I didn’t mention his name with the first round. Thank you so very much. And I hope that you will continue to support the IUI project. And let’s carry on with our discussion. Actually, I know that there are people here who I talked with about the IUI ROAM-X project, and who would actually be interested to know about the project. So I will just give a very brief overview for them, for those who are new to this initiative. So I’m sure you grasped a lot from David’s input. But just to give you an idea. So internet universality is the official position of UNESCO on the internet. So UNESCO believes that the internet should be universal, based on these principles of rights, openness, accessibility to all, and nurtured by multi-stakeholder participation. And so this was at the heart of the internet universality framework, to which we then added an X, ROAM-X, X standing for cross-cutting issues such as gender equality, safety and security, sustainable development and environment. So we have in total, they’ve talked already about the number of indicators, we have a lot of them, 303 indicators with 109 core indicators, core being those that we recommend as essential to implement, at least as a baseline. And then countries are free, based on their national context, to choose and implement other additional indicators as well.
And so we have an eight-step process, and I would like to talk about the establishment of the multi-stakeholder advisory board, which David mentioned, and Dr. Moeller also highlighted its importance. So we do believe in a multi-stakeholder approach to internet governance, which is also promoted by the Internet Governance Forum. So it is an essential part of this research. So the group consists normally of government representatives, representatives of relevant ministries, civil society organizations, academia, the private sector, and representatives of marginalized groups. This group is sort of an oversight body which guides the research, and at the end of the research also looks at the outcomes in what we call a validation workshop, validating the results of the research and confirming that this is indeed the state of play in the country in their respective concerned areas. So this assessment, this framework, indeed is a unique global tool. It’s a unique tool available to support the development of the internet at the national level. And it’s not standalone; it also supports in a way the achievement of the Sustainable Development Goals and is also in line with a number of topics now discussed at the Global Digital Compact. So currently the framework, the assessment, is ongoing in 40 countries, actually 34, with six having published the report. So just to give you a visual idea, because I avoided using a presentation, this is the indicators, the framework, which is available on our website. So if you go to unesco.org, and I’ll be happy to share my contact as well afterwards, and look for internet universality indicators. And so I have here, this is how the report in the end looks like. I have the copy of the report from Brazil, and we have Alexandre here and Fabio here who not only supported the creation of the indicators but also were actually among the first ones to implement the indicators in Brazil.
So currently, six reports have been published so far: three in Africa, one in Europe, in Germany, one in Thailand, and one in Brazil. And the process is ongoing in 34 countries, with Kenya actually doing a second, follow-up assessment to measure the results achieved after the publication of its report. We have 15 in Africa, 12 in South Asia, 15 in Asia and the Pacific, five in Latin America, three in Europe and two in Arab states. And actually I'm happy to say that, out of these countries, seven are small island developing states, with five in the South Pacific. So we have had quite serious results. Dr. Moller already presented a little about the achievements in Germany. Our assessments help to inform policymakers and feed into digital strategies, laws and regulations, and we are happy to continue our progress. And so now, because we have a missing speaker, I would like to give the floor to Alexandre Barbosa of the Regional Center for Studies on the Development of the Information Society, CETIC.br, which is actually a UNESCO Category II Institute. And I won't say more about you, because there is so much to say. So please add whatever you would like to add. The floor is yours, around the topic of today's discussion.

Alexandre Fernandes Barbosa:
Thank you very much, Tatevik. And good afternoon, everyone. Well, it's a pleasure to be here in this discussion because, as was already mentioned, NIC.br has been in this discussion from the very beginning, since the concept of universality. And in my opinion this is a very important achievement, because although indicators may change over time and concepts may change (in the past, what we considered internet users is very different from today, right?), so the definitions may change and should be revised from time to time, principles are really important. And I think this framework was a very important achievement by UNESCO in terms of defining important principles, the ROAM-X that was already explained, what R-O-A-M and X mean, so I'm not going to repeat it. But the principles should not change; they should remain. So I think we are now at a moment, five years after it was approved, in 2015, right? I guess that's when it was approved. 2018, yeah, 2018. So it's time now to make an assessment of the framework, based on the need to revise, not the principles, but the indicators. And as has already been stressed by both speakers who preceded me, in terms of the number of indicators, it is indeed a huge number: more than 300 in the whole set, and 109 core indicators. But the fact is that the scope this framework aims to measure really requires a lot of indicators. And I think what we have realized over these years, now with more than 40 countries making this assessment, is that we have a very problematic issue of data gaps. Many countries don't have the required data to make this assessment. But at the same time, from my point of view, following all these reports and assessments, CETIC had the chance to review the assessments of some countries, like the countries in Latin America, and also some other countries in Africa. 
Even in Europe, we worked with the German team during their assessment, sharing the Brazilian experience. But having said that, I think this framework was an opportunity for countries to really understand the need for data production. We need data, because when we don't have data, we don't have visibility, and if you don't have visibility, there is no priority in the political agenda. In this particular regard, I think Brazil is in a position where we have had, for many, many years, almost 20 years, data production in different areas: not only among the population and households, but enterprises, schools, health, culture, government, and many other areas. So I think the ROAM framework gave countries the opportunity to understand that they should produce more data, because we do have a lot of missing data in this regard. Another very important achievement, in my opinion, is that UNESCO soon realized that we should not have an index, right? It's not a matter of comparing countries here. We are using qualitative and quantitative indicators to take a picture, a general overview, of the situation of internet development in a given country. So this is a very good thing, that UNESCO soon realized the intention was to have a panoramic view of internet development. A second very important point that I would like to highlight in this process is that not many countries have the experience of holding a multistakeholder dialogue on internet development. Brazil is, again, a very good example of a successful model, a real multistakeholder arrangement to debate internet governance. And since one of the conditions of implementation is to establish what UNESCO has denominated the multistakeholder advisory board for the development of this assessment, many countries that had no experience of multistakeholder dialogue had to implement it. This is a very important achievement, and we should keep it this way, right? 
Well, just to mention that David expressed disappointment about many assessments focusing only on the core indicators. And I agree with him that the ideal situation is to implement the whole set of indicators, to give a broad perspective on internet development. But given this situation, maybe in the revision we could rethink that. CETIC has been involved with UNESCO and the expert steering committee for ROAM-X in discussing this revision, and at the end of the day we realized that it's not possible to make such a drastic reduction in the indicators. So we will have to face this reality and decide what to do, but I probably agree that we should stick with a larger number of indicators to have a better assessment. And last but not least, I would like to take this opportunity to mention two things related to ROAM-X. We have been discussing the application of this framework to other types of emerging technologies, such as AI. When ROAM-X was approved in 2018, we didn't have the new phenomena of large language models, for instance, and other AI-based applications. So I think it is completely applicable to emerging technologies, because we are talking about principles, and the principles should not change: human rights-based, openness, accessibility, and multi-stakeholder. This could not change, and we could apply this framework. We have other discussions going on right now, like the Global Digital Compact and other issues, where we could rely on those principles. Again, on the X dimension, in the revision we already realized that we should fill some gaps that the original framework didn't foresee: we had foreseen gender, age scope, children, but we need to include cybersecurity, sustainable development, climate change, all the relevant dimensions in the X dimension. And last, I think that in this revision we could think of how to really encourage member states to make periodical assessments. 
I'm not sure if you can do it in two years' time or three years' time, but having periodical measurement would be very important for policymakers, civil society, and the technical community, to have a better idea of the progress a given country has made in applying this framework. So those are my initial reactions. I think UNESCO plays a very important role in promoting and disseminating the ROAM-X strategy and framework, which goes beyond internet development to AI, as I have said already. So those are my initial comments. Thank you very much.

Moderator – Tatevik GRIGORYAN:
Thank you very much, Alexandre. Thank you for your valuable inputs and thoughts, and also for pointing out that this is not meant for ranking, which I normally highlight in my presentations. This is a voluntary assessment. I always highlight "voluntary" in the sense that the country, the national stakeholders, decide themselves on doing the assessment, and then UNESCO is there to provide technical guidance and support in doing it, and there is no ranking or comparison whatsoever. And of course, for some countries the problems are similar, and it's very important to create this environment to share practices and learn from each other's experiences in moving forward with their national agendas, for which, in a way, this dynamic coalition serves as a platform for sharing ideas, lessons learned, experiences and best practices. So on this note, I would like to give the floor to Anja Gengo, who is from the IGF Secretariat and who has been with the dynamic coalition longer than me, actually all of you have, and who has seen its development. I would like to invite Anja to speak about the role of the dynamic coalition, its progress, how we could improve it, and any other inputs and thoughts you have around the topic of our discussion.

Anja Gengo:
Thank you very much, Tatevik, and thank you to UNESCO for organizing this session, of course, but even more for working continuously throughout the year, through the IGF platform and the dynamic coalition, in a very open and transparent manner with stakeholders from around the world, not just to promote the indicators, but really to understand their value, and precisely what we are discussing today: whether they're relevant, whether they're useful to people around the world, whether we need them, and if yes, how we use them, whether we have access, and especially whether we have enough resources and capacity to use them meaningfully. Maybe I can start from the dynamic coalition and the role of that platform, and then I would like to say a few words about the relevance of the indicators for our present and, of course, for the future. In terms of the dynamic coalition, we at the IGF Secretariat witnessed when this idea was born, that a dynamic coalition could be organized, because it was seen as a way to engage stakeholders from around the world in warm, friendly, meaningful discussions on the way the indicators could be used. I think it was formed after the indicators were adopted in 2018, and the whole idea was to follow the pace of the implementation and to understand if there are gaps and where the gaps are. It is an incredible success, in a very short time frame, in terms of the number of stakeholders the dynamic coalition has managed to bring together, but also in terms of the quality of the inputs those stakeholders are bringing, not just to this dynamic coalition but to the whole IGF as such. And I think for us it was a real lesson learned that these dynamic coalitions, which are very independent, are also organic and have their own autonomy in how they manage the process. 
It was a lesson learned that when you have a strong institution that stands behind a people-centered, people-led process, it really can work, and it really can, in a very short time, as I said, achieve incredible results. Long term, we from the Secretariat would certainly advise continuing the way it has been done so far: embracing the community and the stakeholders, doing outreach in different forums, and especially engaging those that unfortunately are still not meaningfully engaged in the overall global internet governance processes. We, through the IGF, have quite a good overview of the stakeholders, the types of people and profiles, that unfortunately are being left behind, and I think it's important that we alert the community to really work in a methodological way to engage those stakeholders. I'll be very brief on this, and I certainly won't divert attention from the IGF and its inclusion processes, but I do think it's important to say that there are, first of all, profiles coming from certain countries that are not present in global processes such as the IGF, but also in other processes. I mean, at this forum you have, for example, colleagues coming from ICANN and colleagues coming from UNESCO doing wonderful things, and unfortunately stakeholders from certain countries are missing. This is something that the Secretariat is very much focusing on, to hopefully remedy, and I'm very glad to say, for example, that there are countries from which we couldn't hear during the first 10 years of the IGF that are now very active in the IGF ecosystem, not just at the individual level but organizationally speaking. You have the Maldives, which has a wonderful national IGF and organized multi-stakeholder participation at this year's IGF, and that's a really concrete and tangible difference that's been made through outreach done on different platforms. 
So this is something that I think the Dynamic Coalition could also do: engage those that are not engaged so far. I think we've recognized in the past couple of years that we've really evolved from a multi-stakeholder model toward a multidisciplinary model, which means that we have to look at each stakeholder group's participation in a very nuanced way, to understand that these discussions, these dialogues, potentially leading to decisions, really concern us all, given the fact that we're all using our smartphones and our computers, meaning we're all present in the online world, and hopefully in the years to come this Dynamic Coalition will also see more disciplines represented in its core organizational group. In terms of the validity, I completely agree, and I think it can't be underlined enough, with everything my colleagues said previously with respect to the values. I think we're very much aligned in that we strive for the highest values that humanity can strive for in the online world, as we do in our, let's call it, offline world. But if you look at the analog domains: humanity, for example, and the highest international legal mechanisms guarantee the right to life, but you still have some jurisdictions that recognize the death sentence as a sanction, while some do not. So there are fundamental differences in how we approach implementing the values that we agree on, and the digital world is in that sense no different from the analog domain. There are jurisdictions where what you say on social media is, first of all, interpreted as exercising your freedom of expression, while in other jurisdictions a tweet or social media post can potentially lead to imprisonment or a fine. Those are the differences. I think we have to be aware of them, and we need to make sure that the implementation of the values we believe in is in the right hands. 
Two years ago in Poland we had a session on this same subject. We were assessing how the assessment is going, and I do recall, when I was sitting next to my colleague Kossi from the Benin IGF, who coordinates the IGF in Benin, we spoke about the implementation being done through a multi-stakeholder lens, so that all stakeholders in the country have the opportunity to be consulted and to have a say when you are assessing the ecosystem, and I do think that's still very much relevant two years later. That said, the values are relevant, and it's excellent to see that the number of national assessments is growing, but I do think that now, compared to the period during and after the pandemic, we may be in a phase where the assessment needs to be assessed, because of the COVID pandemic, which really changed our landscape. I'm sure I don't need to recite the facts, but if you look just at the field of legislation, it's more than palpable, more than visible, that the field is changing dramatically. Many institutions and initiatives are now emerging that measure, for example, the number of laws regulating, let's say, artificial intelligence, given that it is on the rise, and some of them indicate that before the pandemic we spoke of just one or two national jurisdictions that had a law in place reflecting artificial intelligence. After the pandemic, so last year and this year, we are facing an incredible proliferation of national laws, and there is a concern in the community, you can hear it across narratives at this year's forum, that this may lead to fragmentation, and that we need to be very careful not to regulate in a way that may jeopardize the global nature of the Internet that we are all firmly standing for and advocating: one Internet, accessible, affordable, safe, secure, resilient, sustainable, unfragmented. 
So those are the changes that I think we have to be aware of, and I hope that the assessments done in the early years can also be looked at again, to ensure that they are still relevant, and that we work, of course, on outreach, to ensure that this valuable set of indicators is brought to the attention of those who are probably still not aware that it exists. Thank you, Tatevik.

Moderator – Tatevik GRIGORYAN:
Thank you very much, Anja, for your excellent points and excellent cooperation; the points will definitely be taken on board as we move forward. And just picking up on your point about reassessing the assessments: Kenya, for example, is actually completing what we call a follow-up assessment, which is basically reassessing the assessment. Grace is not here today, but Simon has read the assessment; I don't know if he would like to share anything on that. So I think we are thinking about this follow-up phase. Actually, one of the steps is the monitoring process, which aims to track the progress of the country, and which could then reassess what has been done and the validity and progress made by the country. So this is an excellent point as well, in addition to others. Thank you very much. As I mentioned, I would now like to give the floor to Simon, who is currently acting as a technical advisor for the IUI ROAM-X project, looking into the reports that we receive and also, of course, providing training and support to the multi-stakeholder advisory boards and to the researchers. He has also recently been closely involved in the assessments in the South Pacific islands: Tuvalu, Solomon Islands, Vanuatu, Tonga and Fiji. Please, Simon, the floor is yours.

Simon Ellis:
Thank you, Tatevik. To start with, the IUI is really a unique holistic system for taking this overall picture of the Internet in countries, and really I haven't much to say, because everybody else has said it already and I completely agree with what's come through, but I'll take three or four points and a couple of examples. It is a national assessment, not an international assessment. It's about what happens in the country, and in that sense it doesn't have to produce a single definitive answer. Through the MAB, through the analyses, there can be different viewpoints, and those different viewpoints can be incorporated. The indicators in the IUI are in the form of a question, and countries are effectively encouraged to answer that question, and sometimes the answer may not be a yes or a no, but something in between. And that leads to something that I think people have mentioned, but it's worth bringing out again; I think Anja just mentioned it: the sense that, yes, one of the major aspects of this is rights and legislation, but what the IUI does systematically throughout is ask, and is that implemented? How does that work out in practice? So for many of the points, if certain laws, in data protection for example, are in place, the question then is whether there is something, for example from case law or from civil society analyses, which suggests whether that is followed up on, whether it works, and how it works in practice. So again, what you're doing is not a simple answer but a full analysis of the question, and I think that leads again into this sense of follow-up. 
For each report, then, you naturally lead into recommendations, and as we're now getting into 40 or more countries, we really have to ask what is happening as a result. As Tatevik has just mentioned, Grace Gitaiga has conducted the first follow-up assessment, for Kenya, but as it is the first one, we still have to establish the best way to follow up on the ground, to see whether recommendations are taken forward, and also how frequently there should be IUI assessments and what the nature of reporting should be, because you don't want to re-copy 300 indicators and say nothing's changed. So that whole sense of follow-up is extremely important and is one of the big questions here. The second big question, which everybody has touched on, is new themes, and really the three themes currently emerging are AI, then environment and sustainability, and cybersecurity. AI is very much new in the IUI environment, but there are already some indicators on environment and cybersecurity in the X category of the IUI. To take environment: one question, for example, which I'm keen on is e-waste, and that is particularly a problem in the countries I've worked in, in Asia and the Pacific. In Southeast Asia, for example, some countries are sometimes dumping grounds for e-waste from OECD countries, from Europe and North America, and often that e-waste is then processed in not very good working conditions, let's say, and so this whole issue brings up all sorts of questions about environmental concerns. In the Pacific, where we're now working, in some countries the land is literally only as high as this table, so waste cannot be put in the ground. Waste has to be disposed of in some other way, and again this leads to whole issues about recycling and contamination, and what you do with it and where you put it. 
To take another example from the Pacific, which shows to some degree how the IUI can adjust, but also to some degree how we need to maintain sensitivity to national circumstances: for the Pacific, connectivity is about satellites. In one country, islands can be thousands of miles apart; there's no way you can cable between them, and there's no way you can put masts or anything between them, so satellite is it for those countries if they are to have full connectivity, not just to the world and the internet as a whole, but even within the same country. And I think that also emphasizes, to come back to a point that David made originally, the sense of core and non-core. Certainly some things are core and apply to every country, but certain elements, such as I've suggested for the Pacific with waste and satellites, are core to the Pacific but perhaps less core in other countries, and we need to keep that flexibility. We need to ensure that the IUI allows a national holistic view for a whole range of different types of country, from small islands right up to huge countries like Brazil, of which I always used to say that if something works in Brazil, it'll work anywhere, because there are so many different environments in Brazil.

Moderator – Tatevik GRIGORYAN:
Thank you very much, Simon, and thank you for this contribution and for your work. Now that we have heard from all the speakers, I wanted to ask if anybody among the participants online or here in the room has any questions or points to make. Yes, Fabio, please.

Audience:
Hello, thank you. I would like to hear from the panelists on the point of multistakeholderism. I think one interesting thing about the indicators is that not only is the process multistakeholder, because you have to collect the indicators through a multistakeholder process, but multistakeholderism is itself a dimension of the indicators, so there's a list of indicators covering this. Do you think this is something that is also changing nowadays? Are there some new indicators in the field of multistakeholderism that we didn't have five years ago? How do you assess this part of the discussion? Thank you. Thank you, Fabio. Who would like to take the question?

Simon Ellis:
Thanks, Fabio. I'm not going to answer it completely directly, but, as I said in a previous session on the multi-stakeholder dimension, and David kind of referred to this sense of ticking boxes, I think it's important to look at what multi-stakeholder means. It's not just that somebody turned up to a meeting; it's that they're actively engaged, and I'm not sure how we capture that, so maybe this is another question to, as it were, put out there. But really, to have a sense of real engagement: for example, in the reports on e-government, there are a lot of countries that have e-government systems, and you see them put things out for consultation, and civil society reps have said they sent things in, but then they said they don't know whether anything was ever taken into account. So I think that sense of what real participation looks like, and how you would capture it, is a question here. As for new sectors, new stakeholders, I don't see anything immediately that's changed.

Alexandre Fernandes Barbosa:
Yes, just to complement: this is a very good question, because this is the only dimension, or set of indicators, that both represents the principle of multistakeholderism and, through the indicators, captures how a given country is really implementing, supporting or fostering multistakeholder dialogue. In terms of new actors, when you consider government, the technical community, civil society and the private sector, I don't see any new actors; I think this doesn't change. But maybe there is one thing that doesn't exist in the set of indicators: how can we measure the outcomes of this multistakeholder dialogue? Referring to my country again, Brazil, we have the Brazilian Internet Steering Committee as a multistakeholder body that has, over almost 30 years, and we are going to complete 30 years of this model in two years' time, because it was created in 1995, produced a large number of important outcomes that have driven laws and regulations, like the Brazilian GDPR, the data protection law, like the law on access to information, and not to mention the very important legislation which is the Internet Bill of Rights, called the Marco Civil, which was a hundred percent based on the ten principles discussed over many years within this multistakeholder structure. So one new indicator that I would think of is the outcome: how to measure the outcome of this multistakeholder dialogue. But, differently from the three other dimensions or principles of ROAM-X, here I don't see much change.

Anja Gengo:
Thank you very much, Fabio. I completely agree with my colleagues; I don't see in theory that we need to change anything on paper. But at the IGF Secretariat, and within the IGF, we do see gaps, and that's what I was saying during the introductory remarks: there are stakeholders that are just not participating within certain stakeholder groups. The judge who spoke during the opening ceremony illustrated that well; I don't know if you heard when he said that he had issues at the registration area, because he said, "I come from a high court of Tanzania, I'm a judge," and some colleagues had difficulty placing him under a certain stakeholder group. It was a very nice way to illustrate that those are the types of subgroups, I would say, within our traditional stakeholder groups, that are missing from active participation in our dialogues and our processes. We recognized that a couple of years ago with legislators, with parliamentarians, and that's what prompted the parliamentary track at the IGF that's been going on since 2019, but I do think there's much more to do. For example, look at the health industry: we speak a lot about privacy there, but you don't really speak with medical professionals at the IGF; you speak with people coming from other backgrounds, who are mostly patients in these domains. So this is something I think we need to work on, to engage them more. We need to raise awareness; that's probably the reason why they are not present here. The car industry as well: a lot of issues with privacy, and obviously data protection, and yet they are not here. In Katowice we heard a little from Volkswagen, but here today we don't really have active participation from the highest management in these domains. So these are just some examples that I think it's important to work on, but we do have them on paper. 
I think the authors of the indicators recognized that well; the matter is just to raise awareness in practice and get them engaged.

Alexandre Fernandes Barbosa:
Just to complement: very interesting, what you said, and I would say that the X dimension of ROAM-X could accommodate other important dimensions; the ethical dimension, for example, could be one set of indicators within the X. But for the other ones, I guess we don't have much to change.

Moderator – Tatevik GRIGORYAN:
Thank you very much. As I was saying, thank you, Fabio, for the question, and thanks for the answers, which I believe will help us in the revision process. I wanted to ask the audience again if there are any reactions to what has been said, or if there are any questions, including from the audience online. I don't see any questions online, so we're keeping to the time; we're doing very well. We have our director joining us online. Before giving her the floor for the official closing remarks: I don't know, Marielza, if you have any contribution to what has been said, or should we expect the official closing remarks from you? Thank you, Tatevik. I think I will weave them into the closing remarks, so as not to make it two separate things. Okay, thank you very much. Then I would just like to give one final floor to the speakers, if you have any reflections as we move forward with the revision, any final thoughts you would like to share. I would like to remind the audience, especially those who joined a bit later, that we've been discussing the Dynamic Coalition as a platform to cooperate and to share best practices and lessons learned for the implementation and promotion of UNESCO's Internet Universality ROAM-X indicators, which is ongoing in 40 countries, and, as we've reached the five-year mark, we're currently in the process of updating the framework to make sure we incorporate new topics and input from the lessons learned from the implementation of the IUI framework. So I'll give the floor to Anja first, please.

Anja Gengo:
Thank you very much, Tatevik. I would just like to thank you and UNESCO, first of all, for using the IGF as a platform to promote these good values and bring them closer to people from around the world. We at the IGF Secretariat certainly, but I'm pretty sure I can speak also for other structures of the IGF as a project, welcome our cooperation continuing long term, as one UN family, and working as much as possible with people from around the world to ensure that these values are really implemented in practice, for the Internet that we all want.

Simon Ellis:
Thank you. Simon, please. I don't think, again, that I have anything much further to say. I'm still thinking about new actors. One thing I've seen in a few MABs recently is the police being involved, which is quite interesting, and I think there is something there about police and justice. There is an important indicator about training for judges and lawyers, which I think is quite key in all of this. But I think this is a really good assessment, I think it is producing very big results, and I look forward to a new version, in relation perhaps to the Global Digital Compact, at the beginning of next year.

Alexandre Fernandes Barbosa:
at the beginning of next year. Thank you so much for giving me the opportunity to be here. In my particular case, being part of the UNESCO family, I have to say that it is a real pleasure for me and for my team to work with UNESCO and to help foster this idea of dialogue, which is so important. I think we have to celebrate that, in such a short period of time, you have a large number of countries making the assessment, and the dialogue is alive. I hope that in the coming years we can make new assessments and increase the number of countries that join this framework on a voluntary basis, as you mentioned. I think that UNESCO plays an important role in building capacity and raising awareness among member states of the importance of having data to make this assessment. This is a very important issue: we have a huge data gap, mainly in countries from the Global South, so UNESCO plays a very important role, and I really have to congratulate you on your leadership in this project. Thank you very much. Thank you very much,

Moderator – Tatevik GRIGORYAN:
Alexandre. Thank you to all of you, and thank you, Cetic.br, indeed for the excellent cooperation that we have been enjoying and for the serious work, especially now when it comes to the revision. I am now happy to give the floor to Marielza Oliveira, who is the Director for Digital Policies and Transformation at UNESCO. Please, Marielza, the floor is yours.

Marielza Couto e Silva de Oliveira:
Thank you, and hello everyone. Konnichiwa. I am really sorry that I could not join before, but I had other sessions. I have been in sessions since 2 a.m. Paris time, and in many of those I was actually talking about ROAM-X as well, advocating for it and including it in the topics. But for me this session, which is focused specifically on the revision of the ROAM-X framework, is the most special one. We have been working together as a Dynamic Coalition to advance Internet universality for the past five years, and over those years we have accomplished quite a lot. If you think about it, 25% of the countries of the world have actively adopted the ROAM-X framework, and it has embedded itself in global and regional discourse about the Internet, at the highest levels. That is in no small measure due to the work of the Internet Universality Dynamic Coalition and the way you have worked as a shared space to exchange expertise and experiences and to act as a peer-to-peer support mechanism for each other. This is a very generous attitude on the part of all of you, and with the richness of our collective experiences, you are the right people to contribute to and guide the fifth-year revision of the Internet Universality Indicators. This was envisioned from the very beginning: we always knew the Internet to be a fast-changing environment, so we always considered that these indicators would have to be revised at some point. Nevertheless, the revision comes at a very timely moment, in which we see digital governance undergoing a major overhaul, with, for example, the upcoming Global Digital Compact, the WSIS+20 review, and others.
We also see generative AI changing the technological landscape of the Internet itself. And we have found out that the Internet can also be harmful. This is something we did not realize as much before: the harm that can be done when it serves as a conduit for disinformation, hate speech and other harmful content, particularly at the scale at which it operates. The environment has changed so much that the indicators need to change too. In this session we have looked at this scenario and asked ourselves what we must change about the Internet Universality framework, and I am sure that you have covered important elements. I heard some of those at the end, and I would like to mention that this includes, for example, a tighter specification of which are the core indicators. This is one of the things I consider particularly relevant: we need to really tighten up the core indicators to make the process easier, both for the measurement and for the follow-up.
There is also the potential inclusion of new dimensions, both in terms of content, such as what Simon was referring to, environment and e-waste, cybersecurity, and others that have come up through different consultation mechanisms: child data protection, mental health and, of course, AI, as well as the toxicity levels of the Internet itself and of the social media environment. These are some of the elements we need to consider. But there are also dimensions in terms of the assessment process itself, for example accounting for the research obstacles that many of the national teams have encountered, including the lack of data for many indicators, particularly disaggregated data, which does not allow us to see the X dimension so clearly, as well as when and how to conduct follow-ups to monitor progress in implementing the recommendations, which I find an essential mechanism. I think it is really important that we document this process as well, because we are about to have a new Global Digital Compact, and we will have principles and commitments at that level too. The process of monitoring adherence to principles is one of the most important things that is actually going to be happening, and the example that the ROAM-X framework offers is extraordinary. So my key contribution today is reminding us that we need to document this process, this trajectory, to show the Global Digital Compact process how implementing it could actually be taken care of.
With that, I would like to extend heartfelt appreciation to all the panelists, who are actually good friends, who have joined us today to share these insights. Your participation always enriches our understanding of the path we must take forward to achieve this shared objective of updating the framework. I would like to express my special gratitude to our partners at Cetic.br: Alexandre and Fabio have been really supportive and collaborative in this process, taking up quite a lot of the work, but also Simon, who is supporting and leading this, and our esteemed steering committee members for their support in advancing this review. Just as your constructive suggestions and advice have enabled UNESCO to facilitate the implementation of the ROAM-X national assessments in the last year, we are now able to successfully adapt this framework with your help. For that reason, I really encourage all of you to remain actively engaged in the revision process and to continue sharing your inputs with us; Tatevik has certainly given you a mechanism to reach out if you have contributions to make. We have always counted on our Dynamic Coalition, but this year we count on you more than ever. You are the ones who have the on-the-ground understanding of national needs, of the difficulties your own research typically faces, of the themes about which you wish you could know more, and so on, so your guidance is absolutely indispensable. Let me also invite all IGF stakeholders to join the Internet Universality Indicators Dynamic Coalition and to help us continue advancing this work of advocating for a human-centered Internet. Thank you all very, very much for your support, and I hope to see you in person again soon.

Moderator – Tatevik GRIGORYAN:
Thank you very much, Marielza. Thank you for these rich remarks and points, which we will also take up during our steering committee's closed-door meeting tomorrow. From my end as well, I would like to extend a heartfelt thank you to each panelist and each member of the steering committee of the Dynamic Coalition who has been supporting us throughout the years, including those who are not here today but who remain actively engaged through different initiatives around ROAM-X. Thank you so very much. I would also like to thank my colleagues on the ROAM-X team, especially Karen Landa and Camila Gonzalez, who are online with us now, and I would like to thank the participants who have been here; we are happy to hear from you after the session, and we will be around. I would also like to continue the tradition my colleagues have established of taking a family photo, so I'd like to ask the online participants to put on their cameras, and colleagues as well. Thank you.

Speaker statistics (speech speed, speech length, speech time)

Alexandre Fernandes Barbosa: 128 words per minute, 1786 words, 840 secs
Anja Gengo: 173 words per minute, 2017 words, 699 secs
Audience: 129 words per minute, 123 words, 57 secs
David Souter: 169 words per minute, 1597 words, 566 secs
Lutz Möller: 177 words per minute, 1145 words, 389 secs
Marielza Couto e Silva de Oliveira: 162 words per minute, 1134 words, 420 secs
Moderator – Tatevik GRIGORYAN: 136 words per minute, 2631 words, 1164 secs
Simon Ellis: 147 words per minute, 1230 words, 502 secs
Speaker 1: 133 words per minute, 420 words, 190 secs