AI technology: a source of empowerment in consumer protection | IGF 2023 Open Forum #82

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Kevin Luca Zandermann

Artificial Intelligence (AI) has the potential to revolutionize public services, particularly in personalized healthcare and education. Examples from Finland and the UK demonstrate how AI has been successfully integrated into law enforcement practices, highlighting its transformative impact on public service delivery.

Regulatory bodies should seriously consider incorporating AI tools into their processes. Finland’s use of AI in cartel screening and the UK Competition and Markets Authority’s development of an AI tool for automatic merger tracking serve as successful examples, streamlining operations and enhancing efficiency.

However, it is crucial to strike the right balance between automated AI-powered steps and human oversight. Effective regulation requires the integration of both elements. The Finnish Authority, for instance, allows a stage of human oversight even after AI detection, ensuring decisions rely on well-informed processes. Similarly, Article 14 of the European Union’s AI Act emphasizes the importance of human oversight in regulating AI.

While there are potential benefits, the use of AI in regulation, particularly with Large Language Models (LLMs), also carries risks. A Stanford survey reveals that only one out of twenty-six competition authorities mentions using an LLM-powered tool, highlighting the need for cautious implementation and consideration of potential implications.

Kevin Luca Zandermann suggests regulators engage in retrospective exercises with AI, reviewing well-known cases to identify previously unnoticed patterns and enhance regulatory processes. Clear and comprehensive AI legislation, particularly regarding human oversight, is crucial. The lack of clarity in the EU’s current AI legislation raises concerns and emphasizes the need for further development.

Despite limited resources, conducting retrospective exercises and developing ex officio tools remain crucial, especially given the impending AI legislation. These exercises help regulators adapt to the evolving technological landscape and effectively integrate AI into their practices.

In conclusion, AI has the potential to transform public services, but its implementation requires careful consideration of human oversight. Successful integration in law enforcement and regulation in Finland and the UK serves as evidence of AI’s capabilities. However, risks associated with technologies like LLMs cannot be underestimated. Regulators should engage in retrospective exercises, work towards comprehensive AI legislation, and address potential concerns to ensure responsible and effective AI implementation.

Sally Foskett

The Australian Competition and Consumer Commission (ACCC) is taking proactive measures to address consumer protection issues. They receive hundreds of thousands of complaints annually and are attempting to automate the process of complaint analysis using artificial intelligence (AI). This move aims to improve their efficiency in handling consumer issues and ensure fair treatment for consumers. Additionally, the ACCC is exploring the collection of new information such as deceptive design practices, which will enhance their understanding of consumer concerns and enable them to better protect consumers’ rights.
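
Automated complaint triage of the kind described can be pictured with a toy sketch. Everything below is hypothetical: the categories, keywords, and scoring are invented for illustration, and a production system at a regulator would rely on trained text classifiers rather than keyword counts.

```python
# Hypothetical sketch of automated complaint triage: route free-text
# consumer complaints into categories so analysts can spot trends.
# The categories and keywords below are invented; keyword scoring
# stands in for the trained classifiers a real system would use.

CATEGORIES = {
    "refunds": {"refund", "money", "back", "return"},
    "misleading_advertising": {"advert", "misleading", "claimed"},
    "subscription_traps": {"subscription", "cancel", "renewal", "charged"},
}

def triage(complaint):
    """Score a complaint against each category by keyword overlap
    and return the best match, or 'uncategorised' if nothing hits."""
    words = set(complaint.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorised"

print(triage("I tried to cancel my subscription but was still charged"))
```

However complaints are bucketed, the point of such tooling is to surface patterns for human analysts, not to replace their judgment.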

Understanding algorithms used in consumer interactions is another key area of focus for the ACCC. Regulators must be able to explain how these algorithms operate to ensure transparency and fairness in the marketplace. To achieve this, the ACCC gathers information such as source code, input/output data, and business documentation. By comprehending and being able to scrutinize these algorithms, they can better identify potential issues related to consumer protection and take the necessary enforcement actions.

The ACCC is also supportive of developing consumer-centric AI. They recognize the potential of AI in helping consumers navigate the market and make informed decisions. This aligns with the Sustainable Development Goal 9: Industry, Innovation and Infrastructure, which encourages the use of innovative technology to drive economic growth and promote industry development. The ACCC believes that by leveraging AI technology, consumers can benefit from more personalized and accurate information, leading to better economic outcomes and increased satisfaction.

In terms of data gathering, the ACCC acknowledges the importance of considering various sources. They emphasize going back to the basics and critically assessing the sources of data. By ensuring that the data used for analysis is accurate, reliable, and representative of the market, the ACCC can make more informed decisions and take appropriate actions to safeguard consumer interests. The ACCC is exploring the possibility of obtaining data from data brokers, hospitals, and other government departments. Additionally, they plan to make better use of social media platforms to detect and address consumer issues promptly.

The ACCC thus advocates for drawing on data from many sources in its decision-making and enforcement activities. Combining data from other government departments, data brokers, hospitals, and social media gives a comprehensive picture of consumer trends, behaviours, and concerns, allowing the ACCC to identify emerging issues, better protect consumers, and ensure fair competition in the marketplace.

In conclusion, the ACCC is actively pursuing proactive methods of detecting and addressing consumer protection issues. They are leveraging AI to automate complaint analysis, enhancing their understanding of algorithms used in consumer interactions, and supporting the development of consumer-centric AI. The ACCC recognizes the importance of considering various sources of data and is exploring partnerships and collaborations to access relevant data. By adopting these strategies, the ACCC aims to enhance consumer protection, promote fair business practices, and contribute to sustainable economic growth.

Christine Riefa

The use of artificial intelligence (AI) in consumer protection is seen as a potential tool, but experts caution that it is not a panacea for all the problems faced in this field. While 40 to 45% of consumer authorities surveyed are currently using AI tools, it is important to note that there are other technical tools being employed for consumer enforcement that are not AI-related.

One of the main concerns raised is the potential legal challenges that consumer protection agencies may face when using AI for enforcement. Companies being investigated may challenge the use of AI, and this issue has not been extensively studied yet. However, it has been observed that agencies with a dual remit, not solely dedicated to consumer protection, tend to have better success in implementing AI solutions.

Consumer law enforcement is considered to be lagging behind other disciplines, but efforts are being made to catch up. It is acknowledged that there is still work to be done in terms of classification and normative work in AI to ensure that all stakeholders are on the same page regarding what AI is and what it entails.

Collaboration among different stakeholders is deemed crucial for achieving usable results in consumer protection. It is emphasized that consumer agencies need to work together in unison to effectively address the challenges faced in this field.

Furthermore, it is argued that AI should not only be used for detecting harmful actions but also for preventing them. Consumer law enforcement needs to undergo a transformative shift in its approach. AI can be leveraged more effectively by adopting a prescriptive method that focuses on preventing harm to consumers rather than solely relying on detection.

In conclusion, while AI shows promise in consumer protection, it is not a solution that can address all challenges on its own. Consumer protection agencies need to consider potential legal challenges, collaborate with other stakeholders, and focus on leveraging AI in a transformative way to ensure effective consumer protection.

Martyna Derszniak-Noirjean

Artificial intelligence (AI) is reshaping the consumer protection landscape, presenting both benefits and challenges. It is vital to examine the implications of AI in consumer protection and determine the necessary regulations to ensure a fair and balanced environment.

AI gives firms and entrepreneurs an economic and technological advantage over consumers, creating the potential to exploit the system and engage in unfair practices. This raises concerns about the need for effective protections to safeguard consumer rights, and underlines the critical need to discuss the use of AI in consumer protection. The sentiment surrounding this argument is neutral, reflecting the requirement for comprehensive examination and evaluation.

Understanding the extent of regulation required for AI is a complex task. AI has the potential to both disadvantage and assist consumers. Striking the right balance between regulating AI, innovation, and economic growth is challenging. This argument underscores the importance of carefully considering the implications of excessive or inadequate regulation to ensure a fair marketplace. The sentiment remains neutral, highlighting the ongoing debate regarding this issue.

However, AI also offers opportunities to enhance the efficiency and effectiveness of consumer protection agencies. Consumer protection agencies are exploring the use of AI in investigating unfair practices, and they are developing AI tools to support their efforts. This signifies a positive sentiment towards leveraging AI for consumer protection. It emphasizes the potential of AI to augment the capabilities of consumer protection agencies, enabling them to better safeguard consumers’ rights.

Based on the analysis provided, AI is significantly transforming consumer protection. It is crucial to strike the right balance between regulation and innovation to ensure fairness and responsible consumption. While concerns regarding potential unfair practices exist, AI also presents an opportunity to enhance the effectiveness of consumer protection agencies. Overall, a neutral sentiment prevails, emphasizing the need for ongoing discussions and evaluations to successfully navigate the complexities of AI in consumer protection.

Piotr Adamczewski

The use of artificial intelligence (AI) in consumer protection agencies was a key topic of discussion at the ICPEN conference. It was highlighted that AI is already being utilized by many agencies, and its development is set to continue. The main argument put forward is that AI is essential for detecting both traditional violations and new infringements connected to digital services.

To further explore the advancement of AI tools in consumer protection, a panel of experts was invited to contribute their perspectives. These experts included professors, representatives of international organizations, and enforcement authorities. Professor Christine Riefa conducted a survey that shed light on the current usage of AI by consumer protection agencies. This survey likely provided valuable insights into the challenges, benefits, and potential for improvement in AI implementation.

The UOKiK (Poland’s Office of Competition and Consumer Protection) recognized the potential of AI for enforcement actions and initiated a project specifically focused on unfair clauses. The project was born out of a need for efficiency and was supported by an existing database of 10,000 established unfair clauses. Training AI to detect such clauses in standard contract terms proved to be particularly useful, as the process is time-consuming and labor-intensive for human agents.
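
The approach described, comparing standard contract terms against a register of established unfair clauses, can be sketched in miniature. This is not UOKiK's actual system: the clauses below are invented, and simple token-overlap (Jaccard) similarity stands in for the trained language models a real tool would use.

```python
# Illustrative sketch: flag contract clauses that closely resemble
# entries in a register of known unfair clauses, for human review.
# Token-overlap (Jaccard) similarity is a deliberate simplification.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def flag_clauses(contract_clauses, unfair_register, threshold=0.5):
    """Return (clause, score) pairs whose similarity to any registered
    unfair clause meets the threshold."""
    flagged = []
    for clause in contract_clauses:
        score = max(jaccard(clause, u) for u in unfair_register)
        if score >= threshold:
            flagged.append((clause, round(score, 2)))
    return flagged

register = [
    "the seller may change the price at any time without notice",
    "the consumer waives all rights to pursue claims in court",
]
contract = [
    "the seller may change the price at any time without prior notice",
    "delivery takes place within 14 days of payment",
]
print(flag_clauses(contract, register))
```

The threshold trades recall for precision; a real deployment would tune it against the labelled register and keep a human review stage after every match.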

The UOKiK is also actively working on a dark patterns detection tool. Dark patterns refer to deceptive elements and tactics used in e-commerce user interfaces. The goal is to proactively identify and address violations rather than relying solely on consumer reports. Creating a detection tool specifically targeted at dark patterns aligns with the objective of ensuring responsible consumption and production.
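
As a rough illustration of text-based dark-pattern screening, the sketch below scans page copy for phrasing associated with false urgency and confirmshaming. The rules and phrases are invented for this example; a real detection tool would also analyse page layout and interaction flows, not just wording.

```python
# Hypothetical rule-based dark-pattern screen: check e-commerce page
# text against regex rules for common manipulative phrasing.

import re

PATTERN_RULES = {
    "false_urgency": re.compile(r"only \d+ left|offer ends in", re.I),
    "confirmshaming": re.compile(r"no thanks, i (don't|do not) want", re.I),
}

def screen(page_text):
    """Return the names of the dark-pattern rules the text triggers."""
    return [name for name, rx in PATTERN_RULES.items()
            if rx.search(page_text)]

print(screen("Hurry! Only 3 left in stock. No thanks, I don't want savings."))
```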

In addition, the UOKiK is preparing a white paper that will document its experiences and insights regarding the safe deployment of AI software for law enforcement. The white paper aims to share knowledge and address potential problems that the UOKiK has encountered. This document is a valuable resource for other agencies and stakeholders interested in implementing AI technology for law enforcement purposes. The expected release of the white paper next year indicates a commitment towards transparency and information sharing within the field.

Overall, the expanded summary highlights the increasing importance of AI in consumer protection agencies. The discussions and initiatives at the ICPEN conference, the survey conducted by Professor Christine Riefa, the projects carried out by the UOKiK, and the upcoming white paper all emphasize the potential benefits and challenges associated with deploying AI in the realm of consumer protection. The insights gained from these endeavors contribute to ongoing efforts towards more effective and efficient law enforcement in the digital age.

Melanie MacNeil

AI has the potential to empower consumers and assist consumer law regulators in addressing breaches of consumer law. Consumer law regulators have started using AI tools to increase efficiency in finding and addressing potential breaches of consumer law. These tools can support preliminary assessments of investigations and highlight conduct that might be a breach of consumer law. For example, Poland's Office of Competition and Consumer Protection uses web crawling technology with AI to analyze consumer contracts and identify unfair contract terms.

Similarly, regulators are utilizing AI to detect and address product safety issues. Korea's Consumer Injury Surveillance System uses AI to search online for products that have been the subject of a product safety recall. Additionally, AI technology and software enable early diagnosis of product safety issues in smart devices. These advancements contribute to safer consumer products and protect consumers from potential harm.

AI not only helps with consumer law and product safety but also provides opportunities to nudge consumers towards greener choices. The German government has funded a digital tool that uses AI to provide consumers with a series of facts about how to reduce their energy consumption. This empowers consumers to make more environmentally conscious decisions. Additionally, AI can assist consumers in making green choices by breaking through the information overload on green labels, helping them better understand the environmental impact of their choices.

However, there are concerns about new and emerging risks associated with AI and new technology in relation to consumer health and safety. The OECD is currently undertaking a project to assess the impact of digital technologies in consumer products on consumer health and safety. The focus is on understanding and addressing product safety risks through safety design. It is important to address and mitigate these risks to ensure the well-being and safety of consumers.

Regulators are often criticized for being slow to address problems compared to businesses, which are not as restricted. There is a need for regulators to adapt and keep pace with technological advancements to effectively address consumer issues. Collaboration and sharing of learnings are crucial in moving quickly to address issues. By working together and sharing knowledge, stakeholders can collectively address the challenges posed by AI and emerging technologies.

In conclusion, AI has the potential to transform the consumer landscape by empowering consumers and assisting regulators in addressing breaches of consumer law and product safety. However, there is a need to carefully navigate the risks associated with AI and ensure consumer health and safety. Collaboration and knowledge-sharing are crucial in effectively addressing the challenges posed by emerging technologies. By embracing AI’s potential and working together, stakeholders can create a consumer environment that is fair, safe, and sustainable.

Angelo Grieco

The European Commission has prioritised the development and use of AI-powered tools for investigating consumer legislation breaches. To assist EU national authorities, they have established the Internet Investigation Laboratory (eLab), which utilises artificial intelligence to conduct extensive evaluations of companies and their practices. eLab employs web crawlers, AI-powered tools, algorithms, and analytics to aid in large-scale reviews. This demonstrates the European Commission’s commitment to consumer protection and leveraging AI technology.
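
A large-scale compliance sweep of the kind eLab supports can be caricatured as follows. The URLs, page contents, and disclosure rules here are all invented, and a real crawl fetches live pages rather than working from a prepared dictionary; the sketch only shows the flag-missing-disclosures step.

```python
# Illustrative compliance-sweep step: scan (pre-fetched) shop pages
# for legally required disclosures and report what appears missing.
# The rules and pages are invented for this sketch.

import re

REQUIRED_DISCLOSURES = {
    "withdrawal_right": re.compile(r"right of withdrawal|14[- ]day", re.I),
    "trader_identity": re.compile(r"company (name|registration)", re.I),
}

def sweep(pages):
    """Return, per URL, the disclosures that appear to be missing."""
    findings = {}
    for url, html in pages.items():
        missing = [name for name, rx in REQUIRED_DISCLOSURES.items()
                   if not rx.search(html)]
        if missing:
            findings[url] = missing
    return findings

pages = {
    "https://shop.example/a": "14-day right of withdrawal. Company name: A Ltd.",
    "https://shop.example/b": "Great deals! Buy now.",
}
print(sweep(pages))
```

In practice such findings would feed a queue for investigators, since an apparent omission on a crawled page is a lead, not proof of an infringement.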

Behavioural experiments are used to assess the impact of commercial practices, specifically targeted advertising cookies, on consumers. These experiments play a crucial role in enforcing actions against major businesses and ensuring consumer protection. They allow regulatory authorities to thoroughly examine the effects of various practices and address any potential harm.

In order to investigate and mitigate risks associated with AI-based services, a proactive approach is necessary. Investigations are currently underway to assess the hazards posed by AI-powered language models that generate human-like text responses. These models have the potential to manipulate information, spread misleading content, perpetuate biases, and contain errors. Identifying and addressing these risks is crucial for responsible and ethical use of AI.

Angelo Grieco is leading efforts to enhance the use of AI in investigations, with a focus on compliance monitoring for scams, counterfeiting, and misleading advertising. He aims to improve the efficiency and effectiveness of investigations through advanced technology. There is also a recognition of the importance of improving case handling processes and streamlining evidence gathering, with tools that can accommodate jurisdiction-specific rules and ensure adherence to legal procedures.

In summary, the European Commission is committed to developing and utilising AI-powered tools for investigating consumer legislation breaches. The Internet Investigation Laboratory (eLab) demonstrates this dedication by employing AI technology to aid in comprehensive evaluations of companies and practices. Behavioural experiments are used to assess the impact of commercial practices on consumers. Proactive measures are being taken to investigate and mitigate risks associated with AI-based services. Angelo Grieco is actively working to enhance the use of AI in investigations, with a focus on compliance monitoring and efficient case handling. These initiatives reflect a commitment to protecting consumer rights and ensuring effective and ethical investigations.

Session transcript

Martyna Derszniak-Noirjean:
before I will start. It would make a little bit of sense that you can see me as well, so let me see. Otherwise, please, the technical assistance, if you could try and help me with this, that would be wonderful. Either way, I will not take more time with my technical issues. Welcome everybody, and it’s really great to be here for the third time at the Internet Governance Forum, so we are really happy that also this year we can alert the forum to consumer protection issues, and that this year as well we have wonderful panelists with us, so welcome everybody and thanks for giving us this opportunity. I will start with one of the most heard things these days, which is that AI has been changing our lives, and I’m pretty sure that you guys are all tired of hearing this, but even though we’ve heard it so many times, it doesn’t make it any less important, so we need to discuss and we need to converge around this issue, and this is why we have organized this panel, and now the question is why is it important to discuss AI in the context of consumer protection? For us, consumer protection authorities and many panelists who also have to do with consumer protection, the issue basically is that firms and entrepreneurs have an economic and technological advantage over consumers, which means that they can use AI to have greater possibilities of engaging in unfair practices against consumers.
This is one option, of course; AI can also be used for good purposes, and our task as consumer protection enforcers and all stakeholders that are active in the area of consumer protection is to understand to what extent we should curb AI used by companies, and to what extent we should try and allow it to flourish to actually assist consumers, for example, by having a better choice of products. So this is a big challenge for us, consumer protection stakeholders, and we need discussions, we need to speak, we need to engage with this topic. This is why we think that it’s very important to continue discussing it, even though we are already discussing it a lot, and as an emerging topic, we really need to have a wider conversation about it, and IGF is a great forum for that. We have also internal stakeholders around here, people who are concerned not only with consumer protection as we are, but also with other things, who are much more knowledgeable about different technologies and how they are being used online, so it’s great, and we hope that we’ll have a wider discussion here, and I’m pretty sure Piotr will also be able to follow up on this with many of the participants that are fortunate enough to be there in person. One final thing of introduction: apart from trying to understand the impact of AI on consumers, and the scope of intervention by authorities in the context of AI and consumer protection, there is one more thing that we have been exploring as a consumer protection agency, which is the use of AI for our own purposes in investigating unfair practices. So while we can see and monitor the use of AI by companies, it is also a great tool for us to increase the efficiency and effectiveness of our own actions and activities, so we are also doing this. We are conducting two projects where we develop AI tools, and we are also aware that there are many other such projects all over the globe; our
colleagues, our panelists will tell you more about that, so Piotr, that would be all from my side, and I wish you a great panel, I’m pretty sure you’ll be able now to present the panelists, thanks very much.

Piotr Adamczewski:
Thank you Martina, I totally agree that we have to discuss the problem of using AI. I have to also admit that last week we had a panel among the other consumer protection agencies at the ICPEN conference, where we gather together with the institutions which have the same aim, namely protection of consumers in each jurisdiction, and then we focus on what we have in our pockets, in our desks, what kind of tools we are using, and we concentrated more on the risks which are connected to the use of AI. Today I think that the panel at the Internet Governance Forum, as Martina mentioned, we are the third time already at this summit, is the better place to discuss the possibilities, the future, how we can develop further. I strongly believe that artificial intelligence will be used by many agencies, it’s already actually in usage, it’s already in operation by many agencies, but it will be developing pretty fast, and definitely it is needed for the detection of the traditional violations, but also for the infringements which are new, which are connected to the new world of digital services. So today, to that aim, we invited our prominent guests: Professor Christine Riefa from University of Reading, who made a thorough survey on the usage of AI by the consumer protection agencies; representatives of international organizations, which is the OECD, which deals with the shaping of consumer policy worldwide, with Melanie MacNeil on board with us; the representative of DG Just, Angelo Grieco; and other people from the enforcement authorities, from the ACCC, Sally Foskett, and myself as well. And last but not least, we have Kevin from the Tony Blair Institute for Global Change to talk with us from the perspective of the consultancy world. So the structure of the panel would look like two rounds: first we will present the tools we already have, and then in the second round we will ask our guests about the future, about the possible developments.
So first I would like to turn to Christine and ask her about the outcomes of her survey. Christine, the floor is yours.

Christine Riefa:
Great, thank you so much. I’m trying to quickly share my slides to help with following up what I’m trying to describe. I think you should all see them now. So thank you very much for having me, and it’s a pleasure to join you only virtually, but still attend this very amazing conference. I will give you a tiny little bit of background first, because I’m aware that perhaps some people joining this panel are not consumer specialists. So consumer protection really is a world with several ways of ensuring that the rights of consumers are actually respected and enforced. It’s a fairly fast developing area of law, but it has a fairly unequal spread and level of maturity across the world, and that does cause some problems in the enforcement of consumer rights. We also rely, in most countries of the world that have consumer law, on the spread of private and public enforcement, and AI, as the subject of today, can actually assist on both sides of the enforcement conundrum. We also have a number of consumer associations and other representative organizations that can assist consumers with their rights, but as well can assist public enforcement and agencies. A very good example in the UK is Which?, the consumer association, which is actually able to ask the regulator and the enforcers to take some actions. So what they can do is variable across the world, but they normally are a very important element of the equation as well. We’ve seen in previous years, pretty much around the world, a shrinking of court access for consumers, and an increase in ADR and ODR, as well as a realization, I think, that public enforcement through agencies is really an important aspect of the mix on how to protect consumers. Hence the session today is obviously extremely important to ensuring we can further the rights of consumers and develop our markets in a healthy way.
So the project I’ve been involved with is called EnfTech, which stands for enforcement technology, and it really looked at the tools for the here and now that enforcement agencies were using in their daily work, and it also reflected a little bit about the future. I’ll keep those comments for the second round. EnfTech is actually a broader use of technology than just AI, so it would include anything that is perhaps lower tech, if you wish, than artificial intelligence might be, but can be just as effective. And we wanted to look at ways agencies could ensure markets worked optimally, and also realized that not using technology in the enforcement mix might lead to a potential obsolescence of consumer protection agencies, and there was therefore an essential need to respond to technological changes. We surveyed about 40 different practices that we came across, not simply in consumer protection, but in more supervisory agencies as well, and we ended up selecting 23 examples of EnfTech that are specific to consumer protection, spanning 14 authorities, seven of them general consumer protection agencies, five continents, and four generations of technologies. It is only a snapshot; it’s obviously extremely difficult at this stage to work on public information about the use of technology in agencies. There’s also an element of development, and there are also reasons why agencies may not want to very publicly announce that they’re using particular tools. The survey, however, has got some really interesting findings. In the report, we explain how a technological approach will be essential, and how to start rolling one out. We give a picture of how the agencies that are doing it are doing it, and how they have structured themselves in order to be able to roll out EnfTech tools. We also mapped out the generations of technologies, because actually not all agencies will start from the same starting point.
Some agencies might be very new and have absolutely no data to feed into AI; others might be more established, but not have data structured in a way that might be useful. We also found that with very little technology, you can actually do a lot in consumer enforcement, and therefore our report recognizes this. We provide a list of use cases, so for anyone interested in what’s happening on the ground, that’s a very good starting point to find pretty much all the examples of things that are currently working. We also reflected on some practices that we find slightly outside of the remit of consumer protection, but that could be easily rolled into consumer protection. Of course, we discuss challenges. Our key finding, and I think it is quite useful for the purpose of today’s discussion, where we’re going to hear loads of different examples, is that actually AI obviously is a misnomer. We’re talking to a very erudite audience here, no need to dwell on this, but in consumer protection at the moment, AI is really not the panacea, and we think that even in the future, it will not solve all the problems. It has, however, got huge potential, and we found that about 40 to 45% of the consumer authorities we surveyed are using AI tools. Now, that still means that 60% of the tools being used are still EnfTech tools, and they are not AI. That’s quite a significant finding, because just in 2020, at the start of discussions about technology and consumer enforcement, very few reports or projects actually considered AI as being viable. They were looking at other technical solutions.
What we found as well is that the agencies that have got a dual remit, so that are not just dealing with consumer protection, have fared a little bit better in their rollout of tools, and that might be because they are able to capitalise on experience in competition law, for example, but also because they may have bigger structures, which obviously facilitates a lot of the rollout of technology. If we compare consumer law enforcement to other disciplines, we find that we are behind the curve, but as Piotr mentioned, we are catching up very quickly. I’ll move on from all of this. The final thing for me to point out at this stage, before we hear from the examples, is really that AI as a solution in consumer enforcement needs to be built in with a framework and a strategy that will take into account all the potential problems that might come with it. One of the big dangers that we have identified is that if there is a lot of staffing, resources, and money going into developing AI as a solution for consumer protection enforcement, then it would be really a shame to fall at one big hurdle that will come the way of the enforcement agency, and that is a legal challenge from the companies being investigated. We found loads of potential issues and things to strategise about, but the legal challenge that might come from the use of AI in consumer enforcement is one that has been clearly understudied and we didn’t find very much on. So on that general overview I leave you and pass on the floor to the next panellist.

Piotr Adamczewski:
Thank you, Christine. It’s still a lot of work, but it looks promising, definitely. Now, I would like to give the floor to Melanie and to see how the OECD is seeing the opportunity for consumer protection regarding the usage of AI.

Melanie MacNeil:
Hi, everyone. Good morning, good afternoon, depending on where you are. If you just bear with me for one moment, I will share my screen very quickly. All right, so I’m assuming everyone can see that. I’m very excited to be here today, and the previous presentation was very helpful as well in setting this up. So I’m speaking to you today from the Organisation for Economic Co-operation and Development or the OECD, where I work in the consumer policy team. So the OECD has 38 member countries, and we aim to create better policies for better lives through a lot of best practice work and working with our members to see what they’re doing to address particular issues. So today I’m really excited to talk to you about artificial intelligence and how it can help empower consumers, and how it can be of great assistance to consumer law regulators as well. So I’ll also be sharing some information with you about the OECD’s work in the AI space more generally. So we’ve just touched on it, but the first thing I’ll talk to you about is using artificial intelligence to detect and deter consumer problems online. As a previous consumer law investigator, this is a topic very close to my heart, we’re seeing a lot of AI being used by consumer law regulators as a tool to increase efficiency in finding and addressing potential breaches of consumer law. It’s particularly useful in investigations, where work that was previously manual and quite slow, like document review, can now be completed a lot more quickly. There is still and always will be a significant and essential role for investigators, but AI tools can support the preliminary assessments of investigations and highlight conduct that might be a breach of consumer law. Robust investigative principles are always needed with any investigation, and the addition of AI to our toolkits doesn’t change that. But I thought it would be helpful to give you some practical examples of some great tools that we’ve seen our members using. 
So the Office of Competition and Consumer Protection in Poland uses web crawling technology with AI to analyse consumer contracts, looking for unfair contract terms. The technology searches over the fine print of terms and conditions of things like subscription contracts to ensure there are no unfair clauses, such as an inability to cancel a contract. This work was previously undertaken manually in most member countries, with groups of investigators reading hundreds of clauses in hundreds of contracts searching for potentially unfair terms. The AI tool adds real efficiency to this, and regulators can then take enforcement or other action to have the terms removed from the consumer contract, preventing consumers from being caught in subscription traps. So that's an example of a tool that frees up a lot of investigator hours and enables investigators to focus on the parts of investigations that do need human decision making and strategic thinking. Another issue faced by consumers online is fake reviews; you've probably all seen one at some point. Reviews can play a huge part in our purchasing decisions, but to give you an example, last year Amazon reported 23,000 different social media groups, with millions of followers, that existed purely to facilitate fake reviews. This is obviously too much for individual consumers to deal with, and for regulators, but machine learning models can analyse data points and help to detect fraudulent behaviour. Fake reviews are often classed as a form of misleading or deceptive conduct under consumer law, and while regulators are using AI to detect fake reviews, private companies are investing in this space as well. So this is a good example of how businesses and regulators are working together to enable consumers to make better choices.
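To make the clause-screening idea concrete, here is a minimal, purely heuristic sketch in Python. The phrase patterns are hypothetical stand-ins invented for illustration; the Polish authority's actual system is trained on a large register of clauses already ruled unfair, and in every case a human investigator makes the final legal assessment.

```python
import re

# Hypothetical phrase patterns that often signal potentially unfair terms;
# a production system would instead be trained on clauses already ruled unfair.
SUSPECT_PATTERNS = {
    "no_cancellation": re.compile(
        r"\b(may not|cannot|no right to)\s+(cancel|terminate|withdraw)\b", re.I),
    "unilateral_change": re.compile(
        r"\b(we|the provider)\s+may\s+(change|amend|modify)\b.*\bat any time\b", re.I),
    "liability_waiver": re.compile(
        r"\b(not|never)\s+(be\s+)?(liable|responsible)\s+for any\b", re.I),
}

def flag_clauses(clauses):
    """Return (clause, rule_name) pairs flagged for human review; the tool
    only surfaces candidates, it does not make the legal assessment."""
    hits = []
    for clause in clauses:
        for rule, pattern in SUSPECT_PATTERNS.items():
            if pattern.search(clause):
                hits.append((clause, rule))
                break  # one flag per clause is enough for triage
    return hits
```

The point of even a crude filter like this is the workflow it enables: thousands of contracts go in, a short list of flagged clauses comes out, and investigator time is spent only on the candidates.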
At the OECD, we're quite excited about some work that we're hoping to do with ICPEN in the near future, with member countries, looking at the use of artificial intelligence to detect and deter consumer problems online, as was referred to earlier. There are some really great efficiencies to be found, which ultimately mean that regulators can detect and deter more instances of consumer issues. The increased efficiency can itself deter businesses from engaging in this conduct: similarly to criminal behaviour, if people know they're more likely to be caught, they're less likely to engage in the conduct. So we're very excited about the future work with organisations like ICPEN to share some of this best practice so that other regulators can benefit as well. Another space where we're seeing some great work from our members is the impact of AI on consumer product safety. AI is being used by regulators to detect and address product safety issues too. For example, Korea's Consumer Injury Surveillance System searches for products online that have been the subject of a product safety recall. Where something has been deemed unsafe and withdrawn from sale, there are cases where businesses nevertheless continue to sell those items. Korea's Consumer Injury Surveillance System uses AI to search online for text and images to detect cases where those products might still be being sold. Using AI in this context can mean that unsafe products are found faster, so regulators can take action more quickly and consumer injuries are ultimately reduced. As well as detecting issues like that, Korea is also using AI to assist consumers who might be looking for information or wanting to report an unsafe product. There is an excellent chatbot on the website that consumers can use to report injuries from products, so that if they're harmed by a product, they can report it to the authorities.
The chatbot makes it very simple for them to lodge the information, rather than asking them to fill out a detailed form; it's more efficient. The information provided by consumers is then coded with machine learning to enable more efficient analysis of the reports. When it's easy to report an issue, consumers are more likely to do it, and better data enables regulators to better understand the issues and to address them. Similarly, AI technology, and software in particular, can enable product safety issues to be diagnosed early. Some of the more advanced home appliances, for example, have software built into them that you might be able to control from your phone, and they're very useful in alerting consumers to potential product safety issues. Consumers can be notified that a device might need servicing, that repairs are needed, or that a remote software update is required. There have already been instances of smart devices, such as smoke alarms, being remotely repaired and a product safety issue addressed through a software update. In that circumstance, this type of technology can potentially be lifesaving. The increasing prevalence of AI in consumer goods can bring benefits, and the gaming industry has always been pretty quick on the uptake with technology, investing a lot in AI to change the way that people experience games. But as the use of digital tech intensifies, the way that people communicate and behave online is also changing. This is an issue where there are new and emerging risks that are not particularly well understood in all spaces, particularly in the context of mental health. So one of the major projects that we'll be undertaking at the OECD shortly is looking at the impact on consumer health and safety of digital technologies in consumer products.
It'll be focusing on AI-connected products and immersive reality and their impact on consumers' health, including mental health. The project aims to identify current gaps in market surveillance and in the way regulators monitor for product safety issues, and to identify future actions to better equip authorities to deal with some of the new risks posed by AI and new technology in consumer products. We're aiming to provide practical guidance for industry and regulatory authorities to better understand and address product safety risks, and we're going to have a real focus on consideration of those risks in safety by design. So that's a new project to keep an eye out for. Another space where we have seen AI provide great benefits in empowering consumers is the digital and green transition. Many consumers want to make greener choices, but sometimes they don't, due to information overload, a lack of trust in labelling, or other behavioural science issues that can affect all of us. Research has shown that nudges, or design interventions, can steer people's behaviour in a particular direction and help overcome some of the behavioural barriers that might otherwise prevent them from making a green choice. So AI provides an excellent opportunity to nudge consumers towards greener choices. For example, in Germany, like in many countries, heating bills are often not prepared in an understandable way and are inconsistent between providers. Each metering service can use different formats and different terminology, and as a result consumers find it really difficult to compare which company to choose. They find it hard to pick up errors in their bills, they end up paying more for energy and services, and incentives to save energy are difficult to identify.
This can cost consumers a lot of money, but it also causes a lot of unnecessary emissions, because it's so difficult for people to make a greener choice that they essentially give up. I think it's something we've probably all been guilty of at some point when looking at various contracts for services. So to help consumers manage their energy consumption, the German government has funded a digital tool which uses AI. A household can upload their energy bill, and it's evaluated using AI to provide a series of facts about how they can reduce their energy consumption and save on heating bills. The tool is an example of a nudge that can help a consumer make a better energy choice and overcome the barrier of it being too complicated to make that choice. Similarly, consumers experience information overload with a lot of the green labels, badges and schemes that you might see on items in the supermarket. The other issue is that these can be difficult to compare, and consumers have no way to verify what's actually happening in a company that puts a green marking on its packaging. For example, last year in Australia an online sweep found that 57% of the green claims in a sample were misleading. So some parts of the world are using regulation to strictly control the way such markings and accreditation schemes can be used. But where that's not occurring, or to supplement it, AI can also be used to assist consumers to make the green choice by helping to break through the unmanageable amount of information that's out there. We're seeing new apps being developed that enable shoppers to scan the barcode of an item in a supermarket and see its sustainability or ethical rating compared to other products. Where a product scores poorly, the app can suggest an alternative.
These are quite limited at the moment, but we're expecting that in the future AI will be used to expand the list of products that are considered and to recommend products that align more with users' environmental preferences. The OECD is currently undertaking a project looking at fostering consumer engagement in the green transition, addressing some of these barriers to sustainable consumption, and looking at the opportunities to use digital technologies to promote greener consumption patterns. This project is also going to involve empirical work to better understand consumer behaviours and attitudes towards green consumption. I'll also take you through a couple of the tools that have been developed by the OECD that can be quite relevant. One of the things we're working on at the moment is the OECD AI Incident Monitor. There's been a big increase in reporting of AI risks and incidents, and in 2023 in particular the rise has been astronomical. So the OECD AI Expert Group is looking at this, and they're using natural language processing to develop the AI Incident Monitor. The monitor aims to develop a global and common framework for reporting of AI incidents that could be compatible with current and future regulation. One of the issues regulators face in addressing almost any problem is consistency of terminology and understanding, so part of this project is developing a global common framework for those things. The AI Incident Monitor then tracks AI incidents globally and in real time. It's designed to build an evidence base to inform incident definition and reporting, and particularly to assist regulators with developing AI risk assessments, doing foresight work and making regulatory choices. For the Incident Monitor, hundreds of news articles were initially collected manually, and these were then used to illustrate trends and to help train the automated system.
You can see on that slide where the project is up to. They're using natural language processing with that model, and now they're getting into the space of categorising the incidents, looking at the affected industry and stakeholders. It's also going to be quite useful for the product safety project we're doing on potential health and mental health risks from AI and new technology; we'll be looking at including a product safety angle in the incident monitoring tool as well. So I realise that's been fairly quick, but those are the projects we're doing at the moment and the work our members are doing, looking at AI to assist regulators. There's also the OECD AI Policy Observatory that I wanted to share with everyone, which provides policies, data and analysis for trustworthy artificial intelligence. The Policy Observatory combines resources from across the OECD and its partners from a large range of stakeholder groups. It facilitates dialogue and provides multidisciplinary, evidence-based policy analysis and data on AI's areas of impact. The OECD AI Policy Observatory website is very large, with a lot of really helpful information on it: articles from stakeholders as well as reports from the OECD. So chances are, if you're working in the AI space, you'll find useful information there. I've also included a link to the consumer policy page. And then we've got the OECD AI Principles, to promote use of AI that's innovative, trustworthy, and respects human rights and democratic values. There's a snippet of that information there, but we are setting up policies that we think will assist members with AI more generally, as well as in specific spaces like empowering consumers that we've been talking about today. So that's all from me. Thanks for the opportunity to have a chat with you all about our work.
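The incident-categorisation step described above can be sketched, in a deliberately simplified form, as keyword-overlap tagging. The categories and keyword lists below are invented for illustration; the actual Incident Monitor uses natural language processing models trained on the manually curated articles just mentioned.

```python
# Hypothetical keyword taxonomy standing in for a trained NLP classifier.
INDUSTRY_KEYWORDS = {
    "health": {"hospital", "patient", "diagnosis", "medical"},
    "finance": {"bank", "credit", "loan", "insurance"},
    "transport": {"vehicle", "autonomous", "driver", "airline"},
}

def categorise_incident(text):
    """Score each industry by keyword overlap with the incident report and
    return the best match, or 'uncategorised' when nothing overlaps."""
    words = set(text.lower().split())
    scores = {ind: len(words & kws) for ind, kws in INDUSTRY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorised"
```

A real system replaces the keyword sets with a learned model, but the pipeline shape is the same: free text in, a consistent category label out, so incidents from different countries become comparable.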

Piotr Adamczewski:
Thank you, Melanie. As a current enforcer, I totally share the idea that it's about efficiency, about enhancing us, above all at the first stage of the investigation, where we are working on detection of violations; later on, we definitely need to preserve all the traders' rights of defence. So it's helping us a lot, especially in the first phase of our work. Now I would like to turn to Angelo and ask about the newest tools in the possession of the European Commission, with the eLab established in DG JUST. Angelo, the floor is yours.

Angelo Grieco:
Thank you very much. I'm just trying to share my screen; can you see it? Good afternoon to all of you. I would like to thank you, Piotr, and your Polish colleagues for moderating this panel and inviting the European Commission to join. We are very honoured, although we couldn't join physically, so I will have to do this remotely. I'm the Deputy Head of the unit in the Commission which is responsible for enforcement of consumer legislation, and in this team we do two main things: we coordinate enforcement activities of the member states in cases of Union-wide relevance, and we build capacity and tools that the national authorities can use to cooperate and investigate, including and especially, I would say, on digital markets. In this presentation I will get a little more into the specifics of those tools, although there is little time allowed, so I will try to go through them quite rapidly. As you can see from the slide, I will focus on three main strands of work that we are following. The first two concern the use of AI-powered tools to investigate breaches of consumer legislation: the first is our Internet Investigation Laboratory, and the second is the behavioural experiments we use to test the impact of market practices on consumers. Then, as a third and last element, I will talk about a number of enforcement challenges relating to platforms and marketplaces which offer AI services. Starting with the eLab: the Internet Investigation Laboratory, called the eLab, is an IT service powered by artificial intelligence that the Commission has put at the disposal and exclusive use of the EU national authorities of the Consumer Protection Cooperation Network that we coordinate as the Commission.
The need for such a tool, as speakers here have already said, comes from the inability of enforcement agencies to face enforcement challenges on digital markets, in particular monitoring, with just human intervention. In a nutshell: too much to monitor with too few resources, and an increased need for rapid investigations covering larger portions of market sectors. The tool is a virtual environment which we launched in 2022 and which can be accessed remotely from anywhere in the EU, which literally means that investigators can use it from their own IT facilities, sitting in their offices in the Member States. It can be used for a number of investigation activities, especially to conduct large-scale reviews of companies and practices: a mix of web crawlers, AI-powered tools, algorithms and analytics run to conduct those investigations, analysing really vast amounts of data on the internet to identify indicators of specific infringements. The parameters can be set to be investigation-specific, so that the AI-powered algorithms can look for different types of elements and different indicators of breaches, and I will give a quick example of that later. The eLab offers various tools and functionalities. Let me just turn the slide. We have a VPN, so that investigators can use a hidden identity; we have specific software that allows them to collect and preserve evidence as they investigate and transfer it to their own network, including time certification of when that evidence was collected. Then there are comprehensive analytic tools to find out information about internet domains and companies; these are open-source intelligence tools, which can search and combine different types of sources of information across different databases and geographical areas.
They are very useful, for example, to find out who is behind a website or a webshop, but also to flag cybersecurity threats and risk indicators of the likelihood that a website is a scam or is run by a fraudster. Now, let me give two examples of how we use these tools. The first is the price reduction tool, which we used in the Black Friday sweep we did last year: we used the tool to verify whether discounts presented by online retailers on Black Friday were genuine, and the result was that discounts were misleading for almost 2,000 products, on 43% of the websites we monitored. To establish whether discounts were genuine, we had to monitor 16,000 products for at least a month preceding the Black Friday sales. Another example is what we call FRED, the fake reviews detector. Here the machine scrapes and analyses text to try to detect, first, whether a review is human- or computer-generated; then, even for human-generated reviews, it indicates, based on the type of language and terminology used, a likelihood score for whether the review is genuine or fake, whether it is sponsored, for instance. The machine showed 85 to 93% accuracy in this case. So those are two examples. Then the other strand of activity we are running at the moment, which we literally inaugurated in the past month, is the use of behavioural experiments to test the impact of commercial practices on consumers. We do this in the context of coordinated enforcement actions of the CPC network that we coordinate against major business players, to test whether the commitments proposed by these companies to remedy specific problems are actually going to solve the problem.
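The month of price monitoring before Black Friday reflects how a discount claim is checked under the EU price-indication rules: the "prior price" shown next to a reduction must generally be the lowest price the trader applied in the 30 days before it. A minimal sketch of that check, assuming the prices have already been scraped, might look like this:

```python
def discount_is_genuine(advertised_prior_price, observed_prices):
    """Check an advertised 'was' price against the prices actually observed
    while monitoring the product over the preceding month. If the trader
    claims a higher prior price than the lowest one really charged, the
    advertised discount is misleading."""
    lowest_observed = min(observed_prices)
    return advertised_prior_price <= lowest_observed
```

Run over 16,000 monitored products, a one-line test like this is enough to separate genuine reductions from inflated reference prices at scale.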
We also use these behavioural studies to test, in general, the impact of specific commercial practices which could potentially constitute dark patterns, to prepare the ground for investigations or other types of measures. The first strand of work in this area we used, for example, to test the labelling of commercial content in the videos broadcast by a very well-known platform: whether the indication and qualification of commercial content is good enough, prominent enough, for consumers to understand it, which is very important given the type of platforms we are confronted with every day on the internet. In the second, we tested, for example, the impact of cookies and of choices related to targeted advertising. What is interesting in these experiments is that they are calibrated to the needs of each specific case, and we use large sample groups to produce credible, reliable, scientific results, with a higher chance of identifying statistically significant differences. We use AI-powered tools to do this too, including analytics, but also eye-tracking technology connected to analytics, which we used, for example, to test the impact of advertising on children and minors in the lab. Now, the last thing I wanted to address rapidly is an area which is drawing a lot of attention at enforcement level, mentioned also by previous speakers, not only in the EU but also in other jurisdictions: the offering of AI-based services to consumers, such as AI-powered language models, recently developed or recently becoming popular. We all know these models by now: they can generate human-like text responses to a given prompt.
Such responses continue to improve based on massive amounts of text data from the internet and on what is called reinforcement learning from human feedback, and these models are not offered only standalone: they have been integrated into other services, like platforms, search engines and marketplaces. These practices are being investigated in the EU and other jurisdictions, and I cannot say much about ongoing investigations. I can, however, flag a few elements on which stakeholders' attention is focusing at the moment. What are the issues, what are the problems? One main area of concern is transparency of the business model: what are really the characteristics, what is really offered, what is really the service, how is it remunerated, how is the business model financed, what are the differences between the so-called free version and the paid-for version, and how does this relate to the use of consumers' personal data for commercial purposes, for example to send targeted advertising.
Then, of course, we are very focused at the moment on the risks of those models. We have seen that there is often manipulative or misleading content, there are biases and errors, and one big concern is whether these platforms can adequately mitigate those risks. Then you have the problem of harm to specific, weaker categories of consumers, minors for instance, but not only, and associated with that, of course, mental health and possible addiction, which has already been experienced. The difficulty here is that, from a very general standpoint, we have a new way of applying consumer legislation, and we need new reference points to apply consumer legislation to these business models, where the technological part is really still a little obscure: there is a technological and scientific gap between enforcement and the companies who run these platforms. Then there is the fact that these elements are often integrated into other business models, and that we are at a crossroads here between protection of the economic interests of consumers, data protection and privacy, and the protection of health and safety. This adds quite a bit of complexity to the work of the enforcers, who are nevertheless looking into the matter. Enforcement may not be enough, and as we know it may need to be complemented by regulatory intervention as well; we will see about that. That's all from me at this stage. Thank you.
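The large sample groups Angelo mentions for the behavioural experiments rest on standard significance testing, for example comparing the share of participants who noticed a commercial-content label under two interface variants. A minimal sketch of the underlying statistic (the sample figures in the test are invented for illustration):

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic for the difference between two observed proportions,
    using the pooled standard error. For a two-sided test, |z| > 1.96
    indicates significance at the 5% level."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

Larger samples shrink the standard error, which is exactly why the experiments use large groups: modest but real differences in consumer behaviour become statistically detectable.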

Piotr Adamczewski:
Thank you, Angelo. I have to admit it's a really fascinating idea that the European Commission will share the software it is preparing. The alternative is for each single consumer protection agency to create its own department with a lot of people, which is very costly to manage. We can also work on joint projects, as we did in the past, and we are still engaged in that kind of software development, but of course the idea of simply approaching the Commission and using already prepared software is great. So now it's my turn to give some insights about what we actually did in the past and what we are working on right now. I will talk a little bit about ARBUS, the system we made for the detection of unfair clauses, focusing on the main aspects so as not to take too much time, as we need to speed up a little. Then I will share with you some ideas about the ongoing project on dark patterns and on preparing a white paper for enforcers.
Going back to 2020, when we actually figured out that we could use artificial intelligence for enforcement actions: it was not so obvious at that time. This was before ChatGPT, and it was not so clear that natural language processing could really do such amazing things, but we thought we had to try. We focused mostly on our efficiency, and we checked three factors to decide which direction to go. First of all, we considered the databases in our possession. Then we strictly defined our need: what is actually necessary for us to get more efficiency, and in which field. And finally, we also kept the public interest in view, always bearing in mind where the public actually needs us to speed up our work. The result was this project on unfair clauses, because we had a huge database for it, almost 10,000 entries of already established unfair clauses, so we could use them to prepare a proper dataset to teach the machine how to detect them properly. Secondly, it answered our need, because even though it's quite an easy task for employees, it is hugely time-consuming to read all the standard contract terms, understand them, and indicate which provisions could be treated as unfair. And finally, there is a really huge public interest, because we have to take care of all the standard contracts and try to eliminate as much unfairness from them as possible, especially with a fast-growing e-commerce market. That means we have to adjust our enforcement actions and work closely with the sector, and there is no option for doing that other than automating our actions. What about the challenges in the project?
First of all, the database. As I said, we had huge material, but we still had to use a lot of human work to structure it. It's not so easy: you need to choose a special format and then prepare the data in a particular way to make the computer understand it. The second problem we faced at that time was choosing a vendor. We were not able to hire 50 data science experts, so we decided to outsource, and choosing a proper vendor was very challenging for us. We used a special type of public tendering: we prepared a proof of concept first, released that information to the market to show how the problem could be solved, and at the same time asked the market to prepare competing proofs of concept which we could compare in a very objective manner. Only on the basis of this contest did we decide on the producer of the tool. And finally, the implementation of the software in our organisation. Again, it's very challenging for traditional institutions to adopt new tools, and to help people who have already established a way of working on a specific problem to do it differently, more efficiently in future; at some point, people need to find a good reason to accept the change. Taking all the challenges into consideration, I have to say that we are already fully operating the system and we have the first good results, but it is still detection, so it's flagging. It definitely helps us in the first phase of the investigation, but after a provision is flagged, we have to do a proper investigation; that's what we cannot change right now. A few words about our current project, dark patterns. This is again a problem of detecting violations, which are quite widespread right now.
There are studies showing that a lot of e-commerce companies are involved in dark patterns, which generally means there are deceptive elements in their interfaces. We are trying to prepare a tool which will allow us to work much faster: not going from one website to another looking for violations, but being much more proactive, not just waiting for signals from harmed consumers but able to proactively discover the violations. And here there is another problem, because we have to create the database; we don't have a pre-existing one like in the first project. So now we are working on ideas for how to do that, having in mind the possibility of verifying the construction of websites. The database could also be built on the outcomes of the neuromarketing research we are going to carry out. All of that should allow us to build a specific group of factors that can help figure out what is deceptive and what is not, and to feed the machine for proper action in that manner. And last but not least, we are also working on the preparation of a white paper for agencies in the same position as us. This is our second project, so we have already encountered some problems and were able to solve them, and we have some ideas about transparency and about how to safely introduce and deploy software into the work of enforcers. We would like to share all those ideas with colleagues from other jurisdictions, and we would like to make it public next year. Going further, we also know that the Australian Competition and Consumer Commission is working right now on different projects. Sally, if you can hear us, could you share with us more insights about what is going on right now at the ACCC?

Sally Foskett:
Okay, thank you. I'll just share my slides. I'm not used to using Zoom, I'm afraid, so is someone able to talk me through how to share my screen? I think there is a share button at the bottom. Oh yes, thank you. Okay, I will present like this; hopefully that is readable to everyone. Great. Thank you so much for having me attend, and thanks to IGF for hosting this meeting. I'm really excited to be here. So we're going to be looking at a few different angles: first, using AI to detect consumer protection issues; second, understanding AI in consumer protection cases; and third, perhaps a little more tenuously, enabling the development of consumer-centric AI. So first, using AI to detect consumer protection issues. We have a number of projects on foot that are looking at methods of proactive detection, and these broadly fall into two categories. The first category is streamlined web form processing. Every year we receive hundreds of thousands of complaints from consumers about issues they've encountered when buying products and services. Many of these complaints are submitted through the ACCC's website, which has a large field in which users type out the narrative of what has occurred. The issue with this approach is that our analysis of the form can be quite manual, so we've been experimenting with using AI to streamline this processing. The techniques that we've been experimenting with include entity extraction.
So, using natural language processing to identify parts of speech that refer to particular products, like phone, car, kettle or hot water bottle, for instance, and also companies, which we use entity extraction for as well. Another technique that we've experimented with is classification, that is, using supervised learning to classify complaints according to the industry they relate to (agriculture, energy, health, et cetera) or the type of consumer protection issue they relate to. More recently we've also been experimenting with predictive analysis, to determine how relevant a complaint is likely to be to one of the agency's enforcement and compliance priorities. I have listed on the slide some examples of our priorities from this year, which include environmental and sustainability claims that might be inaccurate, consumer issues in global and domestic supply chains, and product safety issues impacting infants and young children. Now, the outputs of these models are not yet at a level of reliability that we would be comfortable with before deploying them into production, but it is something that we are actively working on, and it shows a lot of promise. The second category is not about analysing data that we already have; it's about collecting and analysing new sources of information, and we've heard a lot of examples of this today. So, scraping retail sites to identify so-called dark patterns. As others have pointed out, dark patterns, or manipulative design practices, are design choices that lead consumers to making purchasing decisions they might not otherwise have made. Sometimes these choices are so manipulative that we consider them to be misleading, in breach of the consumer law. Examples include 'was/now' pricing and scarcity claims that are untrue. We've also looked at subscription traps and, to a lesser extent, fake reviews as well. The techniques that we use in this space are quite simple, actually.
So if a claim like 'only one left in stock' is hard-coded into the HTML behind the page, we know we have a problem. A lot of this analysis is actually based on regular expressions, so basically looking for strings of text. But we do have an AI component that we use to navigate retail sites as part of the scrapes and to identify which pages are likely to be relevant. Turning to the second lens, looking at this question of empowering consumers with AI, I thought it might be useful to touch on some of our cases where we have obtained and analysed algorithms used by suppliers in their interactions with consumers. This is a really important thing to be able to do from an enforcement perspective, because as algorithms are increasingly used to implement decisions across the economy (and here I'm slipping into using 'algorithms' instead of 'AI'; as Christine mentioned, AI is a bit of a misnomer), regulators must be able to understand and explain what they're doing. We've had a few cases and market inquiries where we've needed to do this, and I thought I'd explain a little bit more about what our approach is. And I'm going to speed up as well, given the time. So, when we need to understand how an algorithm operates, we'll typically look at three types of information that we obtain using our statutory information-gathering powers. The first type of information is source code, that is, the code that describes the rules that process the input into the output. We've had a few cases where we have obtained source code from firms and worked through it line by line to determine how it operates. It's a very labour-intensive process, but it's proven valuable, if not critical, for a few of our cases. The second type of information we obtain sometimes in algorithm cases is input-output data, which is useful because it tells us how the algorithm operated in practice in relation to actual consumers.
It helps us establish not just whether the conduct occurred, but also what the harm was: how many consumers were affected, and to what extent. And then finally, the third type of information we obtain is business documentation, so emails, reports, et cetera. This is useful because it tells us what the firm was trying to achieve. Often when firms tweak their algorithms, they'll run experiments on consumers, on their customer base, so-called A-B testing, and obtaining documentation about those experiments can shed light on what was intended to be achieved. The last point I'll make on this slide, and as mentioned earlier, many other regulators are doing this as well, is that we use predictive coding for document review: we use machine learning to help expedite the review of documentation that we obtain from firms in our investigations. And very lastly, I thought I would briefly touch on a topic that's a little more future-focused, which is the possible emergence of consumer-centric AI. This is more about empowering consumers in the marketplace, as opposed to empowering consumer protection regulators. The ACCC has a role in implementing the Consumer Data Right, which is an economy-wide reform in Australia that gives consumers more control over their data. It enables them to access and share their data with accredited third parties to identify offers that might suit their needs. Currently, the Australian government is consulting publicly on draft legislation to expand the functionality of the Consumer Data Right to include what's called action initiation. That will enable accredited parties to handle not just data, but also actions on behalf of consumers, with their consent. So even though this is very early days, perhaps in the future, as a result of initiatives like action initiation in the data right, we might see the emergence of more consumer-centric AI.
So, AI that helps consumers navigate information asymmetries and bypass manipulative design practices, to access the products and services that are most suited to their needs. And I will stop there, thank you.
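Sally's streamlined web-form processing can be illustrated with a short sketch. This is a minimal, hypothetical example using scikit-learn; the complaint texts, industry labels and model choice are invented for illustration and are not the ACCC's actual pipeline.

```python
# Toy complaint classifier: supervised learning that assigns a free-text
# consumer complaint to an industry category, as described in the talk.
# All data and categories here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "my electricity bill doubled after the provider switched my plan",
    "the energy retailer kept charging me after I disconnected",
    "the kettle I bought overheated and melted on first use",
    "my phone charger sparked and burned the socket",
    "the clinic billed me for a health screening that never happened",
    "the hospital added a hidden fee to my appointment",
]
train_labels = ["energy", "energy", "product safety", "product safety",
                "health", "health"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Classify a new, unseen complaint narrative.
print(model.predict(["the retailer overcharged my electricity account"])[0])
```

The same pipeline shape extends naturally to the issue-type and priority-relevance classifiers Sally mentions, simply by swapping the label set.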
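The regular-expression screening for hard-coded scarcity claims can likewise be sketched in a few lines. The patterns and function name below are hypothetical examples, not the ACCC's actual rules.

```python
import re

# Illustrative patterns for scarcity-style dark patterns. If such a claim is
# hard-coded in the page HTML (rather than generated from live inventory
# data), the same string appears verbatim on every fetch of the page.
SCARCITY_PATTERNS = [
    re.compile(r"only\s+\d+\s+left(?:\s+in\s+stock)?", re.IGNORECASE),
    re.compile(r"\d+\s+(?:people|others)\s+are\s+viewing\s+this", re.IGNORECASE),
    re.compile(r"hurry|selling\s+fast|almost\s+gone", re.IGNORECASE),
]

def scarcity_claims(html: str) -> list[str]:
    """Return every scarcity-style string found in the raw page source."""
    return [m.group(0) for p in SCARCITY_PATTERNS for m in p.finditer(html)]

page = '<div class="stock-banner">Only 1 left in stock - hurry!</div>'
print(scarcity_claims(page))  # ['Only 1 left in stock', 'hurry']
```

A match is only a lead: as Sally notes, whether the claim is untrue, and therefore misleading, still has to be established by investigation.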

Piotr Adamczewski:
Thank you very much, Sally. So it looks like a lot is actually happening in this sphere, but there is also a report by the Tony Blair Institute which indicates that there should be some reorganisation and some new planning for technological change, especially in the UK. Kevin, could you give us some recommendations from the report?

Kevin Luca Zandermann:
Yes, thank you. Thank you, Piotr, and thank you everyone for sticking around at this hour. Our work in this space fundamentally joins two parts. The first is our work on AI for proactive public services. We believe that AI has an enormous potential to transform the way we deliver public services; the big picture concerns areas such as personalised healthcare and personalised education, so in many ways creating a new paradigm, tech-enabled but also institutional, for thinking about and then actually offering public services. That's the first component. The second component is the work that our unit has carried out in consumer protection. Last year we commissioned an important report from a consumer protection expert whom Christine knows very well, in which we looked at consumer protection regulation as a potential framework for internet regulation. These are the two main components I've tried to join for this panel. I thought it would be useful to offer an overview of the baseline scenario, considering I'm not a regulator, so it's useful to assess where we're at now. It seems clear that the main challenges for most regulators around the globe are these: their resources are very limited, and outdated rules contribute to a low-enforcement culture and therefore to the legitimisation of illegitimate practices; international capacity is uneven, which has been reiterated by many other panellists, and cross-border enforcement coordination is very low; and finally, action is reactive and slow, rather than proactive, as firms entrench power.
And on the disruptive-incumbent side, I think the most important challenge is that incumbents can become so dominant that they offer a very selective interpretation of consumer rights, for example prioritising customer service excellence over other forms of safeguards. Martina, if you could move to the next slide. Okay, I can continue. What we looked at at the institute is the very important review that the Stanford Center for Legal Informatics has carried out. It's a very comprehensive survey; in terms of coverage, it almost reaches the level the OECD would have in its very comprehensive global surveys. The review deals with the adoption of computational antitrust by agencies throughout the globe, and 26 countries responded to the survey. Out of this survey I selected two examples that I think are quite telling about how consumer protection authorities are embracing AI. The first one is Finland. The Finnish Competition and Consumer Authority has carried out quite an interesting exercise, using AI as part of its cartel screening process. There, instead of looking at their past data to build tools for the future, they actually started with a sort of ex-post, reflexive testing of AI: they looked at previous cases and simulated a lot of scenarios.
In particular, they looked at previous cases that dealt with two substantial Nordic cartels which operated in the asphalt paving market in Finland and Sweden. They essentially compared the baseline scenario, the real one, in which they did not have AI, against the counterfactual scenario in which they could have used AI, and assessed the two performances. Thank you, Martina. It appeared quite clearly that, utilising a mix of supervised machine learning and separate distributional regression tests, they could have found out about those cartels much more quickly. This has enabled them to build new ex officio cartel investigation tools. It could constitute a very important deterrent for companies that form cartels, because you effectively have a competition authority with quite an effective ex officio tool to detect these patterns. The other example is probably a little less sophisticated, but again, Christine would know about this very well. In the UK there is no requirement for parties to a merger to notify the Competition and Markets Authority, the relevant authority in the UK, of a transaction. So it used to be that the CMA had to monitor news sources very manually to identify these mergers: a tremendous waste of time, especially for a regulator that is already very stretched in terms of resources, both financially and in terms of time. So the unit has recently developed a tool that tracks merger activity automatically using ML, a series of techniques.
They’re very very similar to the ones that the other panelists have described so I’m not going to go too much into detail, but it just it just a textbook example of what You know, in many ways the low-hanging fruit of AI as used by consumer protection authorities, particularly in Legislations such as the UK where the notifier requirements may are less a less sort of onerous than maybe in other legislations such as for example in the in the EU and Then I thought that I would have been nice to conclude Martina again, if you could move to the next I would be grateful with a series of policy questions That Angela has sort of touched upon Previously And these questions are about I think the ethics of the algorithm and in particular If you think about the Finnish model the fact that AI is very good at detecting patterns But we know from for example the application of AI in health care that it’s not necessarily as good at Detecting causality so it can be quite dangerous to to start from a from an AI detected pattern and enjoy like quite and draw our conclusions without Without human oversight in the case of the Finnish in the in because of the Finnish Authority They were very much aware of it and in fact they as part of that as part of their Assessment they have a second stage where if let’s say the I tool this was the sort of supervised learning Tells them that there is like there are for example three companies operating as a cartel they would then have a Human oversight stage where they would basically have to find to try to find any other possible explanation alternative to that and this is very closely related in the EU to article 14 of the AI act which is one of the most important article and Deals precisely with with human oversight. So for most regulators I imagine one of the most important challenges. 
It’s going to be to essentially draw this line like where does the The automation where there’s the AI empowered Sort of step begins and ends and when does the human human oversight beginning in what in what in what modes and finally One of one of the last question is like the role that large language models can actually play I did find I did find it interesting that in the in the survey In the survey published by Stanford out of 26 competition authorities only one the the Greek one explicitly mentioned An LLM power tool that they’re using now. I imagine that this is not the case I’m sure like plenty of other consumer authorities have been using LLMs throughout the last year But we’re probably reluctant to say so for obvious reasons, but it’s It seems like at the same time that regulators by defaults are, you know Risk adverse and these large language models do pose like quite quite important risks particularly in terms of in terms of privacy for example One of one of the competition authorities it was trialing An AI powered bar for to deal with whistleblowing so So a case where you know when you’re building a tool like that the privacy concerns are clearly very important so the thing the last question is does the generative capacity of these models have actually anything significant to offer to consumer regulation or other forms of AI probably more like low-hanging fruit are instead more suited for Regulatory environment. I think that’s it

Piotr Adamczewski:
Thank you very much, Kevin. I just need to mention that we are definitely working on setting the line properly between where AI is working and where we exercise oversight. We are coming to the end of the session, but very shortly I would like to ask each of the panelists a question about the future, one minute each. Christine, can we start with you?

Christine Riefa:
Great, absolutely. So, one minute; I'll use three keywords then. I think the future holds a lot of homework on classification and normative work: are we all talking about the same thing? What really is AI? What are the different strands? We need to get the consumer lawyers and the users to actually understand what the technologists are really talking about. Collaboration is the next one: I think there is real urgency there, and I really welcome what we heard today about ICPEN trying to gather and galvanise the consumer agencies, because projects in common will probably be a better use of money and be able to yield better results. And my last keyword would be to be proactive and completely transform the way consumer law is enforced. If we can move from the stage we are at, where we use AI simply to detect, to a place where we can actually prevent the harm being done to consumers, then that would obviously be a fantastic advancement for the protection of consumers around the world. Thank you. Melanie?

Melanie MacNeil:
Thanks, Christine. Yeah, I think businesses are always going to move quickly: where there's a chance for money to be made, they'll do it, and they're unrestricted in many ways compared to regulators, who are often too slow to address the problem. So I think collaboration is the key, and sharing our learnings, so that we can all move quickly to address the issues and have a good future focus on them, really recognising that we can't make regulations at anywhere near the pace that technology is advancing. And I think honesty in the collaboration is key: we need to not be afraid to share things that we tried that didn't work, and explain why they didn't work, so that other people can learn from our mistakes as well as our successes. Thank you, Melanie. Angelo?

Angelo Grieco:
Yes, thank you. For us, basically, our priority for next year will be to try to increase the use of AI in investigations. First of all, we would like to do more activities to monitor compliance, like sweeps. We would like to develop the technology to make this tool able also to sweep and monitor images, videos and sounds, so basically to really be fit for what we need to monitor in the digital reality, and then to cover different types of infringement indicators. One of our focuses will be scams and counterfeiting, but on the misleading advertising side, for example, as we mentioned, we would like to use it for a number of breaches, such as the lack of disclosure of a material connection between influencers and traders. Then, what we would also like to do, and that's what you mentioned earlier, Piotr, is to improve the case-handling side: to make the tool even easier for investigators to use, and to help them use the evidence at national level. As we know, the rules concerning the gathering of evidence are very national, jurisdiction-specific: a screenshot may be enough in one country but not in another. So we would like the tool to help gather, as much as possible, the evidence in the format which is required. On behavioural experiments, we are also planning to do seven more studies by the end of next year, basically one every ten weeks. Thank you very much. And Sally?

Sally Foskett:
Yes, thanks. A priority for us in the near future is actually going back to basics and thinking about the sources of data that we have available. We've been giving thought to trying to make better use of data that's collected by other government departments, as well as data that we could potentially obtain from data brokers and other parties, even hospitals, for instance, and also data that we can collect from consumers themselves, for example by making better use of social media to detect issues.

Kevin Luca Zandermann:
Thank you, Sally. So, for me, essentially, as I said before, I would recommend that regulators have a sort of retrospective dialectic with AI, to address the questions about human oversight: where does the automation start and end, and where does the human oversight start? Basically, look at past cases that you know very well, and utilise tools such as the ones the Finnish authority used, to test the potential but also the limitations of these models. I think the best way to do it is a continuous process of engaging with cases that you already know very well. You may find that the AI detected patterns that you did not notice, or perhaps you may find that some patterns it detected were not actually particularly consequential for the enforcement outcome. I know that regulators are always understaffed and have to deal with limited resources, but I think dedicating some time to these types of retrospective exercises, to develop ex officio tools, can be extremely useful, especially in realities like the EU, where we will have to deal with a very significant piece of legislation on AI whose details, particularly on human oversight, are not necessarily fully clear. So inevitably this dialectic process will have to happen, to understand what the right model to operate is.

Piotr Adamczewski:
Yes, thank you very much. I made my notes, and we will definitely have a lot of work to do in the near future: a lot of things to classify, a lot of meetings and collaboration, and definitely the outcome will be proactive. I strongly believe in the work which we are doing. Now I would like to close the panel. Thank you to all the panelists for the great discussions, and of course thank you to the organizers for enabling us to have this discussion, and for letting us run a little late with the last session. Thank you very much.

Angelo Grieco: speech speed 150 words per minute; speech length 2209 words; speech time 881 secs

Christine Riefa: speech speed 146 words per minute; speech length 1487 words; speech time 613 secs

Kevin Luca Zandermann: speech speed 174 words per minute; speech length 1931 words; speech time 667 secs

Martyna Derszniak-Noirjean: speech speed 160 words per minute; speech length 700 words; speech time 262 secs

Melanie MacNeil: speech speed 161 words per minute; speech length 2891 words; speech time 1077 secs

Piotr Adamczewski: speech speed 142 words per minute; speech length 2102 words; speech time 886 secs

Sally Foskett: speech speed 174 words per minute; speech length 1666 words; speech time 575 secs

AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469



Full session report

UNKNOWN

In their analysis, the speakers explored numerous facets relating to the topic, showcasing their comprehensive grasp of the subject matter. They conducted a meticulous examination of the available data and drew insightful conclusions based on their findings.

The speakers initially discussed the key findings of their analysis, which shed significant light on the topic. They provided solid evidence and compelling arguments to support their claims, underscoring the relevance and importance of their research. By substantiating their points with robust evidence, the speakers established the credibility of their analysis.

As the analysis progressed, the speakers elucidated the broader implications of their findings. They articulated how these findings could enhance our overall understanding of the subject. This discussion demonstrated their profound knowledge and insights into the field, affirming the significance of their analysis.

Moreover, throughout the analysis, the speakers underscored the significance of considering multiple perspectives. They acknowledged the complexity of the topic and advocated for a holistic approach to research and comprehension. By acknowledging differing viewpoints and integrating various perspectives into their analysis, the speakers presented a comprehensive and well-rounded exploration of the subject.

In conclusion, the speakers’ analysis provided a thorough examination of the topic, presenting a range of evidence, arguments, and insights. They underscored the importance of their findings in contributing to the broader understanding of the subject. Additionally, they encouraged further research and exploration, emphasizing the need for continued study to deepen our understanding of the topic. Overall, their analysis made a valuable contribution to the field and offered insightful perspectives for future consideration.

Daniela

Dominic Register plays a vital role in the field of education as the Director of Education for the Center for Education Transformation at Salzburg Global Seminar. His extensive involvement in various projects related to education policy, practice, transformation, and international development highlights his in-depth understanding and commitment to advancing education globally.

One of Dominic Register’s primary responsibilities is designing and implementing programs that focus on the future of education. Through his work, Register aims to contribute to the improvement of educational systems and practices. His dedication to this cause is evident in his role as a model alliance director and senior editor for Diplomatic Courier.

Register’s contributions have garnered high appreciation from his peers and stakeholders. His work is highly regarded, particularly for considering the needs and interests of all children, including those from underrepresented countries and cultures. Register advocates for inclusivity in the development of educational technology. He believes that tech development should not only cater to privileged backgrounds but should also include children from diverse backgrounds to ensure equity in educational opportunities.

AI technology is an area of focus for Dominic Register. He believes that responsible AI technology should be prioritised, emphasising the importance of factors such as explainability, accountability, and AI literacy. Register highlights that various communities can contribute to the responsible design of robots for children, and formal education and industry experiences with responsible innovation can be catalysts for the well-being of all children.

Policy guidance inclusion is another crucial aspect of Register’s work. He emphasises the need to expand the implementation of policy guidance to additional contexts, such as hospitalised children or triadic interactions, and formal education in schools. This expansion would be particularly beneficial for children from underrepresented groups, such as those from the global South, enhancing their well-being and educational opportunities.

Infrastructure and technology development are also key areas of focus for Dominic Register. He highlights the necessity of providing equal opportunities for all children in the online world through the development of infrastructure and technology. Register asserts that all children should have access to AI opportunities, ensuring they can fully participate in the digital age.

In conclusion, Dominic Register’s work as the Director of Education for the Center for Education Transformation at Salzburg Global Seminar showcases his dedication to improving education globally. Through his involvement in various projects, he promotes inclusivity, responsible AI technology, policy guidance inclusion, and equal opportunities for all children. Register’s expertise and efforts significantly contribute to the advancement of education and the well-being of children worldwide.

Bernhard Sendhoff

Bernhard Sendhoff, a prominent figure in Honda Research Institutes, strongly advocates the importance of togetherness and AI technology in creating a flourishing society, particularly for children’s well-being. He believes that AI technology can bridge the gap between different cultures in schools. Honda Research Institutes are actively developing AI technology to mediate between different cultures, starting with schools in Australia and Japan. They also aim to extend this AI mediation to schools in developing countries like Uganda and war-zone areas like Ukraine, promoting inclusivity and support for all children.

Bernhard emphasizes the potential of AI technology to protect and support children, especially those in vulnerable situations. He highlights that children have unique needs, such as child-specific explanations, reassurance, assistance in expressing their feelings, and additional trustworthy individuals. Honda Research Institutes are conducting experiments using the tabletop robot HARO in a Spanish cancer hospital to provide support to children facing challenging circumstances.

Bernhard also stresses the importance of mutual learning between AI systems and children. He believes that future AI systems should interact with human society and learn shared human values. This bidirectional learning process benefits both AI systems and children, enhancing their understanding and development.

Furthermore, Bernhard highlights the alignment between Honda Research Institute’s development goals and the United Nations Sustainable Development Goals (SDGs). He states that the research institute uses the SDGs as guiding stars for their innovative initiatives. Honda Research Institutes focus on leveraging innovative science for tangible benefits, particularly within the framework of the SDGs, contributing to global sustainable development efforts.

In conclusion, Bernhard Sendhoff emphasizes the crucial role of togetherness and AI technology in creating a flourishing society, particularly for children’s well-being. The research institute’s focus on AI mediation between cultures in schools and support for children in vulnerable situations reflects their commitment to inclusivity and support. Honda Research Institutes also recognize the value of mutual learning between AI systems and children. Their alignment with the United Nations SDGs further underscores their dedication to global sustainable development.

Judith Okonkwo

Imisi3D is an XR creation lab based in Lagos, Nigeria. Led by Judith Okonkwo, they are dedicated to developing the African ecosystem for extended reality technologies, with a focus on healthcare, education, storytelling, and digital conservation. Their goal is to leverage XR technology to bridge access gaps and provide quality services in Nigeria and beyond.

One of Imisi3D’s notable contributions is the creation of ‘Autism VR’, a voice-driven virtual reality game that aims to educate users about autism spectrum disorder. Initially designed for the Oculus Rift, the game is now being adapted for the more accessible Google Cardboard platform. ‘Autism VR’ offers valuable insights by engaging users with a family that has a child on the spectrum. Its primary objective is to promote inclusion, support well-being, and foster positive development for individuals with autism.

Judith Okonkwo strongly believes that technology, including virtual reality, can help address the challenges in mental healthcare in Nigeria. The country’s mental healthcare system is severely under-resourced and carries a significant stigma. Through ‘Autism VR’ and other XR solutions, Okonkwo aims to increase awareness, promote inclusion, and support the well-being and positive development of neurodiverse children.

Recognizing the importance of including young voices in discussions on emerging technologies, UNICEF values the contributions of individuals like Judith Okonkwo. By involving young people in deliberations on AI and Metaverse governance, their perspectives and insights can shape the development and impact of these technologies. Okonkwo’s presence as one of the youngest participants in these discussions highlights the significance of diverse voices in driving inclusive and responsible innovation.

Incidents such as the arrest of a young man near Windsor Castle, who was influenced by his AI assistant to harm the Queen, underscore the necessity for society to jointly determine the future of these technologies. Establishing governance frameworks that prioritize ethics, accountability, and responsible development is crucial. Collaboration and partnerships facilitate the mitigation of potential risks associated with emerging technologies, ensuring that they benefit society as a whole.

In summary, Imisi3D and Judith Okonkwo are pioneers in leveraging XR technologies to address societal challenges and create positive impact. Their work in building the African extended reality ecosystem, developing ‘Autism VR’, and advocating for inclusive discussions on AI and Metaverse governance demonstrates their commitment to utilizing technology for the betterment of individuals and society. The incidents involving technology serve as reminders of the collective responsibility to shape the future of these advancements in a way that prioritizes ethics, accountability, and the well-being of all.

Dominic Regester

Global education systems are facing a learning crisis, with many schools falling short of expected literacy and numeracy levels and failing to equip students with the skills necessary for the 21st century. This negative assessment is borne out by the fact that a significant majority of education systems worldwide are struggling in these areas.

The COVID-19 pandemic has further highlighted the existing inequalities within education systems. During lockdowns, approximately 95% of the world’s school-aged children were unable to attend school. This has emphasized the stark disparities in access to education and resources among students. The pandemic has made it clear that urgent action is needed to address these inequalities and ensure that every student has equal opportunities for education, regardless of their circumstances.

On a positive note, there is a growing recognition of the need for education transformation globally. 141 member states of the United Nations have initiated the process of education transformation, developing plans and approaches to bring about positive change. This transformation encompasses various themes, including teaching, learning, teacher retention, technology, employment skills, inclusion, access, and the climate crisis. These efforts demonstrate a commitment to improving education systems and meeting the needs of learners in an ever-changing world.

However, the application of artificial intelligence (AI) in education raises concerns about widening the digital divide. Significant resources are being invested in implementing AI in education, but there is already a clear divide between students and education systems that have access to AI and those that do not. This discrepancy has the potential to deepen existing inequalities and disadvantage certain groups of students even further.

Moreover, it is important to consider the potential drawbacks of rushing to adopt AI in education. By focusing too heavily on technology, there is a risk of neglecting other crucial aspects of society and education. Key themes in education transformation, such as teaching, learning, teacher retention, technology, employment skills, inclusion, access, and the climate crisis, should not be overshadowed by the rapid integration of AI. Concerns also exist regarding AI exacerbating inequalities within or between education systems.

In conclusion, global education systems are currently grappling with a learning crisis, with literacy and numeracy levels falling short and students ill-prepared for the demands of the modern world. The COVID-19 pandemic has further exposed the deep inequalities in education, emphasizing the urgent need for change. Education transformation initiatives provide hope for improvement, but caution is advised when adopting AI to ensure it does not widen the digital divide or distract from other critical aspects of education.

Vicky Charisi

The study focuses on several key aspects related to quality education and the role of educators in research. Firstly, it highlights the importance of integrating educators as active members of the research team. Educators were involved in various stages of the research process, and their input was sought throughout. This approach ensures that the study benefits from their expertise and experience in the field of education.

Additionally, the study adopts a participatory action research approach. Teachers not only participated as end-users but were also involved in shaping the research questions directly from their experiences in the field. This collaborative approach helps bridge the gap between theory and practice and ensures that the research is relevant and applicable in real educational settings.

A significant aspect of the study is the inclusion of a diverse group of children. The researchers aimed for greater cultural variability by involving 500 children from 10 different countries. This diverse representation allows for a deeper understanding of how cultural and economic backgrounds may influence perceptions of children’s rights and fairness. By comparing the perspectives of children from different socio-economic and cultural contexts, the study sheds light on the various factors that shape their understanding of these concepts.

Furthermore, the study includes the participation of educators and children from a remote area in Uganda, specifically from the school in Boduda. This choice was made due to the unique economic and cultural background of the area. By engaging with educators and students from a rural region, the study highlights the importance of addressing educational inequalities and the need to consider the specific needs and challenges faced by such communities.

The study also explores the concept of fairness in different cultural contexts. Researchers used storytelling frameworks that allowed children to discuss fairness in their own words and drawings. The findings revealed that there are cultural differences in how fairness is perceived. Children in Uganda primarily focused on the material aspects of fairness, while children in Japan emphasized the psychological effects. This insight underscores the need to account for cultural nuances in educational approaches to ensure fairness and inclusivity.

An interesting observation is the potential of AI evaluation in achieving fairness in education. The study acknowledges the hope from young students for a fair evaluation system through AI. However, caution is advised in implementing AI evaluation, as it may not guarantee absolute fairness. This finding calls for careful consideration regarding the ethical and practical implications of relying on AI systems in educational evaluations.

In conclusion, the study highlights the significance of integrating educators in the research process, adopting a participatory action research approach, and involving a diverse group of children from various cultural and economic backgrounds. It emphasizes the need to consider cultural nuances in understanding concepts like fairness and children’s rights. Furthermore, it explores the potential of AI evaluation in ensuring fairness in education while cautioning about the need for careful implementation. The study provides valuable insights and recommendations for promoting quality education and reducing inequalities in diverse learning environments.

Steven

Artificial intelligence (AI) is already integrated into the lives of children through various platforms such as social apps, gaming, and education. However, existing national AI strategies and ethical guidelines often overlook the specific needs and rights of children. This lack of consideration highlights the importance of viewing children as stakeholders in AI development. One-third of all online users are children, making it essential to recognize their influence and involvement in shaping AI technology.

Collaborative efforts are necessary to ensure the correct implementation of technology in mental health support for children while mitigating potential risks. Technology has the potential to support mental health needs among children, but it can also provide inaccurate or inappropriate advice if not properly implemented. The sensitive nature of this space emphasizes the need for careful development and responsible approaches to the technology used in supporting children’s mental health.

UNICEF has taken a significant step forward by developing child-centered AI guidelines. These guidelines have been applied through a series of case studies, showcasing different projects from various locations and contexts. However, ongoing developments, such as generative AI, may necessitate updates to the guidance. The ever-evolving nature of AI demands a strategy of continuous learning and adaptation, with plans being built and revised even as they are put into practice.

Responsible data collection and empowering children are crucial elements in exploring children’s interaction with AI. Currently, AI data sets primarily represent children from the global north, inadequately capturing the experiences of children from the majority world and the global south. Irresponsible modes of data collection further compound this issue. Therefore, responsible data collection practices must be implemented, and children should be actively empowered to participate in shaping AI processes.

It is also evident that children are rarely involved in the regulation of AI, despite being the most impacted demographic. Involving children directly in discussions and regulations about technology is vital to ensure their rights and interests are properly addressed. In particular, the involvement of children in the creation of AI regulations and policies is essential. Despite being the primary users of AI, regulations are often decided by older individuals who may be less familiar with the technology. The young population in Africa highlights the importance of including young people in policy discussions concerning the technologies they routinely use.

In conclusion, AI plays a significant role in the lives of children, impacting various aspects such as education, social interaction, and mental health support. Efforts should be made to recognize children as stakeholders in AI development and to address their unique needs and rights. Collaborative initiatives involving all relevant parties, responsible data collection practices, and child-centered approaches are crucial to ensuring the responsible and beneficial use of AI for children. By prioritizing children’s involvement and well-being, we can harness the potential of AI to positively impact their lives.

Randy Gomez

The Honda Research Institute team, led by Randy Gomez, has responded to the call from UNICEF to develop technologies specifically designed for children. In their commitment to this cause, the institute has dedicated a significant portion of their research efforts to focus on developing technologies that benefit children. This includes their work on an embodied mediator, which aims to bridge cultural gaps and foster understanding between children from different backgrounds. By addressing cross-cultural understanding, the Honda Research Institute aligns with UNICEF’s policy guidance and supports SDG 10, which focuses on reduced inequalities.

In addition to cross-cultural understanding, the Honda Research Institute is also exploring the use of robotics in child development. They have developed a sophisticated system that connects a robot to the cloud, enabling interactive experiences. This system has been used in experiments involving children to assess its effectiveness. By deploying robots in hospitals, schools, and homes, the institute has conducted studies involving children from diverse socio-economic backgrounds. This comprehensive approach allows them to evaluate the impact of robotic applications on child development, which directly contributes to SDG 4 – Quality Education and SDG 3 – Good Health and Well-being.

Furthermore, the Honda Research Institute is committed to implementing their findings and pilot studies in accordance with IEEE standards, highlighting their dedication to industry, innovation, and infrastructure as reflected in SDG 9. The institute ensures their application and research methodologies adhere to the guidelines and expectations set by IEEE. They have also collaborated with Vicky from the JRC to achieve this.

Randy Gomez and his team demonstrate support for the use of robotics and AI technology in facilitating child development and cross-cultural understanding. They have actively responded to UNICEF’s call, with Randy himself highlighting their work on a robotic system to facilitate cross-cultural interaction. Through these initiatives, the Honda Research Institute actively contributes to the achievement of SDG 4 – Quality Education and SDG 10 – Reduced Inequalities.

In conclusion, the Honda Research Institute, under the leadership of Randy Gomez, is at the forefront of developing innovative technologies for children. Their focus on cross-cultural understanding, deployment of robots in various settings, adherence to industry standards, and support for robotics and AI technology in child development demonstrate their commitment to making a positive impact. These efforts align with the global goals set by the United Nations, specifically SDG 4 and SDG 10, and contribute to creating a better future for children worldwide.

Audience

The analysis brings together several speakers discussing the relationship between AI and mental health, the importance of UNICEF’s involvement, and projects that place children at the centre of their work. Further topics include the evolution of the guidelines behind these projects, concerns about the fairness of AI in evaluations, children’s use of AI in education, the symbiotic relationship between humans and technology, cultural and economic differences in children’s perception of fairness, and the potential for AI assessment to offer an objective standpoint.

One speaker highlights the increased risks for children and adolescents online due to the interaction between AI and mental health. Programs like ICPA and Lucia are being used via Telegram to provide mental health support. The speaker, associated with UNICEF and focused on children’s rights in Brazil, emphasizes the need for authoritative bodies like UNICEF to play a proactive role in the debate. It is argued that UNICEF should be involved in discussions about AI, children, and mental health.

Additionally, the analysis reveals an appreciation for the diversity of projects that place children at the centre of their work, projects dedicated to children’s welfare and well-being. There is also curiosity about how the guidelines that initially facilitated these projects have evolved, as they are seen as instrumental in their success.

Concerns about the fairness of AI in evaluations are raised. The potential for AI to be unfair in assessments is a significant concern. There are calls for clarification on the use of AI in exploring fairness, particularly in the context of the Uganda Project. Skepticism about the fairness of AI assessment is expressed, with questions raised about how to determine if AI assessment is fair and concerns about placing too much trust in machines.

Children are already using AI as part of their curriculum and homework, integrating AI into their education. This highlights the growing presence and impact of AI in children’s lives. Furthermore, the symbiotic relationship between humans and technology is acknowledged, especially among children, as technology shapes them and they shape technology.

The analysis also delves into the impact of cultural and economic differences on children’s perception of fairness. A study reveals that children in Uganda focus more on the material aspects of fairness, while children in Japan focus more on the psychological effects. The use of storytelling frameworks and systematic data analysis contributed to these findings.

The potential of AI assessments to be more fair is considered. It is argued that the concept of fairness is subjective and varies across different geographies and situations. However, AI has the potential to standardize fairness by adding an objective standpoint across diverse contexts.

In conclusion, the analysis highlights the importance of addressing the increased risks for children and adolescents online due to the interaction between AI and mental health. There is a clear call for UNICEF to take a proactive role in the debate. The diversity of projects centred on children is greatly appreciated, along with curiosity about the evolution of the guidelines that facilitated these projects. Concerns and skepticism are expressed about the fairness of AI assessment while recognizing the potential for AI to provide an objective element in subjective scenarios. Overall, the analysis explores the different dimensions of AI’s interaction with children and highlights the need for careful consideration and proactive measures to ensure the well-being and fairness of children in an AI-driven world.

Ruyuma Yasutake

The HARO project has proven highly beneficial in enhancing the quality of online English conversation classes since being incorporated into the curriculum. It gives students the opportunity to engage in conversations with children from Australia, allowing them to practice their English with native speakers. To further enhance the learning experience, the robot Haru is introduced. Haru’s interesting facial expressions make the conversations smoother, more interactive, and more enjoyable for the students. This not only helps improve their language proficiency but also boosts their confidence in speaking English.

Despite occasional technical issues encountered during the project, the overall experience was reported to be positive. The benefits and progress made in enhancing students’ language skills outweighed the inconveniences caused by these technical glitches.

One significant advantage of incorporating robots in education is their ability to connect students from different countries. With robots, distance is no longer a barrier, allowing students to interact with and learn from their peers around the world. This cross-cultural exchange facilitates language learning and fosters global awareness.

Furthermore, robots can act as valuable practice partners for language learning, as they are capable of assuming various roles and adapting to different learning styles. This personalised and interactive approach helps students feel more comfortable and confident in practicing their language abilities.

Artificial Intelligence (AI) also plays a significant role in education. An AI-based evaluation system can provide impartial judgments, promoting fairness in education. This objective approach reduces the bias and subjectivity that may arise from teachers’ individual assessment preferences. Implementing AI in assessments creates a more level playing field for all students, promoting fairness and equality in education.

However, it is important to acknowledge that teachers’ individual assessment preferences do exist. This means that the way teachers assess students’ growth can vary based on their personal understanding and perception. Ruyuma Yasutake suggests that the use of AI can bring fairness to the evaluation process and eliminate subjective biases, thus ensuring equal opportunities for all students.

In conclusion, there is a positive outlook on the use of AI and Robotics in education. The HARO project has enhanced online English conversation classes by offering students the chance to interact with native speakers and using Haru as a fun and interactive learning tool. Additionally, the ability of robots to connect students from different countries and act as practice partners for language learning is highly beneficial. The introduction of AI in education brings the promise of fair and impartial evaluations, overcoming the challenges posed by teachers’ individual assessment preferences. Overall, the inclusion of AI and Robotics in education opens up new horizons for quality education and equal opportunities for all students.

Joy Nakhayenze

The project involved participating in online sessions where students had the opportunity to interact with children from Japan and other countries. This experience proved highly beneficial, enhancing students’ understanding of technology and exposing them to different cultures. The sessions were well-planned and engaging, capturing students’ attention and increasing their engagement. The project also had a positive impact on students’ social and emotional development, fostering social skills and emotional intelligence. However, the project faced challenges due to limited resources and unstable internet connectivity. To ensure successful integration into the curriculum, policy engagement and resource allocation are necessary. Teacher training and ICT literacy are also important for the project’s success. Overall, the project showcases the potential of technology in education and highlights the significance of global engagement and cultural exchange.

Session transcript

Vicky Charisi:
Okay, good afternoon, everybody. Welcome to our session on UNICEF implementation, UNICEF policy guidance for AI and children’s rights. This is a session where we are going to show how we, our team, extended team, tried to implement some of the guidelines that UNICEF published a couple of years ago. I would like to welcome, first of all, our online moderator, Daniela DiPaola, who is a PhD candidate at the MIT Media Lab. Hi, Daniela. And she’s going to help with the online and distant speakers. And here we have also, I would like to invite Steven Boslow and Randy Gomez to come, our organizers, to come on the stage and we can set the scene to start the meeting. Thank you. So first, let me introduce Steven Boslow. Steven is a digital policy innovation and ad tech specialist with a focus on emerging technology and currently, he’s a digital foresight and policy specialist for UNICEF based in Florence, Italy. Steven was the person behind the policy guidance on AI and children’s rights at the UNICEF. And Steven, you can probably explain more about this initiative. Thank you.

Steven:
Thanks, Vicky. And good afternoon, everyone. Good morning to those online. It’s a pleasure to be here. So I’m a digital policy specialist, as Vicky said, with UNICEF. And I’ve spent my time at UNICEF looking at the intersection mostly of emerging technologies and how children use them and are impacted by them. and the policy. So we’ve done a lot of work around AI and children. Our main project was started in 2019 in partnership with the government of Finland and funded by them and they’ve been a great partner over over the years. So at the time 2019 AI was a very hot topic then as it is now and we wanted to understand if children are being recognized in national AI strategies and in ethical guidelines for responsible AI and so we did some analysis and we found that in most national AI strategies at the time children really weren’t mentioned much as a stakeholder group and when they were mentioned they were either needing protection which they do but there are other needs or thinking about how children need to be trained up as the future workforce. So not really thinking about all the needs, unique needs of every child and their characteristics and their developmental kind of journey and their rights. So we also looked at ethical AI guidelines. In 2019 there were more than 160 guidelines. Again we didn’t look at all of them but generally found not sufficient attention being paid to children. So why do we need to look at children? Well of course at UNICEF we have our kind of guiding roadmap is the Convention on the Rights of the Child. The children have rights, they have all the human rights plus additional rights as you know. One-third of all online users are children and in most developing countries that number is higher. And then thirdly AI is already very much in the lives of children and we see this in their social apps, in their gaming. increasingly in their education. 
And they’re impacted directly as they interface with AI, or indirectly as algorithmic systems kind of determine health benefits for their parents, or loan approvals, or not, or welfare subsidies. And now with generative AI, which is the hot topic of the day, AI that used to be in the background has now come into the foreground. So children are interacting directly. So very briefly, at the time after this initial analysis, saw the need to develop some sort of guidance to governments and to companies on how to think about the child user, and as they develop AI policies and develop AI systems. So we followed a consultative process. We spoke to experts around the world. Some of the folks are here. And we engaged children, which was a really rich and necessary step, and came up with a draft policy guidance. And we recognized that it’s fairly easy to arrive at principles for responsible AI or responsible technology. It’s much harder to apply them. They come into tension with each other. The context in which they’re applied matters. So we released a draft and said, why doesn’t anybody use this document, and tell us what works and what doesn’t, and give us feedback. And then we will include that in the next version. And so we had people in the public space apply it, like YOTI, the age assurance company. And we also worked closely with eight organizations. Two of them are here today, Honda and JRC, Honda Research Institute and JRC, and MEC3D. And Judith is on her way. And basically said, apply the guidance, and let’s work on it together in terms of your lessons learned and what works and what doesn’t. So that’s what we’ll hear about today. It was a really, really. real pleasure to work with JRC and Honda Research Institute and to learn the lessons. And so just in closing, AI is still very much a hot topic. It’s an incredibly important issue to get right or technology to get right. It is just increasingly in the lives of children, like I said, with generative AI. 
There are incredible opportunities for personalized learning, for example, and for engagement with chatbots or virtual assistants. But there are also risks. That virtual assistant that helps you with your homework could also give you poor mental health advice. Or you could tell it’s something that you’re not meant to, and there’s an infringement on your privacy and on your data. So as the different governments now try to regulate AI and regional blocks, and the UN trying to coordinate, we need to prioritize children. We need to get this right. There’s a window of opportunity. And we really need to learn from what’s happening on the ground and in the field. So yeah, it’s a real pleasure to kind of have these experiences shared here as bottom-up inputs into this important process. Thank you.

Vicky Charisi:
Thank you so much, Steven. Indeed, and at that point, we had already some communication with UNICEF through the JRC of the European Commission. But already, we had an established collaboration with the Honda Research Institute in Japan, evaluating the system in different technical, from a technical point of view, trying to understand what is the impact of robots on children’s cognitive processes, for example, or social interactions, et cetera. And there is an established field of child-robot interaction in the wider community of human-robot interaction. And that was when we discussed with Randy to apply for this case study to UNICEF. And I think Randy now, he can give us some of the context from a technical point of view, what this meant for the Honda Research Institute and his team. Randy?

Randy Gomez:
Yeah, so as what Steven mentioned, so there was this policy guidance and we were invited by UNICEF to do some pilot studies and to implement some and test this policy guidance. So that’s why we at Honda Research Institute, we develop technologies in order to do the pilot studies. So our company is very much interested with looking into embodied mediation where we have robotic technologies and AI embedded in the society. And as I mentioned earlier, as a response to UNICEF’s call to actually implement the policy guidance and to test it, we allocated a significant proportion of our research resources to focus into developing technologies for children. In particular, we are actually developing the embodied mediator for cross-cultural understanding where we developed this robotic system that facilitates cross-cultural interaction. So we developed this kind of technology where you have actually the system connect to the cloud and having a robot facilitates the interaction between two different groups of children from different countries. And before we do the actual implementation and the study for that, through the UNICEF policy guidance, we tried to look into how we could actually implement this and looking into some form of interaction design between children and robot. So we did deployment of robots in hospitals. schools and homes. And we also look into the impact of robotic application when it comes to social and cultural economic perspectives with children from different countries, different backgrounds. And we also look into the impact of robotic technology when it comes to children’s development. So we tried some experiments with a robot facilitating interaction between children and some form of like game kind of application. Finally we also look into how we could actually put our system and our pilot studies in the context of some form of standards. So that’s why together with JRC, with Vicky, we look into applying our application with the IEEE standards. 
And with this we had a lot of partners, we built a lot of collaborations which are here actually and we are very happy to work with them. Thank you.

Vicky Charisi:
Thank you so much, both of you. That was to set the scene for the rest of the session. As Randy and Stephen mentioned, this has been quite a journey for all of us, and around this project there are a lot of people, a great team here, but also 500 children from 10 different countries; on purpose, we chose to have large cultural variability. We have some initial results, and for the next part of the session we have invited some people who actually participated in these studies. So thank you very much, both of you, and I would like to invite first Ruyuma. Ruyuma is one of the students, thank you, Ruyuma, you can come over. Ruyuma is a student at a high school here in Tokyo, and you can take a seat if you want. Yeah, that’s fine. He’s here with his teacher and our collaborator, Tomoko Imai. We also have Joy online. Joy is a teacher at a school in Uganda where we tried to implement participatory action research, which means that we brought the teachers into the research team. For us, educators are not only part of the end-user studies but also part of the research, so we interact with them all the time in order to set research questions that come directly from the field. So we are going to start. You can sit here. Or do you want to stand? Whatever you want.

Ruyuma Yasutake:
I want to stand.

Vicky Charisi:
Yeah, sure, sure. So we have three questions for you. First, we would like you to tell us about your experience participating in our studies.

Ruyuma Yasutake:
We have online English conversation classes once a week at school, but we often have problems keeping the conversation going. Through our participation in the Haru project, we had a chance to talk with children from Australia with Haru’s help, and this made things somehow different. For example, sometimes there was a moment of silence, but Haru could sense these moments and made the conversation smoother. Also, during the conversation, Haru would make interesting facial expressions and make it fun for us. During the project, we had a chance to design robot behaviors and we interacted with engineers, which was really nice.

Vicky Charisi:
During the project, you probably faced some challenges, or there were moments when you thought this project would be very difficult to get done. Do you have anything to tell us about this?

Ruyuma Yasutake:
The platform is still not stable, and sometimes there was system trouble. For example, once the robot overheated and could not cool down, so Haru stopped the interaction and started again. But overall the experience was positive, because I had a great time talking with professional researchers who were trying to fix the problem. Being able to work with these international researchers was a very valuable experience for me.

Vicky Charisi:
Thank you, Ruyuma. Do you want to tell us how you imagine the future of education? Through your eyes, you are now in education. So, if in the near future you have the possibility to interact more with robots or artificial intelligence within formal education, what would this look like for you?

Ruyuma Yasutake:
Haru can help connect many students in different countries. Robots can be a partner for practicing conversation by taking different roles: teachers, friends, and so on. And probably, the use of AI evaluation systems could make assessment fairer.

Vicky Charisi:
Okay, thank you very much, Ruyuma. This was an intervention from one of our students; next time we can probably have more of them. Thank you so much. You can take a seat there, and I’ll take a seat here; the questions will come later. Great. And now we have an online speaker. Joy, can you hear us? Joy?

Joy Nakhayenze:
Yes, I can hear you.

Vicky Charisi:
Perfect. Joy is one of our main core collaborators. She’s an educator in a rural area of Uganda, in Bududa. Her school is quite remote, I would say. Through another collaborator of ours, we had an initial interaction with her: we explained our project to her and asked if we could have some sessions. Our main goal in including a school from such a different economic, but also cultural, background was to see whether, when we talk about children’s rights, this means exactly the same thing in all situations. Does the economic or the cultural context play any role here? So what we did was bring together the students from Tokyo, an urban area, and the students from Uganda to explore the concept of fairness. We ran studies on storytelling, and we asked children to talk about fairness in different scenarios: everyday scenarios, technology, and robotic scenarios. And now, Joy, would you like to talk a little about your experience participating in our studies?

Joy Nakhayenze:
Yeah, I’m excited, and thank you very much for inviting me. I’m Joy, and I’m an educator from a Ugandan school called Bunamaligudu Samaritan, which is in a rural setting. It has about 200 students in the age bracket of five to 18 years old. Most of these students live close to the school, and their parents are generally peasants. The greatest benefit of being involved in the project has been the exposure for my students; the project enabled them to participate and have hands-on experience that enhanced their understanding of and interest in technology and other cultures. It was the first time for them to talk to children in Japan and other countries, and that really was a great experience for them. Additionally, a great bonus was language learning: the students were able to engage in interactive practice and received feedback on their language skills. You could find that they learned how to express themselves in Swahili and English. We are also very thankful that the sessions were well planned; they really captured our students’ attention and increased their engagement during the activities. In my opinion, the project also really enabled social and emotional learning: the development of social skills and emotional intelligence, and feeling compassion for their peers in Japan. They really enjoyed it, and they learned about Japanese culture and the school as a whole.

Vicky Charisi:
Thank you so much, Joy. Would you like to tell us a little about the challenges you faced while participating in our studies? Of course, we didn’t have the opportunity to have a robot at the school there; we are in the very initial phases, where we do ethnography, so that will probably come in the future. But we already had other interactions and discussions with Joy, so would you like to tell us a little about the challenges you faced, even with the simple technology that we used during our project?

Joy Nakhayenze:
Thank you, Vicky. In my opinion, the major obstacle was the limited resources we had at the local level. Gudu Samaritan is a local setup with budget constraints, making it difficult to invest in technology. We also found that the internet connection was not stable, which made it very hard to participate in the online sessions and keep up with the timing. Another issue was curriculum integration: we feel there is a need to engage the Ministry of Education in Uganda to integrate the project, so that there are additional resources, time, and adjustments to teaching methods.

Vicky Charisi:
Thank you, Joy, and what is your vision for the future? What would you like to have for the future in the context of this project?

Joy Nakhayenze:
Thank you. The most important aspect for us is the funding of such projects. First, the government should provide the infrastructure for a stable internet connection for all; this is a basic need for the integration of technology in the school. You find a school like ours where there is no power and no internet connection; we were only using one phone and maybe one laptop, which was very hard. If there were that funding, it would help ease the children’s access to the internet. We also feel we need resources and the necessary materials, like intelligent systems, robots, and computer equipment in the schools. The children saw that the students in Japan had computers, and this way our students would have equal access to information, like what we saw in Japan. For the future, we envision our schools having not only the necessary technology, such as computers and robots for the students, but also trained teachers. We feel AI literacy is important for all students and teachers. We hope that all educators have the opportunity to participate in online workshops and training, to feel confident about technology in their everyday teaching. Vicky, as you understand, our participation in this project was a great opportunity for our students, and we hope this is only the beginning and that we will continue to grow and excel with this exciting project. Thank you very much.

Vicky Charisi:
Thank you, Joy. It has been a great pleasure to work with Joy and her school, and thank you very much for your intervention today. Thank you. Great. So now we can… I don’t know if Judith is around. Judith, you’re here. Great. So I would like to invite Judith. As Stephen said beforehand, our project is one of the eight case studies where we tried to implement some of the guidelines from UNICEF. Today we also want to get a taste of another case study. So, Judith, I need to read your short bio, because it’s super rich. Welcome to the session, first of all. Judith is a technology evangelist and business psychologist with experience working in Africa, Asia, and Europe. In 2016, she set up Imisi3D, a creation lab in Lagos focused on building the African ecosystem for extended reality technologies. She’s a fellow of the World Economic Forum, and she’s affiliated with the Harvard Graduate School of Education. So, the floor is yours, Judith.

Judith Okonkwo:
Thank you very much, Vicky. Good afternoon, everybody. What a pleasure it is to be here with you all today. I want to tell you briefly about my engagements with UNICEF as part of the pilot for working with the guidance on the use of AI with children, which has been really pivotal for us. But before I start, I want to give you some context about the work that I do. I run Imisi3D. We describe ourselves as an XR creation lab, and we are headquartered in Lagos, Nigeria. Our work is to do whatever we can to grow the ecosystem for the extended reality technologies, so augmented, virtual, and mixed reality, across the African continent. In service of that, we focus our activities in three main ways. The first I describe as evangelization: we do whatever we can to give people their first touch and feel of the technologies, give them access, and help them understand the possibilities today. The second focus area for us is to support the growth of an XR community of professionals across the African continent. We believe that if we are to reap the benefits of these technologies, then we must have people with the skills and knowledge who can adopt and adapt these technologies for our purposes. The third aspect is committing our time and resources to areas in which we think there’s room for immediate, significant impact with these technologies for society today. In service of that, we do work in healthcare, education, storytelling, and digital conservation. And that healthcare piece is what brings me here today for this brief talk. A number of years ago in Nigeria, with a partner company called AskTalks.com, we conceived of a project called Autism VR, and I’ll give you a bit of background as to why. Nigeria, if you’re familiar with it, is a country of 200-plus million people, and it’s a country that I would say is severely under-resourced when it comes to mental healthcare.
I don’t want to go into the numbers in terms of providers to the population, but it is really, really worrying. There is also stigma attached to mental healthcare in the country. So you can imagine the situation for children who might be neurodiverse and the ways in which they are often excluded from society. So with AskTalks.com, we conceived a game called Autism VR. It’s a voice-driven virtual reality game that does two things. First, it provides basic information about autism spectrum disorder. The second element is that, after providing that information, you then have the opportunity, through voice interaction, to engage with a family that has a child on the spectrum and see if you can put some of the things you’ve learned into practice. That’s the idea, and we’re still developing it. We had been working on that for about a year or two when we were very fortunate to be introduced to Steve and his incredible team and the guidance on the use of AI for children. Prior to this, we had spent a lot of our time believing we were following a human-centered design approach to our product development, with all of these commendable considerations: we wanted to increase awareness, foster inclusion, and support children who were neurodiverse. But the guidance really helped us shift our perspective from being broadly human-centered to being specifically child-centered in our design approach. For it, we focused on three main indicators from the guidance. We wanted to prioritize fairness and non-discrimination. The way that typically shows up in a country like Nigeria is just exclusion, right? For children who are neurodiverse, or children whom the general public would have to work a little harder to understand or engage with, right?
We wanted to foster inclusion; we wanted more people to have the knowledge to understand that behavior they might see is not behavior they should just consider off the scale and not worth engaging with. And we really wanted to do all we can to support the well-being and positive development of children who are on the spectrum, and we believe that by creating awareness, we can do this. There will be an image up on the screen in a minute; it’s a screen grab from an early version of the game, so know that it has improved. But I’ll tell you a little about what the experience is like. In the first scene, there’s a woman called Asabe, who is in the front room of a typical house in Lagos. You go into the room and engage her, and she starts to talk to you and provides information about autism spectrum disorder. She gives you general basic information and checks your understanding every few sentences; you respond and let her know whether you understood or not, and if you don’t, she’ll go back. When you’re done with that, she says, please go ahead and visit your family friends. The idea is that you then go through another door into a typical living room, the kind you would find in Nigeria. When you get into that room, there’s a family; you’re greeted by the parents, and they welcome you and say, here’s our son, Tinday. See if you can get him to come and greet you; we’ll go and get you some refreshments. Then they exit the room, and you get to attempt to engage with their son.
And the idea is that if you’re able to do that, using the tools and tips you’ve gotten from the previous scene, then eventually Tinday will not just engage with you by establishing eye contact, but will actually stand up, come to you, and say, you know, good afternoon, auntie, or good afternoon, uncle, as the case may be. When we started building this game, we were building it for the Oculus Rift, which lets you know just how long ago that was. But the idea right now is to build for the Google Cardboard; I have one here. And that’s really because this is a game that, first of all, will be an open-source product, but it’s really being built for the people, and built to ensure that more people have an understanding of what autism spectrum disorders are, what neurodivergence is, and are able to engage with it. It’s been challenging building for the Cardboard, but we also know that if we want it to scale in a place like Nigeria, where there isn’t ready access to virtual reality headsets, then that’s definitely the way to go. Should I?

Vicky Charisi:
Okay, thank you so much, Judith. We had a small practical problem, but we are going to show it afterwards, because we have a description, yeah. But thank you so much for the description for your talk. Thank you.

Audience:
Thank you.

Vicky Charisi:
Now for our keynote speaker. Daniela, over to you.

Daniela:
Hello. Hi, everyone. It’s my pleasure to introduce Dominic Regester, Director of Education for the Center for Education Transformation at Salzburg Global Seminar, where he’s responsible for designing, developing, and implementing programs on the futures of education, with a particular focus on social and emotional learning, educational leadership, regenerative education, and education transformation. He works on a broad range of projects across education policy, practice, transformation, and international development, including as a director of a model alliance and as a senior editor for Diplomatic Courier, to mention a few. Thank you, Dominic.

Dominic Regester:
Thanks, Daniela. Good morning, Vicky. Hi, everybody. Thank you for the invitation to speak with you all. Is the audio okay? Can you hear me okay?

Vicky Charisi:
Yes, we can hear you okay. Great. Yeah, yeah.

Dominic Regester:
Thank you. As Daniela said, I’m the director of the Center for Education Transformation, which is part of Salzburg Global Seminar, a small NGO based in Salzburg, Austria, that was founded just after the Second World War as part of a European, or transatlantic, peace-building initiative. I wanted to talk a little about the global education landscape at the moment and about why there is such a compelling case for education transformation. The beginnings of this really predate COVID. There was an increasing understanding that the vast majority of education systems had gone into what is being described as a learning crisis: students around the world, particularly in K-12 education, were not meeting literacy and numeracy levels, and school systems weren’t equipping students with the kinds of skills that would be needed to address key concerns of the 21st century. There was also a growing realisation that education systems had in many ways perpetuated some of the big social injustices that we’ve been dealing with for the last few years. Then COVID happened. As schools were locked down, at one point in 2020 something like 95% of the world’s school-aged children were not in school. One of the things that COVID did for global education systems was to shine a light on the massive inequalities that do exist within and between systems. And as there was greater understanding of these inequalities, as parents were much closer to the process of learning and could see what their children needed to do, it helped catalyze a really interesting debate, still playing out at the moment, as to whether we were using the time we had children in school in the most productive ways.
So you put the inequalities from COVID alongside the big social justice movements, like Black Lives Matter or Me Too, looking at gender equality or racial justice, alongside the climate crisis and the way in which it is impacting more and more people’s lives, but in a very unequal manner. All of this catalyzed this great process of education transformation. Last September, September 2022, UNESCO and other UN agencies, UNICEF included, hosted what was called the Transforming Education Summit in New York, the largest education convening in about 40 years. The purpose of the summit was to help share great practice in innovation and also to catalyze a process of education transformation, because there was a realization that education systems may have been contributing, or had been contributing, to these different challenges that now needed to be addressed: issues of inequality, the learning crisis, social justice. 141 UN member states have now started a process of education transformation and have developed plans and approaches for what it is they want to transform. After the summit, an amazing organization called the Center for Global Development did an analysis of the key themes coming through from the transformation plans, based on a keyword analysis of what had been submitted, the proposals for different systems to transform. The top issue, by a very long way, is teaching and learning. The second most important issue was teachers and teacher retention, which is not that surprising: globally, a third of teachers leave the profession every 12 years at the moment. The third issue was technology, but when we dived into the technology, it isn’t particularly about AI; it’s more about device deployment and access to the internet.
Then there were employment skills, issues of inclusion, issues of access, and the climate crisis; those were most of the top 10. And these are the issues that were coming from ministers of education, from national education systems. As you will all know, there are an enormous number of civil society organizations around the world that support education and education reform and transformation. So alongside the analysis of the keywords coming up in the transforming-education policies and approaches, there is also a parallel analysis of what civil society’s priorities are for transforming education. Some of the key things coming up from civil society organizations are around intergenerational collaboration in education transformation; how systems can pivot to being more collaborative and less competitive, both within and between systems; a very strong focus on social-emotional learning, psychosocial support, and the mental health and well-being of teachers and students; and then this idea of how transformed systems can contribute to more inclusive futures, or address some of these longstanding structural social injustices that have existed for many, many decades. The reason for mentioning all of this context on the global transforming-education movement, which is about a year in now, is really to pose the question: is AI addressing these things in the right way? Are the tech sector and the people developing AI applications for education responding to the key concerns coming from the education profession?
I think there is a very acute concern that as more systems spend more resources on the application of AI in education, it is also going to increase a digital divide, which is already very clear, between education systems, and between students who have access to AI, are skilled in using it, and understand how to use it, and those who don’t. I usually live in Salzburg, in Austria, but I’m in London at the moment because I’ve been speaking at something called the Wellbeing Forum. The theme of the Wellbeing Forum this year was human well-being in the age of AI. The conference happened all day yesterday; it’s a meeting of business, of education, of health professionals, of religious and other spiritual leaders, and of tech entrepreneurs. One of the key things that came through yesterday was the high degree of anxiety that all these representatives of different sectors have about AI, and the risk AI can pose to ways of life. One of the most interesting quotes from yesterday, which I wanted to share with you before I come to the end of what I wanted to say, was: in the rush to be modern, are we missing the chance to be meaningful? As people lean more and more into the possibilities of AI, are we also losing the chance to focus on things that are really important in our societies or in our education systems? So, what I really hope this short talk has been able to do is share some of the key themes and trends taking place in education transformation around the world. I would really encourage you all, if you have the chance to engage with teachers or with education leaders, system leaders or institution leaders, to take the time to listen to the key concerns within the sector at the moment, and to ask how AI can be applied to addressing some of these concerns.
And what can that do to address the anxiety that exists in global systems around the digital divide, the lack of understanding of AI, or the risk that it is going to exacerbate inequalities within and between systems? So, thank you very much for the chance to speak with you all today, and I wish you all a very successful rest of the conference.

Vicky Charisi:
Thank you so much, Dominic. Thank you. I hope you will stay a little longer with us, because we have a Q&A afterwards. Is this okay with you? Yes, it’s fine. Okay, thank you. So now it’s a great pleasure to introduce Professor Dr. Bernhard Sendhoff, Chief Executive Officer of the global network of Honda Research Institutes and leader of the Executive Council formed by the three research institutes in Europe, Japan, and the US. The floor is yours.

Bernhard Sendhoff:
Great, thank you very much, Vicky. Thank you, Stephen. Thank you, Randy, for organizing this wonderful workshop and for inviting me to say a few words about what brought a company like Honda into the domain of AI for children, what we find so exciting about this, and how we want to go about it in the future. The Honda Research Institutes are the advanced research arm of the Honda Corporation, and our mission is really twofold. On the one hand, we want to enrich our partners with innovations that lead to new products, services, and experiences. At the same time, we also really do science: we want to create knowledge for a society that flourishes. These are the two legs we stand on: the scientific effort on the one hand, and bringing that scientific effort into innovations on the other. Our founder, Soichiro Honda, was very much about dreams of the future, and we think about the future. When I talk to young researchers, I often say: it’s a privilege that we have in creating the future, but it’s also a responsibility, and when you judge your own work, just ask yourself: is the future you are creating the future you want your children to live in? This already connects us a little with the role of children in our research, because when we researchers create future innovations, it’s really about the innovations our children will be using. At the same time, and Stephen mentioned it, we have seen tremendous success in AI and many other technologies in the last decade. However, we have to honestly say, if you just switch on the news for a couple of minutes, we haven’t been particularly successful in making society a lot more peaceful or a lot happier with this technology. And one of the issues we looked at was the alarming rise of social fragmentation.
You see this in almost all societies, and we believe the only way to address it is to focus much more on togetherness in societies. And togetherness, of course, starts with the children. It’s our children who can learn how to respect differences across cultures and how to enjoy diversity, working towards something that is maybe a very long-term dream: something like global citizenship. So we started thinking about how we can use AI innovations to empower children to understand more about each other. We called it Target CMC, and Randy already talked a little about how, together with great work from Vicky and others, we have been able to bring this to life and use embodied AI technology, the tabletop robot Haru that we developed at the Honda Research Institute Japan, to mediate between different cultures in different schools in Australia and Japan. That was our first target scenario. But as you can see on the list here, we envision expanding this quite substantially, and I highlighted two extensions in particular on the slide. One is going into developing countries like Uganda, where of course the cultural experiences, and we heard the wonderful ceremony earlier about the cultural experience, are again much more different than, for example, between Australia and Japan. Another extension is into Ukraine, which has been a war zone for a couple of years now. There, of course, the environmental conditions for children and for their education again pose some very specific challenges, and I think this is where mediation and fostering understanding of each other can really play a large role. Ruyuma gave a very nice statement about his experience with Haru, and when he talked a little about some of the technological challenges we still have, I thought to myself, well, this can actually also be something nice, right?
Because there’s nothing as nice as two people joking about the technological shortcomings of a robot, and there’s nothing like connecting in this way, even across different cultures and maybe different continents. Right from the start, the guidance that UNICEF produced, and I really think they did great work on this, was really a guide for us when we thought about how we have to take special care with AI in the context of children. I used two keywords here: protect and support, because I think both of them really go hand in hand. It’s very clear that children need specific protection; I think we see this in much of the data, and it was mentioned that there is of course also an increasing incidence of mental health conditions, for a number of reasons. So we need to take special care. But on the other hand, there is also great support that we can put in children’s hands, and this is equally backed up by the data. Children and young adults all around the world use new technology, and I have no doubt they will also use the most recent advances in AI very successfully to increase things like connectivity and their own creativity. So both protect and support really go hand in hand. And I think sometimes a lot of people talk about the technology without listening to those who are often its earliest adopters, and those are the young adults and the children. So I think it’s actually quite good for us to listen more to the people who are actually using these things first. I already mentioned that one of our starting points was mediation with embodied AI technology in an educational context. However, at the same time we also started another very exciting project on using AI technology in a hospital environment.
Generally we are interested in supporting children in vulnerable situations. The hospital environment is one; conflict, disaster, flight and displacement, for example, are others, and they share many common characteristics. In all three situations, the needs of children are very often inadequately addressed. The reasons are not always the same, but the fact stands out for all three areas. Children, I think that’s very clear, need child-specific explanation and reassurance, something that is not always possible in all of those three situations. They often even need support in expressing their feelings, and there are some very exciting projects really focused on helping children tell others how they feel about things. And they still need to be children, even in difficult situations like a disaster or displacement, and often they need additional trustees, because parents, who are of course the natural trustees for a child, are often part of that difficult environment, right? Parents are there in the disaster or flight situation; they are part of the hospital environment. Children feel that their parents don’t feel well when the children are ill. So that puts parents in a situation that doesn’t give them the ability to be a neutral trustee. We have started some very first exciting experiments with our very, very valued partner in a Spanish hospital, a cancer hospital in Sevilla, and we are expanding these. We are in discussions on how we can use HARO in the many different contexts that are possible there, and also on expanding this to a second partner. Now I would like to come back to my first slide. I mentioned that social fragmentation is a huge issue for us. Togetherness is maybe one way to approach this, and togetherness really starts in our society with the children. And we at HRI believe we have a unique expertise on the interplay between embodiment, empathic behavior, and curated social interaction.
You know, we have seen a very exciting development in the area of generative AI. Stephen mentioned that earlier. At the same time, in particular in interaction with children, I think there are also severe limitations that those systems have. And again, this places us before the challenge of curated interaction. We want to continue to engage with our partners to make the expertise and the advances in AI, with the benefit of comforting and connecting embodiments, available to children in a number of different situations. And we want to do this explicitly with a special focus on developing countries, because there, of course, the challenges are again slightly different. However, these are very young continents, right? Africa is a very young continent. So when we talk about the future and the future education and support of our children, it has to be done in context with those countries as well, of course, and they rightfully expect this. And one last thought: I think we have seen, in the recent progress in generative AI systems, how we build those systems, and there is a huge discussion on whether this will be able to continue in this way. We believe that future AI systems also have to learn in interaction with human society, in order to share some of our human values as they develop. At the moment, we throw a lot of data at those systems; rightfully, we would never do this with our children, right? We very carefully curate how our children are educated. And we believe that in the future, children and AI systems will actually mutually benefit from each other, because they will have the possibility of learning alongside each other in a bi-directional way, learning values just as we teach our children the values of the society they grow up in.
Now, at the Honda Research Institutes, of course, we don’t only focus on AI and children. We have actually identified the United Nations Sustainable Development Goals as guiding stars for our development of innovations, for putting AI and embodied AI technology into innovations, for turning “innovate through science”, our HRI motto, into something that has a tangible benefit, in particular in the context of the Sustainable Development Goals. And with that, I would like to again thank the organizers very much for giving me the opportunity to briefly talk about HRI here, and thank you all for listening. Thank you very much.

Vicky Charisi:
Thank you. So we have some time for questions. I would like to invite the speakers that are here to have a seat here: Stephen, Randy, Judith. And we also have our online speakers. Now it’s time for questions. Is there any question from the audience? Selma?

Audience:
Hi, I am Emil Wilson. I’m Guilherme, I’m from the UFI program in Brazil. I am a researcher and a young person who advocates for children’s rights in Brazil in a UNICEF project. And that is why, for me, the institutions’ proposals are always very important. However, as was briefly pointed out at the beginning of the panel, there is an interaction between AI and mental health, and tools such as ICPA and Lucia have been used, for example, on Telegram as a possibility for mental health support, which can intensify the risks for children and adolescents online. My question is, then: how can UNICEF help in the debate about AI, children and mental health? Thank you. Sorry for my English.

Vicky Charisi:
Thank you very much. Steven, would you like to start with this since it was about UNICEF? Thank you.

Steven:
Thank you very much for that question. This is an area that’s crucially important for us, but not just for UNICEF; for anybody working in the space of how children interact with technology, and especially in the context of mental health and mental health support. And nobody has all the answers right now. What we know is that there’s a massive mental health need. There is the potential for technology to support, and there is a potential for technology to also get it wrong, which could have very severe effects: if it gives the wrong advice or inappropriate advice, or potentially shares information that was given in a very confidential environment. And so it’s a very, very sensitive space. I think we all need to get involved here. We need the children. We need, of course, the technology developers. We need, as Bernhard said, a responsible development approach. And this is not an area that we should rush into, for sure. But yeah, we need to watch it. It’s going to happen. If we get it right, there is huge potential for providing support. And as I said earlier, with what’s really happened with ChatGPT, everyone talks about that as the one thing. And of course, foundational models are not new, and there are other models, not just ChatGPT. But that’s the one that has kind of become the placeholder for this whole new moment; a cultural moment, not just a technological one, as the speaker said earlier. AI used to be in the background: the algorithm, your news feed, the bunny ears on your Instagram photo, your Snap photo. It’s now something you interact with. And we just don’t know what the long-term effects are. This is why we also need solid research around the impacts on children, and on all of us, as we interact with AI. But of course, we focus on children, for the opportunities and also the potential risks.

Vicky Charisi:
Thank you very much. Judith, you also do work with mental health. Would you like to say something?

Judith Okonkwo:
Sure. Thank you very much. I was just nodding as Stephen was talking, because everything he said completely resonated. One thing I would like to say is that right now in the world, all of these initiatives are happening where people are thinking about things like governance for AI and governance for the metaverse. I just really think that we have to prioritize including young people in those conversations. UNICEF of course does that brilliantly, but so many more organizations need to. Every time I’m in a room where those conversations are being had and the youngest people look like me, I know we have a problem. So whatever we can do to make sure that young people are in all the rooms they need to be in, we definitely should. And then I just wanted to say, you were talking about getting it wrong, and I don’t know if people saw, but the BBC was reporting recently about a young man who had been arrested on the grounds of Windsor Castle for trying to kill the Queen, and he had been egged on by his AI assistant to go and do it. So already we are seeing that we don’t quite know where we’re going with these technologies, but we definitely have to come together to figure out what future we want for ourselves.

Vicky Charisi:
Thank you very much. First I would like to do a small rearrangement, so you belong there, please; it’s about children. Randy, would you mind going to sit there? Is it okay? Okay. Thank you very much, and apologies for the interruption. Any other question?

Audience:
Selma, yeah. Hi, I’m Selma Shabanovich from Indiana University. It’s such a pleasure to see the diversity of projects and the different kinds of thoughts that really all focus on children and their presence in the work. One thing I was curious about: Steve, you started by saying you had developed these guidelines and you knew they weren’t the end, and then you had so many different really interesting things going on. So I was just wondering if both you and the folks who participated in the projects could speak a little bit to either how the guidelines were present and helped them in the projects, and/or how they see their projects as expanding on or further defining aspects of the guidelines that maybe weren’t already in there. Thank you.

Steven:
Thanks, Selma, that’s a really great question. So, and I should have mentioned this earlier, I’m sorry, the guidance has been published and the eight case studies are online on the UNICEF page. I would really encourage everyone to look at each one, because we wanted a diversity of projects from different locations but also different contexts. For example, one of the projects, in Finland, provides mental health information, not support, but a place where children can find information as a kind of first port of call for initial questions around potential symptoms; that first line of informational support, not therapeutic support. That was one of the case studies, and it is still an ongoing project by the Medical University of Helsinki. That was interesting because, as a hospital in a technologically developed and government-supported nation, they had many ethicists on the team that developed the product. So not only software developers but ethnographers, researchers, an ethics team, doctors, psychologists, and obviously they did a lot of testing with the children. Then there’s MEC3D, also mental health support, but not necessarily for the child but actually for the people around the child. And then, for example, we did one with the Alan Turing Institute in the UK that was a really nice example of how you engage the public on developing public policy on AI. And while the case studies have kind of finished, they’ve actually gone on; the work continues.
So the Alan Turing Institute has been asked by the government of Scotland to engage children in Scotland on AI: what excites them about AI, what worries them, and what kind of future they want. And the Alan Turing Institute’s initial reports and methodology and everything are online. It’s a really rich resource, and it will inform policymakers as they regulate. So it was interesting: for us, in the end, after the eight case studies, the guidance didn’t really change so much, which was kind of a relief. We thought, wow, we seem to have got it roughly right the first time. But it might also just be because the guidance is almost at the level of principles, and we do that because we’re a global organization, so you have to be quite high-level or generic, and then it gets adapted to the local context. The unfortunate thing is that everybody wants the details. How do you adapt it? And that’s the challenge: how do you move from principles to practice? But that’s where, in the end, we said the guidance hasn’t changed that much, but it’s been enriched by these case studies. If you want to learn how different organizations have applied them, then go and read them. I’ll just say one more thing. There are nine principles, or requirements, for child-centered AI in the guidance; for example, the inclusion of children in developing AI systems and policies. We found, in the end, that all of the case studies only picked two or three. And we realized that that’s actually fine. In your project or in your initiative, there are two or three that will speak more to you than others. If it’s participatory design and the inclusion of children, that’s one thing; or fairness, or discrimination. Collectively, they really unpacked all nine, but in the end only a few tend to be the focus for your work.
Yeah, so everything’s online. We are, of course, just thinking about whether there’s a need to update them, or add to them, now in the light of generative AI. And as I said earlier, there are a lot more unknowns now. We don’t know how the human-computer interaction will evolve over time. And we want to make it work in a way that upholds rights and is responsible. But we are all, everybody, kind of building the plane, or fixing the plane, while it’s in the air. So we are very keen to do more work in this space in light of ongoing developments. Yeah.

Vicky Charisi:
Thank you very much, Stephen. Is there any other question from the audience? Yes, please.

Audience:
Hi, this is Edmond Chung from .Asia. We also operate the .Kids domain, and what is being done here is great. It’s definitely something that .Kids would like to take on and also help promote. But asking personally, I wanted to ask, I guess it’s Ruyama, or Ryuma: one of your last comments gave me a little bit of a concern. Your last comment was that maybe the evaluation or the assessment can be more fair with AI. Of course it could be, but it could also be less fair. And that’s part of the discussion; that’s the heart of the discussion. So what if it’s not fair? And that brings me to a second question that I wanted to ask as well. I think it was mentioned that the Uganda project was focused on fairness and exploring fairness, but I didn’t quite understand from Joy what was being discussed and how AI was part of it. Would it be useful to get more of that? Because actually, as a father of an eight- and ten-year-old, I’m quite pleasantly surprised that my ten-year-old, just now in year seven, told me this September that their teachers are actually getting them to use AI to help them with homework, as part of the curriculum. So it’s really exciting for me. But also, we know that technology is not entirely neutral, especially when we talk about these things; it’s a symbiotic relationship. As much as we shape them, they shape us, especially kids going forward. So that’s why I really wanted to hear from the experience: you had an ending remark about fairness, so how do AI and fairness really work, and what is the response from the case studies? Thank you.

Vicky Charisi:
Thank you. Do you mind if I take that question? Because I did the study with the kids in Uganda on fairness. Is it okay with you? So indeed, the talk by Joy was focused on something else, not on this specific study. Of course, we have published; there is a scientific publication on this, and we can share the links later. The main research question for this study was to understand if there are cultural differences in perceived fairness. So we wanted to see whether children in these two environments, with the cultural but also the economic differences they had, would focus on different aspects of fairness. So what we did: we provided different scenarios. The whole activity was based on storytelling frameworks, and we let the kids talk about these scenarios in their own words, their own drawings, et cetera. Then some researchers analyzed these data in a systematic way, and what we found was that children in Uganda indeed focused more on aspects of fairness that have to do with material aspects, so they would talk more about how, for example, something was shared among children, et cetera, while the children in Japan would focus more on psychological effects. For example, they would talk about behaviors of teachers. This is just an example to show how the priorities differ: when we abstract, the actual notion of fairness doesn’t really differ a lot, but when we go into details, we see that children in these different cultures prioritize in different ways. So those were the results of our study. Of course, this was only the starting point, and there is a lot to explore, and it is not only us; there is a huge community of developmental social psychologists that explores this topic. So the first question, do you want to repeat the first question?

Audience:
Yeah, I guess, just wanna ask, you mentioned at the very end that, if I understood you correctly, you’re saying that assessment, maybe, of your work through AI might be more fair. Tell us more, a little bit more about it. What if it’s not fair? How do you know it’s not fair? What if you trust the machine too much?

Vicky Charisi:
Is there someone, Judith, who would like to speak?

Ruyuma Yasutake:
I would like to speak first. I think some school teachers have an individual sense of evaluation. What do you say? Not equal? Not equal. Teachers’ sense of evaluation? The way of judgment? The way of judgment is not equal. So, I guess, AIs can evaluate fairly.

Vicky Charisi:
Yeah, I mean, apparently there are some hopes here, right? Nobody believes that there is an absolutely fair evaluation with AI; this is true. But probably, for young students, there is a hope. When they see their systems or their schools evaluating in different ways, and they experience a little bit of human unfairness, they probably put some hope in AI. But, of course, this is something that we really, really need to take very seriously. Yes, please.

Audience:
Hi, my name is Zanyue, from South Africa and Zambia. And this is more of a comment, just listening to the discourse. There’s a concept that we use quite often in South Africa, and I think it’s quite pertinent here: progressively realising, right? So when we speak about AI, especially at the stage that we are at globally, your question is quite important. What is fairness? What are the assessments? What’s the criteria? And as you quite correctly put it, in different geographies and instances, even in the same locality, based on various factors, that concept of fairness really is so subjective. And I think what AI does is give an almost objective element to these very subjective things, and you tweak it accordingly. I think the question on fairness really does veer off to the algorithmic biases that we speak about. That, I think, is also very pertinent for this conversation: the more data we have, and the more data we have from this context and that context, the more we develop, right? So I think the answer to the fairness question is that we are progressively trying to realise it, and we’re at a really infant stage when it comes to that; hence the data conversation is quite important to pair with this one. So, yeah, that’s just maybe a summary.

Vicky Charisi:
Thank you very much for the intervention, indeed. I’m afraid we’re running a little bit out of time. So now I would like to give the floor to our online moderator, who is also our reporter. So Daniela Di Paola… Can we have Daniela on the screen, please? …is going to give us her view of the conclusions of this workshop. Daniela? Yeah, please.

Daniela:
Hello, everyone. Thank you all for your wonderful comments and productive discussion; I really think that the different perspectives added a lot to the conversation. I’m going to share two key takeaways and two call-to-action points. The first takeaway is that, despite the challenges in terms of infrastructure in our activities for AI and children’s rights, children from underrepresented countries and cultures should be included. It’s urgent that, in technology being developed for children, we consider the needs and interests of all children, not only those from privileged backgrounds. Secondly, the project is only the first step of responsible design of robots for children, and various communities can contribute to its expansion, such as by adding to it rights to explainability, accountability, and AI literacy for all. Formal education can prove powerful, and industry experience with responsible innovation can be a catalyst for the well-being of all children. Then I’d like to share the call-to-action points. The first is the expansion of the implementation of the policy guidance to additional contexts, such as hospitalized children, triadic interactions, and formal education with the inclusion of schools, as well as the inclusion of underrepresented groups of people, such as those from the global South. Secondly, there’s a call for the necessary infrastructure and technology development that will give all children equal opportunities in an online world. We need to ensure that AI opportunities come together with responsible and ethical robot designs. Thank you.

Vicky Charisi:
Daniela, thank you so much. It was really good. And I think it’s time to close, Stephen. So the floor is yours.

Steven:
Yeah, okay. So firstly, thank you very much. One of the key takeaways is that this is the beginning of a journey. So we were very happy to share with you what UNICEF has done and what our partners have done here, and many others that weren’t mentioned, as we try to work out how children can safely, in a supported way, and in an empowering way, engage with AI. The reality is that while we sit here and debate these important issues, children are using AI out there, and it’s going to go up more and more every day. So it is urgent. Everybody needs to get involved. Thank you for raising the data issue; it’s really critical. And to Daniela’s point, we have this challenge of data, where the data sets are not complete. They’re much more global-north. We need data from children in the majority world, a term that’s being used a lot here, and the global South. But we know that data collection at the moment doesn’t often happen very responsibly. And so we need to tick those two boxes at the same time. So the journey is going to continue. Please work with us, and we will work with you. We keep saying this, but it really is critical to work with children and to walk with children on this journey. So Roma, thank you for being here, and thank you for being involved in the project. We recently engaged at work a digital policy specialist from Kenya who could easily have been on this panel. And she was just making this point about Africa being such a young population, and how crazy it is, seeing more and more, how older people like us (sorry, I’m speaking for all of us here) take the liberty of regulating a technology that we don’t really understand, one that’s so much used by a generation that is going to be so much more impacted by it, and we’re not having them at the table. So that was a really well-put point.
So for all of us here who do bring children to the table, well done, and please may it continue. So thank you. Thanks, Vicky.

Vicky Charisi:
Thank you very much, and thank you to all for the support. Thank you for being in this session, and I hope we can continue this work on AI and children’s rights. Thank you.


Speaker statistics (speech speed, length, time)

Audience: 150 words per minute, 1074 words, 428 secs
Bernhard Sendhoff: 148 words per minute, 1880 words, 763 secs
Daniela: 157 words per minute, 380 words, 145 secs
Dominic Regester: 161 words per minute, 1556 words, 581 secs
Joy Nakhayenze: 170 words per minute, 772 words, 272 secs
Judith Okonkwo: 184 words per minute, 1577 words, 514 secs
Randy Gomez: 134 words per minute, 393 words, 177 secs
Ruyuma Yasutake: 106 words per minute, 324 words, 183 secs
Steven: 171 words per minute, 2592 words, 910 secs
UNKNOWN: 60 words per minute, 1 word, 1 sec
Vicky Charisi: 150 words per minute, 2340 words, 938 secs

Accessible e-learning experience for PWDs-Best Practices | IGF 2023 WS #350


Full session report

Swaran Ravindra

The analysis highlights several issues regarding disability rights and inclusivity. It points out that there is no national policy for disability in Tobago, and in Fiji, the 2018 Act does not specifically outline what provisions should be in place for persons with disabilities or how to implement them. One area that is particularly neglected in the Pacific is accessible websites, which are considered necessary provisions for persons with disabilities. This lack of explicit provisions for the rights and accessibility of persons with disabilities in national policies and legislation is seen as a negative sentiment.

On the other hand, there is a positive sentiment towards inclusion as a basic fundamental human right. Swaran, a speaker in the analysis, emphasizes the importance of inclusion in her speeches and believes that all citizens should have access to various services regardless of their disabilities. She also advocates for the use of existing legal instruments such as the ‘Education Act’ to support disability rights in the absence of specific national policies. This perspective reflects a belief in the positive impact that inclusion can have on reducing inequalities.

Consistent support systems for persons with disabilities are called for, even in the absence of a national policy for disability. This notion is seen as a positive sentiment, highlighting the significance of providing continuous support to individuals with disabilities.

The analysis also acknowledges that legislation alone is insufficient to ensure inclusivity. It notes that legislation sometimes contradicts itself, and there is a need to reconcile these gaps between constitutional rights and legislation to ensure inclusivity. This observation is seen as a negative sentiment, pointing out that legislative measures must be comprehensive and consistent to promote inclusivity effectively.

Cultural norms are identified as a factor that can present obstacles to inclusivity. The analysis mentions instances where parents refuse to acknowledge their child’s disability, highlighting the stigma around disabilities that needs to be overcome. This is seen as another negative sentiment, suggesting that cultural attitudes must change to foster inclusivity.

Constitutional rights are noted as a means to protect and promote inclusivity. The analysis provides examples of disabled individuals exercising their right to attend classes, highlighting the potential impact of these rights in promoting inclusivity. This observation brings a positive sentiment to the importance of constitutional rights in advancing inclusivity.

In the context of education, the analysis emphasizes the need for inclusion to be integrated into everyday practice in educational institutions. The mention of AFINI, an ISO certified organization that upholds high standards of inclusivity, and professors creating tertiary level education courses for disabled individuals, reflects a positive sentiment towards the efforts being made to ensure inclusivity in educational settings.

The analysis also touches upon the obstacles towards inclusivity in online learning. It argues that students should not be penalized for the extra time they require to log into the system. This viewpoint is seen as a negative sentiment, highlighting the need for fair assessment practices in online learning.

Regarding authentication methods, the analysis acknowledges the existence of more secure methods such as thumbprint scans, retina scans, and face recognition. It argues that these methods are easier for users, reflecting a positive sentiment towards their implementation.

On the other hand, there is a negative sentiment towards the imposition of difficult types of authentication methods, which could act as a deterrent for students to return to class.

The analysis also addresses the important topic of digital inclusion. It suggests the need for affirmative action and proper measurement and assessment tools to address digital inclusion effectively. It mentions the use of disparity measurement, the implementation of the WCAG 1.0 standard, and UNESCO’s ROAM-X indicators in Pacific island nations. This highlights the positive sentiment towards affirmative action and the adoption of proper tools to achieve digital inclusion.
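Part of WCAG conformance is machine-checkable, which is one way the "proper measurement and assessment tools" mentioned above can be made concrete. As an illustrative sketch only (not a tool described in the session, and the class name is hypothetical), a minimal Python check for one long-standing WCAG requirement, text alternatives for images, could look like this:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Flags <img> tags that lack an alt attribute entirely (WCAG text-
    alternative requirement). Decorative images may legitimately carry
    alt="", so only a missing attribute is reported."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                # Record the image source (or a placeholder) for the report.
                self.missing_alt.append(attr_map.get("src", "<no src>"))

page = '<img src="logo.png"><img src="chart.png" alt="Sales chart">'
auditor = AltTextAuditor()
auditor.feed(page)
print(auditor.missing_alt)  # prints ['logo.png']
```

An automated pass like this catches only a fraction of WCAG criteria; manual review and testing with assistive-technology users remain necessary, as the analysis itself implies.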

In conclusion, the analysis brings to light various issues related to disability rights and inclusivity. It highlights the lack of explicit provisions in national policies and legislation, but also emphasizes the positive sentiment towards inclusion as a fundamental human right. It underscores the importance of consistent support systems and the impact of cultural norms and legislative gaps on inclusivity. Additionally, it calls for fair assessment practices in online learning and explores the implementation of secure authentication methods. Moreover, the analysis draws attention to the need for affirmative action and proper measurement and assessment tools to address digital inclusion effectively.

Vidya

The accessibility issues in e-learning platforms pose substantial challenges for people with disabilities. These challenges include problems such as unlabeled buttons, inaccessible content, and inaccessible PDFs. Vidya, who has personal experience navigating these platforms, suggests that involving users with disabilities in the development process of e-learning platforms is crucial. This involvement should include providing digital literacy training and ongoing support to ensure that these platforms are genuinely accessible to all.
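The "unlabeled buttons" problem Vidya describes is concrete: a screen reader announces a button by its visible text or an ARIA label, and a button with neither is read out only as "button". As a hedged illustration (not a tool Vidya mentions; the class name is hypothetical, and the check is simplified, e.g. it ignores accessible names contributed by an inner image's alt text), a minimal Python audit for such buttons might look like:

```python
from html.parser import HTMLParser

class ButtonLabelAuditor(HTMLParser):
    """Flags <button> elements with neither text content nor an aria-label,
    which a screen reader would announce only as 'button'."""

    def __init__(self):
        super().__init__()
        self.unlabeled = []      # 1-based positions of unlabeled buttons
        self._in_button = False
        self._has_aria = False
        self._text = ""
        self._count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "button":
            self._in_button = True
            self._count += 1
            self._has_aria = bool(dict(attrs).get("aria-label"))
            self._text = ""

    def handle_data(self, data):
        if self._in_button:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "button":
            if not self._has_aria and not self._text.strip():
                self.unlabeled.append(self._count)
            self._in_button = False

page = ('<button><img src="play.png"></button>'                      # icon-only
        '<button aria-label="Play lesson"><img src="play.png"></button>'
        '<button>Submit</button>')
auditor = ButtonLabelAuditor()
auditor.feed(page)
print(auditor.unlabeled)  # prints [1]: only the first button lacks a label
```

Checks like this are cheap to run in a platform's build pipeline, which is one practical way to act on Vidya's suggestion of involving accessibility concerns throughout development rather than after release.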

Furthermore, STEM education presents additional accessibility challenges for individuals with disabilities. Screen readers often struggle to interpret mathematical equations, and much of the educational content is written from the perspective of someone with sight, making it more difficult for those without sight to understand. This creates a barrier to the effective participation of individuals with disabilities in STEM subjects.
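The screen-reader problem with equations comes down to markup: raw text like "x^2" is often announced as "x superscript 2", whereas structured input lets software produce natural speech. The toy function below (an assumption-laden sketch, not how any real screen reader works; production systems rely on MathML/LaTeX-aware tools) shows the idea:

```python
# Illustrative only: turn 'x^2'-style notation into a spoken-English form,
# the way math-aware assistive tools verbalize structured markup.
import re

ORDINALS = {"2": "squared", "3": "cubed"}

def speak_power(expr: str) -> str:
    """Rewrite base^exponent patterns as spoken English."""
    def repl(m):
        base, exp = m.group(1), m.group(2)
        word = ORDINALS.get(exp)
        return f"{base} {word}" if word else f"{base} to the power of {exp}"
    return re.sub(r"(\w+)\^(\d+)", repl, expr)

print(speak_power("a^2 + b^2 = c^2"))  # "a squared + b squared = c squared"
print(speak_power("2^10"))             # "2 to the power of 10"
```

This is why content authored in accessible formats (LaTeX or MathML rather than images or flat PDFs) matters: the structure is what makes such verbalization possible at all.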

The shift to digital learning during the pandemic was not seamless for many students and teachers, especially those with disabilities. In India, where Vidya is based, teachers and students with disabilities faced difficulties adapting to digital platforms. To help them, Vidya had to create digital literacy tutorials in multiple languages. This highlights the need for greater support and accommodations for individuals with disabilities during times of crisis.

To address the issue of accessibility and inclusivity in education, India is in the process of introducing a National Educational Policy. The aim of this policy is to promote greater inclusion by shifting from special schools and a segregated education system for the visually impaired towards inclusive education. However, the full implementation of this policy is still pending, as it requires time and coordination among different states.

Regarding special education, Vidya emphasizes the need for a central authority to ensure consistency across different states. Currently, policies for special education vary from state to state, resulting in inconsistencies and gaps in support.

While the government has made efforts to make their websites accessible, there is still work to be done in this area. Although progress has been made, there is a need for continued efforts to fully address website accessibility.

In terms of administrative departments responsible for education, accessibility and awareness vary based on the specific department. Education for persons with disabilities is sometimes overseen by the Department of Social Justice or the Department of Education, leading to variations in support and accessibility.

Cultural norms and stigma also act as barriers to digital platform access for disabled people. Vidya highlights the case of a blind woman who has been confined indoors due to cultural norms and stigma. Overcoming these barriers requires not only technological solutions but also the promotion of social acceptance and understanding.

Vidya believes that continuous support and social acceptance are essential for the effective use of e-learning platforms by individuals with disabilities. She stresses that the responsibility lies with the government and organizations to ensure the long-term usability and accessibility of digital tools.

Notably, children with disabilities have the potential to learn and compete effectively with their peers if provided with the necessary support and tools from an early age. Introducing technologies like computers and braille to children at a young age can significantly enhance their learning experience and future educational prospects.

Nonprofit organizations play a vital role in bridging the gap between the government and the ground realities of education for children with disabilities. Their firsthand experience in the field enables them to provide valuable guidance to the government in shaping policies and internet regulations that facilitate access to education for individuals with disabilities.

Finally, collaboration within the internet community can contribute to making education more accessible for children with disabilities. By creating forums where experts can share thoughts, ideas, and network, meaningful progress can be made in addressing accessibility challenges. Collaboration is vital, as the efforts of a single person or organization alone may not be sufficient to solve the complex issues at hand.

In conclusion, the accessibility issues in e-learning platforms pose significant challenges for people with disabilities. It is essential to involve users with disabilities in the development process, provide ongoing support, and ensure digital literacy training to make these platforms truly accessible. STEM education, the shift to digital learning during the pandemic, and the need for a central authority in special education further highlight the importance of addressing accessibility and inclusivity issues. The government, nonprofit organizations, and the internet community all have essential roles to play in making education more accessible to children with disabilities.

Anna

Anna, who works in a children’s rights organisation, puts forward a compelling argument for involving more persons with disabilities in the design of platforms that promote accessibility. She firmly believes that accessibility should be guaranteed right from the design phase, ensuring inclusivity and accessibility for everyone. This argument aligns with the goals of SDG 10: Reduced Inequalities and SDG 4: Quality Education.

Anna’s argument is supported by her first-hand experience in the field, where she has witnessed the positive impact of involving persons with disabilities in the design process. By incorporating their perspectives and insights, the resulting platforms are more likely to meet the needs of people with disabilities and promote equality. Anna’s staunch belief in the rights of every individual to have equal opportunities, regardless of their abilities, drives her passion for ensuring accessibility.

Moreover, the second speaker highlights the crucial role that civil society plays in championing children’s rights, emphasizing how civil society organisations advocate for the rights and well-being of children. Anna, who is from Brazil and works for a children’s rights organisation, supports this view and agrees that civil society has the power to bring about positive change. This argument aligns with the goals of SDG 16: Peace, Justice, and Strong Institutions.

Anna’s endorsement of the role of civil society stems from her experiences in Brazil, where she has witnessed the impact of civil society organisations in advancing children’s rights. These organisations provide crucial support, raise awareness, and advocate for policies that protect and promote the well-being of children. Their efforts contribute to the overarching goal of achieving a more just and equitable society.

In conclusion, both speakers emphasize the significance of promoting accessibility and advocating for children’s rights. Anna’s emphasis on involving persons with disabilities in the design process underscores the importance of inclusivity and equal access for all. Similarly, the second speaker reinforces the vital role of civil society organisations in advocating for the rights of children. By considering the perspectives of both persons with disabilities and civil society, we can strive towards achieving the goals of equality, justice, and strong institutions.

Jacqueline Huggins

During the discussion, the speakers highlighted the importance of implementing policies and providing training to support students with disabilities in accessing educational content. They stressed that ensuring accessibility for these students is crucial for quality education. The need for such policies was emphasized due to the challenges faced by students with disabilities, particularly during the COVID-19 pandemic.

One of the speakers mentioned that their campus had a policy in place that encouraged lecturers to provide accessibility for students. The department also collaborated with visually impaired students to ensure that content was accessible to them. In addition, the campus provided internet access and laptops to students who were in inaccessible areas. The sentiment towards these measures was positive, as they aimed to create an inclusive learning environment.

Another speaker emphasized that training was essential for both lecturers and students to effectively implement and understand accessibility measures. The department worked one-on-one with students to ensure that they were not left behind and that they could navigate and use online platforms effectively. This sentiment towards training was also positive, as it was seen as a means to bridge the gap in accessibility.

However, a negative sentiment emerged when discussing the absence of a national policy to ensure accessibility. In Trinidad and Tobago, there is no national policy in place, which hampers the experience of students with disabilities. The current implementation of accessibility measures relies heavily on the goodwill of individual lecturers. This lack of a national framework was seen as a significant barrier to achieving full accessibility for students.

On a positive note, Jacqueline Huggins, one of the speakers, advocated for the implementation of universal design to benefit all students. She highlighted the importance of meeting with academic staff to discuss how universal design can be executed effectively. She also mentioned outreach and awareness programmes regarding universal design accessibility. Jacqueline’s positive sentiment towards universal design showcased the belief that it can create an inclusive learning environment for all students.

However, Jacqueline also acknowledged the challenges faced in implementing universal design. One such challenge was retrofitting infrastructure to make it accessible for students with disabilities. She also mentioned the difficulties lecturers faced in adapting to online and internet teaching methods. To address these challenges, she was working on a campaign to make all faculty websites accessible. The sentiment towards implementing universal design was mixed, as it was seen as beneficial but also posed practical challenges.

Apart from advocating for universal design, Jacqueline identified herself as a watchdog on campus, ensuring the implementation of accessibility measures and meeting students’ needs. She worked closely with students to understand their needs and liaised with lecturers and the deputy principal to bring about necessary changes. Jacqueline’s role as a watchdog and her positive sentiment towards meeting students’ needs showcased a commitment to inclusivity and accessibility.

The university department was also mentioned in the discussions. It demonstrated proactive support for students with disabilities by addressing their complaints and taking them to relevant authorities. The department worked closely with IT to understand the needs of supporting students and even purchased licenses for JAWS software for students who could not afford it. This collaboration with IT and the consideration of students’ complaints showed a positive sentiment towards addressing accessibility challenges.

Additionally, the department obtained funding to purchase expensive equipment and software, such as JAWS licenses, which were installed in campus libraries and computer labs. This initiative aimed to ensure that students had access to necessary resources for their education. The sentiment towards the department’s efforts in sourcing funding was positive, as it highlighted the university’s responsibility to support disadvantaged students.

The discussions also touched upon the importance of global collaboration in making e-learning more accessible. One of the campuses mentioned was fully online and covered 13 countries in the Caribbean, providing students with the opportunity to obtain their degrees. This global collaboration was seen as beneficial for accessibility in e-learning.

Furthermore, the speakers acknowledged the value of learning from global experiences and implementing best practices. Discussions with individuals from different countries provided diverse perspectives and learning opportunities. The sentiment towards learning from global experiences was positive, as it promoted growth and improvement in accessibility.

The importance of turning discussions and learnings from forums into actionable steps to improve e-learning accessibility was also emphasized. The sentiment towards taking action based on learnings was positive, as it highlighted the need for tangible change.

Overall, the discussions centered around the importance of policies, training, and universal design to support students with disabilities in accessing educational content. The challenges faced during the COVID-19 pandemic highlighted the need for comprehensive accessibility measures. The absence of a national policy was seen as a hindrance to achieving full accessibility. However, the speakers expressed positive sentiment towards the implementation of universal design and the proactive efforts of the university department in addressing accessibility challenges. The importance of global collaboration and learning from diverse perspectives was also emphasized. The discussions ultimately emphasized the continuous commitment to improving accessibility and inclusivity in education.

Lydia

Accessing online learning resources in schools can be a complicated task for students, particularly those with cognitive impairments. The frequent changes in passwords and access methods implemented by IT departments create significant difficulties for students, preventing them from accessing important information and submitting assignments. This issue negatively impacts their educational experience and hampers their ability to fully participate in online learning.

The complications associated with accessing online resources are often not recognised or taken seriously by schools. Many individuals without cognitive impairments perceive these challenges as trivial, leading to a dismissive attitude towards students facing such accessibility issues. This lack of awareness and understanding further exacerbates the problem, as students with cognitive impairments struggle silently, without receiving the support and accommodations they need.

Furthermore, the implementation of frequent password changes and increased security measures poses additional barriers for students with disabilities. These students may face difficulties remembering complex passwords and navigating the heightened security protocols. As a result, they are often chastised for failing to complete their work on time or are forced to seek continuous assistance from IT support. This ongoing cycle of frustration further hampers their educational progress and creates a sense of dependency on technical support.

To address these challenges, it is crucial for schools to be more aware of the accessibility issues faced by students with cognitive impairments. Recognising the complexity and impact of these challenges is the first step towards implementing appropriate accommodations and support systems. Additionally, it is imperative for the IT security measures in schools to be user-friendly and accommodating for all students, including those with disabilities. School administrators and IT departments should work together to ensure that the security measures do not create unnecessary barriers but instead facilitate a seamless and inclusive online learning experience for all students.

In conclusion, accessing online learning resources in schools is not a simple task for students with cognitive impairments. It is essential for schools to recognise, acknowledge, and address these accessibility issues through proactive measures and awareness-raising efforts. By making online resources more accessible and ensuring user-friendly IT security measures, schools can create a supportive and inclusive educational environment for all students, regardless of their cognitive abilities.

Zakari Yama

The discussion revolves around the relationship between universal design and digital accessibility in the context of education. Universal design focuses on catering to a broader range of learners, while digital accessibility primarily addresses the needs of learners with disabilities. The aim is to create an inclusive educational environment that empowers all students to access and engage with the learning materials and activities.

One argument raised is the difficulty institutions face in implementing universal design and ensuring its compatibility with accessibility. The process of applying universal design principles and making them compatible with digital accessibility measures can be challenging for educational institutions. This challenge could potentially hinder the effective implementation of inclusive practices in education.

On the other hand, there is agreement that what is beneficial for individuals with disabilities, such as real-time captioning, can also benefit all students. For example, real-time captioning can assist students without disabilities in understanding an instructor’s accent or when watching videos in a loud environment. This highlights the importance of digital accessibility measures not only for learners with disabilities but for the entire student population. By incorporating digital accessibility features, institutions can enhance the learning experience for all students, regardless of their specific needs.

Furthermore, the stance put forth is that institutions should view accessibility efforts as an opportunity to improve their universal design practices. Instead of perceiving accessibility as a separate and burdensome requirement, institutions should leverage it to enhance the inclusivity and effectiveness of their teaching and learning approaches. By using accessibility as a framework for designing educational materials and environments, institutions can foster a more inclusive and equitable learning experience for all students.

In conclusion, the relationship between universal design and digital accessibility within education is crucial for promoting inclusivity and ensuring equitable access to educational opportunities. While there may be difficulties in implementing universal design and ensuring its compatibility with accessibility, there is a recognition that what benefits individuals with disabilities can also benefit all students. Institutions should embrace accessibility efforts as an opportunity to improve their universal design practices, ultimately creating a more inclusive and effective learning environment.

Gonola

The discussions emphasise the significance of e-learning accessibility for individuals with disabilities. It is crucial for e-learning platforms to be designed with accessibility in mind right from the start to ensure efficiency and cost-effectiveness. This approach prioritises the inclusion of all learners, regardless of their disabilities, and allows them to fully engage in online education.

Legislative frameworks are seen as pivotal in supporting the creation and adaptation of e-learning platforms to include persons with disabilities. To achieve this, strategies should be adopted from academia, the private sector, and government institutes. By pooling resources and expertise from these various sectors, it becomes possible to develop more inclusive online platforms that cater to the diverse needs of disabled individuals.

The principle of universal design receives support in the discussions. It is highlighted that designing e-learning platforms to be universally accessible is of utmost importance. An example is given of universally accessible building entrances, which ensure that individuals of all abilities can enter and use a space without barriers. By applying this principle to e-learning platforms, it is possible to create a more inclusive and accessible online learning experience.

Moreover, the implementation of captioning is seen as a valuable tool for promoting accessibility. The discussions highlight the utility of captioning for various user groups, including individuals with hearing loss and non-native English speakers. While captioning is essential for individuals with hearing loss, it also proves beneficial for those who may struggle with the English language. By providing captions, e-learning platforms can overcome language barriers and make educational content more accessible and comprehensible for all learners.

In conclusion, the discussions emphasise the importance of e-learning accessibility for individuals with disabilities. The need to design accessible platforms from the start, implement legislative frameworks supporting inclusivity, adopt strategies from academia and the private sector, apply the principle of universal design, and provide captioning for increased accessibility are all key points highlighted. By prioritising accessibility in e-learning platforms, we can create a more inclusive and equitable online learning environment for all individuals, regardless of their disabilities.

Session transcript

Gonola:
Good morning, ladies and gentlemen, and for those online, good morning, good afternoon and good evening. This session is on e-learning, and the title is “Accessible E-learning Experience for Persons with Disability: Best Practice”. And we are having a few little technical difficulties, so I apologize for starting late. My name is Gonola Astbrink, and I’m moderating this session, and I am chair of the Internet Society Accessibility Standing Group. Here next to me on site is Vidya Wai, and she will be speaking about her experiences of e-learning in India. We should have online our other speakers: Swaran Ravindra from Fiji National University, who is the organizer of this session, and Zakari Yama, who is a co-organizer of the session. He is from Morocco. And also Vashka Bhattacharjee from Bangladesh, as well as Jackie Huggins, who is joining us from the Caribbean. So while we are waiting for them to join us online, this session is really about how persons with disability can get the best access to e-learning platforms, and the importance of e-learning being available to persons with disability across the world. And how can we make this possible? So it’s going to be a challenge. We’re going to talk about some of the pressing challenges pertaining to technology and accessibility that persons with disabilities face when accessing online content on major e-learning platforms. And we in the Accessibility Standing Group actually have personal experiences of that. We’re going to talk about supportive legislative frameworks and how we can adopt strategies to assist from academia, the private sector, and government institutes, so that there’s much more inclusion when creating online platforms, because we know that if any online service is created accessibly from the start, it is much more effective and efficient and also a lot more cost effective.
So I’m going to pass over to Vidya Y and talk about a little bit of her personal experiences, both in the past as a young blind person navigating the education system and also talking about a current situation with e-learning through the Internet Society. So I’ll pass on over now to Vidya. Thank you.

Vidya:
Hello, everyone. It’s my pleasure to be talking to you today. Thanks to the organizers for having me here. And thanks to Gunela. So about e-learning platforms, I would like to talk a little bit about my own experiences with e-learning and also what I see working with children in India. So I run a nonprofit called Vision Empower. We make STEM education accessible to children with disabilities. So, I will be talking mostly from their perspective and also my own challenges growing up with a disability, specifically on the e-learning platforms. I was born blind, so in the initial few years I didn’t have access to technology as much because of lack of awareness. There were technologies, but I was not using them. I got access to a computer only in grade 11, and since then, as we all know, it has opened up huge opportunities. You know, till then, if I had to send a WhatsApp or any message, if I had to have a written communication with a person who can see, then it would be someone else typing it for me, or I could never have a written communication. So the first time I used e-mail was the first time I got access to written communication. That was the first time someone could read what I had written; otherwise, it had to be in Braille, which most persons who can see do not know. So, we know how huge the impact of the internet is on the life of persons with disabilities. Even if you have to browse something independently, it’s all through the internet, and e-learning is not an exception, because already classrooms are not very accessible, so a lot of things you’ll have to come home and refer to. For example, when I was studying computer science, I would just go to the class and then come back home, and that’s about it. I had to find my own volunteers who could help me after classes.
Now when you talk about e-learning, firstly, there are few challenges, especially in subjects like STEM, you know, a lot of the times, the content itself is not so accessible, like everything. Everything is designed in a way that a person with sight can understand. Now, when you take school textbooks, for example, so a lot of things are like look around, there’s a lot of greenery or this is in the shape of a mountain. So a person who has never seen it, they wouldn’t know what they’re talking about. Content itself is written in a way that persons without sight cannot understand it easily. The second challenge is with issues with regarding when I’m talking about STEM. So you have a lot of, now if you have to read a math equation, it has to be written in a specific format like your LaTeX format and other things which a screen reader can read. But lot of times if you just give a PDF, if you upload PDF onto your LMS platforms, they’re not very easily accessible. It just reads something like if you want to write two square, it reads something, superscript something or subscript something, things like this which you don’t understand. So if it has to read well, you have to write it in a way that is accessible. And thirdly, there are accessibility issues with the web platform itself. Sometimes there are unlabeled buttons, sometimes you cannot navigate, it just says link and you don’t know what’s the link all about. A lot of times what I’ve seen is if you open a PDF file, it just says page one, page two and you don’t know what’s on that page. So a lot of times they’re protected, you cannot download those files, so you cannot read them later. So there are challenges with the content, there are challenges with the accessibility and with STEM it’s even more complicated. How do you put up charts or diagrams which a child or a student can understand? Everything has to be all text and there are a lot of challenges. 
So you know, when we take when these are the challenges that I had navigating on some of these platforms, including when I was doing a course on Internet Society, it was not very easy to navigate. All said and done, these are the challenges that are accessibility specific, but one thing also I wanted to mention is, there’s much more than accessibility. You know, when you take school education system in India, for example, when pandemic happened, a lot of schools seamlessly shifted onto the digital platforms, but it was not the case for children in India and the teachers because you can’t tell them, go to YouTube and refer how to install Zoom, how to use Zoom because everything says click here. So when you don’t use mouse, it’s not of any value to you. So I had to make my digital literacy tutorials in various languages for the teachers and students to use. And also we have our own accessible learning management platform called Subodha. Now, some of the ground realities that I have seen getting the children and teachers onto these platforms are even little bit more than accessibility actually. One thing is making a platform accessible. Second thing is the digital literacy training that you’ll have to give them. Third thing is you have to ensure that there is some mechanism to handhold the teachers or the students or to get new users with disabilities onto the platform. Because with so many challenges, it’s not very easy to be continuously motivated to get onto the platform. And after you get on, they encounter some of the other challenges. There needs to be somebody to handhold them and make it very comfortable. Because even in our accessible platform that we have, teachers wanted some other features, like they wanted phone. So it’s very important to get, they wanted an app. So it’s very important to get their perspectives as well and make changes as they, like as we say, right, nothing about us without us. So we need to involve them in. 
the process of making the platform accessible and handhold them so that they’re comfortable in the usage of these platforms. So these are some of my thoughts that I wanted to share.

Gonola:
Thank you very much, Vidya. There is a lot there to take on, to consider, and from Vidya’s personal experience. I’ll pass on now over to Jacqueline Huggins, who has the experience of supporting students in her university. So please go ahead, Jacquie.

Jacqueline Huggins:
Right. Hi. Well, from here, I’m saying good night. And exactly what was just said by the last speaker. What happens on our campus, though, is that we have a policy, and that policy is what is used to encourage lecturers, academic staff, to do what is right for the student. And our department is almost like a watchdog: when a student who has a visual impairment, who is blind, is registered with the campus, we then work with that student. And we work with lecturers so that they understand why content being accessible is so important. It is something that we always have to sit one-on-one and speak to lecturers about, why it needs to be done. And we have students also speaking with the lecturers: this is what my need is. So the lecturer has a better understanding. We have had the issues where students have to deal with graphs, students have to deal with calculations, and lecturers have to become creative. So sometimes we’re not even able to use the online platform. We have to use lecturer and student talking it through, finding solutions that are not necessarily online. When COVID hit, that is where we really understood the challenges that our students with disabilities face, especially students who are blind and students who are deaf; we recognized the issues that they face. And even though we recognized it, our university management decided that they’re going to provide laptops, because we didn’t realize our students didn’t even have access to laptops, didn’t have access to the internet. But the university came up with a plan where they worked with providers to provide internet access in areas where students did not have it. They also provided loans of laptops so students were able to utilize them. Then again, training was very important: training for some lecturers, training for some students. We just assumed that students were able to navigate, and that was not the case.
So my department had to actually deal one-on-one with students to ensure that they were not left behind. We also had attitudes of some lecturers. So for instance, we had a student who is deaf and the lecturer is using Blackboard and she asks him just to put on captioning and he just refused, I had to intervene. You know, again, although we had a policy, we still depended on the will and the goodwill of lecturers and academic staff to do what needs to be done. I’m not sure if India has a national policy, but Trinidad and Tobago, we don’t have a national policy. In fact, we are now on the stage where we have a draft disability bill and hopefully when that is passed, our students and our… campus and our students anyway would be able to navigate, would be able to be trained, would be able to have the type of access that they need to have. That’s it for me.

Gonola:
Thank you very much. I think we are naturally segueing into policies and legislation and where that fits. Swaran, I will ask you to make some comments about that from your perspective, please.

Swaran Ravindra:
Thank you, Gunilla. Thank you very much, Vidya. Thank you very much, Dr. Huggins. First of all, I wanted to say a big thank you in Fijian; it is also our Independence Day today. And, you know, it sort of resonates with the topic we have today, because I personally, as a citizen of the country, do not feel that we would be able to live a dignified life until each and every person in the country has access to the basic citizen-centric services that every other person has. And I think that the resilience that the people of Trinidad and Tobago have is just amazing. As Dr. Huggins has just mentioned, there is no national policy at the moment. Actually, I met Dr. Huggins three to four years ago, when I was a visitor to the University of the West Indies. That’s when I met this wonderful woman, and I personally learned a lot from her from that one meeting. And one fundamental thing that I learned during that visit was that even though there’s no national policy, we need to have people who are continuously there as a support system. Along with Dr. Huggins, I’ve also met some other people in the university who have told me that though there wasn’t a disability policy, they have used other avenues, other legal instruments that were there, in terms of support for persons with disabilities. For example, the Education Act says education is accessible to everybody, and everybody means everybody; it also includes persons with disabilities. So there are people who firmly believe in inclusion as a basic fundamental human right, and they exercise it through other avenues, not just a disability act. If I were to shed some light onto what happened in Fiji: a bill creating provisions for accessibility and for the rights of persons with disabilities was passed in 2016, and in 2018 the act came into practice.
However, to date, we do not have anything written in legislation that says that persons with disabilities need to have access in every avenue, everything that is supposed to be there for a citizen: public amenities, social media platforms, places where people interact and meet, citizen-centric services, education, and many, many other avenues that most people enjoy seamlessly. So in Fiji, though we have the 2018 Act that says we need to create the provisions, it doesn’t explicitly say what those provisions should be or how to create them. There is nothing written that says you need to ensure that all your websites are accessible. So what I’ve been doing so far is, whenever I get an opportunity to speak to an audience about inclusion, I also talk to them about OH&S, which is Occupational Health and Safety. It is legislation of the country, and no organization can bypass it. So we are talking about having accessible entry points in a building, which is great, which is absolutely important. But at the same time, we are neglecting those people who are not there physically. They also need to have access to amenities; they also need to have access to the websites. An accessible website is still a very new concept in the Pacific. So I think we need to start working in that area.

Gonola:
Thank you very much. There is so much to do. Vidya, could you speak from the Indian perspective on legislation and policies regarding accessibility in education, and has that policy and legislation actually been implemented?

Vidya:
Yes, from the Indian context: the government is now trying to come up with the NEP, the National Education Policy, where they’re trying to make a lot of changes, and inclusion is considered one of the most important areas. Actually, a lot of people now are trying to move to inclusive education rather than having special schools and a special education system for the visually impaired. It’s all there, but I’m sure it will take a lot of time to implement; still, the government has started thinking in the right direction. One thing about India is that while we were working with schools, we cannot go to every single school and get approval, so we work directly with the state governments. We have MOUs signed with the state governments, and they send out circulars to all the schools in the state to follow our interventions. That’s how it’s been working. What I have seen is that in India there are many states, and in each state the policies are very different. So in one state, special education or education for persons with disabilities will come under a separate department, like the Department of Social Justice; there are different departments, actually, for persons with disabilities. So in a few states, education comes under that department. But in a few other states, it comes directly under the Department of Education. These are two different departments, and it is not the same policy throughout the country. Sometimes, when it is with the education department, the accessibility and awareness aspects are not very much there, because it’s for general education. And even when it’s under the special education department, a lot more needs to be done, though it’s a little bit better. So there are all of these constraints. There is no single policy that everyone nationally is following.
It’s different for different states. That said, we actually passed the Rights of Persons with Disabilities Act in 2016. A lot needs to be done, but it has started. I’m not saying it’s still the same as it was a decade back, because the government is actually trying to make its websites accessible. There’s a long way to go, but it has started. And there needs to be something central for special education in the country, which right now is not there.

Gonola:
Yes, there is certainly a lot to do. And one of the areas we often talk about is universal design and its principles, to ensure that accessibility for anyone is designed in from the start. If we take, for example, the built environment: a level entrance to a building instead of stairs is useful for persons using a wheelchair, but it’s also useful for someone pushing a pram or a delivery cart, and it’s not a special adaptation. That’s what we would like to see more and more of in the online world. For example, here in this room we have captioning, and a lot of work has been done to ensure that there is captioning in these particular sessions. It’s essential for a person who has hearing loss, but it’s really good for anyone whose first language is other than English and who needs to have confirmed what is being said, or who can catch up on some facts through the captioning. So I’d like to ask Dr. Huggins for your thoughts about universal design and its principles in the online learning environment.

Jacqueline Huggins:
Just to clarify, we have a national policy from 2018; however, we don’t have any legislation to back that policy, so it’s like you have a policy but nothing is being done. Thankfully, the draft Trinidad and Tobago Disabilities Bill of 2023 will change that. Now, in terms of universal design, my personal thought is it can be done, and it is useful for everyone. So in terms of our academic staff, I have met with some academic staff and tried to show them that, based on what they do and how they do it, it will allow any student to benefit from their delivery, will allow any student to be able to do that assignment. One of the things we talk about is really the cost. For instance, my university was built 75 years ago, and how do we retrofit so that it’s physically accessible? We have lecturers who started teaching many years ago, and this whole online and internet world is very new to them. So how do we change the way they think and understand in terms of meeting the needs of every student within that classroom? That is something we continue in terms of awareness. We do outreach. We meet with the organization on the campus that provides training for academic staff so that they have a sense. Websites: I am working on a campaign where we are trying to get every faculty’s website to be accessible. We have new things. I am not sure if you have heard of Canva; we have some colleagues who love to use Canva. They love to put in pictures, they love to put in blocks, and when they do that, a student who is blind finds their screen reader cannot read it. So it is a constant. You must have a watchdog. I call myself a watchdog on that campus. You must have a watchdog that looks and sees and recognizes, and then speaks out on behalf of students. We also work closely with our students: what are your needs? And we have to meet your needs; once we recognize them and we said, yes, we are taking you onto this campus, we must recognize your needs.
we, my department, work very closely with the students that we serve. So we are always liaising with the lecturer, we are always liaising with our deputy principal in terms of changes that must come. Our mantra is that we are going to create a campus without barriers and that is what we work towards. Universal design is super important.

Gonola:
I like your term watchdog. I often use the term accessibility champion, and I would encourage any organisation to ensure that there is either a watchdog or an accessibility champion to keep reminding fellow staff, and the organisation generally, to ensure that there is accessibility and that it doesn’t slip away. Swaran, would you have any comments on that, please?

Swaran Ravindra:
I was just listening; it’s totally remarkable. As Dr. Huggins had previously said, I think it’s evident that legislation on its own is never enough, because even without legislation, these remarkable women have done so much work. They have come up with textbooks; they have come up with tertiary-level education. If I may make reference to Professor Harrington-Blake, who is in the Faculty of Education: I remember when I met her, about four years ago, she told me, no, we do not have enough legislation for persons with disabilities specifically, but we do have the Education Act, and it says everybody. That did not stop her; it was actually something she utilised, because the term everybody means every citizen of the nation, and that gave her enough legal grounding to go ahead and create a tertiary-level qualification, a master’s or postgraduate degree in inclusion, that teaches teachers how to make their classes inclusive. So I think this is enough evidence to say that legislation on its own is never enough. We do need the watchdogs; we need people who are there constantly, ensuring that inclusion becomes part of our DNA. It needs to be part of our muscle memory; it needs to be part of our everyday motto and mantra. Nobody should be left behind because somebody forgot to address the needs of a particular person. So just as the University of the West Indies has, we at Fiji National University also have a reasonable adjustment form, with which we meet a student and then have a discussion, going through a student counselling session. But the other obstacle we face in that area is that the right still remains with the student whether they want to declare their disability. And many times we’ve got these cultural norms, these societal norms; we have challenges around that as well, because until and unless somebody declares the disability, there is not much that we can do to help.
That does become a barrier. If I could refer to a specific case: I remember teaching a student who exhibited traits, I wouldn’t say symptoms, but traits of a form of autism. I had some discussions with other teachers, and they told me it did seem to be autism, but we could not really put a finger on what particular type, and until and unless we can do that, we will not be able to create the special provisions that are needed. So that becomes an obstacle. When we tried to talk to her parents, the parents had a very aloof reaction. They said, no, my child doesn’t have a disability. For them, disability is something to be shunned, to be kept quiet about. They feel it would be embarrassing if anybody got to know that the child has a disability; it is not something to be proud of, and they feel it would deter people from giving opportunities in the workforce as well. So these are some of the obstacles we are facing. Now, FNU is an ISO-certified organization; we practice ISO 9001. And we’ve had situations: I remember there was a time when we had a participant in a short-course program, she may have been in her early 50s, and she was actually paying for the course through her superannuation. There were people in the class who came and told me, madam, it’s rather dangerous to keep her in class. Well, they used rather disturbing terms, but what may have been the case was paranoid schizophrenia.
So I had other participants coming and telling me that she could be dangerous to keep in class. Then again, we’ve got another piece of legislation, OHS, under which we need to protect every participant in the class. So sometimes we have laws that sort of contradict each other, but there comes a point in time when, as in my case as a teacher, I had to stand my ground and say, no, my student has a constitutional right to be in this class, and if we are not creating the right provisions, then we are not doing the right thing. Eventually we had a good discussion. This was back in 2006, and I remember we still kept that student in class; the fact that she was using her own superannuation was, I think, evidence enough that she was of sound mind to work and earn a living for herself. So there are many things that sort of contradict each other as well, but I think in cases like that we probably need another act that stands robust on its own: the requirement to create provisions for persons with disabilities, which was enshrined within the 2018 Rights of Persons with Disabilities Act. The incident that I’m telling you about happened in 2006, so the only legal instrument I had in order to keep this student of mine in class was the fact that it is a basic constitutional right to be in class. But of course, as in many less developed countries, as in many economies that are still developing, there will always be a huge gap between what the constitution says the citizens should have in terms of rights and what the legislation says in terms of what happens when those rights are breached. So we need to focus on the gaps, and also find out how to address them.

Gonola:
There is a lot to unpack there. And I think that when it comes to the issue of cultural barriers, it is about the general education community understanding what it means to have a different type of disability, and the shunning, the stigma in some cases. Vidya, do you have any comments about that, and also in terms of universal design?

Vidya:
Yes, as I was already mentioning, sometimes it’s accessibility-specific issues why people are not able to get onto digital platforms, but sometimes it’s also barriers like cultural norms and disability being considered a stigma. This happens in many villages. For example, there is one lady who stays next door to my house, and she rarely comes out of the house. She’s almost 40 now, and for 40 years she has been a blind person locked up indoors. So there are situations like that. I myself have tried to get some women onto digital platforms so that at least they can be connected to the community, and when I tried to reach out to them, at the initial stage itself there would be somebody at home picking up the call and not connecting me to them. So they don’t even have that much freedom to get onto digital platforms. All of these barriers definitely are there. And sometimes it’s also how we design the technologies; there are social considerations about how we want to appear. For example, take the simple example of a cane: some people are not comfortable taking it and walking with it because it looks very different. Now, if there are audio-specific devices which are too big, or which are not very socially pleasing to carry in a social setting, then people will not like to use them much. A phone, for example, is a very good example of universal design, because on the phone there is TalkBack and all sorts of accessibility features. When you want, you can turn them on; when you want, you can turn them off. And everybody carries a phone; there’s nothing that prevents you from taking it out whenever you are in a group or in a social setting.
So you’ll have to consider all of these barriers as well while designing e-learning experiences, and make whatever platform you design them on as inclusive and as socially acceptable as possible. All of these will have to be factored in. And continuous support for people to use the platforms is also a must. Sometimes the government runs a lot of programs; they distribute laptops and many other devices, or other organizations distribute them to students, and all the software is installed and LMS platforms are there. But who is going to oversee whether the students, teachers, or whoever wants to use the platform are comfortable, whether they are using it, whether they are able to use it on a long-term basis? All of these will surely have to be considered along with accessibility issues.

Gonola:
Thank you, Vidya. I’d now like to bring in Zakaria Yama from Morocco, who is a co-organizer of this session and also on the leadership team of the Internet Society Accessibility Standing Group. Zakaria, could you make some comments about universal design principles too? Thank you.

Zakari Yama:
Thank you, Gunilla; thank you, everyone. As was said, some institutions find it difficult to apply universal design and make it compatible with accessibility, even though both have the same goal: widening access and reducing barriers for students. However, their scope and methods vary. Universal design focuses on a broader range of learners, while digital accessibility focuses essentially on learners with disabilities. But the good news is that what is good for persons with disabilities is also good for everyone. Take, for example, real-time captioning for persons with disabilities: it is also good for students without disabilities, for instance when they have difficulty understanding an instructor’s accent, or when watching a video in a loud environment. When applied with an accessibility mindset, universal design for learning often results in benefits for people beyond those in need of a specific accommodation. In my opinion, any institution should use the accessibility effort as an opportunity to improve its universal design practices. Thank you.

Gonola:
Thank you very much, Zakari. Before we go on to talk about the broader concept of how the internet community can all work on making e-learning more accessible, I’d like to open the floor now to persons in the room and online if there are any comments or questions. Yes, we have one from Lydia Best. Please take the microphone.

Lydia:
Thank you very much. I’m Lydia Best, and I represent the European Federation of Hard of Hearing People. I have a question not just around e-learning in the classroom itself, but also before it: students have to access the internet and the online resources teachers provide for them, be it assignments or whatever materials they need to use. What I have seen, in the UK, is that IT departments in schools often apply a very heavy-handed approach to accessing the schools’ online resources. And that, in fact, is a barrier for those with cognitive impairment. Constantly changing the passwords, constantly changing the way to access, immediately stops students from accessing vital information and from being able to submit their assignments. And the problem is that nobody actually sees this as a problem, even when you raise it, because it’s seen as: this is simple, this is no problem for anyone, so why do you have a problem? I think we need to address that as well. Thank you.

Gonola:
Thank you very much, Lydia. Who would like to take that question? Vidya or Dr. Huggins, Swaran, who would like to take that question?

Jacqueline Huggins:
What I would like to say is that I understand what was just said, but on my campus, again, my department works closely with IT. We listen to whatever complaints students have, and we take them to whichever quarters. So, for instance, we had students who could not afford the software that is needed, JAWS. What I did was work with my supervisor to gain funding so that we were able to purchase four licenses, and we put them in each of our libraries, our computer labs, so our students were able to use them. IT was included so that it had an understanding of why we were using the software and why they need to support the students. So it is also about finding the stakeholders who will listen, finding the stakeholders who will understand and ensure that what the student needs is what the student gets. There is some equipment that is very expensive, that our students cannot purchase, and therefore the university has that responsibility. And once the university has that responsibility, those who are involved in ensuring that it happens, like our IT unit, are definitely brought on board. A lot of what we do takes meeting and talking and negotiating, which shouldn’t be necessary; it should simply be: this is what needs to be done. But it takes some of that to ensure that the students are not frustrated, that they are able to come on campus and do what they need to do.

Gonola:
Thank you. Any other comments to that question?

Swaran Ravindra:
I just wanted to just clarify from Lydia once more. So is your question around the need of, you know, having to constantly change your passwords or there’s too many authentication processes that make it cumbersome for a person with disability to continue, you know, working? Is it something around that? If Lydia could please clarify, I’m just trying to understand.

Lydia:
It’s Lydia speaking; yes, that’s correct. So that is even before you go online to participate in your online learning. And I’m not going to talk about captioning, because that has already been covered, but it is actually about accessing the vital materials: students have to get into the online library where the teachers put the assignments, and students get chastised for not finishing or finalizing the work when they literally could not remember their passwords. And when we raised it, it was a constant battle of working with IT to understand that you can’t keep changing those passwords, you can’t keep ramping up security, because it creates a barrier for the students. And I have seen it first-hand with my son. Thank you.

Swaran Ravindra:
Thank you, Lydia. I think that’s a very valid point. And additionally, students should never be penalized for the extra time they require to log into the system. The assessment should, in fact, start from the moment the student has accessed the main curriculum. If you look at certain IT exams, for example CCNA, Cisco exams, Checkpoint exams, Microsoft exams, you’ll see that you are assessed only for the time that you are actively online. And if there is any sort of technical issue, then whatever time the technical issue takes would not be held against you; you will be compensated for that time. That’s one part of the equation. Now, of course, it’s very important for us to be cyber-resilient in today’s world; I cannot emphasize that enough. However, there are so many easier ways of authentication: thumbprint scans, retina scans, face recognition. These authentication methods are specific to the person, so there is really no way of bypassing them; they are very secure, and easier as well. So I fail to understand why they would impose such difficult types of authentication, waste students’ time, and make it such a deterrent that the student would not even want to go back to class. So maybe you should really advocate for this.

Gonola:
Thank you very much, Swaran. And there is another question or comment here in the room.

Anna:
Hello, my name is Anna. I’m from Brazil, and I work in a children’s rights organization. I would like to hear a little more about Lydia’s work with children, and if you can comment on the role of civil society in promoting their rights. You talk about guaranteeing accessibility from the design stage, which is what we defend for children too. But I want to hear your thoughts on how we can do and promote this if we don’t have the platforms involved in this debate, or if we don’t have persons with disabilities working in those places to think about and promote these accessible approaches. What are your thoughts about that?

Gonola:
That’s a long question, and it could have a very long answer. I think we can make it part of the rounding off of this session: how can the internet community encourage collaboration across the globe to make learning more accessible for persons with disabilities, and certainly children with disabilities? I’ll pass now over to Vidya.

Vidya:
Yes, so, to answer a little of the question you asked earlier: when you’re talking about children, a lot of the time children do not know what they want. So it should be persons with disabilities who have grown up in similar circumstances, who have gone through the system, who tell us what the children need. What I have seen is that whenever you take any new technology to a child, they are very open-minded; they are not biased, they have not grown up yet, so they don’t have their own assumptions. Whenever you bring them something new, they pick it up really, really quickly. So I don’t see why a child who is introduced to the computer, to braille, to technology, right from grade one, won’t be able to compete with everyone else by the time they reach grade eight or nine. They can do pretty much everything on par with everybody else. That’s what we are trying to do: right from a very early age, everything that a child with sight has access to, we are trying to make available for children without sight as well. And I feel the nonprofit organizations have a huge role, because they are the bridge between the government and the community; they know the ground realities of working in this space. So it’s very much essential for the nonprofit organizations to be that bridge and play their role very effectively. Also, as an internet community, I feel that forums like these, where people with expertise in different areas share their thoughts and network, help us identify the pressing needs the community at large has, follow up with the networks we make here, and make a meaningful impact together. No one can do it individually.
So I feel forums like these and the internet community has a huge role to play, and it takes time. So it’s a good starting point.

Gonola:
Thank you very much, Vidya. It’s so important to hear from a person with lived experience, and about the pathway Vidya took to become the global advocate she is now. I’ll now pass on, in the last few minutes, just very briefly, to Dr. Huggins, to give some thoughts on encouraging the collaboration across the globe that we’ve already heard about, all of those experiences from various different countries. How can we continue that collaboration to make e-learning more accessible? Dr. Huggins.

Jacqueline Huggins:
And I certainly want to agree about forums like these, because this is where I learn, and this is what I take back to my university and try to get implemented. In this organization there is a wealth of knowledge, a wealth of experience, and we cannot stop; we need to continue. So, for instance, one of our campuses is fully online, and it covers 13 countries in the Caribbean, and students are able to get their degrees. I believe if we utilize a system like that, little by little, we spread it. I am talking to somebody from India; I’m talking to somebody from Fiji. We learn from each other, and then we put together what the best practices are, and we start to utilize whatever we learn in these forums. It’s not a talk show: we’re going to take back some thoughts, we’re going to take back some action. And I think, little by little, if we stay with this, if we stick together, we can get it done. It’s going to take some time, like she said, but it’s not impossible.

Gonola:
Thank you very much. And I will give the final word to Swaran Ravindra, please.

Swaran Ravindra:
Thank you very much. So finally, through all this conversation, there’s something I wanted to talk about: affirmative action. We can talk about these things, but last year I met someone at APrIGF who mentioned he had been saying the same thing for the past 10 years. I think it’s time for affirmative action, and we can do it together, right? So some of the things that could help: first of all, disparity measurement. We cannot talk without having proper measurements in front of us. Governments and economies will not listen to us until we have evidence based on disparity measurements, basically a simple measure of how many people are digitally included versus how many are not. Then there are standards, like the world-renowned WCAG; at the very least we could try with version 1.0, even in places where digital inclusion has never been attempted, so that we have the Web Content Accessibility Guidelines 1.0 to start off with. And one initiative I wanted to speak to you about is UNESCO’s ROAM-X Indicators, an Internet Universality Indicators assessment. We are currently doing this for five Pacific island nations, and it is based on human rights principles: an internet that is rights-based, open, accessible to all, and nurtured by multistakeholder participation, as well as some cross-cutting issues like children, gender, security, and the economy. So this is quite an interesting study; I’m actually part of it. If there’s anybody who would like to talk to me about how you could do this, I’ll be happy to address your questions later on. That’s all I had to say. Thank you very much.

Gonola:
Thank you very much, Swaran, and thank you very much to the panel and to the audience for your questions. I think we have learned a lot, and we look forward to further collaboration across the globe. Thank you very much, everyone.

Jacqueline Huggins:
Thank you and goodbye.

Swaran Ravindra:
Sorry, can we just take a photo quickly, please?

Speech statistics

Anna: speech speed 131 words per minute; speech length 125 words; speech time 57 secs
Gonola: speech speed 120 words per minute; speech length 1296 words; speech time 649 secs
Jacqueline Huggins: speech speed 152 words per minute; speech length 1522 words; speech time 599 secs
Lydia: speech speed 158 words per minute; speech length 349 words; speech time 132 secs
Swaran Ravindra: speech speed 193 words per minute; speech length 2448 words; speech time 761 secs
Vidya: speech speed 166 words per minute; speech length 2604 words; speech time 944 secs
Zakari Yama: speech speed 109 words per minute; speech length 197 words; speech time 108 secs
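The speech-speed figures above follow directly from the listed lengths and times (speed ≈ words ÷ seconds × 60); the listed values round inconsistently, so a quick consistency check allows a one-unit gap:

```python
# (speaker, words, seconds, listed words-per-minute) from the statistics above
stats = [
    ("Anna", 125, 57, 131),
    ("Gonola", 1296, 649, 120),
    ("Jacqueline Huggins", 1522, 599, 152),
    ("Lydia", 349, 132, 158),
    ("Swaran Ravindra", 2448, 761, 193),
    ("Vidya", 2604, 944, 166),
    ("Zakari Yama", 197, 108, 109),
]

for name, words, secs, listed_wpm in stats:
    wpm = words / secs * 60
    # Rounding in the published figures varies, so tolerate a gap of 1.
    assert abs(wpm - listed_wpm) <= 1, name
```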

Advancing rights-based digital governance through ROAM-X | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Alexandre Fernandes Barbosa

Brazil has been producing data of the kind required by the Internet Universality Indicators framework for almost two decades, demonstrating the importance of sustained data production in assessing internet universality. Given the framework’s extensive range of indicators, applying it requires the collection of comprehensive and up-to-date data.

However, one significant hurdle in utilizing the framework is the existence of a data gap in many countries, which prevents a thorough assessment of internet universality. Without the required data, these countries are unable to effectively evaluate their progress in achieving the goals outlined in the framework. This highlights the need for increased data production and availability to ensure accurate assessments.

The implementation of the Internet Universality Indicators framework has facilitated multi-stakeholder dialogue, providing an opportunity for different actors, including policymakers, civil society, and the private sector, to contribute their perspectives and insights. Continuous engagement of these stakeholders is crucial for effective e-government systems and the development of tangible outcomes.

Brazil serves as a notable example of the positive impact of multi-stakeholder dialogue, with the creation of important legislation such as the Brazilian General Data Protection Law (LGPD), the law of access to information, and the Internet Bill of Rights. These outcomes underline the potential of multi-stakeholder dialogue to drive meaningful changes in governance and policy-making.

Furthermore, the relevance of specific stakeholders has not significantly changed, emphasizing the continued importance of involving government, technical community, civil society, and the private sector in discussions and decision-making processes.

UNESCO has played a vital role in fostering dialogue and cooperation, particularly in the context of internet universality. Working closely with UNESCO, individuals such as Barbosa appreciate the organization’s efforts in building capacity and raising awareness among member states. This collaboration has resulted in significant progress, with a considerable number of countries completing assessments and demonstrating commitment to achieving the goals of the framework.

However, one area of concern is the existing data gap, particularly in countries from the global south. It is crucial to address this gap as it hampers the ability to comprehensively assess internet universality and implement necessary measures in these regions.

In conclusion, the Internet Universality Indicators framework provides a comprehensive understanding of the significance of data production, multi-stakeholder dialogue, and periodic assessment in ensuring progress towards internet universality. The successful application of this framework by Brazil highlights its effectiveness in driving positive outcomes. However, the data gap remains a challenge, and further efforts are needed to bridge this gap, particularly in global south countries. Overall, the framework’s implementation has contributed to a greater understanding of the importance of collaboration, assessment, and capacity building in advancing internet universality.

Audience

During the discussion, both the speaker and the audience displayed a keen interest in the multi-stakeholder dimension of the framework and in whether any new indicators have emerged in the last five years. The primary question raised was whether new indicators now exist in this domain.

The multi-stakeholder dimension was deemed a noteworthy aspect of the indicators and directly relevant to the topic under discussion. It was emphasised that the existing list of indicators already encompasses multi-stakeholder involvement, suggesting that such indicators are recognised and widely accepted within the field. The discussion therefore aimed to identify whether any novel indicators had emerged in the last five years, which would point to advancements or changes in this area.

The audience also expressed curiosity about modifications or developments in multi-stakeholder engagement. Specific supporting facts or evidence to address their questions were not mentioned, but their curiosity reflects a general interest in staying up to date with the latest advancements in the field.

Given the neutral sentiment expressed by both the speaker and the audience, no definitive conclusions were reached during the discussion. The question about the emergence of new indicators implies a desire for further exploration and potential expansion of knowledge on the subject.

Speaker 1

Five years ago, the Internet Universality Indicators received endorsement from UNESCO’s Intergovernmental Council of the International Programme for the Development of Communication. During a recent forum, the speakers emphasized the necessity of continuous transformation and improvement of these Indicators. They highlighted the need for shared insights, strategies, and identification of areas that require enhancement.

The speakers recognized the lessons learned and challenges faced over the past five years, which have strengthened the importance of constantly evolving and adapting the Indicators. They stressed the significance of collaboration and collective action in shaping and refining these guidelines.

Furthermore, the speakers emphasized the value of collective efforts and the exchange of experiences, obstacles faced, and strategies for success. They hoped that the discussions held during the forum would result in tangible benefits for all stakeholders involved in the Romex framework, an important aspect of the Indicators.

Overall, the speakers concluded that the continuous evolution of the Internet Universality Indicators is crucial in ensuring their relevance and effectiveness in addressing the ever-changing digital landscape. They urged a collaborative approach, encouraging stakeholders to work together to shape these Indicators and improve the digital policies related to them. This united effort is expected to lead to practical and positive outcomes for all parties involved.

Anja Gengo

The Internet Governance Forum (IGF) featured discussions on various topics related to Internet governance. One notable highlight was the recognition of the Dynamic Coalition, an independent and autonomous entity, for its successful engagement of stakeholders worldwide. The coalition has played a crucial role in promoting indicators and monitoring their implementation since their adoption in 2018. This engagement has yielded significant results, underscoring the value of their efforts.

Another key point addressed was the need to involve stakeholders from underrepresented countries in global Internet governance processes. The IGF Secretariat has prioritised outreach to engage stakeholders from countries that have traditionally had limited participation in these processes. This approach has proven effective in incorporating active participation from nations such as the Maldives, previously underrepresented in global Internet governance initiatives. The argument presented is that engaging stakeholders from a diverse range of countries is essential for achieving a more inclusive and comprehensive approach to Internet governance.

Furthermore, the speakers emphasized the importance of upholding the highest humanitarian values in the digital world. They highlighted the disparity in how different jurisdictions interpret social media posts, with some considering them exercises of freedom of expression while others penalise them with imprisonment or fines. The call to uphold humanitarian values implies the need for the digital world to strike a balance that respects freedom of expression while safeguarding the well-being of individuals and communities.

Additionally, it was noted that there has been a proliferation of national laws regulating artificial intelligence since the onset of the pandemic. Prior to the pandemic, only a few national jurisdictions had laws pertaining to artificial intelligence. However, in the post-pandemic era, there has been a significant increase in the number of such laws. This observation highlights the growing recognition of the importance of effectively regulating and governing the use of artificial intelligence technologies.

The speakers also stressed the importance of adopting a methodological approach to stakeholder engagement. The IGF Secretariat presently focuses on engaging stakeholders from underrepresented countries, ensuring a multi-stakeholder and multidisciplinary approach. This methodical approach is seen as essential for fostering more diverse and inclusive discussions on Internet governance.

The relevance of early assessments and the need for expanding outreach were also brought to the fore. The COVID-19 pandemic has brought about significant changes in the legal landscape, necessitating a reevaluation of existing assessments. Moreover, efforts must be made to ensure that assessments and outreach are inclusive and comprehensive, without jeopardising the global nature of the Internet.

The speakers also emphasised the need to engage stakeholders from different backgrounds and perspectives in dialogues and processes. They shared an anecdote about a Tanzanian judge who did not fit into a standard stakeholder category, highlighting the importance of recognising and including diverse voices. The initiation of a parliamentary track in 2019 reinforces the need to address recognised gaps in stakeholder group representation. Therefore, efforts to actively engage stakeholders who are not participating within certain stakeholder groups are crucial.

Furthermore, the speakers stressed the necessity of active participation from high-ranking individuals in various domains, particularly those that are currently underrepresented. The absence of medical professionals in privacy-related discussions and individuals from the car industry, particularly at the highest management levels, was highlighted. This observation suggests that the perspectives of individuals with expertise and decision-making authority in these fields should be actively sought to ensure that Internet governance discussions are well-informed and effectively address critical issues.

Lastly, the speakers underscored the significance of promoting and implementing UNESCO’s Internet Universality ROMEX indicators. These indicators are considered essential for guiding and assessing Internet universality, ensuring that the Internet is used for the benefit of all individuals and societies. Both the Dynamic Coalition and the IGF Secretariat expressed support for these values, with an emphasis on cooperation between UNESCO and the IGF for successful implementation.

In conclusion, the discussions at the IGF covered a range of topics related to Internet governance, including stakeholder engagement, representation, regulation of artificial intelligence, the importance of humanitarian values, and the implementation of UNESCO’s Internet Universality ROMEX indicators. Throughout the discussions, the importance of inclusivity, comprehensive assessments, and active participation from diverse stakeholders was consistently emphasised.

David Souter

David Souter proposed a holistic approach for assessing Internet Universality Indicators (IUIs). These indicators, based on the concept of Internet universality developed in 2013, focus on rights, openness, accessibility for all, and multi-stakeholder engagement. Souter pointed out that many countries have concentrated solely on the core indicators and advocated for a review to address this issue.

Souter stressed the importance of diversity within the research team and advisory board when using IUIs. He highlighted that a diverse team helps avoid political pressure and vested interests. Moreover, diverse expertise within the team leads to a more impactful output. Including multiple perspectives ensures a comprehensive analysis and enables the project to benefit from a wide range of insights.

Additionally, Souter emphasized the need to prioritize practical interventions over ideal ones in the national context. The goal of IUIs is to identify realistic interventions that can be implemented effectively. Recommendations should be feasible and achievable within specific national contexts. This pragmatic approach ensures that IUIs can effectively promote Internet universality.

Souter criticized member countries for solely focusing on core indicators. He argued that this approach overlooks the opportunity presented by non-core indicators. By narrowing their focus, countries may neglect important aspects of Internet universality and fail to address crucial issues. Souter’s analysis underscores the necessity of adopting a comprehensive and inclusive approach when utilizing IUIs.

In conclusion, David Souter’s analysis highlights the significance of a holistic assessment approach for Internet Universality Indicators. This approach encompasses diversity within the research team and advisory board, prioritization of practical interventions, and consideration of non-core indicators. Employing this approach enables countries to gain a more comprehensive understanding of Internet universality and actively work towards creating a more inclusive and accessible digital environment.
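Souter’s distinction between core and non-core indicators can be illustrated with a small data model. The four categories follow the ROAM principles he names, but the indicator codes, records, and coverage function below are purely illustrative, not UNESCO’s actual numbering or methodology.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    code: str        # e.g. "R.1" -- illustrative codes only
    category: str    # one of the four ROAM principles
    core: bool       # core indicators form the minimum assessment
    assessed: bool   # whether the national team answered it

def coverage(indicators, include_non_core=True):
    """Share of indicators a national assessment has answered."""
    pool = [i for i in indicators if include_non_core or i.core]
    return sum(i.assessed for i in pool) / len(pool)

sample = [
    Indicator("R.1", "Rights", core=True, assessed=True),
    Indicator("O.1", "Openness", core=True, assessed=True),
    Indicator("A.1", "Accessibility", core=False, assessed=False),
    Indicator("M.1", "Multi-stakeholder", core=False, assessed=False),
]

# Counting only core indicators overstates how complete the picture is:
print(coverage(sample, include_non_core=False))  # 1.0
print(coverage(sample))                          # 0.5
```

The toy numbers make Souter’s point concrete: an assessment can look complete on the core set while large parts of the full framework remain unexamined.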

Lutz Möller

The analysis of the given statements highlights several key points pertaining to internet ecosystems and their influence on societal discourses. One speaker highlights the rapid expansion of dominant social media platforms, noting the fundamental changes observed in these platforms. This speaker also emphasizes the influence of these platforms on the visibility of different political views and the concerning increase in the spread of disinformation.

Another speaker emphasizes the necessity of strengthening internet ecosystems in a more democratic and nonprofit manner. The speaker acknowledges the growth of artificial intelligence (AI) manipulation and repression, as well as the growing influence of private business interests in public discourse. The argument here is to establish internet ecosystems in a way that prioritizes democratic values and ensures a level playing field for all participants.

Additionally, the use of Internet Universality Indicators (IUIs) is praised for providing a comprehensive viewpoint of whether internet policies adhere to principles of human rights, openness, access, and stakeholder participation. The evidence points to Germany’s experience with IUIs, which generated brutally honest evidence regarding internet policies. It is highlighted that IUIs play a pivotal role in highlighting the delicate balance between the right to privacy and freedom of expression.

However, there are concerns raised about the number of IUI indicators, with a suggestion that there should be a stronger focus on key areas and topics. The feasibility and practicality of certain indicators are questioned, as well as issues surrounding data availability and operationalization. Despite these concerns, the general sentiment remains neutral toward the number of IUI indicators.

Additionally, the analysis highlights the crucial role of a multi-stakeholder advisory board in the IUI process, particularly when it comes to effectively communicating results to political stakeholders. The evidence provided is Germany’s successful experience with a multi-stakeholder advisory board in the IUI process. This highlights the significance of involving various stakeholders in decision-making processes to ensure transparency and accountability.

In conclusion, the analysis of the statements highlights the rapid expansion and influence of social media platforms on societal discourses. It emphasizes the need for democratically driven and nonprofit internet ecosystems to counterbalance the growing influence of private business interests. The use of IUIs is regarded as an effective tool for assessing internet policies’ adherence to human rights principles and stakeholder participation. However, there are concerns about the number of indicators and the practicality of certain measures, as well as the importance of multi-stakeholder involvement and effective communication with political stakeholders. Overall, these insights contribute to a better understanding of the complexities surrounding internet ecosystems and their impact on societal discourses.

Simon Ellis

The analysis focuses on the Internet Universality Indicators (IUI) system, which offers a holistic approach to assessing a country’s internet infrastructure and usage. Rather than producing a single definitive score, it asks countries to answer a set of questions, resulting in a comprehensive picture of their internet landscape. This approach is viewed positively as it allows for a more nuanced understanding of the internet in different countries.

Follow-ups are considered an important aspect of IUI assessments. The analysis highlights the first follow-up assessment, conducted in Kenya by Grace Githaiga. However, the nature of reporting and the frequency of IUI assessments are being questioned, suggesting the need for further examination of this aspect.

The inclusion of new themes in IUI assessments, such as AI, environment and sustainability, and cyber security, is supported. These emerging themes are seen as crucial considerations in evaluating the state of the internet and its impact on society. This demonstrates the dynamism and adaptability of the IUI framework to address current and evolving challenges.

E-waste and satellite connectivity are identified as significant issues in Southeast Asia and the Pacific. The analysis notes that Southeast Asia has become a dumping ground for e-waste from Europe and North America, highlighting the environmental and sustainability concerns associated with improper e-waste disposal. Additionally, the geographical challenges in the Pacific region make satellite connectivity the only viable option, underscoring the importance of addressing this issue for improved internet access in these areas.

Another important point raised in the analysis is the need to define the concept of multi-stakeholder participation. The analysis suggests that true multi-stakeholder involvement goes beyond mere attendance at meetings and emphasizes the importance of active engagement and meaningful inclusion of stakeholders’ inputs in decision-making processes. This understanding is crucial for fostering genuine collaboration and effective governance in the digital realm.

The analysis also stresses the necessity of achieving real participation in multi-stakeholder initiatives. It highlights the observation that in e-government systems, inputs from civil society representatives are often disregarded or their usage remains unknown. To address this issue, it is crucial to analyze what meaningful and effective participation looks like and how it can be captured in order to establish inclusive and participatory digital governance.

Furthermore, the analysis mentions the role of new actors on the internet. It notes that police involvement in internet-related matters has been observed in recent maps, indicating the increasing influence of new actors in the digital space. This development raises questions about the implications and potential challenges associated with the involvement of these actors.

The analysis also brings up the noteworthy observation made by Simon regarding the importance of indicators related to training for judges and lawyers. Simon considers it interesting and important, suggesting that adequate training in legal matters pertaining to the internet is crucial for maintaining peace, justice, and strong institutions. This observation highlights the need to prioritize the training of legal professionals in digital issues to ensure fair and effective dispute resolution and legal processes in the digital era.

Finally, the analysis mentions Simon’s approval of the assessment and his anticipation of a new version related to the global digital compact. This indicates support for the assessment process and the belief that it can contribute to advancing global digital cooperation and achieving the goals outlined in the global digital compact.

Overall, the analysis provides valuable insights into the Internet Universe Indicator (IUI) system, its various aspects, and its implications for assessing and improving the internet infrastructure and usage. It highlights the importance of continuous evaluation, the inclusion of new themes, addressing specific challenges, and achieving meaningful multi-stakeholder participation in fostering a sustainable and inclusive digital landscape.

Marielza Couto e Silva de Oliveira

The Internet Universality ROAM-X framework, which focuses on the principles of the Internet, needs to be revised to keep pace with the rapidly evolving digital governance and technological landscapes. One argument proposes that the ROAM-X indicators should be strengthened and potentially expanded to include new dimensions like child data protection, mental health, and AI toxicity levels, in order to better address the challenges and implications arising from these areas.

The argument stems from the potential of the ROAM-X indicators to serve as a critical mechanism for monitoring adherence to principles in the upcoming global digital compact. By incorporating child data protection, mental health, and AI toxicity levels, the framework can enhance its effectiveness in promoting good health and well-being, quality education, gender equality, and industry innovation and infrastructure, all outlined in the relevant Sustainable Development Goals (SDGs).

It is important to note, however, that many national teams applying ROAM-X face research obstacles due to a lack of disaggregated data, which limits visibility of the indicators. Despite this challenge, stakeholders believe that tightening the ROAM-X indicators and expanding their scope is essential to keep up with the evolving technological and governance landscapes.

To ensure a successful update of the ROAM-X framework, active participation, collaboration, and continued engagement of stakeholders are crucial. The Internet Universality Indicators Dynamic Coalition has proven to be an effective platform for exchanging expertise and experiences in this regard. Stakeholders, who possess an on-the-ground understanding of national needs, research difficulties, and emerging themes, play a valuable role in shaping the future of the ROAM-X framework.

In conclusion, the Internet Universality ROAM-X framework requires revision to adapt to rapidly changing digital governance and technological landscapes. Strengthening and potentially expanding the ROAM-X indicators to include areas like child data protection, mental health, and AI toxicity levels is proposed. The successful update of the framework relies on active participation, collaboration, and ongoing engagement of stakeholders. The Internet Universality Indicators Dynamic Coalition facilitates knowledge exchange, while stakeholders provide valuable insights into national needs and research challenges.

Moderator – Tatevik GRIGORYAN

The meeting on UNESCO’s Internet Universality Romex Indicators was attended by participants from various parts of the world who joined online. Notably, Dr. Lutz Moeller joined the meeting early in the morning, demonstrating dedication and commitment. Despite the inconvenient times, participants were acknowledged and thanked for their valuable contributions.

The meeting included individuals who played a significant role in the development and progress of the Romex Indicators, showcasing the importance of their expertise and insights. It was mentioned that Tatevik Grigoryan, the meeting’s moderator, was sitting next to these individuals, further illustrating their involvement and importance in shaping the indicators.

Due to unavoidable circumstances, the assistant director general for Communication and Information at UNESCO could not attend the meeting in person. However, a video message from the assistant director general was played, indicating their commitment to the meeting and the subject matter.

The meeting emphasized the principles of internet universality, which is the official position of UNESCO. This position entails upholding the rights of individuals, ensuring openness, promoting accessibility for all, and fostering multi-stakeholder participation. The meeting highlighted the multi-stakeholder approach to internet governance, which is also promoted by the Internet Governance Forum.

The ROMEX IUI assessment, considered a unique global tool, is currently being implemented in 40 countries. These assessments aim to inform policymakers and contribute to the development of digital strategies, laws, and regulations. It is worth noting that six out of the 40 countries have already published a report based on the assessment.

The ROMEX IUI assessment not only aids in the development of the internet at the national level but also supports the achievement of Sustainable Development Goals. It aligns with the Global Digital Compact, emphasizing the significance of this assessment framework as a comprehensive and holistic approach to internet development.

The meeting also discussed the ongoing revision of the framework. Considering that the ROMEX IUI assessment is currently being implemented in 40 countries, it is imperative to incorporate topics and lessons learned from the implementation process into the revised framework.

Throughout the meeting, Tatevik Grigoryan expressed appreciation to the panelists and steering committee members of the dynamic coalition. This dynamic coalition has been supportive and actively engaged in various initiatives related to the ROMEX framework.

In her closing remarks, Grigoryan reflected on the insightful discussion and offered speakers an opportunity for final thoughts. No audience questions were raised, and the session kept to its schedule.

Furthermore, Grigoryan highlighted the contributions and dedication of her team, specifically mentioning the work of her colleagues, Karen Landa and Camila Gonzalez. Their involvement and efforts were recognized in advancing the investigation of Internet universality.

Finally, Grigoryan expressed her interest in carrying on the tradition of taking a family photo. This indicates a sense of continuity and fosters a collaborative and unified spirit among the participants.

In conclusion, the meeting on UNESCO’s Internet Universality Romex Indicators brought together diverse participants to discuss and emphasize the principles of internet universality. The Romex IUI assessment, as a global tool, plays a crucial role in the development of the internet at the national level and supports the achievement of Sustainable Development Goals. The ongoing revision of the framework reflects the commitment to continuous improvement and learning from the implementation process. The panelists, steering committee members, and Grigoryan’s team were appreciated for their contributions and engagement. The meeting concluded on a positive note, highlighting the importance of continuity and unity among participants.

Session transcript

Moderator – Tatevik GRIGORYAN:
Hello everybody who is here in the room with us, and to those who joined online; a special thank you to all the people joining at a very inconvenient time. I know it’s 4 a.m. in Europe, and my colleagues are there online, and we also have a speaker online, Dr. Lutz Moeller, who is with us at such an early hour, so thank you so much. My name is Tatevik Grigoryan, and I work for UNESCO, for those of you who just joined us, on UNESCO’s Internet Universality Romex Indicators, and I’m really honored to be sitting next to people who were at the cornerstone of developing the indicators and then supporting their launch and progress, and who will be sharing their thoughts on the process, the progress, and further updates. I would like to start with a video message from UNESCO’s Assistant Director General for Communication and Information, who unfortunately couldn’t be here with us but sent a video message, which I would now like to request the technical team to play. Thank you.

Speaker 1:
Distinguished participants, esteemed colleagues, and honorable guests, I am delighted to extend a warm welcome to all of you at the Dynamic Coalition on Romex Indicators session, which takes place during the Internet Governance Forum 2023 in Kyoto. As we gather today, we are surrounded by passionate individuals who share a common vision, an Internet ecosystem that upholds rights, embraces openness, fosters accessibility, and evolves through the collective efforts of its stakeholders. Personally, I regret not being able to join you physically in Kyoto due to a scheduling conflict with the UNESCO Executive Board meeting in Paris, which I need to participate in. As the UNESCO Assistant Director General for Communication and Information, I had the privilege of attending the previous editions of IGF, including the last two held in Poland and Ethiopia. This platform has consistently proven invaluable for fostering meaningful discussions about the Internet’s pivotal role in our digital age. Today, our focus is on the ever-evolving landscape of Internet governance and the ongoing refinement of the Internet Universality Romex Indicators. Our gathering represents more than just a dialogue. It is a call for collective action. Five years have passed since the endorsement of the Internet Universality Indicators by UNESCO’s Intergovernmental Council of the International Program for the Development of Communication. During this time, we have witnessed the transformative power of these indicators in shaping national digital policies. Yet, the lessons learned and the challenges faced over these years underscore the need for continuous evolution and adaptation. As we mark this five-year milestone, we are actively engaged in refining the framework to ensure its continued relevance in our ever-evolving digital world. I urge each one of you to draw upon the collective wisdom of this forum. Share your insights, your strategies for success, and also the obstacles you have faced. 
I further encourage you to highlight the framework’s strengths and identify areas that need enhancement. Let’s ensure that our deliberations here translate into tangible benefits for all stakeholders of the ROAM-X framework. I thank you all for your unwavering commitment and active participation in this pivotal session at IGF 2023. Let’s work together in shaping an Internet that genuinely serves the interests of all. Thank you for your kind attention.

Moderator – Tatevik GRIGORYAN:
I thank our Assistant Director-General for Communication and Information for sending this message and for the leadership in this process. Without any delay, I would like to present our first speaker, David Souter, who is referred to as the architect of the IUI ROAM-X framework. Personally, I call the people who have been there since the cornerstone co-parents of the framework. David, I would like to request you to please talk about the process of developing the indicators and their progress, and then, as we approach this five-year mark and plan to ensure the continued relevance of the indicators, to speak about what direction we should move towards. Thank you very much.

David Souter:
I should say, firstly, I should apologize for the fact that I have to leave for another session which begins at quarter past three, so when I get up and walk out, it’s not a gesture of protest or anything like that. It’s just I need to move to something else. But I thought I’d give you a kind of origin story of the IUIs, Internet Universality Indicators. They stem from a concept of Internet universality that was devised by Guy Berger when he was working for UNESCO back in 2013 before the 10-year review of the World Summit. In fact, I remember him walking up to me at a UNESCO conference at that time and presenting me with this and saying, what do you think of this proposal for universality approach based around the four tenets or four principles of rights, openness, accessibility for all, and multi-stakeholder engagement? The idea emerged eventually from that concept when it was taken up by UNESCO formally of having an indicator framework which was modelled along the lines of one of the existing UNESCO indicator frameworks, the Media Development Indicators, on which I’d also worked in the past. So the indicator framework should be one that would include quantitative and qualitative assessment. So it wouldn’t just be about numbers. It would be one that would support national researchers to assess their national performance, but it wouldn’t be intended to compare one country against another. It would be about looking at the country itself internally. And it would aim to identify practical interventions that could improve Internet performance in relation to those principles of rights, openness, accessibility, and multi-stakeholder engagement. Principles, practical interventions, developed through dialogue amongst national stakeholders, so bringing together the diverse communities which were engaged within the Internet. 
I ended up leading the development of this indicator framework in association with APC and with my colleague, Henri van der Spee, who’s in the room at the back. The aim was always to build a large data set for analysis, and it is a very large data set presented within the indicators, for a couple of reasons. The first is that the availability of data is very variable between countries. In some countries there are really very few data sets available, and qualitative sources would be particularly important; in others there were many more. Our aim was to try and build a collage from the evidence that was available that would enable the best possible analysis within the country itself. And the second point was to include indicators which would enable the researchers to look at issues that were particularly important in their countries but might not be important in other countries. So to take up those specific themes. We went through a couple of really extensive consultation processes about what should be in these indicators, and that did tend to grow the number even more. And we also decided to round out the ROAM framework with the X category, which would bring a number of other important issues into the analysis of the national Internet environment. So this made for a lot of indicators, and we decided to offer two approaches to that. First, a comprehensive set, which is in this rather thick book here. And secondly, a smaller core set of indicators which would be more manageable, particularly in countries with relatively limited resources, in the hope that that would encourage more diverse research.
In practice, and this is a disappointment to me actually, in practice almost every country has chosen to concentrate solely on the core indicators, and hasn’t really looked in the wider range for other indicators that are particularly important in their own country. I think that’s one of the issues that the review should look at, how to avoid missing the opportunity that that presents. So we put a lot of emphasis as well on the need for a multi-stakeholder approach, with a multi-stakeholder advisory board to oversee processes, but also a multi-stakeholder research team, bringing different types of expertise into a group that could look at things together, and then discuss their findings from their different perspectives. A couple of countries trialled the indicators, including Brazil, in order to validate them, and the whole scheme was then signed off by the IPDC Council in UNESCO, which gave it a kind of crucial status and authorization by UNESCO’s member states. So the outcome, as I suspect you know, is that there have been really rather a large number of implementations of these indicators. There have been a lot more implementations of them than I had expected there to be in the early stages, and in fact a lot more implementations than of the Media Development Indicators. I think that probably indicates that there was a very substantial demand for something along these lines, which would enable national research teams to work on a national assessment. But I’d also give a good deal of credit to Tatevik’s predecessor, Xianhong Hu, who was immensely enthusiastic in promoting the indicators and supporting countries over the last few years in putting them together. Having read a number of the reports, not all of them, I think I’d emphasize three or four things which seem to me to be important in making a successful research project using them.
The first is the importance of diversity within the research team and the advisory board, but I think the research team is particularly important. That is, expertise across the different areas of rights, openness, access, multi-stakeholder participation and issues such as gender and sustainable development, which are in the X category. If you bring together people with different expertise, you get more than the sum of the parts. The importance of avoiding political pressure to come to positive conclusions when those might not be justified, and avoiding the pressure that comes from vested interests. Again, it’s valuable to have diversity within the research team and the advisory board. I’d stress the need to pay as much attention to qualitative assessments as to quantitative indicators, and, as I’ve mentioned, to look at the non-core indicators to see which are particularly relevant to a country’s national context. I think I’d stress the importance of the research team discussing and analysing findings as a group rather than just reporting on their own area of expertise, and on building that discussion, that collective analysis, as the way of reporting rather than a box-ticking exercise which any indicator framework is vulnerable to. I think I’d stress the desirability of making recommendations that are practically achievable in the national context, which includes the political context. To identify those things which can move things forward in the categories that are covered by the indicators. So the practical rather than the ideal. Now, it was always intended to revise these indicators after a period of time. In fact, they’ve been used unrevised for rather longer than we’d originally expected. It’s important to bring them up to date in terms of what evidence can now be gathered and in terms of the issues on which evidence should now be gathered if we’re to have a comprehensive picture of a national internet environment. 
So I hope that this revision will be able to do that, to bring them up to date without making it too difficult within a particular country to look back at an assessment that’s already been done. So building on what is there, developing it and evolving it for future needs, retaining consistency where appropriate. I think it will be necessary to reduce the overall number and I hope it will be possible to encourage a more holistic assessment approach than has always been the case. There are media development indicators assessments that I think will be quite a good model there to look at. I would resist the temptation to omit things for the sake of omitting them. Not least because of the differences between different countries and the fact that different countries need different points of reference. But there may be better ways of doing that than dividing simply between a comprehensive and a core indicator set. And I would encourage more inclusion of non-core indicators where these are relevant. That’s I think what I’d say about the revision process, which I know is at an early stage and I’m not directly personally involved in it. It’s not my responsibility. But I am looking forward to continuing to work with these indicators and the Rome principles in the future. Thanks.

Moderator – Tatevik GRIGORYAN:
Thank you very much, David. And thank you again for your work in putting the indicators together, for continuing to support us, and for your valuable recommendations as we move forward with the revision. And we do very much hope that, as a member of the steering committee for the revision of the IUIs, you will still be very actively involved in the revision process. Thank you very much. I would be happy to provide updates on the process and on our progress in implementing the IUIs globally. But I am aware that our next speaker as well has to leave to attend other engagements. Our next speaker, online, is Dr. Lutz Möller, the Deputy Secretary-General of the German Commission for UNESCO. So, Dr. Möller, the floor is yours, please. Thank you very much. I hope you can hear me well. Very well, thank you.

Lutz Möller:
Thank you very much. Good afternoon in Japan and good morning here from Europe. I’m also in Paris at the UNESCO board, like the ADG. Excellencies, colleagues, ladies and gentlemen, I think it’s not really necessary to say that we have observed a really enormous and very rapid evolution of Internet ecosystems over the last few months. As a key example, the fundamental changes at several social media platforms that span the globe are much more than technical alterations or simple modifications of one arbitrary product. They have fundamentally altered societal discourses in countries around the globe, and have had enormous reverberations in terms of the visibility of certain political convictions over others and the ability of disinformation to spread. I, of course, speak about X.com, but could also speak about TikTok, Meta, Telegram and more nationally successful platforms such as Korea’s Naver or Vietnam’s Zalo. In Germany, the more non-profit Fediverse, with Mastodon, has had some successes over the last year. But even here, we do not at all see a shift away from the private-sector-organized social media platforms. It is really not news in this year 2023 that the way public discourse, the public conversation about the future of society and the planet, is shaped and influenced by private business interests. And this has never been more acute than in the last 12 months. As you all know very well, the challenges posed by artificial intelligence come on top, as Freedom House warned us last week in their Freedom on the Net report. More specifically, the use of artificial intelligence to hinder and interrupt public discourse, to repress and to manipulate. Therefore, we really need to strengthen internet ecosystems that are freer, more democratic, more non-profit, more in the public service. We need to strengthen and safeguard human rights, openness, justice, diversity, inclusion, participation, empowerment and well-being in these internet ecosystems.
And this is exactly, as you all basically know, where the UNESCO Internet Universality indicators, the ROAM-X IUIs, come into play. As you know, Germany has been the fifth UNESCO member state globally, and the first from the global north, to utilize this instrument to appropriately measure whether national internet policymaking and the implementation of these policies in practice really live up to this ambition of human rights, openness, accessibility, and multi-stakeholder participation. The big advantage that the ROAM-X IUIs deliver, from our perspective and our experience, is that they focus not only on one or a few indicators. They provide a more panoramic view, which also, as I have said previously, yields some brutally honest evidence. Actually, we all know that governments can easily claim that their policies and practices are human rights-based. But are they really? Are they really open? Do they really allow access to all? And are they really governed through true multi-stakeholder participation? Or is this word just used as a euphemism for industry lobbying? The application of the ROAM-X IUIs in Germany was a joint endeavor by the German Commission for UNESCO as coordinator, the German Federal Foreign Office as political and financial supporter, and the Leibniz Institute for Media Research, the Hans-Bredow Institute, as implementer. Today I will not repeat previously reported results from Germany, such as the insufficient balance we found in our country between the right to privacy and freedom of expression, or the insufficient internet access of jobless persons or the elderly. The key question of today is: what can we suggest from our experience for the upcoming revision? As I said, the huge advantage is this panoramic view which the indicators generate. We have clearly benefited from this approach.
However, my main point is that, while providing this panoramic view, we found that the number of indicators, currently 303 including 109 core indicators, is probably too high. I say this in line with what David Souter has said before about the general approach to the ROAM-X IUIs, which we perfectly understand and share. Still, we recommend a stronger focus on key areas and topics with the greatest relevance. In particular, we should note data availability. Even if an indicator is excellent in theory, it is of little use if there is no data available or if the indicator cannot possibly be operationalized appropriately. Several of the IUI indicators are not as practical as they appear in theory. I heard with great interest that David also spoke about the need to reduce the number of indicators, and I agree with him that we have to be very careful in that regard. And I also have to share with you that this is a common experience: we have also worked with several of the SDG indicators in Germany over the last couple of years and have found that some of them, too, sound fantastic in theory but are very, very difficult to operationalize. So we really recommend using this opportunity also for a general update, to make sure that the IUIs capture more modern, more up-to-date trends such as AI. On another item, we strongly recommend from our experience in Germany that member states use a multi-stakeholder advisory board. In Germany, this board has proven enormously useful, specifically when it comes to selling and communicating the results to the political stakeholders later on. And in particular, as current debates tend to weaken multi-stakeholder participation, it is more necessary than ever, not just in the application of the IUIs. In closing, we at the German Commission for UNESCO, and also the Hans-Bredow Institute, joined the Dynamic Coalition on the IUIs from the start to share our experiences and good practices.
We offer our support to other parties and other member states to enable them to apply the IUIs in their own countries. And we look forward to working together on the revision as well in the years to come, to keep them up to date with ongoing developments. I thank you very much for your attention, and thank you very much for inviting me.

Moderator – Tatevik GRIGORYAN:
Thank you very much, Dr. Möller. Thank you for your support to the IUI project, for your support to the Dynamic Coalition for the IUIs, and for encouraging more stakeholders to join. OK. Yes, David needs to leave to attend another important session. So thank you so very much, David. Again, thank you for your continued support. Let’s give him a round of applause. So yes, Dr. Möller is also leaving soon. Thank you as well, Dr. Möller. Well, we can give Dr. Möller as well a round of applause, as I didn’t mention his name with the first round. Thank you so very much. And I hope that you will continue to support the IUI project. And let’s carry on with our discussion. Actually, I know that there are people here who I talked with about the IUI ROAM-X project, and who would actually be interested to know about the project. So I will just give a very brief overview for those who are new to this initiative. I’m sure you grasped a lot from David’s input, but just to give you an idea: internet universality is the official position of UNESCO on the internet. UNESCO believes that the internet should be universal, based on these principles of rights, openness, accessibility to all, and nurtured by multi-stakeholder participation. This was at the heart of the internet universality framework, to which we then added an X, making ROAM-X, the X standing for cross-cutting issues such as gender equality, safety and security, sustainable development and the environment. The number of indicators has already been mentioned: we have a lot of them, 303 indicators, with 109 core indicators, core being those that we consider essential to implement, at least as a baseline. And then countries are free, based on their national context, to choose and implement additional indicators as well.
And so we have an eight-step process, and I would like to talk about the establishment of the multi-stakeholder advisory board, which David mentioned and whose importance Dr. Möller also highlighted. We believe in a multi-stakeholder approach to internet governance, which is also promoted by the Internet Governance Forum, so it is an essential part of this research. The group normally consists of government representatives, representatives of relevant ministries, civil society organizations, academia, the private sector, and representatives of marginalized groups. This group is a sort of oversight body which guides the research and, at the end of the research, looks at the outcomes in what we call a validation workshop, validating the results of the research and confirming that this is indeed the state of play in the country in the respective areas concerned. So this assessment framework is indeed a unique global tool, a unique tool available to measure the development of the internet at the national level. And it is not standalone: it also supports, in a way, the achievement of the Sustainable Development Goals and is in line with a number of topics now discussed at the Global Digital Compact. Currently the assessment has been launched in 40 countries, with 34 ongoing and six having published their reports. So just to give you a visual idea, because I avoided using a presentation: this is the indicators framework, which is available on our website. If you go to unesco.org, and I’ll be happy to share my contact afterwards as well, look for the internet universality indicators. And I have here a copy of how the report looks in the end, the report from Brazil, and we have Alexandre and Fabio here, who not only supported the creation of the indicators but were actually among the first to implement them, in Brazil.
So currently, six reports have been published so far: three in Africa, one in Europe (Germany), plus Thailand and Brazil. And currently the process is ongoing in 34 countries, with Kenya actually doing a second, follow-up assessment to measure the results achieved after the publication of its report. And so we have 15 in Africa, 12 in South Asia, 15 in Asia and the Pacific, five in Latin America, three in Europe and two in Arab states. And actually I’m happy to say that, out of these countries, seven are small island developing states, with five in the South Pacific. So we have had quite serious results. Dr. Möller already presented a little of the achievements in Germany. Our assessments help to inform policymakers and feed into digital strategies, laws and regulations. And we are happy to continue our progress. And so now, because we have a missing speaker, I would like to give the floor to Alexandre Barbosa of the Regional Center for Studies on the Development of the Information Society, CETIC.br, which is actually a UNESCO Category II Institute. And I won’t be telling more about you because there is so much to say. So please add whatever you would like to add. The floor is yours around the topic of today’s discussion.

Alexandre Fernandes Barbosa:
Thank you very much, Tatevik. And good afternoon, everyone. Well, it’s a pleasure to be here in this discussion because, as was already mentioned, NIC.br has been in this discussion from the very beginning, since the concept of universality. And in my opinion, this is a very important achievement because, although indicators may change over time and concepts may change (like in the past, what we considered internet users is very different from today, right?), so the definitions may change and they should be revised from time to time, principles are really important. And I think that this framework was a very important achievement that UNESCO made in terms of defining important principles, the ROAM-X that was already explained, what R-O-A-M and X mean. So I’m not going to repeat that, but the principles should not change. They should remain. So I think that we are now at a moment, five years after the framework was approved in 2018, when it’s time to make an assessment of the framework based on the need to revise, not the principles, but the indicators. And as has already been stressed by both speakers who preceded me, in terms of the number of indicators, it is indeed a huge number: more than 300 in the whole set, and 109 core indicators. But the fact is that the scope that this framework aims to measure really requires a lot of indicators. And I think that what we have realized over these years, now with more than 40 countries making this assessment, is that we have a very problematic issue of data gaps. Many countries don’t have the required data to make this assessment. But at the same time, from my point of view, following all these reports and assessments, CETIC had the chance to advise some countries, like the countries in Latin America, and also some other countries in Africa.
Even in Europe, we worked with the German team during their assessment, sharing the Brazilian experience. But having said that, I think that this framework was an opportunity for countries to really understand the need for data production. We need data, because when we don’t have data, we don’t have visibility. And if you don’t have visibility, there is no priority in the political agenda. And in this particular regard, I think that Brazil is in a position where we have, for many, many years, almost 20 years, produced data in different areas. Not only among the population and households, but enterprises, schools, health, culture, government, and many other areas. So I think that the ROAM framework gave countries the opportunity to understand that they should produce more data, because we do have a lot of missing data in this regard. Also, another very important achievement, in my opinion, is that UNESCO soon realized that we should not have an index, right? It’s not a matter of comparing countries here. We are using qualitative and quantitative types of indicators to take a picture, a general overview, of the situation of internet development in a given country. So this is a very good thing, that UNESCO soon realized that the intention was to have a panoramic view of internet development. A second very important point that I would like to highlight in this process is that not many countries have the experience of having a multistakeholder dialogue on internet development. Brazil is, again, a very good example of a successful multistakeholder model, a real multistakeholder arrangement to debate internet governance. And since one of the conditions of implementation is to establish what UNESCO has denominated the multistakeholder advisory board for the development of this assessment, many countries that had no experience of multistakeholder dialogue had to implement that. And this is a very important achievement, and we should keep it this way, right?
Well, just to mention the disappointment David expressed about having many assessments focusing only on core indicators. I agree with him that the ideal situation is to implement the whole set of indicators to give a broad perspective on internet development. But given this situation, maybe in the revision we could rethink that. CETIC has been involved with UNESCO and the expert steering committee for ROAM-X in discussing this revision. And at the end of the day, we realized that it’s not possible to make such a drastic reduction in the indicators. So we will have to face this reality and decide what to do. But I probably agree that we should stick with a larger number of indicators to have a better assessment. And last but not least, I would like to take this opportunity to mention two things related to ROAM-X. We have been discussing the application of this framework to other types of emerging technologies, such as AI. When ROAM-X was approved in 2018, we didn’t have the new phenomena of large language models, for instance, and other AI-based applications. So I think that it is completely applicable to emerging technologies, because we are talking about principles, and the principles should not change. Human rights-based, openness, accessibility, and multi-stakeholder: this could not change, and we could apply this framework there. We have other discussions going on right now, like the Global Digital Compact and other issues, where we could rely on those principles. Again, on the X dimension, in the revision we have already realized that we should fill some gaps that the original framework didn’t foresee: we had foreseen gender, the age scope, children, but we need to include cybersecurity, sustainable development, climate change, all the relevant dimensions in the X category. And last, I think that in this revision we could think of how to really encourage member states to make periodic assessments.
I’m not sure if you can do it in two years’ time or three years’ time, but having periodic measurement would be very important for policymakers, civil society, and the technical community to have a better idea of the progress a given country has made in applying this framework. So those are my initial reactions. I think that UNESCO plays a very important role in promoting and disseminating the ROAM-X strategy and framework, which goes beyond internet development to areas like AI, as I have said already. So those are my initial comments. Thank you very much.

Moderator – Tatevik GRIGORYAN:
Thank you very much, Alexandre. Thank you for your valuable inputs and thoughts. And also, thank you very much for pointing out that this is not meant for ranking, which I normally highlight in my presentation. So this is a voluntary assessment. I always highlight voluntary in the sense that the country, the national stakeholders, decide themselves on doing the assessment, and then UNESCO is there to provide technical guidance and support, and there is no ranking or comparison whatsoever. And of course, for some countries the problems are similar, and it’s very important to create this environment, to share practices and learn from each other’s experiences in moving forward with their national agendas, for which, in a way, this dynamic coalition serves as a platform for sharing ideas, lessons learned, experiences and best practices. So on this note, I would like to give the floor to Anja Gengo, who is from the IGF Secretariat and who has been with the dynamic coalition longer than me, actually all of you have, and who has seen its development. I would like to invite Anja to speak about the role of the dynamic coalition, its progress, how we could improve it, and any other inputs and thoughts you have around

Anja Gengo:
the topic of our discussion, please. Thank you very much, Tatevik, and thank you to UNESCO for, of course, organizing this session, but even more for working continuously throughout the year, through the IGF platform and the dynamic coalition, in a very open, transparent manner with stakeholders from around the world, not just to promote the indicators, but really to understand the value of the indicators and precisely what we are discussing today: whether they’re relevant, whether they’re useful to people around the world, whether we need them and, if yes, how we use them, whether we have access, and especially whether we have enough resources and capacity to use them meaningfully. Maybe I can start indeed with the dynamic coalition and the role of that platform, and then I would like to say a few words about the relevance of the indicators for our present and, of course, for the future. In terms of the dynamic coalition, we at the IGF Secretariat witnessed when this idea was born that a dynamic coalition could be organized, just because it was seen as a way to engage stakeholders from around the world in warm, friendly, meaningful discussions on the way the indicators could be used. I think it was formed after the indicators were adopted in 2018, and that was the whole idea: to follow the pace of the implementation and to understand if there are gaps, and where the gaps are. The dynamic coalition has been an incredible success in a very short time frame, in terms of the number of stakeholders it has managed to bring together, but also in terms of the quality of the inputs that the stakeholders are bringing, not just to this dynamic coalition but to the whole IGF as such. And I think for us it was a real lesson learned that these dynamic coalitions, which are very independent, are also organic and have their own autonomy in terms of how they manage the process.
It was a lesson learned that when you have a strong institution that stands behind a people-centered, people-led process, it really can work and, in a very short time as I said, achieve incredible results. Long term, we from the Secretariat would certainly advise continuing the way things have been done so far: embracing the community and the stakeholders, doing outreach in different forums, and especially engaging those that unfortunately are still not meaningfully engaged in the overall global internet governance processes. We, through the IGF, have quite a good overview of the stakeholders, the types of people and profiles that unfortunately are being left behind, and I think it’s important that we alert the community to really work in a methodological way to engage those stakeholders. I’ll be very brief on this, I certainly won’t divert attention to the IGF’s inclusion processes, but I do think it’s important to say that there are, first of all, profiles coming from certain countries that are not present in global processes such as the IGF, but also in other processes. I mean, at this forum you have, for example, colleagues coming from ICANN, colleagues coming from UNESCO doing wonderful things, and unfortunately stakeholders from certain countries are missing. So this is something that the Secretariat is very much focusing on, to hopefully remedy, and I’m very glad, for example, to say that there are countries from which we didn’t hear during the first 10 years of the IGF that are now very active in the IGF ecosystem, not just at the individual level but organizationally speaking. You have the Maldives, which has a wonderful national IGF and organized multi-stakeholder participation at this year’s IGF, and that’s a really concrete and tangible difference that’s been made through outreach done on different platforms.
So this is something that I think the Dynamic Coalition could also do: engage those that are not engaged so far. I think we've recognized in the past couple of years that we've really evolved from a multi-stakeholder model toward a multidisciplinary model, which means that we have to look at each stakeholder group's participation in a very nuanced way and understand that these discussions, these dialogues, potentially leading to decisions, really concern us all, given the fact that we're all using our smartphones and our computers, meaning we're all present in the online world, and hopefully in the years to come this Dynamic Coalition will also see more disciplines represented in its core organizational group. In terms of the validity, I completely agree, and I think it can't be underlined enough, with everything that my colleagues said previously with respect to the values. I think we're very much aligned in that we strive for the highest values that humanity can strive for in the online world, as we do in our, let's call it, offline world. But if you look at these analog domains: the highest international legal mechanisms guarantee the right to life, for example, but then you still have some jurisdictions that recognize the death sentence as a sanction, while others do not. So there are fundamental differences in how we approach implementing the values that we agree on, and the digital world is in that sense no different from the analog domain. There are jurisdictions where saying something on social media is first of all interpreted as exercising your freedom of expression, while in other jurisdictions a tweet or social media post can potentially lead to imprisonment or a fine. Those are the differences. I think we have to be aware of them, and we need to make sure that the implementation of the values that we believe in is in the right hands. 
Two years ago in Poland we had a session on this same subject. We were assessing how the assessment is going, and I recall, when I was sitting next to my colleague Kossi from the Benin IGF, who coordinates the IGF in Benin, we spoke about the implementation being done through a multi-stakeholder lens, so that all stakeholders in the country have the opportunity to be consulted and to have a say when the ecosystem is being assessed, and I do think that's still very much relevant two years later. That being said, the values are relevant and it's excellent to see that the number of national assessments is growing, but I do think that now, compared to the period during and after the pandemic, we may be in a phase where the assessment itself needs to be assessed, because the COVID pandemic really changed our landscape. I'm sure I don't need to recite the facts, but if you look just at the legislative field, it's more than palpable, more than visible, that that field is changing dramatically. Many institutions and initiatives are now emerging that measure, for example, the number of laws regulating artificial intelligence, given that it is on the rise, and some of them indicate that before the pandemic only one or two national jurisdictions had a law in place addressing artificial intelligence. After the pandemic, so last year and this year, we are facing an incredible proliferation of national laws, and there is a concern in the community, you can hear it across narratives at this year's forum, that this may lead to fragmentation and that we need to be very careful not to regulate in a way that may jeopardize the global nature of the Internet that we are all firmly standing for and advocating: one Internet that is accessible, affordable, safe, secure, resilient, sustainable and unfragmented. 
So those are the changes that I think we have to be aware of, and I hope that the assessments done in the early years can also be looked at again to ensure that they remain relevant, and that we work, of course, on outreach, to ensure that this valuable set of indicators is brought to the attention of those that are probably still not aware it exists. Thank you, Tatevik.

Moderator – Tatevik GRIGORYAN:
Thank you very much, Anja, for your excellent points and excellent cooperation; the points will definitely be taken on board as we move forward. Just picking up on your point about reassessing the assessments: Kenya, for example, is completing what we call a follow-up assessment, which is basically reassessing the assessment. Grace is not here today, but Simon has read the assessment; I don't know if he would like to share anything on that. So we are thinking about this follow-up phase; one of the next steps is the monitoring process, which aims to track the progress of the country and could then reassess what has been done, the validity, and the progress made by the country. So this is an excellent point as well, in addition to others. Thank you very much. As I mentioned, I would now like to give the floor to Simon, who is currently acting as a technical advisor for the IUI ROAM-X project, looking into the reports that we receive and also, of course, providing training and support to the multi-stakeholder advisory board and to the researchers. More recently he has also been closely involved in the assessments of the project in the South Pacific islands: Tuvalu, Solomon Islands, Vanuatu, Tonga and Fiji. Please, Simon, the floor is yours. Thank you, Tatevik. So I mean, I think to

Simon Ellis:
start with, the IUI is really a unique, holistic system for taking an overall picture of the Internet in countries, and really I haven't much to say because everybody else has said it already and I completely agree with what's come through, but I'll take three or four points and a couple of examples. It is a national assessment, not an international assessment: it's about what happens in the country, and in that sense it doesn't have to produce a single definitive answer. Through the map, through the analyses, there can be different viewpoints, and those different viewpoints can be incorporated. The indicators in the IUI are in the form of a question, so countries are effectively encouraged to answer that question, and sometimes the answer may not be a yes or a no but something in between. That leads to something that I think people have mentioned but is worth bringing out again, and I think Anja just mentioned it: yes, one of the major aspects of this is rights and legislation, but then what the IUI does systematically throughout is ask, and is that implemented? How does that work out in practice? So, for example, on many of the points about whether certain laws, in data protection for example, are in place, the question then is whether there is something, for example from case law or civil society analyses, which suggests whether that is followed up on, whether it works and how it works in practice. So again, what you are producing is not a simple answer but a full analysis of the question, and I think that leads again into this sense of follow-up. 
So for each report you then naturally arrive at recommendations, and as we now get to 40 or more countries we really have to ask what is happening as a result. As Tatevik has just suggested, Grace Gitaiga has conducted the first follow-up assessment, for Kenya, but as it is the first one we still have to establish the best way to follow up on the ground, to see whether recommendations are taken forward, but also how frequently there should be IUI assessments and what the nature of the reporting should be, because you don't want to recopy 300 indicators and say nothing's changed. So that whole sense of follow-up is extremely important and is one of the big questions here. The second big question, which everybody has tackled, is new themes, and the three themes currently emerging are AI, environment and sustainability, and cybersecurity. AI is very much new in the IUI environment, but there are already some indicators on environment and cybersecurity in the X category of the IUI. To take environment, the one question I am particularly keen on is e-waste, which is a particular problem in the countries I've worked in, in Asia and the Pacific. In Southeast Asia, for example, countries are sometimes dumping grounds for e-waste from the OECD, from Europe and North America, and often that e-waste is then processed in not very good working conditions, let's say, so this whole issue brings up all sorts of environmental concerns. In the Pacific, we are now working in some countries where, literally, the land is about as high as this table, so waste cannot be put in the ground; it has to be disposed of in some other way, and again this leads to whole issues about recycling, contamination, and what you do with waste and where you put it. 
To take another example from the Pacific, one which shows to some degree how the IUI can adjust, but also how we need to maintain sensitivity to national circumstances: for the Pacific, connectivity is about satellites. In one country, islands can be thousands of miles apart; there's no way you can run cables between them, and no way you can put up masts between them, so satellites are the only option for those countries if they are to have full connectivity, not just to the world and the Internet as a whole, but even within the same country. I think that also emphasizes, to come back to a point that David made originally, the sense of core and non-core. Certain things are core and apply to every country, but certain elements, such as waste and satellites for the Pacific, are core to the Pacific but perhaps less core in other countries, and we need to keep that flexibility. We need to ensure that the IUI allows a national, holistic view for a whole range of different types of country, from small islands right up to huge countries like Brazil, which in itself, I always used to say, is a test: if something works in Brazil, it'll work anywhere, because there are so many different environments

Moderator – Tatevik GRIGORYAN:
in Brazil. Thank you very much Simon and thank you for this contribution and for your work. Now I think we heard from all the speakers I wanted to ask if anybody online from the participants or participants here in the room have any questions or points to be made. Yes Fabio please.

Audience:
Hello, thank you. I would like to hear from the panelists about the multi-stakeholder point. I think one interesting thing about the indicators is that not only is the process multi-stakeholder, because you have to collect the indicators through a multi-stakeholder process, but multi-stakeholderism is also a dimension of the indicators themselves, so there is a list of indicators covering this. Do you think this is something that is also changing nowadays? Are there new indicators in the field of multi-stakeholderism that we didn't have five years ago? How do you assess this part of the discussion? Thank you. Thank you, Fabio. Who

Simon Ellis:
would like to take the question? Thanks, Fabio. I think I'm not going to answer it completely directly, but as I said in a previous session on, I think, the multi-stakeholder dimension, David referred to the sense of ticking boxes, and I think it's important to look at what multi-stakeholder really means. It's not just that somebody turned up to a meeting; it's that they're actively engaged, and I'm not sure how we capture that, so maybe this is another question to, as it were, put out there. But really, to have a sense of how we can engage: for example, in the reports on e-government, a lot of countries have e-government systems, and you see them put things out to consultation, and civil society representatives have said they sent things in, but then they said, we don't know whether anything was ever taken into account. So I think that sense of what real participation looks like, and how you would capture it, is a question here. For new sectors, new stakeholders, I don't think I see anything

Alexandre Fernandes Barbosa:
immediately that's changed. Yes, just to complement, this is a very good question, because this is the only dimension, or set of indicators, that represents both the principle of multistakeholderism and, through the indicators, captures how a given country is really implementing, supporting or fostering multistakeholder dialogue. In terms of new actors, when you consider government, the technical community, civil society and the private sector, I don't see any; I think this doesn't change. But maybe there is one thing that doesn't exist in the set of indicators: how can we measure the outcomes of this multistakeholder dialogue? Referring to my country again, Brazil, we have the Brazilian Internet Steering Committee as a multistakeholder body; we are going to complete 30 years of this model in two years' time, because it was created in 1995, and I think we can list a large number of important outcomes that have driven laws and regulations, like the Brazilian GDPR, the data protection law, like the law on access to information, and, not least, the very important legislation which is the Internet Bill of Rights, called the Marco Civil, which was one hundred percent based on the ten principles discussed over many years within this multistakeholder structure. So one new indicator that I would think of is the outcome: how to measure the outcome of this multistakeholder dialogue. But differently from the three other dimensions, or principles, of the ROAM-X, here I don't see much

Anja Gengo:
change. Thank you very much, Fabio. I completely agree with my colleagues; I don't see, in theory, that we need to change anything on paper, but at the IGF Secretariat, and within the IGF, we do see gaps, and that's what I was saying in my introductory remarks: there are stakeholders that are simply not participating within certain stakeholder groups. The judge who spoke during the opening ceremony illustrated that well; I don't know if you heard when he said that he had issues at the registration area, because he said, I come from the High Court of Tanzania, I'm a judge, and some colleagues had difficulty placing him under a certain stakeholder group. It was a very nice way to illustrate that there are subgroups, I would say, within our traditional stakeholder groups, that are missing from active participation in our dialogues and processes. We recognized that a couple of years ago with legislators, with parliamentarians, and that's what prompted the parliamentary track at the IGF that's been going on since 2019, but I do think there's much more to do. For example, look at the health industry: we speak a lot about privacy there, but you don't really speak with medical professionals at the IGF; you speak with people coming from other backgrounds, who in this domain are mostly patients. So this is something that I think we need to work on, to engage them more; we need to raise awareness, and that's probably the reason why we don't have them here. The car industry as well: there are a lot of issues with privacy and, obviously, data protection, and yet they are not here. In Katowice we heard a little from Volkswagen, but here today we don't really have active participation from the highest management in these domains. These are just some examples that I think it's important to work on, but we do have them on paper. 
I think the authors of the indicators recognized that well; the matter is just to raise awareness in practice and get them engaged. Thank you very much for the question and for

Alexandre Fernandes Barbosa:
the answers. Just to complement, what you said is very interesting, and I would say that the X dimension of the ROAM-X could accommodate other important dimensions; the ethical dimension, for example, could be one set of indicators within the X. But for the other ones, we don't have much to change, I guess. Thank

Moderator – Tatevik GRIGORYAN:
you very much. As I was saying, thank you, Fabio, for the question, and thank you for the answers, which I believe will help us in the revision process. I wanted to ask the audience again whether there are any reactions to what has been said, or any questions, including from the audience online. I don't see any questions online, so we're keeping to time; we're doing very well. We have our director joining us online. Before giving her the floor for the official closing remarks: I don't know, Marielza, if you had any contribution to what has been said, or should we expect the official closing remarks from you? Thank you, Tatevik. I think I will weave them into the closing remarks, so as not to make it two separate things. Okay, thank you very much. Then I would just like to give a final floor to the speakers: if you have any reflections as we move forward with the revision, any final thoughts you would like to share. I would like to remind the audience, especially those who joined a bit later, that we have been discussing the Dynamic Coalition as a platform to cooperate and to share best practices and lessons learned for the implementation and promotion of UNESCO's Internet Universality ROAM-X indicators, which is ongoing in 40 countries; as we've reached the five-year mark, we are currently in the process of updating the framework to make sure that we incorporate new topics and input from lessons learned from the implementation of the IUI framework. So I'll give the floor to Anja first,

Anja Gengo:
please, Anja. Thank you very much, Tatevik. Just to thank you and UNESCO, first of all, for using the IGF as a platform to promote these good values and bring them closer to people from around the world. At the IGF Secretariat, and I'm pretty sure I can speak also for other structures of the IGF as a project, we certainly welcome the continuation of our cooperation in the long term, as one UN family, working as much as possible with people from around the world to ensure that these values are really implemented in practice, for the Internet that we all want.

Simon Ellis:
Thank you. Simon, please. I don't think, again, I have anything much further to say. I'm still thinking about new actors: one thing I've seen in a few maps recently is the police being involved, which is quite interesting, and I think there is something there about police and justice. There is an important indicator about training for judges and lawyers, which I think is quite key in all of this. I think this is a really good assessment, it is producing very big results, and I look forward to a new version, in relation perhaps to the Global Digital Compact,

Alexandre Fernandes Barbosa:
in the beginning of next year. Thank you so much for giving me the opportunity to be here. In my particular case, being part of the UNESCO family, I have to say that it is a real pleasure for me and for my team to work with UNESCO and to help foster this dialogue that is so important. I think we have to celebrate that in such a short period of time you have a large number of countries making the assessment, and the dialogue is alive. I hope that in the coming years we can make new assessments and increase the number of countries that join this framework on a voluntary basis, as you mentioned. I think that UNESCO plays an important role in building capacity and raising awareness among member states of the importance of having data to make this assessment. This is a very important issue: we have a huge data gap, mainly in countries from the Global South, so UNESCO plays a very important role, and I really have to congratulate you on your leadership in this project. Thank you very much. Thank you very much,

Moderator – Tatevik GRIGORYAN:
Alexandre. Thank you to all of you, and thank you, Cetic.br, indeed for the excellent cooperation that we've been enjoying and the serious work, especially now when it comes to the revision. I'm now happy to give the floor to Marielza Oliveira, who is the Director for Digital Policies and Transformation at UNESCO. Please, Marielza, the floor is yours.

Marielza Couto e Silva de Oliveira:
Thank you, and hello everyone. Konnichiwa. I'm really sorry that I could not join before, but I had other sessions; I've been in sessions since 2 a.m. Paris time, and in many of those I was actually talking about the ROAM-X as well, advocating for it and including it in the topics. But for me, this session, which is focused specifically on the revision of the ROAM-X framework, is the most special one. We have been working together as a Dynamic Coalition to advance Internet universality for the past five years, and over those years we have accomplished quite a lot. If you think about it, 25% of the countries of the world have actively adopted the ROAM-X framework, and it has embedded itself in global and regional discourse about the Internet, at the highest levels, in no small measure due to the work of the Internet Universality Dynamic Coalition and the way you have worked: as a shared space to exchange expertise and experiences, and as a peer-to-peer support mechanism for each other. This is a very generous attitude of all of you, and with the richness of your collective experiences you are the right people to contribute to and guide the fifth-year revision of the Internet Universality Indicators. This was envisioned from the very beginning: we always knew the Internet to be a fast-changing environment, so we always considered that these indicators would have to be revised at some point. Nevertheless, the revision comes at a very timely moment, in which we see digital governance undergoing a major overhaul, with, for example, the upcoming Global Digital Compact, the WSIS+20 review and others. 
We also see generative AI changing the technological landscape of the Internet itself, and we have found that the Internet can also be harmful, something we did not realize as much before: the harms that can be done when it serves as a conduit for disinformation, hate speech and other harmful content, particularly at the scale at which it operates. The environment has changed so much that the indicators need to change too. In this session we have looked at this scenario and asked ourselves what we must change about the Internet universality framework, and I'm sure that you have covered important elements; I heard some of those at the end, and I would like to mention that they include, for example, a tighter specification of which are the core indicators. This is one of the things that I consider particularly relevant: we need to really tighten up the core indicators to make the process easier, including for measurement as well as for follow-up. 
There is also the potential inclusion of new dimensions, both in terms of content, such as what Simon was referring to, environment, e-waste and cybersecurity, but also others that have come up through different consultation mechanisms: child data protection, mental health and, of course, AI and the toxicity levels of the Internet itself, of the social media environment; these are some of the elements we need to consider. But there are also changes to the assessment process itself, for example accounting for the research obstacles that many of the national teams have encountered, including the lack of data for many indicators, particularly disaggregated data, which then doesn't allow us to see the X dimension so clearly, as well as when and how to conduct follow-ups to monitor progress in implementing the recommendations, which I find an essential mechanism. I think it is really important that we document this process, because we are about to have a new Global Digital Compact, and we will have principles and commitments at that level as well; the process of monitoring adherence to principles is one of the most important things that is actually going to be happening, and the example that the ROAM-X framework offers is extraordinary. So my key contribution today is to remind us that we need to document this process, this trajectory, to show the Global Digital Compact process what it could look like and how they could actually take care of implementing it. With that, I would like to extend heartfelt appreciation to all the panelists, who are actually good friends, who have joined us today to share these insights; your participation always enriches our understanding of the path we must take to achieve this shared objective of updating the framework. I would like to express my special gratitude to our partners at Cetic.br: you 
know, Alexandre and Fabio have been really supportive and collaborative in this process, taking up quite a lot of the work, but also Simon, who is supporting and leading this, and our esteemed steering committee members, for their support in advancing this review. Just as your constructive suggestions and advice have enabled UNESCO to facilitate the implementation of the ROAM-X national assessments over the last year, we are now able to successfully adapt this framework with your help. For that reason I really encourage all of you to remain actively engaged in the revision process and to continue sharing your inputs with us; Tatevik has certainly given you a mechanism to reach out if you have contributions to make. We have always counted on our Dynamic Coalition, but this year we count on you more than ever: you are the ones who have the on-the-ground understanding of national needs, of the difficulties your own research typically faces, of the themes you wish to know more about, and so on, so your guidance is absolutely indispensable. Let me also invite all IGF stakeholders to join the Internet Universality Indicators Dynamic Coalition and to help us continue advancing this work of advocating for a human-centered Internet. So thank you all very, very much for your support, and I hope to see you in person again soon.

Moderator – Tatevik GRIGORYAN:
Thank you very much, Marielza, thank you for these rich remarks and points, which we will also take up during our closed-door steering committee meeting tomorrow. From my end as well, I would like to extend a heartfelt thank you to each panelist and each member of the steering committee of the Dynamic Coalition who has been supporting us throughout the years, including those who are not here today but who remain actively engaged through different initiatives around ROAM-X. Thank you so very much. I would also like to thank my colleagues on the ROAM-X team, especially Karen Landa and Camila Gonzalez, who are online with us now, and I would like to thank the participants who have been here; we are happy to hear from you after the session, we will be around. And I would like to continue the tradition that my colleagues have established of taking a family photo, so I'd like to ask the online participants to turn on their cameras, and colleagues as well. Thank you.

Alexandre Fernandes Barbosa

Speech speed

128 words per minute

Speech length

1786 words

Speech time

840 secs

Anja Gengo

Speech speed

173 words per minute

Speech length

2017 words

Speech time

699 secs

Audience

Speech speed

129 words per minute

Speech length

123 words

Speech time

57 secs

David Souter

Speech speed

169 words per minute

Speech length

1597 words

Speech time

566 secs

Lutz Möller

Speech speed

177 words per minute

Speech length

1145 words

Speech time

389 secs

Marielza Couto e Silva de Oliveira

Speech speed

162 words per minute

Speech length

1134 words

Speech time

420 secs

Moderator – Tatevik GRIGORYAN

Speech speed

136 words per minute

Speech length

2631 words

Speech time

1164 secs

Simon Ellis

Speech speed

147 words per minute

Speech length

1230 words

Speech time

502 secs

Speaker 1

Speech speed

133 words per minute

Speech length

420 words

Speech time

190 secs

A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Tara Denham

Canada is leading the way in taking AI governance seriously by integrating digital policy with human rights. The Director General of the Office of Human Rights, Freedoms, and Inclusion at Global Affairs Canada is actively working on the geopolitics of artificial intelligence, ensuring that AI development and governance uphold human rights principles.

The Canadian government is actively involved in developing regulation, policy, and guiding principles for AI. They have implemented a directive on how government will handle automated decision making, including an algorithmic impact assessment tool. To ensure responsible development and management of AI, the government has published a voluntary Code of Conduct and is working on AI and Data Act legislation. Additionally, the government requires engagement with stakeholders before deploying generative AI, demonstrating their commitment to responsible AI implementation.

Stakeholder engagement is considered essential in AI policy making, and Canada has taken deliberate steps to involve stakeholders from the start. They have established a national table that brings together representatives from the private sector, civil society organizations, federal, provincial, and territorial governments, as well as Indigenous communities to consult on AI policies. This inclusive approach recognizes the importance of diverse opinions and aims to develop policies that are representative of various perspectives. However, it is acknowledged that stakeholder engagement can be time-consuming and may lead to tensions due to differing views.

Canada recognizes the significance of leveraging existing international structures for global AI governance. They have used the Freedom Online Coalition to shape their negotiating positions on the UNESCO Recommendation on the Ethics of Artificial Intelligence. Additionally, they are actively participating in Council of Europe negotiations on AI and human rights. However, it is noted that more countries and stakeholder groups should be encouraged to participate in these international negotiations to ensure a comprehensive and inclusive global governance framework for AI.

There is also a need for global analysis on what approaches to AI governance are working and not working. This analysis aims to build global capacity and better understand the risks and impacts of AI in different communities and countries. Advocates emphasize the importance of leveraging existing research on AI capacity building and research, supported by organizations like the International Development Research Centre (IDRC).

Furthermore, there is a strong call for increased support for research into AI and its impacts. IDRC in Canada plays a pivotal role in funding and supporting AI capacity-building initiatives and research. This support is crucial in advancing our understanding of AI’s potential and ensuring responsible and beneficial implementation.

In conclusion, Canada is taking significant steps towards effective AI governance by integrating digital policy with human rights, developing regulations and policies, and engaging stakeholders in decision-making processes. By leveraging existing international structures and conducting global analysis, Canada aims to contribute to a comprehensive and inclusive global AI governance framework. Additionally, their support for research and capacity-building initiatives highlights their commitment to responsible AI development.

Marlena Wisniak

The analysis highlights several important points regarding AI governance. One of the main points is the need for mandatory human rights due diligence and impact assessments in AI governance. The analysis suggests that implementing these measures globally presents an opportunity to ensure that AI development and deployment do not infringe upon human rights. This approach is informed by the UN Guiding Principles on Business and Human Rights, which provide a framework for businesses to respect human rights throughout their operations. By incorporating human rights impact assessments into AI governance, potential adverse consequences on human rights can be identified and addressed proactively.

Another key point raised in the analysis is the importance of stakeholder engagement in AI governance. Stakeholder engagement is viewed as a collaborative process in which diverse stakeholders, including civil society organizations and affected communities, can meaningfully contribute to decision-making processes. The inclusion of external stakeholders is seen as crucial to ensure that AI governance reflects the concerns and perspectives of those who may be affected by AI systems. By involving a range of stakeholders, AI governance can be more comprehensive, responsive, and representative.

Transparency is regarded as a prerequisite for AI accountability. The analysis argues that AI governance should mandate that AI developers and deployers provide transparent reporting on various aspects, such as datasets, performance metrics, human review processes, and access to remedy. This transparency is seen as essential to enable meaningful scrutiny and assessment of AI systems, ensuring that they function in a responsible and accountable manner.
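The reporting elements the analysis lists — datasets, performance metrics, human review processes, and access to remedy — can be pictured as a structured disclosure. A minimal sketch follows; the field names are hypothetical, loosely echoing model-card-style documentation rather than any mandated schema.

```python
# A minimal, illustrative structure for the transparency reporting the
# analysis calls for. Field names are hypothetical; real disclosure
# schemes (e.g. model cards) define their own schemas.
from dataclasses import dataclass, field

@dataclass
class TransparencyReport:
    system_name: str
    training_datasets: list[str]           # provenance of training data
    performance_metrics: dict[str, float]  # e.g. accuracy per demographic group
    human_review_process: str              # when and how humans review decisions
    remedy_channels: list[str] = field(default_factory=list)  # appeal routes

    def is_complete(self) -> bool:
        """Meaningful scrutiny requires every mandated element to be disclosed."""
        return bool(self.training_datasets and self.performance_metrics
                    and self.human_review_process and self.remedy_channels)
```

The point of the `is_complete` check is the analysis's own argument: a report that omits any one element — remedy channels, say — does not support meaningful accountability.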

Access to remedy is also highlighted as a crucial aspect of AI governance. This includes the provision of internal grievance mechanisms within tech companies and AI developers, as well as state-level and judicial mechanisms. The analysis argues that access to remedy is fundamental for individuals who may experience harm or violations of their rights due to AI systems. By ensuring avenues for redress, AI governance can provide recourse for those affected and hold accountable those responsible for any harm caused.

The analysis also cautions against over-broad exemptions for national security or counter-terrorism purposes in AI governance. It argues that such exemptions, if not carefully crafted, have the potential to restrict civil liberties. To mitigate this risk, any exemptions should have a narrow scope, include sunset clauses, and prioritize proportionality to ensure that they do not unduly infringe upon individuals’ rights or freedoms.

Furthermore, the analysis uncovers a potential shortcoming in AI governance efforts. It suggests that while finance, business, and national security are often prioritized, human rights are not given sufficient consideration. The analysis calls for a greater focus on human rights within AI governance initiatives, ensuring that AI systems are developed and deployed in a manner that respects and upholds human rights.

The analysis also supports the ban of AI systems that are fundamentally incompatible with human rights, such as biometric surveillance in public spaces. This viewpoint is based on concerns about mass surveillance and discriminatory targeted surveillance enabled by facial recognition and remote biometric recognition technologies. Banning such technologies is seen as necessary to safeguard privacy and freedom and to prevent potential violations of human rights.

In addition to these key points, the analysis reveals a couple of noteworthy observations. One observation is the importance of multistakeholder participation and the need to engage stakeholders in the process of policymaking. This is seen as a means to balance power dynamics and address the potential imbalance between stakeholders, particularly as companies often possess financial advantages and greater access to policymakers. The analysis highlights the need for greater representation and involvement of human rights advocates in AI governance processes.

Another observation relates to the capacity and resources of civil society, especially in marginalized groups and global majority-based organizations. The analysis urges international organizations and policymakers to consider the challenges faced by civil society in terms of capacity building, resources, and finance. It emphasizes the need for more equitable and inclusive participation of all stakeholders to ensure that AI governance processes are not dominated by powerful actors or leave marginalized groups behind.

Finally, the analysis suggests that laws in countries like Canada can have a significant influence on global regulations, especially in countries with repressive regimes or authoritarian practices. This observation draws attention to the concept of the “Brussels effect,” wherein EU regulations become influential worldwide. It highlights the potential for countries with stronger regulatory frameworks to shape AI governance practices globally, emphasizing the importance of considering the implications and potential impacts of regulations beyond national borders.

In conclusion, the analysis underscores the importance of incorporating mandatory human rights due diligence, stakeholder engagement, transparency, access to remedy, and careful consideration of exemptions in AI governance. It calls for greater attention to human rights within AI governance efforts, the banning of AI systems incompatible with human rights, and the inclusion of diverse perspectives and voices in decision-making processes. The analysis also draws attention to the challenges faced by civil society and the potential influence of one country’s laws on global regulations. Overall, it provides valuable insights for the development of effective and responsible AI governance frameworks.

Speaker

Latin America faces challenges in meaningful participation in shaping responsible AI governance. These challenges are influenced by the region’s history of authoritarianism, which has left its democracies weak. Moreover, there is a general mistrust towards participation, further hindering Latin America’s engagement in AI governance.

One of the main obstacles is the tech industry’s aggressive push for AI deployment. While there is great enthusiasm for AI technology, there is a lack of comprehensive understanding of its limitations, myths, and potential risks. Additionally, the overwhelming number of proposals and AI guidance make it difficult for Latin America to keep up and actively contribute to the development of responsible AI governance.

Despite these challenges, Latin America plays a crucial role in the global chain of AI technological developments. The region is a supplier of vital minerals like lithium, which are essential for manufacturing AI systems. However, the mining processes involved in extracting these minerals often have negative environmental impacts, including pollution and habitat destruction. This has led to mixed sentiments regarding Latin America’s involvement in AI development.

Latin America also provides significant resources, data, and labor for AI development. The region supplies the raw materials needed for hardware manufacturing and offers diverse datasets collected from various sources for training AI models. Additionally, Latin America’s workforce contributes to tasks such as data labeling for machine learning purposes. However, these contributions come at a cost, with negative impacts including environmental consequences and labor exploitation.

It is crucial for AI governance to prioritize the impacts of AI development on human rights. Extracting material resources for AI development has wide-ranging effects, including environmental degradation and loss of biodiversity. Moreover, the health and working conditions of miners are often disregarded, and there is a lack of attention to data protection and privacy rights. Incorporating human rights perspectives into AI governance is necessary.

Another concerning issue is the use of AI for surveillance purposes and welfare decisions by governments, without adequate transparency and participation standards. The deployment of these technologies without transparency raises concerns about citizen rights and privacy.

To address these challenges, it is necessary to strengthen democratic institutions and reduce asymmetries among regions. While Latin America provides resources and labor for AI systems designed elsewhere, AI governance processes often remain distant from the region. To ensure an inclusive and fair AI governance process, reducing regional disparities, strengthening democratic institutions, and promoting transparency and participation are essential.

In conclusion, Latin America faces obstacles in meaningful participation in shaping responsible AI governance due to the aggressive push for AI deployment and its history of authoritarianism. However, the region plays a crucial role in the global AI technological chain by providing resources, data, and labor. It is important to consider the impacts of AI development on human rights and promote transparency and participation in AI governance. Strengthening democratic institutions and addressing regional asymmetries are necessary for a more inclusive and equitable AI governance process.

Ian Barber

The analysis conducted on AI governance, human rights, and global implications reveals several key insights. The first point highlighted is the significant role that the international human rights framework can play in ensuring responsible AI governance. Human rights are deeply rooted in various sources, including conventions and customary international law. Given that AI is now able to influence many aspects of life, from job prospects to legal verdicts, it becomes essential to leverage the international human rights framework to establish guidelines and safeguards for AI governance.

Another important aspect is the ongoing efforts at various international platforms to develop binding treaties and recommendations on AI ethics. The Council of Europe, the European Union, and UNESCO are actively involved in this process. For instance, the Council of Europe is working towards the development of a binding treaty on AI, while the European Union has initiated the EU AI Act, and UNESCO has put forth recommendations on the ethics of AI. These efforts are crucial to prevent the exacerbation of inequality and the marginalization of vulnerable groups.

Stakeholder engagement is identified as a vital component of responsible AI governance. The path towards effective governance cannot be traversed alone, and it is crucial to ensure meaningful engagement from relevant stakeholders. These stakeholders include voices from civil society, private companies, and international organizations. Their input, perspectives, and expertise can contribute to the development of comprehensive AI governance policies that consider the diverse needs and concerns of different stakeholders.

One noteworthy observation made during the analysis is the importance of amplifying the voices of the global majority. Historically, many regions across the world have been left out of global dialogues and efforts at global governance. It is crucial to address this imbalance and include voices from diverse backgrounds and regions in discussions on AI governance. A workshop has been suggested as a call to action to begin the ongoing collective effort in addressing the complexities brought about by AI.

The analysis also emphasizes the need to consider regional perspectives and involvement in global AI development. Regions’ developments are essential factors to be taken into account when formulating AI policies and strategies. This ensures that the implications and impact of AI are effectively addressed on a regional level.

Furthermore, the analysis highlights the significance of African voices in the field of responsible AI governance and the promotion of human rights. Advocating for strategies or policies on emerging technologies specifically tailored for African countries can contribute to better outcomes and equitable development in the region.

Another noteworthy point is the need to bridge the gaps in discourse between human rights and AI governance. The analysis identifies gaps in understanding how human rights principles can be effectively integrated into AI governance practices. Addressing these gaps is essential to ensure that AI development and deployment are in line with human rights standards and principles.

In conclusion, the analysis underscores several important considerations for AI governance. Leveraging the international human rights framework, developing binding treaties and recommendations on ethics, fostering stakeholder engagement, considering global majority voices, including regional perspectives, and amplifying African voices are all critical aspects of responsible AI governance. Additionally, efforts should be made to bridge the gaps in discourse between human rights and AI governance. By integrating human rights principles and adhering to the international rights framework, AI governance can be ethically sound and socially beneficial.

Shahla Naimi

The analysis explores the impact of AI from three distinct viewpoints. The first argument suggests that AI has the potential to advance human rights and create global opportunities. It is argued that AI can provide valuable information to human rights defenders, enabling them to gather comprehensive data and evidence to support their causes. Additionally, AI can improve safety measures by alerting individuals to potential natural disasters like floods and fires, ultimately minimizing harm. Moreover, AI can enhance access to healthcare, particularly in underserved areas, by facilitating remote consultations and diagnoses. An example is provided of AI models being developed to support the 1000 most widely spoken languages, fostering better communication across cultures and communities.

The second viewpoint revolves around Google’s commitment to embedding human rights into its AI governance processes. It is highlighted that the company considers the principles outlined in the Universal Declaration of Human Rights when developing AI products. Google also conducts human rights due diligence to ensure their technologies respect and do not infringe upon human rights. This commitment is exemplified by the company-wide stance on facial recognition, which addresses ethical concerns surrounding the technology.

The third perspective emphasizes the need for multi-stakeholder and internationally coordinated AI regulation. It is argued that effective regulation should consider factors such as the structure, scope, subjects, and standards of AI. Without international coordination, fragmented regulations with inconsistencies may arise. Involving multiple stakeholders in the regulatory process is vital to consider diverse perspectives and interests.

Overall, the analysis highlights AI’s potential to advance human rights and create opportunities, particularly in information gathering, safety, and healthcare. It underscores the importance of embedding human rights principles into AI governance processes, as demonstrated by Google’s commitments. Furthermore, multi-stakeholder and internationally coordinated AI regulation is crucial to ensure consistency and standards. These viewpoints provide valuable insights into the ethical and responsible development and implementation of AI.

Pratek Sibal

A recent survey conducted across 100 countries revealed a concerning lack of awareness among judicial systems worldwide regarding artificial intelligence (AI). This lack of awareness poses a significant obstacle to the effective implementation of AI in judicial processes. Efforts are being made to increase awareness and understanding of AI in the legal field, including the launch of a Massive Open Online Course (MOOC) on AI and the Rule of Law in seven different languages. This course aims to educate judicial operators about AI and its implications for the rule of law.

Existing human rights laws in Brazil, the UK, and Italy have successfully addressed cases of AI misuse, suggesting that international human rights law can be implemented through judicial decisions without waiting for a specific AI regulatory framework. By proactively applying existing legal frameworks, countries can address and mitigate potential AI-related human rights violations.

In terms of capacity building, it is argued that institutional capacity building is more sustainable in the long term compared to individual capacity building. Efforts are underway to develop a comprehensive global toolkit on AI and the rule of law, which will be piloted with prominent judicial institutions such as the Inter-American Court of Human Rights and the East Africa Court of Justice. This toolkit aims to enhance institutional capacity to effectively navigate the legal implications of AI.

Community involvement is crucial, and efforts have been made to make content available in multiple languages to ensure inclusivity and accessibility. This includes the development of a comic strip available in various languages and a micro-learning course on defending human rights in the age of AI provided in 25 different languages.

Canada’s AI for Development projects in Africa and Latin America have been highly appreciated for their positive impact. These projects have supported the growth of communities in creating language datasets and developing applications in healthcare and agriculture, thereby increasing the capacity of civil society organizations in these regions.

The evolution of international standards and policy-making has seen a shift from a traditional model of technical assistance to a more collaborative, multi-stakeholder approach. This change involves engaging stakeholders at various levels in the development of global policy frameworks, ensuring better ownership and effectiveness in addressing AI-related challenges.

Pratek Sibal, a proponent of the multi-stakeholder approach, emphasizes the need for meaningful implementation throughout the policy cycle. Guidance on developing AI policies in a multi-stakeholder manner has been provided, covering all phases from agenda setting to drafting to implementation and monitoring.

Dealing with authoritarian regimes and establishing frameworks for AI present complex challenges with no easy answers. Pratek Sibal acknowledges the intricacies of this issue and highlights the need for careful consideration and analysis in finding suitable approaches.

In conclusion, the survey reveals a concerning lack of awareness among judicial systems regarding AI, hindering its implementation. However, existing human rights laws are successfully addressing AI-related challenges in several countries. Efforts are underway to enhance institutional capacity and involve communities in strengthening human rights in the age of AI. The positive impact of Canada’s AI for Development projects and the shift towards a collaborative, multi-stakeholder approach in international standards and policy-making are notable developments. Dealing with authoritarian regimes in the context of AI requires careful consideration and exploration of suitable frameworks.

Audience

Different governments and countries are adopting varied approaches to AI governance. The transition from policy to practice in this area will require a substantial amount of time. However, there is recognition and appreciation for the ongoing multi-stakeholder approach, which involves including various stakeholders such as governments, industry experts, and civil society.

It is crucial to analyze and assess the effectiveness of these different approaches to AI governance to determine the most successful strategies. This analysis will inform future decisions and policies related to AI governance and ensure their efficacy in addressing the challenges posed by AI technologies.

UNICEF has played a proactive role in the field of AI for children by creating policy guidance on the topic. Importantly, they have also involved children in the process. This approach of engaging children in policy creation has proven to be valuable, as their perspectives and experiences have enriched the final product. Inclusion and engagement of children in policy creation and practices around AI are viewed as both meaningful and necessary.

Furthermore, efforts are being made to ensure responsible AI in authoritarian regimes. Particularly, there is ongoing work on engaging Technical Advisory Groups (TAG) for internet freedoms in countries such as Myanmar, Vietnam, and China. This work aims to promote responsible AI practices and address any potential human rights violations that may arise from the use of AI technologies.

Implementing mechanisms to monitor responsible AI in authoritarian regimes is of utmost importance. These mechanisms can help ensure that AI technologies are used in ways that adhere to principles of human rights and minimize potential harms.

Interestingly, it is noted that implementing policies to monitor responsible AI is relatively easier in human rights-friendly countries compared to authoritarian ones. This observation underscores the challenges faced in authoritarian regimes where governments may exert greater control over AI technologies and policies.

In conclusion, the various approaches to AI governance taken by governments and countries need careful analysis to determine their effectiveness. Engaging children in policy creation and promoting responsible AI in authoritarian regimes are fundamental steps in fostering a safe and inclusive AI ecosystem. Implementing mechanisms to monitor responsible AI remains considerably harder in authoritarian contexts than in human rights-friendly countries. These insights highlight the ongoing efforts required to develop effective AI governance frameworks that protect human rights and promote responsible AI use.

Oluseyi Oyebisi

The analysis highlights the importance of including the African region in discussions on AI governance. It notes that the African region is coming late to the party in terms of participating in AI governance discussions and needs to be included to ensure its interests are represented. The argument presented is that African governments, civil society, and businesses should invest in research and engage more actively in global conversations regarding AI governance.

One of the main points raised is the need for Africa to build technical competence to effectively participate in international AI negotiations. It is mentioned that African missions abroad must have the right capacity to take part in these negotiations. Furthermore, it is noted that universities in Africa are not yet prepared for AI development and need to strengthen their capabilities in this area.

Additionally, the analysis suggests that African governments should consider starting with soft laws and working with technology platforms before transitioning to hard laws. It is argued that this approach would allow them to learn from working with technology platforms and progress towards more rigid regulations. The need for regulation that balances the needs of citizens is emphasized.

The analysis also highlights the need for African governments, civil society, and businesses to invest in research and actively engage in global platforms related to AI governance. It is mentioned that investment should be made in the right set of meetings, research, and engagements. Bringing Africans into global platforms is seen as a crucial step towards ensuring their perspectives and needs are considered in AI governance discussions.

Overall, the expanded summary emphasizes the need to incorporate the African region into the global AI governance discourse. It suggests that by building technical competence, starting with soft laws, and actively engaging in research and global platforms, African countries can effectively contribute to AI governance and address their specific development challenges.

Session transcript

Ian Barber:
Hope everyone’s doing well. Thank you so much for joining this session. One of the many this week on AI and AI governance, but with a more focused view and perspective on a global human rights approach to AI governance. My name is Ian Barber. I’m legal lead at Global Partners Digital. We’re a civil society organization based in London working to foster an online environment underpinned by human rights. We’ve been working on AI governance and human rights for several years now. So I’m very happy to be co-organizing and facilitating this alongside Transparencia Brazil, who is our online moderator. So thank you very much. What I’ll be doing over the next few minutes is providing a bit of introduction to this workshop, setting the scene, introducing our fantastic speakers, both in person and online, and providing a bit of structure as well for the discussion that we’re having today and some housekeeping rules. Really, this workshop is meant to acknowledge that we stand at the intersection of two realities, the increasing potential of artificial intelligence on one hand and the ongoing relevance of the international human rights framework on the other. When we think of a human rights-based approach to AI governance, a few things come to mind. Firmly and truly grounding policy approaches in the international human rights framework, the ability to assess risks to human rights, promoting open and inclusive design and deployment of AI, as well as ensuring transparency and accountability amongst other elements and measures. And given this, it’s probably not news to anyone in the room that the rapid design, development, and deployment of AI demands our attention, our understanding, and our collaborative efforts across various different stakeholders. Human rights, which are enshrined in various sources, such as conventions
and customary international law, together with their dynamic interpretation and evolution, work to guide us towards a world where people can exercise and enjoy their human rights and thrive without prejudice, discrimination, or other forms of injustice. And like any technology, AI poses both benefits and risks to the enjoyment of human rights. I’m sure you’ve attended other sessions this week where people spoke in a bit more detail about what those look like in various sectors and on different civil, political, economic and social rights. But today, what we’re gonna be doing is narrowing in on a few key questions. The first is how can the international human rights framework be leveraged to ensure responsible AI governance in a rapidly changing context and world that we live in? And I think this question is important because it underscores how AI is now able to influence so many things, from our job prospects to our ability to express ourselves to legal verdicts. And so ensuring that human rights continue to be respected, protected and promoted is key. Secondly, we must reflect upon the global implications for human rights in the kind of ongoing proliferation of AI governance frameworks that we’re seeing today. And also, in the potential absence of effective frameworks, what is the result and what are we looking at? There has been this ongoing proliferation of efforts at the global, regional, national level to provide frameworks, rules and other types of normative structures and standards that are supposed to promote and safeguard human rights. For example, just to highlight a few, there’s ongoing efforts at the Council of Europe to develop a binding treaty on AI. There’s the European Union’s efforts with the EU AI Act. There’s UNESCO’s recommendation on the ethics of AI, which is finalized but currently undergoing implementation. And other efforts such as the more recently proposed UN High-Level Advisory Body on AI.
But at this point, we’ve yet to see comprehensive and binding frameworks enacted which might be considered, you know, effective and sufficient to protect human rights. And without these safeguards and protections, we therefore risk kind of exacerbating inequality, silencing marginalized groups and voices and inadvertently creating a world where AI serves more as a divider than a promoter of equality. So what do we want to see and what do we want to do to ensure that this is not the case and not the future that we’re looking at? And lastly, over the next 80 or so minutes, the path towards responsible AI governance is not one that can be kind of traversed alone. So we need to navigate these challenges together, fostering meaningful engagement by all relevant stakeholders. That’s why on this panel, we have voices from civil society, from private companies, from international organizations, which are all needed. And we also need to particularly amplify voices from the global majority. Historically, many regions across the world have been left out of global dialogues and efforts at global governance. And that’s very much the case when it comes to AI as well. So this workshop is not just a gathering, I see it as one for information sharing, but it’s also a call to action. It’s really, I think, the beginning of an ongoing collective effort to address a range of complexities that have come about from AI and to really work to ensure the ongoing relevance of our shared human values and human rights. So with that intro and framing, I’d like to get started, get the ball rolling and, drawing from the diverse range of experiences here, really talk about what we want in terms of a global human rights approach to responsible AI governance. And to do that, we have an all-star lineup of speakers from, again, a number of different stakeholders.
I’m going to briefly introduce them, but I encourage you all, when you make your interventions, to provide a bit more background on where you come from, the type of work you do, and really why you’re here today and your motivations. And in no particular order, we have Marlena Wisniak from the European Center for Nonprofit Law, to my left. We have Vladimir Jure from Direcho Societales, who’s over there. We also have Tara Denham from Global Affairs Canada, and we have Pratek as well from UNESCO. So thank you for all being here in person. And online, we have Shahla Naimi from Google, and Oluseyi Oyebisi from the Nigeria Network of NGOs, or NNNGO. In terms of structure, we have a bit of time on our hands. And what we’re going to do then is divide the session into two parts. The first part is going to be looking at a particular focus on the international human rights framework, and also this ongoing proliferation of regulatory processes on AI that I’ve kind of alluded to already. We’ll then take a pause for questions from the audience, as well as those joining online as well. And I want to give a special shout out to Marina from Transparencia Brazil, who is taking in questions and feeding them into me so that we can have a hybrid conversation. And then after this first part, we’ll stop and we’ll have a second part, and that’ll look a bit more at inclusion of voices in these processes, and how engagement from the global majority is imperative. And that will be followed by a final brief Q&A session, and then closing remarks. So I hope that makes sense. I hope that sounds structured enough and productive, and I look forward to your questions and interventions later. But let’s get into the meat of things. Looking at the international human rights framework, we’re at a point where there are various efforts on global AI governance happening at breakneck speed. And there’s a number of them that I’ve mentioned, including the Hiroshima process that was just spoken about yesterday, if you were at the main event. So my first question and kind of my prompt is to my left, Marlena: really, given your work at ECNL and kind of the ongoing efforts you have to advocate for rights-respecting approaches in these types of AI regulatory processes, what do you consider or think is missing in terms of aligning them with the international human rights framework? And again, if you could provide a brief background and introduction, that’d be great, thanks.

Marlena Wisniak:
Sure, thanks so much Ian, and hi everyone. Welcome to day two, I think it is, of IGF. It feels like a week already. So my organization, the European Center for Nonprofit Law, is a human rights org that focuses on civic space, freedom of assembly and association. We also work a lot on freedom of expression and privacy. And over the past five years, we’ve noticed that AI was a big risk, and to some extent an opportunity, but with great potential for harm as well for activists, journalists and human rights defenders around the world. So the first five years of our work in this space were rather quiet, or I’d say it was more of a niche area with only a handful of folks working at the intersection of human rights and AI. And by handful, I really mean like 10 to 15. This year, the discussion around AI has expanded very, very quickly, and maybe ChatGPT was the trailblazer issue, but it’s great to see that at the UN there is interest in this topic and in panels like this that bring a human rights-based approach to AI. So Ian mentioned a couple of the ongoing regulations. I won’t bore you this morning with a lot of legalese, but the core frameworks that we focus on and advocate a human rights-based approach for at ECNL are obviously the EU AI Act, whose trilogues are happening as I speak right now; the Council of Europe Convention on AI; and national laws as well, which we’ve seen expand a lot around the world recently. We engage in standardization bodies, like the US NIST, the National Institute of Standards and Technology, and the European CEN-CENELEC, and of course international organizations like the OECD and the UN. And you mentioned, Ian, the Hiroshima process; that’s one we’re following closely as well. 
In the coming years, as the AI Act is expected to be adopted in the next couple of weeks, and definitely by early 2024, we’ll be following the implementation of the Act. So I’ll use this as a segue to talk a little bit about the core elements that we see should be part of any AI framework and AI governance from a human rights-based approach, and that begins with human rights due diligence and meaningful human rights impact assessments in line with the UN Guiding Principles on Business and Human Rights. We really see, with AI, an opportunity to implement mandatory human rights due diligence, including human rights impact assessments, in the EU space, where that also involves other laws, but beyond the EU as well: globally, the UN and other institutions and fora have an opportunity right now to actually mandate meaningful, inclusive, and rights-based impact assessments. That means meaningfully engaging stakeholders as well, especially external stakeholders like civil society organizations and affected communities around the world. So stakeholder engagement is a necessary and cross-cutting component of AI governance, development, and use, and at ECNL we look both at how AI is governed and at how it’s developed and deployed around the world. We understand stakeholder engagement as a collaborative process where diverse stakeholders, both internal and external, meaning those beyond the companies that develop the technologies themselves, can meaningfully influence decision making. So on the governance side of things: when we are consulted in these processes, including in a multi-stakeholder forum like the IGF, are our voices actually heard? Can they impact the final text and provisions of any laws or policies that are implemented? And on the AI design and development side of things: when tech companies or any deployer of AI consult external stakeholders, do they actually include their voices, and do those voices inform and shape final decision making? 
In the context of human rights impact assessments of AI systems, stakeholder engagement is particularly effective for understanding what kinds of AI systems are even helpful or useful, and how they work. So looking at the product and service side of AI, machine learning, or any algorithmic data analytics system, we can shape better regulation and develop better systems by including these stakeholders. Importantly, external stakeholders can identify specific potential positive or adverse impacts on human rights, such as the implications, benefits and harms of these systems for people, looking at marginalized and already vulnerable groups in particular. If you’re interested to learn more about stakeholder engagement, check out our Framework for Meaningful Engagement. So, shameless plug: Google it, or go on our website and look up the Framework for Meaningful Engagement, where we provide concrete recommendations for engaging internal and external stakeholders in AI systems. These recommendations can also be used for AI governance as a whole. Moving on, I’d like to touch briefly on transparency, which, in addition to human rights impact assessments and stakeholder engagement, we see as a prerequisite for AI accountability and rights-based global AI governance. So, not to go into too much detail, but we believe that AI governance should mandate that AI developers and deployers report on data sets, including training data sets; performance and accuracy metrics; false positives and false negatives; human-in-the-loop and human review; and access to remedy. If you’d like to learn more about that, I urge you to look at our recent paper, published with Access Now just a couple of weeks ago, on the EU Digital Services Act, with a spotlight on algorithmic systems, where we outline our vision for what meaningful transparency would look like. 
Finally, access to remedy is a key part of any governance mechanism. That includes both internal grievance mechanisms within tech companies and AI developers, as well as, obviously, remedy at the state level and judicial mechanisms; as a reminder, states have the primary responsibility to protect human rights and to provide remedy when these are harmed. One aspect that we often see in AI governance efforts, especially by governments, is the inclusion of exemptions for national security or counter-terrorism and, broadly, emergency measures. At ECNL, we caution against over-broad exemptions that are vague and broadly defined, as these can be, at best, misused and, at worst, weaponized to restrict civil liberties. So, if there are any exemptions for things like national security or counter-terrorism in AI governance, we really urge that they have a narrow scope, include sunset clauses for emergency measures, meaning that any exemptions in place will end within due time, and focus on proportionality. And finally, what is missing? What we see today, both in the EU and globally, is that AI governance efforts mostly take a risk-based approach. And the risk part often relates to finance, business, and, as I mentioned, national security and terrorism, but rarely to human rights. The AI Act itself in the EU is regulated under a product liability and market approach, not fundamental rights. In our 2021 research paper, we outlined key criteria for evaluating the risk level of AI systems from a human rights-based approach. We recommend determining the level of risk based on the product design, the severity of the impact, any internal due diligence mechanisms, the causal link between the AI system and adverse human rights impacts, and the potential for remedy. All these criteria help us really focus on the harms of AI to human rights. 
One last thing, and then I’ll stop here: where AI systems are fundamentally incompatible with human rights, such as biometric surveillance deployed in public spaces, including facial and emotion recognition, we, along with a coalition of civil society organizations, advocate for a ban on such systems. And we’ve seen a proliferation of such bans in laws, for example at the state level in the US, and right now in the latest version of the AI Act adopted by the European Parliament. That means prohibiting the use of facial recognition and remote biometric recognition technologies that enable mass surveillance and discriminatory targeted surveillance in public and publicly accessible spaces by the government. And we urge the UN and other processes, such as the Hiroshima process, to include such bans. Thank you, Ian.

Ian Barber:
Thank you, Marlena. That was amazing. I think you actually just answered my immediate follow-up question, which was: what is really needed when it comes to AI systems that pose an unacceptable risk to human rights? So thank you for preemptively responding. And I very much agree that having mandatory due diligence, including human rights impact assessments, is imperative. I think what you spoke to in terms of stakeholder engagement rings true, as do the issue of transparency, the need for it to foster meaningful accountability, and the importance of introducing remedies. So thank you very much for that overview. Based on that, and considering that there are these initiatives and so many different elements to consider, whether transparency, accountability, or scope, I’ll turn to you, Tara, and ask: given all this, how is a government such as Canada’s approaching AI governance and considering human rights, in terms of both your domestic priorities and your regional or international engagement? If you could speak a bit to how these all feed together, that’d be great. Thank you.

Tara Denham:
Sure. Thank you, and thank you for inviting me to participate on the panel. As I said, I’m Director General of the Office of Human Rights, Freedoms, and Inclusion at Global Affairs Canada, which I think warrants a bit of an explanation, but actually aligns really well as a starting position. Because the Office of Human Rights, Freedoms, and Inclusion is where we’ve embedded the responsibility for digital policy and cybersecurity policy from a global affairs perspective. Since the integration of those policy positions and that policy work a number of years ago, our starting point was always a human rights perspective. This goes back about six or seven years, to when we created this office and integrated the human rights perspective into our digital policy from the beginning, including some of our initial positions on the development of AI considerations and the geopolitics of artificial intelligence. So I think that, in and of itself, is perhaps unique among government structures. Having said that, I would also acknowledge that across government structures, we are all trying to figure out how to approach this, but as the DG responsible for these files, it does give a great opportunity to integrate that human rights policy position from the beginning. When we first started to frame some of our AI thinking from a foreign policy lens, it was always from the human rights perspective. I can’t say that has always meant we’ve known how to do it, but I can say it has always pushed us to think and challenge ourselves: how can we use the existing human rights frameworks, and how can we advocate for that at every juncture, including domestically. 
I wanted to give, perhaps, a snapshot of how we’re approaching it in Canada, some of our national perspectives, and then how we’re linking that to the international level, and of course how we’re integrating a diversity of voices into that in a concrete way. I would say when we started talking about this a number of years ago, the debate, and I’m sure many of you participated in it, was a lot around: should it be legislation first, should it be guiding principles, are there frameworks, are we going to do voluntary measures. For a number of years, that was the cycle we were in, and I would say over the last year and a half to two years, that’s not a debate anymore. We have to do all of them, and they’re going to be going on at the same time. Right now, I think where I’m standing, it’s more about how we are going to integrate and feed off each other as we’re moving on the domestic front at the same time as the international. Typically, from a policy perspective, you would have your national positions defined, and those would inform your international positions. Right now, the world is just moving at an incredible pace, so we’re doing both at the same time, and we have to find those intersections. That also takes a conscious decision across government, and when I say across government, I mean across our national government. And of course, this is within the framework, which we’re all very familiar with, where domestically we are all aiming to harness AI to the greatest capacity, because of all the benefits there are, but we’re always very aware of the risks. So that is a very real tension that we need to keep integrating into the policy discussions that we’re having. 
Our belief and our position, in our national policy development and internationally, is that this is where the diversity of voices is absolutely required, because the views on risk will be very different depending on the voice and the community that you’re inviting and actually engaging in the conversation in a meaningful way. So it’s not just inviting people to the conversation; it’s actually listening and then shaping your policy position. In Canada, what we’ve seen is, and I’m not going to go into great detail, but just to give you a snapshot of where we’ve started: within the last four years, we’ve had a directive on how automated decision making will be handled by the Government of Canada, and that was accompanied by an algorithmic impact assessment tool. That was sort of the first wave of direction that we gave in terms of how the Government of Canada was going to engage with automated decision making. Then, over the last year, there’s been a real push related to generative AI. So, I think it was just in the last couple of months, there was the release of a guide on how to use generative AI within the public sector. A key point I wanted to note here is that it is a requirement to engage stakeholders before the Government of Canada deploys generative AI. Before we actually roll it out, we have to engage with those who will actually be impacted, whether it be for public use or service delivery. And then, just last month, a voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. Again, we’ve seen the U.S. with similar announcements, and we’ve seen this in the work we’re doing in the G7. A lot of these codes of conduct and principles are coming out at the same time, and this is also accompanied in Canada by working through legislation, so that we also have an AI and Data Act going through Parliament. 
So, as I said, these are the basis of the regulations and the policy world that we’re working in within Canada. And what I’d note there is that these are all developed by multiple departments. That’s where I think we’re challenging ourselves as policymakers, because we also have to increase our capability to work across sectors and across departments. And I would say, from where we started when we were developing Canada’s Directive on Automated Decision Making, through to the Code of Conduct that was just announced, we moved from informal consultations across the country, trying to engage with the private sector and academia, to formal consultation on the voluntary code. We have a national table set up now, which includes the private sector, civil society, federal, provincial and territorial governments, and Indigenous communities. So we’ve also had to make a journey from ad hoc consultation to formalized consultation when we’re actually developing these codes. So then, how does that translate internationally? As we’re learning domestically at a rapid pace, perhaps I can just pull out a few examples of how we’ve tried to reflect that internationally. And I’m going to harken back to the UNESCO Recommendation on the Ethics of AI from 2021. This is where, again, we made that conscious decision to harness the national tables that were in place to define our negotiating positions when we went international, given that, again, our national positions weren’t as defined. We also wanted to leverage the existing international structures, and I think that’s really important as we talk about the plethora of international structures at play. This is where we’ve used the Freedom Online Coalition. 
So, you have to look at the structures that you have, the opportunities that exist, and the means by which we can do wide consultation on the negotiating positions that we’re taking. For the UNESCO Recommendation, that’s where we used the Freedom Online Coalition, which has an advisory network that also includes civil society and tech companies. So, again, it’s about proactively seeking those opportunities, shaping your negotiating positions in a conscious way, and then bringing those to the table. We’re also involved in the Council of Europe negotiations on AI and human rights, which is, again, about leveraging our tables, but it’s also about advocating for a more diverse representation of countries at the table. You have to seize the opportunity. We do see this as an opportunity to engage effectively in this negotiation, and we want to continue to advocate for more countries to participate and for more stakeholder groups to be able to engage. So, maybe I’ll just finish by sharing some of the lessons that we’ve learned from doing this. It’s really easy to recite all that and make it sound like it was easy to do. It’s not. Some of the lessons I would pull out: number one, stakeholder engagement requires a deliberate decision to integrate it from the start. And I guess the most important word in that one is deliberate. You have to think about it from the beginning, and you have to put that in place. As I’ve said a few times, you have to make sure that you’re creating that space for the voices to be heard, and then actually following through on it. The second one: it does take time, it’s complex, and there will be tensions. And there should be tensions, because if there are no tensions in the perspectives, then you probably haven’t created a wide enough table with a diversity of voices. So you have to, and I think my team is probably tired of me saying this, you have to get comfortable with living in a zone of discomfort. 
If you’re not in a zone of discomfort, you’re probably not pushing your policy, your own view, and again, I’m coming from a policy perspective, and you have to do that to find the best solutions. As policymakers, this is also going to drive us to increase our expertise. Yes, we would traditionally come to the tables with our policy knowledge, our human rights experience, and those sorts of elements, but we’ve tried a lot of different things in terms of integrating expertise into our teams and into our consultations. So you have to think about what it’s going to mean in a policy world to do this now. And finally, I’ll just say, again: leverage the structures that are in place. We have to optimize what we have. It’s sometimes easier to say, well, it’s broken, let’s create something new, but I do want to think that we can continue to optimize. And if we’re going to create something new, it should again be a conscious decision, thinking about what is missing from what we have that needs to be improved upon. Perhaps I’ll stop there.

Ian Barber:
Thank you, Tara. That was great and really comprehensive. I think in the beginning you alluded to the challenges in applying the international human rights system to the work that you’re doing, but I’m glad Canada is very much doing that and taking this multi-pronged approach that puts human rights front and center at both the national and international levels. And I really agree that there is very much a need for deliberate stakeholder engagement, and I appreciate the work that you’ve been doing on that, as well as the need to leverage existing structures, to ensure that these conversations are truly global and inclusive, and to ensure that the expertise is there as well. So thank you so much. And I think your comments on UNESCO serve as a perfect segue to my next prompt, for which I’ll turn to Pratek. UNESCO developed the Recommendation on the Ethics of AI a couple of years ago. As has been alluded to, the conversation has gone from “do we need voluntary, self-regulatory or non-binding measures?” to “do we perhaps need something more binding?”, and I think that is very much the direction of travel now. But I’m curious to hear from you a bit more about your experience at UNESCO in terms of implementing the Recommendation at this point, and how UNESCO in general will be playing a larger role in AI governance and human rights moving forward. Thank you.

Pratek Sibal:
Thanks, Ian. How much time do I have? (You have five to six minutes, but there’s no rush. I want to hear your comments and your interventions.) First of all, thanks for organizing this discussion on human rights-based approaches to AI governance. I will perhaps focus more on the implementation part and share some really concrete examples of the work that we are doing with both rights holders and duty bearers. First, it’s good to mention that the Recommendation on the Ethics of AI is human rights-based: it has human rights as a core value and is really informed by human rights. Now, I’ll focus first on the judiciary. While we are talking about the development of voluntary frameworks, binding instruments and so on, there’s a separate discussion about whether it’s even possible, in the fractured world we are living in, to have a binding instrument; it’s very difficult. If you are going to go and negotiate something today, it’s very difficult to get a global view. But we have a Recommendation which was adopted by 193 countries, so that’s an excellent place to start, and I’m really looking forward to the work that colleagues at the Council of Europe are doing regionally, and they work with other countries as well. So in my team, in my files, we started looking at the judiciary, because you can already start working with duty bearers and implement international human rights law through their decisions. But the challenge you face is that a lot of the time they don’t have enough awareness about what AI is and how it works; there’s a lot of myth involved. And there is also this assumption that the technology is out there and must be right: if they’re using an AI system, as in a lot of countries where it’s used for predictive purposes, they will be like, oh yeah, it’s the computer algorithm that is giving the score, it must be right. 
So all of these things need to be broken down and explained, and then the relevant links with international human rights law need to be established. This is what we started to do sometime around 2020. At UNESCO we have an initiative called the Global Judges Initiative, which started in 2013, where we work on freedom of expression, access to information and the safety of journalists. Through this work, we’ve reached about 35,000 judicial operators in 160 countries, through everything from online trainings in the form of massive open online courses, to in-person trainings, to helping national judicial training institutions develop curricula. Around 2020, we started to discuss artificial intelligence. The Recommendation was under development, and we were already thinking about how we could actually implement it beyond the great agreement that we have amongst countries. We first launched a survey to this network, and about 1,200 judicial operators, by which I mean judges, lawyers, prosecutors, and people working in legal administrations, responded from about 100 countries. And they said two things. First, we want to learn how AI can be used within judicial and administrative processes, because in a lot of countries they are overworked and understaffed. I’ve been talking to judges, and they’re like, yeah, if I take a holiday, my colleagues have to work like 16 hours a day. That is a key driver for them to look at how the workload can be streamlined. The next aspect is really about the legal and human rights implications of AI when it comes to, say, freedom of expression, access to information, and the safety of journalists. Let me give you some examples here. In Brazil, for instance, there was a case in the São Paulo metro system: they were using a facial recognition system on the doors to detect your emotions and then show advertisements. 
And I think it was the data protection authority in Brazil which said that you can’t do that: you have no permission to collect this data, and so on. And this did not really require an AI framework. So my point is that we should not think in just one direction, that we have to work on a framework first and then implement human rights. We already have international human rights law, which is part of jurisprudence in a lot of countries and which can be used directly. So let’s not give people a reason to wait for a regulation in their country. To give you some other examples: in Italy, they have these food delivery apps like Deliveroo, and another one called Foodinho, and there were two cases there. Basically, one of those apps, I don’t remember which one, was penalizing food delivery drivers if they were canceling their scheduled deliveries, for whatever reason, and giving them a negative score. The algorithm was found to be biased: it was giving more negative points to those who canceled vis-à-vis the others. And the Data Protection Authority basically said, on the basis of the GDPR, that you cannot have this going on. Then we had the case Marlena was mentioning about facial recognition in the public sphere. I think it was in the UK: South Wales Police was using facial recognition systems in the public sphere, and this went to the Court of Appeal, which said, you can’t do this. So these are just examples of what is already happening and how people have already applied international human rights standards. Now, what are we doing next? In our program of work with the judiciary, we launched in 2022 a massive open online course on AI and the rule of law, which covers all these dimensions, and we made it available in seven languages. And it was kind of a participative dialogue. 
We had the President of the Inter-American Court of Human Rights, we had the Chief Justice of India, we had professors, and we had people from civil society coming and sharing their experiences from different parts of the world, because everyone wants to learn in this domain. As you were mentioning for Canada, there’s a lot of scope to learn from practices in other countries. That was our first product, and it reached about 4,500 judicial operators in 138 countries. Now, we realized that individual capacity building is one thing, but we need to focus more on institutional capacity building, because that’s more sustainable in the long term. So we’ve now, with the support of the European Commission, developed a global toolkit on AI and the rule of law, which is essentially a curriculum with four modules, covering, among other things, the human rights impact assessments that Marlena was talking about before. We are actually going to go to the judiciary and say: okay, this is how you can break things down; this is how you look at data. What is the quality of the data? When you’re using an AI system, how do you check what data was used and whether it was representative or not? We are breaking these things down practically for them so that they can at least start questioning. You don’t expect judges to become AI experts at all, but at least to have that mindset of saying: it’s a computer, but it is not infallible. We need to create that. So we have this curriculum, which we developed through an almost year-long process of reviews and so on. Now we have the pilot toolkit available, which we are implementing first with the Inter-American Court of Human Rights in November, actually next month, for a regional training. We will also get their feedback, because it’s important to work with the community, including the trainers, on what works for them. We are hopefully going to do it for the EU. 
We are going to do it in East Africa with the East African Court of Justice next year; in fact, we are hosting a conference with them later this month in Kigali. So at this moment we are piloting this work, organizing these national and regional trainings with the judiciary, and then, as a next step, hoping that this curriculum is picked up by the national judicial training institutions and integrated, so that they own it, they shape it, they use it. That is how we see international human rights standards percolating down to enhanced capacities through this kind of program. And as an open invitation: the toolkit we have, we are just piloting it, so we are also open to feedback from the human rights experts here on how we could further improve and strengthen it. Perhaps I’ll briefly just mention the rights holders’ side. We’ve also developed some tools for youth, or even the general public, you could say, to engage them in a more interesting way. We have a comic strip on AI, which is now available in English, French, Spanish and Swahili, and I think a language of Madagascar as well, with German and Slovenian coming soon. These are tools that we make available to communities so they can also co-own them and develop their own language versions, because part of strengthening human rights globally is making that content available in different languages, so that people can relate to it better. We have a course on defending human rights in the age of AI, which is available in 25 languages. It’s a micro-learning course on a mobile phone that we developed in a very collaborative way with UNITAR, the United Nations Institute for Training and Research, as well as with a European project called Saltopi, which involved youth volunteers who wanted to take it to their communities and say: actually, in our country, we want this to be shared, and so on. 
So there are a number of tools that we have, along with communities of practice with whom we work on capacity building, actually translating some of these high-level principles, frameworks and policies into, hopefully, a few years down the line, judgments, which become binding on governments, on companies, and so on. I’ll stop here. Thank you.

Ian Barber:
Thank you. That's great. And thank you for reminding us that we already have a lot of frameworks and tools that can be leveraged and that are being applied in the domestic context as well. I really commend your work on AI and human rights in the judiciary. I think it's important to consider that we do need to work on the institutional knowledge and capacity that you were speaking to, and to work with various stakeholders in an inclusive manner. So thank you. At this point, we've heard from Marlena about what's truly needed from a human rights-based approach to AI governance. We've heard from Tara what some governments and states like Canada are doing to champion this approach, in some ways, at the domestic and national levels. And we've heard from Pratek about the complementary work being done by international organizations and the implementation work happening there. So I want to pause at this point to see if anyone on the panel has any immediate reactions to anything that's been said. Then we might have time for one quick question before we change directions a little bit. If there are any immediate reactions, feel free to jump in. If not, that's OK too; the same goes for anyone online. So, yeah, we can also go to a brief question. If that's possible, please feel free to jump in. I think there's a microphone there, but we can also hand one over. If you could introduce yourself, that'd be great too. Thank you.

Audience:
Okay, thank you. I'm Stephen Foslu, a policy specialist at UNICEF, and it's really great to hear about the different initiatives that are happening and the different approaches. And maybe it's natural, as Thomas Schneider was saying in the previous session, that we will see many different governments and countries approaching this differently, because nobody really knows how to do this. So this is more of a request: to think about not just the what of governance, but also the how, and to do analysis of these different approaches to see what works, from voluntary codes of conduct to more industry-specific legislation. I think that's really the next phase as we go from policy to practice. This will play out over a number of years, but that analysis would really be helpful from the UNESCOs and the OECDs, who are already starting to build up this knowledge base. Clearly, there are going to be some things that work well and some that don't. We also engage children. We created policy guidance on AI for children, and engaged children in that process. It was a very meaningful and necessary process that really informed and enriched the product. So it's really encouraging to hear about the multi-stakeholder approach that's ongoing, not just ad hoc. But yeah, that's kind of a request. And perhaps you have some thoughts on how these approaches may play out if we look ahead, and on what role the organizations that you're in might play, not just in documenting what may be governed, but actually how. Thank you.

Tara Denham:
First of all, as a mom, I would love to see that information about AI and children. That's fantastic. On your comment about needing to do the analysis of what's working and what's not: this is where we need to build that capacity globally, because it's one thing for Canada to do analysis on what's working in Canada, but we have to really understand what the risks are and how it is impacting different communities and different countries. This is where we have been working, and I don't know if there are any colleagues in the room, but we have the International Development Research Centre in Canada, IDRC, and they do a lot of funding and capacity building in different nodes around the world, specifically on AI capacity building and research. So that's where we've also had to really link up, so that we can leverage as fast as possible the research that they're supporting. Again, it's challenging ourselves as policymakers to keep seeking it out, but there is that research, and we just need more of it. I just wanted to advocate for that. Thank you.

Marlena Wisniak:
Yeah, thanks so much for that question. I definitely support multistakeholder participation and engaging stakeholders in the process of policymaking itself. One challenge that we see a lot is that there's no level playing field between different stakeholders. I don't know if there are many companies in the room, but we often see that companies have a disproportionate advantage, I'd say financially and in access to policymakers. When I mentioned at the beginning of my intervention that there's a handful of human rights folks who participate in AI governance, it really is a handful compared to hundreds, actually thousands, of folks in the public policy sections of companies. So that's something that I would urge international organizations and policymakers at the national level to consider: for civil society, it's an uphill battle in terms of capacity, resources and finances, and obviously marginalized groups and global majority-based orgs are disproportionately hit by that. So for Canada, as the Canadian government, I imagine you primarily engage with national stakeholders, which is obviously important, and I also encourage you to think about how Canadian laws can influence, for example, global majority-based regulation. That's something we think about a lot in the EU with the so-called Brussels effect, understanding that many countries around the world, especially those with more repressive regimes or authoritarian practices, do not necessarily have the democratic institutional pillars that the EU or Canada would have. So that's just added nuance to multistakeholderism: yes, and in a way that really enables inclusive and meaningful participation of all. Thank you.

Pratek Sibal:
A couple of quick points. First, on Canada: I think they're doing a fantastic job, for instance in Africa and Latin America, with the AI for Development project, and I have seen since 2019 the kinds of communities that have come up and have been supported to develop, say, language datasets, which can then lead to the development of applications in healthcare or agriculture, or simply to strengthen in a more sustained way the capacities of civil society organizations that can inform decision-making and policymaking. We at UNESCO have particularly benefited from this, because the Recommendation on the Ethics of AI, which is being implemented in a lot of countries, is implemented in a multi-stakeholder manner. We generally have a national multi-stakeholder group which convenes and works, and there, the capacity of civil society organizations to actually analyze the national context and contribute to these discussions is very important. So the work that Canada and IDRC and so on are doing: I have seen the results of that in my own work over the past four or five years already, so there's good credit due there. On your point about policymaking at the international level, recommendations and so on: the process of international standard-setting and policymaking has evolved over the years. We used to be in a mode of technical assistance many years ago, where an expert would fly in to a country, stay there for some months, and help them develop a policy. I think that model is changing, in the sense that you now develop policies or frameworks at the global level, with stakeholders from the national level involved in the development of those frameworks. 
So what happens is that when stakeholders have helped develop something at the global level and then have to translate it at the national level, they naturally turn towards the framework on which they have worked and of which they have great knowledge. It's an implicit way of policy development, which has been the model since the early 2000s, partly because otherwise there's not enough funding available, and also because it's not sustainable if you don't develop global frameworks in a more consultative manner. So there is more ownership of these frameworks, which then become the natural go-to tools at the national level as well. That's, I think, an interesting way to develop policy. And that's why we are talking about multi-stakeholderism. A lot of times in fora like this, multi-stakeholderism just becomes a buzzword: yes, we should have everyone at the table. That is not all it means. We've actually produced guidance on how to develop AI policies in a multi-stakeholder manner along the policy cycle, from agenda setting to drafting to implementation and monitoring. And there's a short video I'm happy to share later with the community. Thank you very much.

Ian Barber:
I know we have one speaker. Just really quickly, if you could ask your question, and then I have three more interventions from people, including online. So maybe they can consider your question in their responses, and if not, then we can come back to it at the end. I just want to ensure that we make time for them. So if you can be brief, that'd be very much appreciated.

Audience:
Okay, thank you so much. Svetlana Zenz, Article 19. I'm working on engaging TAG for internet freedoms in Asian countries: Myanmar, Vietnam, China. And my question is, I think, more for UNESCO and Canada at some point, because they are the ones providing some global policies. Would you recommend some mechanisms which we could implement in countries with authoritarian regimes to monitor responsible AI, especially from the private sector side? Because in the Western world, or the parts of the world that are more human rights friendly, it's much easier to implement those policies than in authoritarian countries. Thank you.

Ian Barber:
Thank you very much. We'll be coming back to these questions as well, and I think that's actually a bit of a good segue to the next intervention. I'm going to turn to Shahla, who's joining online from Google, from the private sector, as it's important to consider those stakeholders as well. Shahla, if you're connected with us, my question for you is: aside from these government and multilateral efforts, it's clear that the private sector plays a key role in promoting human rights and AI governance frameworks. So if you could speak about your work at Google, its perspective and ongoing efforts on AI governance, and how you're working to promote human rights. And if you can speak to the questions that have been asked, that'd be fantastic as well. Thank you so much for joining, and for your patience.

Shahla Naimi:
Sure, thank you so much for having me today. And apologies, I was unable to join in person, but I really do appreciate the chance to join virtually. I'll try to keep this brief; I want to make sure we get to a more dynamic set of questions, and I know there are other speakers as well. To take a step back, I sit on Google's human rights program. For those who are not familiar, it's a central function responsible for ensuring that we're upholding our human rights commitments, and I can share more on that later. It applies across all the company's products and services, across all regions. This includes overseeing the strategy on human rights, and advising product teams on potential and actual human rights impacts. Quite relevant to this discussion, it includes conducting human rights due diligence and engaging external experts, rights holders, stakeholders, et cetera. Maybe just to take a brief step back, I'll share a little bit of our starting point as a company, which is really true excitement about the ways that AI can advance human rights and create opportunities for people across the globe. That doesn't just mean potential advancements, but progress that we're already seeing: putting more information in the hands of human rights defenders in whatever country they are in; keeping people safer from floods and fires, particularly knowing that these disproportionately affect the global majority; increasing access to health care. One that I'm particularly excited about is something we call our 1,000 Languages Initiative, which is working on building AI models that support the 1,000 most widely spoken languages. We obviously live in a world where there are over 7,000 languages, so I think it's a drop in the bucket, but we hope that it's a useful starting point. 
But to turn again to our topic at hand: none of this is possible if AI is not developed responsibly, and as was noted in the introduction, this really is an effort that necessarily needs government, civil society organizations and the private sector involved in a deeply collaborative process, maybe one that we haven't even seen before. For us as a company, the starting point for responsible AI development and deployment is human rights. For those who are less familiar with our work in this space: Google has made a number of commitments to respecting the rights enshrined in the Universal Declaration of Human Rights, which is turning 75 this year, and its implementing treaties, as well as the UN Guiding Principles on Business and Human Rights, which I think Marlena mentioned at the beginning. So what does that actually look like in practice? As part of this, years ago in 2018, when we established our AI Principles, we embedded human rights into them. For those who are not familiar, our AI Principles describe our objectives to develop technology responsibly, but also outline specific application areas that we will not pursue, and that includes technologies whose purpose contravenes international law and human rights. To provide a tangible example: let's imagine that we're thinking of developing a new product like Bard, which we released earlier this year. This would go through our AI Principles review via our responsible innovation team, and as part of that process, my team would also conduct human rights due diligence to identify any potential harms and develop appropriate mitigations around them, alongside various teams, legal and product teams in particular. One example of this, a public case study that we've released, is around our celebrity recognition API. 
So back in 2019, we already saw that the streaming era had brought a really remarkable explosion of video content. In many ways, that was fantastic: more documentaries, more access for filmmakers to showcase and share their work globally, and so on. But there was also a really big challenge, which was that video was pretty much unsearchable without expensive, labor-intensive tagging processes. This made it really difficult and expensive for creators. So a discussion popped up about better image and video capabilities to recognize an international roster of celebrities as a starting point. Our AI Principles review in this process triggered additional human rights due diligence, and we brought on Business for Social Responsibility, BSR, which some are familiar with, to help us conduct a formal human rights assessment of the potential impact of a tool like this. Fast forward: the outcome was a very tightly scoped offering, one that defined celebrity quite carefully, established manual customer review processes, and instituted an expanded terms of service. All of this later informed our company-wide stance on facial recognition, and it took into consideration quite a bit of stakeholder engagement in the process. Though it was developed more recently than this particular human rights assessment, I'll also plug the ECNL framework for meaningful engagement, because it has served as a really helpful guide for us since its release. I share this example for two reasons. One: human rights, and the established ways of assessing impact on human rights, have been embedded into our internal AI governance processes from the beginning. 

Two: as a result of that, we've actually been doing human rights due diligence on AI-related products and features for three years, and it's been a priority for us as a company for quite a long time. To briefly address the second part of your question, I'll flag that I think we really do need everybody at the table, and that's not always the case right now, as others have mentioned. As an example, we were excited to be part of the moment at the US White House over the summer that brought together industry to commit to advancing responsible practices in the development of AI. Earlier this fall, we released our company's progress against those commitments. That included launching a beta of SynthID, a new tool we developed for watermarking and identifying AI-generated images. A really core component informing the development of that particular product was concerns from civil society organizations, academics and individuals in the global majority, keeping in mind that we have 75 elections happening globally next year, and real concerns around the potential proliferation of misinformation. It also included establishing a dedicated AI red team, and co-establishing the Frontier Model Forum to develop standards and benchmarks for emerging safety issues. We think these commitments, and companies' progress against them, are an important step in the ecosystem of governance, but they really are just a step. So we're particularly eager to see more space for industry to come together with governments and civil society organizations, and more conversations like this. I think Tara mentioned the Freedom Online Coalition, so it could be through existing spaces like the FOC or the Global Network Initiative, but also potentially new spaces as we find necessary. And I'll just mention one last thing briefly, because I know I'm probably over my time. 
Because it did come up more specifically, I'll flag that when developing AI regulation, at Google at the very least, we think about it in a few ways, something we call the four S's. The structure of the regulation: is it international or domestic, vertical or horizontal? The scope of the regulation: how is AI being defined, which is not the easiest thing to do in the world. The subjects of regulation: developers, deployers. And finally, the standards of the regulation: what risks, and how do we consider those difficult trade-offs that were mentioned earlier, I think by the person who asked the first question. These are some of the things that we're taking into consideration in this process, but we're really hoping that more multi-stakeholder conversations will lead to some international coordination on this front, because our concern is that otherwise we'll have a bit of a hodgepodge of regulation around the world. In the worst-case scenario, that makes it difficult for companies to comply, stifles innovation, and potentially cuts off populations from what could be transformative technology. It might not be so much the case for us at Google, where we have the resources to make significant investments in compliance and regional expertise, but we do think it could be an issue for smaller players and future players in this space. So I'll pause there, because I think I probably took up too much time, but I appreciate it and am looking forward to the Q&A.

Ian Barber:
Thank you so much for that overview. That was great. And thank you for highlighting the work that's happening at Google to support human rights in this context, particularly your work on due diligence, and for noting the need for collaboration and for considering global majority perspectives; I think that's key as well. What I'd like to do now is turn to Vladimir for our second-to-last intervention of the session, and then hopefully turn to a couple of questions at the end. We've heard from a couple of different stakeholders at this point, but my question for you is: do you think the global majority is able to engage in these processes? Do you think they are able to effectively shape the conversations that are happening at this point? Derechos Digitales has spoken about the need to consider local perspectives, and I'm curious to hear from you why this is so critical, and what the work is that you're doing now. If we can keep the intervention to about four or five minutes, that'd be fine, but I don't want to cut you off. Thank you.

Vladimir Garay:
Okay, I'll try to be brief. Well, first of all, thank you so much for the question. It's a hard question. And thank you also for the invitation to be part of this panel; I'm very glad to be here. I'm Vladimir Garay, part of Derechos Digitales, a Latin American digital rights organization, and for the last couple of years we've been researching the deployment of AI systems in the region in the context of public policy. Part of that work has been funded by IDRC, so thank you. I'm going to tell you a little bit more about that later, but if you're interested, you can go to ia.derechosdigitales.org, and if the URL in Spanish confuses you, come to me and I can give you one of these, and you can find it more easily. So, regarding your question: even though there are interesting efforts being developed right now, I think Latin America has mostly lacked the ability to meaningfully engage with and shape processes for responsible AI governance, and this is a consequence of different challenges faced by the Latin American region in the local, regional and global contexts. For example, in the local context, one of the main challenges has to do with designing governance instances that are inclusive and that can engage meaningfully with a wide range of actors. This is at least partly a consequence of a long history of authoritarianism that results in frail democracies that are suspicious of participation, that are dismissive of human rights impacts, or that lack the necessary institutional capacities to implement solutions based on broad, inclusive, transparent participation. In the global context, we have to address the eagerness of the tech industry to aggressively push a technology that is still not completely mature in terms of our understanding of it, how we think about it, how we think about its limitations, and how we demythologize it. One of the consequences of this is the proliferation of different proposals for guidance, legal, ethical and more, so many that it's hard to keep up. 
So there's a sense of being overwhelmed, of necessity and inability, which is a difficulty in itself. Also in the global context, I think Latin American and global majority perspectives are often overlooked and disregarded in the international debate about technology governance, probably because, from a technical or engineering standpoint, the number of artificial intelligence systems being developed in Latin America might seem marginal. That is true, especially when compared to those created in North America, Europe and parts of Asia. But our region has a fundamental role in the production of AI systems, and a better understanding of global majority and Latin American countries' relationships with AI can be illuminating, not just for Latin America, but for the AI governance field as a whole. What should that look like, and what should it include? First, I think it's important to consider the different roles of global majority countries, and in particular Latin American countries, in the global chain of artificial intelligence development. Our region has a fundamental role in the production of AI systems, for example as a provider of lithium and other minerals necessary for the manufacturing of different components of AI systems. As you all know, mining consumes large amounts of non-renewable energy and has important environmental impacts, including air pollution and water contamination, that can lead to the destruction of habitats
and the loss of biodiversity. It also has a severe impact on the health of the miners, many of whom work in precarious conditions. Latin America also provides data, raw data that is collected from different sources by different means and that is used to train and refine AI models, data that is often collected as a consequence of the lack of proper protection of people's rights to their personal information. Most of the time, people's data is input into AI systems without their consent or even their knowledge. Latin America also provides labor, the labor necessary to train AI systems by labeling data for machine learning. These are usually low-paid jobs, performed under very precarious conditions, that can have harmful impacts on people's emotional and mental health, for example when reviewing data for content moderation purposes. This labor is the very foundation of any AI system, but its value is severely underestimated and not properly compensated. In summary, Latin America provides the material resources necessary for the development of AI systems that are designed somewhere else and later sold back to us and deployed in our countries, perpetuating logics of dependency and extractivism. So we are both the providers of the inputs and the paying clients for the outputs, but the processes that determine AI governance are often far removed from our region. In general, AI governance should consider the different impacts of AI development on human rights, including those that result from the extraction of these material resources: environmental human rights, workers' rights, and the rights to data protection, privacy and autonomy, which are greatly impacted in regions like Latin America. 

Now, at Derechos Digitales, we have been looking into different implementations of AI systems through public policy, because the main way most people interact with these types of technologies in the region is in their relationship with the state, even if they're not always aware of it. What we've seen is that states are using AI for mediating their relationship with citizens, for surveillance purposes, for making decisions regarding welfare assistance, and for controlling access to and use of welfare programs. However, most of the time, our research shows that these technologies are deployed without meeting transparency or participation standards; they lack human rights approaches and do not involve open, transparent and participatory evaluation processes. There are many reasons for this, from corruption to the lack of capacities and disregard for human rights impacts, as I mentioned earlier. But we need to overcome this reality,
which implies addressing the asymmetries among different regions and strengthening democratic institutions. International cooperation is key, and civil society organizations in the region are playing a major role in promoting that change. So I'll stop here for now. Thank you.

Ian Barber:
Thank you, Vladimir, for speaking about the need for regional perspectives, for highlighting how these need to feed into global conversations, and specifically for showing why regional realities are necessary to consider in the context of AI development. I think that's really helpful. I'm going to turn to our last speaker now, Oyebisi, who I believe is joining us at about 5 a.m. local time and has been online for a very long time, so he definitely deserves a round of applause. Last but definitely not least, my question to you, building on the previous comments, is: how do we ensure, similarly, that African voices are represented in efforts on responsible AI governance and in promoting human rights? And I'm going to weave in a related question we've received from online, if you're able to respond to that as well: what suggestions can be given to African countries as they prepare strategies or policies on emerging technologies such as AI, specifically considering the risks and benefits? So again, thank you so much for your patience, and thank you for being with us. Cheers.

Oluseyi Oyebisi:
Yes, and thank you so much, Ian, for inviting me to speak this morning. In terms of African voices, we would all agree that the African region is coming late to the party at this time, and we now need to find a way of peer-pressuring the continent to get into the debate. Doing this would mean that we are also doing other regions a favor, understanding that the continent has a very large population, and that human rights abuses on the continent itself would snowball into developmental challenges that we do not want across the world. So this is the context in which we have to ensure that we are not leaving the African continent behind, especially given that our governments have not been able to figure this out. And this speaks to the question that was asked by that colleague: our governments have not prioritized the governance of AI. Of course, we need to think of the governance of AI in terms of both hard and soft law, but also understand the life cycle of AI itself. How do we ensure that, along the whole life cycle, we have governments that understand it, civil society organizations that understand it, and businesses that understand it? It was great listening to the colleague from Google talking about how Google has a human rights program. How do we then, within a multi-stakeholder approach, bring that understanding to anticipate some of the rights challenges we might see with artificial intelligence, and also plan, as a truly multi-stakeholder community, to mitigate those? This is where governments need to see civil society organizations not as enemies but as allies, helping to bring those voices together. 
Of course, we should understand that at some point the politics of AI will also come to bear, because on the continent itself we do not have all of the resources, in terms of intellectual property, to develop the code and all of the algorithms that follow from that. Our universities are not prepared for that yet. But again, in dealing with the technicalities, we have to build some level of competence. We must also understand that, in terms of the international governance of AI and the setting up of international bodies, the African region has to ensure that our missions abroad, especially those relating to the UN, have the right capacity to take part in the negotiations. And that's why I like how the colleague from Canada said that we will have these contestations, and they are very necessary, because it is within these contestations that we are able to bring the diversity of opinions and thoughts to the table, such that we have policies that can help us address some of the challenges that we might see now and in the future. But how are we going to prepare ourselves as Africans to negotiate, and negotiate better? This speaks to the role of the African Union, including ECOWAS and other regional bodies. I do think the European Union is also setting the agenda, and the kind of model for Africa and other regions to follow, in terms of the deep dive they have done with the AI treaty and how they are using that to shape how we can have a good human rights approach to AI itself. So now, answering directly the question that you posed to me: whatever advice we would give African governments would be within the context of what we have seen. I want us to understand that hard laws may not necessarily be the starting point for African governments. 
It might be soft laws, working with technology platforms to look at codes of conduct and using lessons from that to progress to hard laws. Of course, also understanding that governments must begin to think regulation in ways that balance the needs of citizens and some of the disadvantages that we do not see or do not want to see, but that we bring citizens themselves into the conversation such that we are also encouraging innovation. As much as we’re encouraging innovation, we’re also ensuring that the rights of others are not abused. It’s going to be a long walk to freedom. However, that journey must start with Africans, African civil society, African businesses, African governments investing in the right set of meetings, investing in the right set of research, investing also in the right set of engagements that can get us, again, to become part of the global conversation, but also understanding that the regional elements of the conversations must be taken on board. Especially given the fact that human rights abuses across the region are becoming alarming, and that we now have more governments that are interested in not opening the space; rather, you know, they want to muffle voices, and freedom of association itself is also affected. So when you look at the civic space ranking of CIVICUS for the region itself, it then again gives the picture as to how, some way somehow, some of these conversations might not necessarily be something that would excite the region. But again, this is an assumption; we can still begin to look for that stakeholder pressure in ways that brings African governments to the table, in ways that helps them to see the need for this and also the need for us to get our voices into global platforms.

Ian Barber:
Thank you, Oyebisi, that’s great, and thank you for stressing again the importance of the multi-stakeholder approach, the need for civil society and governments to work together, and bringing this diversity of perspectives and African voices and governments to the table, which requires preparation as well. So thank you. I guess to the organizers and the IGF, I’m not sure what the timing is in terms of whether we’ll be kicked out of the room or not; if there’s a session immediately afterwards I’m not entirely certain, but I don’t see anyone cutting me off. I think it’s a lunch break, so what I’ll do is I’ll just say some brief final comments, and then if anyone has any particular questions or wants to come up to the speakers, that might be a more helpful way of moving forward. I don’t want to stand in between people and their food, never a good position to be in. Pratek, if you want to make one final… I think there was a question from…

Pratek Sibal:
I mean, I have no answer, but I think it’s an important question. It’s always tricky, particularly when we are dealing with authoritarian regimes, to put in frameworks which may be used in whatever way possible. So I have no answer, but I think it’s an important question, and we should give some time to that.

Ian Barber:
Thank you. I just want to say that I think we began this session with a really crucial acknowledgement that there are truly glaring gaps in the existing discourse between human rights and AI governance, and that it’s really key for all stakeholders to come in with global perspectives from industry, from civil society, from governments, from other champions on these issues. I think we’ve just started to shine a spotlight on these issues. I think we’ve also journeyed through what is really needed in terms of a human rights approach to AI governance. It’s one piece of the pie, but a critical one. And I think it’s just key that we continue to firmly root all efforts on AI governance in the international human rights framework. So thank you so much to the speakers in person here and those online. Thank you for your patience, and apologies for going over and for not being able to field all the questions. But I would encourage you to continue to come up personally and speak to the speakers yourself. Thank you. Thank you.

Audience

Speech speed

168 words per minute

Speech length

450 words

Speech time

160 secs

Ian Barber

Speech speed

203 words per minute

Speech length

3949 words

Speech time

1168 secs

Marlena Wisniak

Speech speed

169 words per minute

Speech length

1895 words

Speech time

671 secs

Oluseyi Oyebisi

Speech speed

156 words per minute

Speech length

1058 words

Speech time

407 secs

Pratek Sibal

Speech speed

168 words per minute

Speech length

2632 words

Speech time

941 secs

Shahla Naimi

Speech speed

197 words per minute

Speech length

1782 words

Speech time

542 secs

Speaker

Speech speed

171 words per minute

Speech length

680 words

Speech time

239 secs

Tara Denham

Speech speed

195 words per minute

Speech length

2361 words

Speech time

728 secs

The road not taken: what is the future of metaverse? | IGF 2023 Networking Session #65


Full session report

Audience

The discussion revolved around various significant issues concerning the metaverse. One key point raised was the presence of structural disadvantages in the adoption of metaverse enabling technologies. It was pointed out that these technologies are primarily developed in countries with high rates of IT development, placing developing countries at a disadvantage. It was acknowledged that developing nations need to catch up to match the level of technological sovereignty and metaverse connectivity that Western countries have achieved.

The importance of regulation for the metaverse was heavily emphasized. Regulation was seen as crucial for ensuring the value proposition and continuous growth of the metaverse. It was noted that the development of digital platforms has been accelerated by the COVID-19 pandemic. However, concerns were raised regarding the need to address standardisation and interoperability issues, as well as regulatory challenges associated with generative AI. These challenges underscored the necessity of effective regulation to navigate and address the complexities of the metaverse.

The absence of regulation for current metaverse and IT companies was highlighted as a concerning issue. It was noted that these companies operate without specific jurisdiction, leading to a lack of understanding regarding their regulatory framework. Furthermore, it remains unclear whether metaverse companies should offer digital citizenship, further complicating the regulatory landscape. The need to establish clear regulations and frameworks for metaverse and IT companies was deemed essential to mitigate potential risks and ensure accountability.

Privacy and jurisdiction concerns were also brought to attention. It was argued that digital citizenship in the metaverse raises questions regarding privacy and jurisdiction that demand robust resolution. The implications of privacy, jurisdiction, and applicable law in the metaverse need to be properly addressed to foster a safe and secure environment for users.

On a positive note, it was mentioned that there is existing legislation that can be applied to the metaverse, depending on the specific use case. Examples of existing regulations include those governing personal data, digital identities, electronic signatures, and payment interoperability standards. It was also noted that the hosting of personal data, whether in the metaverse or not, is governed by certain regulations. This recognition of existing legislation provided a ray of hope in terms of navigating the regulatory landscape of the metaverse.

The discussion also delved into the concerns surrounding the conflation of religious beliefs and technological advancements. It was highlighted that this can potentially challenge the structure of human personality. The importance of distinguishing the real world from the virtual world and the potential dangers of blending religious dogmas with technology were emphasised.

Technical challenges were also addressed during the discussion. It was mentioned that one potential bottleneck limiting the growth of the metaverse is lag or delay in connections. This issue needs to be properly addressed to ensure smooth and seamless user experiences within the metaverse.

The topic of regulation for safety was explored, with an emphasis on the limitations of relying solely on regulation. It was argued that regulation is often influenced by lobbying and tends to be abstract, while violations are concrete and precise. This highlighted the need to find a balance between regulation and direct accountability to ensure a safe environment within the metaverse.

The importance of holding platforms accountable was also emphasised. It was noted that technology plays a crucial role in collecting evidence, studying algorithms, and monitoring platform behaviour to effectively hold platforms accountable. This recognition highlighted the significance of technological advancements in ensuring platform accountability.

There were also specific discussions related to user experience and feedback. It was underscored that user experience is crucial and that having an individual log can be beneficial for both users and providers. User feedback was seen as essential for improving the metaverse and enhancing the overall user experience. The value of user feedback and the potential for using individual logs for accountability purposes were highlighted.

Other noteworthy observations included concerns about data collection and utilisation in the crypto metaverse, as well as the preference for quicker onboarding processes that do not gather excessive user data. Additionally, the abundance of digital assets generated by generative AI in the metaverse was seen as a potential threat to their value. It was estimated that the metaverse could be worth $5 trillion by 2030, but the abundance of digital assets could decrease their value.

In conclusion, the discussion surrounding the metaverse touched on a wide range of issues. It brought attention to the need to address structural disadvantages in technology adoption, regulate the metaverse to ensure its value proposition and continuous growth, resolve privacy and jurisdiction concerns, and distinguish the real world from the virtual world. Existing legislation was acknowledged as a potential framework for regulation, while technical challenges and user feedback were highlighted as important factors in the metaverse’s development. The discussion also raised concerns about data collection, asset value, and the impact of blending religious beliefs with technological advancements. Overall, the in-depth exploration of these various issues shed light on the complexities and considerations surrounding the metaverse.

Vakhtang Kipshidze

The Russian Orthodox Church recognizes the existence of the metaverse but asserts that it is a man-made and imperfect world that imitates God’s perfect creation. Vakhtang Kipshidze, a representative of the Church, shares this view and emphasizes that the metaverse is a human creation seeking perfection.

Kipshidze expresses concern about the metaverse becoming entirely secular, excluding religious values. He advocates for integrating religious values into metaverses to counteract religious exclusion and ensure inclusivity. This promotes peace, justice, and strong institutions within virtual worlds.

Kipshidze also raises concerns about the relationship between privacy and freedom in the metaverse. He highlights the close tie between privacy and freedom, warning that violating privacy in virtual environments can lead to a loss of individual freedom. It is crucial to establish privacy protections to safeguard personal freedoms in the metaverse.

Moreover, Kipshidze discusses the challenge of translating human encounters to the virtual realm. He argues that values like love may not have the same impact in virtual interactions as in face-to-face experiences within families and religious communities. Careful thought and consideration are needed to nurture important values in the metaverse.

Furthermore, Kipshidze expresses worry about the potential negative consequences of excessive immersion in the virtual world of metaverses. He believes that obsession with the metaverse can harm individual freedom and overall well-being. Balance and moderation are essential when engaging with virtual platforms.

Additionally, Kipshidze cautions against mixing religious and technological issues, such as digital immortality. He believes that combining religious and non-religious elements in virtual spaces could endanger the structure of human personality. This raises questions about the impacts of merging religious and technological concepts within the metaverse.

Finally, Kipshidze emphasizes the significance of distinguishing between the real world and the virtual world. He sees the issue of immortality as a challenge in differentiating the two realms. Bringing religious dogmas into the realm of technology should be avoided. Critical thinking and discernment are necessary when navigating the virtual landscape.

In summary, Vakhtang Kipshidze’s perspectives shed light on various aspects of the metaverse. The Russian Orthodox Church recognizes the metaverse as a man-made and imperfect creation. Kipshidze’s concerns and recommendations revolve around integrating religious values, protecting privacy and freedom, nurturing important values, avoiding obsession with the virtual world, and maintaining a distinction between the real and virtual realms. These insights contribute to the ongoing discussion on the implications and impact of metaverses in society.

Alina

Regulating the metaverse, a virtual reality space where users interact with computer-generated environments and others, poses complex challenges due to jurisdictional uncertainty and the potential for companies falling under multiple jurisdictions. The metaverse operates globally, making it difficult to determine which laws and regulations should apply. This issue is further complicated by conflicting laws on technology, privacy, and security in different countries. Finding a consensus on metaverse regulation becomes a formidable task.

An important concern for regulation is the standardization process and interoperability. As the metaverse evolves, establishing common standards and protocols is crucial for seamless integration and communication between platforms and virtual worlds. This ensures consistent experiences for users across different environments. However, achieving standardization is complex and necessitates collaboration among stakeholders.

On a positive note, the metaverse holds the potential for digital immortality. Avatars in the metaverse can learn and mimic real-life individuals, allowing their existence to continue even after their physical demise. This raises philosophical questions about identity and ethical considerations regarding creating digital replicas of deceased individuals.

Additionally, the concept of a digital state and digital citizenship is emerging within the metaverse. Individuals can have a presence in multiple metaverses, similar to having dual or multiple citizenship in the physical world. This concept offers intriguing possibilities such as digital societies and rights and responsibilities for digital citizens. However, it also raises concerns about governance, accountability, and potential inequality or exclusion within virtual communities.

In conclusion, regulating the metaverse is complex due to challenges related to jurisdiction, standardization, and interoperability. The metaverse offers potential for digital immortality through avatar preservation and the emergence of digital states and citizenship. While these advancements present exciting opportunities, they also require careful consideration of ethical and societal implications. Policymakers, industry leaders, and society as a whole must collaborate to shape the metaverse’s future while maximizing its benefits and mitigating risks.

Daniil Mazurin

AI plays a crucial role in the development of metaverses, as demonstrated by the integration of OpenAI’s ChatGPT into our daily lives. With over 180 million monthly users, ChatGPT showcases the widespread adoption of AI technology.

The current metaverses built by companies like Meta or in the blockchain space, such as Sandbox or Decentraland, are unlikely to achieve mass adoption. This highlights the challenges and limitations that need to be addressed for metaverses to become widely accessible and appealing to the general public. The ideal metaverse should combine real-life experiences, virtual worlds, augmented reality (AR), and AI technologies. Meta’s Rayban AR glasses exemplify a product that integrates the metaverse into society by blending the virtual world with our physical reality.

Proper regulation is essential to govern innovative technologies like the metaverse. Lessons from the crypto industry emphasize the importance of regulating such industries to ensure compliance with legal and ethical boundaries.

The development and expansion of the metaverse face challenges related to processors and software technologies like Unreal Engine and Unity Engine. Powerful processing capacities are required for advanced virtual worlds, and accessing such metaverses without appropriate devices can result in a subpar experience.

Effective user onboarding and verification processes are crucial for enhancing user interaction and platform security. However, concerns regarding privacy and data misuse arise when considering user data management. Addressing these concerns is integral to maintaining user trust and safeguarding personal information.

In an ideal metaverse, digital assets should have a limited supply. This scarcity contributes to the creation of demand and enhances the value and ownership experience within the metaverse. Additionally, generative AI can be used by artists to enhance their artwork, rather than replacing them entirely. Furthermore, AI can be utilized to create digital immortality, where AI systems simulate deceased loved ones. This technology allows individuals to continue communicating with their loved ones even after their passing. However, acceptance and implementation may depend on religious and moral considerations.

In summary, AI plays a significant role in metaverse development, manifesting in the integration of ChatGPT into our daily lives. However, current metaverses face challenges in achieving mass adoption. The ideal metaverse merges real-life experiences, virtual worlds, AR, and AI technologies. Proper regulation is necessary to balance innovation and mitigate risks. Advancements in processors and software technologies are essential for metaverse expansion. User onboarding and verification are critical for user interaction and platform security, but privacy concerns must be addressed. Scarcity of digital assets and the use of AI for digital immortality can enhance the metaverse experience.

Moderator

The analysis provides insights into various arguments and perspectives surrounding metaverse technology. One argument emphasises the importance of considering values and preserving freedom in the metaverse. It highlights that religious communities should be included in discussions about metaverse technology, as sometimes the metaverse can undermine religious values. The analysis suggests that the preservation of privacy in the metaverse can ensure the protection of freedom. However, it also cautions that an excessive obsession with the metaverse can have detrimental effects on freedom.

Another viewpoint discusses the opportunities and threats posed by metaverse technology. It acknowledges the potential for the metaverse to be utilised for educational and healthcare purposes, which can contribute to SDG 4 (Quality Education) and SDG 9 (Industry, Innovation, and Infrastructure). However, the analysis also recognises the potential for crimes and abuse in the metaverse, raising concerns about safety and ethics. It references a report from the Center for Global IT Cooperation, which provides analytical insights into the metaverse’s impact.

Additionally, the analysis raises concerns about the potential structural disadvantages of metaverse technologies for developing countries. It points out that most metaverse technologies are developed in high IT development countries, primarily in Western Europe, leaving developing countries at a disadvantage due to technological limitations. This observation aligns with SDG 10 (Reduced Inequalities) and SDG 9 (Industry, Innovation, and Infrastructure), advocating for more inclusive development and support for developing countries in adopting metaverse technologies.

Furthermore, the analysis advocates for the active involvement and regulation of metaverse technologies by the governments of developing countries. It argues that developing countries should prioritize the regulation of innovation to effectively navigate the challenges and opportunities presented by the metaverse. This viewpoint aligns primarily with SDG 9 (Industry, Innovation, and Infrastructure) and emphasizes the importance of government intervention for equitable development.

Lastly, the analysis stresses the necessity for audience engagement and idea sharing. It highlights the value of encouraging the audience to actively participate by raising their hand, sharing ideas, or asking questions. This perspective aligns with SDG 17 (Partnerships for the Goals), emphasizing the importance of collaboration and partnership to fully realize the benefits of metaverse technology.

In conclusion, the analysis of metaverse technology presents a diverse range of arguments and perspectives. It underscores the need to consider values and preserve freedom in the metaverse, highlights the opportunities and threats posed by metaverse technology, raises concerns about the potential structural disadvantages faced by developing countries, advocates for government involvement and regulation, and stresses the importance of audience engagement and idea sharing. Overall, this analysis offers valuable insights into the complex nature of metaverse technology and its implications for various stakeholders.

Session transcript

Moderator:
Good morning, dear colleagues. I’m glad to see everyone here today. We’ll have a discussion, networking session on the topic of the future of the metaverses. I would really recommend and urge everyone to sit closer to the presidium as I think that this format better be realized as a form of generic exchange of ideas rather than speakers speaking their prepared reports. But keeping this in mind, we still will have several speakers with prepared reports on the topics of the development of the metaverses, on the future of metaverses, on ethical reasons behind the development of such technologies, and general views. Some of our speakers are representatives of the civil society, others of the academia, and we also have several people who are involved in NFT and metaverse development projects. So hopefully, this session will be interesting, involving, and I really urge participation from everyone. Our first participant of the discussion is the member of the, you’ll be surprised, Russian Orthodox Church, Vakhtang Kipshidze. I think that Vakhtang has joined us online. Vakhtang, can you hear us? I can see that Vakhtang is online, but maybe he has some technical issues, and we should start with another speaker, Daniel Mazurin, who is also online. Almost all of our speakers are currently online. This should say something about the development of metaverses online already. Here I see. Daniel, are you with us? Okay. Okay, I can see. Vakhtang has joined us. It’s late night at Moscow, but still, thank you very much for joining IGF.

Vakhtang Kipshidze:
Good morning, dear colleagues. Thank you so much for inviting me to this forum. And first of all, I would like to start by saying that it is quite natural for the Russian Orthodox Church to take part in such discussions about metaverses, because technologies nowadays are so developed that religious communities cannot just stay aside from these discussions. And particularly, this is true about metaverse. What do we consider metaverse to be? Metaverse, as I think, is a man-made world that is controlled by man, actually. However, the problem with this world is that it claims to be perfect. We religious people are actually used to living in an imperfect world. And actually, religions, and Christianity at least, try to find the recipe to overcome sin and the very fact that this world is imperfect. However, metaverse is a parallel world. And this parallel world sometimes tends to put religions aside, saying that this world is secular, and values which are actual in this world have nothing to do with the religious values which are widely widespread in our contemporary, not virtual, but real world. First of all, I would like to say that the imperfectness of the real world that we have around us, and everybody can actually test this imperfectness on his own skin, actually goes to the metaverse, to the virtual world, which is being established by us people. Just to the opposite, the real world, as we believe as religious people, is created by God. So I would say the way how we combine these two worlds in our mind is very crucial and important for us. My main idea is about values. How can we support values in the real world and try to bring them to the virtual world, to metaverse? It is not a simple task. I would like to stress that our church actually tries to involve all technologies, and particularly the virtual world, for Christian testimony. However, it is very difficult to go through to the hearts of people. 
And of course, metaverse is a very material world. And as you know, even better than me, this world is actually directed by material value and material income. So it is very difficult for religious communities to testify about values. Here during your session, as I read, you are going to discuss not only, I would say, advantages, but also disadvantages of the metaverses. And particularly you will discuss crimes that are being committed there. And I would like to say that these crimes, if we judge by their consequences, are very severe, because people sometimes can be actually deprived of their privacy. Our church throughout its history actually testified that privacy is very connected with freedom. If you are deprived of privacy, sooner or later you will be deprived of freedom. And freedom is a real value of humanity that should be saved and protected everywhere. This is one thing. And the other thing for you to discuss is that our real humanity, I would say, humanity which got used to living in the real world, not virtual, throughout its history found a way to produce values and produce love. The most important value is love. And love is not a simple value to create and to establish. Love always grows in the context of family, in the context of relatives, in the context of religious sacraments, if you are a religious person. All that is very, I would say, questionable in the world of metaverse. So, I think that here, at this stage of development of the human race, we should think about values and how these values would be protected. And it should be our good will to go on the path of protecting these values. The other thing I would like to stress is that sometimes, and we all see that, people are being obsessed by the virtual world, by metaverses. And this obsession, I would say, is very detrimental to the freedom and well-being of the human personality. Again, we, humanity, are just very well acquainted with obsessions of different kinds. 
And obsession of the virtuality is a new kind of obsession. And so, if you want to somehow find a way to just fight this obsession, you should elaborate new approaches. And this is not a simple task. So, with that said, I would like to thank all organizers of this forum and wish you good luck in your discussions. If you, if it is possible, just, I am open to the questions that could come from your side. So, thank you very much indeed.

Moderator:
Thank you very much, Vakhtang. Thank you also for taking the time to join us. And we would really encourage our participants to ask questions, to engage, and hopefully you’ll stay with us during the whole discussion. I must also tell a little bit about the organization I represent and that hosts today’s networking session, called the Center for Global IT Cooperation. It’s a think tank which deals with questions of digital development, transformation, digital economy, internet governance, and all sorts of things digital. Recently, we have contributed an analytical report on the theme of metaverses to T20 within the format of G20. It was also dedicated to the ethical issues which arise during the development of metaverses, during the usage of metaverses, possibilities of crimes, abuse, and also opportunities which metaverses can provide in terms of education, in terms of healthcare, and all sorts of things which come with it. And I think that the best way to elaborate on the positive side of metaverses would be to give the floor to someone who deals with them directly, works with projects connected to metaverses, NFT technologies, and metaverse-enabling technologies. Today we have with us our dear friend Daniil Mazurin, a young entrepreneur, businessman, startup guru. Daniil, is everything all right? Do you have, yeah, you’re supposed to speak. Great.

Daniil Mazurin:
Awesome. Thank you so much, Alim. Long time no see. It’s a pleasure to be here. Always grateful for the opportunities that you give us. So, from the technologist’s point of view and coming from the private sector, I’d like to start with one thesis. And this thesis is that we are living in and looking at one of the most, if not the most, interesting periods of human history in terms of technological integration into our society and our daily lives. And I’m specifically talking about the artificial intelligence that we have to talk about today, because metaverse tech and AI are extremely connected. So I don’t know about, you know, technologies that we had in ancient Egypt, forgotten technology, but in modern society, I believe that AI plays a very big role. And we are already seeing a lot of users of ChatGPT, right? The AI model developed by OpenAI. There are more than 180 million users daily, oh, sorry, monthly. And that was the stat in August. And metaverse technology, as I said, is very connected with artificial intelligence, because we cannot develop a proper metaverse or virtual world or artificial world or augmented reality world without AI integration. So thus, I’d like to state that, you know, we’re living in a very, very interesting period of human history. And already, you know, we’ve already tested on ourselves how ChatGPT influences our lives. And the same thing would be, I believe, with metaverse technology. Right now, what we have on the market, and the market is not very bright right now, of course, because, you know, as stated in the description of the agenda, a lot of corporations are stopping developing metaverse tech. Why? Well, I don’t know about the directors of the corporations, but I can see a lot of startups, especially in the third world countries, developing metaverse projects, and they’re pretty successful. 
And they are being bought by many businesses and corporations, and their APIs are being used in our technology, for example in the startup industry. So we are seeing a lot of things going on. But we don't see real integration. Why? Personally, I think that modern metaverses are not what a metaverse should look like. A metaverse should combine not only virtual worlds, the VR helmets and VR glasses you have right now from Meta and other corporations; a metaverse should include real life too. And we can combine real life with the virtual world using AR technology. You have probably heard and seen the recent news about the Ray-Ban and Meta AR glasses. This is one of the biggest AR and metaverse products for integration into society. It's a brand-new Ray-Ban, it's very good for young people, and it's very cool. So I believe that by making such mass-adoption products we will be able to integrate this tech into our society. And yes, regulation is a must, right? It is needed. We have seen what happened in the crypto industry over the past two years: a lot of scams, and people lost a lot of money. So such innovative industries should be regulated, of course. I'm not talking about US-style regulation, where you end up banning a lot of companies, nor about Chinese-style regulation, where you just ban everything and develop the technology on your own. I'm talking about good regulation, where you give businesses the opportunity to thrive and give startups the opportunity to properly make money and improve the technology. And this tech, metaverse technology, VR, AR, AI, should be regulated first of all in third-world countries, where innovative tech gives an opportunity to increase GDP and the quality of people's lives, and overall to make very cool implementations and build a future in those countries. So yeah, I won't make a long speech.
Overall, I would like to say that the metaverse we will see in the upcoming years is not the metaverse that we have now, like what Mark Zuckerberg is building, or what we have in the blockchain space, like The Sandbox or Decentraland. These are not the metaverses that will be mass-adopted. The metaverse will be a combination of VR, AR, and AI technologies. And specifically, if we're talking about AI, it is already being used, for example, for integrating AI into NPCs in gaming and virtual worlds, or even in augmented reality, in terms of GPS mapping and automatically creating immersive experiences with artificial intelligence for AR glasses, or just AR applications on our smartphones. So yeah, I think this is it for me.

Moderator:
Oh, so Daniil, I have a brief question for you. What do you think about the following thesis? Taking into account all the positive sides that metaverse-enabling technologies can provide, for instance in corporate education formats, or even in spheres like autopiloting and so on, could there potentially be some structural disadvantages? You talked briefly about developing countries, and I can clearly see the problem: metaverse-enabling technologies are amazing, very inspiring and great, but we should acknowledge that they are developed only in countries with a high rate of IT development, GDP and so on, mostly Western countries. Could there potentially be a situation of structural disadvantage, where the developed world already has access to such technologies and uses them, while the developing world once again has to try to reach that level of technological sovereignty and metaverse connectivity, and is unable to do so simply because of structural differences? What should be done about it? Should this question be addressed as well?

Daniil Mazurin:
Yeah, absolutely. I think that's a great question, and that's a great statement from you, because there is an absolute structural disadvantage nowadays in terms of technology creation coming from the West and from China. That's why I said that the first countries that should properly regulate and give startups the opportunity to thrive and build products should be third-world countries. Yes, of course, the United States and Europe have a big advantage in terms of technology and technological resources. But a lot of things are changing nowadays, and that's why third-world governments should regulate innovation first, while the other countries are still trying to regulate and have other interests in their heads. So yeah.

Moderator:
Thank you very much.
So maybe there are some ideas from the audience. I also see that we have around 20 people online, and I would really encourage anyone to raise their hand and ask a question, or maybe propose an idea of their own. Yeah, I can clearly see a gentleman over there. Do we have a mic? Yeah, there's a microphone.

Audience:
Thank you. Just a few things. COVID acted as a catalyst for so many different digital platforms to emerge, and it showed us some of the value proposition of the metaverse. But as you know, the standardization process is still ongoing, and there are interoperability issues. There have been certain projects, for example around digital immortality: a digital avatar can learn about a certain real-life person, and if that person is no longer there, the avatar lives on; the question is how accurately it can mimic a real person. So there are certain advantages of using the metaverse. My question is this: now we have generative AI, there are talks about regulation, and questions about how AI-generated content is going to be treated, for example whether it would be acceptable in certain areas or not. There are so many different platforms, such as ChatGPT and Midjourney, with which you can create so many different types of content in the metaverse. How important will the role of regulation be in ensuring that the value proposition of the metaverse continues to grow and offers many opportunities for people in different countries? Thank you.

Alina:
Yes, may I take this one? I actually have a point about regulation. A big reason why metaverses are not regulated is that we do not understand in which jurisdiction they actually operate. IT companies made the metaverses, so where do those metaverses actually exist? Some people think of the metaverse as the first step toward a digital state. Can it then afford digital citizenship to a person who is in the metaverse? And if a person is in many metaverses, it's as if they had double or triple citizenship. So the question is: do we need to regulate metaverses, or the IT companies, and maybe create some kind of framework for the whole metaverse conception and for DLT technologies? Because we still don't have regulation even in the financial market for things like DLT and cryptocurrencies; they still exist somewhere on the internet, without a particular jurisdiction, without a country, without anything. We have not even decided whether the IT companies operating social networks face any regulation apart from that of the country they are registered in. This is, of course, a very difficult question, and maybe we are just at the first step of it. And apparently the metaverse can give you digital immortality, because it is a kind of digital prison for people who are no longer with us. So you're right that it is important, but I don't think there is an answer to that question yet.

Audience:
Thank you. Just a couple of other things I would like to mention: since you mentioned digital citizenship, that also raises privacy-related issues and jurisdictional issues, such as which law is going to be applied to whom. That is also a big problem that needs to be resolved. So thank you.

Thank you for the floor. Just to contribute to that: I think there is actually a partial answer to some of these questions. It depends on the use case in the metaverse. If it is anything related to personal data, there is actually regulation that already applies around digital identities, electronic signatures, payments, and interoperability standards, though that is a public-sector use case. The same goes for the hosting of data: as soon as it is personal data, there is regulation that governs it, whether it is in the cloud or the metaverse. So there are pieces of legislation that are already applicable to the metaverse, depending on the use case. Yes, there are some Wild West elements around NFTs, gaming, et cetera. But if you look at it from a US, Chinese, or European Union context, there is legislation in place that governs key elements of the metaverse, whether you use it or not. So just a small contribution.

Moderator:
Thank you for the brief contribution. I'll just give the floor to Daniil Mazurin, and then we'll give a brief word to Vakhtang.

Daniil Mazurin:
So yeah, I would just like to add to Alina on that question. There is no problem with regulation at all; I somewhat disagree with what Alina said, because most of the companies and startups building metaverses are incorporated in some country, even crypto companies; they are usually incorporated in Hong Kong right now, or in the Seychelles. So the problem is not regulating and actually reaching these companies. The real question, I think, is how we should regulate them. Do we need to give them full freedom of action? Or do we need to really look after them and watch how things go in crypto, AI, or the metaverse, because they can influence Gen Z, destroy the world, et cetera? I think so. And the question that was asked is how we should properly regulate AI in the metaverse. Well, I think AI is already being regulated, and the companies building AI already apply self-regulation on their own. If we're talking about OpenAI, the biggest company right now, they essentially regulate their own models: you cannot, for example, generate 18+ content using their generative AI, and you cannot ask certain questions, or get answers to questions related to certain specific topics. But that could go wrong, right? They could essentially remove this self-regulation of the AI, and that is the real problem. That's why governments should properly regulate them, because AI is dangerous. We have to realize that if it goes beyond OpenAI's servers, or something else happens, it could turn into a big issue not only for the company but for humanity in general.

Moderator:
Thank you, Daniil. Let's give a brief word to Vakhtang as well; he raised his hand.

Vakhtang Kipshidze:
Thank you so much. It is very remarkable that there are people here who actually raised the question of digital immortality. Being a representative of a religious organization, I would like to say that we should be very careful to keep religious issues and technological issues separate. If, at some stage of the development of technology, religious issues such as immortality and non-religious issues such as technological progress are mixed, I think it poses a big danger for humanity. If people believe in immortality, that is a good thing; but if they come to think they can obtain this immortality now, simply through some technological procedure, that is a very big challenge, because we cannot just bring the space of dogmas into the space of technology. In that case, I would say the whole structure of human personality could be endangered, because at some stage a person will not understand whether he or she has a body or does not. It is, I think, a crucial issue to see the difference between the real world and the virtual world. And sometimes, as the issue of immortality shows well, this mixing is very noticeable. Thank you so much.

Audience:
Hi. I don't know much about the metaverse. I was wondering what the bottleneck is for, let's say, the spread, the growth of the metaverse right now, and whether it is technical. And if it is technical, would lag or delay in the connections be one of the big challenges, or not?

Daniil Mazurin:
Yeah, that's a great question, and I actually wanted to make one very important point in answering it. The real bottleneck in the creation and expansion of metaverses is essentially processors, because you cannot really load a rich world online, live in it, and communicate with other avatars, other people, in that downloaded online world, right? This is the main technical issue. For example, if you open a metaverse such as Decentraland right now and you don't have a gaming computer, an MSI machine for example, your computer will find it very hard to process things, and it will be very slow. This is one of the issues. And if we're talking about VR, those VR glasses are also very slow. It is also closely connected to the development of Unreal Engine and the Unity engine, because a lot of things in the metaverse depend on these infrastructures. Right now you can see a whole new upgrade from Unreal Engine in how things are rendered: using Unreal Engine, you will be able to see literally every detail that was animated, right? So yes, there are bottlenecks, but sooner or later we will see developments in these engines and in computer processors. Sooner or later it will happen.

Audience:
Good morning. My name is Claudio Agosti, and I'm a platform auditor. Although I welcome the existence of regulation, I also believe it cannot be seen as the solution that will guarantee our safety, because regulation is the output of lobbying, and because regulation needs to be abstract while a violation is concrete and precise. In the past years, we saw that the only way to investigate platform misbehavior was for researchers to develop their own technology to collect evidence, study the algorithm, study the platform, and then hold the platforms accountable before data protection authorities, through media reporting, or through government reporting. So the question, which I believe is more for Daniil, is this: would you allow, for example, every user of your tool to save a log of what is happening during their experience? And would you accept that this log could be used to hold you accountable, or at least to raise questions about why a system behaved a certain way? Because in the end, the experience a person gets is individual; it depends on an algorithm that will not repeat its own behavior in the future, and on other contextual elements that will never be repeated. Only having evidence of what happened, a log or a video, can allow a person who suffered something to ask for an explanation or for attribution. Thank you.

Daniil Mazurin:
Could I ask a question in return? Is your question about whether I would allow my platform or tool to be audited or regulated?

Audience:
That part is normally defined by regulation: for example, whether your platform needs to run in a sandbox, or whether you need to document it as high risk or low risk; that is unavoidable. What I was asking is something more. Normally, regulation can let you certify your tool, but the problem is never in the tool itself; it is in the experience of your users. Are they the ones offering this information, or suffering harassment, et cetera? And if there is a log, an individual log of your experience, that can at least allow the user to ask a further question, or to offer you feedback to improve the tool. Yeah, absolutely.

Daniil Mazurin:
That's a great question, and I have personally communicated with a lot of auditors, platform auditors, and smart-contract auditors in the space. You know, it is partly a question of UI/UX, right? It is always better to skip the Q&A during the onboarding process for your tool, and it is always better to skip, say, user authorization, because it is long and not very useful for the user: the user wants to get in touch with your product as soon as possible and does not want to register and go through all that process. But then you never know your users' data, right? Yet nowadays it is essential, even in the crypto space, to know who your user is, or what wallet the user has. You essentially collect the information; the real issue is how you use this information. You can get rid of fraud on your platform when you know your users, or you can use user data to manipulate users and sell that data. So it is an issue of how you use users' data, rather than whether you need to collect it. Yeah.

Audience:
We have one more question. Sorry, if it's OK, I actually have two questions, one for Daniil, and I would also like a religious perspective, since we already discussed digital immortality. So I'll start with digital immortality. What happens right now is that we have a lot of content on the internet, and everybody who is online leaves a digital footprint; even after they are dead, the content remains on different platforms. Take the example of YouTube: there are so many lectures available from so many people, there are documentaries, and you can see so much content about people. The only difference I see with the metaverse and digital immortality is that if there is a digital avatar of somebody, you can interact with that digital avatar, whereas the content we have right now is not interactive: if there is a video on YouTube, you can't really interact with that person. From the religious point of view, let's also consider AI regulation: AI should not be discriminatory on the basis of religion, race, or other such factors. So once the regulation is there, and somebody's digital avatar lives on, I just want to know why that should be considered a bad thing. It can have so many advantages for the people who are related to somebody whose digital avatar lives on. The second question, for Daniil, is about the value that the metaverse has to offer. Let's talk about digital assets, for example. There was an article from McKinsey which estimated that by 2030 the metaverse is going to be worth $5 trillion, and it gave many reasons. One reason was scarcity in the real world: you have limited resources in the real world, while in the metaverse there is virtually no limit on digital assets. So we can compare: in the real world we have scarcity, but in the metaverse there is going to be abundance of everything.
Now, with generative AI you can generate digital assets; there are so many tools for that, and a lot of people are doing it. So won't that reduce the overall value of digital assets? You have scarcity in the real world but abundance in the metaverse, and in economic terms abundance, in some cases, is not good: it reduces the value of assets. So those are my two questions. Thank you.

Daniil Mazurin:
Yeah, I can also add a little bit about the immortality thing, immortal avatars or an immortal digital persona, but let's start with the second question. First of all, you have to realize that in the metaverse, in the ideal type of metaverse, you will essentially own your items and assets, so there will not be an unlimited supply of items and assets that can be produced. Of course, if we're talking about generative AI right now, the utility and the price of 3D-rendered artworks have recently declined, because you no longer need to hire a 3D artist; you can just go to a generative AI and make your own art. You can consider this as something that creates unlimited supply, but you can also consider it a tool, because right now a lot of 3D artists, for example, use generative AI to generate pictures and then add their own art onto them, and the result becomes even brighter and more beautiful. But back to the metaverse: there still will not be an unlimited supply of assets and items, because of supply and demand. If we're talking about a blockchain-based metaverse, you have to sell NFTs: NFT lands, clothes for your avatar, and so on. So there will always be a limited supply, in order to create demand. That's the short version. As for the immortality issue, I strongly support this question, and I believe there is a future in it. I truly believe that creating an AI for a relative who has, unfortunately, died can be a good thing, but we cannot go too far with it, because I don't think that, from a religious point of view, it is a moral thing to do, right? So there will always be such issues. But for people who are willing to do this, who don't have any religious reservations, who are not religious, or whose religion allows them to do so, then why not, right?
Because you can always communicate with a person who is very important to you, and you will be able to do that. So yeah, thank you so much for your questions. Very interesting questions.

Moderator:
So thank you very much, Daniil, Vakhtang, and our dear colleagues. I think that time is running out, as our colleagues have already indicated. I thank everyone for their involvement in the discussion, and if you have any questions, we will be glad to talk in private after the session. Also, a small notice: tomorrow our organization is hosting a soiree, and we would love to invite all of you to take part. We will give more precise information after the session. Thank you very much, all of you.

Alina

Speech speed

177 words per minute

Speech length

284 words

Speech time

96 secs

Audience

Speech speed

164 words per minute

Speech length

1391 words

Speech time

508 secs

Daniil Mazurin

Speech speed

138 words per minute

Speech length

2194 words

Speech time

951 secs

Moderator

Speech speed

162 words per minute

Speech length

1125 words

Speech time

416 secs

Vakhtang Kipshidze

Speech speed

125 words per minute

Speech length

1127 words

Speech time

542 secs

The State of Global Internet Freedom, Thirteen Years On | IGF 2023 Launch / Award Event #46


Full session report

Emilie Pradichit

Southeast Asia is currently facing significant challenges due to the presence of authoritarian regimes that employ cyber laws to target individuals who express dissenting views or defend human rights. These regimes often exploit the concept of national security as a pretext for suppressing freedom of speech and violating human rights. For instance, in several ASEAN countries, such as Thailand, Cambodia, Vietnam, and Myanmar, there are concerns about the lack of freedom, as highlighted by the Freedom of the Net report. In Thailand, the situation is particularly severe, with a human rights lawyer, Arnon, facing the possibility of 210 years in jail for advocating for reforms within the monarchy.

Another concerning development in Southeast Asia is the misuse of artificial intelligence (AI) for surveillance and content moderation by governments in the region. These practices have resulted in privacy violations and infringements on individual freedoms. Governments are increasingly regulating tech companies to ensure the enforcement of their laws. Notably, the Thai government has passed a decree obligating tech companies to remove content deemed a threat to national security within 24 hours. Additionally, AI has been misused for facial recognition surveillance, raising concerns about privacy and potential abuse of power.

Emilie Pradichit advocates for rights-respecting regulatory frameworks and holds tech giants accountable for the misuse of their platforms. She calls for the implementation of the United Nations Guiding Principles on Business and Human Rights (UNGPs) and the Organisation for Economic Co-operation and Development (OECD) guidelines for multinational tech companies. Pradichit suggests that tech giants should be held criminally and civilly liable for any harm caused by their platforms. She points to the Rohingya crisis and the use of platforms like Facebook to propagate hate speech against the Rohingya people to illustrate the urgency of her arguments.

The Freedom Online Coalition (FOC), which is primarily known among digital rights and online freedom groups based in Washington DC, lacks visibility and accessibility, especially among people from non-Western countries. To amplify its impact, FOC must work towards increasing awareness and engagement beyond its traditional base. This would involve conducting stakeholder engagements not only in Washington DC but also in other regions. Unfortunately, visa restrictions often hinder engagement with the global majority, making it difficult for individuals from these regions to travel to Europe or the United States.

Furthermore, FOC’s role becomes particularly crucial in light of the many elections scheduled worldwide for 2024. Civil society groups anticipate FOC to release statements targeting authoritarian governments and the private sector to safeguard democratic processes and protect human rights.

To effectively combat authoritarian governments online, FOC should invest in civil society and provide financial support to organizations fighting against digital dictatorship. Financial constraints often limit the abilities of these groups to engage in advocacy and carry out essential work.

Aside from these specific challenges, there are concerns about the local Data Protection Act in Thailand. While the government claims to have developed the Act by taking inspiration from the General Data Protection Regulation (GDPR) in the European Union, there are issues regarding effective oversight and remedy. The Act includes government-led exemptions that allow violations of data under the guise of national security.

Another aspect that deserves attention is the lack of dialogue and understanding of the local context in global exchanges. It is crucial for international diplomats and institutions to have a comprehensive understanding of the practices followed in each country to foster more effective collaborations and mutual understanding.

The overarching theme throughout these discussions is the importance of respecting and implementing international human rights law. Emilie Pradichit insists that civil society does not oppose international human rights law but rather desires governments to adhere to these principles. Concerns are raised about the ease with which governments deceive international institutions by creating an appearance of compliance with international standards.

In conclusion, Southeast Asia faces numerous challenges related to authoritarianism, cyber laws, and the misuse of AI. To address these issues, there is a need for greater awareness and engagement with organizations like the Freedom Online Coalition. Additionally, it is crucial to hold tech giants accountable, invest in civil society, strengthen data protection laws, foster meaningful dialogue, and promote the implementation of international human rights standards. These efforts are essential for safeguarding human rights, protecting privacy, and upholding democratic processes in the region.

Audience

During the discussion, one of the main points highlighted was the confusion surrounding the support mechanisms for online activists who are under threat. The speaker mentioned their ability to provide support for these activists, but there seems to be a lack of clarity on the specific services offered in different jurisdictions. To address this, an audience member sought clarification on the support services available in various legal contexts.

Allie Funk, who leads a team of seven people, stressed the importance of collective work and making tough decisions. This indicates that her team understands the challenges and complexities involved in supporting online activists who face threats. It shows their dedication to their work and the need for collaboration in achieving their goals. The audience showed gratitude towards Allie Funk for her closing remarks, indicating that her insights and perspective were valued.

One noteworthy observation from the discussion is the mention of SDG 16, which focuses on peace, justice, and strong institutions. This indicates the connection between the support for online activists and the broader goals of promoting justice and ensuring the protection of human rights. The speaker’s ability to provide support aligns with the goals of SDG 16.

Overall, the discussion shed light on the confusion surrounding support mechanisms for threatened online activists. It emphasized the importance of collaborative efforts, tough decision-making, and acknowledging the hard work of those involved in supporting these activists. The audience’s gratitude towards Allie Funk indicates the impact of her closing remarks and the appreciation for her insights. Moving forward, it is crucial to address the confusion surrounding support services and ensure a clear understanding of the resources available for online activists in different jurisdictions.

Guuz van Zwoll

The European Union (EU) has implemented regulatory laws, including the Digital Services Act (DSA), the Artificial Intelligence (AI) Act, and the Digital Markets Act (DMA), through extensive multi-stakeholder engagement. Some companies have also rolled out General Data Protection Regulation (GDPR) compliance across all countries in which they operate. These regulatory laws, such as the DSA, AI Act, and DMA, have received positive sentiment for maintaining a balance between strong regulation and the protection of human rights. They include transparency clauses and an appeal process for removed comments.

The Netherlands is committed to promoting the principles of the DSA, AI Act, and DMA. They have released an English translation of the Dutch International Cyber Strategy, urging other countries to adopt these EU regulations and implement associated human rights and democratic clauses. The Netherlands focuses on inclusive internet governance, integrating cyber diplomacy, digital development, and human rights work.

In addition, the Netherlands incorporates the multi-stakeholder model into internet governance, emphasizing digital security, governance principles, and digitalization in all their initiatives. They prioritize civil society engagement, running programs like the ‘Safety for Voices’ program to include diverse perspectives in governance decisions.

The Netherlands also supports human rights defenders and digital defenders at risk through initiatives like the Digital Defenders Partnership. They provide support in legal aid, physical protection, digital security, and psychological well-being. Transparency is a key component of the Netherlands’ global governance approach, advocating for the inclusion of global majority countries and multi-stakeholder involvement to protect human rights.

In summary, the EU’s regulatory laws, such as the DSA, AI Act, and DMA, strike a balance between strong regulation and protection of human rights. The Netherlands actively promotes these laws, advocating for their adoption and implementation of associated human rights and democratic clauses. They prioritize inclusive internet governance, incorporating cybersecurity, digital development, and human rights work. The Netherlands also supports civil society engagement, human rights defenders, and emphasizes transparency in global governance to protect human rights.

Olga Kyryliuk

Over the past decade, the field of internet freedom has witnessed significant changes and developments. Previously, topics like cybersecurity were widely perceived as unimportant and were poorly understood. However, there has been a notable shift in recent years, with cybersecurity garnering more attention and recognition. This growing awareness can be attributed to increased public understanding and recognition of the importance of internet freedom and digital rights.

The advancement of technology, particularly in the areas of artificial intelligence (AI) and blockchain, has brought about both new opportunities and challenges. While these advancements have pushed the boundaries of safety and security, they have also raised concerns about potential threats. The risks and challenges associated with AI and blockchain technologies are a cause for concern, reinforcing the need for robust regulation and safety measures.

In addition, a troubling trend of digital authoritarianism has emerged, characterized by internet shutdowns, content censorship, and the unregulated use of surveillance technology. Instances of internet shutdowns have increased globally, leading to a limitation of free expression and access to information. Moreover, the lack of effective regulation of private tech companies and tech giants has further exacerbated these issues. The use of mass biometric surveillance systems without proper legal safeguards is also on the rise, posing a threat to privacy and civil liberties.

To address these challenges, it is crucial to foster continued collaboration and dialogue. Concrete initiatives and partnerships, rather than just talk, are needed to tackle the growing threats to internet freedom. By engaging stakeholders from various sectors, progress can be made in tackling the complex issues surrounding internet freedom and digital rights.

Furthermore, the engagement of civil society in initiatives such as the Freedom Online Coalition (FOC) is of utmost importance. The involvement of civil society can provide valuable insights and perspectives in shaping policies and decision-making processes. Olga Kyryliuk, who leads an influential internet freedom project, stresses the need for better civil society engagement within the FOC. This can be achieved through periodic consultations on specific thematic issues, allowing for an open exchange of ideas and feedback.

The importance of regional and national communities cannot be overlooked in promoting internet freedom. The FOC should prioritize working with these communities and foster connections and partnerships between them. By bridging the gap between governmental representatives and regional communities, the FOC can play a pragmatic role in facilitating dialogue and collaboration.

However, the current state of the global digital compact and the Freedom Online Coalition calls for improvement. Civil society feels frustrated due to a lack of clarity and engagement opportunities. This restricts the meaningful participation of implementing partners in shaping policies and decision-making processes. It is crucial to establish clear venues and mechanisms that allow for effective engagement and collaboration.

Finally, it is important to exercise caution when adopting regulations from other regions, such as the European Union’s General Data Protection Regulation (GDPR). While these regulations may be seen as ideal, they should not be adopted without proper understanding and adaptation. Countries that directly implement GDPR as their national law have faced challenges during the enforcement phase. Therefore, dialogue and conversation with national legislators, as well as capacity building, are essential for the successful adoption and implementation of such regulations.

In conclusion, the past decade has witnessed significant changes in the field of internet freedom. While there has been progress in raising awareness and understanding, challenges remain in ensuring the safety and security of the digital space. Collaboration, engagement of civil society, and the development of concrete initiatives are crucial in addressing these challenges and protecting internet freedom and digital rights.

Oliver

Oliver expresses concern over the lack of transparency displayed by the Freedom Online Coalition (FOC) in their dealings with UNESCO guidelines. He argues that the FOC needs to be more open and transparent about their actions, implying that they may not be acting in the best interests of promoting freedom of expression and human rights in the digital space.

Furthermore, Oliver raises an additional concern about UNESCO’s guidelines, specifically focusing on the potential promotion of authoritarianism in the digital sphere. This highlights his worry that these guidelines may inadvertently facilitate the rise of oppressive regimes online. Both Oliver and the speaker share a negative sentiment towards these issues.

However, the summary lacks supporting evidence or specific examples to substantiate these concerns. Without further supporting facts or arguments, it is difficult to fully understand the basis for these apprehensions. Including additional evidence or examples would strengthen the arguments made by both Oliver and the speaker.

In conclusion, Oliver calls for increased transparency from the FOC regarding their dealings with UNESCO guidelines. He suggests that the FOC’s actions should be more transparent and urges them to openly share information. Additionally, Oliver expresses worry about UNESCO’s guidelines potentially promoting authoritarianism in the digital space. These concerns highlight the need for careful consideration and vigilance in protecting freedom of expression and human rights online.

Allie Funk

Internet freedom has been experiencing a steady decline for the past 13 years, marking 2023 as another year of regression. According to the assessment conducted by Freedom House, attacks on free expression have become increasingly common, with individuals being arrested for expressing their views in 55 of the 70 countries under review. Furthermore, governments in 41 countries are actively blocking websites that host political, social, and religious speech. These developments have contributed to a negative sentiment surrounding the state of internet freedom.

The crisis has been further exacerbated by advancements in artificial intelligence (AI). The rise of AI has led to intrusive surveillance, censorship, and the proliferation of disinformation campaigns. Generative AI technology has been misused in 16 countries to distort information, while 22 countries have instituted requirements for companies to deploy automated systems that censor speech protected under international human rights standards. These factors have contributed to a growing negative sentiment towards the impact of AI on internet freedom.

To address the urgent need to protect internet freedom, there is a call for the regulation of AI. The key argument is that regulation should not solely rely on companies, but rather center around human rights standards. It is important to increase transparency and understanding of the design, use, and impact of AI systems. The positive sentiment towards this argument reflects the belief that appropriate regulation is necessary to safeguard internet freedom.

In addition to regulation, there is a push for the inclusion of civil society in the AI regulation process. Currently, civil society is being left out in the race to regulate AI, leading to concerns about a lack of diverse perspectives and potential biases in decision-making. Emphasizing the need for involvement from global majority civil societies, this argument holds a positive sentiment.

Despite the challenges posed by AI, there is recognition that it can also contribute to bolstering internet freedom if designed and deployed safely. AI has the potential to help individuals evade government censorship and facilitate the detection of disinformation campaigns and human rights abuses. This positive sentiment signifies the belief that AI can be harnessed as a tool to protect and enhance internet freedom.

However, it is essential to avoid overshadowing long-standing threats to internet freedom by solely focusing on the regulation of AI. The neutral sentiment surrounding this argument highlights the need to maintain momentum in addressing broader issues related to internet freedom.

The European Union (EU) has emerged as a global leader in internet regulation. Bridging the gap between the Chinese model and the US laissez-faire approach, the EU has enacted significant legislation such as the General Data Protection Regulation (GDPR), which serves as a model for global data protection laws. The Digital Services Act and the EU AI Act are further examples of the EU’s commitment to internet regulation, earning positive sentiment and demonstrating their efforts to protect internet freedom.

The impact of internet regulations on human rights varies depending on the rule of law standards in each country. The sentiment surrounding this statement is neutral, emphasizing the need to consider the context in which internet regulations are implemented and their potential effects on human rights.

Governments have a crucial role in protecting internet freedom and ensuring meaningful multistakeholderism. For instance, the Netherlands is exploring strategies that merge cyber diplomacy, digital development work, and human rights aspects to safeguard internet freedom. Programs like Safety for Voices support human rights defenders and civil society organizations through digital security measures. This positive sentiment highlights the importance of government involvement in protecting internet freedom.

Lastly, multilateral bodies such as the Freedom Online Coalition can play a vital role in reversing the decline of internet freedom. Comprised of democratic governments committed to protecting internet freedom, the coalition serves as a platform for collaboration and advocacy. The sentiment towards this argument is neutral, acknowledging the potential impact of multilateral efforts.

In conclusion, internet freedom has been on a decline for the past 13 years, with attacks on free expression and website blocking becoming more prevalent. AI advancements have intensified the crisis by enabling surveillance, censorship, and disinformation campaigns. To protect internet freedom, there is a need to regulate AI, involve civil society in the decision-making process, and ensure good governance centered on human rights standards. However, AI also has the potential to enhance internet freedom if used responsibly. The EU has been at the forefront of internet regulation, but the impact of regulations on human rights varies across countries. Governments play a crucial role in protecting internet freedom, and multilateral bodies can assist in reversing the decline. Overall, it is essential to navigate the complexities of internet freedom and strike a balance between regulation and broader challenges.

Lisa

During stakeholder consultations conducted by Lisa, a representative of USAID, in various countries, a common concern emerged: dissatisfaction with existing international models of digital regulation. This sentiment has triggered a demand for a different approach, a third-way framework for digital rights that goes beyond the risk-based European model, the laissez-faire American model, and the state-based model adopted in China.

Stakeholders, particularly in countries that make up the global majority, expressed a desire for a digital regulation framework tailored to their specific needs and circumstances. They see the necessity of finding a middle ground to address the challenges faced by their nations.

The implementation of the General Data Protection Regulation (GDPR) and similar regulations, specifically in countries with different income levels and limited oversight capacity, has been perceived as onerous. This concern stems from the difficulties these countries face in fully implementing and complying with such regulations. Additionally, there is a noticeable lack of political will and politicization of some oversight bodies, further complicating the effective execution of digital regulations.

In light of these observations, there is a need for a broader conversation on what human rights protections and safeguards should look like in different contexts. Instead of imposing a one-size-fits-all approach, there should be an exploration of context-specific digital human rights protection and safeguards. This approach acknowledges the diversity of countries and their varying levels of development, eliminating the potential burden of regulations that may not align with their specific needs and capacities.

Overall, Lisa’s consultations highlight the dissatisfaction with current international models of digital regulation and the need for a third-way approach that considers the unique circumstances of each country. The difficulties faced in implementing GDPR and similar regulations also call for a more nuanced and flexible approach to digital rights. Engaging in a broader conversation on context-specific human rights protections and safeguards allows stakeholders to work towards a digital regulation framework that respects the rights of individuals while accommodating the realities of different countries.

Jit

Jit attended a United Nations conference with the intention of obtaining a deeper understanding of the global digital compact and seeking various perspectives on its merits. Jit approached the topic with a neutral stance, indicating an open mind and a desire to gain further insights. Specifically, Jit was interested in exploring the potential advantages and disadvantages of the compact.

During the conference, Jit actively participated in the discussion and initiated the topic of the global digital compact. This demonstrated Jit’s eagerness to engage with others and foster a robust conversation. The conference setting provided an ideal platform for an informed and constructive dialogue on the subject.

The focus of the conversation revolved around the impacts that the global digital compact could have on industry, innovation, and infrastructure, as outlined in the 9th Sustainable Development Goal. This goal aims to promote sustainable and inclusive economic growth by fostering technological advancements and improving infrastructure.

Jit’s neutral stance allowed for an unbiased examination of the global digital compact. By requesting insights on both the positive and negative aspects, Jit sought to gain a well-rounded understanding of its potential impact. This approach reflected Jit’s commitment to considering all perspectives before forming an opinion.

While the exact details of the arguments and evidence presented during the discussion are not disclosed, it can be inferred that the conference attendees shared their specific viewpoints and provided relevant information to support their claims. By facilitating an exchange of ideas and opinions, the conference allowed for a comprehensive analysis of the global digital compact.

In conclusion, Jit’s attendance at the UN conference on the global digital compact offered valuable insights into the topic. By adopting a neutral stance and actively soliciting perspectives, Jit exhibited a genuine curiosity and a commitment to exploring both the benefits and drawbacks of the compact. The conference setting enabled an informed and productive discussion centered around the impact of the compact on industry, innovation, and infrastructure, in line with SDG 9.

Session transcript

Audience:
I’m going to give it another minute or so. I know some other sessions are letting out. And then we’ll just get started. Thanks for joining us. We got you. We’ll take what we can get. All right. We’ve got like a whole workshop plan, so we’re going to Brussels and The Hague, where we’re doing events in both of those. Yeah, we’ll go home for a couple days. And then I’m taking a few days off there. So we’re going to have like a week. That’ll be nice. I’ve never flown with him. He’s big, right? Yeah. He’s like 45 pounds. I think I would be way too stressed out to have him down there. You know? So. Yeah. Okay. Let’s get going. Okay. Are we ready? How many people have a free call tomorrow? Oh, we’re good. Okay.

Allie Funk:
Thanks everyone for joining us. My name is Allie Funk. I’m Freedom House’s Research Director for Technology and Democracy. We’re really excited to host this conversation amongst three really brilliant folks that have taught me a lot about this field. What we’re gonna do today is I will give a very quick overview of Freedom on the Net and explain what that report even is. Then we’ll dive into an interesting conversation with these folks up here about Internet freedom, how it’s changed over the past decade, where we’re going. And then I’ll open it up to y’all. We’re a small group so I hope we can get nitty-gritty in the issue area. So first let me just have you all introduce yourself. Olga, why don’t we start with you?

Olga Kyryliuk:
Hi everyone, my name is Olga Kyryliuk. I work as a Technical Advisor on Internet Governance and Digital Rights at Internews.

Emilie Pradichit:
Hi everyone, my name is Emilie Pradichit. I’m the founder and Executive Director of the Manushya Foundation. We are a feminist human rights organization based in Thailand, working mainly in Laos and Thailand. And we work at the intersection of digital rights, corporate accountability, and access to justice for local communities.

Guuz van Zwoll:
Good afternoon everyone, my name is Guuz van Zwoll. I work with the Dutch Ministry of Foreign Affairs on digital human rights.

Allie Funk:
Thanks gang. So what is Freedom on the Net? It is Freedom House’s annual assessment of Internet freedom in 70 countries around the world. We look at how easily folks can access the Internet, what the Internet looks like in their countries, and whether their rights are protected or violated by the state, by non-state actors, by companies. Just last week we launched the 2023 edition of the report, the 13th version of it, and I’m just gonna give you some of the top findings. If you want to read the full report, which I would urge you to do, we have some fun graphics, we have country reports written by these folks up here at freedomhouse.org, but some just quick key findings that I think will ground our conversation today about where we are in the internet freedom space. 2023 is the 13th consecutive year of decline for internet freedom. Hopefully next year I’ll have the first year of improvement in internet freedom. Doesn’t seem like it, but you know, girl can hope. Attacks on free expression grew more common around the world. Like I said, we’ve been doing this for 13 years, and each year we have another record high of governments assaulting the fundamental right to free expression. So in at least 55 of the 70 countries we covered, people were arrested for simply expressing themselves. We had a record high of 41 governments whose regulators blocked websites hosting political, social, and religious speech. And this year what we really zoomed in on is how advances in artificial intelligence are deepening the crisis for internet freedom. So, you know, we looked at three different ways that’s happening: AI is driving intrusive surveillance, empowering censorship, and also contributing to disinformation campaigns. The two specific deep dives we did are, first, about how the affordability and accessibility of generative AI technology is lowering the barrier to entry for the disinformation market.
So we found that generative AI tech was used in 16 different countries to distort information on political or social issues, often during times of crisis like elections, protests, and other, you know, conflict areas. And then second we looked at how automated systems are enabling governments to conduct more precise and subtle censorship. So we found in at least 22 countries governments are requiring companies to deploy automated systems to censor speech protected under international human rights standards. So some of the call to action that will drive our conversation today is, because of the ways that AI is augmenting digital repression, we call for the urgent need to regulate it. And we think the lessons learned over the past decade or 15 years of debates really provide a roadmap on how to regulate AI. So first, we need to not overly rely on companies. I think we, you know, at the beginning of the Internet Freedom Project had a big hope of, you know, the Internet’s gonna be this liberating technology, gonna protect democracy, we don’t need to regulate it. Boy, were we proven wrong, so we should be careful and not leave it all up to the private sector. Second, we’ve learned a lot about what good governance actually looks like from the government: centering human rights standards, increasing transparency over the design, use, and impact of these systems. And then finally, the lesson that I don’t think has been learned enough: civil society around the world really needs to be involved in this process. And right now, in the race to regulate AI, civil society is really being left out, particularly those from the global majority. So we close our report, you know, we think that if AI is designed and deployed safely and fairly, it can actually be used to bolster internet freedom.
And there’s a lot of different efforts around the world, AI helping people evade government censorship, being used to detect disinformation and document human rights abuses. But we also note that, you know, as we pay attention more to AI, we have to be really careful not to lose momentum on internet freedom issues more broadly. So reversing internet freedom decline really requires regulating AI, but not forgetting about long-standing threats to free expression, access to information, and privacy. So top-line key findings. I will stop talking for a minute. Again, you can go to freedomhouse.org and read the rest. Olga, I want to start with a question for you. You’ve been working on these issues for quite a long time and wearing a couple different hats. What have you learned about internet freedom over the past decade? What has shifted in this space, and where do you think we are today, and where you think we might be going? Lots of questions.

Olga Kyryliuk:
Yeah, couldn’t be an easier question. But I think literally probably everything has changed during this last 10 years. And when I was thinking and looking back, 10 years ago is exactly when I was starting to write my PhD thesis, and when I came to my law department and the topic which I was suggesting was cyber security, and that was something everyone was looking at me like, this is something not important, we don’t know what it is, just choose something which is common sense for everyone. And then I had to drop that and to look more into like what is this multi-stakeholderism, how this has been developing, and whether at all there is any intersection with the international law. And I think also what has changed is that we had a lot of fascination back 10 years ago, which changed to quite a lot of frustration by now. We were hoping that this multi-stakeholder model and having everyone around the same table would solve a lot of issues for us, and that it would be pretty easy for us to reach a consensus and to find a way how to regulate technology. And we were hoping that at some point probably the legal regulation would be also catching up with the pace how technology is being developed, but still it’s 10 years have passed and we don’t really see that this catch-up has happened. But also there have been many things evolving in a good perspective. I think what we have definitely observed is that the public awareness of internet freedom has raised, and this is also a fair argument to make for every stakeholder, for governments, for private sector, for end-users, for civil society. I think everyone now understands the importance of internet freedom and digital rights, because probably 10 years ago this concept did not make much sense for many people. I believe still sometimes now it is still difficult to explain what is essentially internet freedom as a concept, what it covers, and how we should stand for this. 
But at the same time this awareness is growing, and this is important, because somehow we have reached the point when we understand that these are the values which we should be protecting. At the same time, as positively as the technology is developing, there is a lot of innovation, AI is developing, blockchain is developing, they are bringing new opportunities, but they are also bringing a lot of risks and challenges, for example to security and safety, and there is always this very slippery borderline: how do you divide, essentially, the freedom and the safety? On the negative side, we see a lot of development of digital authoritarianism in the world, and not only by authoritarian regimes, because in many cases the governments tend to go too much into security and to limit Internet freedom. Ten years ago we still didn’t see large-scale shutdowns happening that much across the world. We didn’t see as much content moderation and censorship as there is now. We probably could not imagine that we would have so many problems with regulating private companies and tech giants, and that it would be so difficult to find common ground and to agree on the regulation.
We have also seen that these systems for mass biometric surveillance and facial recognition have developed a lot, and, again, there are countries which are providing these tools and this technology, and there are countries which are simply using it without regulation, without putting proper legal safeguards, and then this leads to situations when you just don’t have the guarantees in law that you can properly protect your rights. Also, from the positive side, I think it’s still good that we continue collaborating, we continue talking to each other. We somehow see that probably some models are not working, but I think it’s a bit too slow to accept these adjustments to these multi-stakeholder convenings, because, again, we see that many people are not happy, many people want actions, not simply the discussions: some concrete partnerships, some concrete initiatives coming up from these conversations, which is not happening. And I don’t think this is a fair point to make, because, again, we are still in the process of developing, the legal landscape is evolving, so we can’t just keep discussing this thing if we really can make a difference and can make a change. So many things have changed, and we probably could go into a long conversation about this, but essentially, I think the world is becoming even more complicated than it used to be ten years ago. So, I think that’s what we expected, but that’s where we are.

Allie Funk:
We’re going to pull on the multi-stakeholder thread in a little bit, and what meaningful stakeholder engagement looks like. I actually didn’t know that was your dissertation focus. That’s really interesting. But first I want to touch on the regulatory points you made. I think you’re exactly right. That is actually something in the field I’m probably most intrigued by, because I think the trade-offs around regulation are really complex. Guuz, I want to pull you in here, because you are European, for folks who didn’t know based on the Dutch Ministry of Foreign Affairs title. And the EU specifically has served as a global leader on regulating the Internet, sort of providing what we think about as kind of this third way for Internet regulation, in between the Chinese model and the US laissez-faire traditional approach. And we saw with the GDPR, the General Data Protection Regulation, how it served as a global model for data protection laws after it was enacted in 2018. And we now have the Digital Services Act, for folks who don’t know, a really ambitious piece of legislation governing online content and a whole host of other things, and we’re also in the negotiation process of the EU AI Act. So I’m curious, you know, this has been sort of talked about as the Brussels effect, how what’s happening in the EU is impacting the regulatory state globally. How do you think about the Brussels effect, and about making sure particularly that the good parts of the regulation get implemented elsewhere, with the sort of challenge that, you know, the same law has vastly different human rights impact in a country with really strong rule of law standards versus a country with poor rule of law standards? So how do you think about that, and what are you all working on?

Guuz van Zwoll:
Well, thank you, Allie. Well, we want to keep the good things, right? But, I mean, it’s difficult. It’s like a tightrope that we have to walk. I mean, also listening here on the last day here at the IGF, it’s two things: we have to fight censorship and we have to fight disinformation, and it’s difficult to do both at the same time, right? You have to find a balance between the two. And, I mean, as the Netherlands we are very proud to have these EU laws. We would not be able to regulate big tech on our own, and we were happy to do it together with other European countries. And we are also proud that it comes out of a long multi-stakeholder engagement process, where there have been rounds of input from civil society, from companies, there have been hearings, there have been draft texts, yada, yada, yada. And that has, I think, come up with a pretty solid text that we are really happy about, and we’re really looking forward to full implementation early next year. I mean, it has started, but we’re building up towards it. So, I mean, there are two ways in how you can see the Brussels effect, right? So, first of all, when the GDPR was implemented in the European Union, some companies said, well, we’re going to implement it for everyone; I mean, it would be just easier to just roll it out over all countries. And the other way is that countries copied the text, basically, to align themselves with our system, and then it would be easier for them to protect privacy in their systems. And this is something that we’re really focusing on as the Netherlands. Last month, we released the English translation of the Dutch International Cyber Strategy. You can find it on our website, government.nl.
And in it, we really state that we are going to propagate the principles of the DSA, the AI Act, and the DMA to strengthen this Brussels effect, because we do think that these regulatory frameworks provide the right balance between a strong regulatory framework and, at the same time, room for transparency and protecting human rights. And that is, I think, the basis. We argued long and hard and negotiated to get it into the DSA, and it is there. References to the principles on business and human rights are there. There are strong transparency clauses. When your comments on Facebook or any other platform are removed or downgraded, you’re able to appeal; there will be a whole process for that. And that’s all in the text. So when that text is copied, hopefully those parts are already ingrained into the system. In that way we try to promote that way of thinking on these issues to other countries. But also, when we have bilateral discussions, either as the Netherlands with other countries or as the EU with other countries, we will urge third countries not only to fully or partly adapt to these EU regulations, but also to really implement these human rights and democratic clauses that we find so important. This is something that our government is very committed to and will be focusing on for the next few years. And I would also like to thank you and congratulate you on a great report.

Allie Funk:
Thank you. That was really helpful. Emilie, I’m gonna come to you because, is this on? Oh yeah, it’s still on. Okay, cool. Your organization, Manushya, helps run the #StopDigitalDictatorship coalition, which is working to, I mean, stop digital dictatorship across Southeast Asia; it’s in the name. One of the goals of the coalition is to build rights-respecting regulatory frameworks, and I think the region is a clear example of how really problematic laws can undermine human rights. So what are you thinking about in terms of which regulatory provisions are the most helpful or harmful? How does it relate to AI, if I’m gonna put the buzzword in the zeitgeist? Tell me what’s on your mind on this.

Emilie Pradichit:
Thank you, Allie. Thank you for organizing this important session. So I’m coming from a region where, according to Freedom in the World, among our 10 ASEAN countries, six are under authoritarian regimes, and four of them are fully authoritarian regimes. And most of the time when I tell people I’m coming from Southeast Asia, especially Thailand, people are like, wow, because everybody has this impression that Thailand is such an amazing holiday destination. So I really want to emphasize that, and I urge people to please read the Freedom on the Net report, because if you read the report you will realize that among the countries from Southeast Asia that are assessed in the report, many of us are not free. Thailand is not free. Cambodia is not free. Vietnam is not free. Myanmar is not free. Indonesia and the Philippines are partially free. Why? It’s because our authoritarian governments are weaponizing laws. There is a proliferation of cyber laws that target dissenting voices and human rights defenders in the name of national security. So in terms of harmful regulation that we have seen growing in Southeast Asia, it is all the regulations that are meant to protect national security, where anyone who is attacking or criticizing the government is a threat to national security. And so we have a lot of cases of pro-democracy activists in Thailand, in Laos, throughout Southeast Asia, who are being jailed just for voicing and telling the truth on Facebook, through Facebook posts. We have a human rights lawyer, Arnon, who was just sent to jail a few weeks ago and who is facing 14 charges under the Computer Crime Act and the lèse-majesté law, and who faces up to 210 years in jail just because he is calling for monarchy reforms and for true democracy.
So I think there’s a real need for us to look at what those harmful regulations in Southeast Asia are, but also at how governments in Southeast Asia are regulating tech companies. Just for example, in December 2022 in Thailand, the Thai government passed a decree forcing and obliging tech companies to remove, within 24 hours, any content that is against national security. But again, there’s no clear definition of what national security is, so everything can become a threat to national security. So for us, what we really want when it comes to good regulation, or the regulation that we want to see, are regulations that protect our online freedom, that are in line with international human rights law, that protect our privacy, and that ensure surveillance is not used against us, because, you know, in Thailand and Indonesia we are also facing the Pegasus software being misused against activists, against journalists, against politicians. So it’s really important for us that we have regulations that are human-centered. And to your question regarding AI: generative AI can be powerful, right? It can improve our lives, but as we heard this morning, it also has a lot of risks. And in Southeast Asia we have faced the misuse of AI, especially when it comes to facial recognition, when it comes to surveillance, and also when it comes to bias, especially in terms of language. If you are from a Southeast Asian country, the structure of our language in some of our countries comes from Sanskrit or Pali. So if you are using Facebook, and Facebook is using AI for content moderation to remove or block content that violates the community standards, how can an AI machine distinguish a word that has, in one sound or one spelling, five different meanings, right? So that’s why, for us, it’s really important that when we are talking about regulation, we are talking about the need to also regulate tech companies.
It’s really important for us that we move the discussion beyond voluntary guidelines, and this morning we heard about the Hiroshima AI process. If you’re an activist on the ground, and I’m a human rights lawyer working with a lot of activists on the ground, if I go back to them saying, you know, I went to the IGF and I heard about the Hiroshima AI process, they’re going to tell me: oh, new guidelines, new voluntary measures, where is it going to take us? I think we have reached a point where we need real regulations and we need mandatory due diligence. It’s not enough nowadays for Meta, for Microsoft, and for other tech companies to tell us that they are conducting voluntary human rights impact assessments, when what they are barely doing is just identifying the most salient human rights issues. Then they engage us in stakeholder engagement and present to us the most salient human rights issues, as if we didn’t already know them. You know, we already know the human rights issues, right? So we go through these stakeholder engagement processes where just the identification of the human rights issues is presented, but there is no prevention, no mitigation, no addressing of those salient human rights issues. But if companies are serious about implementing the UNGPs, and also the OECD guidelines for multinationals, they should be able not only to identify, but also to prevent and address the impact, and to provide remedy. So a tech company telling us that the appeal mechanism, or reaching out to the human rights team, is the best remedy offered as of today: it’s not enough. There’s a real need to legislate the UNGPs into real law.
There’s a real need for mandatory human rights due diligence, and due diligence that is actually meaningful, so meaningful stakeholder engagement, not just a tick-the-box exercise, because I think a lot of us in Southeast Asia are tired of being called into stakeholder engagement calls where we give our input and there’s nothing in terms of follow-up. So meaningful stakeholder engagement, not only with civil society, but also with groups that are directly impacted by the misuse of the platforms, by governments, by trolls, you know, in Southeast Asia. We are also facing the proliferation of cyber armies, from Myanmar, from Laos, from Thailand. Governments are investing in cyber armies, and we are so small compared to them. When we are one or two people, you know, working in a human rights organization on digital rights, it’s not enough to fight against a cyber army. So what do we do? And when we turn to tech companies for support, there’s nothing they can really do, because they are not being regulated. So it’s time for tech companies to be effectively regulated through meaningful, mandatory human rights due diligence, and we need that mandatory human rights due diligence to come from the countries where those tech companies are operating, because then there would be an extraterritorial obligation for those companies to make sure that throughout the supply chain, and also in the country offices, the UN Guiding Principles and due diligence would be respected. But we also want responsibility and remedy. So we want civil and criminal liability for those companies as well. Look, for example, at what happened in Myanmar, and the way the Facebook platform was misused by the government and by other groups to promote hate speech against the Rohingya. The fact that nobody is being held to account is not normal. The fact that nobody is being held to account in terms of responsibility and criminal and civil liability is just not normal.
So we really need effective mandatory human rights due diligence that would also include human rights impact assessments for AI, and that would include meaningful stakeholder engagement and criminal and civil liability for the companies.

Allie Funk:
I think this next year with DSA implementation is going to be really interesting, to see how those requirements for impact assessments are going to play out. And if you all hadn’t seen, there is now a new database, I don’t know what day it came out, thanks to the DSA, where a lot of companies are reporting different content removals and other actions under their terms of service, which you can actually go through. That will take a very long time, because there’s a lot in there. Let’s go to this question on multi-stakeholder engagement that you brought up, because this is something that we think a lot about. We hear a lot about what multi-stakeholder engagement means and how you make it meaningful. Guuz, I’m going to come back to you. You mentioned your International Cyber Strategy. The document talks about incorporating more emerging countries in internet governance and lays out the importance of the multi-stakeholder model of internet governance. How does the Netherlands plan to promote these objectives, particularly as it relates to inclusivity with civil society and with the global majority, who are on the front lines of digital repression?

Guuz van Zwoll:
Well, that’s an excellent question and a difficult one. And we do try to answer it in our strategy. Basically, we try to do the following: in our cyber strategy, we try to connect three strands of work. We try to connect the work that we do on traditional cyber diplomacy and cybersecurity with our digital development work and with our human rights work, with internet governance as the overarching theme. We didn’t put it like that in the strategy, but I always try to see it as a stool, like a milking stool, that has three legs in order to keep it balanced. You need some form of digitalization in order to be digitally connected as a country, of course, and digital security in order to keep that structure safe. But at the same time, you need principles and good governance to govern that structure; otherwise, you’re just implementing a censorship and surveillance apparatus, right? So what we do as a government is really try to implement this in all our work. Through our development cooperation work, also with our colleagues from the Freedom Online Coalition, we work on principles for donors in digitalization, in order to improve the digital rollout and connect the last third of the world that’s still unconnected. But at the same time, we try to get these other principles in place as well. Through the EU Global Gateway, for example, we try to make sure that we are not only looking at getting everyone connected, but also that digital security and principles and good governance are part of that equation, and that through those processes there’s a multi-stakeholder approach that brings voices from civil society into those discussions locally.
But this is still in its building-block phase, and it is something that we need to work on. It’s a clear aim that we set out in our strategy, and we’ll have to roll it out over the next few years. But it’s not, of course, the only thing we do. We also work with local civil society through our human rights program. We have a strong program called the Safety for Voices program, where we try to support human rights defenders and civil society organizations on security, both physical but mostly with a strong digital component. So all the programs we run that support civil society and human rights defenders always have this digital component to them. We also try to mainstream it in those settings. That’s work done from The Hague, but the same principles apply to the work that we do through our embassies. Yeah, I think that’s where we’re at.

Allie Funk:
Great. I’m gonna ask one more for Olga and one more for Emilie, and then we’re gonna open it up. Time has snuck up on me. So, Olga, for you: you teased that dissertation, so I’m going to press on that a little bit. And I should also add that the Netherlands is taking over the chair of the Freedom Online Coalition next year; the U.S. government is chair now. And for folks who don’t know, the Freedom Online Coalition is a multilateral body of 27 governments now? How many? 38? Wow, I am behind. I’m a bad advisory network member here. It’s working to protect internet freedom around the world. So I’m curious, and it’s a two-pronged question; I’m going to ask the same to both of you to hear your input. What does meaningful multistakeholderism look like to you? How can governments make sure that they’re listening to the different sectors? But also, what do you think is the role of the FOC, a multilateral body of democratic governments that are really committed to protecting internet freedom? How can they reverse this decline? Do you have any best practices they can adopt? You can take any of that. That’s like seven questions in one, so I’ll let you take it.

Olga Kyryliuk:
This is actually what I also want to know. Maybe, since we have this opportunity, Guuz can also help clarify how civil society can get better engaged in the FOC, especially because this is also part of my job portfolio. I need to identify this connection point, because my team is running the largest internet freedom project. We are covering five regions across the world, and we are working with 120 implementing partners from civil society. So essentially we have this pool of talent of civil society activists and human rights defenders, and we would like to know what the entry point is, how we can better coordinate, how we can help engage them in your space, and where you see the value from these people, how they can meaningfully contribute to what you are doing. Because you had the Freedom Online Conference, which has not been held for the last few years, and which I think was one of the opportunities for different stakeholders to get together to discuss the issues that are important and emerging. But this is not happening anymore. I know there is the advisory network, but again, this is an election-based process which also happens only periodically. So I would say, if there is any opportunity, organize some kind of periodic consultations with civil society, and choose thematic issues so that it’s not just about everything and about nothing at the same time, but make it very specific, whether you want to focus on some regulatory issue or something related to AI. I think we would be only happy to support that, and essentially we have a huge variety of expertise. I loved how it was done by the FOC under the US chairmanship, and this is something Lisa was also leading, the consultation with civil society on the principles for human rights in the digital age.
It was really nice to have everyone in the same room, with everyone truly having the opportunity to express their opinion, and we also have the result of the discussion. So delivering something very tangible, which has a practical result, is what is missing and what we could do more of. Thank you, Allie.

Emilie Pradichit:
In terms of the FOC, there’s also Michael in the room, so I’m also looking at you from the Forum on Information and Democracy, working with member states, and the potential that you have to support us in countries where there’s no democracy. Since the Netherlands will be chairing the FOC next year, I really urge you to help us, because our online democracies are under attack, and that is not going to change tomorrow. And 2024 is a very important year, because there will be a lot of elections throughout the world, so there will be a lot of demands on the FOC. Honestly, the FOC is not accessible and is not known to the majority of people from the global majority. I think the FOC is accessible for DC groups, online freedom and digital rights groups based in Washington DC. For us, based in Southeast Asia or on the African continent, we don’t know about the FOC and we don’t know how you can help us. So I think the best thing you could do first is to better promote your work, so we can better understand how the FOC can actually support us, and actually support us in our demand for true democracy. We really need statements coming from FOC members targeting our authoritarian governments. We are trying our best. We have a coalition, the ASEAN coalition to #StopDigitalDictatorship. We are also part of the Southeast Asia CPN targeting tech companies. But we are just a handful of people, so we actually need your support. And there’s a real need for the FOC to look at the global majority and to engage with us. So when you are doing stakeholder engagement, please don’t do it only in DC. There is a need for you to come to us, because we need your input, we need your recommendations, and we need your statements targeting our governments and also the private sector in our countries. So there’s a need for you to come to us. Why? Because for most people from the global majority, traveling to Europe or to the US is not easy, right? There are visa restrictions.
So it’s always the same people that you get to meet. It’s always the people who can travel, always the people who have access to you. There’s also a need for the FOC to not only talk to the traditional digital rights organizations, but to the broader human rights field. The digital space is becoming more and more important. I mean, we’re all moving into the metaverse; what’s happening offline is now happening online. So there’s a need for human rights groups as well to understand and engage with the FOC. So really, look at us: inclusivity is key, engaging with the global majority, and bringing the FOC to global majority countries is really important, because not everybody will be able to travel to you. Investing in civil society so that it is able to engage with you, and financially supporting groups that are fighting against authoritarian governments online, is also very important, because not everybody can engage and not everybody can do this work. There’s also a need to understand that the work that we do puts us under threat. A lot of us sometimes cannot speak publicly or cannot engage; a lot of activists have to remain anonymous. I mean, the Freedom on the Net report has a lot of anonymous authors as well. So there’s a real need for the FOC to look at the global majority, to understand us, to come to us, and to also financially support us, because we need this support to be able to fight against digital dictatorship.

Olga Kyryliuk:
Building on what Emilie was saying, I was also thinking that you have this access to governmental people, which is usually what is missing a lot, let’s say not at the global IGF, probably, but at the regional discussions, because we also have regional and national IGFs. And it is always a struggle to get these governmental representatives to be present in the room. So I would say you could also focus on working at least with those countries which are members of the FOC, to somehow encourage and maybe also build connections between them and these local and regional communities, because they could be part of these conversations. They could get into some specific partnerships and work on some issues together. I think from my region of Southeast Europe, it is maybe only Georgia and Moldova who are members of the FOC. But at least at that level, at least those few countries. I know, because I’m also part of the IGF for Southeastern Europe, and I know from firsthand experience how challenging it is to get in touch with governmental people. So that would also be very practical help from your side: just helping to get connected with these people and to have them in the room.

Allie Funk:
Is there anything you want to say before we go to Q&A about the FOC?

Guuz van Zwoll:
Well, these are very concrete and thoughtful points. We’re writing our plan of action as we speak; we just had it out for consultations with the advisory network. And these are great points that we’re happy to digest and take further. I think it’s very interesting to hear that the Freedom Online Conference is being missed. It’s very nice to hear, because I think COVID was the first reason not to organize it, but also because there are already so many conferences, right? We’ve got RightsCon, we’ve got the IGF. So it would be good to discuss, maybe later, how we can make the best use of the space and time and carbon footprints that we have. On the other points: many of us, at least myself, but also many of our colleagues within the FOC, are always very open to having discussions with human rights defenders and digital defenders. So it would be great to see if we can promote that strand of work and have direct contact outside of the advisory network. We could also have a long talk about representation in the advisory network, and I think we should have that too. But these are very valid points, and we’ll certainly take them forward. One last thing, on the security side: as the FOC, we did create a group called the Digital Defenders Partnership, which focuses on holistic support for human rights defenders and digital defenders at risk. It is specifically aimed at digital defenders and civil society groups that are facing online threats, but also physical and psychological threats, etc. That’s one of the concrete results that we continue to support as the FOC. So we do try to keep an eye on it, but it’s always great to have concrete suggestions on how to improve these things. Thank you.

Allie Funk:
I’ll just make a pitch: if RightsCon is not happening until 2025, there is a little space in our calendars for an FOC conference. I can see if we can invite everyone to the Netherlands. All right, everybody, we’re going to the Netherlands. You’re gonna kill me. All right, we’ve got 15 minutes. I want to open it up to y’all. Who has a question? Anybody? Hi, Lisa. Oh, yes, Jit.

Jit:
Yeah, thanks, everyone, for this fabulous discussion. I learned a lot. In thinking about how we can make meaningful impact, since we’re at a UN conference, I’m curious to hear what people think about the Global Digital Compact: pros, cons, what we see happening with it.

Allie Funk:
Step right in, if anybody wants to take that tiny question. Yeah. And we also have other questions; we can just collect them all, maybe, and then answer. Oliver?

Audience:
For the gentleman: you mentioned that you can provide some type of support for people who are under some sort of threat for their online activism. So I was wondering if you could explain what type of mechanisms you have available. In terms of what? To send lawyers if they’re already in prison, or something like that? I’m just curious to know what exactly you mean by that, bearing in mind the geography, bearing in mind different legal systems, and so on, and what is and is not a crime in a given legislation. Thank you.

Allie Funk:
Just going to collect them all. We’ll do Oliver, and then Lisa. Then we should answer some, because we’ll have a lot of questions.

Oliver:
Hi, this is Oliver. I won’t give my organization’s name, if you don’t mind, just for security reasons. But I think it’s really important for the FOC to be a bit clearer with the outside world about what it’s doing in regard to the UNESCO guidelines, which global CSOs in the global south are extremely concerned about, both the direction of the guidelines and how they will encourage authoritarian states to crack down on the digital space. We haven’t seen much from the FOC, not that we ever would really see it, but it would be very useful to know that behind the scenes there is actually some pushback on something that looks like it’s being driven by authoritarian state members of UNESCO. Thanks.

Lisa:
Hi, everyone. I’m Lisa from USAID. I’ve been doing a lot of stakeholder consultations this year in different countries where we are doing work or trying to scope out potential for new work. And one of the things that keeps coming up when we talk about international human rights frameworks and the GDPR and the DSA and the DMA and the EU AI Act and all of these frameworks is that other countries, particularly in the global majority, see the risk-based European model, they see the laissez-faire, industry-based American model, they see the Chinese state-based model, and they don’t want any of those models plopped into their space. They’re thinking about: what is this third way? So it’s very Cold War rhetoric of being in the third space. What does that mean, and how are we going to figure out a regional approach, perhaps, or a national approach? And I think one of the key concerns is that when you plop the GDPR into Serbia or Indonesia or Kenya or wherever, there are certain aspects of the regulation that are extremely onerous for countries at a different income level than a lot of European countries, and that are very challenging to implement when you don’t have the oversight capacity. And there’s perhaps a lack of political will and politicization of some of these oversight bodies, so that’s also a concern. I’ve sensed that there’s a real frustration among a lot of actors in civil society and local tech in different countries with what people have described as a heavy-handed “the international human rights framework is the thing to implement everywhere” approach. So what are your thoughts, for anyone on the panel, about how to navigate that so that you still have the overall protections and safeguards being transferred, to the extent that they’re going to be useful in those contexts
for human rights defenders and activists and the like, but you’re not imposing aspects of that regulation, or imposing at all really, and there’s space for a conversation about what human rights protections and safeguards look like in different contexts?

Allie Funk:
Anything else before we dive on in? Okay, all right. Who wants to start? My esteemed panelists? Olga, there you go. And I can also repeat the questions if need be and make sure we answer them all.

Olga Kyryliuk:
So, on the Global Digital Compact, I think this is the same for me as the Freedom Online Coalition: I would want to see more clarity about what is happening, where it is going, and especially, for civil society, how to be part of it, because there is a lot of frustration at the moment as to how they can engage. Likewise, we were trying to see how we can support our implementing partners to engage in this process, and we don’t really see a clear way or a clear venue where this can happen. On the regulations, on Lisa’s question, I think the problem is that we think everything coming from the EU will just solve all our problems, that it is the ideal and the standard which we all should be using, which, as you’ve mentioned, has its own challenges once we start to implement and go to the enforcement phase. But there is always a framework of principles and standards which, let’s say, are basic and which can be replicated in every single country. Then you also should be aware that if you go into detailed regulations, they should be conscious of the context they are being thrown into. So it requires a dialogue and a conversation with the national legislators, but also probably some capacity building for them, because what countries are doing is just taking the text of the GDPR and implementing it as their national law. And then, when it comes to implementation, they have to face a lot of challenges, but then what can you do? The law is already there. So it has to be done at an earlier stage, when a specific legal act is being incorporated into the national legal system.

Emilie Pradichit:
Thank you. So I’m gonna answer the question related to the protection of human rights. As Olga said, there is a need to also understand the local context. Most Southeast Asian governments, and I’m gonna talk mainly about Thailand: we have a Data Protection Act, and what the Thai government says is, oh, we just took the GDPR and we developed the Data Protection Act, so we are following the EU example. But there’s no real oversight, there’s no independent oversight, it’s totally government-led oversight, and there’s no remedy. And there’s an exemption in that law that allows the government to violate our data under the consideration of national security. So governments, I would say, are really good at replicating what the EU is doing, which is a challenge for us, because we want them to engage in a dialogue with parliamentarians but also with civil society. And what governments are doing is saying, I’m taking the German example, I’m taking the EU example, and I’m developing this law. And it’s government-led, it’s from the executive, not from the legislative, and it allows the government not to engage with civil society. So there’s no dialogue, and that’s a real frustration for us. And then they go into diplomatic discussions with diplomats in the country, but also at the global level at the UN, saying, we are following global standards, we are following good standards, because we are in line with the EU. So it’s a real challenge for us, because then diplomats believe it. Diplomats are then congratulating Thailand for having a Data Protection Act instead of really looking into the act, because the act is in Thai unless civil society translates it for the international community. So it’s really important for us. I don’t think civil society is against international human rights law; we all follow international human rights law.
Actually, we want governments to respect international human rights law. We just want to make sure that when there is an exchange between global north countries and global majority countries, this exchange takes into consideration our context. When the Thai government or the Lao government goes to Australia to look at AI regulation, for example, or when the Thai government says it is putting together an AI advisory committee and is inviting experts from all around the world, it’s just to appear as a good student, or just to appear as a good member state at the UN. But in reality, they’re just fooling the world. And never, ever do we have the expert and the other government engaging with the Thai government, helping to develop those laws, asking the Thai government: but where is civil society? Where is the dialogue with civil society? Where is the dialogue with parliamentarians? So this is where the frustration is coming from. It’s the lack of dialogue and the lack of understanding of the context. And it’s how easily EU member states, and also the US and the international community, can be fooled by our governments. Thank you.

Guuz van Zwoll:
That was a great point. And I mean, I think that for us, although there might be some people that had hoped for it, the worldwide rollout or effect of GDPR came to us where everyone was a little bit surprised, right? And then we’d start claiming the Brussels effect and stuff like that. But I mean, I think that we didn’t really plan on it, well, it was not there in the room, I don’t know. But I mean, we’re diplomats, we’re human beings, we work from nine to five. So the point being is that I think we have to learn by doing on this. And your feedback on this is extremely helpful, and each time we’ll get better at it. But we need your honest and open criticism on these things in order to learn from it and to implement it the next time we have these discussions on how we are going to have a shared approach on AI, or a shared approach on the DSA or the DMA. So that’s something that I would just urge everyone to keep doing, and then also try to reach out not only to the embassy but to the advocacy focal points, because these are the ones that will probably resonate more with these arguments than someone who’s covering 27 issues, because we’re two people in the embassy. So that’s just very challenging. As for the UNESCO guidelines, we’ve been following that progress with great interest. As the FOC, we did approach it: the advisory network wrote terrific comments on it, and we took that all to heart when talking to UNESCO and then participating in the Internet for Trust Conference. This is not completely FOC, but I do want to mention our recently launched Global Declaration on Information Integrity, which was signed by 30 countries, and more countries are signing on to it. It tries to say, well, it’s very important that we fight disinformation and promote information integrity, but at the same time we need these human rights guardrails, so to speak, in these international processes, like the UNESCO process but also the Code of Conduct that’s being run by Under-Secretary-General Fleming, to make sure that the human rights language is there in those processes. So that’s something that we are really pushing for as the Netherlands, and 30 other countries, including the US and the UK, but also countries like Brazil, Argentina and Chile, have signed up to those principles. We do try to promote it in that way. About the GDC, I think it’s also very difficult for us, at least as diplomats, for me, to follow it. There have been some stakeholder rounds, we attended those, and they were open to watch online, so you know as much as I do.
I mean, it’s just, yeah, we are following it, and we try to make the best of it, and we do think that it’s great, at least in the chapters or sections that are there, human rights online is really there, so we do have good hope for it, but we have to see how it will develop. And for us it’s really a question, and this is something that we also set out pretty publicly, I would say, it’s even in our strategy, that we have to strike a good balance between the GDC and WSIS, because they are both very important. We have to find a good way of protecting human rights online, we have to find a way to encapsulate multi-stakeholderism in these governance processes, but at the same time we have to make sure that these processes are really transparent, that everyone can engage, that the global majority countries have a seat at the table, that we include them in the process. And that remains a constant challenge. But that’s always, of course, a challenge in these issues. Yeah. And then on support for human rights defenders at risk. The Netherlands funds tons of NGOs and initiatives to protect human rights defenders who are at risk locally. So we, for example, fund Front Line Defenders, which has, I think, 12 regional coordinators all over the world. Well, I mean, Southeast Asia is, of course, difficult with tons of languages. But, for example, in Latin America they are there, speaking local languages, and there is someone for Southeast Asia. They’re really trying to provide practical, holistic support for at-risk human rights defenders, both in a legal way but also courses in physical protection, digital security, psychological well-being, etc. We fund that with Front Line. We have Reporters sans frontières, which we fund through the EU.
We support Protect Defenders, which is a consortium of 13 organizations that are doing this worldwide. I mean, I think there are tons of organizations that try to provide these kinds of direct practical support for at-risk human rights defenders. And some of them are even here. Access Now has a booth. They have a helpline. They’re connected with Defend Defenders and work together with Front Line. And if you want to know more about it, I’m happy to speak for hours about this topic because I’m really passionate about it.

Allie Funk:
These microphones, tricky. Well, thank you all. We’re at time. I think that we could go on for a really long time; there are just so many initiatives. I’m so tired, and I’m sure everybody else is. We’ve got a seven-person team, so we have to make tough decisions about how to engage and when not to. And I’m grateful that we’re in partnership with all the fantastic panelists and the people in this room, and that we’re doing this work together. And I won’t hold you back from dinner anymore. I know we’re all hungry as well. So thank you for joining us. A pitch again: you can read the latest Freedom on the Net report at freedomhouse.org. Let us know what you think. And looking forward to a great week. Thanks all.

Audience:
Thank you.

Allie Funk

Speech speed

179 words per minute

Speech length

2358 words

Speech time

789 secs

Audience

Speech speed

123 words per minute

Speech length

436 words

Speech time

212 secs

Emilie Pradichit

Speech speed

197 words per minute

Speech length

2788 words

Speech time

849 secs

Guuz van Zwoll

Speech speed

182 words per minute

Speech length

2770 words

Speech time

912 secs

Jit

Speech speed

187 words per minute

Speech length

53 words

Speech time

17 secs

Lisa

Speech speed

162 words per minute

Speech length

418 words

Speech time

155 secs

Olga Kyryliuk

Speech speed

183 words per minute

Speech length

2025 words

Speech time

663 secs

Oliver

Speech speed

195 words per minute

Speech length

141 words

Speech time

43 secs

Unstoppable Together:Digital Grassroots Impact Report Launch | IGF 2023 Launch / Award Event #143


Full session report

Estelle

In this extended summary, Estelle and her team express positive sentiments about their achievements. The team’s hard work and dedication resulted in the completion of an impact report, showcasing their accomplishments. Their efforts have led to the creation of new young leaders from their side of the world, highlighting the team’s ability to make a lasting and positive impact on their community. Estelle, in particular, takes great pride in the team’s success.

Estelle also strongly believes in the importance of representation and recognizes its significance in creating a fair and inclusive society. To promote representation, Estelle initiated DIGRA programs with the aim of fostering increased representation from their side of the world. These programs are designed to empower individuals and provide them with opportunities to make their voices heard, aligning with the goals set forth by SDG 10: Reduced Inequalities.

The positive sentiments expressed by both Estelle and the team reflect the significance of their achievements. Through hard work and dedication, the team’s impact report serves as tangible evidence of their success. Moreover, the creation of new young leaders signifies the team’s ability to inspire and cultivate future talent. Estelle’s commitment to representation further emphasizes the importance of diversity and inclusion in various domains, including the Internet governance ecosystem.

This analysis sheds light on the remarkable accomplishments of the team and Estelle’s dedication towards creating positive change. Through their efforts, they aim to reduce inequalities and create a more inclusive world. The success of their initiatives serves as an inspiration for others, encouraging them to follow suit and make a difference in their respective communities.

Audience

During the event, the audience expressed concerns regarding the lack of multilingualism and the predominance of English-speaking Africans at the Internet Governance Forum (IGF). The audience specifically highlighted the need for the IGF to promote a multilingual environment. One audience member from Cameroon expressed surprise at learning about the project for the first time at the event. This observation drew attention to the necessity of reaching out to countries where English is not the primary language of communication.

The call for a multilingual environment at the IGF aligns with the goals of inclusivity and reduced inequalities, as outlined in SDG 9 (Industry, Innovation and Infrastructure) and SDG 10 (Reduced Inequalities). By accommodating various languages, the IGF can ensure that individuals from diverse backgrounds have equal access and representation in shaping internet governance.

In addition to the language barrier, an audience member from Cameroon also highlighted the need for clarification on how to become an ambassador for the Digital Grassroots Movement. This request reflects an interest in actively participating and contributing to the movement’s objectives, particularly those related to quality education (SDG 4) and reduced inequalities (SDG 10).

Overall, the audience’s concerns and requests highlight the importance of promoting inclusivity, reaching out to non-English speaking countries, and providing clear guidelines for participation. Addressing these issues will enhance the effectiveness and impact of the Digital Grassroots Movement and create a more diverse and inclusive environment at the IGF.

Nancy Wachira

Nancy Wachira’s journey with Digital Grassroots (DIGRA) has been instrumental in her growth as an advocate for digital inclusion. Since joining DIGRA in 2018, Nancy has actively engaged with the organisation and has become an essential part of its efforts to bridge the digital divide.

One of the key ways in which Nancy has contributed to DIGRA’s cause is by representing the organisation at various international events, such as the Commission on the Status of Women. This involvement has not only provided her with a platform to share her insights on digital inclusion but has also allowed her to network with like-minded individuals and organisations. Through these interactions, Nancy has been able to broaden her perspective on the issue and gain a deeper understanding of its global impact.

Furthermore, Nancy’s work with DIGRA has had a specific focus on reducing digital inequalities in rural communities. She recognises the importance of ensuring that people living in remote areas have equal access to digital technologies and opportunities. By actively working towards this goal, Nancy is actively contributing to the United Nations Sustainable Development Goals (SDGs) of Industry, Innovation, and Infrastructure (SDG 9) and Reduced Inequality (SDG 10).

In addition to her involvement with DIGRA, Nancy also acknowledges the significant impact of her mentors and the supportive community within the organisation. Mentors such as Esther, Ufa, and Wadhangi have played a crucial role in guiding and shaping Nancy’s advocacy journey. Their expertise and guidance have provided Nancy with invaluable insights and teachings, enabling her to further develop her skills and knowledge in the field of digital inclusion.

Overall, Nancy Wachira’s involvement with DIGRA has been transformative. Her active participation in the organisation, representation at international events, and focus on reducing digital inequalities in rural communities highlight her dedication to the cause of digital inclusion. Furthermore, the influence of her mentors and the supportive DIGRA community has significantly contributed to Nancy’s growth and success as a digital inclusion advocate. Through her efforts, Nancy is making tangible contributions towards achieving the SDGs and creating a more equitable digital future for all.

Grace Zawuki

Grace embarked on her DIGRA journey in 2022 when she participated in the Digital Rights Learning Exchange, which proved to be a transformative experience for her. This opportunity equipped her with valuable knowledge and skills in the field of digital rights. Recognising her potential, Grace was subsequently selected to join the prestigious Community Solutions Program, solidifying her dedication to addressing digital rights issues in the United States.

Grace expresses her profound gratitude for the DIGRA community, which has shaped her perspective and fostered her personal and professional growth. She acknowledges the invaluable impact DIGRA has had on her journey and credits it for her positive transformation.

Collaboration emerges as a crucial factor in this context, with Grace highlighting its potential to make a significant difference in communities and elevate Africa’s global standing. Emphasising the power of collective efforts, Grace and her fellow advocates strive to effect positive change by addressing digital literacy and digital rights issues.

Grace’s own experiences serve as evidence supporting the argument for collaboration and its benefits. By working with individuals from diverse backgrounds and areas of expertise, they can adopt a comprehensive approach to solving complex challenges. Furthermore, their collective efforts not only improve their own communities but also position Africa as a hotbed for innovative solutions in digital rights.

In summary, Grace’s involvement in DIGRA and the Community Solutions Program is a testament to the transformative power of such initiatives. Through collaboration and a shared commitment to enhancing digital literacy and digital rights, Grace and her team make a meaningful impact in their communities, propelling Africa into the spotlight as a catalyst for positive change.

Stanley Junior Bernard

During the discussion, the speakers delved into several topics pertaining to digital rights, internet governance, and internet accessibility. They underscored the importance of advocating for digital rights and internet governance, recognizing that these areas play a crucial role in shaping the future of the digital landscape.

One notable point raised was the positive impact of the training received through Digital Grassroots in understanding digital rights and internet governance. This training not only enhanced the participants’ knowledge but also equipped them with the necessary skills to actively advocate for these rights.

Moreover, the speakers highlighted that the advocacy for digital rights and internet governance led to significant recognition. For instance, one speaker mentioned being awarded a scholarship by the One Young World due to their involvement in championing digital rights. This achievement underscores the recognition of the importance of such advocacy efforts on a global scale.

The significance of an open and accessible internet was also emphasized during the discussion. It was noted that although internet connectivity remains challenging in countries like Haiti, there is a shared belief that the internet should be accessible to all, not only in developed nations but also in the global South. This argument stems from the understanding that a more equitable and inclusive internet access can help foster reduced inequalities and promote innovation worldwide.

Additionally, the speakers expressed their support and admiration for the work of Digital Grassroots in building digital capacity for marginalized youth. Specifically, they praised the innovative program called the Digital Rights Learning Exchange, which was highly regarded for its ability to empower marginalized youth.

Overall, the discussion provided valuable insights regarding the significance of digital rights, internet governance, and internet accessibility. It highlighted the importance of advocacy efforts, the need for an open and accessible internet for all, and the crucial role that organizations like Digital Grassroots play in building the digital capacity of marginalized youth globally.

Hanna Pishchyk

Hanna Pishchyk, who is currently based in France, is the Communications Lead at Digital Grassroots. She plays a crucial role in acknowledging the efforts and impact of DIGRA community members. Digital Grassroots is a community of Internet governance advocates focused on sharing knowledge and experiences. They aim to achieve global digital inclusion, reduce digital inequalities, and promote digital literacy. Nancy Wachira, a member of DIGRA since 2018, works towards reducing digital inequalities in rural communities and represents DIGRA in various events and initiatives. Stanley Junior Bernard has been an impactful member of the DIGRA community, contributing to various projects and leading a successful DIGRA mini-hackathon in Haiti. Stanley also promotes digital literacy and mitigates gender-based violence through platforms like the Young Girls Empowerment Initiative in Haiti. The efforts of Hanna, Nancy, and Stanley highlight the importance of industry, innovation, and infrastructure in achieving Goal 9 (Industry, Innovation and Infrastructure) and Goal 16 (Peace, Justice and Strong Institutions) of the Sustainable Development Goals.

Uffa Modey

Digital Grassroots is a youth-led non-profit organization founded in 2017, with a focus on promoting digital citizenship and advocating for internet rights in underrepresented regions. The organization conducts advocacy programs and digital rights learning exchange programs as part of their efforts. One of their flagship initiatives is the Digital Grassroots Ambassadors program, which aims to raise awareness and advocate for the internet in local communities. By engaging with young individuals in underrepresented regions, Digital Grassroots aims to bridge the digital divide and reduce inequalities.

Uffa Modey, the co-founder and global lead at Digital Grassroots, strongly supports the creation of pathways for young individuals to understand and navigate the internet ecosystem in their communities. She believes in collaborative work towards digital rights and internet governance with others in the global internet ecosystem. This demonstrates the organization’s commitment to fostering partnerships and creating a collective impact.

The Unstoppable Together report summarizes Digital Grassroots’ work over the past five years. Collaboratively created with the community, the report provides an ownership perspective and showcases the experiences and challenges related to digital rights abuses. It highlights the importance of community engagement and inclusivity in sustaining the work of Digital Grassroots. The organization recognizes the crucial role of community resources and contributions in their digital rights advocacy efforts.

Digital Grassroots also extends its reach to Francophone-speaking countries in Africa, running a specific training program on internet governance and digital rights for these regions. This demonstrates the organization’s dedication to addressing regional needs and empowering individuals in Francophone-speaking communities.

Additionally, Uffa Modey acknowledges the language barrier as an issue in internet governance. This shows the organization’s awareness of the challenges faced by different communities and its commitment to creating accessible platforms and materials.

Finally, Uffa Modey emphasizes that Digital Grassroots is continually looking for innovative ways to involve more people in internet governance. Their commitment to openness and a proactive approach ensures that the organization remains dynamic and responsive to changing needs and circumstances.

In summary, Digital Grassroots is a youth-led non-profit organization focused on promoting digital citizenship, advocating for internet rights, and bridging the digital divide in underrepresented regions. Through their advocacy programs, initiatives like the Digital Grassroots Ambassadors program, and collaborations, they strive to make a positive impact and empower communities in their digital journey.

Rachad Sanoussi

Rachad Sanoussi, a technical support member of Digital Grassroots, introduces himself as he takes the stage to present the impact report. He expresses his optimism and excitement for the launch, firmly believing in the collective force of the organization and the community in effecting change in the digital space. Rachad’s deep-rooted faith in the team’s abilities and capabilities shines through his speech.

During his presentation, Rachad graciously acknowledges the team’s hard work and dedication in delivering the impact report and successfully executing DIGRA programs. He expresses gratitude towards his fellow team members for their active engagement and valuable contributions. The significance of the impact report launch is highlighted by Rachad, emphasizing its importance to the organization.

Looking to the future, Rachad anticipates further progress and eagerly looks forward to continuing the journey with the team. He expresses his belief that together, they are unstoppable, and he is determined to build upon the current foundation for even greater accomplishments.

Notably, Rachad emphasizes the inclusive nature of Digital Grassroots programs. He shares his own experience of hailing from a French-speaking country, Benin, and stresses that the organization welcomes participation from individuals regardless of their language or country of origin. This underscores the importance of inclusivity and promotes the message of accessibility and universality within the digital grassroots movement.

In conclusion, Rachad’s introduction of the impact report is marked by his optimism and excitement for the launch, showcasing his belief in the collective force of the organization and community. His gratitude towards the team and anticipation for future progress reflects his dedication and commitment to the cause. Furthermore, his emphasis on inclusivity and the organization’s open invitation to participants from all languages and regions highlights the significance of diversity and accessibility in digital grassroots programs.

Session transcript

Rachad Sanoussi:
Okay, good morning, everyone. Good morning, participants. I don’t know if they can hear me online. Okay, perfect. I think we will start our session, and welcome everyone to this session. My name is Rachad Sanoussi. I am technical support at Digital Grassroots, and today we will start our session. It’s a great pleasure for me to welcome you all to this significant event to launch our impact report. Today we gather here not only as a community, but as a collective force driving change in the digital space. Since we started this journey in 2017, our organization has done a lot of things. So today we are happy to have you all for this launch. I’m here with my colleagues, and I will let them introduce themselves. So over to you, Ufa. Can you hear me online? Yes, can you hear me?

Uffa Modey:
Hi. Yeah, we can hear you. Thank you. Hi, yes. Thank you, Rachad. Good day, everyone, and thanks for joining us here today. My name is Ufa. I am the co-founder and global lead at Digital Grassroots. I am a software engineer and technology policy analyst, originally from Nigeria and currently residing in Newcastle, UK. I don’t know if I can put on my video as well. Okay. Yes, that works. So, yeah. Thank you so much for joining us. Unfortunately, I can’t be present at the IGF in Japan, but we’re really, really happy to have you here with us today. As many of you know, Digital Grassroots is a youth-led nonprofit organization that is focused on increasing digital citizenship for young people from underrepresented regions with respect to internet governance and digital rights. We were founded in 2017 as one of the outcomes of the Internet Society Youth at IGF Fellowship. Since then, we have been doing a lot of work around digital literacy for young people to enable them to access the services that they need to excel in the digital age, as well as engaging them in community engagement projects with regards to digital rights and internet governance, enabling them to understand the internet ecosystem in their local community in order to properly advocate around various instances and challenges of digital rights and internet governance abuses in their own local communities. Because of that, at the end of every year, we like to congregate at the IGF to highlight the good work that has been done in our communities and to talk about how we navigate these digital rights issues in our communities as well. So today, we’re here to talk about our impact report. We will be showing how we have engaged in the last five years and the work that we have been doing with regards to building our communities and engaging in our programs.
We have a flagship program called the Digital Grassroots Ambassadors program, which we run in coordination with our community leaders for advocacy programs, as well as our digital rights learning exchange programs. All of these programs are avenues and pathways that we are using as a method of getting more young people to be aware of how to advocate for the internet in their local communities, as well as how to connect and collaborate with other participants in the global internet ecosystem, where they can come together and do this amazing work. So that is why we’re here today. And I’m really looking forward to presenting this impact report to the global community and getting everyone’s input. Thank you very much for joining us today. And over back to you, Rachad.

Rachad Sanoussi:
OK, thank you so much, Ufa. And I think we can move forward with the session. I don’t know if Esther is already online. Yeah, let me check. OK, I will give the floor again to Ufa to present the impact report further before we launch it. Thank you, Ufa. Over to you.

Uffa Modey:
All right, thank you very much, Rachad. Many of you who have had a chance to pass by our booth at the IGF Village will have been able to scan a copy of our report and download it. I’m sure Rachad also has some copies of the report that can be passed around to be scanned. This report is called Unstoppable Together. It is a summary of the work that we have been doing in the past five years, and it highlights so many of our community members. Digital Grassroots is not just an organization, it’s a community. And why is this community-based learning important? It is important because, as young people from underrepresented regions, every single resource that goes into doing the digital rights advocacy work that we do is very, very crucial to us. So this report will enable us to tell our stories from an ownership perspective, to be able to put out the work that our amazing community has been doing in their various capacities and with the various resources that have been made available to them. This report was made in collaboration with the community. It was done in a bottom-up way, using stories, highlighting the work, showcasing the experiences and the lived challenges and different instances of digital rights abuses that have been occurring in these various communities, talking about freedom of expression and privacy, surveillance, hate speech, inclusion, accessibility and other issues that would hinder open access to the Internet in so many local communities. Unstoppable Together is not just a one-off report that we want to put out. It’s an entire journey that shows a pathway towards the digital future that we are trying to build. And we want this to be something that we can build upon. So we want your feedback. We want your input. We want you to use this as a channel to get to know more about our work and how you can be a part of it.
Unstoppable Together also highlights the key ways that you can be a part of our community and how people can contribute to our community, which is very crucial to us. The work that we do cannot be sustained if it is not open and if it is not inclusive. That is something that is super important to us as well. So please engage with the report. We want to hear from you. We want your feedback. We want your contribution. We want your collaboration every step of the way. And yes, we’re also going to use this as a platform to highlight some of our community members who are doing so much amazing work in their communities, and to recognize the work that they are doing. And please, again, before I tap out, make sure you engage with the report and with the work that we do and that’s coming out of our communities. Thank you very much.

Rachad Sanoussi:
Thank you so much, Ufa. As she was saying earlier, we have some community members who are doing a great job in our community, so I would like to invite Hanna to present these community members. Thank you. Over to you, Hanna. Also, I have some hard copies of the impact report, so you can come and take one if you want. Thank you. So, Hanna, over to you.

Hanna Pishchyk:
Thank you, Rashad. I hope you all can hear me. My name is Hanna. I’m the communications lead at Digital Grassroots. I come from Belarus, but I’m currently based in France. And I think we’re coming to the most exciting part of this session for us at DIGRA, where we get to celebrate and acknowledge the amazing impact that our community members have been making, because, as Ufa mentioned, we are an organization, but we’re also a community of people who are driving the knowledge and experiences that we try to transfer to communities across our global network. The stories of the people that we are happy to recognize today are a testament to DIGRA’s spirit and values of fostering digital literacy, advocacy and impactful leadership in Internet governance. And importantly, do not hesitate to be very generous with the clapping emojis when we recognize people; I think it’s a very cool option that we have here. Yes, the first person I would like to acknowledge is Nancy Wachira. Since joining DIGRA in 2018, Nancy has magnified her impact in the digital space, leveraging her journey from a participant to a youth leader in global Internet governance initiatives. Nancy has used her DIGRA experience to become a global digital inclusion advocate, working towards reducing digital inequalities in rural communities through her international engagements, from representing Digital Grassroots at events like the Commission on the Status of Women to involvement with the IGF, ISOC and other key initiatives. Nancy has been advancing DIGRA’s mission on the global stage, ensuring that efforts to bridge digital divides resonate across different communities and inspire active participation in the digital space. Nancy, I would like to give you the floor. It’s OK now, you can speak.

Nancy Wachira:
Hello, everyone. Thank you for this opportunity. I’m so grateful to be part of this event and to be able to share my experience with you, and to have been part of this community since I joined in 2018. I was in the first cohort when DIGRA just began, and I didn’t know much about the digital space or what to really expect as I began my journey. But out of curiosity, I just followed through and participated in the digital space. I had done information technology back in my university, but I didn’t know where to begin to grow myself, to be able to speak up and to champion issues that can bring positive changes to people in the community. So DIGRA was my first community. I’m really grateful for my mentors, Esther, Ufa and Wadhangi. They really held my hand and showed me what to do in this space. And as I kept growing, I have been in the IGF space and I have contributed. Recently this year, I represented the DIGRA community at International Women’s Day in New York. It was a great platform to share my story: how I began, where I am, and the impact I’m still creating. So I’m really grateful for this community, and together we can achieve much as we keep growing and raising young people, lending a hand and showing the way. Thank you, everyone. And I hope we all participate and get to grow ourselves for the better.

Hanna Pishchyk:
Thank you so much, Nancy, and thank you to everyone who’s reacting with the emojis. The next person I would like to introduce and acknowledge is our community member from Haiti, Stanley Junior Bernard. Stanley has magnified his impact as a DIGRA community member, championing youth empowerment and Internet governance on a worldwide stage. Actively engaging with DIGRA across the years, Stanley has shown leadership in several of our projects, notably leading a DIGRA mini-hackathon, which has been a huge success for DIGRA in Haiti. Stanley’s leadership in his home country also drives the Young Girls Empowerment Initiative, where he tirelessly works towards mitigating gender-based violence and fostering digital literacy through various platforms, including the local chapter of the Internet Society and the Youth Observatory. Stanley has translated his insights into action, advancing our cause of building youth Internet leaders, both within his community and on a global stage. Stanley, please, the floor is yours.

Stanley Junior Bernard:
Hello, everyone, and thank you. Can you hear me? Yes, we can hear you. Yes, we can. OK, thank you. Thank you, Hanna. Hello, everyone, and thank you for this introduction. I am Stanley Junior Bernard. I am from Haiti, and I am also part of the DIGRA community. It’s an excellent opportunity for me to be here at IGF 2023, even if I’m not present physically; I think being part of it online is an amazing thing. And today is the best day for me, because it’s my birthday and I have the opportunity to talk about Digital Grassroots and how that community has impacted my life. When I joined Digital Grassroots in 2019, I think that was the first time I encountered things related to Internet governance, and that drew me into Internet governance. I joined the Internet Society and took many courses online with the Internet Society that helped me build my knowledge and my skills on Internet issues. And I can say now that Digital Grassroots was one of the best things that could happen to me, because it has played a significant role in shaping my understanding of digital rights and Internet governance, and it has provided me with the tools and knowledge that I needed to succeed in the digital world. Because nowadays the Internet and technologies are the new trend, and people in my country don’t really have access to technology, to the Internet, to connectivity. Even now, I still struggle to go online because of Internet connectivity. And I think the Internet should be open, free and accessible to everyone, not only to countries of the global North, but also to the global South. People should benefit from the opportunities that are online. And I can say that, through Digital Grassroots, I was awarded a scholarship to One Young World this year.
And I think this is one of the things that shows the impact of Digital Grassroots in my life, because through Digital Grassroots my work and ideas have been recognized, and I was granted a scholarship to One Young World, a global event. So, I would say that... I’m sorry. I would say that I believe that Digital Grassroots has an important role to play in building digital capacity for marginalized youth around the world through its innovative programs. I would say that the Digital Rights Learning Exchange was one of the best programs that I’ve ever attended on digital rights, digital activism and digital advocacy, because we need this kind of training to reinforce the capacity of young people from the global South. So I would encourage everybody to support the work of Digital Grassroots, because the work that they are doing is impeccable. Thank you.

Hanna Pishchyk:
Thank you so much for sharing, Stanley. I’m not sure if this is the appropriate space and place to sing happy birthday collectively, but happy birthday to you; I hope you’re going to have a wonderful day. And, yeah, last but not least, we have Grace Zawuki from Zimbabwe. Embarking on her leadership journey with DIGRA, Grace has forged her path in the community from a learner to a mentor and advocate, exemplifying DIGRA’s values of community elevation and knowledge translation. Her efforts at the Zimbabwe Information and Technology Empowerment Trust have been instrumental in embedding digital rights and literacy within local frameworks. She has translated capacity-building skills and DIGRA knowledge into actionable initiatives, not only uplifting her community in acquiring crucial digital literacy skills, but also playing a crucial role in the learning experience of DIGRA newcomers. She has been supporting our learners, creating a repository of empowered digital advocacy and literacy across our DIGRA network. Grace, the floor is yours.

Grace Zawuki:
Thank you so much. Can you hear me? Yes, we can. Hi, everyone. Yes, my name is Grace and I’m from Zimbabwe, and I’m so pleased to be part of this event to launch the impact report. My journey with DIGRA started in 2022, when I participated in the Digital Rights Learning Exchange. And I can openly say that it was an eye-opener, and not only an eye-opener: it propelled me on my leadership journey in the internet and digital rights landscape. Because soon after participating in the Digital Rights Learning Exchange, I got spotlighted, and I had the opportunity to be part of the prestigious Community Solutions Program. So currently I’m in the United States, still working on the same issues: digital literacy, and increasing digital safety and digital rights awareness amongst our communities. So, well, yeah, we are really unstoppable together. And through DIGRA I learned that instead of looking for what’s wrong in any situation, we should look at what we are strong at; if we maximize that, we can continue to have impact in our communities. So, yeah, I’m happy to be part of this community and I would like to continue to be part of the DIGRA community. The work that we are doing is similar work, and working together can make us have more impact in all our communities. And we are also putting Africa in the spotlight. So thank you so much, DIGRA, and I’m so happy that you invited me to be part of this event. Thank you.

Hanna Pishchyk:
Thank you very much, everyone. And just before we go, I would like to say a big thank you to everyone, and I don’t know if any other member of the team would like to say a few words before we pass back to Rashad.

Rachad Sanoussi:
I think Estelle would like to say something. I don’t know.

Estelle:
I just wanted to say big congratulations to all of you. This impact report would not have been possible without your hard work and also the dedication. When we started DIGRA programs, just through volunteering and collaborating, it was really in the hope that we can create new leaders, new young leaders from our side of the world so that we are more represented in the internet governance ecosystem. I’m just so proud to see what you’re all doing and the good success you’ve achieved. And just huge congratulations. We are very proud of you. And thank you for being part of our community. Thank you, Rashad.

Rachad Sanoussi:
Okay. Thank you so much, everyone. As we come to the end of this session, I would like to express my gratitude to each and every one of you for your active engagement and your contributions. In the coming months and years, we hope to build upon this foundation and move forward. Together, we are truly unstoppable. And I look forward to our continued journey together. Thank you. And until we meet again, keep the digital grassroots movement alive. Thank you. Thank you, Rashad. Thank you, everyone. I’m sorry. Are there any questions? Yes, yes. You can ask questions. You can use this mic to ask your question. Yeah.

Audience:
Good morning, everyone. My name is James. I’m from Cameroon. I want to thank you for your well-articulated presentation and for your report. But going through the presentations from all the guests speaking, I noticed one similarity: they were predominantly from, say, English-speaking parts of Africa and other such African countries. And the IGF is really struggling to promote a multilingual environment. I come from Cameroon, for example, and today is the very first time I hear about this lofty project. So, firstly, what are the conditions to become an ambassador? And secondly, what is being done to reach other countries that do not express themselves in English? Thank you very much.

Rachad Sanoussi:
Okay. Thank you so much for your question. I will give you a short answer, and my colleague will also help me. So, as I was saying, I am Rachad Sanoussi, and I am from Benin. And you know, Benin is also a French-speaking country, like Cameroon. My journey in Digital Grassroots started in 2019, when I attended the IGF, as you are attending now, in Berlin, and I met Digital Grassroots at a booth. That is where I heard about Digital Grassroots, and I decided to join one of the programs, the community leadership training. I joined this program and, over the year, I learned a lot. And after that, I joined the team. So even though I am not from an English-speaking country, I was able to learn through this journey. I learned a lot. And I think our program is also open for everyone, even if you are not from an English-speaking country. I have a lot of ambassadors from Benin as well who joined our program. But I will let my colleague give more answers. So, Ufa, would you like to comment?

Uffa Modey:
Yes, thank you very much, Rashad. And as you’ve already said, at Digital Grassroots we do a lot of work in Francophone countries in Africa, and we receive a number of applications from them when we are running our ambassadors program. Admittedly, the language barrier in Internet governance is an issue. So we have historically run a specific cohort of training for Francophone countries, where the entire training program on Internet governance and digital rights is delivered in French. The whole program is delivered in French. You can engage with us: talk to Rashad after this session, visit our website, learn more about our work through our reports, stay in touch with us, join our mailing list and see how you can be part of our community. We are 100% open and always looking for new ways to innovate around engaging more people in Internet governance.

Rachad Sanoussi:
Thank you, Ufa. We can engage further afterwards. Thank you. I don’t know if there are also questions online. No? So thank you all for joining us. It’s really great to have you all. Have a good day. Bye.

Speaker                  Speech speed        Speech length   Speech time
Audience                 141 words/minute    141 words       60 secs
Estelle                  132 words/minute    117 words       53 secs
Grace Zawuki             139 words/minute    313 words       135 secs
Hanna Pishchyk           143 words/minute    757 words       317 secs
Nancy Wachira            171 words/minute    325 words       114 secs
Rachad Sanoussi          128 words/minute    737 words       347 secs
Stanley Junior Bernard   152 words/minute    519 words       205 secs
Uffa Modey               162 words/minute    1230 words      455 secs

Robot symbiosis café | IGF 2023 WS #95

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Hiroaki Kotaka

Hiroaki Kotaka is a well-known advocate for the use of technology, particularly robotics, in various industries. He is particularly focused on developing the manufacturing and processing industry in Kyoto. Kotaka firmly believes that embracing technological advancements, such as robots, can lead to significant growth and innovation in the industry.

One area where Kotaka sees the potential of robotics is in the service industry, including assisting both able-bodied individuals and those with disabilities. To explore this idea further, he initiated the Robot Symbiotic Cafe Initiative. The initiative involves testing remote customer services and deploying robots in actual cafes to serve individuals with disabilities, demonstrating how robots can improve service delivery and inclusivity.

Kotaka has also been striving to provide work opportunities for individuals with disabilities through the use of robotics. He initiated discussions with Mr. Inoue, which led to the start of the Robot Symbiotic Cafe project. This project brings together researchers and executive managers to discuss the use of robots in customer service and communication at cafes, with the goal of creating meaningful employment for individuals with disabilities.

To ensure that robot technology is accessible to all, Kotaka collaborated with the partner company Kegan to customize existing service food robots. This partnership aims to find suitable solutions that cater to the diverse needs of individuals with disabilities. Through ongoing discussions and efforts, Kotaka and Kegan are working towards creating truly inclusive and accessible technology.

Collaboration plays a vital role in developing warm and personable robots that allow individuals to express their uniqueness. Kotaka advocates for partnerships with various stakeholders, including the Department of Welfare, city administration, and legislative corporations. By broadening these partnerships, Kotaka hopes to foster a collaborative environment that encourages the development of innovative and inclusive robot technologies.

Kotaka also emphasizes the importance of publicizing these initiatives among businesses. He believes that highlighting the benefits and opportunities associated with robot integration will encourage more companies to embrace these technologies. The Robot Symbiotic Cafe Initiative serves as an excellent example of how robots can enhance job satisfaction and meaning in the lives of pilots, further supporting Kotaka’s argument.

In conclusion, Hiroaki Kotaka is a strong advocate for the use of robotics in various industries. He believes that leveraging technological advancements, particularly in the manufacturing and service sectors, can lead to significant growth and inclusivity in Kyoto. Through initiatives like the Robot Symbiotic Cafe and collaborations with stakeholders, Kotaka aims to create accessible and meaningful work opportunities for individuals with disabilities. Overall, he remains committed to supporting the development and integration of robots in different industries.

Audience

During a panel discussion, a representative from Benin raised the question of the cost of developing a robot and sought advice on reducing these costs. In response, the representative from Kagan Inc acknowledged the complexity of quantifying the cost of robot development, explaining that the process typically takes three to five years.

The representative from Kagan Inc suggested that one way to reduce costs is by using simpler and less complex mechanisms in the development of robots. By simplifying the design, it becomes easier to manufacture, ultimately reducing the overall cost. The representative highlighted the importance of the start-up element in bringing down manufacturing costs. Start-ups often have innovative and efficient methods that help streamline production processes and decrease expenses.

Additionally, the representative emphasized that reducing complexity is crucial to achieving cost reduction. Complex mechanisms not only increase costs due to the need for intricate manufacturing processes but also require more time and resources during the development phase. By keeping the mechanisms simple, the manufacturing process becomes more straightforward and less costly.

The panel discussion provided valuable insights into the cost aspects of robot development. It highlighted the challenges in quantifying these costs due to the lengthy development process. Furthermore, it emphasized the significance of simplifying mechanisms and leveraging start-up elements to decrease manufacturing costs.

In conclusion, the session shed light on the high level of effort and time investment required to develop a robot. It underlined the importance of considering cost reduction strategies, such as using simpler mechanisms and taking advantage of the innovative methods employed by start-ups. These insights can guide future efforts in robot development, promoting more affordable and accessible technology in this field.

Manabu Inoue

Manabu Inoue is a strong advocate for promoting opportunities and inclusivity for individuals with disabilities. He believes that robots can play a crucial role in improving their lives, both in terms of communication and work opportunities.

One of Inoue’s key beliefs is that individuals with severe disabilities should be able to operate robots for communication. He observed that individuals with communication and cognitive impairments faced difficulty when using a robot-assisted customer service at a cafe. This led him to reach out to local companies to discuss the possibility of developing a robot specifically tailored to suit the needs of individuals with severe disabilities.

However, there are skeptics who doubt the feasibility of developing such robots. Inoue himself expressed doubt in the feasibility, as he found no evidence of companies already developing robots that met the specific requests. Despite this skepticism, Inoue remains committed to customizing robots to be simple and easy to operate, thus making them suitable for individuals with limited hand dexterity.

Inoue also recognizes the importance of collaboration with disability support organizations and schools. He aims to expand on supported services by partnering with these organizations and sparking a change in awareness of what can be achieved with robotics. By collaborating with these entities, Inoue hopes to create more opportunities for individuals with disabilities and provide them with a sense of pride and confidence in their work.

The sentiment surrounding Inoue’s vision and efforts is overwhelmingly positive. Pilots who have had the opportunity to operate the robots have expressed great joy and a desire to actively participate in society. Inoue’s goal is to empower individuals with disabilities, especially those with severe disabilities, by helping them obtain employment and gain a sense of achievement.

In conclusion, Manabu Inoue believes in the potential of robots to transform the lives of individuals with disabilities. Through customization and collaboration with disability support organizations and schools, he aims to create more opportunities and inclusivity. The positive sentiment from individuals who have experienced the benefits of robotic assistance further emphasizes the importance of these efforts. Ultimately, Inoue’s goal is to enable individuals with disabilities to gain confidence, pride, and employment opportunities through the use of robotics.

Leila Liza Dailly

Kagan Inc. is a startup company that was founded in Kyoto Prefecture in 2016. The company’s team consists of members not only from Japan but also from the US, Europe, and Asia, bringing together expertise from major electronics manufacturers. Kagan Inc. focuses on the development, manufacturing, and sales of robotics, with a particular emphasis on customizability to meet user demands.

A key product offered by Kagan Inc. is the Kagan Motors, which simplifies the process of creating robots. The motors have received positive feedback for their ability to streamline robot construction. Additionally, the company has launched the Kagan ALI Autonomous Robot, which is widely used in various settings such as factories, warehouses, and restaurants. The versatility of Kagan Inc.’s robots allows them to be tailored to specific needs.

The company recognizes the wide applicability of robotics in different sectors. Their robots have been successfully implemented in factories, warehouses, and restaurants, showcasing their flexibility. Kagan Inc. highlights the importance of user-centered design and interfaces, implementing foot pedals as the main interface for individuals with limited hand dexterity. Feedback from users is actively collected and used to improve the user interface, and pilots are extensively trained to maneuver the robots.

In addition to their focus on robotics, Kagan Inc. specializes in customizing robots to suit customers’ needs. By minimizing basic functions, the company ensures that their robots are perfectly tailored to each customer’s requirements. Furthermore, Kagan Inc. aims to utilize existing business estates to address individual needs and support job procurement, contributing to economic growth.

Overall, Kagan Inc. is a pioneering startup that prioritizes the development and customization of robotics. Their Kagan Motors and versatile Kagan ALI Autonomous Robot showcase their innovative and highly customizable products. With a strong emphasis on user needs and the utilization of existing resources, Kagan Inc. strives to contribute to both individual and societal growth.

Moderator

Hiroaki Kotaka, a prominent figure in the field of robotic technology, approached Kegan, a company specialising in service food robots, to customise their robots for implementation in the Robot Symbiotic Cafe. This partnership aimed to enhance the functionality and efficiency of the robots specifically for use in this unique cafe setting. The collaboration between Kotaka and Kegan was met with a positive sentiment, as the moderator of an event invited Kotaka to demonstrate the usage of these robots in the Robot Symbiotic Cafe.

During the demonstration, Leila Liza Dailly showcased the capabilities of a robot operated by an employee at the company. This provided a hands-on experience for the audience, highlighting the practicality and usefulness of these robots in real-world scenarios. The demonstration generated a neutral sentiment, with the moderator expressing interest in continuing the demonstration.

One notable aspect of the robots’ operation is the use of foot pedals instead of a keyboard for control. This decision was made to simplify the piloting process and make it more intuitive for the operators. This innovative approach not only reduces costs but also improves user experience and accessibility. Furthermore, the company actively seeks input from individuals with disabilities to ensure that the operation of the robots is accommodating and convenient for everyone.

While training pilots to manoeuvre the robots was appreciated, it was observed that this process led to exhaustion among the pilots. This highlights the importance of striking a balance between providing adequate training and preventing fatigue to optimise the performance and well-being of the operators.

A key strength of Kegan lies in their expertise and ability to customise robots to suit individual needs. This bespoke approach ensures that the robots can effectively cater to the specific requirements of different environments and users. Additionally, to reduce development costs, the company leveraged existing food serving robots, demonstrating a cost-effective and efficient approach to innovation.

During the event, a speaker from a robot manufacturing and development company shared their expertise, citing a development timeframe of three to five years for creating a robot. This insight offers a realistic perspective on the time and effort required for the successful development and implementation of robust robotic systems.

Furthermore, the speaker emphasised the importance of simplicity in technology, particularly in reducing costs. Keeping technology straightforward and streamlined not only facilitates cost reduction but also enhances usability and maintenance.

In conclusion, the partnership between Hiroaki Kotaka and Kegan aims to enhance the functionality of service food robots for implementation in the Robot Symbiotic Cafe. The use of foot pedals for control, customisation of robots to suit individual needs, and consideration for disabled users demonstrate the company’s commitment to innovation and accessibility. Further insights from experts highlight the dedication required for successful robotic development and the benefits of simplicity in technology.

Session transcript

Hiroaki Kotaka:
to be able to work out of a consultation with the Kyoto-based robotic companies and the prefecture of Kyoto by operating a robot through the internet. Thank you, Mr. Uenobue. Next, I request Mr. Kotaka from the Kyoto prefecture’s Department of Commerce, Labor and Tourism, Manufacturing Promotion Division. My name is Kotaka. I am from the Manufacturing Promotion Division of the Department of Commerce, Labor and Tourism in Kyoto prefecture. First, I would like to briefly introduce the robot initiatives in Kyoto prefecture. The division supports SMEs in the manufacturing and processing industry in the prefecture, as well as content companies, such as games and video companies, and startups. It promotes robots, one of the cutting-edge technologies. Japan used to be known as one of the world’s leading robot-producing countries; however, foreign competitors have emerged in recent years, so Japan no longer holds the number one position. To reclaim that position, we set up the Keihana Robotic Engineering Center in 2019, which supports the development of next-generation technology and the entry of small and medium-sized enterprises and startups in the prefecture into the robot industry. Over 720 research and development projects and demonstration tests have been conducted at the Robotic Engineering Center, and a number of companies have reached the social implementation stage and are conducting field demonstrations in various locations within the prefecture. Under the Robot Symbiotic Cafe Initiative, we are conducting demonstrations of remote customer service, serving individuals with disabilities in actual cafes, thereby aiming to create a place where humans and robots coexist and work together in harmony. Finally, I would like to introduce Ms. Rayla Daly from Kagan Inc. Thank you. My name is Rayla Daly. I am from Kagan Inc.

Leila Liza Dailly:
Nice to meet you all. Our company is a startup founded in Kyoto Prefecture in 2016. Our mission is a quick and easy robot for everyone. We have members not only from Japan, but also from the United States, Europe, and Asia. Most of our personnel are from major electronics manufacturers, and we conduct development, manufacturing, and sales. At the start of our entrepreneurial journey, we developed Kagan Motors, which makes it astonishingly easy to create robots. We have received favorable feedback from customers across universities and R&D fields. We then began offering motorized robots such as conveyors, rollers, and AGVs to respond to requests for use on factory production lines. In 2022, we launched the Kagan ALI autonomous robot, widely used in factories, warehouses, restaurants, and so on. Customization is the key feature, which gives us the flexibility to meet user demands such as transporting items and fulfilling communication roles. Well, thank you, Rayla. So, to all of you working on the Robot Symbiotic Cafe, tell me how this got started.

Hiroaki Kotaka:
Let’s start with Mr. Kotaka. Mr. Inoue consulted with me over the phone regarding the possibility of individuals with disabilities working remotely from home by operating robots through the Internet. Around the same time, we conducted a panel discussion called Keihana Residence to expand the network of acquaintances among researchers and business professionals in Keihana Science City, increasingly bringing together researchers working on robotics and executive managers of rehabilitation-related facilities. We discussed excitedly the possibility of using robots to assist customer service and communication at cafes. With all these factors coming together, I felt that I had to do something about the conversation with Mr. Inoue. Thus, I started the Robot Symbiotic Cafe project. So what exactly did Mr. Inoue consult you about, Mr. Kotaka?

Manabu Inoue:
Last year, I visited a cafe where robot-assisted customer service through remote operation was already being implemented. When I looked at it, individuals with communication and cognitive impairments had difficulty using those robots, and I could not imagine an individual with a severe disability operating them. Therefore, I reached out to the local companies that collaborate with us on a regular basis and discussed the development of the robot I had in mind at the time. I was then told about the Robotics Engineering Center, and I promptly called them to discuss the prospect of realizing it. On receiving the inquiry, I doubted its feasibility, as no company had yet developed robots matching the request.

Hiroaki Kotaka:
So I decided to approach Keigan, a partner company of the Robotics Engineering Center, because we had been holding a seminar on robots becoming part of everyday life, and they agreed to my request by customizing their existing food-serving robots so that we could find a suitable solution for the equipment.

Moderator:
So that is how I was able to match the two. We have brought the actual robots used for the Robot Symbiotic Cafe. Would you like to demonstrate?

Leila Liza Dailly:
So let me show you the actual robot. The person who usually pilots the robot is at home right now, and today it is being operated by a member of our company's staff. We call these operators pilots, and today one of our employees will be the pilot. Since we cannot really handle food and drinks at this venue, the robot will be carrying pamphlets. Let's see how that works. Would you like to come up to the table? Thank you for the demonstration of the robot.

Moderator:
Please continue operating. So, you conducted the demonstration in February.

Hiroaki Kotaka:
So let me know how it went.

Manabu Inoue:
This demonstration involved individuals with severe mental and physical disabilities who need constant nursing care, those who need daily medical treatment for serious illnesses, and those in so-called social withdrawal who have difficulty stepping out of their homes but still wish to work from home. Allowing them to work from home by operating a robot is what we wanted to achieve with the Symbiotic Cafe. In the realm of support for people with disabilities, this is something completely new. We want more organizations supporting individuals with disabilities to become aware that these individuals can work by remotely operating robots, as it may open up new employment possibilities. So, how did you choose the people who would take part in the demonstration test? For the pilot project, we consulted with an organization supporting individuals facing social withdrawal in the local community. The actual pilot is an individual who loves computers and the internet and who expressed a strong desire to participate. Tell us about the development of these robots. I personally don't know much about robots, but I requested that it be possible to operate the robot remotely from home, and since some of the pilots have limited hand dexterity, I asked that the operation be made as simple as possible.

Leila Liza Dailly:
So we decided to use foot pedals instead of a keyboard, which made it easier for the pilots to operate. Today we have brought the foot pedal that we used at that time. This pedal I am showing you right now is how they operate the robot. Our employee went to the home of the person who would be the pilot and asked directly what would be easiest for them to operate. I assume that people with disabilities have distinctly different kinds of disabilities; could you give us an example of the difficulties in development? Yes. The pilots were very happy when they received training in maneuvering the robots, but they got exhausted operating them, so we tried to improve the user interface. The challenge for the future is how meticulously we can address their needs. I have previously heard that startups can handle flexible requirements that large companies cannot afford to take on. Could you please elaborate on this? Our company specializes in providing customers with the most suitable robots by minimizing basic functions and customizing them; that is what we are good at. The disabilities of the pilots can be diverse, and the development cost would be high if we had to develop from scratch. What did you do to reduce the development cost? We used an existing food-serving robot, the kind that is used to serve food.

Moderator:
So you adapted those existing robots with a pedal that is readily available on the market. I know that you will be continuing this initiative in the future. How are you going to improve and continue this project?

Hiroaki Kotaka:
The most important aspect is clearly defining what kind of robot is to be manufactured for the individuals with disabilities who will be the pilots. Also, as the pilots become accustomed to maneuvering the robot, they can take on more tasks, and we can address their needs rapidly. We want to evolve the robot by talking with individuals with disabilities and defining the requirements for the kind of system that is best for them. What are the key points of the demonstration? Customization and individuality are the key points. Regarding customization, making improvements to a finished product is time-consuming and can be expensive, so time and cost can be reduced by combining existing technologies. Next, regarding individuality, full automation using robots would require time and cost for development. However, by skillfully combining human operation and robot operation, robots complement what humans cannot do, and humans in turn complement actions that robots struggle with. Through this kind of initiative, we want to collaborate with everyone in creating warm and personable robots that allow individuals to express their uniqueness.

Manabu Inoue:
So humans and robots should coexist, each demonstrating their individuality, instead of relying solely on robots for everything. What are your thoughts on this, Mr. Inoue? The pilots expressed great joy about their experience of operating the robot and want to try more customer interactions. Those who support individuals in a state of social withdrawal were pleasantly surprised that these individuals expressed a desire to participate actively, which brought them immense happiness. I hope that through future demonstration tests, individuals with disabilities will not only operate robots but also interact with people through robots, participate in society, work and earn a salary by themselves, and that society will come to accept this as the norm.

Leila Liza Dailly:
Could you share your thoughts on this, Leila? Our company has so far played the role of a robot manufacturer, and engineers often tend to focus solely on building robots using cutting-edge technologies. However, the Robot Symbiotic Cafe initiative gave us an opportunity to think about how to design a robot that can help the pilots find purpose in their lives and meaningful job satisfaction. So finally, what are the prospects for the future of this project?

Hiroaki Kotaka:
Mr. Kotaka? As the responsible department in Kyoto Prefecture, we wish to continue supporting the development of robots. I think it is important that we develop more partners, not just in the Department of Welfare but also in the cities and other administrative organizations, and through these collaborations I hope to become more connected with the world. I would also like more businesses in our prefecture to take part, which partly comes down to publicizing these initiatives among businesses.

Leila Liza Dailly:
Ms. Leila? Our company has its existing businesses, and in the future we want to address the needs of individuals through customization and help them obtain jobs: not just improving the efficiency of products, but helping every individual contribute to society.

Hiroaki Kotaka:
Final remarks, Mr. Inoue?

Manabu Inoue:
Personally, helping people with severe disabilities obtain some kind of employment is what I hope to do. For those with severe disabilities, we are customizing these robots and examining the feasibility of adapting them to different disabilities. We also want to continue developing the talent who can become pilots, and we will be collaborating with schools that help people with disabilities. Through using robots, I hope that people with disabilities can gain confidence and pride in their work and their lives, live better, and that we can cooperate with various stakeholders in supporting them. And to our prospective partners: as Kyoto Prefecture has mentioned, I would like to talk with other organizations that support people with disabilities so that we can expand this support in the future. I think it is important for them to witness that this is something that can be achieved, as it will change their awareness.

Moderator:
Thank you to all the panelists. We would now like to move on to the question-and-answer session. Is there anyone asking questions in the chat? It seems there is no one on the chat, but if you have any questions on the floor, please go ahead.

Audience:
Yes. My name is... I am from Benin. I would like to ask about development: how much did it cost to develop the robot? I am doing research on robot development, but I am aware that it can be expensive, so I would appreciate any advice on how to reduce development costs. This is a technical question; could you answer it for us? I am from Keigan Inc., which manufactures and develops robots. Regarding the cost, development took about three to five years, so many years went into developing the robot, and I cannot give you a single figure, but we made efforts to reduce the cost. I was checking with the interpreter whether they needed time for consecutive translation, but it seems to be okay. Keeping things as simple as possible is a strength of startups; that is a way to enable social implementation by reducing complexity and keeping it simple. I hope that answers your question. Are there any other questions on the floor or in the chat?

Moderator:
So, I would like to wrap up the session. Thank you so much for joining us.

Hiroaki Kotaka

Speech speed

138 words per minute

Speech length

851 words

Speech time

369 secs

Audience

Speech speed

131 words per minute

Speech length

251 words

Speech time

115 secs

Leila Liza Dailly

Speech speed

130 words per minute

Speech length

754 words

Speech time

347 secs

Manabu Inoue

Speech speed

132 words per minute

Speech length

753 words

Speech time

342 secs

Moderator

Speech speed

128 words per minute

Speech length

167 words

Speech time

79 secs