Digital democracy and future realities | IGF 2023 WS #476

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis explores various aspects of public interest internet and its societal impact. It highlights the need to understand the funding mechanisms for public interest internet, particularly in relation to the Wikimedia Foundation. Ziske, who represents the Wikimedia Foundation, has requested information on funding in this area, indicating a growing interest in understanding the financial aspects of public interest internet.

Another perspective is sought from Bill, who has a background in research and development (R&D). This aims to gain insights into public interest internet from someone with expertise in innovation and infrastructure. Including Bill’s viewpoint enhances the analysis and provides a more comprehensive understanding of the topic.

The analysis also discusses the role of Facebook in providing internet access, especially in many global majority countries. It is noted that Facebook often offers free internet services, positioning itself as the primary gateway to the internet in these regions. However, concerns are raised about the monopoly Facebook has over internet access, which may result in limited choices and potential inequalities in accessing the internet.

Furthermore, the analysis examines the global impact of the internet, highlighting its positive and negative aspects. While the internet has facilitated globalization and connected people worldwide, it has also centralized control and decision-making processes. This centralization undermines the democratic nature of the internet.

A significant issue identified in the analysis is the digital divide, particularly affecting young men and women in grassroots communities. Limited access to necessary infrastructure and content creates a substantial barrier to internet usage for these individuals. Additionally, language and content act as obstacles in bridging this divide.

The analysis also delves into how internet usage challenges social norms, particularly for young women. In many societies, using the internet is stigmatized as it is seen as a threat to established norms. This negative perception hinders women’s empowerment and their participation in the digital space.

Acknowledging the importance of digital literacy, the analysis emphasizes the need to increase digital skills among young people and women. It includes not only basic technological skills but also the ability to generate content and engage in internet activism. Promoting digital literacy can contribute to reducing inequalities and fostering greater gender equality.

Lastly, the argument is made for democratizing access to the internet. The presence of the digital divide within societies and the centralization of control over the internet necessitate equal opportunities for participation and engagement. Democratizing access ensures a more inclusive and equitable digital society.

In conclusion, this analysis sheds light on various issues surrounding public interest internet. It emphasizes the importance of understanding funding mechanisms, gaining diverse perspectives, and addressing inequalities such as the digital divide. Furthermore, it underscores the significance of digital literacy and the need to democratize access to ensure equal opportunities for all.

Rachel Judistari

The analysis sheds light on the crucial role that public interest platforms, such as Wikipedia, play in the digital world. It argues that the current digital landscape is primarily dominated by private and for-profit platforms, which in turn exacerbate existing wealth and knowledge gaps, compromise privacy, and facilitate the spread of misinformation.

However, the analysis also highlights the positive aspects of platforms like Wikipedia. It underscores that Wikipedia is a not-for-profit public interest platform that undertakes consistent technological innovation and actively addresses knowledge gaps. It emphasizes that Wikipedia is a community-led platform, with decentralized community-based content moderation, making it a unique and valuable resource.

The analysis suggests that regulations implemented in the digital space often focus on big tech companies and overlook the diversity of internet services. It argues that policymakers should ensure that regulations uphold protections for human rights and safeguard user privacy, while also fostering meaningful community participation in internet governance. The supporting facts provided highlight that Wikipedia opposes overly broad restrictions with highly punitive consequences and actively encourages meaningful community participation in internet governance.

Furthermore, the analysis points out that Wikipedia is actively involved in training large language models essential for generative AI, thereby contributing to reducing knowledge inequalities. It further showcases Wikipedia’s commitment to knowledge equity by highlighting their launch of knowledge equity funds to create more content and uphold diversity.

The analysis expresses concerns regarding the unintended consequences of public interest technologies. It highlights the potential risks of endangering indigenous languages and criminalizing dissenting voices, urging stakeholders to carefully consider and mitigate such risks.

Addressing the digital divide is seen as a major priority. The analysis points out that in the global south, where many individuals lack access to the internet, public interest platforms like Wikipedia should actively contribute to discussions aiming to bridge this divide.

Content moderation also features as a significant concern. The analysis notes that while Wikipedia puts effort into content moderation, regulations primarily designed for large corporations can complicate this process. The work being done by UNESCO to assist with content moderation is highlighted.

Furthermore, the analysis acknowledges that internet regulations can be new and complex in certain regions. It points out that some regions in Asia consider internet regulation a new concept, and emphasizes the presence of diverse ways of content modifications.

Advocacy for using superior platforms for better content moderation is presented. The analysis mentions the social media platform Mastodon as an example of a better alternative. It also highlights the importance of exceptions being made for public interest platforms, citing Rachel as an advocate for such exceptions.

Engaging young people in digital literacy is identified as a priority. It highlights that Wikimedia is actively working with communities of editors to provide training and focuses on initiatives, like in Cambodia, that involve indigenous young people in creating content and videos to preserve their culture.

Successful engagement with young people, the analysis suggests, can be achieved through collaboration with other organizations. It points out that Wikimedia has collaborated with the Minister of IT in Indonesia and expresses a desire to have more collaborations with youth-led organizations.

The analysis advocates for the promotion of the internet of commons to serve public interest and suggests that exceptions should be made for public interest platforms. However, no specific evidence or supporting facts are provided in this regard.

Diversity within public interest platforms’ community contributions is another important aspect emphasized in the analysis, without any further details or evidence being given.

Finally, the analysis advises policymakers to be mindful of the diversity of the internet ecosystem. It suggests that policymakers should take into account the various perspectives and interests within the ecosystem while formulating regulations. It concludes by highlighting the importance of promoting the internet of commons for public interest and creating an inclusive environment for all stakeholders.

Overall, the analysis provides a comprehensive examination of the role and impact of public interest platforms like Wikipedia in the digital world. It highlights the need to address wealth and knowledge gaps, privacy concerns, and misinformation, while also recognizing the positive contributions of public interest platforms in addressing those issues. It argues for regulations that protect human rights, encourage user participation, and support diversity. The analysis also raises concerns about unintended consequences and identifies priorities such as bridging the digital divide and engaging young people in digital literacy. The insights gained from the analysis shed light on the complex challenges and opportunities in creating a more equitable and inclusive digital ecosystem.

Mallory Knodel

The internet is widely seen as a public good that offers numerous benefits. It empowers communities and provides valuable tools for communication, information sharing, and access to resources. Examples of public goods on the internet include Indymedia, a platform for citizen journalism and protest news, and Wikipedia. These platforms serve as valuable sources of information and rely on the contributions of individuals to create and share knowledge.

However, there is a concern that corporations monopolise user experiences on the internet and engage in anti-competitive practices. While community-driven innovation still thrives alongside corporate platforms, it can be challenging to compete with large corporations that prioritise their own interests. Communities continue to build their own tools and generate content, but they face difficulties in gaining a strong foothold against corporate dominance.

Furthermore, efforts to create a public good internet are often not inclusive. The individuals involved in the hacking culture, which contributes to developing a public good internet, tend to be those with free time or jobs that align with this pursuit. This exclusion of people who lack the time or access to technology creates a barrier to participation and limits the diversity of voices and perspectives in shaping the internet.

To sustain a public good internet, substantial investment is necessary. Public good internet initiatives, being not-for-profit, struggle to maintain themselves without financial support. These initiatives often rely on “bootstrapping” and grow gradually once established. Without sufficient investment, the potential of the public good internet to thrive in many areas is limited.

On a positive note, communities that build public good internet technology tend to be self-perpetuating. By fostering strong community involvement, these initiatives can continue to expand and grow, gaining support and participation from individuals who understand and appreciate the importance of a public good internet.

However, the existence of public good internet is not guaranteed without strong nearby communities. Building a public good internet requires the dedication and collaboration of individuals in a specific locality. Without this local support, it is difficult to establish and sustain a public good internet that truly benefits the communities in the area.

Public interest work on the internet does not necessarily have to be for-profit to be sustainable. There are alternative ways of generating revenue, such as contextual advertising, that can be profitable and less invasive. The focus should be on creating sustainable models that prioritise the public interest.

In contrast, big tech companies are often criticised for prioritising monetisation over innovation. These corporations, with their established platforms and significant influence, can create barriers for competing services and limit the choices available to users. Targeted advertising, a common strategy used by big tech, is seen as invasive and contrary to the public interest. It violates user privacy, and there are concerns about the ethical implications of such practices.

The regulations designed for big tech platforms may inadvertently hinder public interest platforms. While efforts should be made to improve big corporate platforms, it is important to devote attention to public interest platforms, such as Wikipedia, that serve the public good. Current regulations may not fully consider the practices and needs of these platforms, which can impede their ability to operate effectively.

To promote competition and user preference, it is important to have more choices in platforms. The ability to migrate to different platforms encourages healthy competition and provides users with options that align with their values and preferences. Currently, big multinational corporate tech platforms dominate many regions, leaving limited alternatives.

Public platforms, like Wikipedia, should be considered in discussions on content moderation. These platforms have established practices and guidelines for content moderation that can serve as examples for other platforms. It is crucial to learn from these successful models and incorporate their insights into broader content moderation discussions.

In conclusion, building and sustaining a public good internet requires effort, investment, and support. While corporations dominate the landscape, efforts to create a public good internet are still underway. However, inclusivity remains a challenge, and investment is crucial for the success and expansion of public good initiatives. It is important to ensure that public interest work is sustainable and prioritise the public interest over monetisation. While big tech companies have their shortcomings, the existence of more platform choices and proper regulations can foster healthy competition and better serve the needs and preferences of users.

Bill Thompson

The analysis explores various arguments concerning the current state of the internet and its ability to fulfil public service outcomes. One viewpoint asserts that the existing internet standards are inadequate, primarily due to their domination by commercial interests. It is argued that this has hindered the delivery of public service outcomes. Efforts for intervention and regulation are advocated to address this issue effectively.

Another argument suggests that Internet governance needs to be inclusive and representative of a wider variety of communities. Traditionally excluded groups should have a voice in shaping the internet to create a fair digital public sphere. Inclusion and active participation from these communities are considered crucial for better internet governance.

The analysis further highlights the need to reevaluate and reimagine the internet to enhance democracy and protect individuals from surveillance. The current internet structure is questioned as potentially unsuitable for these purposes. A network that safeguards individuals’ privacy from surveillance is deemed necessary.

The limitations of existing protocols are seen as a hindrance to innovation in the design of modern social networks. The emergence of similar platforms that lack innovation and the perceived restrictions of current protocols provide evidence to support this argument. However, the introduction of alternative protocols such as ActivityPub offers the potential for innovation in online social spaces and presents a different lens for constructing such spaces.

Responsibility for delivering various aspects of the public interest internet is viewed as falling on all stakeholders. It is emphasised that these stakeholders should contribute to the public service internet in accordance with its overall interests. This collective approach is crucial to ensure the internet effectively serves the public interest.

Funding of public infrastructure, including the internet, is another debated topic. The argument is made that society should bear the cost of public infrastructure rather than relying on private entities or philanthropy. State funding is considered an acceptable option if it avoids exerting control over content. However, concerns are raised regarding the risk of state-controlled media associated with government funding.

The analysis also calls for a different approach to the internet model. The current model, based on decisions made by a select group of individuals predominantly from North America and Europe, is criticised for its failure to address current challenges effectively. The importance of co-creation and community engagement is emphasised as a means to reshape the internet model and build a more sustainable digital ecosystem.

In conclusion, the analysis presents a range of arguments that highlight the inadequacies of the current internet model in delivering public service outcomes. The influence of commercial interests, limitations of existing protocols, and the need for inclusivity, democracy, and community engagement are all key factors that require attention. Ultimately, a collective effort is necessary to create an internet that effectively serves the public interest.

Anna Christina

The analysis reveals various important aspects concerning internet governance and cultural diversity. One of the key points highlighted is the pressing need for diverse cultural content on the internet, with a specific focus on meeting the needs of indigenous communities. It is pointed out that a significant portion of the current internet content does not relate to indigenous communities. This is particularly relevant in Mexico, which is the 11th country with the most multicultural communities. Efforts should be made to ensure that indigenous cultures and perspectives are represented and celebrated through diverse online content, particularly as it relates to sustainable cities and communities.

Additionally, the analysis underscores the importance of establishing a governance system that fosters balanced and inclusive participation of all stakeholders. This includes promoting transparency, accountability, and stakeholder inclusion in decision-making processes related to internet governance. To this end, UNESCO has been running a consultation since September 2022 to develop guidelines for regulating digital platforms. These guidelines aim to ensure that governance systems are transparent, accountable, and promote diverse cultural content. This is important for achieving peace, justice, and strong institutions.

Furthermore, the analysis highlights the need for active youth participation in internet governance discussions. It is noted that children aged 13 to 18 expressed their desire to participate in governance discussions during the consultations. Recognizing that the youth are the most important users of the internet, their active involvement is required to reduce inequalities and promote peace, justice, and strong institutions.

In terms of implementation and evaluation processes of internet regulation, the analysis emphasizes the importance of involving internet stakeholders. It is observed that civil society participates in advocacy but does not often participate in implementation and evaluation processes. Evaluation is crucial for judging the effectiveness of the governance system. Promoting stakeholder involvement is vital for achieving peace, justice, and strong institutions.

Moreover, the analysis highlights the positive role that community networks in Mexico, Central America, and Latin America play in promoting indigenous expression and cultural content online. These networks were created in partnership with UNESCO and serve as an example of promoting indigenous expression and cultural diversity. This is related to industry, innovation, infrastructure and peace, justice, and strong institutions.

The analysis also addresses the issue of funding public interest technology. It emphasizes that responsibility for funding public interest technology lies with all stakeholders, including governments, the private sector, and users. This collaborative effort is necessary for achieving partnerships for the goals.

Another important aspect brought up in the analysis is the need for a balance of responsibilities and contributions from all involved parties to achieve sustainability. This involves governments, the private sector, and users working together to achieve common goals. This is essential for achieving partnerships for the goals.

The analysis also emphasizes the importance of the consultation process for guidelines and regulations. It notes that building, maintaining, and resisting during this process is crucial. This indicates the significance of active engagement and continuous involvement in shaping internet governance policies. This is closely tied to achieving peace, justice, and strong institutions, as well as partnerships for the goals.

Additionally, the analysis underscores the importance of identifying the roles of different stakeholders in the regulatory process. It is highlighted that this aspect received the least response during the consultation. Involvement is necessary even after regulation happens. This is tied to achieving peace, justice, and strong institutions, as well as partnerships for the goals.

Furthermore, the analysis notes that while good laws and standards are essential, they can be misused in authoritarian regimes. This raises concerns about the potential misuse of laws in authoritarian regimes. This is especially relevant for achieving peace, justice, and strong institutions.

In conclusion, the analysis provides valuable insights into the need for diverse cultural content on the internet, the establishment of inclusive governance systems, the importance of youth participation, stakeholder involvement in implementation and evaluation processes, the role of community networks in promoting cultural diversity, the responsibility for funding public interest technology, the balance of responsibilities for sustainability, the significance of the consultation process, and the role of civil society in fighting against misuse of laws. These findings shed light on the complex nature of internet governance and the importance of fostering cultural diversity in the online world. These aspects are tied to achieving quality education, reduced inequalities, sustainable cities and communities, peace, justice, and strong institutions, industry, innovation, and infrastructure, and partnerships for the goals.

Widia Listiawulan

Traveloka, a publicly traded private sector company, prioritizes innovation and technology to enhance tourism while emphasizing sustainable and inclusive growth. They collaborate with communities, governments, and stakeholders, operating in six ASEAN countries with over 45 million active users monthly. Even during the COVID-19 pandemic, Traveloka’s contribution to Indonesia’s GDP in the tourism sector reached 2.7%. They actively partake in policy-making processes and ensure compliance with local regulations, promoting customer safety. Traveloka’s commitment to sustainability involves working with women and environmental groups, supporting local communities. Their focus on youth involvement and digital literacy empowers young people to contribute to community-building and develop new tourism destinations. Traveloka promotes tourism through local perspectives, valuing the preferences and aspirations of local communities. They also engage in collaboration, partnering with institutions nationally and internationally to provide digital literacy training and foster inclusivity. Moreover, Traveloka advocates for collaboration and public-private partnerships to address technology regulation concerns effectively. They emphasize responsible technology use, focusing on customer needs and societal benefits. Traveloka’s multifaceted approach showcases their understanding of the relationship between technology, community engagement, and responsible business practices in driving positive change in the tourism sector.

Nima Iyer

Nima Iyer, the founder of Policy, a feminist civic tech organisation based in Kampala, Uganda, expressed concern over the commercialisation and politicisation of online spaces. She noticed a shift in how internet spaces evolved over time, from being free and accessible to becoming controlled by commercial interests and divisive politics. Nima believes that this trend has eroded the idea of a free, open, and publicly-owned internet. She argues that the internet should be a space that is not restricted or controlled by commercial or political interests.

Nima advocates for the creation and governance of public internet spaces that are inclusive and free for everyone to use. She is concerned about the diminishing open internet, which was initially intended to be a space that everyone could use freely. Despite the challenges, Nima believes that there is still an opportunity to create public, inclusive, and free digital spaces.

In addition to her concerns about the commercialisation of online spaces, Nima also observes a divide in conversations between for-profit and non-profit tech communities. She maintains separate Twitter accounts for both communities and notes that they discuss vastly different topics, with the for-profit community heavily focused on revenue generation and customer retention. Nima also explores the influence of profit-driven motivation in the innovation space, using the example of Couchsurfing and Airbnb. She believes that profit-driven corporations can have a negative impact on innovation.

Furthermore, Nima questions how to maintain public interest when innovation is dominated by profit-oriented motivations. She notes that the concept of public interest appears to be overshadowed by the quest for profits in the innovation space. Nima also highlights the importance of differentiating the rules for big tech companies and small start-up companies when creating data protection laws. She points out that it is unfair for small companies in their early stages to have to follow the same dense regulatory protocols as larger, technologically advanced companies.

Bill Thompson, another prominent voice in the analysis, suggests that commercial engagement should be allowed in the public service internet, but on public service terms. He believes that the public service internet should support democracy online and a digital public sphere without traditional commercial capture or monetisation. Thompson criticises the current model of a global timeline used by platforms like Facebook and Twitter, arguing that it is not reflective of real life and is not good for civil society. He suggests the need for a different way of thinking and building internet systems, abandoning certain core assumptions of existing models.

In terms of universal internet access, Nima expresses some sadness about the idea of previously disconnected indigenous communities being connected to the global internet. She questions whether constant access to global information is always beneficial. Nima also calls for deliberate design of public spaces, goods, and platforms, highlighting the need to encourage people to use them rather than defaulting to existing ones due to convenience. She advocates for conversation between government officials and civil society for effective legislation.

Throughout the analysis, there are several other noteworthy observations and insights. The importance of encouraging volunteerism and contribution to open-source software and knowledge bases is discussed. The challenge of public infrastructure funding is reflected upon, with a comparison to essential services like sanitation and water. Finally, there is a call for action on the discussed matters and a focus on the next steps to address the issues raised.

In conclusion, the analysis highlights the concerns and arguments put forward by Nima Iyer and Bill Thompson regarding the commercialisation, politicisation, and profit-driven nature of online spaces and innovation. They advocate for the creation of public, inclusive, and free digital spaces and the differentiation of rules for big tech and small start-up companies. They also emphasise the importance of deliberate design, conversation between government officials and civil society, and addressing the challenges of universal internet access and public infrastructure funding. Overall, their insights contribute to the ongoing discussions and efforts aimed at creating a more accessible, inclusive, and socially responsible digital world.

Session transcript

Nima Iyer:
and future realities. Thank you so much for coming early this morning and for joining us for what I believe will be a very interesting and exciting conversation. So I’ll just briefly talk about why we’re having this conversation and how it came about. First, let me just quickly introduce myself. My name is Nima Iyer and I am the founder of Policy. And Policy is a feminist civic tech organization based in Kampala, Uganda. And when I first founded Policy about six years ago, there was a lot of buzz around civic tech. And I feel even like using the word civic tech feels a bit dated. Like it feels very, you know, 2016, 2017, but it’s the same topics with just different names and the similar ideas. And why this is really interesting to me because I vividly remember the first time I used the internet back in the early 90s and just how much joy it had and how it felt like you could create anything and it felt, you know, it felt free and accessible. And then slowly over time, things changed and platforms became very gated and then you had to be in these closed spaces and sort of the dreams that we had for, you know, this open internet that we could all use was slowly diminishing in some ways. So now we have a lot of platforms that are fueled by commercial interests and, you know, fueled by advertisement or they’re fueled by divisive politics online. And so the question that we are asking here today is what happened to the spaces that would have been publicly owned and publicly governed? What happened to those spaces? Do we still have an opportunity to create those kinds of spaces? Who should be having these conversations about making these spaces? And yeah, that’s kind of why we’re all gathered here and also to get different perspectives of who can be in the room, who’s not in the room, who should be in the room. And yeah, also generally to talk about how this term of public good has changed over time but how it’s still very much the same concept and still very important and very relevant. So I hope you will have a great conversation with us and the format we’ll have is that we’ll talk together for about 40 minutes on the panel, 40, 50 minutes, and then we would love to have time to open it up to hear your perspectives and also to get your questions. So I know there’s often, you know, this is not a question but if you do have interesting comments to add that would definitely be welcome. So I would love to start the panel and I will first start off with Mallory and I’ll give a quick introduction to Mallory Nodal. Mallory is CDT’s, that is Center for Democracy and Technology’s Chief Technology Officer. She’s a member of the Internet Architecture Board and the co-chair of the Human Rights Protocol Considerations Research Group. She takes a human rights people-centered approach to technology implementation with a focus on encryption, censorship, and cybersecurity. Mallory, thank you so much for joining us this morning. The first question I wanted to ask you is generally about what do you think about the general concept of public internet infrastructure or public goods? How would you explain it to the people in the room, first of all, and what good is it providing to us or even what’s the potential of what it could provide to us?

Mallory Knodel:
Yeah, thanks so much for inviting me and for having me here to talk about this topic. What I really like about your framing of this panel is it answers the question, internet for what? Because I feel that we often just assume that the internet is inherently a good thing and that’s actually not a bad assumption. I think we all arrive at the same conclusion but I don’t think we introspect or remind ourselves enough what for and what does it provide. I think that where governments and corporations have made the case for why we need to move online and digitize, often those are austerity measures. Often those are ways of replacing infrastructure with digital infrastructure. And I think what we’re talking about in this panel is the opposite. Why do we have the internet? Why do we believe in it so much? Why is it so important? And I can tell you from a while back, I’ve been in this space for a terribly long time, it turns out. And I remember when we didn’t have social media or we couldn’t take for granted that one could simply go on the internet and build oneself a platform or share information. It started for me when I was an activist with Indymedia where we were going around mostly just filming protests or sharing information about protests. And then that wound up online because the Indymedia websites were ostensibly somewhat open. They were kind of the proto web 2.0. You could upload an event or share event details with people, you could then post a blog or we just called it news. We could post news from a protest on the Indymedia website. And then those got published. And so that sort of citizen media became a real precursor to what we see now pervasively in social media where a lot of that content now is on corporate owned private space platforms. Indymedia still exists. Those things are still around. Other things that are in that spirit are like Wikipedia, things where we’re co-generating with one another in aggregate content. And I think the other thing that back in that time when I’m really stretching my mind backwards where we were really insistent upon owning the technology and not just owning it, in terms of having a bare metal server somewhere in a co-location center that you could visit it and check in on it, see how it’s doing, install the software you want on it, make sure you have the encryption keys and no one else, et cetera, et cetera, we were also really invested in figuring out how to do it too. So it wasn’t just about the having, it was also about the doing and the making. And I feel like that in and of itself was quite an empowering sort of action because we were actually building cool tools. Like I mentioned, Indymedia sort of invented social media. We were, and we were hacking on it, we were figuring out. And so I think there’s some spirit of that that still happens. I see it everywhere. It’s sort of a yes and, right? It hasn’t been that corporates have sort of replaced this, it’s just that we now have to compete with the corporates that of course act in anti-competitive ways, they are interested in capturing users, they have all kinds of other incentives. And so while some of those traditions are still around, they’re just not as present, they’re not as well used, they’re not as well remembered. And so I think, yes, the internet is itself a public good, but I think all of the things that sort of come out of it when the exercise is itself the end goal, really I think is what communities end up coming up with as what are public goods for them. So I think I’ll stop there and let you introduce the rest of the panel.

Nima Iyer:
Yeah, I just wanted to add on to that in terms of what you said. For example, what groups do you think benefit from these public goods and who is excluded as well? Just as you said, the corporations tend to own them now, so if you could just expand on that.

Mallory Knodel:
Yeah, I think your question about exclusion is a good one and I’m sorry, I didn’t mention it before. I do think that while we like to valorize this sort of hacking and the making and the doing, it is not that inclusive, it does require a lot of time. And so much of the people in this space still today are folks who have free time or that have jobs that align with this sort of work. And so it does, by virtue of that, simply exclude people who maybe don’t have a lot of time to just try to figure out the technology or they don’t have access to those things. So I think we shouldn’t be too overly enamored with this idea that we can just build it and make it. It actually does take a lot of time, it does take a lot of investment. And so I think without a concerted effort to build up the public good internet without real investment in money, again, because we’re not doing this for profit, there is no business model, that it won’t thrive and in a lot of places won’t even exist at all. And because these communities are very much bootstrapping communities, meaning that once they exist, they start to grow.

Nima Iyer:
There’s a request for you to speak a little bit slower.

Mallory Knodel:
Oh, certainly, yeah. Absolutely, yes. So I was just finishing up, but what I was saying is that a lot of the communities that make public good internet technology tend to be self-perpetuating. So that needs to be grounded in existence. And the opposite then is also true. If there is not a strong community of building a public good internet nearby, it’s really difficult to expect one to just happen or expect the local communities there to benefit from a global public good internet when it’s not in their local language, it’s not necessarily serving their needs. So again, I’ll just reiterate the main point here is that it takes effort and investment, support, money, et cetera, to make it happen.

Nima Iyer:
Thanks, Mallory. I think I still have more questions on that topic, but let me get on to some of the other speakers as well, because I definitely am curious in terms of, when you say the investment and the money for a public good that will not make money as well. And yeah, I’m curious, we’ll discuss it later, like where might this money come from and how would it be sustained? But we’ll come back to that. I would like to bring on our next speaker, who is Bill Thompson from the BBC. Yeah, I think he’ll-

Bill Thompson:
Hello.

Nima Iyer:
Hi.

Bill Thompson:
Can I be seen or even heard?

Nima Iyer:
We’re just waiting for your image to come up on the screen, just one. I’ll introduce you in the meantime.

Bill Thompson:
It’s not worth waiting for.

Nima Iyer:
Okay. Oh, whoa. I’ll introduce you in the meantime as that happens. So Bill will be joining us remotely. Bill leads the public value research in BBC research and development. He’s also well known as a technology journalist and advisor to arts and cultural organizations on matters related to digital technology. From January, 2001 to April, 2023, he was also a regular studio expert on the BBC World Service Technology Program, Digital Planet, which is also known as Go Digital and Click. And he still appears regularly as an independent commentator. He’s an adjunct professor at Southampton University and member of the board of the Web Science Trust. So Bill, we’re still waiting for your image to appear. Should we just go ahead?

Bill Thompson:
I would carry on. I’m better on the radio anyway. I know that.

Nima Iyer:
Oh, there you are. All right. So we can see you on the screen now. Welcome. Welcome and thanks for joining us very late your time. We really do appreciate it. So the question that I have for you today, Bill, is how can we build internet technologies that are architected, designed and deployed to meet the specific requirements of public service organizations? So in simpler words, how do we make these public goods and how do we make sure that they work within the current standards of the internet? So what’s the best way we can go around to create these digital public goods?

Bill Thompson:
Oh, the easy questions first then. I think that it’s interesting that you say that we do them in line with current internet standards because that sort of assumes that what we’ve got now is a sufficient base for public service outcomes. And I’d argue just in line with what Mallory has been eloquently saying that the history of the network over the past now 50 years is that we have a set of technology standards that have failed to deliver public service outcomes that have been subverted, that have been taken over by commercial interests intentionally in the governments have sort of given that space to commercial interests, but also the standards, the technical standards, the protocols themselves have proved unable to resist commercial pressure and have not effectively delivered good outcomes. And we see that again and again in the way that the open web has been closed in the way that sort of things we would like to happen in terms of open communications protocols haven’t happened. So part of what we’re looking at at the BBC is in fact to ask whether we need a significant intervention in the underlying technology stack as well as work on regulation and governance. So let’s not just accept the internet as it is, but let’s think about how we might build it or improve it and design it to deliver those outcomes. So bring in the sorts of communities that have been traditionally excluded from internet governance activities, bring the sorts of communities that were definitely not part of the conversation in the 1980s and 1990s when today’s network was emerging and try to have a more structured conversation. As a public service broadcaster, you see that the BBC has spent a hundred years making television and radio work. And it feels to me that as part of our mission, we should be trying to work with others to make the internet work. And that means trying to go back to basics, to ask ourselves what a network would look like that could allow us to effectively assert say identity that could protect people from surveillance, that could deliver those public goods. And then on top of that, we could start to build a digital public sphere in which people could feel more fulfilled, could feel happier, could feel protected from some of the bad aspects of the commercial internet if they chose it. And so I’d say the two parts of your question go together quite effectively in that we want to consider what good public service outcomes are. We sort of know what they are in the real world. We sort of know what they are in the broadcasting space that the BBC knows very well. I think we’re quite unclear about what they would be online, particularly when we have many different constituencies of interest. And so we need to have the widest possible coalition of interest, people talking about this, designing the network, but we shouldn’t assume that what we’ve got today is actually the right starting point. Perhaps the radical thing to do is to accept that if we’re to serve democracy and serve digital democracy, we should be willing to ask some very hard questions about the way today’s network runs, the technical protocols, the design standards for our applications, and the governance, and whether that’s the right way to deliver the sort of public service internet that we’re looking for.

Nima Iyer:
Thank you. Thank you so much for that, Bill. I think it’s interesting in terms of the design because a few, I want to say a few weeks ago, there was suddenly a ton of platforms that came about to replace Twitter slash X. And it just felt like over the course of two weeks, there was like 10 new online platforms, but they all looked exactly the same. There was no innovation. It was just copy paste of the same platform. And it just felt so boring. Like, isn’t there another way to design a space where we can share our very brief thoughts? But I think it’s really interesting. And like, yeah, how do we get to get together and design something that looks different from what we currently have? And yeah, it just, it felt so restrictive.

Bill Thompson:
Indeed. And of course, part of that is if you like, the network primitives, the underlying protocols that you have to work with if you want to build a modern social network are themselves quite limited. So, you know, the emergence of ActivityPub was brilliant because it was a different way of thinking about how you might construct an online social space. And it allows you to have different design criteria to work in a different way, to build security into it in a different way. And I think it’s that novelty that is going to be absolutely important to the next generation of the internet, that what we’ve got now doesn’t feel to me like it’s a good starting point. So let’s have the sort of radical conversations that we could have in this room and see where they take us.

Nima Iyer:
Lovely. Thank you so much. All right. I would love to move to our next speaker, Anna Christina Aruelas from UNESCO. Thank you so much for joining us this morning. Anna is a Senior Program Specialist at UNESCO Communications and Information Sectors, Section for Freedom of Expression and Protection of Journalists. She has dedicated her work to the promotion and defense of human rights, freedom of expression and the right to information. Previously, Anna Christina was the Director of Article 19’s Regional Office for Mexico and Central America. Once again, thank you for joining us. The question that I have for you builds upon what Mallory started, talking about who’s included and who’s excluded and the kind of resources that are needed. So I’d love to ask you, how do we ensure that various stakeholders are heard and have the appropriate input so that we can develop these online governance structures that serve everyone?

Anna Christina:
Great, thank you very much. It’s a great conversation. I just wanted to think of what Bill was just saying of how we are thinking the internet and how we are including different voices within the internet. But, and that remind me of one of the things that my first job of UNESCO was related to, which was trying to make indigenous communities content within the internet to be available and how the possibility of creating indigenous communities content, acknowledging that most of the content right now is content that do not relate to most of these indigenous communities. I’m Mexican and Mexico is the 11th country with most multicultural communities. So I was thinking on how can we actually make sure that diverse cultural content, that cultural expressions are well set in internet. And that when we navigate into internet, we relate to those communities that live in our countries more than to other community. And as long as, at the same time, as we relate to other communities from other countries. Because as I say, in my country, sometimes we don’t know, we don’t even know about indigenous communities, even though they live in the side of our door. So I just was thinking about that because this relates a little bit of what UNESCO is doing right now and what we’re intending to promote in this process of defining how the governance of digital platforms. look like when we’re facing different processes, regulatory arrangements in different parts of the world. So UNESCO has started since September 2022 a process of consultation on guidelines for the regulation of digital platforms. In the beginning we started thinking about how the different discussions around regulation should take shape and try to create an understanding, a common understanding, that a human rights-based approach should come into place. And we realized that there were three elements that we wanted to enforce. One is that, as some of you know, UNESCO endorsed in a declaration, unanimously, that is called the Windowhead Declaration, that said that information is a public good, and that there’s three steps to actually make sure that information becomes, as a share, good for everyone. The first one is transparency for internet platforms, the second one is empowerment through media and information literacy mechanisms, and the third one is media viability. So through that, taking that in mind, we started this discussion recognizing that the thing was happening in silos, that we wanted to maintain the freedom that we all have in the internet. We wanted indigenous community to be able to engage, to have cultural content within the internet as we have it, but at the same time we were looking that regulation that was happening around the world was targeting the users and not seeing what the companies could do to be more transparent, to be accountable, to identify what was that phenomenon that wasn’t to be targeted, such as disinformation, hate speech, et cetera. So through a different process, it was three stages of open consultation where we received more than 10,000 comments from many of you, we realized that what we wanted is, one, to safeguard freedom of expression, access to information, and I will say one thing that will come in the next version is diverse cultural content, because one of the things that we aim in this process is to balance and make sure that whatever the governance system is, thinking that there’s always complementarity between self-regulation, core regulation, and statutory regulation, whatever that kind of shape of arrangement, of regulatory arrangement is, the governance system, which is a group of people, a group of people that should be identified, and this relates to your question, we need to identify those stakeholders that are interested to participate in the governance system, and the governance system had to be able to create balance in the participation of these stakeholders. We need to bear in mind that when we’re talking about a governance system, we need to include those voices that are mostly affected by the different phenomenons that we are seeing in the internet, and that are the issues that we want to address in order to also preserve freedom of expression, access to information, and diverse cultural content. So this is one of the things that UNESCO guidelines are trying to put forward, how we can ensure that governance systems are transparent, how we can ensure that governance systems are accountable, that they promote diverse cultural content, that actually they are and have in place check and balances, because sometimes even when we’re talking about self-regulatory measures or self-regulatory arrangements, there’s not within a specific check and balances or mechanisms to be accountable, and we want them to be able to be open and inclusive and accessible for everyone, not only for the ones, the technical community or the people that knows about the internet, but the people that wants to engage with the internet and have in the possibility of create their own content. So I will say that for us, there’s one, two, three, four, five, six elements that we said about the multi-stakeholder approach within this governance system. The first is acknowledging and identifying the stakeholders, including the companies that should be responsible for the compliance of the five principles set in the guidelines, then afterwards I can talk about them, and when identifying these companies, the regulators should bear in mind, yes, on one hand, the size, two, the market share, and three, the functionality of the platforms. And in this last section, I want to stop a little bit because it has to do with public interest, internet technologies. In this last section, the guidelines are clear, that when a governance system identifies which are the companies that should be on the scope, there should be a clear understanding of what is the kind of functionality, business model, service that the companies place, etc, etc. So I could read the thing, but it’s complicated. And then the second thing is encouraging inclusive participation, and when we say encouraging inclusive participation, it’s not about only the usual suspects, but actually one of the things that we receive from the various submissions from the consultation, for instance, for children from 13 to 18 years old, we’re like, we want to participate in these discussions, we are not in these discussions, you know, like, and you’re always trying to protect us, but how are we enabling the possibility for us to participate and engage more in the internet and in the decision-making process of the governance system? How are you giving us the tools to actually engage in these processes? And I think this is an important question because I don’t see that we have been able, for instance, in this forum, to bring together people that are actually the most important user in the internet right now. And the third thing is creating balance, that means acknowledging that the different actors within the stakeholder or the governance system have different levels of power, so we need to create balance and understand how balance should be worked. Ensuring transparency and accountability, as I already said, collaborative decision-making, so it tends to put forward a set of guidance of how decision-making is going to be, and then coordinating implementation efforts and evaluation. So that means that when we talk about multi stakeholderness, it’s not about just the moment of releasing any type of regulation or any type of code of conduct or any type of whatever, we need to participate in the implementation process and in the evaluation process. What we’ve heard from the regulatory groups is like, city society participates a lot in the process of, you know, advocates a lot for or against regulation, but then once the regulation pass, they leave us alone, they are not with us, and we need to participate together because we are the technical person that are going to implement regulations that are facing the different questions, and we don’t have the participation of the different stakeholders in our decision-making process. So I think that’s another important thing, and the evaluation process, which allow us to identify if the governance systems is working or not. Thank you very much.

Nima Iyer:
Thank you so much, Anna-Christina. Thank you for breaking that down, I think that was really helpful. All right, we’re going to move on to our next speaker, and our next speaker is Rachel Judistari from Wikimedia Foundation, and Rachel is Wikimedia Foundation’s lead public policy specialist for Asia. She has extensive working experience engaging key stakeholders through lobbying and advocacy to promote knowledge sharing, innovation and village governance, human rights, and youth empowerment. All right, so what I was thinking about while this discussion was going on is that I have two separate Twitter accounts, and on one it’s the people in this room, it’s about like open source or non-profit driven public interest tech, and then on my other Twitter it’s purely for-profit, and the conversations these two groups are having do not intersect at any point. The for-profit one is about how do you launch a SaaS, how do you get the most money from your users per month, how do you raise your prices so that you can have the most money and recurring revenue, and it’s just all about like how to suck out the most money from possible from customers, identifying their pain points, how do you keep them locked into the platform. A big thing in that is like how do you reduce churn, which is people dropping from your platform, and yeah, two very, very, very, very different conversations. So I wanted to ask you, when we think about public interest, what does it mean to place this public interest, these public goods at the heart of innovation or regulation? So I feel like the innovation space is really being taken over by, I don’t want to say corporations, but people who want to make profits. I also thought about this example a few weeks ago about couch surfers. So when I was in college, couch surfers was really popular. In case you don’t know, it was basically a platform where you could go to different countries and stay for free in someone’s house, and you would stay on their couch most likely. And after Airbnb came about, I feel that it killed couch surfing, because I actually logged into my account after like 10 years, and it’s become like a cesspool. The vibe is gone, you know. And then on the other side, it’s all about Airbnb, and like how much money they can make, and how they’ve taken all the apartments. So it’s a long-winded way to say like, yeah, how do we keep public interest when innovation nowadays is really focused on profit? Over to you, Rachel.

Rachel Judistari:
Thank you, Nima, for giving me the longest questions. One million questions. But yeah, good morning everyone. So I think I just want to summarize what has been shared by previous speakers, that the digital world today are mostly private and for profit platforms, as Nima also said. And in some cases, the privations of internet amplify wealth gap, prevent equitable access, and also exacerbated knowledge gap, especially for women, indigenous people, people of color, and other socially depressed groups. It’s also compromised our privacy and intensified polarizations and disinformation that is very detrimental to the protections of human rights and democratic values. So at this juncture, I also want to give you good news that Wikipedia still exists. We are the only not-for-profit platforms which are maintained by a community of users that are consistently ranked among the top 10 most visited websites. In this year alone, about 4.5 billion unique global visitors visit Wikipedia monthly. No one owns Wikipedia, and it’s available for free without advertising, without selling personal data, while maintaining strong user privacy protections. However, when you mentioned about innovations, this is also something that we are consistently doing. We realized that as the world’s largest online free encyclopedia, we play an essential role in training most large language models, which is essential in generative AI. I think we’ve heard the buzzword since day one of IGF, I don’t want to bore you, but what we are trying to do is to address knowledge gap within our communities. We also understand that while we are not-for-profit public interest platforms, we are far from ideal. Majority of the editors, we have around 300,000 editors right now, are still from Global North, and we want to diversify our community of editors, and also providing tools and accesses for most repressed groups. For example, two years ago, we launched knowledge equity funds, and in this year, we provided funds to Aman Alliance Masyarakat Adat Indonesia, one of the largest indigenous people alliances with more than 2 million members, to create more content in wiki media projects, to preserve their indigenous cultures and languages. We also have some projects to ensure the participations of women, people of color, and queer people through art and feminism. So by providing more profile of women in Wikipedia, we hope that it can shift the conversations around us. And the second part of your question is also about regulations, and what are the key principles that we need to preserve to ensure the protections of public interest platforms? Well, this is a dicey topic right now, because I feel like in the past few years, we saw a surge of very restrictive regulations on content moderations and platforms. However, the creations of these regulations are often focused on the big tech, and forget to consider the diversity of internet services. So some of these policies prescribe overly broad restrictions with highly punitive consequences, which also affecting our decentralized community-based content moderations. So hopefully, when new regulations are created or the current regulations are revising, the policymakers can also bear in mind the diversity of internet, especially for public interest platforms like Wikipedia, where we are using our community-led models to maintaining the website, but also becoming the antidote of this information. Because daily, our editors are doing fact-checking for more than 50 million articles that are available in Wikipedia. And we also want to encourage the regulations that caters internet to not solely mandating automations of content detections, but also help create opportunities for people’s participations to avoid creating a digital divide. Another principles that need to be protected within the internet regulations is definitely meaningful community participations in internet governance. I think Anna-Christina has mentioned earlier on the importance of that, and I would like to resonate with that, because decentralized content moderations model is one of the ways to preserving democratic values in the internet. And we also see the importance of having open and free internet for a diverse and equitable digital environment. We saw internet shutdowns, service interruptions, website blocking as means to hinder Wikipedia volunteers collaborations. And hopefully this also can be addressed technically, but also regulations-wise. Lastly, the regulations should definitely safeguard user privacy and ban intrusive surveillance system, while also upholding our protections to human rights. And lastly, because our next speaker will be coming from private sector, I also want to encourage further collaborations and communications with commercial platforms that also have pivotal roles in sharing information globally. So thank you, Dima. I hope that I answered your questions.

Nima Iyer:
Yes, yes, yes, definitely. Thank you so much for that. Thank you. But what you said also got me thinking about, generally interesting talking about how regulations are often aimed at big tech. And I was doing a couple of surveys, interviews a few months ago, looking at Kenya’s data protection. And you know, on the surface it looks great that there’s these data protection laws, there’s a data protection office, you need to comply with all these different laws. But then I think about small companies that are just starting out, because I used to be a small company that was just starting out. And I couldn’t imagine adding that other layer of work that you’d have to do when you’re a two-person company. And you’d have to follow all the same rules for a company that has a hundred thousand employees, and there’s no way around that. And it feels extremely unfair that it’s the same rules that applied despite such different context. So that’s a really good point, thank you for that. All right, I would love to bring on our last speaker, who we’re very excited about, Widya Listiawulan, who is a VP of Public Policy of Traveloka. And Widya will be joining us virtually as well. Thank you so much for joining us this morning. Widya has 20 years of experience on public policy. Currently she leads the leading, sorry, she leads policy work of Traveloka, the largest travel and lifestyle app in Southeast Asia. And previously she managed public policy at Amazon Web Services, and also worked in the UN. So Widya, we’re just waiting for your image to appear on the screen. If you’ll just give us… Hi, welcome. There you are. Hi, it’s so lovely to have you. Thank you so much for joining us this morning. So yeah, as Rachel already prefaced, we’re very interested to hear from you about how the private sector, or you know, e-commerce, business… businesses can be a part of this discussion about public interest tech, how can companies ensure that some of the principles of public interest tech continue to live on? So if you can just add generally to the conversation that we’ve been having, we would really love the private sector perspective. Please go ahead.

Widia Listiawulan:
Thank you, Neema. Thank you, Neema. Hi, everyone. Good morning from Jakarta, Indonesia. Thank you for having us in here. My name is Vidya from Trafaloka, but first of all, perhaps some of you not really familiar with Trafaloka, so just for a second, I’d like to share what we are doing in Asia, right, and how far we’ve been working in terms of innovating travel and providing convenient services for customers globally. So Trafaloka has been here since 11 years ago. We started from, you know, meta search and trying to help people to travel conveniently. And after 11 years of, you know, working with all of the ecosystem, now we are operating in six countries in ASEAN. We have more than 45 active users monthly. We have more than 2 million partners, and by partners mean restaurants, hotels, flights, transportation, as well as all the ecosystem of tourism sector. And you know, we don’t stop here. We hope to expand and work more and provide more and better service for customers globally. Now to add on the discussion that we have this morning, we as a company believe that innovation is the key factor, technology and innovation are the key factor to boost tourism in the world. And perhaps we remember back in COVID time, we understand that tourism is, you know, one of the biggest industry that hit the most by COVID because, you know, people don’t travel, didn’t travel, people didn’t, you know, want to go outside and so on and so forth. However, Nima and everyone here, we actually very, very proud because this year we published our impact study showing our impact to community, to society, mostly during the COVID era. So during that time, we actually contributed 2.7% GDP to tourism sector in Indonesia, and that’s quite large. And we didn’t work alone, obviously, we work with the government. We work with community in Nima, we did a lot of digital literacy throughout the years, and we aim to have 100,000 participants from tourism sector in our digital literacy program. We work with community across Indonesia mostly, we work with women community, we work with fishermen and, you know, environment community to make sure that we have sustainable component in tourism because according to our data, there are four points as a trend in terms of tourism after COVID recovery. Number one is actually flexibility that we provide through our innovation and technology. Number two is people tend to travel in nearby areas. Number three is people prefer to travel outdoor. And the last one is people actually prefer to travel in area that offers sustainability practices. And we actually focus to make sure that sustainable is in our core of business. Now we’re talking about policy. I heard Rachel say that, you know, there should be a collaboration with the government. Bill mentioned the openness of government, and we all we agree with that. And therefore, Traveloka is actually very active in association, both locally and regionally. For example, in Indonesia, we have an association for e-commerce called IDEA, and we become one of the active participants, active members, and actually we hold the position there. And also we actually the coordinator of industry task force. This is a task force assigned by the Ministry of ICT during G20 MIMA. So in this two organization or association or community, if you may say so, we provide input, we provide practices, we provide lesson learned that we capture on the ground that we heard from our customers. And then we provide input to the government, to the regulators, with the hope that innovation, regulation, and customers can actually talk together, can actually produce a solution that fits for everybody needs, that provides safety for our customer, but still, you know, comply to the regulation in the local. So I think that’s, you know, that’s opening MIMA. I hope I answered your question and I’m happy to further discuss. Thanks. Thank you so much for that, Widya. Thank you. All right. I want to go back to Mallory with the question that I started in the beginning, and I’m actually going to ask you two questions in one because I feel like they’re related. So of course we’ve heard from Widya, but I’d love to also get your idea in terms of can public interest work in a for-profit model, and yes or no, maybe, but if not, like how would you otherwise fund the infrastructure and maintenance required for public interest infrastructure?

Mallory Knodel:
Fair question. I set myself up for this. I just want to correct, I think, a slight nuance that I hear a lot, which I don’t think what we see in the massive corporate big tech space is innovation. It’s monetization. They’re taking things that people want that have already existed that are there, right, and then they’re figuring out ways to make a lot of money off of it, right? So, you know, we’ve come up with loads of examples already on this panel. I don’t have to restate them. How do you make that profitable? I don’t know that that’s the question, right? What we’re asking is not profit, but sustain. How do you make it sustainable? So I think that there are a few different ways to look at this. This is not at all going to be coherent because this is not my area of expertise, but one, for example, is like barrier to entry. Right now, it’s really difficult to compete because the barrier to entry is enormously high. We’ve monetized just about everything at this point, right? We’re now picking up the scraps off the floor. Even the big corporates, right, are suffering. They pretend that it depends on the day, right? Are they doing awesome, making loads of money for shareholders? Are they really losing a lot of money and they need your pity? It’s hard to follow. The other issue then, too, is I think a lot of what we’ve been talking about so far is assumed that we’re talking about platforms or social media, but there’s actually tons of different services out there, right? There’s, you know, email and web hosting. People do pay for those things. Businesses pay for those things. There’s financial services, certainly something that people pay for. Lots of things that are possible to be made in the public interest without profit-seeking but that typically just aren’t because we’re really just hyper-focused a lot of times on, like, what’s social media doing? How do you make social media profitable? And the last thing I’ll just say is that I think a lot of our—I would say often we are critiquing this issue of surveillance and privacy violations in service of the, quote, innovative, you know, targeted ads-based monetization, right? That is really narrow, and I think it’s starting to break down already. Maybe I’m too eager to see it collapse, but I don’t think necessarily the issue is with advertising itself. There’s a lot of ways to do advertising that’s not targeting, right? Contextual advertising is great. If I’m already reading an article about something, it’d kind of be great to see ads related to it. There’s no need to necessarily, you know, again, like, try to sell me wool socks in Washington, D.C. in the wintertime. Like, I’m going to buy warm socks when it’s cold in a place that I live. I don’t need an ad from Facebook to tell me to do that. So we’re kind of wasting a lot of potential on this idea of targeted ads, and so I’d really like to see that go. And I don’t think that, for example, that is a monetization strategy that’s at all compatible with the public interest, but we don’t need to just look at the figures to make that determination. It’s inherently a paradox to survey and to serve in the public interest. So I think maybe when we’re coming up with monetization schemes or sustainability schemes, right, that there’s alignment with values, and then that really points the way towards what’s possible. And so I don’t think that there’s any issue with that. It’s just, yeah, it has to be done with principles in mind.

Nima Iyer:
Thank you so much for that. And I think you’ve made such a great point that a lot of it, yeah, it’s definitely not innovation. It’s just monetization. I saw these angry messages from people because there was a website where you could learn, where you could get sheet music for guitar that had existed for, like, 20 years and was free, and then somebody bought it up and made it a SaaS, and now you have to pay a monthly subscription. And yeah, but that was praised as a very good business. So on the other side, interesting. Okay, I’d love to bring Bill back. Hi, Bill.

Bill Thompson:
Hello there. I’m still here.

Nima Iyer:
I wanted to ask, how do you determine responsibility and accountability for delivering various aspects of public interest internet? So yeah, please go ahead.

Bill Thompson:
I think that, I mean, it’s a very broad question and a useful one. To some extent, it’s the responsibility of everyone who wants a public service internet to figure out what they can do to contribute to it. And then we can look at existing institutions and organizations and ask whether they are aligned with the overall interest of the public service internet. So when Vidya was talking about commercial engagement, there should be no barrier to commercial engagement with a public service network, as long as it’s done on the public service terms and not on commercial terms. And there should be no barrier to anyone’s or any organization’s engagement, as long as they accept the terms of trade, that what we’re looking for in supporting democracy online and supporting the idea of a digital public sphere where society can come together is something which is sustaining, something which has positive attributes and is not subject to commercial capture or monetization, as Mallory was saying. In that sense, it’s up to everyone to decide how they can contribute and how they can support it. The issue, as ever, is going to be coming up with some underlying principles that we can all agree on about how such a space, how such a network should be constructed and run, and then also feeling comfortable with the fact there will be divergence in how it’s delivered into different cultures, to different interest groups, to different societies, to different countries. Because one of the problems that’s emerged in the last few years has been the idea of the global timeline, that Facebook, Twitter as was, want everyone to see everything and we all exist in the same space. And that’s not how real life works, it’s not effective for us as human beings, it’s not effective for civil society. And so we need to abandon some of those core assumptions on which the existing systems have been built and look to a different way. I do not have an answer. I have an organization, the BBC, which has been quite good in the past at figuring out how to do these things in the world of broadcasting. I believe there are enough of us, some of us who are in this room right now, who care enough about the model of an internet that is sustaining and nourishing to want to build it and to have those difficult conversations about what it might look like. And everyone brings their own concerns to the party. We try to be much more representative than we have been and certainly than we have been in the past 30 or 40 years building today’s network. If we do that, my optimistic view from this side is that we can achieve something really good and valuable, that we can begin, we can outline the design principles for a network that will actually serve the public interest and will sustain civic society. As I say, I don’t know what it is yet. I do think there’s a process for getting there beginning to emerge and this conversation is part of that process.

Nima Iyer:
Thank you so much, Bill. Thank you. I have a last question and then I will open it up to the floor for a discussion. And my last question is to Rachel. So this whole time we’ve been having a conversation, we’ve been using words like public interest, public good, right? So we’re inherently assuming that it’s good. And like as Mallory said at the start, is the internet always good? Because I was having this conversation with somebody about how they brought the internet to these like really previously disconnected indigenous communities and I almost felt sad. I mean, I don’t like access, like, you know, access is great and everything, but also it’s sometimes it’s like, yeah, what if we just lived in a world where we didn’t have to know what was happening in American politics all the time? You know, what if? So let me ask, let me close my questions with asking what unintended consequences could public interest technologies have? Yeah, so what could go wrong? How might we anticipate and or mitigate them?

Rachel Judistari:
Well, it’s kind of very interesting questions, but there are some risks that can be affecting public interest platforms, especially in the process of knowledge creations itself. As you know, like two thirds of global majority countries are consuming information from the internet. However, only less than 15% of representations of the global south are actively create knowledge online. And mostly the contents are in English. So one of the possible risks that we might be facing is endangering indigenous and less resources languages. And as I shared with you, this has been picked up as one of the key priority of the foundation’s knowledge equity is one of our main goal in achieving our 2030 visions. And in doing so, we are working with community of editors, partners like the UN and government to do digital literacy so that more people can contribute in the creations of knowledge. Second of all, internet is only a reflection of what’s happening in the society. So it’s unfair if you want to have a free and accessible internet, while in reality, civic space are shrinking. And sometimes information in internet, especially the creator of that, can be utilized to punish the information that they put in the internet. And we see some of these cases happening in public interest platforms. So definitely regulations that criminalize dissenting voices need to be addressed, while we are also strengthening community resources to ensure the holistic security of contributors of the internet. And ultimately, while we are thinking that internet is everything, and if I don’t have internet access in five minutes, I’ll definitely get anxiety attack. But it’s literally not everything. There’s a lot of people who do not have access of internet. And the digital divide is still become one of the main issues in the global south. So although this is not specifically the risk that arises from the public interest platforms, but I feel that the public interest platforms should also contribute into discussions on how to addressing this access inequity. So yeah, I think I’ll stop there. And hopefully, other people can also have more questions on this. Thank you.

Nima Iyer:
Thank you so much, Rachel. I just had one funny example to share on what I consider a public interest tech. And so my mom is from Tanzania. And a few years ago, they started digitizing their government services. And so before, it’s quite a centralized country. Before, you’d have to go to the capital where the office is, give the papers, a person’s gone to lunch, the person’s not around, the person has been sick for two months. And then they digitized the service. But what that basically meant was that most people couldn’t fill the forms online. And so people would go to the office. But now there was a little kiosk outside where there was a man with a computer who would then fill it in for you. And I was like, yeah, it’s a cost-cutting measure. And it’s all these other things. But it’s also like, have we thought about whether people have the access and know how to fill online forms, et cetera, et cetera. So yeah, it’s interesting in how you bring people in the design of these things and thinking about those issues. But yeah, I’ve really enjoyed this conversation. And I’ve hogged most of the questions. So I would love to open it to the floor if there are any questions for five of our amazing panelists. Or if you have stories to share.

Audience:
Hi, I’m Ziske from the Wikimedia Foundation. Thank you so much for a really wonderful and engaging discussion. I would love to hear other people’s answers to one of the questions that you asked, Nima, which is I think it was about funding. I forget exactly what the phrasing was, so maybe you’ll meet the honor of re-asking it. But I’d really like to know from Bill’s perspective, particularly because you’re also in R&D, how you see funding working. Thank you so much. I’ll just reframe the question for you, Bill. The question was, how do you fund the infrastructure and maintenance required for a public interest internet?

Nima Iyer:
Please go ahead.

Bill Thompson:
It’s a good question. Obviously, I speak from the BBC in the UK, so my obvious answer is you make everybody pay for it by forcing them by law to give you money to cover the public infrastructure that you acquire through the television license that we have in the UK. It’s a sort of a frivolous answer, but it also actually has some serious intent behind it, which is that you don’t get good public infrastructure for free. The danger of having state funding media is, of course, that you then have state-controlled media, and that’s a very dangerous thing to have, and so you want to avoid it. But it feels to me that a society that wants an internet that can deliver public value should be able to invest in it and not require it to be self-sustaining on a commercial model. So I would much rather that we looked for a design and a set of functions that we wanted, that we believed could be, and we were using the term good fairly loosely earlier, so I’ll carry on using it loosely, that was good for society, and then find a way for paying for it that does not require compromise. And from my mind, if what you’re covering is the sort of core internet infrastructure, it’s just moving the bits around, and you can get some guarantees from governments not to interfere too much, then a degree of state funding is acceptable because what you’re paying for is the underlying network in the way that you’re paying for the roads, or you’re paying for water services and things like that. You’re paying for the infrastructure of a society in order to allow civic society to flourish on top of it. So I’d much rather that sort of model than rely on, say, philanthropy, or rely on private companies being able to do something commercial on there but to stay good, because I think that sort of thing goes wrong. So I’m reasonably sort of firm in my own mind that paying for public infrastructure is a reasonable thing to ask a society to do. The problem is we don’t yet know what we’d want to be paid for or, indeed, how much it would cost. I hope that’s helped.

Nima Iyer:
Thank you so much for that, Bill. We have… Please go ahead, either or.

Audience:
Hi, I’m Ivan Sigal from Global Voices. I have a question about mobile technology. In much of the global majority, internet access is through telecoms. And as we’re talking about the internet, we should also not neglect that question. I’m curious, given that in many countries in the global majority, Facebook is de facto the internet, given its access point and often its free offerings, how we reconcile the desire for a public interest internet in many global majority countries with the fact that most of the energy effort and resources is coming through telecoms, which is a different technology architecture. Thank you.

Nima Iyer:
Do you have someone you would like that question to go to in particular?

Audience:
Not really, though. Maybe UNESCO, that would be an interesting one.

Nima Iyer:
Okay, let’s go with that. Anna-Christina.

Anna Christina:
Well, it’s a difficult question, but I actually was looking to a person that just step up because it’s a very good sample of what you’re mentioning. They started creating in Mexico, in Central America and Latin America, community networks with the community. They start building those networks. They actually work with UNESCO in a process of creating public policy, and that’s the one that I was referring to, to promote indigenous expression and cultural content from all of the process of creating community networks, but then engaging communities, indigenous communities in broadcasting, but also in generating internet content and having the possibility to create media and information literacy processes. But what I think is that, and what I’ve learned, it’s that we need to learn also, and Bill just mentioned, from other expression, of other experiences that have faced the same, kind of the same struggle, acknowledging that we have differentiated approaches when it comes to the internet, the scale, the way it functions, et cetera. I don’t have a specific answer of how sustainability would come into place, but I think if we are talking about multi-stakeholder, it’s a word that comes all the time, all the way through. We also need to take into consideration that funding public interest technology comes to the responsibility of all of the actors that participate and engage in this process. So it’s, yes, a responsibility from the governments, and I totally agree with Bill. We cannot rely on governments because then it can become co-opted, but there’s part of responsibility of governments, there’s parts of responsibility of private sectors, there’s part of responsibility of the users, and the people that engage in this, and so we need to define and create balances where these come from. I urge you to talk to Redes because I really think that they have come up with a good idea of how to deal with this, acknowledging that the scale might not be enormous, but the change would be very, very, very, very good. Yeah, so that would be my take.

Mallory Knodel:
Yeah, I’ll add on. I might actually connect it a step back because I think one of the things about the work you’re doing at UNESCO to help with content moderation, and then this ties into sort of Wikipedia’s woes around having to actually meet then those standards that are really designed for big tech. So just connecting those dots, I think for Wikimedia, maybe other public interest platforms, that element of regulation really isn’t helpful. In fact, I think it can be really counterproductive because ultimately all of this, even if it’s multi-stakeholder, all of this effort is going into making big corporate platforms better, and maybe they’re just not good, and maybe we shouldn’t be using them because they’re not awesome, and if we had more platforms, more choice, we would eventually just migrate off of them. But why it’s important to consider these larger platforms is they will end up being the only thing that’s in place in a lot of places that don’t have a robust local economy or the ability to create these alternatives. So we can’t neglect really big multinational corporate tech platforms because they are big, and a lot of people use them. A lot of places don’t have the ability to completely modify the market or the landscape that they’re working in. And so I just wanted to acknowledge that it’s both, right? It’s not either or. Like, we have to do all the things, but I do wanna just lift up the fact that a lot of this regulation, I feel like there should be something called the Wikipedia test or something, right? It’s like, if your regulation is making it hard for Wikipedia, your regulation is not great. So, I mean, if anything, we should be asking a lot of questions of you all. Like, how do you do content moderation of disinformation at scale? We know you’re doing it. Teach us how, right? And everyone else should be learning from it. That’s not currently what’s happening, and I have a lot of sympathy for that because the two are not equal, right? And so that nuance gets lost. And ultimately, yeah, if a platform just is not working and there’s a better one out there, thinking about social media and activity pub-based platforms like Mastodon and other ones, let’s let the bad one die. Let’s use the better one that has better content moderation that fosters community better. But that’s a sort of long-term solution, and it’s going to be unevenly applied around the world, so.

Rachel Judistari:
Thank you so much for advocating for us. I think I need to just copy-paste what you’re saying. Yeah, I think, yeah, on top of that, I think what we are really trying to say to the policymaker is to have exceptions for private and public interest platforms like Wikipedia, but also internally we understand one of the major hindrance is lack of understanding about decentralized community-based content moderations, especially in the global majority country. For example, in Asia, the issues of internet regulation is considerably new in privacy protections. So the default response is using fear-based approach to quote-unquote control it so that it doesn’t create a public chaos or whatever based on the assumption. So I think one of our main responsibility as public interest platform is to educate the lawmakers with our community about the diversity of internet ecosystem and also alternatives content moderations tactics because there are different models. So, yeah, and hopefully we can have more allies to do that and ensuring that communities are actively participating in that effort.

Nima Iyer:
Wonderful, thank you all so much. Does that answer your question? All right, we have a question back there.

Audience:
Thank you very much. I think very great panel. Thanks for sharing all the information. So this is Nazmul Ahsan from Bangladesh. I work with ActionAid Bangladesh and particularly with the young people. I’m very much interested in terms of how young people are being engaged in the internet and also cyber spaces. So the internet, it’s not only globalized the world, but it was also centralized the whole process. This is a big challenging. I think this is the anti-democratic kind of movement and process that we somehow, we all are in this kind of process. So in our context, we see there is a huge digital divide, particularly we see young men and young women, particularly in the grassroots. They don’t have access to the infrastructure at the same times in the content. You already mentioned about these languages and the other aspects of the contents. And we see also stigma. Sometimes internet, using internet is being stigmatized by the patriarchal interventions in the society. So when young girl and women are using internet, probably the society don’t see this look good kind of thing. This is going in a different direction or challenging the social norms and we have had this kind of things. My interest is that since I work with young people, how can actually we make and make more grassroots young people, young women, under this kind of digital literacy network and bring up particularly with this content generation and also make them as a kind of active internet activist and for the social good. So this could be something actually would be really helpful for me. I think it goes directly to the Wikipedia at once, but you can also respond UNESCO. Thank you very much.

Rachel Judistari:
Thank you so much for your questions. I think engagement of young people has become one of our focus these days because as I shared earlier, majority of our editors are from the global north and coming from specific age group that are not young. So what we are currently trying to do is to work with community of editors to provide trainings, not only on how to use Wikimedia projects, but overall digital literacy that are contextual and culturally appropriate according to the needs of different young people because as we all know, young people is also a diverse constituencies. So some of the example is our projects in Cambodia where we provide capacity building and tools for young indigenous people to create content and also video for preserving their culture. In addition to that, we are also working collaboratively with government. For example, in Indonesia, we are collaborating with the Minister of IT to create a cyber kreasi, which is a national digital literacy education for a community of young people in school, but also in the community that have various needs. So it’s definitely a work in progress and we are hoping to have more collaborations with youth-led organizations to make sure that we are still relevant in that case. I hope that answers your question and thank you for your questions.

Nima Iyer:
So Vidya would also like to give a response to this question. Vidya, please go ahead.

Widia Listiawulan:
Thank you, Nima. Thank you for the questions. So for us, Traveloka, as a tech travel company, youth is part of the core of our ecosystem. And then we divide it into two things when we talk about young people and youth. Number one, actually, our talent pool, most of them are young people. We recruit the best talent in Indonesia for Indonesia market and for other areas as well. But on top of that, your point, your question was, how can young people work together and then create such an impact for community, for their own community? Now in Indonesia, if you are familiar with the geography of Indonesia, we have more than 500 villages all over Indonesia. And working with the Ministry of Tourism and Creative Economy, not only that we provide a digital literacy for young people in those area, but we empower them, we encourage them to help their community to build a new tourism destination. And using our platform, we promote those tourism destination using their language, using their analysis, using their assessment on that tourism area. So in a way, we empower them to be proactive in looking what is the potential of their tourism destination and voices their assessment toward the neighborhood. So that’s how we empower the young people across Indonesia, but not only in Indonesia as well. We work with young people in Vietnam. We work with RMIT, Royal Melbourne Institute of Technology in Vietnam to empower young people to work with them in providing digital literacy for young people, for disability community, as well as for women-led business. So I hope I answered your question. Thank you.

Nima Iyer:
Thank you so much, Widia. All right, I’m going to start wrapping down the panel. And I feel like this has been a great, obviously, this has been a very great conversation, but I feel like I’m leaving with more questions after this discussion. And some of these questions that I have are, how do you design public spaces or public goods? I feel like we’re a bit locked in with the designs that we have at present. How do we get out of that? How do we think about what platforms could look like? Who do you engage in those discussions? How do you build it and how do you make people come, right? So just building it doesn’t mean people will come. Mallory, I know that you did say we would just move, but I remember when the WhatsApp signal thing happened and then we pretended to move and then we didn’t, a lot of us didn’t really move. We just went back to WhatsApp. I think we get stuck using platforms because you’re like, I’ve already used it for 15 years. Like, I mean, I know it sucks, but I’m not, you know. But I would love to see a new form of design. And my question is, yeah, how do we design that? The other question I have is related to like, how do we have these conversations with lawmakers? Like, as civil society, I can see that we are annoying to governments, you know. If we first approach governments and said, we want data privacy, and then we come back and we’re like, but not like that. You know, not for those people, but for these like, yeah. I can also see from the government’s point of view that it’s difficult to legislate for different people. So how do we have these conversations in a constructive way? How do we encourage people to build public goods in a world where we’re very money and monetization driven? How do we get back to that culture of volunteering and maintaining open source and that kind of stuff? How do we encourage? Yeah, how do we, I mean, yeah, there was conversation about encouraging young people, but just in general, like, how do we get more people to give their knowledge to Wikipedia? You know, why is it that group of people? It’s amazing work that, you know, that they do give that information, but what is it about that group that makes them give the information versus other groups? And then the big question, how do we fund the infrastructure? I really like Bill’s point about how to think about it like a public service, like sanitation or water or any of those issues, like we need media. We need spaces as a public service, physical and digital. And then my biggest question is where do we take the conversation from here? So yes, it’s nice to have this conversation, but what’s next? How do we actually answer these questions? So we only have about five minutes left. And I would just love to hear from each speaker if you could really, really just keep it to one minute of a parting message to us of what’s next. So I’ll just go in the same order that we started and let’s start with Mallory. One minute.

Mallory Knodel:
All right, challenge accepted. The, so about leaving or moving, I just wanna say that I don’t think it’s always about, have we successfully moved off of, or have we killed it? It’s the threat that we can that’s really important. So while like, yeah, maybe we’re all still using WhatsApp, but now we’re maybe using both, or at least it started a conversation. And it proves that users are paying attention. Who knew people were reading the terms and service of WhatsApp so closely that they could basically, they had a red line in their mind. And they’re like, they changed the sentence, and I’m furious about it. That was a really impressive moment to me, because it demonstrated that people care. And that’s just as important as people now don’t use it anymore, they move to something else. So and to that point, I think we have to stop thinking, I’ve said this already once, we’re not replacing anything. We’re actually just moving into this incredibly complicated landscape where we’re downloading apps all the time, we’re trying out new things. I mean, at least a lot of us are, right? There’s more and more and more. Nothing’s really dying anymore, right? So I would think that what’s going to be important moving forward is integration. And this is not exactly interoperability, but it does, I think, implicate things that Bill was saying about standards. If your app or your new thing integrates with all the other ones, that’s actually an asset, and it’s a feature, and users are going to come to expect it. And that’s really, really good for competition, and it’s really good for end users. So I think that if you’re building something new, or if you’re an ossified old social media platform that’s been around for too long, if you don’t start integrating or creating those features, people are not going to like you as much.

Nima Iyer:
Thank you so much. I’ll stay with the people here. So Anna-Christina, if you could go next. One minute, please.

Anna Christina:
Yeah, I was thinking of the government question because I have heard within the consultation different views from one side. Well, governments are not all the same, and civil society is not all the same, and companies are not all the same. So everyone has their own opinion and their own comment. But my thinking in all this process of consultation on the guidelines is the most important part is to build and to maintain, to resist the process. Because what happens, as I said, is that we’re very used to think as the regulation as the ultimate goal for the good and for the bad. And we don’t see, and it’s difficult even for us to understand what is our role in the process of implementation, reviewing, monitoring, evaluation, et cetera. So I think, and I have to say this, in the question that we made specifically within the consultation, what a multi-stakeholder role looks like in all of the stages of the regulatory process, this was the less responded question. Even though this word is a word that, along with the GNI, is the most used in this forum. So I think it is very important to identify not like when we’re dealing with the governments, what is also our role after regulation happens in dealing with the people that is engaged in the regulatory process, and then afterwards in the evaluation of these regulations. Because if not, then the regulatory cycle has a breach. And it becomes, like we were saying, and this is just to end, you can have the best law, you can have the best standard. But in an authoritarian regime, this can be misused. And the only way to target it, to fight it, is with resilience, with capacity building, with a strong civil society that is advocating for change. So I think this is important.

Nima Iyer:
Thank you so much. Rachel, one minute.

Rachel Judistari:
I think it’s really important to back to BASIC and really promoting the internet of commons for public interest, and also reminding policymakers about the diversity of internet ecosystem and providing exceptions to protect public interest platform. While at the same time, public interest platforms, including Wikipedia, has to ensure the diversity of communities that contributes in the creations of knowledge so that it will be positive for our sustainability and also the diversity of internet itself. And I’ll stop because time is up. Thank you.

Nima Iyer:
Can we have two more minutes? Oh, it’s like really time is up. OK, two minutes. Bill, please go ahead. One minute.

Bill Thompson:
Very, very briefly, I think we need to accept that the model of an internet based on the technical decisions made by a bunch of overly optimistic, mostly men, in the 70s, 80s, and 90s, based in North America and Europe, has failed us. And we need a different approach. The answer is about co-creation. It’s about bringing communities of interest together to decide what’s important to them and to work on that basis to look at what we actually really need from the internet to build and sustain civic society. And so what I look forward to is actually revisiting some of those core assumptions and working together. Thank you.

Nima Iyer:
Thank you so much, Bill. And lastly, Vidya, please go ahead. One minute.

Widia Listiawulan:
All right, it will be quick. So again, like what Bill said, the last part, collaboration, public-private partnership, regulation need to open a discussion for private sector to raise concern and for user, for society to raise concern. But on the other hand, company has the responsibility to ensure that we will focus on customer needs, not only providing service, but what is important for society. And digital literacy that is not only focuses on how people actually use technology, but also to ensure that people know their rights when they use technology. So twofold here, the regulation, the corporate sector working together with the ecosystem, and people need to know their rights. People need to be educated on how to use technology in a very responsible way. Thank you, Nima. Thank you.

Nima Iyer:
Thank you so much, Vidya. That’s such a good note to end on. Thank you so much to all our amazing panelists. Thank you to everyone for joining us. This has been a really great conversation. And I wish you a wonderful rest of the IGF. Thank you so much.

Bill Thompson:
Thanks for having us.

Audience:
Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you very much. Thank you.

Bill Thompson

Speech speed

206 words per minute

Speech length

1992 words

Speech time

580 secs

Anna Christina

Speech speed

155 words per minute

Speech length

2262 words

Speech time

876 secs

Audience

Speech speed

186 words per minute

Speech length

624 words

Speech time

201 secs

Mallory Knodel

Speech speed

203 words per minute

Speech length

2665 words

Speech time

788 secs

Nima Iyer

Speech speed

197 words per minute

Speech length

3797 words

Speech time

1159 secs

Rachel Judistari

Speech speed

121 words per minute

Speech length

1770 words

Speech time

876 secs

Widia Listiawulan

Speech speed

157 words per minute

Speech length

1286 words

Speech time

492 secs

Defence against the DarkWeb Arts: Youth Perspective | IGF 2023 WS #72

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Speaker

The dark web and internet have various purposes beyond criminal activities, making them tools rather than the enemy. Machine learning, AI, improved encryption, blockchain, and advanced data analysis can assist in combating dark web crimes. Focus should be on technology and software companies rather than user identification. Mitigating the abuse of power in the fight against crime involves forming specialized cybercrime agencies and collaborating with academia. Mandatory cybersecurity education is necessary for all involved in handling data. Whistleblowing mechanisms should be encouraged. Law enforcement and politicians often lack understanding of the internet’s working, necessitating increased awareness. Diverse hiring can aid in understanding software misuse. A software registry and due diligence are crucial in identifying and preventing software misuse. These measures contribute to creating a safer online environment.

Maria Lipińska

During the discussion, the potential positive use cases of the dark web were explored, shedding light on how it might impact the future of online privacy and security. The speakers acknowledged the dark web’s negative reputation but emphasised that there are aspects of it that can be harnessed for beneficial purposes.

One of the main points raised was that the dark web enables anonymous communication and the exchange of information. This can be advantageous for individuals living in repressive regimes or facing persecution, allowing them to freely express themselves and access uncensored content. Moreover, whistleblowers and journalists can use the dark web to protect their sources and share sensitive information securely.

Furthermore, the dark web can facilitate the sale of legal goods and services. For example, it serves as a platform for anonymous online marketplaces where individuals can purchase legal products, such as books or art, without leaving a digital trail. The anonymity provided by the dark web can also empower activists and dissidents in countries where their activities might be monitored or suppressed.

In terms of online privacy and security, the dark web can act as a catalyst for innovation. The constant battle between criminals and law enforcement agencies pushes the development of advanced encryption techniques and cybersecurity measures. As a result, lessons learned from tackling the challenges presented by the dark web can be applied to enhance overall online privacy and security.

It is worth noting that the positives discussed should not overshadow the illegal and unethical activities that are prevalent on the dark web. Criminal networks, such as drug trafficking and illegal marketplaces, make up a significant portion of dark web activity. However, it is essential to consider the potential positive aspects and explore how they can be used responsibly.

In conclusion, the potential positive use cases of the dark web were evaluated, highlighting its impact on online privacy and security. While acknowledging its negative reputation, the discussion shed light on the anonymity and freedom of expression it offers individuals living in repressive regimes. Additionally, the dark web’s role in facilitating legal transactions and driving innovation in cybersecurity was recognized. Nonetheless, it is crucial to address the illegal activities on the dark web and ensure that any exploration of its positive side is done responsibly and ethically.

Izaan Khan

The analysis suggests that the dark web can offer benefits to certain individuals by providing anonymisation services. This can be particularly useful for individuals who require a high level of privacy and restricted access to a tightly knit community. Anonymity on the dark web can be critical for use cases such as journalists researching or communicating under extreme conditions, as well as for organising protests. Overall, the sentiment towards the dark web is positive, emphasising its potential advantages.

Furthermore, the analysis acknowledges that law enforcement agencies have achieved successful outcomes in cases involving cybercrimes on the dark web, citing notable examples like Silk Road and AlphaBay. However, it argues that eradicating privacy-enhancing technology, such as the dark web, is not necessary to combat cybercrime effectively. Instead, alternative strategies such as open source intelligence, infiltration, and hacking techniques can be employed to counter cybercrime without compromising privacy rights. The sentiment towards this argument is neutral.

The report also highlights the importance of people’s ability to protect their online privacy using technologies like the dark web. It advocates for a principles-based approach that balances the need for anonymity against other legitimate uses of anonymising technologies. This sentiment is positive, reflecting the belief that individuals should have the right to safeguard their privacy online.

Regarding regulation, the analysis suggests that regulations should be defined within the context of cybercrime. Existing regulations, including basic criminal law, already exist. However, it is noted that enforcement often involves a constant arms race between authorities and cybercriminals. The sentiment towards regulation is neutral, emphasising the need for a careful and nuanced approach.

It is also highlighted that technological solutions alone are inadequate in combating cybercrime. The dynamic nature of cybercrime requires innovative solutions that go beyond technology. Additionally, adopting more pragmatic approaches to regulation, such as controlling information flows and data retention, is seen as potentially beneficial.

The importance of trust in institutions within the complex regulatory environment is emphasised. It is believed that trust is crucial for navigating the challenges posed by emerging technologies and evolving regulatory frameworks.

The analysis further emphasises the significance of international cooperation and capacity building in effectively combating cybercrime. It notes that a lack of understanding of technology can hinder policy outcomes and enforcement efforts. Existing international cooperation organisations, such as Europol and Interpol, are highlighted as essential in the fight against cybercrime.

Additionally, the analysis raises the concern that tension between governments and encryption services will intensify. Governments may seek to undermine encryption for backdoor access, potentially restricting the privacy and security provided by these services. This development is viewed negatively, suggesting potential conflicts between privacy protections and government surveillance.

Furthermore, the report anticipates changes in the landscape of internet usage due to technological advancements and government regulations. It suggests that the emergence of new anonymisation services and government attempts to undermine encryption could reshape the way people use the internet.

In conclusion, the analysis highlights the benefits of the dark web in providing anonymisation services to individuals who require heightened privacy. It emphasises that eradicating privacy-enhancing technology is not necessary to combat cybercrime effectively. Instead, a principles-based approach that balances anonymity and other legitimate uses of technology is advocated. The report also emphasises the need for pragmatic regulation, international cooperation, and trust in institutions to address the challenges posed by evolving technology and cybercrime.

Pedro de Perdigão Lana

The discussion revolves around various aspects of internet governance, the dark web, and intellectual property. One argument highlights the importance of intellectual property in the context of internet governance. It is stated that intellectual property was among the first and most important discussions among civil society, the private sector, and the government in relation to internet governance. However, another argument challenges the dark web’s notorious reputation for intellectual property infringement. It argues that the portrayal of the dark web as a hub for criminal activity, particularly intellectual property crimes, can be misleading. The argument suggests that the dark web and deep web are not exclusively used for illegal activities, but are also repositories for various types of files, including copyrighted content.

Furthermore, the discussion explores the negative consequences of fear-driven policies and rigid copyright systems that have emerged due to concerns about the dark web. It is argued that several society reforms have been implemented based on the idea that piracy, including intellectual property infringement, is a widespread problem. These fear-driven policies may have inadvertently created obstacles to the very objectives they aim to promote.

The need for purposeful and careful regulation of the dark web is emphasized. While acknowledging the potential dangers associated with the dark web, the argument highlights that regulating it should take into account its positive uses, such as communication in environments where freedom of expression is restricted. It is suggested that regulation should be purposeful, avoiding undue restrictions on legitimate uses and considering the underlying reasons for regulation.

Additionally, the discussion examines the ethics and inequalities associated with academic documentation. It is noted that some academic ecosystems are unjust towards poorer countries, and publicly funded scientific publications charge high fees for access. This situation raises questions about the ethics of sharing academic documentation and the role of copyright in academia.

Furthermore, there is criticism directed towards the science publishing industry for charging exorbitant access fees despite being sustained by public funding. The argument highlights that the industry charges thousands of dollars for access to scientific publications, which creates barriers to knowledge dissemination and exacerbates economic inequalities.

In conclusion, the discussion revolves around the complexities and nuances of internet governance, the dark web, and intellectual property. It emphasizes the need for careful consideration when regulating the dark web, taking into account its positive uses. The discussion also raises important questions about the ethics and inequalities associated with academic documentation, as well as the practices of the science publishing industry. By critically examining these issues, it is hoped that a more balanced and effective approach to governance and regulation can be achieved.

Pavel Zoneff

The Tor software is a powerful tool used by millions of individuals worldwide to securely access the internet while protecting their right to privacy and information. It aids users in circumventing censorship and browsing the internet freely without facing restrictions imposed by governments or other entities.

It is important to note that only a small fraction of the traffic on the Tor network is directed to onion services, which are confined exclusively to the Tor network. This suggests that while censorship circumvention is a significant use case for Tor, it is not its sole purpose.

However, there is notable criticism levelled against privacy-preserving technologies such as Tor, Signal, and encryption platforms. Some individuals or entities misinterpret encryption as being associated with nefarious intentions, leading to unjust criticisms of these technologies. This misconception can result in policymakers lacking a comprehensive understanding of how privacy-preserving technology works.

As a consequence, governing laws are sometimes enacted that roll back international standards related to human rights, freedom of expression, and access to information. This situation is concerning, as it indicates a lack of education and awareness among policymakers about the importance of privacy and its relationship to fundamental human rights.

To counter this negative perception, it is crucial for proponents of privacy-preserving technology to engage in robust advocacy efforts. There is a need to raise awareness and educate policymakers about the benefits and importance of these technologies, as well as to dispel any misconceptions or unfounded fears surrounding their usage. By doing so, it may be possible to protect and preserve fundamental human rights in the digital age.

Overall, the Tor software plays a pivotal role in safeguarding internet users’ privacy and right to information. However, the criticism and lack of understanding around privacy-preserving technologies highlight the need for continued efforts to advocate for their importance and counter any unfounded narratives surrounding their usage.

Alina Ustinova

In this series of articles, Alina Ustinova delves into the controversial topic of the ‘dark web’ and aims to shed light on its implications, fears, and potential benefits. Ustinova, as the president of the Centre for Global IT Cooperation and the organiser of the Russian IJF and Youth Russian IJF, is well-positioned to explore this subject and provide valuable insights.

In her exploration, Ustinova acknowledges the widespread misunderstanding around the term ‘dark web’ and its incorrect association with negative activities. She seeks to clarify the misconceptions that people have by differentiating the dark web from the deep web, emphasizing their distinct characteristics. By doing so, she hopes to dispel the misconceptions and provide a clearer understanding of the dark web.

Ustinova also emphasises that the dark web potentially holds benefits beyond its negative connotations. She aims to uncover these potential benefits and challenges the prevailing notion that the dark web is purely a hub of illicit activities. By exploring the possibilities, Ustinova opens the door to a more nuanced understanding of the dark web and its potential uses.

On a different note, research indicates that young people, mainly millennials, exhibit bad habits in cybersecurity. It is observed that many youths are drawn to the dark web out of fascination for forbidden things and as a form of protest against the system. This insight highlights the complex motivations behind young people’s engagement with the dark web, indicating a deeper societal issue that needs to be addressed.

Additionally, the rise of Generation Alpha, growing up in the digital age, has led to their inherent reliance on the internet. Ustinova highlights that Generation Alpha, exposed to internet devices at a young age, considers the internet as a beneficial tool that is essential for various aspects of life. This has significant implications for education and the development of digital literacy skills.

In conclusion, Ustinova’s exploration of the dark web sheds light on its implications, fears, and potential benefits. By clarifying misconceptions and differentiating the dark web from the deep web, she offers a more comprehensive understanding of this often-misunderstood realm of the internet. The insights gained from Ustinova’s analysis also highlight the complex motivations behind young people’s engagement with the dark web and underline the importance of digital literacy skills in the modern age.

Abraham Fiifi Selby

The dark web, a part of the internet accessed through special software, presents a range of risks and benefits for users. It contains websites that are not indexed by traditional search engines, making it a haven for illegal activities such as the sale of drugs, stolen data, and hacking tools. However, it is important to note that not all aspects of the dark web are associated with criminal activity.

One argument suggests that using the dark web can be dangerous for ordinary users, unless used properly. This is due to the specific risks involved, including exposure to malware, scams, and illegal activities. Dark web tools are not encrypted and can be monitored by third parties, potentially compromising user privacy and security. Nevertheless, another viewpoint asserts that the dark web can also be used for legal and meaningful purposes if used correctly. In fact, the dark web is not exclusively for criminal activity and, when utilized wisely, it can actually provide protection for users. It is essential for ordinary users to learn how to navigate the dark web safely in order to avoid these risks and experience the potential benefits it offers.

Regulating the dark web is seen as a complex task for law enforcement agencies. While there is a need to investigate and prosecute organizations that engage in criminal activities on the dark web, developing effective regulations is challenging. The dark web operates on an anonymous network that is difficult to trace, requiring specialized strategies and tools to combat illegal activities. However, governments and law enforcement agencies are taking steps towards regulating the dark web. For instance, the FBI shut down the Silk Road, one of the largest dark web marketplaces, in 2013. In 2020, the UK government announced plans to introduce new legislation aimed at giving enforcement more powers to investigate and prosecute dark web crimes.

Education and awareness are highlighted as key elements in safely utilizing the dark web. As users are often unfamiliar with how to navigate the dark web safely, there is a need to provide education and raise awareness about the risks and best practices. Understanding the nature of the dark web is crucial in order to detect and mitigate potential threats. Creating awareness about the dark web can help users make informed decisions and protect themselves from the dangers associated with it.

Despite its association with criminal activities, the dark web can also be utilized for good purposes. People can leverage the anonymity and privacy provided by the dark web to conduct research and share information about sensitive topics without fear of censorship or surveillance. This highlights the potential for the dark web as a platform for positive contributions to society.

In conclusion, the dark web presents a complex landscape with both risks and benefits. It is important for users to understand the dangers involved and learn how to navigate it safely. Regulating the dark web is a challenging task, but necessary to combat criminal activities. Education and awareness play an important role in safely utilizing the dark web, while also recognizing its potential for positive usage. By promoting responsible usage and implementing effective regulations, society can better harness the potential benefits of the dark web while minimizing its risks.

Audience

The analysis highlights several important points raised by the speakers. One speaker discussed the challenge of identifying cyber crime and used the analogy of a thief breaking into a house to illustrate the complexity involved. The speaker’s sentiment towards this challenge was negative, indicating the difficulty of understanding cyber crime.

Another speaker emphasized the need for consistency in global internet usage and regulation. They stressed the importance of establishing a common ground for internet governance and highlighted the different approaches taken by countries like China and Russia. The speaker’s sentiment towards this topic was positive, suggesting the necessity of a consistent approach.

A concern was expressed for marginalized communities in the context of internet governance. The speaker acknowledged that these communities often lag behind in internet access and usage, potentially exacerbating existing inequalities. The sentiment expressed towards this issue was one of concern, demonstrating an understanding of the potential marginalization.

Furthermore, research findings revealed that millennials tend to have poorer cyber security habits compared to older generations. This observation underscores the need for increased awareness and education on cyber security, particularly targeting younger individuals.

Lastly, there was a discussion on the future landscape of dark web activities, focusing on the perspective of youth. Although specific supporting facts were not provided, the analysis indicates an interest in understanding the potential evolution of dark web activities among young people.

In summary, the analysis provides valuable insights into cyber security, internet regulation, and their impact on marginalized communities. It underscores the challenges in identifying cyber crime, the importance of consistent global internet governance, and the need for improved cyber security habits among younger generations. Additionally, it recognizes concerns for marginalized communities and the urgent need for inclusive and equitable internet governance. The analysis also raises questions about the future landscape of dark web activities, particularly from a youth perspective.

Miloš Jovanović

The internet is a vast space that contains a wealth of resources, some of which are not easily accessible through conventional search engines. These resources are found in the deep web, which is the part of the internet that is unindexed by search engines like Google. The deep web contains content that is not readily available to the general public, making it a mysterious and intriguing realm.

However, there is often confusion between the deep web and the dark web. The dark web is a subset of the deep web, specifically associated with negative and illegal activities. It is a place where individuals can engage in illicit behavior, such as buying weapons or drugs. It is crucial to differentiate between the two and not solely associate them with negativity.

The deep web and the dark web share the common characteristic of housing unindexed resources on the internet. The dark web, however, is just a portion of the overall deep web. It is essential to clarify this distinction to avoid misunderstanding.

While the dark web and the deep web are often viewed as havens of illegal activities, it is crucial to note that illegal behaviors and cybercrime are not exclusive to these parts of the internet. Negative behaviors and cybercrime can occur on publicly available resources like social networks as well. Therefore, it is essential to approach discussions of online security and criminality with a broader perspective that considers the entire internet landscape.

Protecting one’s metadata is also a significant concern for individuals who value privacy and security. Techniques like using the Tor Browser or the onion protocol can help hide metadata, ensuring greater anonymity online.

The responsibility for controlling internet information channels lies with national governments. Geopolitical circumstances have resulted in a fragmentation process on the internet, with different countries seeking control over internet governance. Protecting infrastructure and citizens from cybercrime necessitates traffic control and monitoring.

Investing in technological sovereignty is crucial for nations to have control over their internet space. This involves developing strong agencies and institutions to protect national interests and enacting strict laws regarding data storage and usage. By doing so, countries can ensure they have the means to safeguard their digital infrastructure and maintain control in the ever-evolving technological landscape.

Regulating the dark web or the deep web exclusively is not feasible since they are integral parts of the entire network. Instead, efforts should be focused on regulating the internet as a whole to combat illegal activities effectively.

While technology such as TOR and VPNs can provide some level of data protection, they may not guarantee absolute privacy. It is essential for users to understand the limitations of these technologies and exercise caution when sharing sensitive information online.

Accessing services that are not available in one’s country may violate local laws. It is important for individuals to be aware of and respect the legal frameworks in their respective jurisdictions to avoid engaging in illegal activities.

The fight against cybercrime requires a multi-stakeholder approach, involving collaboration between security and intelligence agencies, governments, and other relevant parties. Current alliances and systems like Europol have made significant contributions but may not be sufficient to effectively combat cybercrime. Enhancing cooperation and communication among different parties is crucial to solving and understanding the complexities of cybercrime cases.

Overall, the deep web and the dark web are intriguing aspects of the internet that warrant further investigation and understanding. While they are often associated with negative and illegal activities, it is important to approach discussions with a balanced perspective that considers the wider internet landscape. By promoting awareness, improving regulation, and fostering international collaboration, we can work towards a safer and more secure online environment.

Session transcript

Alina Ustinova:
Hello, everyone, we’re going to start like now. So I hope everyone who wants to join, join us, and we’ll have a wonderful discussion. So my name is Alina. I am the president of the Center for Global IT Cooperation, where the organizer of Russian IJF and Youth Russian IJF. So and today we’ll discuss a wonderful topic, dark web. I will make a remark that we will call everything we discussed a dark web. But it’s not like the term that usually used to describe correctly what we’re going to talk about, but because it’s common knowledge, we will speak about it. So what we’re going to discuss today, and we’ll try to understand why people basically are afraid of dark web, and why maybe dark web is not so threatening as we think, actually. And in the end, we’ll try to answer the one question. So is it a cybercrime heaven or just another layer of the web, where our society can also find benefits? So we’re going to start with the basics, because people sometimes, they kind of mess with the terminology and think that dark web is actually something that only contains bad things. And they mess it up with the thing called deep web. So our first speaker, Milos. So can you please tell what is the difference between deep web and dark web?

Miloš Jovanović:
So thank you very much. Thanks for organizing this panel. It’s very interesting topic, because we should discuss about the dark web, deep web, and all challenges on the internet. But speaking about deep web, we should say that deep web is a part of internet, which is an index, speaking about conventional search engines like Google, like, you know, who we index and so on and so on. So if you understand how internet works, we see some resources on the internet, which is available, we can easily search. like on Google and so on and so on. But on another part, there are a lot of resources which is not available easily. So we should understand architecture of internet. We have domain name system, we have IP addresses and so on and so on. So if we see internet as a global network and I don’t want to go into fragmentation processes and so on and so on. If you look internet as a global available network in every part of power world, we should understand that there are a lot of resources which are available only via IP addresses. So there are some different aspects how we can control this, what’s behind this, what can we do accessing these resources. And this is really interesting. So speaking about dark web and deep web in our, I would say community, there are a lot of confusions and misunderstanding what is dark web, what is deep web. Many people would say that dark web and deep web are same concepts and speaking about terminology and so on and so on. But I would agree with this. So speaking about dark web, many people think that when we speak about dark web, we generally speak about some bad behaviors, buying some weapons, drugs, and so on and so on. But on another hand, we should underline that they are very similar approach when we speak about dark web and deep web that I would say that this is all about unindexed sources on the internet. So we can do bad things regularly when we visit some other publicly available resources, speaking about Facebook, about social networks, about all other resources which we use every day. So it’s not only when we speak about dark web that this is a bad behavior, speaking about some illegal things and so on and so on. So we should understand how internet works. And I would conclude that if we compare dark web and deep web, that it’s all about unindexed resources on the internet.

Alina Ustinova:
Okay, thank you very much for your answer. And of course, yes, the main concern of all the people that dark web brings only cyber crimes that bring nothing wrong. So our next speaker, Fifi, she will be joining us online. Do you hear us? Yes, I can hear you. Okay, hi Fifi. So my question to you, so in terms of cyber security, why dark web too? for an ordinary user is considered dangerous. Okay, all right.

Abraham Fiifi Selby:
Thank you for this. And as my colleague explained the dark web, it could be, we have the good side and the bad side of it, not only for the criminal aspect of it. But let me address this. Whenever we are using dark web tools, as you’re saying in terms of cybersecurity to the ordinary user, dark web can be very dangerous because we see that users are not familiar with how to use them safely. So how to use the dark webs can be also dangerous. If you are able to use it safely because people use it and they use it for criminal activities and other stuff, but there are some specific risks that it can be seen and involved when we are talking about dark web. One, it could be the malware aspect of it, that is aspect of the cybersecurity whereby there are some distributed malwares on the internet which contain the viruses that throw down around somewhere people also use. We also have scams because using the dark web, there’s a lot of scam because people use it for illegal activities. So we have people that they are being scammed like phishing and scam attacks, phishing meals, and some other phishing as that people may be targeted to the ordinary user. And also let’s see the illegal activity. As my colleague was saying that people would be using it for some sexual aspect and child sexual abuse and materials and other stuff over the internet. And when you also move forward, we also see the aspects of some people who don’t have the capacity to learn how they can use it. This is in a sense that people who are using dark webs in conjunction using the dark web, using with the criminals online at the same time. So they may not be able to see how they can protect themselves. And these are the various aspect of it that it is very cyber-concerned because the ability for you to use it very well, wisely, can also help you protect you. But also, you might also know that dark web tools are not encrypted and they are not protected unlike the normal applications as well. Although people use it for normal deep web applications for general purposes, but they can also use for criminal activities. So the ability to use and also ability to protect yourself. And why is it for the ordinary user is that it is not highly secure and encrypted, whereby the dark web also, like you can be monitored anytime maybe with a third party organization or for criminal offenses investigation. So these are the various concerns that we’ve been raising for the ordinary user because they are very prone to other threats on the internet when they are using dark web because they think that they wanna browse private or they want to access information private. Thank you very much.

Alina Ustinova:
Thank you for input. So does anyone want to add? Miloš want to add some stuff?

Miloš Jovanović:
I just wanna clarify, when we speak about deep web, dark web and so on, we should understand that dark web is just a part of deep web. So speaking about deep web, as I mentioned before, it’s just a part of overall network of internet and it’s majority of course, but when we speak about deep web, when we speak about deep web, all unindexed resources on the internet. So when we speak about illegal things and cyber crime and everything, which is actually trending topic today, we should understand that it’s not only exclusively on the dark web or deep web or public resources, it’s available everywhere. So when we speak about dark web, we should understand that there are many techniques which gives you availability to hide your metadata and so on and so on, because Tor browser, onions protocol and the different techniques speaking about how to hide your, I would say metadata. Yeah, that’s actually metadata. So the main question speaking about privacy, about security is how to secure your own metadata in the concept of security. So we should not make misconfusion and misunderstanding. Dark web is a just part of deep web as we can consider all the resources on the internet, which are not indexed on search engines as a part of deep web, yeah. So, yes, this is the main concern and the main confusion.

Alina Ustinova:
You’re right. So I want to ask Izan, why actually like, we know that people think that dark web only criminal activity, like only people that use databases and steal them and load them there and use it like for some kind of bad behavior. But actually, is there something good in the dark web? What benefits can it bring to the people?

Izaan Khan:
Thanks, Alina. That’s a very interesting question. I feel that the dark web, basically just a bunch of hidden services that are made available, you know, through tools like Tor and so on, can provide benefits that any other piece of technology really that has those anonymizing features or pseudo-anonymizing features, shall we say, would provide to an individual who needs them. And there are many legitimate use cases for something like the dark web to have hidden services or services that only a few people from a tightly knit community can access. And those could be potentially journalists, could be individuals who are researching or communicating in situations of extreme censorship or duress, for example. There are numerous websites, for example, The New York Times, that have mirror websites on the dark web to allow individuals to be able to access that when that content is usually going to be censored from the clear net, as we call it. You know, digital activists as well have many, many different use cases for accessing these sorts of services and communicating. Organization of protest sometimes also happens on these dark net platforms. So I think there is a lot of interesting use cases for this kind of technology. But over and above that, I also feel that in general, people should have the ability to protect their privacy online and they should be able to use whatever services are at their disposal. And this is one of them. And of course, this gives rise to legitimate concerns on the other side of the coin, which we often see by law enforcement, which is that, well, how are we going to be able to tackle cybercrime online? Is all hope lost if we have totally anonymized services? And I would say no. We don’t necessarily have to throw out the baby with the bathwater, as it’s so called, and get rid of every single privacy enhancing technology simply because it makes law enforcement difficult. In fact, there have been many, many successful cases of law enforcement that have taken place in dark net contexts. We saw the shutdown of the Silk Road, and the second iteration of that, and other darknet markets like AlphaBay, where there were drugs and other sort of paraphernalia that were stolen being sold online. We’ve seen other tools by law enforcement, such as open source intelligence or infiltration to get rid of CSAM material on the darknet as well and apprehend those offenders. We’ve also seen basically other hacking techniques, like if there’s a misconfigured server on the darknet, they can take full advantage of that. They can run as well, their own middle relays and exit nodes and sniff content over there as well. So I think there are many different techniques that they can use to fight cybercrime online without having to get rid of that technology in the first place. As I mentioned, it’s always an arms race. If you have a removal of this technology, there’s going to be another technology to come and replace that. What we need is a principles-based approach to how we balance these issues of anonymity and other legitimate use cases for this kind of anonymizing technology, like free expression and so on. So that’s to serve my two cents on this.

Alina Ustinova:
Yes, thank you, Izan. And actually yesterday, we had a wonderful talk with the TOR project, yes. With the actually, I hope he will join us maybe today and express the position of the TOR project because they said like interesting statistics that only in services, it’s like where actually dark web pages exist, it’s like only one to 3% of the whole TOR browser traffic, which means that people access TOR browser specifically not to like do something bad, not to do some kind of criminal activity or to access even the dark web pages, but just to use it as a VPN service, for example, because it encrypts your surfing traffic, yes, surfing the web. So, but technology develops. We see that lots of things appear now and maybe with. technology it can affect both dark web tools and not only in the dark web itself. So my question to Gabriela will be is how do you think how actual emerging technologies in the future affect the whole dark web? Yes thank you so much and I just want to ratify the importance of what

Speaker:
was said before so the dark web and the internet in general is a tool it’s not the enemy and we’re fighting here the criminal organization so the crime on the internet in the dark web is the problem. So when it comes to emerging technologies they of course can play a significant role in the fight against again these crimes in the dark net. The first of all that is again very popular everywhere if you think of rec tech technologies is the machine learning and AI so these technologies can be for example employed to identify patterns and anomalies in dark net activities assessing law enforcement agencies in tracking illegal activities and identifying potential threats. So again you can just think of what’s happening in the banking sector right now you have the QAC softwares that are helping to understand the different money laundering techniques and patterns so this is something that can be reused eventually to seek some sort of criminal behavior and anomaly in the dark web as well. Then you have the improved encryption and cyber security aspect of it so we are talking about developing advanced encryption techniques and cyber security measures that can help protect sensitive data and prevent unauthorized access to dark net platforms. So here again many many different hacking attacks and attempts are of course very much I would say popular. There is a sort of a race between dark net marketplaces right now so if Silk Road 2.0 or Hydra or many others were shut down there was a sort of a competition between other dark net marketplaces that were taking that was taking place and it’s still ongoing. So they’re trying to undermine each other and here of course this is you know something that can be thought of in the future thinking of a solution in terms of improved encryption cyber security. Then you have the blockchain and this distributed ledger technology which again is very I would say popular it’s not a new technology but these can be used to create transparent and tamper-proof records making it more challenging for criminals to conduct transactions on the dark net without leaving digital footprints. Then the advanced data analysis which again is very popular I would say in the commercial internet if you will. So here again we’re talking about leveraging big data analytics which could help law enforcement agencies and of course other actors to undercover hidden connections, track financial flaws and identify individuals involved in criminal activities on the dark net. And of course the collaboration tools are the most important ones today so enhanced communication, collaboration tools can improve the coordination among everyone involved, and of course, work to combat darknet criminal networks altogether. So in Europe, we have the DAS directive a few years ago, which kind of revolutionized, if you will, the overall understanding of cybercrime. So European countries had to open a cybercrime unit within their organizations, which is very important. And this is exactly what I would generally advocate for in every single country to do. And so again, I would say do not restrict the personal opinions online, because again, we’re talking here about different civil liberties and what other speakers were telling about the importance of having the option to be private on the internet. And again, I would say just the focus on the biometric identification of users is, in my own opinion, the wrong direction. I’m seeing several countries trying to implement that type of tooling, but again, the identification of users is, in my opinion, a wrong focus. We should maybe focus on the technology. We should focus on the software companies, on the applications. How are they used? We should assess like having maybe a technical due diligence of the softwares and trying to actually stop them rather than or modify the use of software rather than focus on the users. Thank you, Gabriela.

Alina Ustinova:
I think that’s a wonderful input to our discussion. We actually, let’s say, we didn’t cover one topic, considering dark web, and it is actually the protection of intellectual property, because many of the people, we actually discussed it, of course, yesterday with your project, that they said that many people, of course, you store browser for downloading, like for pirating. This actually messes up with the whole system, the whole connection, because it’s very, very big files to download. I want to ask our online speaker, Pedro, a question. How does the usage of dark web tools and actually dark web affect protection of intellectual property?

Pedro de Perdigão Lana:
Hi, everyone. I would like to first of all greet everyone here. from Brazil. I hope everyone has arrived in this early morning after those amazing IHF nights. But to get back to the issue here, I would like especially to build up on Aizen comments. And I kind of like to use intellectual property as an example for everything. Every thematic that I work on, especially fragmentation sovereignty, intellectual property can be used as something in the middle of it. Because especially when we’re talking about internet governance, intellectual property is at the basis of it, right? So it was among the first most important discussions that we had, that we had amongst civil society, the private sector, the government. It kind of nowadays, it isn’t so much on the highlights, but it still comes up from time to time as something that kind of implements the debate once again. And on this occasion, I would like to use intellectual property infringement as a good example on how deep web and darknets can be weaponized, argumentatively weaponized, and how they are weaponized kind of erroneously as a presentation of the idea that they are something purely threatening, purely menacing, even if the argument is actually absolutely wrong. After all, when you search for intellectual property and deep web, or more specifically, what we’re here calling dark web, darknets, you will tend to think that this is a place created for criminal intent. It’s used only for that. You will see a lot of lawyer firms talking about intellectual property crimes happening on these places. And of course, it is a… a page that facilitates the sharing of illegal copyrighted material. You can find books and other visual contents that are under heavy enforcement on the superficial levels of the internet, especially with those wonderful tools that the entertainment industry have to date, automatically search and take down content, not always illegal ones. Illegal ones also get taken down by this sort of source. And also, some types of severe intellectual property infringements, such as trade secrets, commercialization, really are especially problematic here. But the dangers and infringements that are actually kind, they are actually kind of the same of those that we find on the surface net. And they are even less concerning, considering the sheer number of people that have access to normal websites and those that have access to deep web content repositories. You must remember that copyright infringement is not a problem when just a dozen of peoples are doing it. But a multitude of people affecting through known market failure, the possibility of existence of a certain business, or the possibility of revenue from a creator. And more than that, dark web agreements are actually presented as a paradigmatic example of the alleged dangers of copyright piracy online. So people and organizations use the threats of the dark web to actually enhance and extend the fear on overall sharing of content online. Which ends up just reinforcing even more how these policies are modeled towards rigid and aggressive systems of copyrights. So these illegal frameworks became arguably obstacles to the objectives they promote, because of the informational society reforms that were based exactly on the same idea of how piracy was a pandemic, and what we needed to reframe a bit of the internet potential so it could have a positive impact on the internet. avoid the bigger evil of intellectual property infringements. So the point I would like to talk here and discuss a little bit more later is how we need to be careful on how these ideas around dark web and dark net are presented, are used. So we don’t end up just trading something that is somewhat problematic for something that is systematically and severely problematic. So back to you.

Alina Ustinova:
Yes, thank you for input. And actually I have a wonderful news because I just found out that joining us online is actually one of the tour projects. So I actually think that it will be great to hear perspective from one of the most famous browser that is usually connected with the dark web. But so, Pavel, can we give a word to Pavel Zonov online so he can speak? No. No? Okay. I will go to the tech team to get, but before I go, maybe speakers can discuss, can make a discussion. Then we’ll, yes, we will just discuss about dark web, how we actually can, can we actually regulate it? Or can we actually control some kind of the thing that’s in the web? Because there are lots of policies. There are lots of laws created to govern the internet because we’re all here, but can actually parts of the web that we call dark web be governed? So I ask this question to every speaker. So if anyone wants to start. Okay, I can start. Okay, Fifi, yes.

Abraham Fiifi Selby:
Okay, all right. So looking at the. and regulating of dark web. It’s quite a complex task, and very challenging. But there are some ways that we can regulate, because we must ensure that there’s a law enforcement. And the law enforcement agencies must be able to investigate and prosecute organizations that use dark web for criminal activities, because people can use the dark web to do good things, good research, as my colleagues were saying. And also, the technology aspect, the government company might develop tools that will implement to disrupt the dark web activities for criminal activities, that are blocking access and other aspects. And one thing that we are all doing is education and awareness. We are creating education and awareness. Now, let me give this small scenario that we’ll be able to understand. Despite the challenge in the solution that to regulate dark web, there have been some approach in the various years and some other research that I have done personally, that in some cases, dark web were trying to be regulated. Maybe in 2013, the FBI shut down some Silk Road in the largest dark web marketplace. And also, when we’re looking at in 2020, the UK government announced plans to introduce new legislation that will give enforcement more powers to investigate and prosecute dark web crimes. So we can also try to give much education and awareness. And also, the technology and policies that we can also try to put it behind, in terms of developing new encryption algorithms, that can really help to regulate that. It’s a collaborative effort. And one entity cannot do. We are all involved. We must also be able to be safe on our own use of the online tools and resources, because dark web, as we are saying, it’s not only a tool. only for crimes. You can also use it to do. And one thing I want to say is that in this life, you cannot detect darkness unless you’ve been in darkness before. So it’s very important for us to learn how we can use this dark web so that we can make policy a regulation behind this, as Pedro was saying. So this is my take on it. There are some few regulations that we can do, but it’s a collaborative effort between an institution, we individuals, and stakeholders. Thank you very much.

Miloš Jovanović:
So when we speak about control of our information channels, I mean, traffic flows, and so on and so on, we should think about how to control our all, I would say, internet, speaking about sovereignty. So if we speak about fragmentation processes, which occurs definitely right now in these geopolitical circumstances, we know what’s happening right now in Europe, in the Middle East, everywhere across the globe, and so on and so on, we should see different technological zones. And when we speak about sovereignty, which is a really important topic in China, in Russia, in some countries in Europe, in America as well, so on and so on, we should understand that controlling information channels and traffic flow, I mean, speaking against cybercrime and how to protect your own infrastructure, how to protect your own citizens, and so on and so on, it’s a job for national governments, I would say. So when we speak about internet as a global network, we should understand that it’s a global network, but control of every part is in the hands of local governments, I would say. And this is what China proposed, what Russia proposed, and what other countries proposed. And this is a good example, because when we speak about data, but about potential investigation, controlling and monitoring traffic and so on and so on, we see fragmentation processes, and it’s all about technological sovereignty. So I would give, you know, I think it’s a good example. For example, when you visit China, you are not allowed to use some Western services. When you are in Russia, for example, there is a strict laws which propose that all data of Russian citizens should be stored in the territory of Russian Federation. When you go to Europe, to America, there is a huge discussion about, I would say, Huawei equipment, ZTE, Chinese manufacturing, and so on and so on, speaking about tech industry. So when we speak, moving back, I would like to make parallel with the interim and so on. It’s all about hardware, software, and protocols. So if we want to maintain and to control our national internet space, and our, I would say, information channels, it’s really important that we invest in, I would say, technological sovereignty of every country at the national level. So only if we have strong, I would say, powerful institutions, and forces, and speaking about some agencies, and monitoring institutions, and so on and so on, we will be able to fight against cybercrime and to protect our own interests. Okay, thank you.

Alina Ustinova:
So you mean that protection of a citizen is enhanced in the local government? Absolutely. Okay, so this avenue wants to add to regulation. Yes, Izan, thank you.

Izaan Khan:
I think that’s an interesting question, primarily because we need to sort of define what exactly we mean by regulation, because there are already regulations that exist. It’s like basic criminal law, don’t do crime. So if you’re talking about regulation in the sense of, is there a technological way to be able to control what people do online? Well, as I mentioned, it’s just an arms race. The government can try as hard as it can to be able to take down these unlawful services and activities, and individuals will try to find ways around that. That’s always going to be a cat and mouse game. But in terms of making the lives of law enforcement slightly easier, one interesting example of basically a type of forum shopping, essentially, is that law enforcement officials in many parts of the world are not actually allowed to commit crime in the course of… off fighting crime. And specifically in the case of the dark web, in order to gain trust of individuals who are accused of or being suspected of trading in CSAM material, you cannot try to gain their trust by yourself sending CSAM material to them. Except, unless I’m mistaken, the last time I read in the case of Australia. So a lot of international cooperation centers around the Australian government because Australian officials are then able to go, because they know that they have to be able to gain their trust in order to be able to detect and figure out who these individuals are. And so they are allowed to do that because the judges and the law is drafted in such a way that there is this sort of carve out or exemption. And I think we need to think about solutions like that where you’re able to come up with these sorts of solutions that don’t necessarily involve cracking the technological nut of what Tor and I2P and all of these other services provide, but also enable for a more pragmatic approach towards tackling cybercrime in these sort of contexts, in these anonymous contexts. So that’s, I think, my sort of two cents on the problem. Because when we talk about regulation, we need to talk about what exactly we’re trying to regulate and what mode of regulation are we using. There’s, sure, the law, but there’s also ways that we can regulate through controlling information flows and data retention and stuff like that as well. So we need to recognize what the limitations of each mode of regulation could be. Because if you say, don’t do crime, somebody could still go ahead and do crime. So you need to figure out, okay, is there a technological way that we can deal with this? If there isn’t a technological way that we can deal with this, is there a way that we can make our own lives easier to be able to tackle the cybercrime when we’re going and venturing out into that space? So I feel like there’s different sort of approaches and different layers to this problem of regulation that need to be considered. It’s not really a simple problem, but I’m hoping, and I have trust in our institutions to be able to do that in a balanced manner, basically.

Alina Ustinova:
Yeah. Thank you, Jonathan. That’s actually a very important point you made.

Speaker:
So Gabrielle, you want to add as well? Yes, just a few sentences to what was already said. In how I see it, my profession is to talk about risks and try to mitigate different risks and not go in unwanted territories, let’s put it like this. So when it comes to the dark web and everything concerning this, I would say that it’s a risk in this situation, the abuse of power in the name of fighting crime. So this is something that we should be aware of because it’s something that we should be aware of because data is the new oil today. If you will, it’s not something that is happening from yesterday to today. It’s already like from a decade that we see this type of activities going. And so what I would suggest… suggest is really to focus on cybercrime agencies, on dark web teams that would work actually together with, you know, the academia, with the different, you know, actors in the field. AeroPool, for example, has a dark web team right now that is focusing exactly on this type of illegal activities. Then I would say it’s very important also to report illegal activities. So, for example, you know, Monkey Sees, Monkey Reports in a very, I would say, private manner because whistleblowers are never welcomed in any country. So this is something that should be normalized, if you will. And then, of course, the awareness and the mandatory, as I see it, cybersecurity educational lessons for government people, for everyone involved in data, you know, whether it’s patient data in a hospital, anyone like an administrator in the hospital, the very first person you come and give your ID to, that person needs to go through a cybersecurity, you know, lesson and educational workshop. So this kind of three points that I really feel strongly about is something that is the basis of where we can actually give our input, because through, you know, an agency, we can always, you know, advocate for certain things, for certain techniques, we can learn from each other, and we can help. And of course, whoever is part of the sector already can definitely support the educational lessons. These are very simple, actually, lessons, because you don’t need to be, let’s put it that way, you know, a technical genius to understand certain things. But everyone knows here in this room that the cybersecurity breaches are generally, you know, connected to human breach. So it’s all related. to humans. So we like dogs, or we like, I don’t know, sheep, but you know, you click on a link, you do something, you don’t think that person, you know, is has some malignant thoughts, you know, it’s a lady or it’s a young boy, but you know, this is fishing. So you have many, many different situations that just regular citizens should be aware of, because we’re living in a digital era. And this is not something that is so special anymore. And it just needs to be normalized and put into a system that makes sense for everyone. So everyone should be part of it. And, you know, this type of topic should be just again, normalized, standardized in order to tackle this type of topics in the future. Okay, thank you. And so I was told that Pavel is now can add something as a third project. So please, Pavel, we want to hear your input and your view on the whole dark web theme.

Pavel Zoneff:
Thanks for giving me the space to say a few things. Ultimately, I think we can mirror a lot of the sentiment that has already been expressed today in the sense that there are probably many more positive use cases as it pertains to certainly the use of Tor software, whether it’s accessing our network and onion sites, and some of the other censorship circumvention tools that we provide. So ultimately, for us, we’re helping millions of daily users to securely access the internet and access their right to information and privacy and safeguard their human rights online. So whether that is, you know, from day to day online activity and protecting your rights to say no to non consensual tracking, to in certain parts of the world even be able to access news. So I know onion services have been discussed. And what we always point out is the scale and the statistic that they were referencing earlier. So actually, if we’re looking at the traffic on our tour network, only 1% or a little over 1% are solely traffic that is only directed to onion services. So meaning the sites that are completely confined to the tour network. So that is an extremely small number, especially if you know you want to open up the conversation to potential illicit uses of the tour network. So that is such a small faction that it is really hard to account for any nefarious activity carried online. So what we’re seeing is that our network is primarily used for censorship circumvention for maintaining your right to privacy. And the fun fact actually around onion services is that the most popular onion service seems to be Facebook and news sites, so we’re really seeing that these provide a valuable service and people’s ability to partake and democratic day to day actions. Yes, thank you very much. I think it was important to hear that because people usually associate that sort of browser with very realistic services and I know because we held a session like during our youth Russian IJF. We had a session about dark web, and one of our speakers said that you should not consider like every user who opens their browser that he is intended to do something wrong that he’s a criminal, just for opening the browser, just another browser, and that’s just all. So I think we can move to the major. Yes, you want to add, yes. Oh, just add to that I think this is a very important point that you make that because this is not just tour browser. This goes back to many other technologies such as encryption. I actually had a panel about this just before yours, but the truth is that there is a very powerful force right now that is trying to malign the use of privacy preserving technology, whatever it is, whether it’s Tor or Signal or any kind of other platform that utilizes encryption to make the case that this constitutes some sort of nefarious intent. And that is a very slippery slope. And this is something that we all as a community need to be outspoken against. Because I don’t know, I don’t remember who exactly said that, that there is regulation needed and that they have a huge trust in lawmakers. I don’t think that people across the globe have the same trust in the lawmakers, especially as we’re seeing that a lot of policymakers like the fundamental understanding of how privacy preserving technology actually works to the extent that governing laws are now made that roll back a lot of international standards as it pertains to human rights, access to information and freedom of expression. So we need to all be vigilant and ensure that we continue to have a right to privacy and encryption. Thank you very much.

Alina Ustinova:
Yes, it was a very important point to make. And so we move to questions and I give work to our online moderator, Maria. So we have some questions in the chat. So it’s- Yes. Yes, I can see you. Hello. Good night. Yes. We have one question from Habib Corrida if I read it correctly.

Maria Lipińska:
Can you share your insights on the potential positive use cases of the dark web beyond its negative reputation and how it might impact the future of online privacy and security? That’s the first question. And we want to ask, we want the audience. to share other questions, of course, from the online participants and audience. Thank you.

Alina Ustinova:
Okay, so who would like to answer the question? So the question was, yes, okay, yes, Pedro.

Pedro de Perdigão Lana:
I think I can go on that because I would also like to comment on something that you were debating before. Oh yeah, you want to go back? No, yeah, but I actually, Gabriel and Aizen, I told exactly what I was going to say, and after that, Pedro commented about cryptography. And the thing here is, when we’re talking about regulating dark web and darknets, I think if it’s possible to regulate or not, it’s not a real question, not the most important question, but more precisely, why we need to regulate it and what we need to regulate it, because pointing out more precisely what the problem is and how to tackle it without affecting the rest of the technology is really the scope we have here. The question that has been made about how it might impact the future of online privacy and security is that if we talk about regulating the dark web, regulating the darknets, sorry, the deep web, the darknets, without being careful about the ideas of more legislation, more regulation, stronger institutions, this may end up becoming a problem to those who care about and use these spaces for good uses, good utilizations, such as communication in places where freedom of expression is restricted, and so on and so on. Yes, thank you for your input. And going back to the question, so the

Alina Ustinova:
question was what other positive things can dark web bring to the ordinary user, but I think we actually covered lots of that, including that it gives you basically private connection and freedom of expression in most of the time, because you can access some web services that are not available, for example, in your country, or because actually going back to the conversation with the founder of the Tor browser, you want to add? If you access some services, you know,

Miloš Jovanović:
which are not available in your country, as you mentioned, you violate the laws in your country, you know, so there is a, you know, circle. So how we use technology and I came, you know, to fragmentation processes. So if you want to regulate something, I think it’s not possible to regulate exclusively dark web, deep web, because we see dark web is a part of deep web and deep web is a part of all network. So we should see layer as one level, all network. So now we need some approach, how to deal with some challenges which occurs in geopolitical sphere, because if you protect something in your country, maybe it will be allowed in other countries. So if you access these services, you violate the laws of your country. So it’s not easy discussion from regulatory side. From the technical aspects, you can do almost everything. And speaking about TOR, speaking about VPNs, about different aspects, how to protect or so-called protect your data, traffics and so on and so on. I don’t think that there is a right way and that you are able to protect what are you doing on the network. Speaking about standard stuff, about encryption techniques, about IIS certificates and so on and so on, it’s a huge discussion, it’s complex and so on, but I think there is no privacy on the internet.

Alina Ustinova:
Okay, that’s an interesting thought. So, but going back to the question, I think we actually covered lots of the positive impacts of the dark web, but so, I think if everyone in the audience, like on site have questions, we will do like one question from a site’s audience and one question from the online audience. So do you have, yes, do you have a mic? Please represent yourself and ask your question.

Audience:
Okay, good morning, everyone. My name is Ismaila Jawara from the Gambia and I lead a cyber security community in the country and then we provide training for law enforcement and university students in the areas of cyber security technical research and education. So, I have a question but before that I just want to give a preamble on that like that was this training that we had for one of the law enforcement agencies, and one of the inspectors, you know, officers said, Mr. If a thief broke into someone’s house, and the case is reported to the police, we come check, you know, how the thief broken, maybe the door or the window. And if someone have $1 million or dollars in this account. And the following morning, you know it is $0. How do I know which door or window they break, you know, so, so, so what I’m trying to emphasize here is, you know, the issue of dark web and, you know, regulating the internet and all that. I think, you know, as my colleague said yes it’s important that you know we also not notice the fact, you know, that local, you know, governments and regulators have different opinions and ways of, you know, things that I mean I would say, particular, you know, a wheel on how they want to regulate the internet within their space, but also, you know, I think the main purpose of the IDF, you know, is to for us to have a common ground on how we want the internet to be operated and use globally. You know, so for example what works in the in the Gambia or Africa, you know, should be something very similar to what works for, you know, us or Ukraine or China, because if not what we are going to run, you know, into is that you have countries like China Russia will implement, you know, certain things, sorry to mention, but then how about the already marginalized communities and nations that are already behind, you know, the current, you know, progress of the internet, you know, regards to education and technical support accessibility and all that. How are those people going to fit, you know, into that discussion of, you know, each your own way with your own way, I didn’t have it, you know, and then considering access to information right to privacy and all that so I think I just want to understand how is marginalized communities feel fit into this whole discussion, you know, when they are already behind. Thank you.

Alina Ustinova:
So, does anyone want to answer this question I think this is very important point is on yes. Thank you.

Izaan Khan:
So international cooperation is definitely one, and capacity building is definitely too. So you need to be able to train law enforcement because I think what the point that Pavel from tour made is a really important one which is that a lack of understanding of what the technology is capable of is what leads to really bad policy outcomes and enforcement outcomes. So upskilling and definitely giving law enforcement training on cyber crime related issues, and how to actually works and as 50 probably isn’t there anymore but as 50 mentioned earlier, you know, you can’t fight the darkness unless you’ve actually been in the darkness and it’s very similar when it comes to understanding, you know how browsers like to work you can’t regulate in the abstract you have to actually go in there and figure out how it works and try to, you know, put yourself in the mind of the criminal essentially. So capacity building is definitely a big one and then on top of that you have a lot of already existing international cooperation on fighting cyber crime you know was mentioned previously we have Europol we have Interpol we have office we have a whole bunch organizations that exist to fight cybercrime on a number of different fronts, be it geopolitical, be it on an individual level, be it organized crime, what have you. So definitely focusing on those two areas, both the diplomatic and the technical, would probably be the best approach that not only yourself, but as mentioned, any nation would have to fight this issue of cybercrime on the dark net and understand what is capable and what isn’t. That’s my point of view. I’m not sure if there’s anyone else.

Alina Ustinova:
Does anyone want to add something to the, well, I think that’s it.

Miloš Jovanović:
I mean, speaking about fighting against cybercrime, you know, I participated in some, you know, events in Serbia, you know, we had some accidents and it’s a multi-stakeholder approach. So when you want to fight against some, you know, accident or whatever, to, you know, do research what’s happened in the situation and so on and so on. We had a situation that 17 security and intelligence agencies participated in just one investigation. So it’s too complex, you know, and sometimes, you know, if you speak about Europol, about some different alliances and so on and so on, it’s not enough. You need to go into a multi-stakeholder approach and to communicate with a lot of different parties to solve some problems and to check what’s happening exactly. So that’s just the nature of network and packet transmission and the traffic flows and so on and so on.

Maria Lipińska:
Okay, thank you, Maria. Back to you. Do we have other questions in the chat? No, really, we don’t have any for now. So maybe we’ll ask the audience on site.

Alina Ustinova:
So do we have any questions on site? Yeah, please take the mic. Good morning to everyone.

Audience:
I’m from Sri Lanka. Actually, research shows that most of the millennials have the bad habits of cyber security, in cyber security. parent to the gen X and the older people. So in this context, so actually why do you think young people are drawn to the dark web and what are the activities that they really engage in? That’s my first questions. And my last, second question is, how do you see the landscape of dark web activities you will learn in the coming years? That’s mean in future, in the youth perspective. Thank you.

Alina Ustinova:
So I guess the first question is very interesting, Juan. And I think that personally I will answer that and then give the words of course to everyone. I think that young people are very, they like to see what is forbidden because forbidden fruit is the sweetest one. And they always want to try something new. Yes, yes. And it’s like, and it’s actually because like when you restrict something, it is very interesting to know what your restriction because it’s, especially in the young age, it’s a very kind of a protesting things against the system. Like you want to do this, you want to say, I can do this because I’m young and I know what I do. And the old people out there, they don’t actually don’t know what they’re doing. I will find the right answer. So I think one of the reason is that. And the other, well, we actually have a new generation alpha that has grew fully in the age of internet. It was very developed. So they knew how to use the phone probably before they knew how to speak. So probably because of that, they grew with the opinion that internet is not a threat but it’s actually a very good benefit they have. And so if you don’t know how to use internet, if you’re in the generation alpha, I think that it will be hard for you in the future in the life. So, and they try to use every tool that they can find on the internet. So maybe someone can add something to the point from our speakers. No one? So Ezzan, yes. Yeah, just a quick one.

Izaan Khan:
So usually why people would use these tools and access hidden services would be either curiosity or privacy or necessity. So one of those three things. In terms of the landscape changing, I think we may see technological advancements that would further protect privacy, which would again lead to further issues down the line potentially. As Paul mentioned, there’s a lot of work that Tor Project is currently doing on anonymization services. There are other anonymization services as well that are popping up. So yeah, that may be one force in the landscape. And another force would be, as again, was previously mentioned, to fight against encryption. We will see governments retaliate by saying, we want to undermine encryption in some way or have some back doors. And if we don’t provide those back doors, then those services will not be allowed, effectively making it illegal to use these kinds of services. So we will see that tension play out, and that tension has basically always existed. since the history of the internet is just going to be, we’re not fighting with sticks anymore, we’re fighting with nuclear weapons. That’s what it’s going to be like, I feel.

Alina Ustinova:
Yes, thank you. So I think we’re like actually running out of time. So I will ask every speaker who’s left, because I know that someone already left, to like, to have a closing remark, just one minute, what actually we have discussing. And I guess maybe you can answer the question. So should we like treat, the dark web is a fundamentally dangerous part of the web, or is it most similar to the rest of the web? So the basic question, just your opinion. So Gabriella, you wanna start?

Speaker:
Well, again, as I said before, I would just say that dark web in general is a tool, and it depends on how it’s used. Criminal organizations and whoever has bad intentions to deviate from the law in any given country will do that regardless of whether you have those tools or any other tools. Because what is important here is that the crime that we’re talking here in the dark web is actually the crime offline. It’s just the tool that is facilitating to make it, make it look like something different. You get a nice clear net website where everything is shiny and whatever. So it’s kind of very easy to actually deflect from this type of situations in case if someone has the intention to do that. So the dark web in per se is not, again, the enemy. The enemy is the system that should fight more often and more strongly towards fighting organized crime, because this is, again, just a tool. I’ve spoken many times on different conferences about different cybercrime topics. Recently, I’m focusing on AI-powered cryptocurrency cybercrime, because it’s, again, happening in a very unprecedented way, and it’s very, very quick. So every second something new is happening, 30 millions are laundered, just like that. So you just need to be, I would say, realistic and try to understand that the law enforcement, and many times the politicians are not aware of how the internet works. If you hear some congressional, US congressional hearings, it’s clear that by the questions asked to Meta, Facebook, Twitter, X, all these names, but you see that they don’t understand what’s their business model about. So if you do not understand how they make money, it’s difficult for you to understand how other people can actually use that tool to create, I would say, a dark economy for themselves. So there is actually also a colleague of mine who got a very interesting idea, and I fully support that, is to actually start hiring people, very creative people who should be, you know, from different backgrounds, it doesn’t matter, but to try to understand how their software can be manipulated in a bad manner. So for example, you have, I don’t know, a type of software, can it be misused? Okay, how? And this is actually something that, you know, is a discussion that is happening among high tech startups that are working on AI machine learning type of tools, because again, we’re creating right now tools that can be easily, you know, manipulated into what other people need. So the dark web, no, it’s not the enemy, we should just be more aware. And eventually, I mean, I don’t know if that’s correct or not, but I would like to open the discussion on a registry, for example, of softwares and their official use, and maybe have like a due diligence or technical due diligence to understand the different backdoors, or the different misuse, you know, someone who is interested in child pornography, or anything like that, they will use even games, online games, like there was a case of a pony game. So little ponies, you know, between each other, finding each other on a game. And in the chat, you know, pony, pink pony is talking to the black pony, hey, let’s meet in room number one, you know, and then they’re discussing, you know, a new terrorist attack, or they’re discussing where the trade of, you know, illicit narcotics or whatever, it’s going to happen. So, you know, it’s just the creativity that never stops. So the tool is not a problem. It’s, you know, the different approaches, that is what is important to focus on. Yes, thank you. So Izan, your last remark?

Izaan Khan:
Just to keep it very brief, I’m very glad that that amongst the panelists here, there’s some consensus about the fact that Tor is just a tool and there are many positive and necessary use cases for this technology. And the fact that law enforcement has other mechanisms that exist and it’s not all hope is lost for cyber crime. And yeah, it’s always a perpetual arms race. We’ll probably be revisiting this question in the next 20 years again with a different kind of technology. So we’ll see.

Alina Ustinova:
Yes, thank you. I also have Alain Pedro. You want the last remark?

Pedro de Perdigão Lana:
Yeah, really fast. I would just like to highlight that things are not in many cases, it’s not in situations that they seem really easily defined for sites. So answering many questions that were posed before with just one example that perhaps are also used for sharing academic documentation in an academic ecosystem that is very unjust towards poorer countries. So it is a crime or at least in civil infringements, but it may be less of an ethical problem than part of the science publishing industry. For example, those that are sustained with public funding and still charges are charged for thousands of dollars for access.

Alina Ustinova:
Yes, thank you. And we have left with Milos, so your final remarks. Okay. So as a conclusion, I would give it a strategic approach.

Miloš Jovanović:
My perspective is that we can use all technologies on the bright side, on the dark side, only how we think is right at the end. So, you know, speaking about dark web, about deep web, it’s just the service of internet, I would say, and TOR as an application and so on. So yeah, let’s end with this, that my approach is that we need to strengthen our local institutions and speaking about fighting against cyber crime. So this is the way how we can protect internet globally because we need some processes, we need some fragmentation processes. We will see how it will, and you know, how internet will look like in the near future. So yeah, my approach is that we have to fight against cyber crime with common approach, but on authority as the local governments. So thank you.

Alina Ustinova:
Yes, thank you very much. Thank you for joining us online and outside. I think we had a wonderful discussion. So we can of course talk after session. And if you want to speak more with us, we have a booth in the booth village, Central Global IT Corporation. You can always come and we have a wonderful discussion there. Thank you very much.

Maria Lipińska

Speech speed

157 words per minute

Speech length

102 words

Speech time

39 secs

Abraham Fiifi Selby

Speech speed

168 words per minute

Speech length

1015 words

Speech time

362 secs

Alina Ustinova

Speech speed

166 words per minute

Speech length

1868 words

Speech time

674 secs

Audience

Speech speed

162 words per minute

Speech length

557 words

Speech time

206 secs

Izaan Khan

Speech speed

205 words per minute

Speech length

1855 words

Speech time

544 secs

Miloš Jovanović

Speech speed

176 words per minute

Speech length

1824 words

Speech time

622 secs

Pavel Zoneff

Speech speed

170 words per minute

Speech length

690 words

Speech time

244 secs

Pedro de Perdigão Lana

Speech speed

164 words per minute

Speech length

1079 words

Speech time

396 secs

Speaker

Speech speed

159 words per minute

Speech length

2033 words

Speech time

766 secs

DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Bengus Hassan

The analysis highlights several important points in the AI conversation. One key finding is that companies are striving for a first-mover advantage in the AI race, often neglecting to consider the ethical implications of their developments. This emphasizes the need for grounding AI conversations in ethics. It is crucial for companies to not only focus on technological advancements but also take into account the potential consequences of their AI systems on society.

Furthermore, data protection emerges as a vital element in the AI conversation. Many countries, particularly those without data protection frameworks, are now grappling with significant data projects and AI implementations. This raises concerns about the privacy and security of individuals’ data. Reports from Paradigm Initiative highlight this issue, shedding light on the absence of sufficient data protection regulations in various regions, particularly in Africa. These findings underscore the importance of developing robust frameworks to safeguard personal information and ensure the responsible use of AI technologies.

The analysis also highlights the significance of diversity in AI. A personal experience shared by Bengus underscores the potential for bias in AI systems. This serves as a powerful reminder that AI technologies should incorporate perspectives from across the world, not just from the global north. To achieve this, diverse representation in AI modeling and research is essential. By encompassing different viewpoints, AI systems can be designed to be more equitable and inclusive, reducing biases and promoting equal opportunities for all.

Another important aspect discussed in the analysis is the role of regulation in the AI landscape. It is argued that regulation should do more than simply implement control; it should create standards. The conversation about data protection regulation in many countries has often provided an opportunity for certain governments to seek control rather than establishing reliable and comprehensive standards. This highlights the importance of developing regulatory frameworks that genuinely protect individuals and their data while fostering innovation and advancement in AI technologies.

The analysis also raises the point that innovation tends to outpace regulation. It provides a case where a country banned cryptocurrency before fully understanding its potential as a foundation for new forms of money and movement. This example serves as a cautionary tale, indicating that regulators and policymakers should strive to comprehend emerging technologies before enforcing restrictive measures. By creating sandboxes where ideas can be experimented within specific frameworks, regulators can grasp the intricacies and implications of new technologies, enabling them to make informed decisions.

In conclusion, the analysis underscores the need to consider ethics, data protection, diversity, and effective regulation in the ongoing AI conversation. Companies must not solely focus on being at the forefront of the AI race but must also take into account the ethical implications of their developments. Strong data protection frameworks are necessary to ensure the responsible use of AI and safeguard individuals’ privacy. Diversity in AI modeling and research is essential for creating inclusive and unbiased systems. Regulation should aim to establish high standards rather than merely exerting control. Policymakers must strive to understand emerging technologies before enacting restrictive measures.

Amandeep Singh Gill

The Secretary General of the United Nations has proposed the creation of a multi-stakeholder high-level advisory body to govern artificial intelligence (AI) practices. The objective of this proposal is to ensure that AI governance is aligned with principles of human rights, the rule of law, and the common good. The advisory body will serve as a credible and independent entity responsible for assessing the risks associated with AI and providing recommendations to governments on global AI governance options.

To ensure its effectiveness, the advisory body will work towards implementing existing commitments made by governments under international human rights instruments in the digital domain. This emphasizes the need for AI governance that upholds these important values.

The formation of the advisory body is still ongoing, with nearly 1,800 nominations from across the world being considered. It is expected to release an interim report by the end of the year, outlining its initial findings and recommendations.

In its work, the advisory body will consult various ongoing AI initiatives to ensure comprehensive engagement and cooperation. These initiatives include the G7 Hiroshima process, the UK AI Summit, UNESCO’s work on the ethics of AI, and the efforts of the International Telecommunication Union. By incorporating knowledge and insights from these endeavors, the advisory body can harness a wide range of expertise to inform its assessments and recommendations.

One important aspect of the advisory body’s mandate is to examine both the risks and opportunities presented by AI with regard to achieving the sustainable development goals. It will conduct a thorough assessment of the potential risks associated with AI, as well as identify the opportunities and necessary enablers that can help AI contribute to the acceleration of progress in these goals.

Overall, the proposal for a multi-stakeholder high-level advisory body on AI governance reflects the growing recognition of the need for responsible and ethical AI practices. By aligning AI governance with principles of human rights, the rule of law, and the common good, the proposed advisory body seeks to guide and shape the development and deployment of AI in a way that benefits society as a whole.

Moderator – Moritz Fromageot

An open forum on AI regulation and governance at the multilateral level took place, organized by the Office of the UN Secretary-General Envoys on Technology. The event, attended by both in-person and online participants, began with welcome remarks from Moritz Fromageau of the Office of the UN Secretary-General Envoys on Technology, who outlined the agenda for the day.

Amandeep Gill, the Secretary-General’s Envoy on Technology, delivered keynote remarks on AI regulation and governance, followed by Peggy Hicks, Director at the UN Human Rights Office, who moderated the panel discussion.

The forum then transitioned into a Q&A session, with audience members asking questions and the panel members providing answers. During the session, Amandeep had to leave and Quinten stepped in to fill his role. Additionally, Benga also had to leave, so priority was given to including his participation before his departure.

Co-facilitators from the Global Digital Compact were present in the room and encouraged to join the discussion.

For the Q&A session, on-site participants lined up behind the microphone, and the first three questions were collected. The questions focused on balancing the need for quick action with global processes, ensuring the enforcement of agreed-upon rules, and the importance of multi-stakeholder assessments in mitigating and enforcing the rules.

The panel members addressed these questions, and Gabriela took the opportunity to thank the audience and panelists for their participation. Peggy then concluded the session.

Audience

AI regulation is deemed necessary on a global scale due to the rapid advancements in technology, which are surpassing the development of regulatory frameworks. The current lack of swift global regulations means that tech companies are not being held accountable for the ethical and human rights implications associated with AI. To address this, there is a call for punitive measures or fines to be imposed on tech companies that disregard these implications. This approach is supported by the European Union’s General Data Protection Regulation (GDPR), which has implemented significant fines for non-compliance.

Ethical values play a crucial role in the development and deployment of AI. These values, such as dignity, autonomy, fairness, diversity, security, and well-being, are recognized by institutions such as UNESCO, EU regulation, and OECD. However, the challenge lies in enforcing these values in specific contexts. It is argued that concrete and measurable adherence to ethical values is essential in AI to ensure responsible and ethical development and deployment of AI technologies.

Another important aspect of AI regulation is the need for ethical assessments at both micro and global levels. These assessments involve multiple stakeholders and aim to identify, mitigate, and avoid risks associated with AI. At the company level, discussions involving customers and clients are necessary. Additionally, the intersection of bioethics and infoethics needs to be addressed. By including the perspectives of different stakeholders, these assessments can help shape the development and deployment of AI technologies in a manner that upholds ethical standards.

The governance of AI should be guided by global standards that are developed gradually and holistically. This will ensure that all aspects, including economic, social, and cultural rights, are taken into consideration. Furthermore, it is noted that the private sector has an interest in interoperable governments to facilitate seamless jurisdiction transitions. Governance of AI can also involve other policies that impact incentives, such as taxation, trade policy, and intellectual property policy.

In developing AI governance, an interdisciplinary and inclusive approach is advocated. The involvement of voices from all regions, genders, and disciplines is crucial to ensure a comprehensive understanding of the societal impacts of AI and its effects on social, economic, and cultural rights. High-level advisory bodies on artificial intelligence that incorporate diverse perspectives have been established to foster this approach.

Overall, the analysis highlights the importance of global AI regulation, the adherence to ethical values, the need for ethical assessments, the development of global standards, and the embrace of an interdisciplinary and inclusive approach to AI governance. These measures are essential to address the challenges and risks associated with AI technologies and to ensure their responsible, ethical, and inclusive development and deployment.

Moderator – Peggy HICKS

During the discussions on AI governance, participants stressed the need for a comprehensive conversation on this complex topic. They highlighted the importance of addressing issues such as privacy protection, deepfakes, and transparency.

In terms of privacy protection, the speakers noted that recommendations have already been made regarding the establishment of guardrails to protect individuals’ privacy. They emphasized the urgency of taking immediate action on issues like deepfakes and ensuring transparency in the data sets used for large language models.

The global challenge of AI governance was also discussed, with participants calling for a level playing field in the development and implementation of AI technologies. They stressed the need for increased investment to engage with the global majority and ensure inclusive AI governance.

The importance of multi-stakeholder participation in AI governance was highlighted. The participants noted the significant influence held by a small number of companies in the AI sector and called for increased commitment to effective engagement from various stakeholders. Civil society involvement was seen as particularly important in ensuring inclusive AI policy decisions.

Another important aspect discussed was the integration of a human rights framework in AI governance. Participants acknowledged the agreed-upon human rights framework across continents and called for its application in AI governance. They emphasized the need to move beyond rhetoric and make human rights actionable in policy making.

Diversity in the global conversation on AI was recognized as crucial. Participants stressed the need for greater diversity and inclusion to achieve a comprehensive understanding of AI governance issues.

The participants also emphasized the necessity of global standards and guardrails for AI. They highlighted the importance of integrating current knowledge and red lines into global standard-setting processes to ensure responsible AI development.

Transparency emerged as another key aspect of AI governance. Participants advocated for greater transparency in the global AI conversation, including dedicated forums for discussing AI governance.

The discussions also addressed the need for investment in social infrastructure and the digital divide. Participants highlighted the importance of building social infrastructure to support AI development and the role of public investment in creating necessary infrastructure for AI research. They suggested that those profiting from AI should contribute to these investments.

Lastly, participants stressed the need for a global framework to address digital technology and human rights issues. Collaboration across sectors, rights, communities, and countries was deemed essential to effectively tackle these challenges and ensure inclusion of all those affected by technological choices.

Overall, the discussions emphasized the importance of approaching AI governance from multiple perspectives, involving global engagement, multi-stakeholder participation, and a human rights framework. Participants urged immediate action on key issues, increased investment in inclusive AI governance, and the establishment of global standards to ensure responsible and equitable AI development.

Owen Larter

The analysis strongly supports global governance and standards for Artificial Intelligence (AI). The speakers believe that AI presents immense opportunities for humanity but also poses risks that require global collaboration and consensus development. AI encompasses a wide range of tools that offer significant opportunities for industries and infrastructure. However, these opportunities come with risks that transcend boundaries, making a global approach necessary to ensure the safe and responsible development of AI.

The main argument is the need for global standards to be established and adopted by national governments. The International Civil Aviation Organization (ICAO) is an example of successful global governance, involving every country in developing safety and security standards for aviation. The goal is to set global standards for AI in a representative and global way, promoting fairness and accountability.

Developing a global consensus on AI risks is also emphasized. The Intergovernmental Panel on Climate Change is cited as an example of successfully building an evidence-based consensus around climate risks. Similarly, there is a need for a collective understanding and agreement on the risks associated with AI. A global consensus would enable effective mitigation of these risks.

Investment in infrastructure is essential for a broad understanding of AI. The analysis suggests providing publicly available compute data and models, allowing researchers worldwide to better understand AI systems. Additionally, a global conversation on the social infrastructure surrounding AI, including ethical considerations and policy frameworks, is needed. This ensures that the benefits and challenges of AI are understood by stakeholders and align with global values.

The analysis consistently expresses a positive sentiment towards global collaboration, consensus development, and standard setting in AI. AI is seen as an international technology requiring international cooperation to harness its potential and address challenges. Examples such as ICAO and the Intergovernmental Panel on Climate Change are cited as successful models for consensus building and standards setting.

Furthermore, it is important to apply existing domestic laws to AI systems. Discrimination laws pertaining to loans and housing should extend to cover AI systems to prevent biases and discrimination.

Impact assessments are crucial for AI system development. Microsoft’s responsible AI program is mentioned, where impact assessments with human rights-related elements are conducted for high-risk systems. Sharing the workings and templates of these assessments can benefit the AI community in improving transparency and accountability.

In summary, the analysis strongly supports global governance, consensus development, and standards for AI. Collaboration across nations is necessary to maximize opportunities and mitigate risks. A global approach ensures that AI is developed and implemented in line with shared values, benefiting humanity as a whole.

Gabriela Ramos

Artificial intelligence (AI) has played a significant role in various sectors such as health and education. For instance, AI has contributed to our understanding of how the COVID-19 virus works, which has been crucial in vaccine development. AI has also been utilized in the distribution of benefits within the welfare, health, and education systems.

To ensure ethical advancements in AI development, UNESCO has developed frameworks and tools like the Readiness Assessment Methodology and Ethical Impact Assessment. These resources aid member states in implementing AI in an ethical manner. Currently, 40 countries are deploying this framework, with more expected to follow suit.

Legal frameworks play a vital role in the control and development of AI in the public sector. UNESCO recommends that legal regulation, rather than market forces or commercial reasoning, should guide AI development. Many countries are actively building their capacities to handle AI technologies responsibly.

Interoperability is essential in both technical and legal systems. As technologies become increasingly global, it is crucial to ensure interoperability of technical systems and data flows across countries. Additionally, the transnational nature of technologies calls for interoperability of legal systems to effectively regulate AI developments.

Harmful impacts of AI technologies are a concern, and governments need to understand potential implications and anticipate possible harm. It is essential for governments to have measures in place, such as compensation mechanisms, to address any harm caused by AI deployment.

Gabriela Ramos, an advocate for responsible AI development, emphasizes the role of governments in managing AI impacts and upholding the rule of law. Governments serve a crucial function in monitoring and regulating AI technologies to protect individual rights and maintain social order.

In conclusion, AI has been instrumental in sectors like health and education, aiding in vaccine development and benefit distribution. Ethical advancements in AI are promoted through frameworks and tools developed by UNESCO. Legal frameworks guide the responsible control and development of AI in the public sector. Interoperability, both in technical and legal systems, is crucial due to the global and transnational nature of technologies. Governments play a vital role in managing AI impacts and enforcing the rule of law.

Session transcript

Moderator – Moritz Fromageot:
Welcome to everyone here in the room. Also welcome to everybody who is participating online. We have an open forum on AI regulation and governance at the multilateral level now. My name is Moritz Fromageau, I’m part of the Office of the UN Secretary General Envoys on Technology. Let me quickly walk you through the agenda of the day. We will start this off by some panel remarks by our esteemed guests here, and then we’ll have a big Q&A session in which we want to engage with you, the audience. We will start this off with keynote remarks by Amandeep Gill, who is the Secretary General’s Envoy on Technology, and after that, Peggy Hicks, Director at the UN Human Rights Office, will moderate. the panel, and after that we go over to the Q&A session. Yeah, without further ado, I would hand over to Amandeep to introduce the topic.

Amandeep Singh Gill:
Thank you very much, Moritz. Welcome to this event, this discussion on AI governance and the very important dimension of human rights, the role of human rights in how we approach AI governance. So to set a little bit the context, I will talk about the Secretary General’s proposal in his policy brief on the Global Digital Compact that he launched on June 5th this year for a multi-stakeholder high-level advisory body for artificial intelligence that, as the SG said, would meet regularly to review AI governance arrangements and offer recommendations on how they can be aligned with human rights, the rule of law, and the common good. This proposal that he reiterated in his remarks to the first Security Council debate on artificial intelligence in July is currently being put into practice. So this advisory body is being formed as we speak after a process for nominations that ran along two tracks. One was member states being invited to nominate experts to the Secretariat, and the other was an open call for nominations. And all together, we got about 1,800 nominations from around the world. So different areas of expertise, backgrounds, different geographies. So it’s very satisfying to see that degree of interest and excitement about this proposal. we kind of hit the right spot with this. Now, what is the advisory body when it comes together? What is it supposed to do? The Secretary General has tasked it to provide an interim report by the end of the year. And there is a context to this timing. The discussions on the Global Digital Compact start early next year, restart early next year. They move into a negotiation phase. So this interim report would help those who are putting together GDC to consider one of the more important dimensions. There are these eight important high-level dimensions along with the cross-cutting themes of gender and sustainability that have surfaced through the consultation. So it’ll bring more substance and expert-level insight into that discussion. So after that, there is time for the advisory body to consult more widely, including with ongoing initiators. You heard the Japanese Prime Minister speak about the G7 Hiroshima process. There is the UK AI Summit. There has been work that’s been done earlier in the G7, G20 on AI principles. And there is longstanding work in the UN context. And today, I’m very happy to be joined by some of my colleagues. The work in UNESCO on the ethics of AI, consensus recommendation adopted by all member states. The work in the International Telecommunication Union on some of the standards that underpin digital technologies, but also at the AI for Good meetings. And then, most importantly, from the perspective of the SD’s vision and today’s topic, the work being done by the Office of the High Commissioner for Human Rights on how to make sure that. existing commitments that government member states have taken under international human rights instruments, they are implemented in the digital domain. So I just want to conclude by saying that this body that we’ll start meeting soon would help us pool multidisciplinary AI expertise from around the world to provide a credible and independent assessment of AI risks and make recommendations to governments on options for global AI governance in the interest of all humanity. I think those conversations that are happening today, they are very important, they are essential building blocks, but if this is an issue that concerns all humanity, then all humanity needs to be engaged on it through the universal forum that is the United Nations. The risk discussion can often be political or it can be motivated by economic interests. We want a discussion in which there is an independent, neutral assessment of that risk and a communication of that to the global community at large. At the same time, we also need to make sure that the opportunities and the enablers that are required for AI to play a role in the acceleration of progress on the sustainable development goals, they are also assessed, they are also presented in a sober manner to the international community. So looking at the risks and the opportunities in this kind of manner allows us to put the right governance responses in place, whether they are at the international level or at the national, regional regulatory level or at the level of industry where there may be self-regulation, co-regulation schemes to. address risks, including through the kind of initiatives that the Japanese Minister shared yesterday. So I’ll stop there and hand it to Peggy for the

Moderator – Peggy HICKS:
moderating panel. Thank you, Peggy. Great, thank you so much. We’re so fortunate to have Amandeep with us to give us that overall perspective about where we stand on these issues now. I’m going to have the pleasure, I’m Peggy Hicks with the Office of the High Commissioner for Human Rights, and I’ll have the pleasure of moderating the panel but also giving some introductory remarks from the Human Rights Office perspective starting out to just sort of set the course for us by making sort of four introductory remarks. One is that I think when we’re looking at the issues of AI governance, we need to be able to have a complex conversation. We tend to throw out the term AI and think that we all know what we’re talking about. We tend to talk about existential risk, near-term risk, short-term, mid-term risk, with no real definitions on the table. We need to break the conversation down. We need to be aware that there are areas that are already existing. AI that’s in use, being used in human rights sensitive and critical areas like law enforcement, where we don’t have any question about what needs to be done. We just need to implement the things that we already know. Recommendations have already been made about the guardrails that should be in place, for example, on mass surveillance technologies to protect privacy and in other places. We need to move forward on that and we don’t have to wait to do that. But then we also have the issues that have really rushed the surface around generative AI where there is a real need to look at what are the new challenges that are presented. And even within that area, some are immediate in terms of, for example, the impact of deepfakes. the need for water marking and providence to be put in place as quickly as possible, transparency around data sets for large language models. So there are things that we can do urgently, even within that emerging space. But then we have to also be able to look forward at the same time to what are the risks that are in our future that we see, and to be able to do the hard work of putting in place the governance mechanisms and approaches that will allow us to make sure that we’re tackling not just what we already know, but what we foresee for the future. The next point I want to emphasize is that that is a global challenge. And as much as we appreciate all the different efforts at the national and regional level, we need to be able to come together in a global way to address these issues. We need to be able to learn from each other, we need to recognize that the solutions won’t work if they’re only solutions that are adopted and taken in one place. And for that global engagement to work, we need to create a level playing field. And that means that there needs to be much greater investment and resources and engagement with the global majority that may have more difficulty being part of these policymaking conversations going forward. The third piece is one that of course comes up in the IGF context all the time, is around what we mean by multi-stakeholder and how that has to be part of the governance approach that we undertake in AI. And I want to emphasize that when we talk multi-stakeholderism, we are talking both in terms of the business side of things and the civil society side of things. And in fact, what we need on each of those pieces is quite different. With regards to business, there’s a tendency to really look at how we engage and to some extent mitigate the extent that a small number of companies have an enormous influence in this space. But at the same time, we need to create a race to the top where those companies may be the ones that are best prepared to put in place some of the guardrails that we need, but we also need to protect against the way other businesses will come into the sector and are coming in, perhaps with less incentive to put those same guardrails in place as we go forward. On the civil society side, we all know. know that that is an area where there’s a lot of commitment to general participation, but perhaps not as much to effective engagement. And we need a different pathway. We need to draw on the expertise. We need to make sure that civil society is present, because they’re the ones that will help us to make sure that no one’s left behind. And finally, and you won’t be surprised to hear me say this, I want to make a pitch for human rights and the human rights framework as being a crucial tool to allow us to move forward in all of these areas effectively. We’ve heard in many of the sessions I’ve been in already at the IGF how we have to build on what already exists and not create everything afresh. Well, the human rights framework is a framework that has been agreed across continents, across contexts. It’s celebrating its 75th anniversary today. My pin shows. And we need to find a way that we leverage it in this space. But that also requires support for us to be able to do that more effectively. It requires all of us to move from the talking point of, yes, we’re grounded in human rights, to making it actionable in a variety of ways in the policymaking context. So those are the introductory remarks from my side. But I’m very much looking forward to hearing from the contributors today. And I’m very pleased that we’re going to turn first, I guess, to Bengus Hassan, who’s the executive director of Paradigm Initiative and a member of the IGF leadership panel. So over to you, Bengus.

Bengus Hassan:
Thank you, Peggy. And thank you, Amandeep, for the earlier comments. I think it’s important to start with the three areas that have been identified by the Secretary-General in terms of human rights, rule of law, and common good, help with the ongoing conversation. But let me start with a statement, someone. So at the opening ceremony, someone who sat, I think, behind me. Yes, behind me. I shouldn’t confuse behind with beside. leaned over after the session and said, look at the stage, there’s no diversity, and during the AI panel, and then we had a conversation. And the conversation we had wasn’t just about diversity, but was about many things. And Peggy, you’re right. Civil society already, I mean, AI is not new. It’s been said that AI is the unofficial theme for 2023 IGF. I’m sure if you got a dollar for every time AI is mentioned here, you all be billionaires already. And also, there’s a tendency for us to assume that a conversation we’re having is understood by everyone and we’re all at the same level, but we’re not. So first of all, there are people whose level of inclusion, even before you have conversations of AI, are not exactly, we already have a divide, right? We already have a divide that is contributed to by some of the problems that we have that civil society is trying to address. And so three very quick things for me. Number one is that in all of this conversation, we’ve talked about the need for human rights, for the rule of law, and for the common good. But I think the common good will only be served if we have a conversation that is based on ethics. And I say this because if you look at all of the race, literally, like the AI race that we had over the last few months, and I’m sure we hear a bit more from the private sector representative on this, and at some point, there had to be a call to say, guys, let’s stop. And the reason for that was because it then, it became a race literally without rules. And everybody was trying to get to be the first to do it. Of course, there are many reasons for that. There’s economic incentive and there are other, the first come advantage and all of that. But those conversations must be built on ethics. And thankfully, we already have many frameworks around human rights that can guide us in this. So it’s not, we’re not creating new principles. We’re not saying that the ethics should be based on new inventions. We already have principles for that. The second is on data protection. And I say this particularly because we’ve had many conversations about the need for. privacy and protection, but there are many countries where there are still, for example, majority, you know, so we do a part of the initiative, we do a report every year on the state of the internet, on digital rights across the African continent. And one of the major challenges that we have is that there are many countries that do not even have data protection frameworks already. And not only are they now talking about, you know, just collecting biometric data, but they’re also talking about AI, they’re talking about massive, you know, data projects, and that is important. So ethics, also data protection. And I’ll come back to the first point that I made about diversity, not just diversity in terms of conversation. It’s great to have a panel, and at times I think with tokenism you can solve the problem, but we need to go beyond the tokenism. I think that the importance is not just in the conversations, but also in the modeling. I always give the example of my very first, you know, experience with an AI demo, you know, somewhere, you know, not too far from here. And I, you know, stood in front of this machine where everyone was standing and they were testing. I need to tell you where in the world you’re from and tell you a bit more about yourself based on the data I had been fed with. And then I faced this machine and I said, hi, and I said, hello, and I said a few words. And the machine not only said I was from the wrong continent, but also said I was very angry, and I was like, wait a second, what is going on here? And by the way, that project was already being used by a country to determine who to arrest based on prank calls. So it meant that anyone who sounds—I sound like this all the time because I’m Nigerian. I’m from a country of 200 million people. You need to raise your voice to be heard. So when I speak, I need to raise my voice. So if the machine thinks I’m angry and all that, it’s not because I am, it’s because I’m Nigerian and I have to raise my voice. So I think it’s absolutely important for us, not just in conversations, but in modeling and also in— research. AI by nature is global, but global does not mean it happens in the global north. Global means that it has applications across the entire world, and if it has, then it means that diversity must be a fundamental factor in what we do. Otherwise, we’re going to keep having many of the problems we currently have on social media where platforms are struggling to interpret something that is understood within a context and mean something else entirely once it crosses to another context. So ethics, data protection, and diversity.

Gabriela Ramos:
Thank you very much, Benga. Words to live by there, and I’m sure we’ll go back to each of those three points. But I understand that Gabrielle Ramos is now online and able to join us, so I’d like to introduce Gabrielle Ramos, who is the Assistant Director General of Social and Human Sciences at UNESCO. Over to you, Gabrielle. Thank you so much, Peggy, and I’m very sorry, but I got the wrong link, and I was with a very technical expert. Very interesting session, but it was not mine. Great to be here with you, and thank you. Great to share this panel with you and with Amandib. And I could not agree more with what the previous speaker mentioned. I think that ethics is a good guide because it’s not only about the challenges we are confronting now, but actually the challenges that might be posed to us with this very fast moving technologies. And we are now probably questioning all these issues brought by the generative AI, but AI is not new. And we know since how many years AI has been used to take decisions that are substantial and relevant for all of us. We know the application of these technologies in the distribution of benefits in the welfare system, the health system, the education system. We know how much… facial recognition has been used and now is being debated how much we can rely on it to take decisions in the public sector. But the public and the private sector have been taking decisions based on AI for many years. We tend to forget, but we know that having a vaccine to fight the COVID pandemic was actually allowed because of the analytical capacities that the technologies could put together to understand how the virus work. So it’s not new, but the questions that we ask of course are much more relevant given the pervasiveness and also the fastest speed at which these developments are advanced. So it’s very important that we have the right frameworks. If these major technologies are just deployed in the markets for geopolitical reasons, for commercial reasons, for profit-making reasons, it’s not going to work. And that’s why we are very pleased to be contributing to this framing of the technologies in the right manner at UNESCO. As since two years ago, the 193 member states adopted the UNESCO recommendation on the ethics of artificial intelligence. And I recognize Amandeep was one of the major contributors because he was part of the multidisciplinary group that we put together to develop the recommendation. And it was pretty straightforward, but I feel it was also in the right frame because the question was not to go into a technological debate of how do we fix the technologies or how do we build the technologies in certain ways to deliver for what we want to have in the world. But the question was actually, what are the values that we are pursuing? And then we put it all around. It’s a societal debate, not a technological debate. And the values, we know them. the values that the technologies should serve, are the human rights and human dignity, are the fairness, inclusiveness, protection, privacy. And these values need to be served by certain principles or goals. And you know them because these goals are of accountability, transparency, proportionality, the rule of law. These principles are part of the equation that have been advanced by many, many players in the artificial intelligence ecosystem. But these principles need to be translated from our perspective into policies, because policies is what will make the difference. Yes, the technologies are being developed by the private sector mainly, but this will not be different as many other sectors that we have in the economy where governments need to provide with the framework and the right framework for them to develop according to the law. And at the end, it’s not that the governments are going to go into every single AI lab to check that we have diverse teams, that the quality of the data is there, that the training of the algorithm has the adequate checkpoints, not to be tainted by biases and prejudice. But at the end, when you have the norm and when you have the tools and the audit systems to advance these kinds of outcomes, is when you get things right. And this is where we are now in the conversation, because the member states, when they adopted the recommendation, it was not only left to the goodwill of anybody who wanted to advance in building these legal frameworks, but they also asked UNESCO to help them advance specific tools for implementation, because we also are in an heterogeneity of capacities and systems that can be put together. And therefore, we developed two tools. understand where member states are regarding the recommendation, the readiness assessment methodology, that is not only a technological discussion, again, it is about the capacities of countries to shape these technologies, to understand these technologies and to have the legal frameworks that are necessary for them to deliver. And then we also develop the ethical impact assessment. And I feel that now we are converging with many other institutions and organizations that are advancing better frameworks for developing on AI. Just last Friday, we were with the Dutch Digital Authority because this is also an institutional debate. For us, this is for governments. Governments need to upgrade their capacities and the way they handle these technologies because, as I said, I’m a policy person and the reality is that this is about shaping an economic sector. An economic sector that, yes, pervades many other sectors and is changing the way all the other sectors are working. But at the end, it’s an economic sector. The way that the technologies are produced can be shaped, can be determined by technical standards, but it can also be determined by the rule of law. And it’s not as difficult as it might seem in terms of at least having these guardrails. When we say, for example, that we need to ensure human determination, well, then what the recommendation established is that we cannot provide AI developments with legal personality. And I feel this is just the very basic to ensure that whenever something goes wrong, there is going to be a person, there is going to be somebody that is in charge and that can be legal, liable, liable legally. And then we also need to have systems for redressal mechanisms and to ensure that the rule of law is really ensured online. I’m proud that have this framework is now being really deployed by 40 countries around the world and we will be having more. Next week we are going to be in Latin America launching the American Council for the Implementation of the Recommendation and we’re partnering with many institutions, with the European Union, with the Development Bank in Latin America, with the Patrick McGovern Foundation, with Bilastar, to ensure that we work with member states to look how they can build up these capacities to understand the technologies and to deliver better frameworks. We always also talk about skills, skills, skills, skills to understand, to frame, to advance a better deployment of the technologies. I feel that it’s also very important that we have the skills in the public sector to frame and to understand because these are also so fastly moving technologies that we need to be able to anticipate also the impacts that they can have in many fields that have not been tested. But if you ask me for the bottom line, the bottom line, and I think this is not the way that generative AI or chat GPT arrived to the market, is that you need to have an ethical impact assessment, a human rights impact assessment of major developments on artificial intelligence because before they reach the market. I think this is just right due diligence and it’s not what is happening in many of these developments as we see them. And therefore, I think it’s the moment to put the conversation right in the right framework to ensure that these technologies deliver for good. And we are seeing many movements. We just saw the bill that was put together in the US Congress. We know what the European Union is doing. We know how many countries are advancing this and we’re also doing it with the private sector. We can neither put all the private sector in one basket. We’re working with Microsoft and Telefónica because also this needs to be a multi-stakeholder approach, also gathering the civil society and many, many groups that need to be represented because the ethics of artificial intelligence concern us all. I’m so glad that I have this minute to share with you these thoughts and I’m looking forward to the exchanges. So thank you so much.

Moderator – Peggy HICKS:
Thank you very much, Gabriela. It’s wonderful to hear your comments based on the experience of UNESCO and the ethics of AI development, but also its application, as you said, and the work that’s being done globally to move forward on these issues. And I think the point that you make around human rights impact assessments and the need for them to be done before things reach the market is one that we’ll come back to as well. I’d like to turn to our final panelists now. We’re fortunate to have with us Owen Lartner, who’s Director of Public Policy in the Office of Responsible AI at Microsoft. Over to you, Owen.

Owen Larter:
Thank you, Peggy. It’s a pleasure to be here. It’s a pleasure to be part of such an esteemed panel. So as Peggy mentioned, I’m Owen Lartner at Microsoft. We are very enthusiastic about the opportunity of AI. We’re excited to see the way in which customers are already using our Microsoft co-pilots to better use our productivity tools. We talk a lot about co-pilots at Microsoft rather than auto-pilots. The vision for Microsoft around AI is very much retaining the human dignity and the human agency at the center of things. And I think more broadly, we see AI as a huge range of tools that is gonna offer humanity an immense amount of opportunity, really to understand and manage complex systems better and to be able to address major challenges like climate change, like healthcare, like a lot of what is being addressed in the SDGs. So a lot of opportunity, but I think it’s clear that there is risk and complementarity. Thank you very much. a panel, and so we need to think about governance. And I think as we turn to governance of AI, we need to think about governance globally. As it was said before, AI is an international technology. It is the product of collaboration across borders. We need to allow people to be able to continue to collaborate in developing and using AI across borders. It’s also quite clear that the risks that AI presents are international. They transcend boundaries. An AI system created in one part of the world can cause harm in another part of the world, either intentionally or via accident. And so I think as we think about global governance, it’s worth taking a little bit of a step back and sort of understanding where we are. And I do feel like an enormous amount more work is needed, but we’ve made a huge amount of progress in the last year. We’re coming up to quite an important milestone or a significant milestone, which is that we’re just a few weeks shy of the one-year anniversary of ChatGPT being launched on the 30th of November in 2022. And I think we can see the way in which that has really changed the conversation around the world on these issues. I think it’s fantastic to see the way in which the UN has done what the UN is always very good at doing, which is really catalyzing a global and representative conversation on these issues. We’re excited about the high-level advisory body. We think that’s gonna be really productive work. Really delighted to be working with UNESCO to be able to take forward their recommendation on artificial intelligence. We think that’s a really important piece of work. And really exciting to see the way in which you now have concrete safety frameworks being developed and implemented around the world. People might be familiar with the NIST AI Risk Management Framework. This is from the National Institute for Standards and Technology in the US. They published their AI Risk Management Framework at the start of this year. It really is a global best practice framework that any organization can use now to develop their own internal responsible AI program. So I think we’ve sort of moved to a place where we have the building blocks of a global governance framework in place. I think now it really behooves us to take a bit of a step back and think about how we chart a way forward. And I think there’s probably a couple of things that are worth bearing in mind as we do that. of having a bit more of a conversation about where we actually want to get to. What do we want a global governance regime for AI actually to be able to achieve? And then secondly, what can we learn from the many attempts and the many successes around global governance in other regime? So I’ll offer a few thoughts in closing. I think as we move forward, we ultimately want to get to a place where we are setting global standards that are being developed in a representative and global way that can then be implemented by national governments around the world. And I think there are great lessons to draw from organizations like ICAO, the International Civil Aviation Organization, part of the UN family. It does a great job of including pretty much every country around the world in developing safety and security standards for aviation globally. So I think there’s more that we can learn from that. I think the other thing that we need a global governance regime to do is to help us develop more of a consensus on the risks of AI. It’s really important part of thinking about how we address them. So I think of organizations like the Intergovernmental Panel on Climate Change, which has done a fantastic job of developing an evidence-based consensus around risks in relation to climate. Actually a really effective job of then taking that out and driving a public conversation, which can lay the groundwork for policy as well. I think that the final suggestion I’ll make is that we really need to invest in infrastructure as we move away forward. That’s both the technical infrastructure so that we’re able to study these systems in a holistic and broad way. It is very intensive to develop and use these systems, so we need to provide publicly available compute data and models so that researchers around the world can better understand these systems, can develop the much-needed evaluations that we need going forward. I think the other bit that is just as important, if not more so, is thinking about the sort of social infrastructure. How do we really have a global conversation on a sustained way on these issues that is properly representative and brings in views from everywhere around the world, including the global south? I think it’s a great start on that front. I think conversations like this and work that the IGF is doing is really important. I think there’s more that can be done. One small contribution that we’ve made so far and we want to do more is setting up a global responsible AI fellowship. So we have a number of fellows around the world, including from countries like Nigeria and Sri Lanka and India and Kyrgyzstan, where we’re bringing together some of the best and brightest minds working on responsible AI, right across the global south to help shape more of a global conversation and inform the way that we at Microsoft are thinking about responsible AI. I think there’s much more opportunity to do this kind of thing when we’re moving forward. But I’ll pause there for now.

Moderator – Peggy HICKS:
Great, thanks so much, Owen. It’s been really helpful to hear your comments on what the global governance AI challenge looks like and what are some of the next steps we need to take. Just to pull together some of the thoughts and then we’re gonna turn over for the question and answer. I mean, I think we heard very, very similar messages to some extent from our somewhat diverse panel, not as diverse as we’d need to be probably here either, Benga, but we all recognize the need for that global diversity. How we achieve it, I think we still have a lot of work to do, we can commit to it in principle, but in practice, it requires a lot more effort, a lot more resources to make it a reality, I think. We also heard the importance of really putting in place guardrails based on what we already know in the space and moving forward on them, the governance conversation with regards to the best practices is there, but we also need to recognize that we do have some red lines and those red lines ought to be part of the global standard setting process as well and moving forward. And finally, we also need to understand the need for greater transparency, greater ability for a global conversation to happen and that means making sure that forums like this one are available to a much broader audience, but that we have, I liked Owen’s comments about the social infrastructure that’s needed and that will require investment and commitment as well to move forward. So with that, I think I will close this first segment of the panel discussion and I’m to turn. over to Moritz, who will guide us in the question and answer. Over to you.

Moderator – Moritz Fromageot:
Thank you very much, Peggy. So we will now take the time for an extensive question and answer. So you all have the possibility to ask any question you might have. Unfortunately, Amandeep had to already leave the session, but our colleague Quinten is filling in. Also, I understood that Benga has to leave it in 20 minutes as well, so we might just prioritize you in the process. And I’m also seeing that we have the co-facilitators from the Global Digital Compact in the room, so do let us know if you want to participate in the discussion. For the on-site questions, you can line up behind the microphone over there. First come, first serve. We collect the first three questions and then answer them from the panel. And yeah, so feel free to ask anything regarding the session topic.

Audience:
OK, that’s a nice clarification. Hello, everyone. I’m Alice Lenna from Brazil. I’m also a consultant for GRI, the Global Index on Responsible AI. And I have a question that I think has relations with everything that you’ve said so far. Because we’ve been listening in all the panels on AI that AI must be regulated through a global lens, right? It can’t be just national frameworks. And we’ve also been listening that it must happen now. It’s urgent. And these things, we know that global regulations are not the fastest regulations we have. So my question is, how do we balance these needs? Thank you. Hi, I’m an attorney at law from Sri Lanka. Last year I just did a course from CIDP in Washington, and I’ve been studying AI policy. I was just wondering, the biggest threat is that the technology is running far ahead of the law. And is there any possibility, like we were speaking of global AI regime, et cetera, is there any possibility that punitive measures, like fines or penalties, can be given to these tech companies which are going ahead without the implications, the human rights aspect, the ethics, without that being examined, if the tech companies put out the tech? I feel the only way is to penalize them somehow, like how GDPR brought huge fines. Is there any conversation on that going on, or I just want to know? Hello, my name is Yves Poulet, and I am vice chairman of IFAP UNESCO program, and my specialty is infoethics, and I am chairing a working group on infoethics at UNESCO. I think we are agreeing all together about ethical values. I think there are a certain number of ethical values which are recognized by UNESCO recommendation, by EU regulation, by OECD, and these ethical values are very well known. That’s dignity, that’s autonomy, that’s definitively fairness, that’s diversity, that’s the problem of security and well-being, and so and so. So the problem is not the ethical values. I think that Gabriela was right. The problem with ethics is not the problem of… of designing the ethical values. But the problem is to what extent these ethical values are met in a concrete situation. And that’s another problem, and that’s another difficulty. And that’s why I think we need to have, definitively, legislation imposing what we call ethical assessment. I think it’s very important to have this ethical assessment. At a micro level, it means at the company level. And this ethical assessment needs, absolutely, to have what we call a multi-stakeholders within the company, and perhaps the customer, perhaps the clients, and I don’t know exactly which must be around the table. But we need to have this multi-stakeholders and multi-disciplinary assessment to clearly enunciate the risk, to mitigate the risk, and definitively, to try to avoid the main risk. And that’s very, very important, I think. If we have this ethical assessment at the micro level, I think that’s the most important thing. At the global level, I think we need, definitively, to have the discussion, discussion about a very important issue, like the increased man. It is quite clear that bioethics and infoethics, tomorrow, will join together. It is quite clear that, definitively, we must have a certain number of reflection about our iterative system, especially as we have the problem of manipulation of people, and all these questions. So my question is to know what’s your position about this reflection?

Moderator – Moritz Fromageot:
Yes, thank you very much. Just one suggestion, I think for the next round of questions, you could also say whom on the panel. you address the question too, then we can have it a bit more targeted. So yeah, three questions. The first one on how to balance the need for quick action in the face of some of the global processes that can take a little longer. Second question’s on enforcement. How do we make sure that the rules that we agreed on are actually applied? And yeah, the third one on the need for multi-stakeholder assessments on how to mitigate and also enforce the rules. So who would like to go ahead?

Gabriela Ramos:
I can chip in if you. Perfect, Gabriela. Then we’ll start with Gabriela and then give over to Benga. Okay. Well, thank you. I think these are very relevant questions and it’s true that the technologies are global and therefore this transnational character needs to be recognized. And I feel that’s why we are always referring to the interoperability, not only of the technical systems and the data flows across countries, but we are also talking about interoperability of the legal systems, because at the end, the kind of definitions that you have in one jurisdiction is going to be determining the kind of outcomes when you go into international corporations for law enforcement. But at the end, the very basic tenant of all this construction is to have the enforcement of the rule of law regarding these technologies at the national level. And this is the emphasis that we are putting in the implementation of the recommendation on the ethics of AI with the many different countries with whom we are working, because at the end, governments need to have the capacity first to understand the technologies, which is not as a straight. forward as it seems. Second, to anticipate what kind of impact that they can have on the many rights that they need to protect. And then to have commensurate measures whenever there is harm. And I think that this is another bottom line. Whenever there is harm, there should be compensation mechanisms. And these are the areas where governments need to upgrade their capacities. Then, of course, we need international cooperation, because at the end it would not work only if you have regulatory fragmentation at the national level. It’s very important that we also have this kind of exchanges in a multi-stakeholder approach to ensure that we learn from each other and that we can also share what we know are those that are the front-running developments in terms of the legal frameworks and those that are lagging behind. But I feel, again, the role of governments is really important in trying to ensure that the rule of law is respected. But that’s their task and that’s why they are paid for.

Moderator – Moritz Fromageot:
Thank you, Gabriela.

Owen Larter:
Fantastic. I can jump in and give some thoughts and agree with a lot of what Gabriela said as well. I think on the sort of global piece, I think it’s exactly right to look at these issues through a global lens. The risks that are presented are global. But I don’t think that necessarily means that every single national regulation needs to look the same as each other. Exactly as Gabriela said, I think it’s all about interoperability. And I think a big part of this will be developing some global standards in relation to how you evaluate these systems, for example, that different countries can then implement in a way that is sensible for them. In terms of sort of how to apply the law and where the law might apply, I think there is a large amount of existing domestic law that should be being applied right now in relation to AI systems. I think if you’re in a country where you have a law against being able to discriminate against someone in providing a loan or access to housing, it shouldn’t matter whether you’re using AI or not, that law should apply. I don’t think it should be an offense that, oh, you know, yes, I discriminate against this person and gave loans on unfavorable terms, but I was using AI, so don’t come and penalize me. That’s not gonna hold. So I think existing law should be applied across various different jurisdictions, whilst we also put in place these other frameworks that address some of the specific issues of AI as well. And then in relation to the impact assessment process, I think it’s a great thought. We are very enthusiastic about impact assessments at Microsoft. It’s one of the many things that we’re very enthusiastic about in relation to the UNESCO framework. We actually have an impact assessment as a core part of our responsible AI program at Microsoft. So any high-risk system that is being developed, the product team has to go through an impact assessment. It has a number of human rights related elements to it in relation to making sure the system is performing fairly, addressing issues of bias. We think that’s a fundamental sort of structured process to be able to go through. We actually have now started publishing the templates that we use for our impact assessment, and we’ve also published the guide that we use to help our colleagues navigate the impact assessment process. We think it’s really important to share our working as we go as a company so that others can quite frankly scrutinize and build on it and improve it. So we’d welcome thoughts that people have on the impact assessment template that we’re using at Microsoft.

Bengus Hassan:
Thank you. I mean, just to build on the earlier contributions, in terms of regulation being global and the fierce urgency of now, I mean, I can understand why that is the conversation that is happening because that’s a natural reaction to some of the confusion we’ve seen in the last one year. But one is that, first of all, the regulation and academia are now trying to diagnose the issues that the government has identified and they have identified instructions and they have had brief conversations with the government, but they have not had a conversation with the individuals that they have in various places in order to look into impairments and understanding and implementing new regulations. And I think it’s really important to say this, is that regulation is about creating standards and not implementing necessary control. And I say this because this is the same conversation we had about data protection regulation in many countries where it then became an opportunity for certain governments to seek legitimate control over areas where they were supposed to create standards. So the idea was to control and not to create standards that they were also, you know, going to go to a bad buy. But I think there are many existing processes that we can build on. And I can understand why global always, you know, gives the idea of being slow. Because there’s negotiation, there are countries that – I think there are some countries that may just want to be contrarian, just so, you know, because they want to, you know, take the mic and speak or something. But there are existing processes and there are things that work. I mean, I like the example that you gave of the International Civil Aviation Organization. And there are many examples that we can look at. We can look – you know, we can talk about some of the multi-stakeholder conversations we’ve had at ICANN, you know, and now at the IGF, and we can build on those processes. And on the second question, just very quickly, I understand the concern, and like you said, there are many, you know, the existing laws that can be applied. But I’m also a bit cautious when it comes to the sort of the tension between innovation and regulation and policy or regulation. I think that innovation will always, always, always be ahead of regulation. And what is important is for regulators and policymakers to at least seek to understand before regulating. Because we’ve seen in many instances – I mean, I know a country where we’re working where cryptocurrency was banned, and we had to write a policy brief to the central bank. You can’t ban this. What you are banning is the foundation of the new forms of money and movement. So I think it’s really important to, you know, create sandboxes where people can experiment ideas but within, you know, specific frameworks where if something goes wrong, of course, there is – you have to apply – abide by but it’s absolutely important that in the name of you know cautioning and not you know not allowing people to go a wire that we’re not stifling innovation because we’ve also seen that happen where regulation doesn’t understand innovation and wants to jump ahead of it. Thanks and I’ll pop in as

Moderator – Peggy HICKS:
well and then the the first question I think is a really important one and I think the that idea that that we can’t come up with a global framework I’ve said that a million times myself that you know making a treaty isn’t isn’t going to get us there because it will take us too long and by the time we got it it would already be outdated but I think Benga’s answer and and and Owen and Gabrielle as well have said some of the pieces that we have and we need to we need to build piece by piece one thing that we desperately need right now we talked about in a conversation earlier today is around a authoritative monitoring and observatory that will give us greater

Audience:
there’s a kind of paradox here everyone’s talking about global standards universally global standards and everyone’s talking about fast and what I’d like to suggest in a minute is that perhaps in this case I mean there are reasons why that this could happen very quickly including the fact that the private sector is very interested in interoperable governments so that they can move through jurisdictions easily without having you know different regulation in different jurisdictions so there’s a lot of kind of of a carrot there but I’d like to suggest in this case slow maybe fast in a sense because to get a global agreement to move from 20 countries 50 countries to 193 countries all of those countries have to want this and what we’ve noticed at least on the global digital compact process process, is that the term human rights has often had certain connotations for certain groups of countries. And as an example of that, we had a lot of submissions to the global digital compact process and, you know, from some of the political groups that were, say, from the Global North Human Rights, we did a word count of how many times human rights was put in there. It was, you know, a ratio compared to the words digital divide. It was in a ratio of up to 15 times. Every time digital divide was mentioned, human rights was mentioned 15 times. For some of the other groups representing the Global South, human rights may have been mentioned zero times and digital divide several times. So completely the opposite. Now what I’m going to suggest is that when we think about human rights holistically, yes, we have the individual, civil, political, and kind of rights. We also have the social, economic, and cultural rights in the Universal Declaration of Human Rights 22 to 27. And these are also human rights. And these also need to be protected and governed for. And these are human rights which the whole world can get behind, including the right to work, employment, favorable pay, standard of living, education, protection of authorship. So how can the world think about this topic of governance of AI from a holistic perspective and bring along the countries who have more urgent pressing needs on the economic side, on the development side, and take a holistic approach, not just geographically to 193 countries, but also holistically from a governance perspective? So if you allow me one more kind of interpretation here, we’re talking a lot about regulation and legislation in this panel. But governance can also involve other types of policies, not just legal regulation, not even just ethical standards or technical standards. They can also involve other kinds of policies that impact incentives, from taxation, trade policy, intellectual property policy, which also, by the way, is one of the socio-economic cultural rights for authorship. So how can the conversation be shaped in a way that governance can be thought of holistically across the different parts of the UN’s work, not just what is commonly thought of as human rights, the social and political rights, but also the economic, social, cultural rights, and the sustainable development goals? And how can all of these other countries who, when they hear human rights, they think, it doesn’t matter if we don’t focus on the economic side, to actually embrace a concept of governance that will, we hear a lot about AI accelerating SDGs, but how is that actually going to happen? We can talk about productivity tools on Office Copilot 365, that’s great for a lot of office workers in the West, but how does that actually put bread on the table? How do we get the climate resilient agriculture that people keep talking about? Does that actually involve different forms of economic policy like prizes or subsidies or even incentive-creating policies, like in the COVID challenge trials where the vaccine was developed in a matter of weeks instead of normally years? How does that happen to really get material impact on the SDGs? So I would say slow is fast in this. To get a global 193 countries agreeing, they have to see an interest in it, and to see an interest in it, we have to think of human rights holistically, to include the whole Universal Declaration of Human Rights, not just a sub-part of it, and to get to that, we need a holistic approach to policy which doesn’t focus… focuses only on regulation, but also embraces other kinds. And that’s why, when the Secretary General put together his high-level advisory body on artificial intelligence, which will look at governance, there was an explicit choice to make it interdisciplinary, include voices from all regions, genders, but also from all disciplines, including digital economy, including anthropology, to look not just at the individual impacts on individuals’ human rights, but also the societal impacts on individuals’ social, economic, and cultural rights. Thank you.

Moderator – Moritz Fromageot:
Thank you very much, dear audience and dear panel. I would hand it back to Peggy for wrapping this up very quickly.

Moderator – Peggy HICKS:
Thanks. Quentin already helped me out with that assist on the human rights side. But I do think it’s a crucial point, and one that we need to think about is that human rights aren’t only, when we use the words human rights, the digital divide, and what it means for people who are suffering for the lack of technology is also a human rights that falls in the basket of economic, social, and cultural rights, as Quentin has described. But we have to get away from a terminology debate and move forward on the issues that we’ve discussed today. I see the facilitator for the Global Digital Compact here as well. There’s a lot of work to be done in building that global framework, but it does need to be done across sectors and across rights, but also across communities, countries, and people. And that means finding the ways to bring in all of those who are going to be affected by these choices in a much more effective way. And that goes to the second part of the question that you asked, which is, how do we make sure that the resources are available to do it? I think that’s a fundamental piece here, that we need investment in this global public good. And that does mean, and I think Owen even brought up, the need for that social infrastructure to be built. And that means public compute resources that will allow the researchers to be able to do the research that we all know we need them to do. So it’s really looking at those questions and finding a way that we can make sure that those who are making the profits out of this are also helping us potentially to invest in the ways that we can make sure that this opportunity side of artificial intelligence is there for all of us. Thank you all so much for joining us. Thanks to the wonderful panel that we’ve had with us today. And I hope everybody enjoys the rest of the IGF. Thank you. Thank you. Thank you. You You You You

Amandeep Singh Gill

Speech speed

150 words per minute

Speech length

871 words

Speech time

348 secs

Audience

Speech speed

150 words per minute

Speech length

1561 words

Speech time

623 secs

Bengus Hassan

Speech speed

199 words per minute

Speech length

1673 words

Speech time

503 secs

Gabriela Ramos

Speech speed

166 words per minute

Speech length

2066 words

Speech time

745 secs

Moderator – Moritz Fromageot

Speech speed

157 words per minute

Speech length

484 words

Speech time

185 secs

Moderator – Peggy HICKS

Speech speed

200 words per minute

Speech length

2122 words

Speech time

636 secs

Owen Larter

Speech speed

235 words per minute

Speech length

1799 words

Speech time

460 secs

DC-CIV Evolving Regulation and its impact on Core Internet Values | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Sébastien Bachollet

The internet, a network of networks, is a global medium that operates on open protocols such as TCP, IP, and BGP. It is free from centralized control and promotes open and interoperable communication worldwide. This highlights the positive aspect of the internet, emphasizing its ability to connect people and facilitate the exchange of information.

However, financial challenges are impacting internet freedom. As the world economy struggles to recover, what was previously offered for free on the internet may no longer make financial sense for companies providing services. This negative aspect raises concerns about potential limitations and restrictions that may arise due to economic constraints.

In response to these challenges, governments are actively involved in drafting and implementing regulations concerning internet governance. Notable examples include the UK’s online safety bills, the Australian Online Safety Act, the European Digital Services Act, Digital Market Act, and the US Kids Online Safety Act. This neutral argument suggests that governments are taking steps to ensure the safety, security, and responsible use of the internet.

Amidst these discussions, defenders of the core values of the internet emphasize the importance of preserving certain principles. The Dynamic Coalition on Co-Internet Value promotes permissionless innovation, which allows for the unrestricted development and deployment of new technologies and services. This is seen as a positive stance that supports the notion of an open and innovative internet.

Overall, the analysis illustrates the complex nature of the internet and its evolving landscape. While the internet offers open and interoperable communication, financial challenges pose a threat to internet freedom. Governments are actively intervening through regulatory measures, and defenders of internet values highlight the importance of preserving the core principles that have contributed to its success. The promotion of permissionless innovation adds another layer to the discussion, highlighting the need for ongoing innovation and development in the digital realm.

Audience

The provided summary examines various arguments and viewpoints concerning the security, reliability, and anonymity of the internet. It highlights the increasing dependence on the internet and the rising number of security breaches, emphasising the need to enhance its security and reliability.

On the other hand, the summary acknowledges the struggle with the need for identification on the internet. While identification is necessary for certain purposes, the concept of anonymity is also seen as significant. It argues that anonymity should be considered a fundamental value of the internet and advocates for the development of a standard that can combine both security and anonymity.

Furthermore, the summary supports the creation of a trusted service that promotes secure anonymity on the internet. The benefits of such a service are not explicitly stated; however, it can be inferred that it would provide a secure platform for users to maintain their privacy online.

The summary also brings attention to the concept of communications metadata security, suggesting that it may be a more accurate term than anonymity. It explains that the term “anonymity” can be misleading and proposes that the focus should be on protecting the security of communications metadata.

In addition, the summary mentions the use of Tor for accessing services like Facebook, highlighting the advantages it offers. It allows users to have control over the level of communication metadata they reveal, ensuring their privacy and security online.

Furthermore, it discusses the network layer of the internet, emphasising that identification is not automatically performed at this level. This suggests that users have the ability to choose whether or not to disclose their identity.

The summary concludes by suggesting that it might be beneficial, both in a societal and platform context, to have the option of identifying oneself at a different layer of the internet. This implies that users should have the flexibility to choose when and how they reveal their identity online.

Overall, the extended summary provides a comprehensive overview of the arguments and viewpoints regarding internet security, reliability, and anonymity. It touches on the perspectives of enhanced security, the need for anonymity, the concept of communications metadata security, and the importance of user control over identification.

Lee Rainie

The analysis highlights the issue of internet fragmentation and its impact on various aspects of society. One significant finding is that a staggering 2.6 billion people currently lack access to and use of the internet. This statistic emphasizes the importance of addressing the digital divide and ensuring equal access to the internet for all individuals.

The impact of the internet is further explored through four major revolutions: home broadband, mobile connectivity, social media, and artificial intelligence. Home broadband revolutionised the internet by making it an essential utility in people’s lives. Mobile connectivity then increased the speed of information access and communication. Social media expanded social networks, connecting people globally. Lastly, the emergence of artificial intelligence brought both promising possibilities and fears.

However, it is important to acknowledge that these internet revolutions have also led to social, cultural, and legal fragmentation. Different experiences have emerged across various segments of society, including differences based on class, gender, age, race, ethnicity, religious affiliation, awareness, optimism, and individual behaviours. These disparities highlight the need to address inequalities and ensure that the benefits of the internet are accessible to everyone.

Another significant finding suggests that individuals often perceive themselves as managing the internet better than society as a whole. This perception may stem from personal proficiency or satisfaction with their own internet usage. However, this self-perception does not necessarily align with the overall societal impact of the internet, which may still face challenges and inequalities.

In terms of technology policy, the analysis reveals a growing trend towards partisanship. Previously, there may have been a consensus on issues like anonymity, but that consensus seems to be diminishing. Signs of polarization are evident in the dynamics of populist mainstream parties in Europe. This partisan shift in tech policy raises concerns about the ability to reach effective and inclusive regulations and policies.

The analysis concludes by suggesting that the current dynamic in tech policy is fluid and unsettled. Discussions surrounding technology and its regulation suggest an environment where things are constantly evolving and difficult to settle. This observation underscores the complexity and challenges in shaping a cohesive and inclusive tech policy framework.

Overall, the analysis highlights the need to address internet fragmentation, overcome inequalities caused by the different experiences of internet revolutions, and find ways to address partisan tensions in tech policy. By tackling these challenges, policymakers and society can work towards a more equal, inclusive, and beneficial internet ecosystem for all.

Alejandro Pisanty

Regulation proposals in the context of the internet have raised concerns regarding their potential infringement on the core values of the internet. It is believed that these regulations may have a negative impact on the technical principles with which the internet was built. This concern stems from the assumption that such core internet values are primarily rooted in these technical principles. The sentiment towards these regulation proposals is generally negative, highlighting the need to carefully consider their potential consequences.

One of the main concerns regarding regulation proposals is the potential reduction in the universality of the internet’s reach. There is a risk that these regulations may limit the accessibility and availability of the internet, thereby undermining its global reach. Additionally, it is argued that these regulations may also lead to a reduction in interoperability, making it more difficult for different systems and platforms to effectively communicate with one another.

In order to enhance security, there is a suggestion that additional devices might be necessary for stronger authentication or identification. This highlights the need for ongoing technological advancements to address the evolving challenges of cybersecurity and digital identity verification.

However, it is crucial to implement regulations carefully in order to strike a balance between enforcement and the preservation of core internet values. The focus should be on finding a middle ground that allows for the regulation of the internet while ensuring that the underlying principles that shaped its development are not compromised. This approach is considered constructive, as it acknowledges the importance of regulations while also emphasizing the need to safeguard the fundamental values that the internet was built upon.

The topic of trust establishment in the internet also arises, with questions raised about the magnitude of architectural changes that may be required. There are concerns about the scalability of trust systems and whether they can effectively meet the demands of a growing global network. Alejandro Pisanty specifically highlights Estonia’s trust system as a brilliant example but potentially limited in its scalability. This insight offers valuable considerations for future developments in trust establishment within the internet infrastructure.

Furthermore, discussions around internet governance touch upon the significance of privacy and online identity. It is argued that individuals should have the choice to identify themselves online without being compelled to disclose personal identification data. This highlights the importance of striking a balance between privacy protection and the necessary security measures in place.

The case of AFRINIC, a regional internet registry, brings attention to the challenges faced by private entities registered in certain jurisdictions. AFRINIC’s position as a private entity registered in Mauritius has resulted in numerous court cases, sparking discussions about according technical organizations governing the internet the status of internet government organizations. This observation raises important questions about the governance structure and legal frameworks surrounding the internet.

In conclusion, regulation proposals for the internet have generated concerns about potential infringements on the core values and principles of the internet. Discussions revolve around the need to carefully implement regulations to preserve the internet’s universality, interoperability, and core values. The importance of stronger authentication and identification is highlighted, but considerations must be made for the impact on privacy and choice. Trust establishment also comes under scrutiny, with reflections on scalability and architectural changes. The legal status of technical organizations governing the internet is explored, emphasizing the need for effective governance structures in addressing the complexities of the digital age.

Iria Puyosa

The analysis considers various perspectives in the debate on content moderation in encrypted apps and the transnational flow of data. It raises concerns about ill-designed regulation that could potentially disrupt the internet. The argument is that rushed regulation may have unintended consequences and negative effects. This highlights the need for careful planning and comprehensive consideration.

Another important point raised is the focus on harmful content within encrypted message apps. While much of the public conversation revolves around managing harmful content in these apps, research shows that the majority of content in messaging apps is actually useful and positive. This challenges the notion that harmful content is pervasive and questions the urgency of regulation.

Furthermore, the analysis presents an argument against breaking encryption solely for content moderation purposes. It suggests that there are alternative ways to address harmful content without compromising encryption. Breaking encryption in messaging apps could have broader implications and potentially undermine encryption on the internet as a whole. This negative sentiment emphasizes the importance of considering long-term effects on digital security and privacy.

The analysis also emphasizes the significance of considering the transnational flow of data in policy making. Regulations implemented in one country can significantly impact other countries. The extraterritorial nature of data flow is often overlooked in policy discussions. This neutral sentiment highlights the need for a global approach and collaborative efforts to ensure coherent and harmonized regulations that do not have unintended negative consequences on cross-border data flow.

Additionally, the analysis highlights the importance of respecting human rights, the rule of law, and internet integrity. It suggests that solutions should be found that align with these principles. Balancing concerns while maintaining the core principles of the internet is crucial.

The analysis recognizes the need for technical expertise in policy discussions. It emphasizes the importance of individuals with the knowledge and skills to solve problems and implement effective solutions. This observation underscores the intersection of technology and policy and the value of diverse expertise in shaping regulations.

To prevent unintended consequences, the analysis stresses the necessity of input from civil society and a thorough understanding of human rights before implementing regulations. Involving a broad range of voices and perspectives can help avoid exacerbating existing problems or creating new ones.

In conclusion, the analysis highlights the complexities and various perspectives within the content moderation debate in encrypted apps and the transnational flow of data. It underscores the need for well-designed and thoroughly considered regulations that do not compromise internet integrity or undermine encryption. Respecting human rights, the rule of law, and involving technical expertise and civil society in policy discussions are also crucial. A balanced approach is needed to address concerns while upholding the principles and integrity of the internet.

Nii Quaynor

The African Network Information Centre (afriNIC) has faced significant challenges in Mauritius due to local legislation. These challenges have affected afriNIC’s ability to develop effective policies and have caused issues with Resource Registry (RR) transfer policies. This legislative impact has had a negative effect on afriNIC.

Despite these challenges, afriNIC’s multi-stakeholder approach within the Policy Development Process (PDP) has remained resilient. Draft proposals aimed at hijacking resources have failed to reach consensus, demonstrating the effectiveness of the multi-stakeholder approach in preventing such attempts. Although participation in the PDP has been hindered, leading to the recall of a co-chair, the multi-stakeholder approach has overall been positive for afriNIC.

One argument put forth is that internet identifiers should be managed as public goods, rather than treated as property. Transfer policies in other regions have considered resources as property, but not necessarily for the end user. It is argued that managing internet identifiers as public goods is crucial for their equitable distribution and accessibility.

afriNIC has also faced challenges regarding non-compliance from a member. This member, who had received significant resources but refused to comply with afriNIC’s requirements, had their resources recalled as a consequence. This non-compliance has created further difficulties for afriNIC.

Another concern is the need for stronger protections and governance for afriNIC. Despite plans to become a decentralized organization, this transition remains incomplete. Additionally, afriNIC’s attempts to seek diplomatic protection have not been successful. These factors highlight the need for improved security measures and governance within afriNIC.

Commercial disputes between non-profit organizations and members have also arisen as a challenge. It has been observed that disputes can occur, raising questions about the effectiveness of the current legal system in resolving such issues.

Furthermore, disapproval has been expressed towards a member who refuses to be disciplined and has abused the legal system by generating multiple court cases. This member has violated rules and even attempted to bribe individuals, undermining the integrity of afriNIC and placing further strain on the legal system.

Lastly, concerns have been raised about business misuse and the potential hijacking of numbers by organizations lacking proper infrastructure. Some organizations have been found to be misusing resources and generating numerous court cases without the necessary business infrastructure. This raises ethical concerns and questions about the proper allocation of resources.

In conclusion, afriNIC has faced various challenges, including legislative barriers, non-compliance from members, commercial disputes, and concerns over business misuse and number hijacking. Despite these challenges, afriNIC’s multi-stakeholder approach has shown resilience in the Policy Development Process. However, there is a need for stronger protections, improved governance, and a more efficient legal system to effectively address these issues.

Vint Cerf

The analysis covers a wide range of topics related to internet security, privacy, anonymity, accountability, and the role of technology in filtering harmful internet behaviour.

One area of discussion is the side effects of internet security measures. While governments have enacted laws to protect internet users, there is concern that these laws can be used to inhibit freedom of speech. It is argued that internet security measures have unexpected consequences and may not always achieve the desired outcomes.

The importance of strong authentication is emphasised as a means of preventing unauthorised actions and impersonation. Strong authentication, such as end-to-end cryptography, is seen as a way to protect user information and maintain confidentiality.

Anonymity on the internet is also addressed, with some arguing that it can lead to harmful behaviour. Anonymity is believed to shield individuals engaging in bad behaviour and decrease the consequences for their actions, thereby encouraging harmful actions. However, others argue that mechanisms allowing for identity discovery should be tolerated, as accountability can help prevent harmful actions. The tension between anonymity and accountability is a significant consideration in this debate.

The limitations of technology, such as machine learning, in filtering harmful internet behaviour are highlighted. It is argued that technology fails to effectively filter harmful behaviour and that incorrect filtering can infringe upon individuals’ rights.

Certain situations, such as whistleblowing, are seen as necessitating anonymity. Whistleblowers rely on anonymity to protect their identity and ensure their safety, especially when exposing sensitive information.

The need for architectural changes to internet identity is also discussed. The current identifier provided by the internet, the IP address, is seen as insufficient for maintaining security and privacy. Estonia’s implementation of strong authentication for its entire population is cited as an example of the potential for significant changes to internet identity.

The importance of accountability over absolute anonymity is emphasised, acknowledging the potential risks associated with identifying individuals by biological metrics. Privacy concerns are balanced against the need for accountability to prevent harmful actions.

Vint Cerf, a prominent figure in the field, argues that absolute anonymity may no longer be a core value that serves the interests of internet users. He also supports the inclusion of a multi-stakeholder perspective in policy formulation, believing it should be a normal practice for governments. The multi-stakeholder model of organisations like the Internet Governance Forum (IGF) is praised for ensuring robust policy-making regulations and engagement with governments.

The value of cryptography in data protection is highlighted, with examples of Google’s encryption practices and user-controlled data keys. However, arguments against the idea that data about citizens should be kept within national borders are presented. Keeping data within physical borders is seen as compromising reliability due to the lack of redundancy, while transborder data flows combined with encryption are seen as offering safe data storage options.

The layering mechanism for communications metadata security is appreciated, drawing parallels with other elements of internet design such as the domain name system. The concept of user-choice in revealing identity is viewed positively and considered an important aspect of internet security.

The power of internet exchange points for connectivity is acknowledged, facilitating efficient connections between networks. However, concerns are raised about government-operated exchange points leading to unwanted surveillance if all traffic is required to go through them. It is suggested that cryptography could help secure encrypted traffic running through exchange points.

Furthermore, the challenges of maintaining exchange points and data centres in space are noted, due to the difficulties in accessing these locations and carrying out necessary maintenance.

Lastly, the critical importance of the internet in everyday life is recognised, with global surveys indicating a widespread unwillingness to give it up. The positive impact of the internet on various aspects of society is acknowledged.

In conclusion, the analysis explores complex and diverse perspectives on internet security and related issues. It highlights the need for a balance between security, privacy, anonymity, and accountability. The role of technology in filtering harmful behaviour is examined, and the importance of strong authentication and architectural changes to internet identity is emphasised. The multi-stakeholder approach in policy-making, the value of cryptography in data protection, and the challenges and benefits of internet exchange points and space-based infrastructure are also discussed. Overall, the analysis sheds light on the multifaceted nature of internet security and the ongoing discussions surrounding its various dimensions.

Deborah Allen Rogers

The extended summary discusses the effective e-governance models developed by Finland and Estonia. According to Deborah Allen Rogers, who works with the digital fluency lab Find Out Why, these solutions often go unnoticed. She suggests that promoting learning from and collaborating with Finland and Estonia on their e-governance models is important, as they have been implementing them for about 20 years and have answers to many challenges faced by Europe and the United States in e-governance.

The summary also highlights the crucial role of cryptography in protecting human rights, personal rights, and privacy. It is considered a safe and scalable method for safeguarding information.

Furthermore, the significance of scale in technology is emphasized. Deborah Allen Rogers points out that smaller societies can serve as test samples, and scaling their functional aspects has been successful. The CEO of XRoad, based in Finland, shares insights about their more conservative cultural context in scaling technology compared to Estonia. The summary also mentions that scale changes the concept of what can be done at the push of a button.

It is worth noting that Deborah Allen Rogers has previous experience with drastic transitions, having been a clothing designer during the shift of global manufacturing to China and during the AIDS pandemic, as well as being in New York during the 9/11 attacks. This experience adds credibility to her perspectives.

The functionality of societies is discussed, with Deborah pointing out the difference between highly governed and functional societies, like the Netherlands, and dysfunctional ones. The summary implies that dysfunctional societies may struggle in handling societal aspects effectively.

Finally, the summary emphasizes that the functionality of a society is more important than its size. This notion aligns with the SDGs of reducing inequalities and promoting sustainable cities and communities.

Overall, the extended summary provides a comprehensive overview of the main points, arguments, and evidence discussed in the original text. It also includes Deborah Allen Rogers’ insights and experiences, adding depth to the analysis.

Shiva

Internet exchange points (IXPs) are critical infrastructure that facilitate the exchange of internet traffic between different networks. However, there are concerns about the potential impact of IXPs operating on a commercial business model on internet neutrality. Some IXPs operate as for-profit entities, and this could potentially lead to favouritism or discriminatory practices, impacting the principle of net neutrality.

The argument against commercial IXPs is rooted in the belief that when financial interests are prioritized, the impartial exchange of internet traffic may be compromised. This sentiment is reflected in the negative sentiment associated with this argument. The supporting facts suggest that some IXPs do indeed operate on a commercial basis, which raises concerns about the potential erosion of internet neutrality.

Another concern related to IXPs is government regulation. There is a fear that governments could use their regulatory powers to manipulate or control the internet through IXPs. This negative sentiment draws attention to the potential misuse of IXPs as tools for political censorship or surveillance. The related sustainable development goal of SDG 16: Peace, Justice and Strong Institutions highlights the importance of preserving a free and open internet.

On a more neutral note, there are ongoing discussions and considerations for the design of interplanetary internet exchange points. Given the increasing interest in space exploration and the possibility of future interplanetary communication networks, the concept of interplanetary IXPs is being explored. However, limited information is provided regarding this topic, suggesting that more research and development is required.

In conclusion, concerns about the impact of commercial IXPs on internet neutrality and the potential for government control highlight the need for careful regulation and oversight in the management of IXPs. The concept of interplanetary IXPs adds an intriguing dimension to the discussion, emphasizing the evolving nature of internet infrastructure as technology and human exploration progress.

Joseph

The use of Virtual Private Networks (VPNs) is a topic that sparks controversy. VPNs have the ability to bypass internet restrictions, granting users the ability to access sensitive data that may be otherwise blocked. This feature has both positive and negative implications. On one hand, it allows individuals to browse the internet freely, evade censorship and access information that may be crucial in certain circumstances. However, this freedom can also be easily misused, leading to fraudulent activities and infringement on sensitive data.

The argument against the use of VPNs centres around the potential for misuse and harm. Those raising concerns argue that VPNs provide a cloak of anonymity that can enable cybercriminals to carry out illegal activities, such as hacking, fraud and identity theft. By masking their IP addresses and encrypting their online activities, these criminals can disguise their tracks, making it difficult for law enforcement agencies to trace and apprehend them. This creates a significant challenge for cybersecurity and poses a threat to the security of individuals and organisations.

However, it is important to note that VPNs have legitimate applications as well. Many individuals and organisations, such as journalists, activists and businesses, rely on VPNs to protect their sensitive information and maintain privacy. For these users, VPNs provide a layer of security by encrypting their data, making it difficult for hackers or prying eyes to intercept and exploit it. In this context, VPNs are seen as valuable tools for safeguarding data and ensuring the protection of individual content on the internet.

The need for protective measures for individual content on the internet is a relevant concern in today’s digital age. As more and more information is stored and shared online, the risk of cyber threats and data breaches increases. This issue is closely linked to topics of internet security, cyber safety and data protection. With the rise of cybercrimes and the increasing value of personal data, it is crucial to find a balance between protecting privacy and ensuring the safety of individuals and society as a whole.

In conclusion, the use of VPNs is a contentious matter. While VPNs can provide internet users with greater freedom and privacy, their potential misuse raises legitimate concerns. The debate surrounding VPNs highlights the importance of balancing individual privacy rights with the need for cybersecurity measures. Solutions that address these concerns while preserving internet accessibility and protecting sensitive data are crucial for tackling this complex issue.

Jane R. Coffin

This extended summary provides a more detailed overview of the main points, arguments, evidence, and conclusions present in the provided text. It also includes noteworthy observations and insights gained from the analysis.

1. Importance of funding small networks in the United States: – The text highlights the importance of funding small networks, specifically in rural and underserved areas. – It recognises the lack of connectivity in certain areas in the US and the need for creative and innovative funding solutions. – The argument is strongly in favour of funding small networks to bridge the digital divide and reduce inequalities in access to the internet.

2. Open connectivity and fewer regulations: – There is a call for open connectivity and the need to reduce regulations to foster innovation. – The text mentions the importance of keeping internet exchange points open with fewer regulations. – The argument is positive and emphasises the benefits of promoting open connectivity for industry, innovation, and infrastructure development.

3. Concerns about the erosion of core internet values: – The text raises concerns about the erosion of openness, interoperability, global connection, and permissionless innovation. – Certain countries and international organisations are observed attempting to regulate internet exchange points. – The argument expresses a negative sentiment towards the potential threat posed to the core values of the internet.

4. Advocacy for community networks and competition in connectivity: – The importance of community networks for building networks that serve the community, with the community, and by the community is emphasised. – The text highlights regulations that prohibit community networks and stresses the need for more network diversification and competition in connectivity. – The argument is in favour of community networks and advocates for their importance in reducing inequalities in access to the internet.

5. Need for inclusive, multi-stakeholder policymaking and regulation: – The text argues for inclusive and multi-stakeholder inclusion in policymaking and regulation. – It suggests that neglecting smaller networks, internet exchange points, and other stakeholders may lead to forced centralisation. – The sentiment is negative towards the exclusion of certain groups and emphasises the importance of diverse perspectives in regulatory decision-making processes.

6. Observations on unintended consequences in policymaking: – The text suggests that excluding civil society, the technical community, and academia from policymaking may lead to unintended consequences and forced centralisation. – The negative sentiment arises from the potential negative impact of excluding certain stakeholders from decision-making processes.

7. The role of the Internet Governance Forum (IGF) and the multi-stakeholder model: – The text highlights the obligation of the IGF and the uniqueness of the multi-stakeholder model in working with governments for better policy formation. – The argument is positive, emphasising the need for collaboration between the IGF, governments, and other stakeholders to improve policymaking and regulation.

8. Possibility of exchange points in space with Low Earth Orbiting Satellites (LEOs): – Relevant research funded by the Internet Society Foundation explores the possibility of exchange points in space using LEOs. – The argument remains neutral, presenting this as an area of exploration for future developments in internet infrastructure.

9. Issues surrounding control over traffic in LEO constellation networks: – The complex nature of control over traffic in LEO constellation networks is acknowledged. – Complications arise in negotiating cross-border connectivity issues with transmissions between countries. – The argument takes a negative stance towards a potential concentration of control in the hands of a single entity or company.

10. Acknowledgement of different types of internet exchange points: – The text acknowledges that some countries require traffic monitoring at exchange points. – It recognises the existence and role of both neutral, bottom-up internet exchange points and government-managed ones. – The sentiment is neutral, neither positive nor negative.

11. Support for encryption and potential relevance of cryptocurrencies: – The importance of encryption in protecting the privacy of internet traffic is acknowledged. – While the support for encryption is positive, there is no significant interest expressed in cryptocurrencies at present. – The sentiment is positive, emphasising the importance of privacy and security in internet communications.

12. Overall sentiment towards the future of the internet: – The analysis reveals a positive sentiment towards keeping the internet open, secure, and globally connected. – The text recognises the need for collaboration, open connectivity, and innovative funding solutions to bridge the digital divide and reduce inequalities. – There is a strong emphasis on the core values of the internet and the importance of multi-stakeholder involvement in policymaking and regulation.

In conclusion, the text highlights the importance of funding small networks, the need for open connectivity, and concerns about the erosion of core internet values. It advocates for community networks, competition in connectivity, and inclusive policymaking to avoid forced centralisation. The role of the Internet Governance Forum and the multi-stakeholder model is recognised, and potential developments in internet infrastructure, such as exchange points in space, are explored. Encryption and privacy also receive positive support. Overall, the sentiment emphasises the need to keep the internet open, secure, and globally connected.

Olivier Crepin-Leblond

The Dynamic Coalition, led by Olivier Crepin-Leblond, extends an invitation to individuals to join their year-round discussions. Notably, there is no requirement for a membership fee, making it inclusive and accessible to a wide range of participants.

The work of the Dynamic Coalition holds significance, as they will be creating a report based on their sessions. This report will be taken into account in the Internet Governance Forum (IGF) messages for the Kyoto meeting, emphasizing the recognition of the Coalition’s efforts and their valuable contributions.

The initiatives of the Dynamic Coalition align with two Sustainable Development Goals (SDGs): SDG 9, focusing on industry, innovation, and infrastructure, and SDG 17, emphasizing partnerships for goal achievement. This demonstrates the Coalition’s commitment to contributing to the global sustainable development agenda.

Overall, the Dynamic Coalition, under the leadership of Olivier Crepin-Leblond, provides an open platform for discussions and collaboration. Their dedication to producing a report that influences internet governance decisions highlights the importance of their work. Furthermore, by aligning their efforts with key SDGs, the Coalition showcases its commitment to contributing to global sustainable development goals.

Session transcript

Sébastien Bachollet:
Ladies and gentlemen, we’ll start in one minute, please. Thank you. OK, let’s go. My name is Sebastien Bachelet. I am in charge of taking care of this meeting, but I will not be the main speaker. Of course, you are joining the dynamic coalition on co-internet values on the topic of evolving regulation and its impact on co-internet values. So co-internet value, which comprise the technical architectural values by which the internet is built and evolves and derives universal values that emerge from the way the internet works. So internet, it’s a global medium open to all, regardless of geography or nationality. It’s interoperable because it’s a network of networks. It doesn’t rely on a single application. It relies on open protocols such as TCP, IP, and BGP. It’s free of any centralized control except for the needed coordination of unique identifiers. It’s end-to-end, so traffic from one end of the network to the other end of the network goes in the grid. It’s user-centric, and users have control over what they send and receive, and it’s robust and reliable. So dynamic coalition on co-internet value held sessions at every previous IGF. And every year, there seems to be another challenge, one of the most basic co-internet value. It’s unique weakness. In 2023, the world economy has not recovered from the challenge of previous years. What was free on the internet might no longer make sense financially for companies offering the service and might end up behind a paywall. What was free movement of information in the past might not be seen by government as a good thing today. What was free connectivity might not be financially sustainable any longer. What was free might be blocked tomorrow for many reasons. On the one hand, there are calls for commercial operators such as telecom providers asking for a fair share of internet profits, which is gaining grounds with some lawmakers. In addition to this commercial pressure, where the free mode of operation might no longer be the preferred mode of operation, recent years have seen a lot more regulation affecting the internet. Whether it is the UK’s online safety bills, the Australian Online Safety Act, the European Digital Services Act, and Digital Market Act, or the US Kids Online Safety Act, regulation is being drafted and ruled out by many governments. Very often for good reason and good objective, but it’s something we will see during this discussion. So not only is there a strong movement worldwide to implement some major structural change to the ways internet and internet services work, there is also a commercial interest from some to change the internet business model altogether. A few years ago, the Dynamic Coalition on Co-Internet Value promoted permissionless innovation. These days, for many governments, this translates to the World Wild West. Is this a fair assessment of the internet that we have been defending? Are the core values that gave internet its freedom at risk? Regulation, it’s now firmly back on the agenda. This session of the Dynamic Coalition Co-Internet Values will, again, bring world-class experts to discuss the internet we want, each bringing their unique experience to the table. I will briefly talk about our speaker. We are here on my right, Lee Rainey. I will leave them to present themselves. It will be shorter. Jane Coffin is with us.Nii Quaynor and Iria Pusosa are online. And Vint Cerf is with us. I would like to thank them very much. And give the floor, if you agree, to Lee to start the discussion. Lee, the floor is yours.

Lee Rainie:
Thank you, Sebastian. It’s wonderful to be here. I’m honored to be here. And really, my philosophy has been whenever you’re in the same room with Vint Cerf, you have to start by saying thank you. And I come to you from 24 years of doing research with the Pew Research Center about the social and political and economic impacts of the internet. I thought I was going to retire from Pew Research this past spring. And I flunked retirement. So I got a wonderful gig to continue on with a portion of the work at Elon University, which is in North Carolina in the United States. We’ve done a lot of work with them related to that. And I get the title professor in front of my name now. So my mother is smiling at me in heaven. And my children laugh at me a little bit less now. I wanted to start by saying the overlying topic here is fragmentation. So the first thing maybe to note in the sense of fragmentation is that there are 2.6 billion people who don’t have the internet and don’t use it. And so there is an enormous fragmentation at the heart of the social, political, cultural experience of the internet. So just noting that is an important scene-setter for this conversation. Over the course of my work at Pew, though, it was easy to spot four different revolutions that were occurring on our watch. And watch then the reckoning that came from those revolutions. There was a dynamic that has tightened up. There’s usually great enthusiasm at moment zero. And then the enthusiasm sometimes faded as the reality of things came out. So I want to also make sure that you understand I’m going to be talking about four social, cultural, and legal changes. These don’t really affect how people think about the underlying principles of the internet. They love it. You poll on the ideas of free, open, secure, interoperable. And you get unprecedentedly positive survey ratings about the principles that underlie what the master here built. What happens, though, is that once those principles collide with culture and law and people’s own personalities, there are ways in which their enthusiasms begin to fade or their qualms begin to rise. So go through the four revolutions relatively quickly. The first one we saw in the late 1990s, beginning in the late 1990s, was the rise of home broadband, which made people enthusiastic users of internet protocols because the internet became a utility in their life. It was not a play thing anymore. When you dialed up those modems, that was kind of a fun sound to hear. But when it became always on and on higher speed, people began to embrace it in the rhythms of their life. It changed the volume of information that was coming into their life. And you could see the incipient ways that they became enthusiastic about being content creators themselves. So it was democratizing. It was doing end runs around gatekeepers. There were ways in which new kinds of communities could be built that were built around affinity and affiliation rather than localities and the physical proximity that people had to each other. And people just loved the idea that they could tell their stories without being shut down or without having to cajole a gatekeeper to allow them to tell their stories. And yet, right in those early days, there were early signs that people, while they liked that for themselves, they didn’t like that necessarily for others who had different ideas. The medical community at first was one of the initial communities to sound alarms around mis- and disinformation. They were worried, from a gatekeeper sense, that people were doing end runs around their providers and getting second opinions and diagnosing themselves and things. But there was also concern that more and more misinformation and just bad information was getting out into the world. Dangerous actors early on began to figure out how to exploit these new tools for themselves. Concern about the content that was appropriate, particularly for children, to be exposed to. I came out of the world of journalism, too. So it was easy to see the warning signs of what the internet was going to be doing to mainstream journalism in the culture. So that was part of the backlash. Love at first, democratizing, but also concerns about some of the early ways in which it was playing through the culture. Second revolution was the mobile connectivity revolution, which changed the velocity of information into people’s lives. All of a sudden, their phones became another body part and another lobe in their brain. And they loved that. They loved the always-on, always-available connectivity that they had with others. They liked being able to be reached by others. They liked the fact that the nature of their social networks was changing, even before social media really sort of came to prominence. They could see more people in their lives and interact with more people. And they enjoyed that. But they, again, sort of early enough in that whole arrival of that second revolution on mobile connectivity, they began to worry about the distractions that it was bringing into people’s lives, the way it was disrupting their attention flows, the way that they were always available to others. They liked it in some sense. They certainly liked it when they could do outreach to others. But they didn’t like necessarily being always available to others. And it imposed new obligations on their lives. So again, there’s this sort of push-me-pull-you, yin and yang dimension to the rise of this second revolution. Third revolution is social media, particularly when combined with the mobile connectivity revolution. It just put everything on accelerants, the relationships in their social networks, the size and scope of their social networks, their exposure to new information and people and ideas, the fact that they could share the adventures of their lives, and even the little things in their lives, very quickly with the push of a button. And they could like and affirm things that others were doing. That was incredibly exciting to people and changed the way that they reacted to media. They lived their lives in a variety of ways. But then, relatively soon, too, began the first backlash wave about, well, what’s this doing, particularly to younger children, and especially to girls, when their messaging that was coming into the world was not necessarily affirming or was showing them parts of life that they struggled to think that they would ever have access to and things like that. The business model of the companies themselves began to raise questions about, well, how much do they really know about me? And how much am I being targeted and manipulated or steered or things like that? Obviously, there were concerns about harassment and hate speech and threats and all kinds of things like that. And information warriors themselves taking actions in this. The fourth and final revolution that I’ve been privileged to watch and is unfolding in front of our very eyes and is the central topic of this idea is the artificial intelligence revolution. And clearly, people have very discriminating ideas about it. There are ways in which they think AI is doing wonders in their life. And they anticipate even more wonders in the future, their productivity and things like that. But they’re also worried about their jobs. And they’re worried about bias and discrimination. They’re worried about their own autonomy and a way to act. And they’re worried about ethical applications. I heard a number here. I hope someone will fact check me on this if I’m wrong. There are at least 1,300 documented protocols of ethical AI that are now being circulated, God knows how many more, in more private channels. But it’s a sense that there’s a palpable fear that these tools might turn bad or they might be pulled in bad directions. So those are the four revolutions and the backlash. So each of them sort of have affected people’s lives. But I also wanted to talk for a minute about other ways that I call them fragmented souls are affected by these new environments. And again, a play through the social, cultural, and legal fragmentations that we’re seeing. Everything that we’ve studied about those four revolutions shows that different groups have different experiences of the revolutions. And the obvious ones that Pew measured every time something new happened was there are differences by class, differences by gender, differences by age, differences by race and ethnicity, and sometimes pretty significant differences by religious affiliation or non-religious affiliation. There are also differences by sort of psychographics, the way people are affected their relationship to these new tools. First of all, especially when it comes to AI, their awareness is an enormous determinant of how they think about it. The less people know, the more scared they are. And you can see how public education and other just sort of familiarization processes might ease things over time, but that’s a big determinant now. But there are differences among those who are optimists and pessimists, those who trust and don’t trust as their starting point with other individuals, extroverts and introverts, and a whole lot of other psychographics. Finally, just to make things confusing from a fragmentation sense, and anybody that’s trying to deal with this has to deal with the reality that different people act different ways in these environments. At one moment, the context is open and affirming, and I want these things in my life, and I would like them available to me. At another moment, I don’t want any access to me. I don’t want my data being gathered. I don’t want to be offered this transactional kind of thing. So there are ways in which you can’t even predict at the individual level at times whether people are going to like it or not like it, which makes lawmaking hard, which makes rollouts of new products and applications hard, and things like that. And the final one is the sort of big one, which is there’s an optimism gap that’s at the center of people’s thinking about the fragmentation we face. They think, each individual thinks, I’m doing OK in this environment. They like all of these revolutions for what they bring to their lives. But they also think everybody else is messed up by them. I’m OK. You’re not. So they think they’re doing fine, but the society is not doing well, and they have a split mind thinking about how to reconcile that in policy, in culture, in norms, and in technology. Thank you, Sebastian.

Sébastien Bachollet:
Thank you very much, Lee. I will give the floor to Jane now, please.

Jane R. Coffin:
Hello. For those of you that don’t know me, my name is Jane Coffin. I’ve been rambling around the internet community and connectivity communities for about 25 years. I’ve been in government, industry, non-profits, and startups. My last startup was one that I didn’t start it up, but it was one of the key people. But it was one of the key people working on the startup to help fund small networks, believe it or not, in the United States, because there are a lot of networks that are not being deployed in the rural, remote, urban, unconnected, un- and underserved areas. And it was specifically to take a look at how to fund those networks with creative, innovative funding, a.k.a. bringing what people call blended finance and impact investment back to the United States, where it probably should stay for a while, because there’s a lack of connectivity and things have to change. And the regulations need to be loosened up a little bit in order for that to happen. During the 25 years that I’ve been running around, I’ve done a lot of work in what people call the Global South, but the Global South often doesn’t call itself that. The common denominator is working in those places that are less connected and potentially had fewer regulations and policies. So helping to bring some policy and regulatory sense in some areas and or building regulators, to help bring in more open connectivity, which was always my goal. I was at the Internet Society for 10 years and spent a lot of time working on internet exchange points and community networks, which I’m going to focus on as some of the core internet valued entity things that we need, related to something called invariance. And the Internet Society put out a paper called Internet Invariance. And I want to read the Wikipedia, if I can find it again. Definition for you of invariant, which is a constant. It’s something that’s not changing. And so if some of the key internet invariants are openness, interoperability, globally and something that I think Vint coined as permissionless innovation. I’ll call it innovation without permission. Those are critical things for building your internet community and building networks in anywhere. But what we’re seeing is some erosion of those key things about the openness, the interoperability, the globally connected part, which is if any endpoint of a network can connect to another endpoint from that global interconnection, this is super important. Internet exchange points are a sign of some of these invariants because they bring networks together in a very neutral fashion to exchange traffic without a lot of rules. The rules are, of course, based in protocols that come out of the Internet Engineering Task Force and some other organizations like the IEEE. If you’re doing wireless, sort of Wi-Fi connectivity at the IX. But those internet exchange points that we helped develop over time gave people a neutral grounding place to exchange traffic. They were often not regulated, and it’s been quite something to work over the last 15 to 10 years to make sure that they weren’t regulated and to keep them open. We’ve seen some erosion of that in different countries, and I’m not gonna name the countries or where some of this is coming from, even in international organizations where they wanted to standardize the stack of equipment in IXPs, which could have created more challenges and hardened the architecture to a degree that there was less innovation when you’re building the internet exchange points. The other connectivity medium that we were working with so closely, and I’ve been working with in the last couple years as well on a different level on financing them, are the community networks. You can call them municipal networks, open networks, structurally separated networks where there’s more networks riding over a network that somebody else runs, the baseline network. But with community networks, you have permissionless innovation to just bring in what you’d like from the community out. And if we see more regulation that prohibits community networks, I’ve been in international meetings where people said I was trying to stand up a terrorist network. Or, and I thought, wow, okay, that’s a whole new spin on what I’m trying to do. But, and it wasn’t me, and I should say, the expression we used to use was for the community, with the community, by the community. These are organic networks that are built out in places that have little last mile connectivity to no last mile connectivity, or no competitive last mile and middle mile connectivity. So I would posit that when we keep seeing spectrum locked in, when we keep hearing people say, no, you can’t have a different type of network that isn’t an incumbent network, or designed a certain way, there’s, they’re locking out innovation, but they’re also locking out competition, and they’re locking people out of connectivity at a cheaper price. So if we’re talking about some of the core internet values of openness, interoperability, globally connected, and innovation without permission, internet exchange points, community networks, and working with brilliant technical people in a very innovative way, which is not in a university setting at times. I’ve worked with a lot of people in the network operator groups, which I think a lot of people don’t know what those are, the NOGs, the network operator groups around the world are some of the best places where you see technical expertise transferred to other people at what I call the local, local level, where you, if you’re talking about sustainability and building more internet infrastructure, it’s not just people jetting in to say, you do this this way, it’s more of a, how do you work with local people to train local people for local connectivity? So I’m gonna stop there and just also say that, I think Lee had mentioned it, but there are some things that we’re seeing with the DSA and with fair share, which by the way, I saw so much erosion of this fair share issue 20 years ago. People were calling the internet bypass because it was bypassing the traditional telco networks. So for years and years in certain fora, people were locking out the internet. They didn’t want IP-based networks in their countries because it was going around the toll booth of the old telco networks. Now I’m not anti-telco, full disclosure, I did work for a telco years ago, but there’s room for everyone in this equation, and I’m gonna turn it over back to you, Sebastian.

Sébastien Bachollet:
Thank you very much, Jane. Very well articulated, I think it will be useful for the follow-up of this meeting. Now we have two person online. I would like to be sure that, Nii, who will be the next speaker, and Iria, who are available online. And Nii, please take the floor.

Nii Quaynor:
Yes, I’m available.

Sébastien Bachollet:
Go ahead. Thank you, Nhi.

Nii Quaynor:
Thanks very much for inviting me to share some views on the topic. I tend to think internet means fragments, so perhaps the fragmentation is elsewhere. I’ll be speaking to how AfriNIC, Africa’s regional internet registry, was affected by local legislation in Mauritius, and what impact this could have on regional internet registries. There’s sufficient background information at the afriNIC.net website on legal cases, but take a look also at the assisted review. I intend to present that though the legislative context is a factor, there were real other challenges, including RR transfer policies, policy development process attacks, cyber bullying, legal denial of service attacks on the org, and also on individuals who dare speak. Misinformation was peddled, even there was cyber squat of the RR, and so on, community poisoning, and naturally that generated some internal governance challenges surrounding the resources. However, the core, the afriNIC core function of administering resources to operators and end users, according to community developed policies, has so far held up very well. The good news is that the multi-stakeholder approach we practice in our PDP has been resilient, and several draft proposals to hijack resources did not reach consensus. Attempts to gain the participation in the PDP were also thwarted, and a co-chair was recalled for the first time. A brief history will put this in context. Proposal to establish was made in 97, meetings in 98 in Kotonu and AFNO 2000 endorsed the proposal, and afriNIC itself was established around 2004 going to five. It received endorsements and support of several governments and inter-governmental organizations, many African countries, African Union, ICT ministers, OIF, Francophonie, E-Africa Commission, UNECA, UNICEF Task Force, and many others supported. So the need to have it established was unquestioned. The original idea was to establish as an incorporated association, not for gain in South Africa, but eventually consensus was to develop a decentralized organization with headquarters in Mauritius and other operations in South Africa, Egypt, and Ghana. afriNIC was blessed with generous financial resources from the government of South Africa and was actually incubated in CSR in Victoria. And we proceeded to build a headquarters according to the consensus with additional support from the government of Mauritius. And in Mauritius, we ended up establishing as a private company with membership bylaws. For a decade, the shared objective was clear and was to build the foundations of internet in Africa. We lost this shared objective as we went along and interests, personal interests or self-interest began. And this began with the, when afriNIC received the last slash eight of IPv4 in 2011 as per global soft landing policy. The pressures on the common objective started at this time and transfer policies adopted by other regions questioned service versus property. These policies considered the V4 resources as property to LIRs, but not to the end user on whose behalf LIR justified the resources. Given that people are voluntarily adopted to use the identifiers, we have responsibilities to manage them as public goods, not property. There were discussions on changing scope of our function. Some say are a mere bookkeeper versus a registration service agreement to be complied. The need basis policy was questioned out of reaching use of IPs became an issue. Meanwhile, of course, board got involved in our case in resource allocation, which was a no-no. There was misappropriation of legacy V4 by founding staff, which has been addressed and most resources recalled. The consensus we had had weakened and the board got divided resulting in community disagreements. We’ve had three CEOs, 2004, 15 and 19 and none since 2022. In 2021, afriNIC initiated a resource members assisted review according to the RSA. The membership application has compliance requirements where members shall do specific things as well as consequences if member is not compliant. In the review, some members accepted some had forged documents. One member who had received more than slash nine in four locations in 2013, 2014, 2015, 2016 refused to comply saying afriNIC is a bookkeeper, has no rights. But the member signed the RSA, the member in question also has no ASN and no V6. afriNIC followed the RSA and applied the consequences by recalling resources. The member did not seek arbitration, denied afriNIC rights to assess his compliance and started litigations. A commercial dispute therefore had erupted between the member and afriNIC. There were 28 cases with member initiating 26 and afriNIC only two. 18 of cases were completed with 12 set aside, four withdrawn by a member and two null and void or by agreement. There were 11 injunctions, three state of executions, four claims and one contempt. The claims were to amend our register to make the person like a director, whereas he’s not been elected, demanding $1.8 billion, demanding afriNIC on use V4 resources, garnishing the company’s assets, claiming defamation and so on. The cases seem frivolous and designed to overwhelm attention, financial resources and stress governance. This member bullied community members with defamation suits in their countries if they dared mention a name on mailing lists. However, the substantive case on violations of the RSA by a member has not yet been heard. One of other consequent cases damaged board quorum and could not appoint lawyers for court cases to defend afriNIC nor her CEO. A recent court order has appointed an official receiver to hold elections to restore governance at afriNIC. In summary, someone saw a loophole and decided to harass company, attack the weak part of the RRA system. This started with review of compliance. Then we saw abuse of legislation in cumulative attacks in a capital market economy. Member created number of confusion offering alternate RRA based on brokerage and lots of social media misinformation. On the other hand, afriNIC is well positioned in the substance, even injunction on transfer policy has completed as not granted. The multi-stakeholder in the PDP was strong enough to resist abuse of open participation. We have had support from all RRAs, ICANN, ISOC, governments, members and community at large. We just had AIS 2023 organized by AFNOG and afriNIC and hosted by ZADNA in Johannesburg, South Africa. We are organizing community for what to do in the future and we’re privileged to receive video message from VINCEF and Ambassador Amandeep Gill, UN Secretary General Envoy on Technology. During the opening ceremony, the Deputy Minister of Ministry of Communication and Digital Technology, Philippe Mapoulani did not miss words when he called the heist a neocolonial conquest. The V4, V6 and ASN resources are for internet development in Africa. And would be difficult to change the purpose. AfriNIC did not complete the decentralized organization it planned. It could also not get diplomatic protections it had sought. Ironically, afriNIC went to Mauritius for business stability, for a technology company, but now going through litigation that comes from capital market. We should not take internet for granted and protect it for all. Thank you.

Sébastien Bachollet:
Thank you very much, Ni. Very interesting, useful and I am sure that a lot of people in this room and around the world support you and the people who try to solve the case of afriNIC because we all need afriNIC. And now I will give the floor to, I guess it’s Iria. Can you show us, show on the screen and take the floor, please? We, you need to open your mic because you are muted for the moment as I can see, yeah. Yes. Go ahead, thank you.

Iria Puyosa:
Sorry about that. Thank you, thank you, Sebastian. I kind of go back to what Li was saying at the beginning. We had a kind of wave of panics, of backlash, as he said, and now we are facing all those. So we are fighting, we are in a moment where we are listening, we are hearing a lot of voices saying, we need to regulate, we need to regulate fast because something, something seems so serious harm upon us on the internet. I’m kind of concerned about these reactions and this demand for quick response because most of the time these regulations don’t over, under pressure, are kind of ill-designed and they may break the internet. And that was, this is what we are concerned at the moment. I believe that we need to do more research on the issues before us, define precisely what the problems are and how the problems we are trying to solve and not something so big is impossible to understand and assess the trade-offs between different policies and the way in which they are suitable technical implementations for those policies. While we try to add, to regulate too fast, maybe we lost that. In the research I conducted recently in the FRL lab, we were focusing on knowing the internet as a whole, but in messaging apps. While we were trying to add in response to demands on regulating this ad, particularly trying to introduce content moderation in encrypted and messaged apps. That was kind of the call we were listening here in the United States. People were concerned about disinformation and foreign influence operations. People were concerned about notification of terrorists, violent extremists, species that may drive atrocities and child sexual abuse material. Most of the claims had the idea of, this is happening because these messages are encrypted. And so, police discontent and so those hands. And this was, it’s pretty much the, say a generalization, a simplification of the public conversation, but it’s what we’re hearing. So in our research we’re finding, well, it’s not the case. Most of the content we see is in messaged apps or have been posted and they’re also, so it’s useful, helpful for individuals, communities and society. But this thinking about harms is what dominates the public conversation. I will take also, get to the pressure we are seeing over the UK online safety bill and the US keeps online safety ad in which most of the pressure is, we need to find a way to moderate content in encrypted apps because everything running there is negative for the society, it’s harmful for the society. What the part of the war we were doing here at the different lab was trying to. to show how the content there was a variation of content with different purpose and most of them was positive, but also how different ways to deal with this harmful content that exists, we doesn’t need to break encryption, we doesn’t need to establish, impose content moderations would be undermining encryption. So that is the focus of that recent research we’re doing here. In part, our conclusion is one of the issues who sometimes get out of the conversation is one of the issues who sometimes get out of the conversation is how these policies for the flow of data, use of internet-based applications, don’t consider this is a transnational flow of data, this is an extraterritorial Scott or affected platform operations. So maybe one intended regulation in one country would be affected profoundly negatively in other countries in which rural love is known as a norm. So the world we are trying to do is try to find ways for addressing the problems existing in the platform with how breaking the fundamentals of the use, in this case, breaking the encryption. We were focusing a message in advance as we know, see, we go after encryption is not needed in messaged ads sooner rather than later. Some people are going to say encryption is not needed in the internet. We need to get rid of that because there are other harmful contents running in the internet. So this is pretty much what we are looking at at this moment. This is what we see. I see it’s a burial for the internet as a whole while we let this conversation escalate trying to undermine, in this case, encryption. In other case, it could be another value, another core principle of the integrity of the internet as a space for communications. Due to this shared concern we had in this legal, it’s the pressure for a quicker regulation, no well-defined regulation, no well-intended regulation is part of what we are trying to get into the conversation at the moment, trying to find solutions to ensure the respect of human rights, the rule of law, within the principles of necessity and proportionality without attacking the aspects where we consider core for the functioning of internet-based communications and internet integrity.

Sébastien Bachollet:
Thank you very much, Mirja. And now, last but not least, Vint Cerf, please.

Vint Cerf:
First of all, thank you for, okay, well, first of all, thank you for inviting me to join you in this session. I think all the preamble just tells you that many of the times when we try to fashion rules to make the system function in a way that’s safe and secure, we often end up with unexpected side effects. And some of them you’ve just heard from me, for example. I think what’s happened over the course of the last decade or so is that the openness of the internet, which was relatively safe, was a consequence of the people who were using it. In the very early part, the people who used it were the people who were building it. And for the most part, they didn’t have any interest in destroying it or abusing it. They just wanted to make it work. But as time has gone on and as it has become commercially available, then more and more of the world’s population have access to this. And their motivations are not exactly the same as what the original engineering teams had in mind. They’re interested in using the internet for their own purposes. There’s nothing necessarily, apparently wrong with that. I mean, business wants to use the internet in order to improve business, to grow their businesses. But there are people who are on the internet who would like to exploit their ability to amplify their voices, to amplify their messages, to deliver malware, to deliver phishing attacks, or service attacks, whatever else is motivating them. And governments have, over the past decade or so, recognized that these hazards are beginning to arise out of whatever motivations. And so they try to enact laws that will protect people using the internet. And that’s also an understandable motivation. I must admit to you that there are some countries that are more interested in protecting the regime than they are in protecting the citizens. Interestingly enough, and the difficulty is that the same mechanisms that might be used to protect the citizens are also useful for inhibiting legitimate freedom of speech or other kinds of activities that many of us would consider reasonable. And so we now have a conundrum, which is that in our interest in protecting the safety and security and privacy in the internet, we may interfere with our ability to hold parties accountable for the bad behaviors that they exhibit on the network. And that is threading the needle in some sense. Perhaps those of us who live in democracies will have to recognize that the authoritarian governments will in fact use the tools that we would argue are needed to imbue citizens with rights, to inhibit those rights. And I’m not sure that we have the freedom to inhibit that or to prevent that from happening. What that means is that the internet will not be the same everywhere that we look. You see this happening where internets get shut down from time to time because the regime believes that it either is necessary to protect the regime, or they may even believe that it’s necessary to protect citizens from harmful misinformation and disinformation. This leads to a zeal in the legislative corridors to pass laws intended to protect people’s interests. And let me just set aside the laws that are passed to protect the interests of the regimes and just focus on the more democratic environments. What can happen, however, is that the intent of those laws may be laudable, no pun intended, but they may also have side effects. So one possible example is that if the law requires a 24-hour response to the removal of harmful content, first of all, it may turn out to be literally impossible to cite one statistic that you’re all familiar with, the YouTube application at Google receives somewhere between 400 and 500 hours of video per minute uploaded into the system. I have no idea how many hours of video are exported per minute by users who are trying to download content. It’s not possible for that content to be vetted manually. We don’t have enough people to do that. And so we rely on technical means, machine learning mechanisms, which we all know are imperfect. And so not only will they not work 100% of the time, but they won’t catch 100% of the problems. And they may catch things that aren’t problems but look like problems because the algorithms don’t know the difference. Asking a company the size of Google to do something is one thing, but asking a small and medium-sized enterprise to carry out the same kind of filtering may inhibit that small enterprise from ever existing, let alone growing. So we have these undesirable side effects of well-intended laws that may prevent us from building the internet that we all would like to have. We also, someone mentioned earlier, I guess it was Jane, that there were laws that were passed, in the US anyway, telcos that didn’t want competition from community networks were able to get laws passed in the States to inhibit the building of community networks on the grounds that if a municipality wanted to build a network, it was the government interfering with freedom, competing with private enterprise that ignored the fact that a typical arrangement would be that the community would actually have a contract with a private entity to go build the municipal network and operate it, but that was sort of ignored, and then zeal to argue the other case. So I’m actually quite worried that these are not simple problems to solve, and that at the Internet Governance Forum, where we’ve spent years literally contemplating some of these problems, that we have a kind of responsibility to try to help the legislators and the regulators come to reasonable conclusions about protecting human interests, while at the same time, recognizing that there are responsibilities associated with the use of the Internet. In a previous session, it occurred to me to remind people about the social contract, and Rousseau’s observation that along with safety and security, which people are looking for in their social environment, that they have obligations not to abuse their freedoms. My freedom to punch somebody in the nose kind of stops about one centimeter away from Sebastian’s nose, and my freedom existed up to that point, but as soon as I complete the action now, I have now violated his rights. So we have still some work to do, and I think especially in the IGF context, we have an obligation to help the legislators and the regulators to find a way forward that preserves as much of the utility and value of the Internet as possible, while at the same time, protecting people from harm. One particular thing which we valued over time, I think, is anonymous use of the Internet. You shouldn’t have to be known to just do a Google search, for example. However, if you are going to use the Internet for harmful purposes, eventually, I think we would generally agree we would want those parties to be identified. Well, this gets to the notion of accountability. Many of the laws that are being passed are attempts by the legislators to articulate how to hold parties accountable for their behavior, whether that’s a private sector entity or an individual or a whole country. In order to hold parties accountable, you have to be able to identify them. So now we have a tension between privacy and the ability to reveal a party in the event that we believe that party is misbehaving. There is currently, as many of you know, an attempt to draft a cybercrime treaty, and there is a considerable amount of debate deciding on what’s a cybercrime. In some cases, you could argue that every crime that already exists can also use a computer to execute the crime. Therefore, all crimes must be cybercrimes. That’s not a good syllogism, and some of us are arguing that we should be more cautious about the treaty being focused specifically on things that you could not do without the use of a computer in the network. That’s still in debate, so we haven’t completed that yet. So my bottom line on all of this is that in our attempt to make the internet a safe and secure environment, we are going to have to accept that some of the principles that we enjoyed in the early days of the internet may no longer be fully attainable. And in particular, I would argue that accountability forces us into making parties identifiable at need. And I will offer just one very weak analogy, which some of you heard before, I suspect. When you get a license plate on the car, it’s usually just a random collection of letters and numbers, and it looks like gobbledygook to us. But there are parties who have the authority to look that license plate up and identify the owner of the car, which, by the way, may not be the driver of the car, and that’s also an important observation. But this piercing of the veil of anonymity or pseudonymity may turn out to be essential to introducing accountability into the system. Some of you have also heard my argument that agency is another element of all this. We need to provide agency to individuals, corporations, and even countries to protect their interests, which might mean, for example, the use of end-to-end cryptography in order to maintain confidentiality. And arguments are often made that end-to-end cryptography is harmful because it means it’s harder for law enforcement to detect that there is misbehavior on the network. And I sort of draw the line there in arguing that end-to-end cryptography for the protection of confidentiality is extremely important. The idea that you have a backdoor into the cryptographic system almost certainly guarantees eventually that information will be released, and then no one will have any confidentiality at all. Last point, people who are focused on the anonymous use of the internet may sometimes forget that strong authentication of your identity might turn out to be helpful to you, and that you should be adopting mechanisms that make it hard for other people to pretend to be you, because if it’s too easy for them to do that, they may, in fact, take actions on your behalf that you didn’t authorize. And so strong authentication might, I hope, become a norm in the system where it’s needed in order to make sure that you protect yourself against other people taking actions that you didn’t authorize. So, Mr. Chairman, I’ll stop there, but I hope this feeds a little bit of the thinking for the debate which should follow.

Sébastien Bachollet:
Thank you very much, Vint, Sebastien Bachelier speaking. Just, I would like to pick up one of your points. It’s when you remind us that IGF could be useful, and the exchange we have here and in the other room are not just to talk, but also, it’s to talk but to exchange between various stakeholder, and that’s an important point here also today. Now I would like to open the floor for question. You have a mic in the middle of the room. Just queue there and talk, give comments or question, and if there is the same online, please do it.

Alejandro Pisanty:
Sebastien Alejandro Pisanti here, moderator online. There’s Deborah Allen Rogers hand up as well.

Sébastien Bachollet:
Okay, Deborah, go ahead, please, thank you.

Alejandro Pisanty:
Oh, Deborah, you can ask your question.

Sébastien Bachollet:
If you can open your mic, and eventually your camera too will be great, like that we can see you for the moment your microphone is closed, as I can see.

Vint Cerf:
How many engineers does it take to turn on a microphone?

Alejandro Pisanty:
Maybe only one, but the system may be so unresponsive.

Sébastien Bachollet:
Okay, maybe, okay, maybe, Alessandro, you may be willing to start and we will try to solve the problem with Deborah, please. Thank you.

Alejandro Pisanty:
Thank you, I’ll make a very brief comment right now. The work of a dynamic coalition on core internet values is concerned with the way different things and this year it’s regulations mostly may impinge on these core values, assuming of course that they are mostly the technical principles with which the internet was built. And What we see from some of the regulation proposals is that they may actually do away or damage seriously things like the universality of reach of the internet. They may be achieved by reducing interoperability. I’m very concerned, for example, this does not mean not to do it, but find a way to do it with what Fint has said, for example, for stronger authentication or for stronger identification. We may find ourselves needing to add devices to the system or some governments or banks or such entities may decide that you need to have an extra device, maybe also on their network to do this authentication that open standards like PKI will not work. So that’s the kind of concern that we have to look into, to extract a list of these things for now, and see how they can be made to work or research over the next months. These are key points that we’re looking at, but I’ll leave the floor to other participants. Deborah says it’s not allowing her to open and I’m already trying to unmute.

Deborah Allen Rogers:
Hello, hello.

Alejandro Pisanty:
There you are.

Deborah Allen Rogers:
I’m here, but I would like to be on camera, but you all see my face in the picture. So I just want to say hello to everyone and thank you very much. I will lower my hand also. And what I wanted to say was a couple of things. I’m from New York City. I live in The Hague. My name is Deborah Allen Rogers, as you see, and I have a digital fluency lab here called Find Out Why. So I wanted to direct my question. Oh, here we go. It looks like I can start my video. Okay. Hello, everyone. Okay. So hello from The Hague. I wanted to direct my question at Jane and also at Vint. Anyone else who might want to join in, but in particular, the two of you. One is the father of the internet, and secondly, as a woman who just gave us a lot of really good intel about NOGs, for example. Do either of you or do anyone on the panel spend time working directly with Finland and Estonia on e-governance? I do some work with them and they’ve developed these models and they’ve been putting them in place for a good 20 years for e-governance and have answers to many of the questions I see that we struggle with here in Europe and that we struggle with in the United States. And the last point I’ll make is because they stay sort of under the radar screen, oftentimes their designs are sort of overlooked, I’ve noticed, in all the work that I do with various European internet forums, et cetera. So I was in D.C. this summer and we talked a lot about it at the Trans-Atlantic Partnership meetings, but I did want to raise it in this venue as well about e-governance in Estonia and in Finland, and Exroit in particular. Thank you for taking my question.

Sébastien Bachollet:
Thank you, Deborah. Questions in the room? We can start with a few questions and then – but it’s up to you. If you want, we can start, Deborah, if you want to take the floor and give some answer and then I will ask Vint also and the other participants. The question does relate to the same thing. Then go ahead.

Audience:
Do you hear me? If I have the microphone, okay. So Martin Bottemann, and indeed the big thing I’m struggling with is that this internet needs to be more and more secure, more and more reliable. We should be able to rely on it and we are working on that. Now one of the elements is indeed identification and would you consider, for instance, anonymity as a core internet value or is that something different? How can we get to a kind of standard where you combine security with anonymity via a kind of trusted service or something? Is that something where we can go? And I think it very much complements to Alejandro’s concern and what the lady just said, identity as used in these governments.

Sébastien Bachollet:
Thank you. Vint, go ahead.

Vint Cerf:
I actually would like to respond to that specifically. For a long time, I had the view that anonymity was a right that we should have and that you should be able to use the internet without identifying yourself. What we discovered, at least what I believe we’ve discovered, is that anonymity creates opportunity for really severe and bad behavior. If people think that there are no consequences for their harmful behaviors on the net, then they will continue to execute those bad behaviors. And so absolute anonymity is, in my view, not necessarily, should not be a core value. I’m surprised at my change in position, but having seen too much bad behavior that’s shielded by anonymity, I now believe that accountability is more important. That doesn’t mean that you have to identify yourself to use all of the internet’s features. That’s not what I’m arguing. But I am saying that we should tolerate mechanisms that allow for discovery. And while I say that, I absolutely understand that viewing this through the lens of the democratic society versus an authoritarian one, you get very different answers from the standpoint of an authoritarian government. The ability to identify parties is harmful to that party’s interests. And yet, if we don’t allow for that kind of discovery, then all of our interests are harmed by the bad behaviors that are not accountable and therefore difficult to inhibit. You could say, well, can’t we inhibit the bad behaviors just by using technology? Can’t we use machine learning to filter all the bad stuff out? And the answer is, as far as we can tell, that doesn’t work. Either it doesn’t work because it fails to filter, or it filters the wrong thing and therefore people’s rights are harmed because of that. And so this is going to be a relatively imperfect outcome, but I am persuaded at the moment that protecting people’s interests and protecting people from harm is really important. We can say, though, that there are certain actions where we recognize that anonymity is important because if you’re identifiable, then there could be really harmful side effects. Whistleblowing being a good example of that. But I would argue with you that even in the whistleblowing case, the most traditional means of handling that are that a trusted party receives the blown whistle and may in fact need to know who is blowing the whistle, but is obligated to keep that party’s identity anonymous. And that’s one of the ways in which you thread the needle between anonymity and identifiability and accountability. So I’d be very interested, of course, if people have arguments against this proposition that pure anonymity should not be an absolute core value anymore.

Sébastien Bachollet:
Thank you.

Alejandro Pisanty:
Can I pick up on that, Sebastian, for a second?

Sébastien Bachollet:
Go ahead. Yeah, OK. Go ahead.

Alejandro Pisanty:
Very briefly. And to further the point that was made by Deborah as well, how big an architectural change would this be? We have assumed for many years that the only identifier that the internet gives you, that’s proper from the internet, is the IP address. And everything else comes from the edge. So how big of an architectural change would that be? And then, of course, how scalable would that be? The case of Estonia, I think, is very brilliant, but has a limitation of scale in the way you can establish trust within a small society or going further out. Sorry. I don’t want to extend this question. Thank you.

Vint Cerf:
Could I respond on the Estonian side? Because the one thing which impresses me about Estonia is that 100% of the population is registered for strong authentication, 100%. They can do that in part because it’s a million and a half people. When you get to 300 million or 600 million or 1.4 billion, it gets harder. Estonia has introduced the Aadhaar system, which is attempting to strongly authenticate parties for their benefit. But everyone sitting in the room and those online can also recognize the potential risk factors of being able to identify people by biological metrics and things like that. You can see how that can be abused as well. So this is a peculiar tension that I think is not 100% resolvable. But as I say, I believe that accountability may turn out to be far more important than absolute anonymity.

Sébastien Bachollet:
Thank you. Jane. And then I will go to Ghana IGF Remote Hub and then back to Debra. Jane, please.

Jane R. Coffin:
I’ll be very brief. Debra, we’ve worked with a variety of governments around the world to work with a variety of governments around the world. But if there are some really great practices that we can glean from you, that would be exciting. I wanted to pick up really quickly on a point that Vint made about the IGF having an obligation. And I think, Vint, one of the points I want to extrapolate from that is to help find a way forward with governments to have inclusive, multi-stakeholder inclusion in policymaking and regulation. We start to exclude civil society, the technical community, academia. It’s very much not going to lead to a better regulatory and policy regime and environment. And if we don’t, the law of unintended consequences may prevail here where we may force centralization a bit more. Some governments may force centralization in their lawmaking if they aren’t including some of the smaller networks, the other instances like internet exchange points and others in the conversation and lock out multi-stakeholder inclusion. So I just wanted to put that out there before we ended.

Vint Cerf:
So, Jane, since this is also supposed to be entertainment for you, so now we’ll have this little debate back and forth. You’re not saying, I hope, or are you trying to argue that the point I’m making, that absolute anonymity may no longer be a core value in the interests of the people who use the internet? Your argument about governments and multi-stakeholder policymaking I don’t understand is an argument against my proposition. It is an argument for the utility of multi-stakeholder perspective in the formulation of policy. And I hope that what I’ve been saying is not unintentionally misinterpreted as against multi-stakeholderism. I’m a complete fan of that, believe that it should be a part of every government’s normal practices. So I see these as two very distinct things. That’s also a correct interpretation of what you were saying. Okay.

Jane R. Coffin:
I think you’re helping us point out that the obligation of the IGF, and it’s the uniqueness of the multi-stakeholder model in the IGF, to work with governments to make sure that whether it’s a discussion on anonymity or interoperability and more networks being interconnected openly is that that’s more robust policymaking regulation comes through that multi-stakeholder discussion.

Vint Cerf:
So, in fact, there’s a simultaneous obligation, I think, of members of the IGF who care about these things to engage with governments. We need to help the governments appreciate why the IGF is so important to them as they try to formulate policy lead.

Lee Rainie:
The striking thing for so many years about tech policy stuff was that it was pre-partisan, both here and in Europe in particular. The dynamic we’re talking about now, though, has hints and allegations of being swept into partisan polarization. I don’t think there’s the kind of consensus now that there might have been five or six years ago in the parties about whether anonymity shouldn’t be a core value. You see signs of it in the populist mainstream party dynamics of Europe as well. So, this is all, again, to the theme of the day, this is all organic and moving and fluid and it’s hard to settle things in that environment.

Sébastien Bachollet:
Okay. Let’s go back to the participants and, Gana, please. I hope that we can hear you. I know that we can see you, at least in my computer, but go ahead, please, Gana. And then, Deborah. And then, I will go to the room and then to the next speakers online. Thank you.

Joseph:
Thank you very much. My name is Commuter Joseph, speaking from Pentecost University, Ghana. Whilst we look at the core values of the Internet, I want to ask this question that with the VPN, virtual private networks, people use these networks to bypass restrictions on the Internet to fraud and infringe on sensitive data of others. I want to ask, what can be done to protect individual content on the Internet whilst we look at, I mean, what can the government do or what can we do to help protect the content of individuals on the Internet? Thank you very much.

Sébastien Bachollet:
Thank you, Gana. Yeah, go ahead, Vint.

Vint Cerf:
So, I think that I’ve reached the conclusion that cryptography is our friend in all of this. For example, there are many places that will insist that information about their citizens must be kept in the geophysical boundary of the country in the belief, or at least they make the argument that somehow that makes it safer. In some cases, the motivation behind that is to demand access to the information from the parties who hold the information within the geopolitical boundary of that government. We hear the term data sovereignty, for example, to argue that data about citizens shouldn’t leave the country. I will make the argument that when you insist on that, you actually lose reliability. At Google, for example, we replicate data across our data centers and we also encrypt it so that no matter where it goes, when it’s addressed, it’s encrypted. When it’s transmitted, it’s encrypted. We even have a situation or a provision for the possibility that the users hold the keys to the data and so we don’t, no matter where we put it, it is under the control of the users. My argument here would be that transborder data flows and encryption allow you to place data anywhere on the internet and protect it as long as you manage your keys properly. That is a huge challenge because key management is a non-trivial exercise. In fact, it’s one of the reasons that I did not push public key crypto into the internet for a while because while it was being developed, the people who were doing the development were graduate students. They’re not the first category of people that I would rely on for high quality key management. It’s not that they’re stupid or something. It’s just that they get distracted by silly things like PhD dissertations and final exams. So today, we have an obligation to help people manage keys and cryptography to protect their interests and to help them strongly authenticate themselves. So I’m of the view that that’s the correct way to handle data protection and not to argue that its physical location is the ideal protection mechanism, but rather cryptography.

Sébastien Bachollet:
Thank you. Deborah, please. Maybe I need to do something. Wait a second. Yeah. I guess.

Deborah Allen Rogers:
There we go. Here I am. Okay. Thank you so much for that. That’s a quotable quote, excellent. Cryptography is our friend for sure. And to add to the question that was just asked about how do we protect human rights or personal rights or personal privacy, cryptography is our friend and thinking about all the different ways in which it can be scaled. This is what I wanted to say about the point you made about a million point seven users or something like that in Estonia. And the cultural sort of, I think the cultural context of that and the idea that now that we’re on this online, offline, no line world, scale is such a reference. It’s such a, it’s changing this concept of what we can do with scale at the push of a button. And so I speak also to the CEO of XRoad who is based in Finland and he talks about a different cultural reference in Finland, one that’s a lot more conservative than the one that was in Estonia 20 years building their brand new internet system. and e-governance for their banking and their voting and etc. So I just want to make this point. I was a clothing designer in the 80s and 90s when the entire world, existing through a pandemic called AIDS, moving into global manufacturing, all going to China. This is not, and I’m in New York at 9-11 of course, this is not the first time I’ve been through these sorts of drastic transitions. As you know, Vint, I mean I hear George Carlin’s voice somewhere in the background of your voice as well, talking about, and for anyone listening please look up George Carlin, you’ll see why. So thank you about the cryptography as our friend commented and please can, if you all want to speak about or at least think about this, rethinking about this idea of scale and smaller societies that are doing things. Because test samples are small and it’s scaling a functional test sample is what works. And so we have to think about these societies. I’m living in a very highly governed, functional society here in the Netherlands now for three years. It’s different than living in other cultures that are not highly functional at this moment. I say that in reference to something that you mentioned, Lee. So I don’t want to actually go on record as mentioning which society, but non-functional and functional looks very different and I think the functionality is the point, not the size of the scale model. Okay, thank you for listening.

Sébastien Bachollet:
Thank you Deborah. We have 12 minutes to go. We have one question in the room and one speaker online and Alessandro Pisanti will read some comments online too. Therefore, let’s go to the room speaker, please.

Audience:
Roger Dingledine to our project. So this word anonymity is one that I think about a lot. I actually find the word anonymity to be confusing when people are thinking about it. I usually use the word communications metadata security or a securing communications metadata. That doesn’t trip very well off the tongue, does it? Fair enough, but the reason why I mentioned this is thinking about one of the ways that we’ve managed to thread the needle to manage both of these is looking at it from different layers. So if you tell people Tor is an anonymity tool, then they say, oh well I guess I can’t use Facebook. But it makes perfect sense to log into Facebook over Tor. You’re getting to choose what of your communications metadata you want to reveal. So by default, when you’re reaching them, you don’t automatically blurt out your identity. You then get to choose what you tell them. And Facebook doesn’t care where you are, they care who you are. And what they mean by that is Facebook level, Facebook application layer of who you are. So you log into Facebook and from there at the platform level, there’s a completely separate question about anonymity versus accountability. Do you need your real name? And so on. But the separating those means that at the network layer, you don’t automatically identify yourself. Yet, as you say, it might be beneficial in a societal way or a platform way or a community way to choose to identify yourself at a different layer. So that layering mechanism is one. I don’t want to say that it solves everything, but I think it helps us get closer to the answer. Of course, we don’t want anonymity for everybody all the time, no matter what. But we want to give people the choice of who they tell about them.

Vint Cerf:
I think that’s a really good point and I appreciate the layering argument, which makes good sense to me. You’ll notice that other elements of the internet design, especially the domain name system, has introduced mechanisms like DOH and DOT and so on in order to protect information at certain layers in the architecture while revealing it at others. And your point about choice is very well taken.

Sébastien Bachollet:
Thank you, Vin, and tyank you for your question. Shiva, please, and then Alessandro. I will give you the floor. Okay, go ahead, Shiva.

Shiva:
Can you hear me? Yes. Okay. Jane was talking about the internet exchange points and the neutrality, the intended neutrality. As far as I know, some of the internet exchange points have a commercial business model. And how far are they away from the intended neutrality? And also, if an internet exchange point can theoretically be non-neutral, can they also become tools in the hands of governments, good and bad governments, to indirectly regulate the internet or to control the internet in a certain way? And the positive question on internet exchange point is, is there any design to think of an internet exchange point for interplanetary networks, probably with a peculiar bridge to give a one-way connectivity to the global internet? Thank you.

Jane R. Coffin:
So, Shiva, that’s a, I’m going to start with your last point about internet exchange points and interplanetary, and I can feel Vint, too, right next to me. Because I think, are you still the chair of the interplanetary working group or the… No, I’m not. I’m not the chairman. I’m a member of the board, but I participate with them, yes. So, you should check out a session on Thursday that Joanna Kulisa will be running with respect to, I think that’s data governance, but in any event, there’s a paper that Joanna Kulisa and Berna Gur have written, and it was funded by the Internet Society Foundation. I’m not, you know, this isn’t an advertisement for this foundation, even though I worked at ISOC before. But the paper they put together, and another paper put together by the Internet Society by Dan York, who also will be on the panel on Thursday, talked about the potential for exchange points in space with LEOs, Low Earth Orbiting Satellites. Sorry, I should be clearer. It could be a very interesting thing, and I, and then the question is, who can participate? You know, who’s running the network as far as the LEO constellation itself? Is there neutrality if it’s only one entity, company, that can control all the traffic exchange, or is it only their traffic? It’s very complicated right now with cross-border connectivity to potential. If you have a transmission going down into one country that beams up to another satellite that’s going to beam down into another country, the whole concept of negotiating cross-border connectivity issues is complicated wildly. But I’ll stop there for a minute. Shiva then turned to Vint on the interplanetary.

Vint Cerf:
Well, let me, just setting aside the spatial notion for a moment. Internet exchange points on the ground are really powerful tools because they allow for connectivity, efficient kinds of connectivity among networks. But here’s a scary thought. Suppose that you’re in a regime where the government runs the exchange points, and it is required that all traffic between networks go through government-operated exchange points, which might lead to surveillance of the kind that you didn’t want. That takes us back to cryptography being your friend, and once again you can imagine regimes that don’t want, you know, encrypted traffic to be running through the exchange points. With regard to putting exchange points or data centers in space, one of the observations I would make is that those typically require maintenance, and so we may have some difficulty getting people up there to do maintenance. I’m sure everybody in this room does understand and appreciate that the Internet doesn’t run itself. There are millions of people who, as a daily job, help keep the Internet functioning. Otherwise it would break pretty quickly, and I wish that were not the case. I wish that our designs have been even more robust, but to be quite frank, they require a lot of attention. And Shiva, to

Jane R. Coffin:
quickly just answer your question, I was referring to the the IXs that are the neutral bottom-up, you know, not managed by governments, but to Vince’s point, there are exchanges where traffic is monitored. That’s just required by the countries, and so that’s something that does happen, and I’m with Vince on the encryption, the crypto side. Not cryptocurrency, but encryption. I don’t really care about cryptocurrencies right now. I probably should in the future, but as far as the commercial IXs, that’s a different instantiation of exchange point, and they serve a certain purpose, but they they’re not the bottom-up neutral exchanges that I was meant to be more clear about.

Sébastien Bachollet:
Okay, thank you very much. Now I give the floor to Alessandro. He will give us the last feedback online, and then I will give one minute to each of the five speakers to conclude, because we will be late in any case. Go ahead, Alessandro, please. Thank you.

Alejandro Pisanty:
Thank you, Sebastian. I’m not going to speak for myself right now. I’m going to read two comments. One comes from Iria on the chat. She says, choosing to identify is different from being forced to reveal your personal identification data in order to access the Internet or an app, and I side totally with that statement. And the other one comes from the Abuja IGF remote hub in Abuja, Nigeria, I think. Nhi mentioned that AFRINIC is registered as a private entity in Mauritius. Hasn’t this status contributed to the barrage of court cases the regional RIR now faces? While a good number of technical organizations are registered as non-profits, shouldn’t regional and global technical organizations that govern the Internet be accorded Internet governmental organization status? Those are the two points from online.

Sébastien Bachollet:
Thank you very much. We need to run to another meeting. Therefore, may I suggest that if Nhi is still online, I want to take one minute for the microphone.

Nii Quaynor:
The answer to the question of the nature of a registration private company by laws, I think the answer is no, because it has really no bearing. A commercial dispute can occur between non-profit and members, so I don’t see that as the right thing. This is a case of some member who is violating rules and is refusing to be disciplined, and is beginning to abuse the legal system by generating a barrage of court cases, at the same time trying to break into people’s account by offering them money, and so on. So it’s just a bad case that needs to be dealt with as such, because it tried to invade the policy process, it failed, it tried to force a co-chair, and the co-chair got recalled. If you look at all these things, one organization, why generate 20-something cases in a year? If you really are doing proper business, why would you have so many IPv4 addresses, and no network number, no ASN, no v6? So it’s obvious what the game is. It’s about the interest of hijacking numbers out somewhere else to use, and that one, I don’t think Africa or the world would want to see that.

Sébastien Bachollet:
Yeah. Thank you, Nii. We have less than one minute per person. Iria, please, two words of conclusion. Sorry for that.

Iria Puyosa:
Yeah. Basically, I think our consensus should be we need technical expertise in every discussion about policy. So we need to have people who know how to solve the problems, implement the solutions, and we also need the input from civil society, understanding the human rights before moving up to regulation. Otherwise, we may end with bigger problems, or a different set of problems than the ones we are trying to solve. Thank you. Nii, please.

Vint Cerf:
Just to set the right tone for the ending of this, when we’ve asked in global surveys, given all the problems that you are now talking about, we’re asking questions about in our surveys, how hard would it be and how willing would you be to give up the Internet? And there’s almost universal, under no circumstances would I give it up. So we’ve done a pretty good job by the consumer behavior and consumer sentiment.

Jane R. Coffin:
Thank you. Jane. Don’t discount your voice in helping keep the Internet open, globally connected, secure, and trustworthy. Make sure the multi-stakeholder model and the IGF continue. Thank you very much.

Sébastien Bachollet:
And I want to give the last word to Olivier Crepin-Leblond. If I am here, it’s because he’s not here, it would have been much better than me to run this meeting. But Olivier, go ahead.

Olivier Crepin-Leblond:
The host has unmuted me. Thank you very much, Sebastien. Thank you to everyone who has participated as a panelist and also as a participant in this discussion. The Dynamic Coalition has discussions throughout the year. The work is ongoing. If you’re interested in joining the Dynamic Coalition, you can go onto the Internet Governance Forum website, go into intersessional work where the Dynamic Coalitions are all listed, click on the one on co-internet values, and you can join the mailing list. There’s no membership fee or anything like that, but we do take our work very seriously. It’s extremely important. We will make a report out of this, of today’s session, and of course, it will be taken into account in the IGF messages for Kyoto. So thanks very much, and thanks, of course, to all those people that have helped with organizing this session.

Sébastien Bachollet:
Thank you very much, Olivier, Alessandro, and all the speakers. The meeting is closed now. Bye-bye. Bye, and thanks, everybody. I’m going to ambush you now. I couldn’t come up with a good way to say this on the mic, but I was really curious. I think Sue’s work as well, and Jane, which she was mentioning as well, in terms of, like, your generational model, your fourth generation model, and Jane, you were talking about the work that you were doing out in rural areas, but one of the things that we talk about in terms of the value of the interoperability, and then another challenge that has come up over the past, like, three years, at least in the U.S. and Canadian, that we get everything… … … … … … … … … … … … …

Alejandro Pisanty

Speech speed

156 words per minute

Speech length

579 words

Speech time

222 secs

Audience

Speech speed

156 words per minute

Speech length

475 words

Speech time

182 secs

Deborah Allen Rogers

Speech speed

197 words per minute

Speech length

828 words

Speech time

253 secs

Iria Puyosa

Speech speed

151 words per minute

Speech length

1039 words

Speech time

412 secs

Jane R. Coffin

Speech speed

192 words per minute

Speech length

1992 words

Speech time

624 secs

Joseph

Speech speed

137 words per minute

Speech length

110 words

Speech time

48 secs

Lee Rainie

Speech speed

185 words per minute

Speech length

2170 words

Speech time

704 secs

Nii Quaynor

Speech speed

152 words per minute

Speech length

1608 words

Speech time

635 secs

Olivier Crepin-Leblond

Speech speed

172 words per minute

Speech length

157 words

Speech time

55 secs

Shiva

Speech speed

135 words per minute

Speech length

132 words

Speech time

59 secs

Sébastien Bachollet

Speech speed

126 words per minute

Speech length

1648 words

Speech time

785 secs

Vint Cerf

Speech speed

167 words per minute

Speech length

3723 words

Speech time

1337 secs

DC-Gender Disability, Gender, and Digital Self-Determination | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Gunela Astbrink

The panel discussion focused on the topics of digital self-determination and accessibility, highlighting the importance of empowering individuals and communities to have control over their digital data. Digital self-determination was described as the need to reconsider how individuals and communities can have autonomy over their digital selves. The panel acknowledged that society is still trying to understand the relationship between our lives and the technologies we use.

The discussion emphasized the need to address the digital divide, particularly for marginalized groups such as women, queer, trans persons, and those with disabilities. The panel aimed to make digital self-determination a reality for these groups by shedding light on their unique challenges and perspectives. Feminist perspectives played a central role in the discussion, with a specific focus on women, queer, and trans persons with disabilities.

One key argument made during the panel was that digital tools should be designed with accessibility in mind. It was stated that as a disability community, their motto is “nothing about us without us,” which means that persons with disabilities should be included in the development processes and community discussions. The panel stressed the need for all digital tools to be accessible and usable for all individuals, regardless of their disabilities.

Additionally, the importance of education and empowerment for people with disabilities in the digital sphere was emphasized. The panel shared a story of a determined young woman from Malawi who, despite having a disability and coming from a poor family, managed to study IT. Her education not only empowered her but also enabled her to tutor other students and utilize digital tools, even with her physical limitations. This example demonstrated the transformative power of education in enabling individuals with disabilities to actively participate online.

The panel also raised concerns about privacy and security, particularly for people with disabilities. They acknowledged the potential privacy and security issues that individuals with disabilities, especially those with visual impairments, might face. The need to ensure the privacy and security of these individuals was underscored, emphasizing the importance of safeguarding their personal information and digital presence.

In conclusion, the panel discussion on digital self-determination and accessibility provided valuable insights into the challenges faced by marginalized groups, particularly women, queer, trans persons, and individuals with disabilities. It stressed the importance of designing digital tools with accessibility in mind and promoting education and empowerment to enable active online participation for people with disabilities. Moreover, the panel emphasized the need to ensure privacy and security for individuals with disabilities, recognizing the unique risks they may encounter. Ultimately, the panel highlighted the significance of integrating inclusivity and accessibility into all aspects of the digital realm.

Judy Okite

The analysis emphasises the significance of accessibility for individuals with disabilities, both in physical and online spaces. It reveals that the evaluation of government websites for accessibility showed that 20% of the content remains inaccessible, indicating a pressing need for improvement. This highlights the lack of inclusivity and the barriers faced by persons with disabilities when accessing online information and services.

Furthermore, the analysis argues that individuals with disabilities must be actively involved in the process of creating accessible spaces and developing inclusive technology. It references Judy Okite’s experience in Dar es Salaam, where insufficient provisions for accessibility were observed. This illustrates the importance of including the perspectives and needs of persons with disabilities in the planning and design of physical environments to ensure that all individuals have equal access and opportunities.

In addition to physical spaces, the analysis also stresses the need for awareness and empowerment about rights among individuals with disabilities. Judy Okite’s assertion of her rights for accessible facilities during her stay in Dar es Salaam highlights the importance of advocating for and asserting these rights. The analysis further states that persons with disabilities should have a say in determining what works for them or not, enhancing their autonomy and agency in decision-making processes.

Overall, the analysis stresses the need for greater attention to accessibility in both physical and online spaces. The evaluation of government websites and Judy Okite’s experiences serve as evidence of the existing barriers and the urgent need for improvement. It argues that involving individuals with disabilities in the design and development of accessible spaces and technology, as well as promoting awareness and empowerment about their rights, can lead to a more inclusive society.

Audience

The implementation of certain features, specifically Zoom’s automatic captions, has had negative consequences for individuals with disabilities, particularly those who are deaf or hard-of-hearing. These automatic captions, intended to enhance accessibility, have instead led to confusion and disempowerment. This is due to the overlapping of captions in multiple languages, which obstructs the reliance on captions and lip-reading that these individuals heavily depend upon.

In order to avoid such detrimental effects, it is argued that technology companies should collaborate closely with individuals with disabilities and conduct comprehensive user research prior to implementing new features. By involving the very users who will be utilizing these features, technology companies can gain valuable insights that will result in more inclusive technology. This call for collaboration and user research is further supported by the incident involving Zoom, which serves as an example of the negative consequences that can arise from a lack of proper user research.

Furthermore, the importance of inclusive technology development is emphasized as a means to reduce inequalities and enhance accessibility. It is asserted that by working closely with intended users, technology companies can create technology that caters to the diverse needs of individuals with disabilities. This collaborative approach ultimately leads to more inclusive technology that empowers individuals rather than inhibiting their capabilities.

To conclude, the implementation of certain features, such as Zoom’s automatic captions, has had unintended negative consequences for individuals with disabilities. To address and prevent such issues, it is crucial for technology companies to engage in comprehensive user research and collaborate closely with individuals with disabilities throughout the development process. By doing so, technology companies can create technology that is truly inclusive and empowers individuals with disabilities.

Nirmita Narasimhan

The analysis highlights the importance of policies in ensuring compliance with accessibility standards. Countries with clear policies are more likely to effectively implement accessibility measures, as policies provide guidelines on what needs to be done, how to do it, and where it should be implemented. This is seen as a positive factor in promoting accessibility. The analysis also advocates for the creation and implementation of policies in countries where they do not exist, as well as the strengthening of existing policies to promote equal access to rights and opportunities for all individuals, including those with disabilities. While many countries have incorporated the Convention on the Rights of Persons with Disabilities (CRPD) into their legislation, the analysis suggests the need for the development of domain-specific policies to address specific accessibility issues in various domains. Different strategies for advocacy are required in different situations, as evidenced in the context of India. Active involvement of persons with disabilities in advocacy and policy-making processes is emphasized, as their perspectives should be adequately represented. The analysis also stresses the need for mainstream products to be universally designed, taking into consideration varying user needs and abilities. A user-centric approach in product design and enhancement is deemed essential to improve accessibility. Overall, the analysis underscores the significance of policies, the involvement of persons with disabilities, and the user-centric approach in achieving accessibility goals.

Debarati Das

During the analysis, several significant points were raised by the speakers. A central topic of discussion was the concept of digital self-determination, which highlights the need to understand who we are as digital beings as our digital footprints continue to grow. This evolving concept addresses critical questions surrounding the ownership and control of our data in cyberspace, affirming that a person’s data is an extension of themselves. It emphasises the importance of considering the rights and autonomy of individuals in the digital realm.

One key insight that emerged from the analysis was the significance of examining the experiences of individuals with disabilities in relation to digital self-determination. It was observed that digital spaces and decisions driven by data can greatly impact the autonomy and agency of individuals with disabilities. Therefore, there is an urgent need to explore how individuals can exercise control over their digital identities and have autonomy over their digital selves. By unpacking digital self-determination through the lens of the experiences of persons with disabilities, efforts can be made to reduce inequalities and promote inclusivity in the digital world.

Another important point discussed was the value of Design Beku and its principles of Design Justice in the field of design. Design Beku, a design and digital collective founded by Padmini Ray Murray, advocates for designing with communities, as opposed to designing for them. This approach aligns with the principles of design justice, which include ethics of care, feminist values, participation, and co-creation. By involving communities in the design process, Design Beku strives to create more inclusive and equitable solutions that address the diverse needs of different groups. This approach contributes to achieving the Sustainable Development Goals related to industry, innovation, infrastructure, reduced inequalities, and gender equality.

In conclusion, the analysis underscored the importance of digital self-determination, specifically in understanding our digital identities and asserting control over our data. It emphasized the significance of considering the experiences of individuals with disabilities to promote autonomy and agency in digital spaces. Additionally, the value of Design Beku and its Design Justice principles in advocating for inclusive and community-centered design practices was highlighted. These discussions provide valuable insights for addressing the challenges and opportunities associated with industry, innovation, infrastructure, reduced inequalities, and gender equality in the digital age.

Manique Gunaratne

Technology plays a crucial role in enabling individuals with disabilities to participate equally in society. Assistive devices and technologies act as a bridge between people with disabilities and their environment, allowing them to perform tasks that they might otherwise find challenging or impossible. This can include devices such as mobility aids, hearing aids, and communication tools. With advancements in technology, artificial intelligence (AI) has emerged as a powerful tool in improving the lives of people with disabilities. AI has the potential to make life easier for individuals with disabilities by developing solutions that cater to their specific needs and requirements.

However, cost proves to be a complex barrier to accessing technology for individuals with disabilities. While emerging technologies, such as AI and smart glasses, hold promise in enhancing the lives of people with disabilities, they often come with a hefty price tag. This poses a significant challenge, as many individuals with disabilities may struggle to afford these expensive technologies. The high cost of such innovations acts as a deterrent, limiting the accessibility of these technologies to a privileged few. Therefore, there is a need for collaborative efforts between technology developers, policymakers, and advocacy groups to address this issue and ensure that cost does not impede access to life-changing technology for individuals with disabilities.

Moreover, entertainment and emotional recognition technologies can greatly benefit certain disabilities, such as autism and intellectual disabilities. Emotional recognition technologies can assist individuals with these disabilities in understanding and interpreting emotions, which can contribute to enhancing their social interactions and overall well-being. Accessible platforms and games are also vital for providing entertainment to people with disabilities. These platforms cater to their unique accessibility needs and ensure inclusive participation in entertainment activities.

In conclusion, technology holds immense potential in empowering individuals with disabilities and enabling their full participation in society. Assistive devices and technologies act as enablers that bridge the gap between people with disabilities and their environment. AI, in particular, has revolutionized the landscape by offering tailored solutions to the needs of individuals with disabilities. However, the high cost of emerging technologies presents a challenge to their widespread accessibility. It is crucial for stakeholders to address this issue and work towards ensuring that cost does not impede access to these life-changing technologies. Furthermore, the development of entertainment and emotional recognition technologies specifically tailored for individuals with disabilities can greatly contribute to their well-being and quality of life. By embracing and advancing technology, we can create a more inclusive and accessible society for all.

Vidhya Y

The use of digital platforms has brought both positive and negative implications for individuals with visual impairments. On the positive side, these platforms have opened up new opportunities for communication and independence. Email, for instance, has revolutionised written communication, which was not previously possible without the advancement of technology. Digital tools, such as apps designed to identify colours and currency, have also empowered visually impaired individuals by providing them with greater independence and autonomy.

Furthermore, assistance tools like ‘Be My Eyes’ have proven to be invaluable resources for visually impaired individuals. These tools connect visually impaired individuals with sighted volunteers who can assist them in various online tasks, such as reading CAPTCHAs. This collaboration demonstrates the power of digital technology in providing inclusive and supportive environments for visually impaired individuals. Moreover, these tools can be used creatively for tasks like matching clothing colours, further enhancing the independence and quality of life for those with visual impairments.

However, there are also negative aspects that must be addressed. Accessibility remains a significant challenge for visually impaired individuals in the digital space. Many websites are primarily image-based and lack proper labelling, rendering them impossible to navigate using assistive technologies. This accessibility barrier hinders visually impaired individuals’ ability to access information and participate fully in the online world. Additionally, understanding and keeping up with new features and technologies can be daunting for visually impaired individuals, as design choices are often not optimized for their needs.

Moreover, women with disabilities face additional challenges in digital spaces. Privacy and vulnerability concerns are particularly prominent, as crowded environments or the use of screen readers may compromise their privacy when using digital platforms. This puts them at a disadvantage, highlighting the need for further measures to ensure the digital space is inclusive for all individuals, regardless of gender or disability.

In conclusion, the digital space presents both empowering and challenging aspects for individuals with visual impairments. While digital platforms have provided newfound opportunities for communication and independence, there are still accessibility issues that need to be addressed to ensure inclusivity. Furthermore, women with disabilities face unique challenges, emphasising the importance of considering diverse perspectives and needs in the development of digital tools and platforms. By addressing these challenges, we can create a more inclusive digital environment that truly benefits all individuals.

Padmini Ray Murray

The implications of surveillance capitalism and device use are particularly burdensome for disabled individuals. These individuals face additional challenges and risks due to the compromised nature of the devices they rely on. Unfortunately, most technology designs targeted at disabled users fail to consider these implications, exacerbating the difficulties they already face.

To address this issue, it is crucial to establish effective communication channels with disabled users in order to fully understand their specific needs and requirements. By engaging in conversations with designers and technologists, disabled individuals can provide valuable insights that can inform the development of more accessible and inclusive technologies. This collaboration can lead to better solutions that truly meet the needs of disabled users, going beyond basic accessibility requirements.

Furthermore, marginalized populations, including disabled individuals, are particularly vulnerable to privacy and surveillance issues. These groups often have limited opportunities for recourse when their privacy is compromised. It is imperative to pay special attention to the impact of surveillance on disabled users and their ability to exercise self-determination. Ensuring their privacy and autonomy is essential for promoting inclusivity and reducing inequalities.

One of the challenges in technology design is the tendency to create products at scale, which hinders the ability to provide more nuanced and individualized user experiences. Technology development often prioritises mass production and standardisation, which leaves little room for customisation. However, creating customised products requires a paradigm shift in thinking, moving away from one-size-fits-all approaches. Artificial Intelligence (AI) can play a crucial role in achieving this shift by enabling more personalised and tailored solutions for disabled users.

In conclusion, there is a pressing need to create more individualised and user-tailored experiences in technology design. This entails actively involving disabled individuals in the design process, fostering collaboration between designers, technologists, and users. Additionally, advocating for their rights and addressing the unique privacy concerns they face is crucial in building a more inclusive and equitable technological landscape. By embracing a paradigm of customisation and leveraging the potential of AI, we can empower disabled users and ensure their needs are met in a more meaningful and comprehensive manner.

Session transcript

Gunela Astbrink:
you you you I think we can start. We can start. Yeah. Okay. I would like to say good morning. Good day. Good evening to everyone here in the room and also online. This session is entitled Disability, Gender and Digital Self-Determination. And this is the session on the Dynamic Coalition on Gender. So we are delighted to have a number of excellent speakers from different parts of the world who will be joining us online and also there’s a speaker here on site. And so I will pass on to the organizer of this particular session, Deborah Arte Das, point of view, and also the person responsible for the Dynamic Coalition on Gender. So please, I wish Deborah Arte was here in person, but welcome Deborah Arte online. And Deborah Arte will give us a context for this particular session and introduce a little bit more about the topic. So thank you Deborah Arte. All right. While we’re waiting for some of the online panelists to join us, I will well, there’s a concept of digital self-determination and that’s what the framing of this particular session is all about. And it relates to our digital footprints and we know how much they are growing and society is grappling with new concepts, experiences, and understandings of the relationships between our lives and the technologies that we use. And who are we as digital beings? Are we able to determine ourselves in a data driven society? How do we locate ourselves as empowered data subjects in the digital age? How do we reimagine human autonomy, agency, and sovereignty in the age of datafication? Self-determination has been a foundational concept related to human existence with distinct yet overlapping cultural, social, psychological, philosophical understandings built over time. Similarly, digital self-determination, DSD for short, is a complex notion reshaping what we understand as self-determination itself. DSD fundamentally affirms that a person’s data is an extension of themselves in cyberspace and we need to consider how individuals and communities can have autonomy over our digital selves. So this panel session will center on the intersectional feminist perspectives with women, queer, and trans persons with disabilities and experts working in the intersections of digital rights, gender, accessibility, and technology. We will explore the idea of DSD through the lens of gender and lived experiences of persons with disability. So this is drawing from a first-of-its-kind series of DSD studios organized by Point of View. Point of View is the organization in India headed by, well, the project is involved with this particular topic and headed by Debarati Das. It’s been done in four cities in India and the panel will focus on the theme of digital divides and inclusion and also delve into the ability of women, gender, and sexual minorities living with disabilities to digitally self-determine themselves using current emerging digital technologies based on lived realities of individuals from different geographies and contexts. And secondly, it will deepen understandings of the need and potential to work with persons with disabilities in developing new and emerging technologies. Thirdly, it explores the collaborative and learning opportunities to make DSD actionable and a reality for women, queer, and trans persons living with disabilities. So, we are going to look through the lens of gender, sexuality, and disability and explore a bridge between access points and so-called pain points and think of inclusive ways of determining the self in new digital life spaces going beyond accessibility and also thinking about personhood, agency, choice, autonomy, rights, and freedoms in digital spaces for persons with disabilities. We will draw from our experience of DSD studios and its outcomes, articulate an exploration of a root concept of DSD and its key components through the lens of disabilities and gender. We’ll think about how we can co-create DSD through theory, practice, lived experiences, and concrete examples. And finally, operationalize DSD via a set of core principles and policy recommendations centering the intersections of gender and disability. So, we are still waiting for the online speakers. So, I will pass on now to Vidya, who is here with me, and ask Vidya a little bit about her experiences of being a digital person online and any barriers and enabling factors around this thing about accessibility, autonomy, choice, and potentially what are the implications for a woman with a disability. But before I do that, I will introduce Vidya, who is from an organization in India called Vision Empower, and Vidya is a co-founder. Vision Empower is a non-profit enterprise incubated at IIITB in Bangalore to bring education in science and mathematics to students with visual impairment. She is a research fellow at Microsoft Research India and has authored several papers on issues concerning people with vision impairment, such as improving programming environment accessibility for visually impaired developers. Vidya has received numerous awards and scholarships such as Thai Aspire Young Achievers Awards, Reebok Fit to Fight Award, and the Diberay Ambani Scholarship for Academic Excellence, and many more. So, please, I’ll pass now over to Vidya, so I look forward to hearing your particular experiences. Please go ahead, I’ll turn it on. Yes,

Vidhya Y:
thank you so much to the organizers for having me here, and also to Gunilla. Yes, we make STEM education accessible for children with visual impairments, and I am born blind, so I have experience of growing up in India, which is one of the developing countries, and also I have experiences of going online, the digital space, as a blind person, as well as I am a woman with a disability, so I have that experience as well, so today I’ll be talking more from a lived experience perspective, and also by, I’ll also be sharing some of the observations that I’ve had with children, as well as women with disabilities from my friend circles, and things that people talk about, generally, online. So, firstly, digital space, when we talk about it, it is really huge, because whenever we say technology, that’s the only way, as a blind person, I can communicate with the world, I can be more efficient, it has opened up so many opportunities like never before. I always mention this thing that, you know, growing up in a village, I didn’t have access to technology in the growing up years, and I missed out quite a lot, but as soon as I got onto the online platforms, there was, like, so much that I could do, like, you know, even I didn’t have to ask somebody to read out news, what’s the news, even to see the time, you don’t have to go looking for a Braille watch, even when you take something simple, like something so obvious, like written communication that everyone has on a daily basis, it was never possible for me till I learned to use email, because till then, if I had to communicate with somebody who can see, it was verbally, or someone had to write it for me, or I had to write it in Braille, which majority of the people don’t understand. Now, this actually compromises so much of what you have to say, because if I were to send a message, and if I were to ask somebody to type it for me, that means I don’t have privacy, what I want to say, I cannot say it. But digital platforms have opened up so many opportunities, and definitely have given a lot of privacy to individuals with disabilities, which we don’t have, mostly because someone or the other is always there, and the more severe disability you have, from what I have observed, the lesser privacy you have. And, as we know, that a lot of people are not able to get on digital platforms are really good, as we all know. They have enabled so much that was not possible before. But definitely there are so many challenges, in general, for persons with disabilities, like firstly, the accessibility issues that we all generally talk about, the websites are not designed in a way that people can access, there are a lot of images, a lot of the things that are so obvious for other people, I’m talking from a visually impaired person’s point of view, they’re simply inaccessible, because they’re not labeled, they’re image-based. But when you talk about women with a disability, the barriers are many, too many. From what I have observed, you know, it’s an irony, actually, digital platforms, as I mentioned, they have given a lot of privacy, at the same time, you have to be so careful, because when I started using a computer, for example, I was not using a lot of video calls, it was not necessary for me, but when the, when COVID happened and when people were trying to get on to online platforms, then video calls were a must. So, for me, first, I assumed that in the computer, the camera will be the whole of monitor, that was my assumption, because I did not know. And then, I would put my screen a bit down, thinking that, okay, if I don’t want myself to be visible, I can put it down, so that people are not able to see. But once when my sister took a look at it, she was just saying that the camera is just on top of the monitor, and it’s just your finger size if you put it down. Actually, people can see you much more clearly. So, from then, it’s really difficult without taking a second opinion to do anything digitally, because you really don’t understand. You really don’t know. I feel that I have too much vulnerability, and I’m missing out a lot of things, which the world outside knows. So, I feel like taking a second opinion for everything. But once you learn the basics, once you learn how to be visible, then it will definitely empower you. But at the same time, something new would have come up, and there’ll be something that you’re missing out compared to someone who can see, for example. So, these are some of the constraints that I face on a daily basis. And also, one of the other issues when you’re using a screen reader and typing something, when you’re in places where it’s crowded. So, whatever it is reading for you, for example, you know when it’s reading B and D. So, you might not make out the difference, and you tend to send some other word instead of some other word. Or when we say voice communication that you’ll have to use, sometimes it’s really confusing because, again, there’s no privacy when you’re in a place. Suppose I’m in a conference, and I’m not able to type everything because it’s touchscreen. When I’m trying to use voice based communication, there’s no privacy. So, all of these are there, and one of the main barrier that I have found whenever I have to join online in a lot of meetings, everyone finds it, whether you have a disability or not. It’s like you cannot type at least if there’s some other disability, maybe typing may be easier if, say, you have a hearing

Gunela Astbrink:
impairment or things like that. But if you have a visual impairment, typing is a huge issue, especially on phones. But you cannot send out voice messages on some of the WhatsApp groups, for example, the ones that you have for visually impaired because the fear that someone will reach out to you and message you and things like that. It has happened so many times in the past. So, though it is empowering, it’s still restricting, and it’s not empowering in the true sense actually. So, these are contradicting points, but this is the reality. This is what happens with most people. Thank you very much, Vidya. There’s so many different experiences that you have explained to us, and that’s so important to understand what a person with a disability goes through in becoming more and more online and becoming more active online. I’d like to tell a story about a young woman in Malawi in Africa. She was, just as Vidya, supposed to be here, but unfortunately there were visa issues and so forth, and through the Assista Dynamic Coalition on Accessibility and Disability, we have provided travel support for persons with disability to participate here at the IGF. And I’d like to explain about Grace Salange from Malawi. She is a wheelchair user. She has a speech impairment, and she has limited use of one hand, and she comes from a poor family in a village, but she was determined to study IT, and so she went through school, she went to vocational college, and she got through that with very well, and now she sometimes tutors other students. And the way she uses a smartphone or a laptop is with her knuckles. That’s the way she can communicate with her digital tools. And what is important? When a person with a disability is online, who knows? There’s no like, oh, they’re different, or something like that. We are together, a digital being. And that is important, that we are then feeling like we are on the same level with anybody. We communicate in the same way superficially, even though there might be tools that are needed. But the recipient of an email or a text wouldn’t know that. And I think that’s very important. But obviously those tools, they need to be there. They need to be workable, they need to be designed with accessibility in mind. So we’re talking about tools in a general sense, we’re talking about websites based on the international guidelines, the Web Content Accessibility Guidelines through W3C. We’re talking about making sure that apps are accessible. And it’s so important when any tool, any learning platform, anything is developed, that it’s done together with persons with disability. So there is that saying in the disability community, nothing about us without us. is part really of digital self-determination. That we as persons with disability are able to be part of a development or part of the community as such. And we are respected for that. So, I just wanted to pass back to Vidya to talk a little bit about some of the privacy and security issues. Because we can imagine that as a person with a vision impairment, there are additional concerns about privacy. We all have concerns about privacy and security. But there might be some additional factors that Vidya can explain to us. Thank you. Yes. Actually, digital tools enable you to do a lot of things

Vidhya Y:
by yourself, which was not possible. For example, these days, there are a lot of color recognizers. There are, if you have a currency, there are apps which can tell you what the currency is about. Then there are apps like Be My Eyes. You know, Be My Eyes is an online app which visually impaired people can install on their phones. And persons who can see sighted people can sign up as volunteers. So, if you want any help, suppose you’re not sure if the light is on or not, and one of the huge constraints that we have is solving captures. Captures are designed for not being readable by machines so that the internet is, the privacy is not compromised. But these can be huge barriers for persons with visual impairment, especially when you don’t have audio capture. It can be very frustrating because though you know how to use a computer, you cannot use it. So, though you can navigate the website, actually you cannot without taking help. And always there will not be somebody around you. But if you use tools like these, you can, anytime, any part of the day, of course, there’s a constraint that if you know English, then any part of the day, you will get somebody to assist you. Even if you don’t know English, you can set up your local language. Whoever is volunteering can set up their language as the local primary language and can assist in that language. But, for example, in India, we have a language called Kannada. So, if I want to get help in Kannada, then I will not get a lot of users in Indian time, night time, because obviously Kannada-speaking population, for them, morning is Indian time morning. But if you know English, 24 hours, there will be someone to assist you. But really, sometimes I use these tools because you cannot expect someone to be always around with you and you need quick help. One of the things is that always people also may not be willing to help you or even if they’re willing to help you, they may not have the time. So, these tools are very good, actually, because you can call and you can ask them. In fact, I conducted a lot of digital literacy trainings as I’m working with school teachers. So, I actually guided them on installing these apps and taking advantage of them. We found really good users, you know, apart from the CAPTCHA example that I told you. So, how it works is you can call them and the volunteer who picks up the phone will tell you to take the phone and point it to the computer. Now, if you’re blind, you may not know whether the CAPTCHA is visible or not. So, they’ll tell you move right or move down. Now, I can see it better. Now, I can tell you. But, you know, when I conducted these trainings, teachers, actually, the women teachers found innovative use of those technologies. In fact, somebody was using it to match their dress. We call it sari in India with bangles. So, whether the color is matching. So, these are some of the innovative, but these were very much needed for the teachers whom we are working with and they started finding these tools very helpful. But now, talking about the privacy concern, it’s like you don’t want to depend on somebody too much because they’re not there or they may not have the time, but you’re forced to depend certain times. And at the same time, you’re very concerned about where you’re pointing the camera towards, whether it is safe, whether you don’t know what’s happening, who is picking it up. You can just know the voice, but you don’t know what data is being collected. For example, there will be, just take, for example, banking transaction. Now, at the end of the transaction, if you have to enter your CAPTCHA, it means you have to enter all the details in the beginning itself before pointing your screen towards the computer, which means the person who is at the other end can figure out what you have typed. So, that is a huge compromise, actually. I mean, people are well-intentioned, but at the same time, it’s a huge compromise. You’re not very sure, but if you were to enter the CAPTCHA in the beginning itself and then type your data, then it will time out. So, the CAPTCHA will have only 40 seconds or a minute. By then, you have to enter and submit. So, that kind of privacy concerns are there, and the privacy concerns about how much of you should be visible to the other person, where are you pointing your camera, whether it’s safe, whether you’re very unsure, actually. Like, apart from the voice, you’re not sure of what’s happening. So, these issues are there specifically for women with a disability. And even on simple platforms, which everyone uses on a daily basis, like Facebook, Instagram, all of these, we talk about accessibility issues. Those are definitely there. But, for example, now, if I were to upload all the photos that I have taken during this conference, if I have to make a blog, or if I have to put all of these on Facebook, now, what I do is I generally tell somebody to… So, my cousin has come with me. He’s going to give me the photo with the caption. But that’s all information I have. Now, I don’t know whether I want those photos to be there or not, because you’re not seeing them in the true sense, right? You’re just depending on the caption. And sometimes you might miss it. There may be four or five pictures, and there may be one caption that is there. Always, there won’t be somebody to give you those captions. So, always, it is risky, because sometimes people have told me only half of your face is visible, or this photo shouldn’t have been there. And everyone so much relies on visuals that sometimes you’re forced to take screenshots and share. And then you really have no idea of what you’re sending. So, these concerns are there. They’re very empowering. At the same time, all of these concerns are there. You just need a second opinion most of the times. Thank you very much, Vidya. There was a lot of very good examples there of particular privacy and security concerns. We did have some technical issues with the Zoom link, and I’m very pleased to say that our online speakers

Gunela Astbrink:
are nearly all there. So, we are switching back to the introduction to Devarati Das, who will explain a little bit about the project in India when it comes to this particular topic. So, over to you, Devarati. Hello, everybody. Sorry, there were some technical issues and some confusion with the link.

Debarati Das:
Thank you all very much for joining. I’m Devarati from Point of View, and we are a feminist nonprofit in India working primarily in the intersections of gender, sexuality, disability, and technology. So, to set some more context for this session today, as our digital footprints grow every day, we are really grappling with new concepts, new experiences, and understandings of the relationships between our lives and the technologies that we use. And it’s become really important to understand who we are as digital beings, what does the self mean in data-driven digital spaces, and how do we imagine things like autonomy, agency, choice in today’s age of datafication? So, digital self-determination is an evolving concept to consider some of these critical questions, and it fundamentally affirms the fact that a person’s data is an extension of themselves in the cyberspace, and we need to consider how individuals can have autonomy over our digital selves. So, today we’ll unpack some of these very critical questions through the lens of experiences of persons with disabilities from different countries and regions. And I’m very pleased to introduce our moderators. Sorry about the delay because of the technical issues. Our moderator on site is Gunela Aspring. Gunela has been very active in the disability policy programs and research for 30 years, and chairs the Internet Society’s Accessibility Standing Group. And has also served on the IGF’s Multistakeholder Advisory Group, and is the vice chair of ICANN’s Asia-Pacific Regional At-Large Organization. Our moderator online and our partner in this is Padmini Ray Murray, who is the founder of Design Beku, a design and digital collective that is based in Bangalore, India, that works to shift how we can think about design and tech as processes of co-creation and participation, centered around feminist values, design justice principles, and ethics of care that advocate also for designing with communities and not for communities.

Gunela Astbrink:
With this, I hand it over to Padmini to maybe share in brief a bit more context of how today’s topic relates to disability rights and justice, and then over to you both, Padmini and Gunela, to take the conversation. So, yeah, good. Can you hear me? Yeah, great.

Padmini Ray Murray:
Thanks for the introduction, Bevorati. It’s nice to be here, albeit virtually. So I think actually Vidya, the first speaker, has already kind of set the scene quite well, because I think as they mentioned, that digital self-determination, of course, is something that we are all kind of currently positioned in a way that we all have to think about quite deeply because of the implications of surveillance capitalism. Every single device we use is compromised by some form of surveillance. And it is very difficult for even non-disabled people to wrap their head around the implications of being online, using these devices, and thinking about how to keep themselves and their privacy safe. And I think, obviously, this burden is doubled for people with disabilities. There are two reasons for this, largely because most devices or apps, even if they are made for disabled users, might not be taking these concerns into consideration when they’re being designed. So some of our work over the last few months with Point of View has been actually speaking to designers and technologists and putting them in conversation with people with disabilities so that they can understand their needs better. Because I think something that we all kind of come across when designing technologies is that while there are accessibility guidelines, for example, those set forward by the W3D, those are often just a baseline. And there is much more nuanced requirements of disabled users that need to be taken into account. I think the second issue is that in any kind of case around privacy and surveillance, it is always the marginalized who are the most vulnerable. And there is often the least kind of opportunity and options for recourse for them. And so it becomes even more important that we look specifically at disabled users and how they might be able to pursue self-determination as a use case. So I’ll just stop there and I’ll hand back to Penelope.

Gunela Astbrink:
Thank you very much, Padmini. And just for those participants and speakers online, we did start over half an hour ago. So it means we have about 25 minutes left. So we will move on to talk a little bit about, let’s see, we’re going to talk about imagining digital tech that works for everyone. And so I’m keen to hear examples and stories of digital tech that provides accessible, safe, joyful user experiences. So if Manik Gunaratne is online, I’d like to pass the floor over to her. Manik Gunaratne is the manager of a specialized training and disability resource center of the Employers Federation of Ceylon. She has promoted inclusive economic development centering on persons with disability. And she also acts as a vice chairperson of the South Asian Disability Forum and is a founding member of the South Asian Women with Disabilities Network and a member of the Asia-Pacific Women with Disabilities United. So if Manik is there online, please go ahead and talk about how digital tech, how it can be the best we would like it to be. Thank you.

Manique Gunaratne:
Thank you, Gunala. Yes, the technology is very important for people with disabilities because that’s how we survive in the society. Because we as people with disabilities, we have a disability which we have to admit. So through these assistive devices and technology, it’s easy for us to work equally capable as people with non-disabilities. And also, if we imagine a world of technology which will assist people with disabilities, for example, if there is a world where the technology through the movements of people with disabilities, which they can inform the caregivers what the requirements of the person with disabilities, then the lifestyle would be very easy for us. And especially now with the AI, artificial intelligence, AI technology, there are so many technologies available, but the problem is the cost factor. So it’s very important that they have, because for example, for hearing impaired persons, if someone comes, say if someone rings the bell, hearing impaired persons cannot hear. Or if a dog barks, a hearing impaired person cannot hear. But when the technology is there or through a smartphone or any device, if a dog is barking, a picture can be provided a dog barking or when the doorbell is ringing. Through the smartphone, if they can indicate that the doorbell is ringing. So it will be very easy for hearing impaired persons to make life easy. And also for vision impaired persons, the smart glasses, right? If we are through the eye gestures and when we walk with the smart glasses, if we can identify what is around us and give a description, so it will be very easy. And also for people with physical disabilities, they are the people who have the mobility difficulties. So through apps and technology, they can find out places which they can access. It may be a restaurant, it may be a movie theatre. So those things are important. And also there are people with disabilities where their movements are limited. So through hand gestures and facial expressions, if they can operate the computer, they also can be equally capable as people with non-disabilities so that they can be employed and economically active. And also if there may be technology through brain functions and the way of thinking, if they can operate any devices. So those are very important. And also entertainment is not only for people with non-disability. We as people with disabilities also need entertainment, maybe playing games through smartphones and the computers. So any accessible games and technology, it’s very important. And also if technology is there to give emotional recognition, people with autism and also people with intellectual disabilities, that would be very grateful for them. And platforms which are accessible so that all of us can equally use platforms which are accessible. And if we can imagine of a world where a smartphone, sorry, smartphone, if you want to cook something, if you put all the ingredients and press the buttons and say, I want fried chips or anything cooked rice, whatever, if the end product is there. So for people with disabilities, the phone can be very smart. So and we as people with disability use a lot of devices. I’m a vision impaired person. If you can just imagine a world full of darkness around you, and that is my world. So we do work equally capable as people with non-disability through various apps, the smartphones, the laptop, and we use the Be My Eyes app which gives assistance for us and currency identifiers, carrier recognizers, a lot of apps are available through the smartphone and other devices. So world with full of technology, especially for women with disabilities is very useful for us. Thank you so much, Manik. Wouldn’t it be wonderful if all of those technologies were available so that persons with disabilities could live seamlessly and independently?

Gunela Astbrink:
And that’s what we’re all aiming for. I would like now to ask Judy Okita, if she is online, to speak a little bit to this particular imagining topic, but also talking generally about her experience of accessibility and potential barriers. And Judy Okita is from Kiktonet in Kenya and is the founder of the Association for Accessibility and Equality. And she’s been advocating for many years for better access for persons with disability, both in regard to physical infrastructure and online content. So I’ll hand over to Judy, please, if she is there.

Judy Okite:
Hello, Gunela. Thank you. Good to see you. It’s an interesting topic that we talk about accessibility. for persons with disability. And yes, I will be excited to see all inclusive technology or inclusive, you know, physical spaces, because that’s the one that really affects me the most. I know for a long time we’ve been advocating for physical accessibility even within the IGF. I hope that this year it’s much better. And the little things that we don’t get to think about, we don’t get to look into what really brings the barriers and so enables or rather puts the people into the spaces of you have to request for assistance every now and then. So one of the things that probably I’ll just mention that we have been able even to do with Kiktonet in this year, we were able to evaluate the government websites. We did that on 46 websites. Just to be able to see the access, how accessible this information for persons with disability is. Unfortunately, the highest was got an 80%. Of course, we were using the poor principles. And it was interesting, the feedback. The feedback from government was interesting because people felt if you are at 80%, then, you know, you are at a good space. But no, if you’re at 80%, that means 20% of your content is not accessible. Meaning that your content is still not accessible for persons with disability. Another thing that we found from the research that we did was that more emphasis is placed on the persons who are blind when it comes to digital content. But you will find that a person with cognitive disability is actually more disadvantaged. If the content is not understandable, if the content is not perceivable, then you’ve lost this person. They’re not going to be able to interact with your information as much as you would want all of them to. And looking at it from the Kenya perspective, it’s only a few years, maybe two years ago, that the cognitive disability was recognized actually as a disability. Then you can see how far we still are on inclusion, on ensuring that everyone is included. So I would really like to see if there are these little things that we can ensure that the persons with disability are part of our change. Yes, we want to make change, but we need to include them. Not because they want to, but because they have to be part of the process. If I can just quickly give an example. Most recently I was in Dar es Salaam in Tanzania. We were having the Forum for Freedom in Dar es Salaam. That is an annual event. We’ve worked with them before, so they know my very specific needs when it comes to the physical platform. So when I got there, they had the ramp, yes, but there is the big pavement before you get into the ramp. So my question was, how does this make sense? So yes, there is the ramp, but I will still need to be lifted up to get to the ramp. So that’s not the access that we are talking about. They had a really beautiful, accessible room, but they have this very small cubicle for the washroom. So I decided that this time around I’m not going to say much about it. I’m just going to demonstrate. So I had to call the guys from the reception, and I was like, could you please come upstairs with the wheelchair? Is there a wheelchair? So they were like, okay, yes. So they came to the room with a wheelchair, and I requested them, could you please push the wheelchair into the bathroom? And the guy is asking me, how do we do that? I’m like, that’s an excellent question. How would you expect me to use it if you cannot push it in there? It’s not that the persons with disability want to be part of the process. They have to be part of the process. We need to empower the persons with disability to really be able to know their rights. I mean, I have the right to say this is not working for me. It’s not for you to tell me, no, this is the accessible room. People use it. No. I tell you, if it is not accessible, then it is not. And I just kept telling them, if you had included a person with disability to be part of this process, the ramp would not have been this bad. I mean, the washroom would not have been this bad. It’s not about having a wide, beautiful room. It’s about having it accessible. So I would really love to see if we can do that and be deliberate in that. It’s not something that we are requesting. It’s a right. We need to be part of that. We need to be part of the move, of the change. It’s not about we are going to disturb them, or we know what it is that they need. It’s about ensuring that they are part of that process, that they are there, that they have a yes or a no, and we are able and we are ready to listen to the yes and no and make those necessary changes. Thank you very much, Granella. Thank you very much, Judy. And I think it just shows that we have this beautiful imagining of what accessibility is and what technology can do. But then we come to earth and realise some really fundamental things still need to be fixed. And I think Judy also made suggestions there about nothing about us

Gunela Astbrink:
without us. We need to be involved in those decisions on how something is built, if it’s in the built environment or in the online environment. So I now just wanted to ask the audience if there were any particular questions, comments. And first of all, Padmini, are there any online questions or comments before we go to any in the room?

Padmini Ray Murray:
So actually, Granella, since we just managed to get Nirmita in the room, it would be really nice if we could also include her in the conversation. So I think you might have a brief biography for her, but Nirmita is a widely respected and known specialist in disability rights and policy from India. So, Nirmita, maybe we could have, since we’re running a little short of time, skip to the question, which is that how do you feel policy and regulatory processes can kind of ensure the inclusion of disabled people

Manique Gunaratne:
in the creation of or the making of technologies, just like Judy was suggesting? Yeah. So first of all, apologies for coming late. I was facing some technical issues.

Nirmita Narasimhan:
So let me get to the question. I think it’s important to have policies because otherwise it ensures that people are aware that there is a need. It is mandated. It is recognized by law. There are standards to comply with. Otherwise, it is just a personal request of somebody to somebody, right? And the fact that there is a legal and a social requirement and a responsibility to comply with standards, I think that is very important to ensure that accessibility is there where we see. So if you look at the DARE index survey, it shows that countries which have policies are more likely to have accessibility implemented. And so starting from the policy, I think I would like to say that now, either we need to have policy or where we have policy, we need to focus on implementing the policy. And that gives us guidelines on what to do, how to do, and where all to do. So I think that answers your question in brief. Can I just very quickly add a follow-up question, which is that how would you advocate disabled people lobby for this kind of policy? Because it’s quite labyrinthine, right, like getting these questions to a policy level.

Padmini Ray Murray:
So if you can just maybe share an example or maybe advice as to how that might be done.

Nirmita Narasimhan:
Sure. So I think by and large, a lot of countries have implemented the CRPD and have ratified and signed and are implementing it in their legislation. But clearly, domain-specific policies have to come from within and persons with disabilities have to do that. It also depends on different strategies and different situations. For example, in India, when we had to lobby for the global and the national level copyright law, we did a whole lot of research on what are the legal models available everywhere. We ran campaigns, we had meetings, we had signature campaigns, we had a whole kind of campaign stuff happening. On the other hand, when we look at electronic accessibility, we had meetings with the officials of the electronic and IT department, and that’s how we worked with them to develop a policy. On another level, when we look at implementing the procurement standard in India, we worked again with the ministry, with an agency, and there were nationwide consultations with experts and with different academic groups and industry on what the standard should be and how it should be implemented. But clearly, the one thing that is there everywhere is that we need to be involved and we need to be motivated and get other people to be responsible for this. It’s not something which is only applicable to us. It’s something we want the country as a whole to implement, and it depends on the situation, who the people are we are in touch with. Whatever it is, we need to be proactive and we need to be ready to do more than we think it’s our job to do. Thank you very much, Nirmita.

Audience:
I’m so pleased that you got online in time to make your comments on policy. I think they are so essential. I will now ask Lydia Best to have a question or a comment, please. Thank you very much for the opportunity to add my voice. As we speak about nothing about us without us, therefore, I would like to disclose that I am deaf and I use cochlear implant. When we talk about technology and how it empowers us, it does, but it also disempowers. In this case, for example, when during pandemic the situation happened when everybody has gone online on the telephone lines, Google Meet was an excellent tool where we could very easily connect with each other and while not perfect, we were able to communicate. Mostly one-to-one. Text messages also help. For deaf people, sign language users, we know that we’ve got WhatsApp, video calls, we can use sign language. Great. When we meet, and Zoom has been mentioned today, when we meet at Zoom meetings, usually it is multinational meeting because I am representing European Federation of Hard of Hearing People and I work globally as well. When it comes to actions being involved, automatic captioning unfortunately fails us and often we are finding difficult to participate because we cannot follow what the discussion is about. Another issue is when the users are actually switching of the videos because the auto-captioning, if it’s used, is not correct enough, we need to support ourselves lip-reading. And that causes a problem. We need to actually disclose as well that we actually need someone, everyone to have their face shown correctly so we can follow. But the latest invention of Zoom is causing the biggest consternation. So Zoom has rolled out quite a few languages now in automated version. Great. Any user who is participating in the Zoom call can actually click the language they want. But do you know what happens? You suddenly have, say, someone using English language, someone else wants to actually follow Spanish language. And suddenly both of us see both languages suddenly showing up as the captioning. It creates massive confusion and lately we are forced back into using only human captioning in the international meetings because we cannot rely on the technology which actually disempowers us. Unless everybody uses just one language, usually it has to be English. So there are a lot of issues. And to me, this demonstrates the latest thing with Zoom, that Zoom did not work with persons with disabilities, with expert disabilities, and did not do the user research enough before actually putting this new feature out. And that’s something which is really distressing.

Gunela Astbrink:
Thank you. Thank you very much, Lydia, for those important comments. Is there any last-minute comments, questions from anyone else, please? And, Padmini, is there any comments or questions online? Gunal, this is Nirmata here. Is there a minute?

Nirmita Narasimhan:
I just wanted to add some more thoughts on previous discussions. And when we’re talking about nothing about us without us, and we talk about accessibility, and I just wanted to quickly mention that I think increasingly we feel the need for mainstream products to be more universally designed. I mean, even simple technologies around us that we can use. And what we need to understand is that just because it’s accessible, it’s not usable to everybody. There are different levels of users, and maybe somebody who’s an expert in technology can use something, but another person using the same screen reader or same captioning or same technology cannot. And we need to have that user-centric approach when we are talking about accessibility as well. So, yeah, with that, I just conclude.

Gunela Astbrink:
Thank you, Nirmata. I think that is a very good point to end on. And I wish to thank all our speakers online in the room. And, again, we unfortunately didn’t have our online speakers there from the beginning because of some technical issues with the Zoom links, but all the information is captured, and I’m sure that a point of view and everyone else who has participated in this session will have some very useful information to take home when it comes to digital self-determination for people with disability and especially for the gender focus on this topic. Thank you very much. Thank you so much, Gunilla. It would be great if, Padmini, you could share your concluding thoughts and comments. Great. Thanks so much.

Padmini Ray Murray:
So, yes, I think one thing that as somebody who both identifies as a designer and technologist, I think the biggest challenge that we struggle with is the fact that when we design and develop technology, we always tend to do it at scale. And this means that much more nuanced and individualized use is much harder to provide. And so I think this does require a kind of a paradigmatic shift in the way we think about creating a customized product. And I think something like AI might actually be the way forward, but we need to be able to kind of layer user interaction in such a way that individual users can toggle between different kinds of way of using and experiencing technology rather than foisting the same technology on everybody because that’s not a tenable solution. So I would urge those of you who are working in the field and, of course, people with disabilities who are affected by this to, you know, start those conversations and advocate for kind of more individualized and customized experiences rather than something one size that fits all because we know very well it doesn’t.

Gunela Astbrink:
Thank you. Thank you very much. And, Deborah, I think we’ve finished then. So thank you very much for this session, and I think we’ll conclude there. Okay. Thank you.

Audience

Speech speed

151 words per minute

Speech length

498 words

Speech time

198 secs

Debarati Das

Speech speed

140 words per minute

Speech length

359 words

Speech time

154 secs

Gunela Astbrink

Speech speed

132 words per minute

Speech length

2545 words

Speech time

1157 secs

Judy Okite

Speech speed

149 words per minute

Speech length

1067 words

Speech time

429 secs

Manique Gunaratne

Speech speed

138 words per minute

Speech length

752 words

Speech time

327 secs

Nirmita Narasimhan

Speech speed

171 words per minute

Speech length

681 words

Speech time

239 secs

Padmini Ray Murray

Speech speed

170 words per minute

Speech length

701 words

Speech time

248 secs

Vidhya Y

Speech speed

176 words per minute

Speech length

2461 words

Speech time

840 secs

DC-OER The Transformative Role of OER in Digital Inclusion | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Dudley Stephen Wyber

Libraries and librarians have a significant role to play in the realm of Open Educational Resources (OER). They serve as catalysts for the discovery, awareness, and curation of OER while helping to overcome biased views about their value. Libraries actively update their roles by connecting individuals who need knowledge with available resources, thus raising awareness of the potential benefits that OER can offer.

Librarians, in particular, contribute to the curation of OER by evaluating these resources in line with the needs of faculty and other stakeholders. They bridge the gap between various resources and users, identifying any gaps or deficiencies in the existing OER portfolio. Librarians assist in ensuring that faculty and stakeholders have access to a comprehensive collection of OER.

It is important to note that the OER landscape is currently dominated by a few regions of the world. This geographic imbalance highlights the need for greater collaboration and dissemination of OER from a global and inclusive perspective. Librarians can empower stakeholders to create and share their own OER, contributing to a more diverse and inclusive OER ecosystem.

Librarians’ involvement extends beyond curation and dissemination. They provide guidance on usage rights and assist stakeholders in navigating complex legal frameworks surrounding copyright. Librarians can advocate for better regulatory frameworks that include robust educational exceptions in copyright laws, ensuring that OER are not only accessible but also legally protected and supported.

Dudley Stephen Wyber emphasizes the importance of adopting a recurring circular learning approach in education. This model advocates for active learning and participation, encouraging individuals to learn, explore, contribute, and continuously improve. Wyber also underscores the active involvement of teaching professionals and librarians in facilitating the use of online resources. According to Wyber, simply making educational content available online is insufficient; active facilitation and support are necessary to foster uptake and utilization.

Librarians should feel confident and responsible for guiding faculty and students to make the most of OER. By providing support and assistance, librarians enhance the educational experience and help individuals maximize the benefits offered by OER.

Additionally, there is a suggestion to apply the interoperability logic used to achieve compatibility between Open Access (OA) repositories to OER repositories. The work done by organizations such as COAR in Canada serves as a reference in this regard. Interoperability between repositories would enable seamless sharing and integration of OER, contributing to the growth and effectiveness of the OER ecosystem.

Finally, it is essential to strive for equity and parity between OER and Open Access. OER should be brought to the same level of recognition and value as Open Access, creating a system where both types of resources are equally supported and encouraged. This would foster a more open and inclusive education system, benefiting learners and educators worldwide.

In conclusion, libraries and librarians play a multifaceted role in the realm of OER. They contribute through the discovery, awareness, and curation of OER, bridging the gaps between available resources and users. Additionally, librarians guide stakeholders in utilizing rights, creating their own OER, and advocating for favorable legislative and regulatory frameworks. Their involvement, combined with the adoption of recurring circular learning approaches and the pursuit of interoperability and equity, is vital in realizing the full potential of OER in facilitating quality education for all.

Tawfik Jelassi

Open Educational Resources (OER) play a pivotal role in increasing access to quality education worldwide. In 2019, UNESCO adopted the recommendation on OER, a UN normative instrument to support inclusive access to digital learning platforms. This highlights the significance and recognition of OER in the educational landscape.

The recommendation by UNESCO advocates for the use of openly licensed digital education tools that can be accessed through the Internet. By embracing OER, educational institutions and learners can benefit from a wide range of freely available, adaptable, and shareable educational materials. This promotes inclusivity and equal opportunities for learners globally.

UNESCO’s emphasis on OER aligns with the Sustainable Development Goals (SDGs), particularly SDG 4: Quality Education. OER contributes to the achievement of multiple SDGs, including quality education, access to information and ICT, gender equality, and global partnerships. The adoption and implementation of OER can help bridge educational gaps, address gender disparities, and foster collaboration among nations.

Moreover, OER is part of the broader concept of digital public goods. These digital resources, including OER, drive sustainable models of education, knowledge sharing, and innovation. The 2019 OER recommendation highlights the importance of international collaboration for content, capacity, and infrastructure development, aligning with the Global Digital Compact principles. These principles promote an inclusive, open, secure, and shared Internet, enabling widespread access to knowledge and educational resources.

In addition to the global significance of OER, there is a recognition that the internet should be used as a force for good. UNESCO envisions a digital ecosystem where the internet serves as a powerful tool for learning, advancing human rights, and sustainable development. The internet has the potential to facilitate access to information, promote freedom of expression, and provide opportunities for lifelong learning.

To guide the development and use of the internet responsibly and inclusively, UNESCO established the OER Dynamic Coalition. This coalition brings together stakeholders from various sectors to build values and principles guiding the development and use of the internet. The coalition aims to ensure that the internet is harnessed as a tool for education while also promoting peace, justice, strong institutions, and partnerships.

In conclusion, the adoption and promotion of Open Educational Resources are vital for enhancing access to quality education worldwide. The UNESCO recommendation on OER highlights the importance of openly licensed digital education tools accessible through the Internet. By embracing OER, stakeholders can contribute to the achievement of the SDGs, drive sustainable models of education and innovation, and utilize the internet as a powerful tool for learning while advancing human rights and sustainable development. The establishment of the OER Dynamic Coalition further showcases the commitment to shaping the future of education inclusively and responsibly.

Audience

During the discussion, the speakers exhibited curiosity and a desire to understand the best practices related to decentralised repositories and open technologies. The conversation extensively explored various aspects of the implementation and functioning of these concepts.

Both speakers maintained a neutral stance throughout the discussion, refraining from taking a definitive position. However, they did not provide any specific supporting facts or evidence, leaving the conversation open-ended.

The Sustainable Development Goals and their connection to decentralised repositories and open technologies were not mentioned during the dialogue. This suggests that the primary focus of the conversation was to explore the concepts themselves rather than their potential impact on sustainable development.

The main takeaway from the discussion was the speakers’ curiosity about best practices in decentralised repositories and open technologies. Although the lack of supporting evidence or detailed arguments may indicate that this was an introductory exploration or a starting point for further research, it is important to note that no additional noteworthy observations or insights were identified.

Overall, the conversation revolved around the speakers’ neutral interest in decentralised repositories and open technologies, without delving into specific examples, cases, or implications.

Neil Butcher

The analysis examines various arguments and stances regarding education policies and their impact on sustainability, intellectual property, digital accessibility, procurement processes, and the quality of teaching materials. These arguments provide insights into the importance of effective policy implementation and its influence on achieving sustainable development goals.

A key point highlighted is the need for policies to enable government agencies to use open licences. Without such provisions, it is unlikely that open licences will be effectively utilised. Another crucial aspect is the inclusion of accessibility considerations in procurement processes. The analysis argues that accessibility should not be overlooked during contract execution, as it may compromise the educational experience for individuals with disabilities.

The quality of accessible teaching and learning materials is also a prominent focus. The analysis suggests that an excessive emphasis on quantity and accessibility could overlook the importance of quality. Instead, curated collections of resources that promote high-quality teaching and learning experiences are proposed.

The government’s responsibility in ensuring accessible and supportive public education systems for all is emphasized. The analysis states that the government plays a crucial role in providing accessible and supportive education, regardless of individuals’ backgrounds or abilities. Additionally, the monetization of the education space by the private sector is critiqued, with an argument for prioritising the quality of teaching and learning experiences over financial gains.

Investment strategies in education are highlighted as a means to prioritize the quality of teaching and learning experiences for everyone. Adequate investment in education is seen as essential in providing a conducive learning environment and promoting positive outcomes for all learners.

Open Educational Resources (OER) are also scrutinized, with a warning against compromising the quality of learning experiences while expanding access. If OER does not ensure high-quality learning experiences, it may be detrimental to education.

Furthermore, the analysis emphasizes the importance of community representation in improvement processes within education. Representatives from the target communities of learners should lead improvement efforts, ensuring that the education system meets their specific needs and addresses inequalities.

In conclusion, the analysis presents various perspectives on education policies and their implications for sustainability, intellectual property, digital accessibility, procurement processes, and the quality of teaching materials. Key takeaways include the importance of effective policy implementation, the need for open licences and accessibility considerations, the role of the government in providing accessible public education, critiquing the monetization by the private sector, the significance of investment strategies for quality education, the impact of OER on learning experiences, and the importance of community representation in improvement processes within education.

Tel Amiel

Open Educational Resources (OER) projects require sustainable funding to ensure their development and continued existence. This funding can be obtained through partnerships and donations from foundations. However, the success of sustainable funding models, such as open procurement, may vary in different contexts.

The practices surrounding OER and community engagement are essential factors for their success. Without active community involvement, the implementation of OER loses its meaning. It is crucial to foster collaboration and engagement within the educational community to maximize the benefits of OER.

Policies alone are insufficient to guarantee the effective implementation of OER initiatives. They need to be actively monitored by a diverse set of stakeholders. Involving various individuals and organizations from different sectors ensures that the implementation remains aligned with the goals and objectives of OER. Additionally, OER should be seen as an evolving concept that requires ongoing monitoring and adaptation to meet changing educational needs.

OER possesses unique qualities that make it a real public good, particularly in multi-stakeholder processes. Its adaptability, remixability, and reusability enable the inclusion of diverse cultural groups and cater to different educational requirements. Engaging with these resources in a pedagogical context enhances their value as a public good.

The potential of OER is currently understated, especially in interconnected, multilateral contexts. There is a need for further exploration and utilization of OER to maximize their impact. OER’s ability to share, revise, remix, and reuse content makes it a valuable resource that can enhance education on a global scale.

Successful implementation of OER requires the allocation of serious responsibilities and the active involvement of individuals. Without meaningful participation and responsibility, OER initiatives may stagnate and fail to realize their objectives. Therefore, it is crucial to involve people at all stages of the implementation process to ensure the effective utilization of OER.

In conclusion, sustainable funding is crucial for the success of OER initiatives, and partnerships and donations from foundations can provide the necessary financial support. Open procurement models are advocated by governments for sustainable funding, but their effectiveness may vary depending on the context. Community engagement, active monitoring by stakeholders, and recognizing the unique qualities of OER as a public good are vital for their successful implementation. Further exploration and utilization of OER are needed, especially in interconnected, multilateral contexts. Meaningful implementation of OER requires the involvement and allocation of responsibilities to individuals. Without active participation, OER risks becoming stagnant legislation with limited progress.

Moderator – Michel Kenmoe

Various stakeholders engaged in discussions about the importance of Open Education Resources (OER) and the challenges associated with its adoption. It was universally agreed that raising awareness among decision makers is crucial for OER adoption. Decision makers play a significant role in implementing and supporting OER initiatives. Developing OER strategies helps raise awareness and garner support from stakeholders.

The involvement of middle to top-level management was seen as vital for the successful implementation of OER. Without their support, gaining buy-in and implementing the recommendations for OER adoption would be difficult. This highlights the importance of securing support from influential individuals within educational institutions and policymaking bodies.

One major challenge in realizing OER strategies is concerns over funding. Governments are particularly concerned about finding adequate resources to support OER implementation. One suggested solution is for governments to ensure that part of the budget for OER production is supported by donors. This approach would alleviate the financial burden on governments and facilitate the production of open educational resources.

Designing OER strategies requires a collective effort involving multiple stakeholders. It was observed that five countries successfully developed their OER strategies through such collective efforts. This highlights the importance of engaging all relevant stakeholders, including educators, policymakers, and educational institutions, in developing and implementing OER strategies.

An important observation from the discussions is that many West African countries lack a dedicated budget for educational resource production. This poses a significant challenge to implementing OER strategies. The absence of a budget specifically allocated to educational resource production hinders the development and dissemination of OER. Therefore, it is imperative to raise awareness about the importance of investing in educational resource production and secure adequate funding to support OER initiatives.

In conclusion, the discussions on OER emphasized the need for raising awareness among decision makers, securing middle to top-level buy-ins, addressing funding concerns, fostering collective efforts involving multiple stakeholders, and promoting investment in educational resource production. These insights are crucial for the successful adoption and implementation of OER, contributing to the goal of quality education (SDG 4) and partnerships for sustainable development (SDG 17).

Patrick Paul Walsh

The stakeholders involved in the discussion, including government, academia, the private sector, and intergovernmental systems, agree that engagement is crucial for a comprehensive partnership. They recognize the need to work with UNESCO, SDSN, and a joint committee to implement the UNESCO-EOR recommendation. Additionally, there is a partnership agreement in place to manage an open education resource overlay platform, repository, or journal.

To ensure the quality of submitted courses, a rigorous quality assurance process has been established. Courses are evaluated not only for their academic and scientific content but also for compliance with UN policies and legal frameworks. The objective is to provide a community of practice with guidelines and playbooks on ensuring quality in submitted courses.

Various educational technologies are being used to manage and organize the courses. This includes open journal systems, copyright licensing management, and other tech tools. The effective utilization of these technologies is considered essential for managing the courses.

Community engagement is emphasized as a crucial aspect of the project. Collaborating with various user groups such as governments, corporates, academics, and schools is necessary to develop the required metadata and effectively manage the archives. This collaboration is referred to as “diamond engagement” and is seen as essential for the system to work effectively.

The freedom to create and contribute to a global knowledge commons is a fundamental principle. The open education resource recommendation supports the creation and contribution of educational content to the global knowledge commons. The content should be easily accessible, and everyone should have the opportunity to contribute freely.

The project also places importance on accessibility and inclusivity. Materials, including slides and videos, should be made accessible to all, including those with visual impairments. Ensuring compliance with disability regulations and providing equal access for everyone is considered crucial.

The decentralization and adaptability of open education resources to local contexts are promoted. It is essential to make sure that the resources can be repurposed and translated to suit specific local contexts. This flexibility ensures that the resources remain relevant and applicable in different regions.

There is a concern about the control of academic work archival by commercial entities. The argument is that academic works should not be owned by private entities, and hosting and archiving should be done by libraries rather than commercial entities.

Decentralized repositories are seen as beneficial as they allow for easy updates of courses. This enables courses to be updated locally and reuploaded to the system, ensuring that the content remains up-to-date and relevant.

Behavioral issues and the psychology of implementing digital infrastructure are important factors to consider. Jeffrey Sachs has highlighted the reality of sunk costs in initiating such projects, and the marginal costs of implementing digital infrastructure are relatively low. There is also the potential to add commercial value to the project, which could eventually generate returns on investment.

Government mistrust in receiving returns on their investments poses a significant challenge. The argument is that governments need to invest now for future returns, but past experiences of not receiving expected returns have eroded their trust.

There is disagreement regarding the commercialization of open education resources. While some reject the idea of commercializing the infrastructure or content, others propose value-added commercialization with profit-sharing arrangements if a private entity gains income from the public resource.

Advocacy exists for public or stakeholder ownership of open education resources. The argument is that open education resources should be either publicly owned or owned by relevant stakeholders to ensure their accessibility and availability to all.

In conclusion, the stakeholders involved in the discussion emphasize the importance of engagement in building a comprehensive partnership. Quality assurance processes have been implemented to ensure compliance with UN policies and legal frameworks. Various educational technologies are being utilized to manage the courses effectively. Community engagement is crucial for developing metadata and managing archives. The discourse on open education resources highlights the freedom to create and contribute to a global knowledge commons, as well as the need for accessibility, decentralization, and public ownership. Behavioral issues and government mistrust pose challenges, but there are also opportunities for commercial value and return on investment. Collaborative efforts and a shared vision are crucial for the successful implementation of open education resources and the promotion of quality education for all.

Melinda Bandaria

In order to create a more inclusive education system, it is crucial for teachers to have an awareness of who is excluded and the reasons behind their exclusion. Some common barriers include the cost of learning materials, physical challenges such as hearing or sight impairment, language barriers, and cultural diversity. By understanding these barriers, teachers can better address the needs of excluded students.

To enable more inclusive teaching and learning, teachers should possess knowledge of accessibility guidelines, universal design for learning, and cultural and linguistic diversity. The Web Content Accessibility Guidelines provide a framework for making online platforms accessible to different types of learners. Integrating the basic principles of universal design for learning into Open Educational Resources (OERs) ensures that they can be accessed by all students. Furthermore, translating OERs into local languages and respecting cultural diversity can enhance inclusivity.

Open Educational Resources (OERs) are a valuable tool in making teaching more inclusive and breaking down barriers. OERs address the cost barrier of learning materials, as they are freely available for use. They can also be modified to integrate features of universal design for learning, tailored to meet the needs of diverse learners. Additionally, translating OERs into local languages ensures that content is accessible to students who face language barriers.

Teachers need to possess the necessary skills and knowledge to make OERs more accessible and inclusive. Training programs for teachers should include training in cultural and linguistic diversity, understanding copyright laws and licences associated with OERs, and the ability to convert OERs into alternative formats such as OJO, Braille, and simplified text. Making OERs compatible with assistive technology and determining the readability of materials are also important skills for teachers to have.

The training for teachers should not stop at developing OER materials but should go beyond that to include a wide range of knowledge and skills to make OERs more inclusive and accessible. This requires ongoing learning and continuous professional development. Teachers should not only develop and share OERs but also make them accessible and inclusive for all learners, which necessitates additional knowledge and skills.

To ensure the quality of OERs, a quality assurance framework is important. This framework enables the evaluation of the OERs that teachers use, ensuring that they meet certain standards of quality. It serves as a guide for teachers in selecting and utilising high-quality OERs that enhance inclusivity in education.

Both teachers and universities have a role to play in ensuring the quality of OERs. Teachers are crucial in creating and sharing OERs, while universities can support them in this process. OERs are often reused, remixed, translated into local languages, and shared by teachers and universities, making collaborative efforts essential in enhancing the quality and inclusivity of OERs.

Policies should be implemented to promote the development and use of OERs. Institutional policies can actively encourage the use of OERs, creating a supportive environment for teachers. Moreover, it is beneficial to use public funds to produce OERs and make them open access, ensuring that cost is not a barrier to their availability.

Incentive systems for faculty members are also important in promoting the use and creation of OERs. Especially for universities, providing incentives to teachers and faculty members who utilize and create open educational resources helps foster a culture of innovation and inclusivity in education.

In conclusion, creating a truly inclusive education system requires teachers to have an understanding of barriers and exclusion, as well as the necessary skills and knowledge to make learning materials accessible and inclusive. Open Educational Resources (OERs) serve as a powerful tool in overcoming barriers and promoting inclusivity. By implementing policies and providing support, both teachers and universities can play a vital role in ensuring the quality and accessibility of OERs. With ongoing training and incentives for faculty members, education can become more inclusive for all learners.

Zeynep Varoglu

The OER (Open Educational Resources) Recommendation 2019 was unanimously adopted by all member states, providing a clear definition of OER and focusing on capacity building, policy implementation, quality assurance, inclusive multilingual OER, sustainability, and international cooperation. Zeynep Varoglu played a significant role in presenting and supporting the OER Recommendation 2019.

Open procurement models have become popular for developing and sustaining OER projects, although their effectiveness can vary depending on the country or context. While open procurement is seen as a transition to a more sustainable OER model, its implementation may face challenges in certain countries.

Multi-stakeholder working groups play a crucial role in monitoring policies and ensuring the success of OER initiatives. These groups can adapt to changes in OER through collaboration and representation of perspectives from all stakeholders.

Community engagement is identified as critical for the relevance and success of OER initiatives. Incentives and recognition are important for motivating individuals at all levels to actively participate in advancing OER goals.

The OER Dynamic Coalition event at the Internet Governance Forum (IGF) is a vital platform for knowledge sharing and collaboration among stakeholders. With around 500 participants from government, institutions, and civil society, it focuses on implementing the OER Recommendation.

The importance of openness in education and knowledge sharing is emphasized during the event. Zeynep Varoglu actively supports this idea, advocating for openness in education.

In conclusion, the OER Recommendation 2019 provides a comprehensive framework for the development, implementation, and sustainability of OER initiatives. Stakeholder involvement, such as Zeynep Varoglu’s support and multi-stakeholder working groups, along with community engagement and platforms like the OER Dynamic Coalition event, contribute to advancing OER goals. Emphasizing openness in education and knowledge sharing is crucial for promoting inclusive and quality education globally.

Lisa Petrides

The Institute for the Study of Knowledge Management in Education, led by Lisa Petrides, focuses on various aspects of Open Educational Resources (OER). They are involved in building OER libraries, providing professional development, and researching the impact of OER. Moreover, they emphasize the significance of OER repositories as the infrastructure supporting libraries. The institute promotes the implementation of the CARE framework, which prioritizes good stewardship of OER by emphasizing contribution, attribution, release, and empowerment. They also stress the importance of understanding the provenance of resources to build a transparent knowledge base. Additionally, the institute advocates for the accessibility and inclusivity of OER, viewing educators as experts in their knowledge, promoting decentralization in knowledge distribution, and resisting commercial private partnerships in education. They emphasize the need to integrate various open areas, such as education resources, pedagogy, data, science, access, and publishing, for better outcomes. Through these efforts, the institute aims to contribute to quality education and drive positive changes in the education system.

Session transcript

Moderator – Michel Kenmoe:
in Senegal. No, no, no. So it’s a pleasure to have you. We hope that our other panelists will be able to join us online and that they can participate in this session. So let me check once again. Do we have Zeynep online? I just rang her. She’s coming. Okay, great. Okay, thank you. So, and Neil? Neil Boettcher? Neil Boettcher is not yet online. Yeah. I hope, I hope while, while waiting for them to join, why don’t we give us two minutes for each of you to introduce yourself. Let’s say one minute, not two. Okay. This isn’t quite about one minute. Yeah, okay. Yeah. One minute.

Lisa Petrides:
Hello. My name is Lisa Petrides and I run the Institute for the Study of Knowledge Management in Education and we build OER libraries and we do professional development and we do a lot of research around the impact of OER.

Tel Amiel:
My name is Tel Emil. I’m a professor at the University of Brasilia. I had the UNESCO Chair in Distance Education and we had the Open Education Initiative, which is an activist research group for open education. Thank you. Over to you.

Patrick Paul Walsh:
Yeah, you have it. Yeah. Hello, everyone. So, my name is Patrick Paul Walsh. I’m a full professor at University College Dublin, but on secondment to the UN Sustainable Development Solutions Network as Vice President of Education and Director of the SDG Academy. Thank you.

Dudley Stephen Wyber:
Thank you very much. My name is… Stephen Weiber, I’m Director for Policy and Advocacy at the International Federation of Library Associations, which is sort of the global peak organization for libraries of all sorts.

Moderator – Michel Kenmoe:
Thank you very much, dear participant. We want to wish you a warm welcome to this session on the transformative role of open educational resources in digital inclusion. We are going to start the session by listening to an opening remark from Mr. Taufik Djelassi, who is the Assistant Director General for Communication and Information.

Tawfik Jelassi:
Excellencies, ladies, and gentlemen, dear colleagues, I am pleased to address you today at the 2023 IGF Forum and the first session of the Open Educational Resources Dynamic Coalition. This year’s theme, The Internet We Want, brings together policymakers, experts, civil society, and businesses to tackle the challenges and opportunities in our evolving digital landscape. UNESCO is committed to fostering dialogue and cooperation for a more inclusive, secure, and sustainable internet for all. We envision a digital ecosystem where the internet serves as a powerful tool for learning, and open educational resources play a pivotal role to increase access to quality education worldwide. In 2019, UNESCO adopted the recommendation on OER, which is a UN normative instrument to support inclusive access to digital learning platforms. Today, we gather in Kyoto, the ancient capital of Japan, to explore the transformative potential of OER in the age of the internet. Where information is needed. information, and educational materials are abundant. In alignment with the UN Secretary-General’s call on our Common Agenda, UNESCO has been advocating for the adoption of openly licensed digital education tools to be accessible through the Internet. The 2019 OER recommendation guides our efforts towards an open, accessible, and equitable education future. It emphasizes international collaboration for content, capacity, and infrastructure, aligning with the Global Digital Compact principles for an inclusive, open, secure, and shared Internet. Central to our discussion is the recognition of digital public goods, especially OER, defined by the UNESCO OER recommendation. The five areas of action, namely capacity building, policy support, inclusive and multilingual quality content, sustainability, and international collaboration, form the foundation for accessible online learning platforms benefiting both learners and educators. Digital public goods, such as OER, drive sustainable models of education, knowledge sharing, and innovation, thus contributing to the sustainable development goals, including quality education, access to information and ICT, gender equality, and global partnerships. This session is not only about dialogue, it’s a call for action. Digital transformation is rapidly reshaping societies. The platform society is intertwining digital platforms and artificial intelligence. We must navigate data privacy, transparency, and governance intricacies to effectively harness their potential. We call for all governments, partners and stakeholders to unite to implement the 2019 OER Recommendation and other norms that cultivate open and secure spaces for education. As stakeholders, our collective efforts through the OER Dynamic Coalition are crucial in shaping an inclusive, equitable and digitally empowered future via open educational resources. Your contributions will be invaluable in advancing our shared mission. Dear participants, UNESCO has been actively promoting open educational resources to expand access to quality education worldwide, underlying principles such as openness, accessibility, privacy and freedom of expression in the digital age. The OER Dynamic Coalition brings together stakeholders from various sectors to build values and principles guiding the development and use of the Internet. Let us work together to ensure that the Internet remains a force for good, advancing human rights and sustainable development. Thank you for your kind attention.

Moderator – Michel Kenmoe:
Thank you to the ADG, the Assistant Director General for Communication and Information at UNESCO for this opening remark in which, among other points, he highlighted that this meeting is about a call for action. We were normally to have Zeynep to present the Dynamic Coalition. I don’t know if Zeynep is online. Zeynep? So far, she’s not. It’s not yet online. So we are going to have a series of session during which some of our panelists will have shared their experiences from the different initiatives in which they are involved throughout the world. So I’m going to invite Dr. Melinda Bandaria to share their experience on the critical role in developing, creating, and reusing, as well as adapting and sharing OER. What skills do teachers need to ensure that the OER used in the courses is inclusive and accessible? Over to you. She’s joining us online.

Melinda Bandaria:
Yes. Thank you very much, and good day to everyone. Thank you for having me in this session to share my perspective about OERs and the important role of teachers in making OERs accessible and inclusive. So as introduced, I am Dr. Melinda Bandaria, and I am participating from the Philippines. I am also full professor and chancellor at the University of the Philippines Open University and appointed as ambassador of Open Educational Resources by the International Council for Open and Distance Education and has been actively involved in the OER Dynamic Coalition of UNESCO. So as to the question, considering that teachers and educators play a critical role in developing, creating, reusing, adopting, and sharing OER, so what are the skills and knowledge do teachers need to have so that we can ensure that OERs that are being used in their courses are inclusive and accessible? As we go through the skills and knowledge, it should also guide us in terms of developing training programs, courses for OERs, especially with the participation of our teachers. So… First, teachers need to know who are excluded in the teaching and learning ecosystem and why they are excluded. This knowledge would enable the teachers to put in place mechanisms and implement strategies to address the identified barriers. So in most cases, the barrier has to do with the cost of the learning materials, which using OERs aims to address. The other common barriers include physical challenges like hearing or sight impairment, language, given that most OERs are in English language, and other learners may feel excluded because of disregard to cultural diversity. So considering this, the teacher should have knowledge on the following. First is accessibility guidelines, like for instance, the Web Content Accessibility Guideline to make the online platform accessible to various types of learners. Universal design for learning, the knowledge about it can guide the teachers on how they can integrate even just the basic principles of universal design for learning to the OERs that they will be using, especially given the nature of the OERs that they can be reduced, then teachers can integrate the basic features of universal design for learning to these OERs. Cultural and linguistic diversity or making the content inclusive. In one of the studies conducted in Southeast Asia, one of the barriers cited by students on the use of OERs is that they are not available in the local language. So teachers can translate these OERs that they will be using in their courses and make sure that there is respect to cultural diversity, that there’s nothing in the content that would be offensive to a specific person.

Moderator – Michel Kenmoe:
Thank you, Dr. Melinda, for your input and for clarifying some of the principles that may actually help teachers to create content that are inclusive. Let me return to Zeynep, who informed that she’s not. online. Zeynep, can you make a short presentation of the OER dynamic correlation before we move forward? Yes, can you hear me? Yes. Okay, can you put up

Zeynep Varoglu:
the slide? Is it possible or not? No? If it’s not, it’s okay. This is the second slide. Otherwise, I’ll just go on. It’s a great pleasure to be here with you today. I’m very sorry there’s something wrong with the camera and I will try and fix it during the course of the session. I would just like to present you very quickly the OER recommendation 2019. This recommendation was adopted by all member states by consensus in 2019 and it basically has a very clear definition of OER which explains to you exactly what OER is and what it is not. I will read it out to you right now. The definition is that any learning, teaching or research material in any format that resides in the public domain or is under a copyright that has been released under an open license that permits no-cost access, reuse, repurposing, adaptation and redistribution by others. There is a clear definition of open license. I would invite you to go to the website of UNESCO, look up the name of the OER recommendation 2019 to have the full text. There is five areas of action and we’ll be going through each of the areas of action in this presentation. The first one is capacity building, the second is policy, the third one is on quality, inclusive multilingual OER and the fourth is on sustainability and the fifth one is on international cooperation. And the international cooperation is the basis of this OER recommendation, which of this OER dynamic coalition, which brings together the panel before you. I’d just like to also point out that the stakeholders in this recommendation are the entire knowledge community. So we have the education community, we have libraries, museums, and we have also publication. You have on the screen in the chat, if you’re online, you have the text of the recommendation there. We have a very full panel, so I will stop here and give the floor back to you, Michelle, to continue.

Moderator – Michel Kenmoe:
Thank you, Zareb. Can we stop the presentation, please? Yes. Okay, thank you. I want to check to know if Mr. Papaluga is online. Has he joined us? Oh, no. Papaluga was to share the experience on how learners can draw from the various cultural, linguistic, and socioeconomic background to create inclusive OER content. If he’s not online, then let me check if Ms. Jian Osman. Is he online? If not, can I check to make sure that Mr. Nail Bocha is online? Nail? Yes, I am online. Thank you, Nail. So this gives us the opportunity to move forward with the second part of this presentation, where I’m going to invite Elisa Pretodist to share her experience on OER repository. Elisa?

Lisa Petrides:
Yes, thank you. Will the slide be on the screen? Would it be easier to share from my screen? He can share. Okay, go ahead. I’ll use the right slide. But I’ll not be able to move. with this. Great, thank you so much. So I want to talk about really the sharing of knowledge and what that means in terms of OER libraries and repositories. The repository is really the underlying infrastructure of libraries. They’re vast and diverse. They’re across the world. They contain often metadata description of how content is created and used and adapted, which is extremely important. It’s not enough to have platforms where these content reside, but it’s equally as important to know, have very good descriptions for both the educator as well as the learner who is going to be using these resources. It’s not enough just to have a whole library if we don’t really understand what’s in it and why we might want to use it. Just like the librarian in a physical library, that person is probably one of the most important people in terms of their function, in terms of the search and discovery. So similarly in the online content, we rely on the metadata and often librarians behind the metadata creation to guide us through that kind of content. I want to just talk about this through the CARE framework, which is something that you can look at careframework.org because it’s, what it is, is the CARE framework is a way to show what is good OER stewardship and how to become OER stewards of OER. And so I thought it might be an interesting way to apply the CARE framework to platform and tools and how they can be designed in a user-centric way. So the first part of CARE, so its contribution is the… see attribution, release, and empower. So contribution is about advancing the awareness, improvement, and distribution of OER. And what this means specifically in terms of platforms and metadata is that we really have to focus on portability, interoperability, and the ability to adapt or localize. In terms of attribution, we’re talking about conspicuous attribution. And what I mean by that is if we don’t know the provenance of the resource, where that resource came from, how it’s been used along the way, we really lose the ability to describe and build a transparent knowledge base. And as you heard Zeyneb talk about in the OER recommendation, what we’re trying to create is really a commons, the knowledge commons around OER. The third piece, 30 seconds, did you say? No? Release, making sure that the content can be used beyond the platform in a way that it requires the platform to be interoperable with others. And last is empower. And perhaps I think one of the most important attributes today is meeting the needs of all learners, including those who have been traditionally excluded. So this requires content that is culturally relevant, inclusive, and accessible to those with disabilities. And again, when we think about the metadata that’s describing this content for search and discovery, I think that the CARE framework really helps to illuminate what those factors are. Thank you.

Moderator – Michel Kenmoe:
Thank you, Elisa, for sharing that on the importance of metadata and also OER repository. I’m turning now to Stephen to ask about the importance of collaboration, how to make this possible, collaboration between educational stakeholders to support OER initiatives.

Dudley Stephen Wyber:
Thank you very much, Michelle. And thank you for the invitation to be here today. I think just as an introductory point, there’s a lot of talk at the moment about digital public infrastructures and digital public goods. And OER is such a powerful example of this and is so often overlooked. So it’s really important that we’re having this session here today. So I think. At risk of repeating Lisa’s points, but without an attractive acronym to make sense of them so that everyone can take notes. The roles that libraries tend to play, often it is, I think as you said, supporting with a discovery awareness. As we know, the fact that something is available on the internet does not necessarily mean that it’s actually accessed or used. There’s an awful lot of shouting into the void online. Libraries have proven effective in so many cases and actually then updating their original roles of putting people who need knowledge in touch with knowledge, raising awareness of the possibilities. I think combating some of the assumptions there are that because OER is free, it’s worthless. And there is always this sort of human tendency to believe that unless you’ve paid for something, it’s not worth it. Wrong. Overcoming some of the ideas and the prejudices that doubtless exist about OER as resources. I think Lisa’s already covered the point about curation, but I think curating in a way that responds to need. Actually, again, bridging the materials that are out there, the resources that are out there, working with faculty, working out what’s actually there. So again, there’s that bridging role in there. I think, once again, working with educational stakeholders to take a critical overview. And I’m conscious, again, I risk echoing Lisa in this point that clearly the landscape of OER that’s available right now is, it is primarily from some parts of the world. There’s an awful lot coming from the parts of the world that have produced traditional textbooks and traditional materials. But given the training and given the experience they have in trying to evaluate the whole of knowledge that’s available, librarians can have a really powerful role working with stakeholders to think, well, what’s missing? What are we not seeing as opposed to what we are seeing? And actually then, once again, again, working to make sure that we’re coming up with OER that fits. I’m going to jump to the last point, but also in that role of encouraging. Librarians can have a really powerful role, too, in giving guidance about how do you use rights, what are the options, what are the channels for faculty, for education stakeholders to feel sufficient agency, sufficient empowerment to produce their own, which really does require to work with materials that are there, to produce their own materials, to share them, to really actually deepen that knowledge commons. And then I think the final point is, please do count on librarians as allies in pushing for legislative and regulatory frameworks that are favorable, that have decent educational exceptions in copyright, so that you’re not unnecessarily held back in using materials for educational purposes. It fits within the recommendation, but it’s an ongoing fight.

Moderator – Michel Kenmoe:
Thank you very much, Stephen, for sharing this. I’m going to turn to Patrick. When we are considering stakeholder engagement, I think the private sector can play a key role in this. So Patrick, what are the strategies that we can put in place to engage the private sector?

Patrick Paul Walsh:
Yeah. So the answer, just to say, the question I prepared is the broad partnership, which is a partnership between government academic libraries, intergovernmental system, and the private sector. So it’s the whole comprehensive partnership. So we have signed, or we’re working with UNESCO, SDSN, and a joint committee to implement the UNESCO-EOR recommendation. And we have a partnership agreement that we’re going to run what’s called a open education resource overlay platform, or repository, or journal, whatever way you want to think about it. And basically, we basically want to have courses submitted to us that we can quality assure and recommend. that we can put into archives that are properly metadata, open license, et cetera, quality assured, and then they can be used inappropriately in government for educational training or corporates or schools or academia in their courses. And of course, the whole reason for demonstrating this on the essay, for example, if we did it with SDG Academy courses, which are all up on edX, is to really show a community of practice how you’d actually do this with guidelines and kind of playbooks that people could actually apply this in other contexts. But just to give a sense of the partners and what’s going on. So one, people should be able to submit their LMS as their courses, and they’d be refereed and not just refereed from the point of view of academic and science content, but also adherence to say UN policy or UN legal frameworks, et cetera. So they’re quality assured and published in the normal academic way. When they went to the repositories, they will follow fair care and Farr’s principles. So thank you for explaining the care principles. But basically, this stuff has to be findable, accessible, interoperable, reusable, but there has to be what I call good citizenship or stewardship of it and also good governance of it. You do need quite a lot of ed tech, and I’ve actually listed all the kind of ed technologies that you’d have to use for this type of, let’s call it publication or e-publication in terms of the open journal systems or the way you would do your copyright licensing or the way you would manage your indicators and metadata and so on and so forth. But just to give you a sense that, and just two seconds then. So where the partnership comes in though, when we’re developing the metadata and how it’s archived, we have to talk to the users, and the users are governments who have training in their LMSs, the corporates. who have their HR training, the academics who are, and schools who are doing their curriculum and their courses. And in a sense, you have to have what we call the diamond engagement. So it’s not enough just to do diamond publication, which is free to publish and free to use, but you actually have to work with the curators and then the users to get the whole system working effectively, or else it’s not going to work.

Moderator – Michel Kenmoe:
Thank you. Thank you very much for three of you for this session, during which you have shared your experience on how to achieve a multi-stakeholder approach into the development of OER, and also on how to engage the different stakeholders in academia, the private sector, and the realization of inclusive OER. I’m going to turn to Zeynep for the next session, the next panel. Zeynep?

Zeynep Varoglu:
Thank you very much, Michel. We have the pleasure now to look at Now and Forever, about sharing resources within a policy framework and within the framework of sustainability. Our first speaker is Neil Butcher, who’s going to look at issues related to national education policies. Neil, the floor is yours.

Neil Butcher:
Thank you very much, and greetings, everyone, from Johannesburg in South Africa. As you can see here, I’m focusing on national education policies. I think what we’ve seen in the world of OER is that sustainability really depends on governments developing and implementing sustainable policies. There’s a lot of OER policies. Unfortunately, many of those policies exist in paper, but are not really being implemented in practice. And I think in the context of the discussions on accessibility today, it’s important just to recognize that 15% of people around the world have some form of disability. So governments really are the key agency that are going to be responsible for ensuring that the good ideas that we’ve heard about in the previous presentations… are implemented and sustained and financed. So we’ve spoken about the importance of content accessibility, the application of critical principles, the repositories that are available to support web accessibility, and so on. And so in the bottom bullets, what I’ve just tried to unpack is some of the important things that are critical for national policy. And I think that starts with bullet four, which is to develop policies that provide for the understanding and application of open licenses to content and software. This may seem like an obvious point, but if our intellectual property and copyright policies nationally are not providing for and enabling government agencies to use open licenses, then it’s unlikely that that will actually ever be done. We also then need in our policy to unpack the meaning of digital accessibility and its practical implications for policy. And the practical implications are the important part. There’s a lot of lip service to the importance of digital accessibility, but the kinds of ideas you’ve heard about in the previous slides really need to be documented in policy and the implications for content development and other processes that are being funded by governments need to be stated very explicitly. So these explicit requirements about digital accessibility need to be contained in the policy, and they need to be binding in the sense that when governments are spending money on content development, there needs to be an obligation that this is built into what government agencies are expected to procure. So the accessibility plan for existing national and other education initiatives, the kinds of ideas we’ve heard about in the previous presentations on the repositories, these initiatives are really important, but if government is not committing to sustaining them on an ongoing basis, we’re unfortunately not going to see that the kind of impact that we’re looking for and that’s been discussed by my colleagues. And so that will bring me to the last point that I consider to be the most important. is policies need to be explicitly stating what the accessibility considerations should be for content creation projects, for educational projects, and how those need to be embedded in the procurement processes. So I think this is the key hurdle at which we tend to stumble, is that we have a lot of good principles and ideas often documented in policies or contained in guidelines, but when we get to the point of procurement and when there’s urgency to move ahead with, say, procuring a content creation policy for the development of educational materials at national level, unfortunately, the procurement processes don’t enforce obligations for the service providers to make sure that the content they’re creating adheres to accessibility guidelines and making sure that that’s a condition of payment for the services being received. So unfortunately, what tends to happen is the contracts are executed and this critical consideration of accessibility is left on the sidelines. So I would say of all the things that we can do that would be most important, we lead up to this one. If we don’t include some references to the importance of accessibility and making sure that there is accountability for delivering those obligations in the procurement process, all of the other excellent work that we might’ve done will unfortunately have been for nothing. So I think those are some of the critical guidelines at the national level we need to consider. Thank you very much.

Zeynep Varoglu:
Thank you very much, Neil. It’s a very clear presentation on the national policy issues. Colleagues, I’d also like just now to ask Melinda Bandeleria to kindly come back to the point that was started at the beginning, which was on bringing this national policy into the classroom in terms of institutions. The person, the colleague who’s kindly taking care of the slide, if they could go to the second slide, you’ll see the slide from Melinda. Melinda, the floor is yours.

Melinda Bandaria:
Yes, thank you very much, Zeynep. And as I mentioned earlier, the skills and knowledge that the teachers should have so that they can make OERs more accessible and inclusive should guide the policies and also in developing training programs for teachers. So I have mentioned already cultural and linguistic diversity, and also the knowledge about copyright laws and licenses that are associated with OERs. So about the skills that should also be integrated into the training programs for teachers. Of course, teachers should know how to convert their open educational resources materials into alternative formats such as OJO, Braille, or even simplified text to cater to students with different needs. They should also have the skills to provide captioning and transcription for hearing impaired learners when reusing OERs and be able to provide descriptive text for hyperlinks and alternative texts for images, especially for those who are screen readers. And of course, the technological skills will be very handy so that they can make sure that the OER platforms and materials that they are using are compatible with assistive technology that the learners, different types of learners will have access to. And the most probably, we are not very much conscious about this, is for them to determine about the text readability of the materials that they are using, and knowing how to determine, like using different mechanisms like the Pug Index measurement. So at the end of the day, it is also making use of the technology platforms to make this materials open educational resources material. So what I’m trying to emphasize here is that our training for teachers should not stop with them developing. sharing, knowing the licenses appropriate for the materials that they are producing, but also acquiring this different knowledge and skills, which are essential to make the open educational resources that they are using more accessible and inclusive to the various types of learners. So, I think that’s all from my end. Thank you very much for allowing me to finish my presentation and contribution to this forum. Good day to everyone.

Zeynep Varoglu:
Thank you very much, Melinda. Thank you very much. So, there’s a very concrete response to policy, which is put into action at the national level and at the institutional level. And with that, I’d like to turn the floor now to give the floor now to Michel. Michel is a communication information advisor from UNESCO Dakar, and he will talk about successful example of an OER initiative at a regional level, which can serve as a model of good practice. So, Michel, the floor is yours. I think it’s further on.

Moderator – Michel Kenmoe:
Yeah. Thank you, Zainab. As part of our initiative for implementing the OER recommendation in West Africa, we started by conducting research with the different stakeholders, the academia, the teacher training institution to understand what are the shortcomings that may prevent the adoption of OER. And we came out with some observation that without middle to top level buy-in of the OER, it’s going to be difficult for most of the country to actually engage in the implementation of the recommendation. So, we turn out to raising awareness among decision makers, the Minister of Education, Minister of Youth, and Minister of Higher Education, and also all the middle decision makers within the education sector, and also to explain to them the importance of OER and how OER can actually contribute to quality education within the country. country. And this led to what? This led to commitment from many of the countries in West Africa to develop a national strategy for OER. So we start with Burkina Faso. So we were successful in actually developing, with the Ministry of Higher Education, a national OER strategy. And that is yet to be validated. As you know, the country has been into some troubles, and then this has halted the progress toward the adoption of the OER national strategy. We also succeeded in convincing the Senegal to engage in the elaboration of its own OER strategy. So today, we are working toward the validation of the national strategy. It was a collective effort with multiple stakeholders involved in the design of the strategy. It’s covered all the dimension of the OER, actually contextualizing with the reality of each country. We also made the same thing in Togo, where the country also engaged in the development of the OER strategy, the same thing in Congo, and also in Djibouti. So so far, we have about five countries that are in the process of adopting the OER national strategy. And all the strategy, what is really interesting is that the very process of elaborating the OER was quite interesting in raising awareness for the recommendation. Because by being involved in the process, many came to have a better understanding on the why and on the importance of the OER recommendation. So that today, we are having in many of those countries, there are a team of experts who are becoming the advocate for the OER within the country. So the challenge that we see so far is the challenge of funding. We have seen that everywhere where the strategy was developed, there was this concern about how the government is going to actually fund. How are they going to find the resources to actually support the realization of the strategy? And one of the suggestions was that government can actually ensure that from now on, whenever there is a project with donors involving the production of educational resources, government should ensure that at least part of the project support the production of open educational resources within the country. So we hope that with that experience, we are still in the process of the adoption of the OER strategy. But the strategy in all those countries has already been elaborated, but still to be validated at the national level. So this is what I can share regarding the experience that we have in West Africa. Thank you.

Zeynep Varoglu:
Thank you very much, Michel. And thank you for sharing this experience in West Africa, which is very, very, very strategic. I’d like to give the floor now to Dr. Talamil, who is the Jung Professor and UNESCO Chair in Distance Education at the University of Brasilia, Brazil, who will talk about sustainability models. Tal, the floor is yours.

Tel Amiel:
Thank you. So one of the things that we have to worry about based on a couple of presentations that came before is, what does it mean to be sustainable for OER? And of course, the first thing is the issue of money, right? Whenever we’re funding these kinds of things, just like when we talk about free software, we know that development of free software and development and sustaining of these projects takes money. And so there are many ways. And I just want to highlight three. One of them is related to what Michel just mentioned, which is the idea of open procurement. So many governments around the world are trying to implement open procurement in their systems. And so there are many ways of doing that. And so one of the things that we have to worry about is how do we build the infrastructure to be able to be able to do that? And so I think the first thing is, how do we build the infrastructure to be able to be able to do that? And so I think the first thing is, One of the ideas that we push the most is this idea that if you’re using public funds, you should have public assets, public goods. So open procurement models are very popular, but I think they fluctuate quite a bit. I mean, in some countries, it’s very easy to push the idea of complete open procurement. Everything that you produce with public funds should be open. In other countries, we have to be open to the idea that this might not work exactly as we expect. You have to be more restrictive on your licenses, or not all resources will be open, and some will and some will not. I like to think of open procurement as a transition, you know, especially if you’re going from an all rights reserved model. You have to kind of try different ways of making this work until maybe eventually you’ll get complete open procurement. But there are other ways to do this. Just like with free software, we have models for open with added value. So you might provide the resources for free, which is a key stone of OER. The resources must be free, but then services like customization or training and all these kinds of other things can be by cost. And then also, something that doesn’t last forever, but is good to get things started, particularly in new projects, whether it’s in a government or institution, is partnerships and donations from foundations. I think people are very keen on funding these kinds of things for openness. But then the financial aspect is one. For OER particularly, there are two others. Neil mentioned policy, so I’ll be very brief on this, but it’s not just about putting the policies on paper. We have plenty of those, and some are much more effective than others. But one of the things that works really well is having working groups that are cross-sector. We’ve heard a lot about multi-stakeholders, but actual multi-stakeholders with people actually doing things and representing their corners of the world, doing things together and monitoring these policies. That works really well in many countries. That has worked very, very well. And groups that can evolve, right? So OER is not something that stands over in time as one solid thing. The entry of generative AI has changed quite a bit our perspective. and OER, so we have to have people thinking about this from the perspective of teachers and legal issues and so forth. So these working groups work for that. And finally, OER, as it’s an educational endeavor, that’s the core. The practices around OER is what really matters. And so if you don’t have community engagement, if you don’t have people that are buying into this at all levels, it makes absolutely no sense. It’s just legislation. It’s just money. It’s just resources, right? So we have to have people that have incentives, that have recognition for doing these kinds of things, and that can continuously raise awareness about where OER is at that moment.

Zeynep Varoglu:
Thank you very much, Kel. And right now, we’ve been very efficient with our time, so I’d just like to take this opportunity to actually do a meta discussion, because in fact, in front of you today on the screen, the majority of these colleagues are actually on the advisory board of the OER Dynamic Coalition. And this is the first OER Dynamic Coalition event at the IGF. We’re all very honored to be here before you. We have in the OER Dynamic Coalition, it was started in 2020. And the principle of the Dynamic Coalition was started in 2020, and we became an official IGF Dynamic Coalition in March 2023. But the spirit of Dynamic Coalition was in the body of this text from the beginning of the discussions, and in the background document to the text of the recommendation, which was presented before the member states. And before you, and in many of the presentations we have had, Melinda, who is the advisory board chair for capacity building, also Lisa, who is for policy, and Kel, who is for sustainability, and Neil for communication. And I’m sorry, I’m just going through the list. You have the different members of the party of the. of the advisory board. And these, the OER Dynamic Coalition has brought together, brings together up to now 500 stakeholders from the different stakeholder groups that were presented at the beginning of the session, the knowledge, community, education, culture, and also publications. And we bring together stakeholders from government, institutions, and civil society. And it focuses on knowledge sharing and collaboration in the implementation of the recommendation. And this format has turned into a very useful way of maintaining the dialogue and maintaining discussions and making the issue of the implementation of the OER Dynamic Coalition a priority for governments and institutions to date. And it’s a great pleasure to be here before you. We have some time ahead of us. So I would just like to maybe ask the panel two questions that were in the discussion, but unfortunately we haven’t had time to look at it, but I will just perhaps put it to the panel for the moment. The first one is how OER can be tailored to diverse needs of learners in terms of cultural, linguistic, and socioeconomic backgrounds, fostering inclusive learning. And this goes to the area of the recommendation, which deals with quality, multilingual, inclusive OER. Could I ask perhaps if there are anybody, I don’t want to put anyone on the spot, but I will nonetheless do so. I hope you don’t mind. Could I perhaps give the floor to Lisa to start with? Would that be okay with you, Lisa?

Lisa Petrides:
Absolutely, of course. Okay, you’re to the left of Tali. And for those of you who don’t know Zainab, who has been running and spearheading this dynamic coalition from the beginning, she asks us to do something, we do it. So thank you, Zainab. I think the issue, let me just start by saying OER, as many of us I think on this panel here see it, is a public good. And just like air or water, education should be accessible to everybody who wants it, who needs it. And so I think what’s been really important in OER is to think of this practice of open education as something that brings education opportunities to not only the mainstream of education, but for those who have been excluded, for those who have left and we want them to come back, to those who simply have been, in some cases, somewhat not a part, or even in the worst cases, invisible to the processes of education. And we think about places where there’s no school systems that are operating because of war or other situations like this. So when we think about diversity of learners, I think the idea that the use of open educational resources as a knowledge transfer, as a knowledge building, is quite transformational. We’re not just talking about what already happens in our education systems. We’re talking about inclusive voices. So in some cases, that’s where students themselves are involved in the content creation, where faculty in higher education or teachers in primary schools are using their own cultural context and localization to actually adapt OER. And this is where we’re seeing some of the biggest transformational changes in the use of OER. And that is all around the world. I can speak for the US. But for many other parts of the world as well. So Zeynep, who do you want to have this next?

Zeynep Varoglu:
Is anyone else in the room? Does anybody want to say something? Tell, I see you smiling. all the way from here in Paris. This is nice. Would you like to add anything?

Tel Amiel:
I was waiting for you to give me the order. No, I think that one of the things that we talk about here, and especially in this context of IGF, is this presence of many different cultural groups and many different needs. And we understate the power of OER for doing this. I mean, if we’re talking about public goods, it means including everybody. And one of the greatest trends of OER is adaptability, remixability, being able to share and revise and remix and reuse, which is quite unique. And we don’t explore that enough, I think, especially in this multilateral, multi-stakeholder process of having people really engage with these kinds of resources is something that pedagogically makes a lot of sense and makes it really a public good.

Zeynep Varoglu:
Thank you. Thank you very much. I’m kind of handicapped here because I can only see what the screen shows me. So I can only see you up to Michelle. But Michelle, perhaps you can see

Patrick Paul Walsh:
better than I can. Would someone like to? Paul here. So just what I think is congratulations to everyone who was part of actually putting together the OER recommendation, because it really is a wonderful instrument. Just to answer your question, though, I think what’s in the recommendation, which is really important, is that this kind of freedom to create and to contribute to the global knowledge commons, that’s so important. And we have to even think about people with disabilities. So people in any part of the world should be able to freely and easily contribute to the content. And that’s one freedom. The other freedom, obviously, is accessibility. And I really like the previous speaker who talked about the content has to be like the PowerPoint slides and videos, et cetera, has to be compliant to people with visual impairment, et cetera, et cetera. That’s very important. But then the key point is that when you you use it, you can repurpose it, translate it, put it into your local context, put it back into the global knowledge commons again. So it’s really so important to keep it decentralized and in decentralized repository so all that can happen. But that’s why I think the recommendation is so wonderful because you might just think, oh yeah, free education resources, but it’s not about that. It’s actually about how they’re created, how they’re accessed, how they’re repurposed. So there’s much more to this than just what it looks like OER. And Stephen would like to contribute if that’s okay.

Zeynep Varoglu:
Sure, thank you, thank you.

Dudley Stephen Wyber:
Yeah, at risk of just reemphasizing a couple of points so far, I think I want to draw firstly on what Paul was saying about the knowledge commons. And actually, Edna, it’s an idea that was very strongly brought out in the futures of education report a couple of years ago about this idea of trying to move away from a sort of single direction model of you shall learn this body of knowledge and that is what you shall learn to a much more sort of recurring circular approach where you learn, you explore, you contribute, you improve. And that’s quite a radical thing it feels like, but actually making that clear that that’s the model that we’re going for is a significant one because it does create agency and it creates responsibility. And the other thing I wanted to pick up on something that Paul said about diamond engagement and this idea that it’s not just at the producer side, but also it’s important to have people therefore on the ground whose responsibility is not just to make sure that the stuff gets on the internet in the first place, but then that the stuff is taken down and used. And of course, that’s logically a role that obviously teaching staff have, but librarians in particular have and accepting that, I don’t know, we can’t just assume that if we shout out to the internet, someone will actually make use of the stuff and it’ll actually work, no. We can’t have a supply side only approach. approach here. We need to have a demand-side approach.

Zeynep Varoglu:
Thank you very much. I don’t know if there are any other inputs in the thing. Neal has raised his hand. Neal. Two people have raised their hands. Neal, please go ahead.

Neil Butcher:
Thank you very much, Zeynep. Maybe just to build on what previous colleagues have said, I think one of the things that I could emphasize possibly is just the importance of making sure that we don’t necessarily think that more is better, and that we focus collectively on ensuring that the way in which we invest resources has a very strong focus on producing high-quality teaching and learning resources and OER for accessibility purposes. I think very often we have a very technical way of thinking about that when we do engage in accessibility, so we just take materials and make them accessible at that technical level, but we’re not actually considering whether or not the quality of the teaching and learning materials justified making them accessible in the first place. The Internet is flooded with content, and the more flooded it becomes, the more I think that carefully curated collections of resources that we can feel confidence are encapsulating high-quality teaching and learning experiences of the kind that we just heard about. Stephen gave some really good examples, I think, of how that might look in practice. We just need to make sure that we take the time to invest properly in what we’re doing and not just rush the process of taking a whole lot of content and making it accessible. I think that’s doing a disservice to learners rather than helping them in the long run.

Zeynep Varoglu:
Thank you. Thank you very much. Melinda, would you like to add anything?

Melinda Bandaria:
I just would like to support the points raised by Neil in saying that we have to make sure that what we are using are quality OERs, so it is very important. that we have this quality assurance framework, which we can integrate in evaluating the open educational resources that especially the teachers are using for their courses. So that’s one point. And the important role of the teachers, the important role of the universities in making sure that this OERs, that’s what’s being circulated in the web, in the internet, are quality materials that are being reused, remixed, translated into the local languages, and shared alike by the teachers, by the universities who are into it. And of course, the more important thing is putting in place the policies that will support or provide the conducive environment for the OERs, the use, the development, and sharing of OERs to flourish. So if it is not possible for a national policy to be there immediately, then probably institutional policies can start the work, can do the work, and make sure that we have these things or the five action areas on the OER recommendations can be undertaken. So role of universities and policies, even at the institutional level, and then the national level

Zeynep Varoglu:
policies. Thank you. Thank you very much. Thank you. We have, from the participants in the room and online, would anybody like to add anything? It’s a very funny thing to have moderation online and in the room, because you can only see so many, I see now only Melinda’s face, but I’m sure there’s a lot of people behind Melinda, but I can’t see them. So in that case, if we have some time left, and I would just like, yes, yes. We have one person in the room.

Audience:
Yes, thank you. Yeah, it’s a question, exactly. Niels Brock, DW Academy. My question was about the experience with decentralized repositories, so I would be interested if there are any best practices that some of you could share about this. and maybe also specifically in like open technologies for this, thank you.

Patrick Paul Walsh:
Yeah, so the basic idea, you know, ironically, I think a lot of universities who are engaged in getting up the rankings on let’s say commercial or branded journals, right? They want citations, that’s part of, you know, impact factors, it’s part of the whole reason why you get ranked as a university, but ironically, the way to gain citations to get a citation dividend is to make sure that on your research portal or your profile, that actually you link a preprint or an open paper in some way to the actual citation, because that will actually, if you put it in the repository, the local repository in your local library, you know, it’s more findable, and you know, you put in a keyword and people come to you, they don’t have to go to the branded journal, they come to you through the search engines, right? And if your metadata is really good, they’ll find you really easily, right? And then of course, they’ll read your preprint, but they’ll actually cite your actual publication and seemingly the citation dividend across different disciplines is enormous, right? So there is actually quite a bit of work, you know, on let’s call these learning objects like PDFs to put into repositories, okay? So what I’m talking about here though, is that I know particularly during COVID that on our LMSs, we all have kind of folders of digital objects, you know what I mean? Like videos and homeworks and so on and so forth. And the idea is that if you standardize how that LMS is structured, and that also can be archived in a local repository, right? That you’re able to, again, you know, true platforms to actually point. So for example, eLife do it for biology and it’s an overlay of repositories of the researchers and they publish their papers. So the idea of our platform would be actually to highlight LMS folders that you could just click, go to a repository, and then you can pull it up into your own LMS, right? This is basically the idea. And it’s a network of learning objects, you know, multimedia kind of learning objects, right? But the real benefit to doing it decentralized, and it’s actually because of Algebra Science and these, I think that the libraries are for interlibrary loans and they’re interoperable. The repositories are so interoperable because they’re actually building it for doing all this work of hosting and archiving for the commercial entities, right? So the academics create the work and sign over the property, right? And then they sell that back to the libraries. And then the libraries do all the work and archiving and preservation and other, this is ridiculous, right? So we have to try and get rid of the middle person and we have to try and just work librarians, academics and others just to work together to make that happen, right? But what was the point I was trying to make was, I’ve lost the point, but the decentralized system, you’re getting the, oh yeah. The key thing is that you can update your course locally. You can repurpose. that locally are, like others can take it, translate it, put it back into the system again, right? So in other words, rather than just giving away, you’re probably right, giving away a PDF that you can’t edit that goes, you know, into a library so that you can’t edit, that’s just nonsense as well, right? We should be able to update what’s in our repository. So my course, I might change 10% every five years, you know, but you see the idea that you would update it and that it becomes a kind of a real-time repository rather than something like, you know, 2005 publication in Nature or something, yeah.

Moderator – Michel Kenmoe:
Thank you very much. Any other question from the room or online? Any comment, contribution? I just want to, following just what Amiel has said about the open government procurement, what we have learned from the context of West Africa, and this was a great surprise for us, is that we discovered that in many of the countries with whom we were working, they have no budget for educational resources production, no budget. So the idea of open procurement doesn’t fit in that context. So we had to, as part of the OER strategy, to raise awareness on the importance for the government to actually engage in the production of educational resources, adaptation and remixing, supporting initiative related to this. So we should not take for granted that countries already are committed to produce educational resources. It’s not the case. And we have this, and I’m saying all the five countries with whom we were working, they have no budget, no budget line to produce educational resources. I’m not talking about open education, but educational resources, they have no budget line. So this is a. a key challenge in such context. And the importance of raising awareness of the importance of a country to actually engage in the production of educational resources.

Lisa Petrides:
Thank you so much for making that comment. I think that if we think of the origins of the teaching profession, that the teacher or educator was the person with the internal knowledge. And over time, we’ve developed a system where, in fact, there are experts out there, all the way to the textbook publishing company. And this whole industry has started. And this is where the money and the procurement happens. But in fact, the native knowledge is around the educator. And it’s also around the learner, who is living, breathing, working in a community with a lot of knowledge and understanding. We found early on, when we were going to certain places to talk about OER, people were seeing what we had done in our library, OER Commons. And they would say, that’s nice. But we have oral histories here. Or we have other native or indigenous languages here. The knowledge is there. And if we think about having it, it’s sort of rethinking what teaching means, and who teachers are, and how teachers are trained. But we’ve gotten so far away from the idea where the educator is actually the expert in their knowledge. And that might be some kind of perspective that is brought there as well.

Moderator – Michel Kenmoe:
Thank you. This was a very insightful exchange. You know, at the beginning of this panel, the Assistant Director General for UNESCO invited us to use the opportunity of this discussion to lay out key actions to undertake in order to advance the agenda of OER recommendation. So I’m going to turn to each of you. Let’s say two minutes. Or OK, three minutes. Yeah. What is the takeaway, and what are core actions that we need to consider for the future of the implementation of OER? I’ll start with Emil.

Tel Amiel:
So I think based on the experience that we’ve had in Brazil for 10 years developing policy on this is give people serious responsibilities for OER, make it a serious element, and give them the responsibility to do it and expect things to happen from people. So create the policies, get people involved, and then give them serious responsibilities for taking care that it’s going to be implemented. Without that, I think that if we don’t have people actually involved in this and around this, but the incentives to stay, it just becomes another piece of legislation that doesn’t move forward. It’s an agenda item that people talk about, but nothing ever happens around it. That would be, I think, the biggest takeaway for me.

Moderator – Michel Kenmoe:
Thank you. Stephen?

Dudley Stephen Wyber:
So I think I’d probably underline, and this is probably sort of a takeaway recommendation for the sector I represent, the importance of trying to get ourselves to the same stage with OERs as we are with open access. And a point I would have made to the colleague from DW Academy is that there’s already a lot of really good work around how do you get interoperability to happen between OA repositories through organizations like COAR in Canada? Can we apply that same logic to OER repositories? And then just come back to the question you were actually asking me now, how do we mainstream? How do we do an end run around the development process here and make sure that librarians are seeing, in just the same way as they provide materials, that they really feel confident and they feel responsible for helping their faculty, for helping students make the most of OER so that they feel agency in order to help other people feel agency.

Melinda Bandaria:
Yes. Thank you very much. My key takeaways, of course, I’m very much focused on capability building as one of the action areas in the OER recommendations. So, initially, we were so much focused on just raising awareness, ability to use and develop and share OERs, but this discussion really brought us back to the essentials of making OERs more inclusive and accessible. So, we have to go beyond in this capability building initiatives. And, of course, I just would like to go back to what was also contained in the OER recommendation, and it will also bring us to that discussion on the lack of resources to produce OERs. And part of the OER recommendation is invoking that the public funds can be used to produce OERs. And if we use the public funds to produce these educational resources, then we are morally obliged to make them open access materials. So, I guess this is something that we should be doing, our advocacy, our commitment to making OERs more popular in terms of use and development. And the incentive system, especially for universities, that’s the sector I am representing, incentive system for the faculty members, for the teachers, when they use and create and share open educational resources to the community. So, these are my key takeaways from this forum. Thank you.

Moderator – Michel Kenmoe:
Thank you, Melinda. Lisa?

Lisa Petrides:
Thank you. I have three quick things. One would be to resist this urge for strategies where one size fits all. I think the comment about decentralization was key, and we have to really keep working on that. and what that really means to have localized control of knowledge. Yet, in a decentralized model, it filters up in a way where we really do build this knowledge commons. The second piece is to not be seduced by the commercial private partnerships that are becoming much in vogue today. They’re wrought with a lot of internal problems that ultimately, I fear, will result in the locking up of knowledge. Not to mention, there are so many privacy concerns once you have commercial interests in terms of how data is used, who uses it, and data for whom. And the third one is a real positive recommendation, our takeaway, which is we really need to build bridges across open. That’s open educational resources, open pedagogy, open data, open science, open access, open publishing. Did I miss any of the opens? I think we’ve been operating in silos for too long, and we really need to start connecting those for real.

Moderator – Michel Kenmoe:
Thank you. Patrick?

Patrick Paul Walsh:
So I fully endorse just what Lisa said. Excellent. Just to come back to my big thing, and of course, I’m going to hope to implement this overlay repository journal of SDG courses. But I guess the thing that keeps me awake at night is behavioral issues, as I call them, or psychology. So in other words, just to take an example of one of the stakeholders is the government. So you have to change the mindset. So what’s the problem there? So Jeffrey Sachs was discussing our project at the TESS, the Transforming Education Summit. And I think he said something very important. So he said the reality here is that there’s a bit of a sunk cost to set this up. I’m an economist, so there’s sunk costs and there’s marginal costs, right? So think of putting in electricity or a digital infrastructure, you know what I mean? Like to set up the power. points, to put in the railway tracks, to put in the ports. No individual can really do that. That has to be done by government. So there is a bit of a sunk cost to get this up and running. The beauty of it, though, is that the marginal costs are very low. And in fact, once it’s open, as Tell Amir was saying, there is possibilities to add value or commercialization, which would actually pay into the resource. So I could put sums together for the government saying, if you put up so much money and you put it into your policies and procurement, I can guarantee you within five or six years, the costs to librarians, to the academics, everyone was going to be way reduced, the marginal cost. And in fact, if any of these global knowledge commons issues are commercialized in any way, your property will actually accrue income or accrue value added. So I can create a business model. The problem for that, though, is you’re saying to the government, you put money up now and change your policies, and then later, you’re going to get a return. And that doesn’t sit well with government, because too many times have they given money for a return in the future, and they’ve never got the return in the future. Now, I could go on in terms of, what are the incentives for academics? What are the incentives for librarians, for interoperability of ed tech, interoperability of all the open opens, and so on. So to me, the problem is mindset and coherence and cooperation. And it’s not necessarily financial or technical or anything like that. It’s a real, what I call behavioral mindset issue that you have to address.

Moderator – Michel Kenmoe:
Thank you. Naeem, are you online? Naeem?

Neil Butcher:
Thank you, Michel. I think the two key takeaways that I’d like to just reemphasize is, first of all, to recognize, notwithstanding nice conversations about how we should support the private sector in monetizing this space. and I’m not sure I agree with a lot of that. I think we have to recognize that the responsibility sits squarely with government to make sure the public education systems are accessible for all. And that involves proper investment in creating learning environments that actually support real accessibility. And the second takeaway for me related to that is that the investment strategy for that has to ensure the quality of the teaching and learning experience for everyone. I think if OER as a public good is simply expanding access to poor quality learning experiences for people at the margins, it’s doing the world a disservice and we need to make sure that the emphasis is very strongly on improving the quality of the learning experiences. I would just add one last and possibly obvious point, which is that the only way in which we can ensure that this happens successfully is to make sure that the processes by which this all takes place are actually led by representatives from the target communities of learners that we are aiming at. And I think if we look around the panel and certainly at this stage, I think it’s clear that we have a lot of work to do to make sure that we bring in the voices of the people who we hope will benefit from these conversations. So I think that’s another critical challenge that we face as we move forward.

Patrick Paul Walsh:
Yes. So Niall, just hopefully we’re on the same page. So I’m not saying you should commercialize the infrastructure or the content or anything like that. So this is a point about value added. So for example, if a private school takes the material, puts a letter book on it, adds things and then sells it out there that they should really, if they’re bringing in income, there should be some rent sharing on a public resource. Or. if a commercial company takes it and actually is doing kind of upskilling and training and again is charging money to do that, that there should be kind of rent sharing. So it’s commercialization on the margin if you like, but it’s not on the infrastructure or the open education resource at all. That has to be publicly owned or stakeholder owned as you say. So I hope that’s okay. And you mightn’t like the idea of the value added either, but just to be clear that it’s not commercialization of the platform or the actual resource.

Moderator – Michel Kenmoe:
Thank you. Zeynep, do you have one last comment? Zeynep? I see some mute. Zeynep? She’s online. I see that she’s online. Zeynep? I don’t know what technical issue. I want to express our warm thanks to all the panelists and all the participants of this session, those who joined us online and those present here in Kyoto. Can we give a round of applause to all our panelists and the participants, please? Thank you very much. Yes, Zeynep?

Zeynep Varoglu:
Yes, it works. OK, sorry. It’s been a bad, bad technology morning. Thank you so much. I was saying that it takes a village to raise a child, they say, but it takes a whole world to make learning possible. And it’s through an open educational. resources that I think the knowledge can really be shared. The point of this recommendation and the point of this panel, the point of this discussion, is about sharing knowledge openly. And I’d also like to thank very much all the panelists here and online. Just to let you know that the colleagues that are joining us online, we’re coming from three different continents right now into your room. And it is a great pleasure to be here. We would all very much like to be there in person, but unfortunately it hasn’t been possible. But thank you very much to all of you. Thank you, Zeynep. And have a great day to all of us. Thanks.

Audience

Speech speed

179 words per minute

Speech length

57 words

Speech time

19 secs

Dudley Stephen Wyber

Speech speed

187 words per minute

Speech length

1132 words

Speech time

364 secs

Lisa Petrides

Speech speed

164 words per minute

Speech length

1465 words

Speech time

537 secs

Melinda Bandaria

Speech speed

162 words per minute

Speech length

1352 words

Speech time

501 secs

Moderator – Michel Kenmoe

Speech speed

143 words per minute

Speech length

1841 words

Speech time

774 secs

Neil Butcher

Speech speed

183 words per minute

Speech length

1367 words

Speech time

449 secs

Patrick Paul Walsh

Speech speed

195 words per minute

Speech length

2344 words

Speech time

721 secs

Tawfik Jelassi

Speech speed

116 words per minute

Speech length

549 words

Speech time

284 secs

Tel Amiel

Speech speed

232 words per minute

Speech length

1117 words

Speech time

289 secs

Zeynep Varoglu

Speech speed

162 words per minute

Speech length

1611 words

Speech time

598 secs

Climate change and Technology implementation | IGF 2023 WS #570

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis explores various topics related to climate change and technology in the global south. One key point highlighted is the importance of accountability and responsibility in addressing climate change. It emphasises that governments, corporations, and individuals all need to take responsibility for their actions and work towards mitigating climate change. The analysis also mentions concerns over digital colonisation and the quest for digital sovereignty, particularly in global south countries. It points out the potential exploitation of resources by technology companies from developed nations.

Another topic discussed is the challenge of tackling electronic waste sustainably. While recycling initiatives exist in countries like Brazil, the analysis highlights difficulties in handling electrical and electronic devices due to harmful substances like lithium. It emphasises the need for sustainable solutions to effectively manage electronic waste.

The analysis also examines the search for successful examples of technology mitigating climate change impacts, especially in the Amazon region of the global south. It advocates for leveraging technology to address climate change, reduce emissions, and protect sensitive ecosystems. However, it does not provide specific examples or evidence of successful implementations.

Furthermore, the analysis draws attention to the importance of localising global climate change solutions. It highlights the relatively poor performance of Hong Kong, despite its significant economic power and infrastructure. This suggests the need for tailored solutions that consider local contexts and challenges, rather than relying solely on global strategies.

The role of lobbying and negotiating with decision-makers is also emphasised as a means to advance climate change agendas. The analysis stresses the importance of engaging with policymakers to influence climate-related policies and decisions. However, it does not provide specific evidence or examples of successful lobbying efforts.

The potential of the Internet of Things (IoT) in creating energy-efficient systems and reducing carbon emissions is another topic discussed. The analysis highlights the positive impact that IoT can have on sustainability efforts but does not provide supporting evidence or specific examples.

Lastly, the analysis addresses the need for accountability in adopting costly technologies and the role of lifecycle assessment in defining avoided emissions. It mentions ongoing discussions in Europe regarding the European Green Digital Coalition. This highlights the importance of considering the environmental impact of adopting new technologies and ensuring that the benefits outweigh the costs.

In conclusion, the analysis raises various important aspects related to climate change and technology in the global south. It underscores the need for accountability and responsibility, addresses concerns over digital colonisation and digital sovereignty, discusses challenges in tackling electronic waste sustainably, explores the search for successful technology implementations, advocates for localising climate change solutions, emphasises the importance of lobbying and negotiation, highlights the potential of IoT, and stresses the need for accountability in adopting costly technologies. However, it lacks in-depth evidence and specific examples to support these points. Nonetheless, it raises key issues that require attention and further exploration.

Moderator

Climate change is a pressing global issue that demands immediate action. It is acknowledged as one of the most pressing issues in the world. The seriousness of this concern is emphasized by the devastating impacts of climate change witnessed worldwide, including extreme weather events that serve as evidence that the Earth is changing. Igor, one of the participants, highlights the urgency of taking immediate climate action.

Technology has emerged as a crucial tool in addressing climate change. It is seen as a catalyst for change and offers potential solutions. Various technologies, including renewable energies such as wind, solar, and hydropower, are being utilized to combat climate change. These technologies provide valuable alternatives to traditional energy sources that contribute to greenhouse gas emissions. Furthermore, the session explores how technology can be leveraged to transform social, educational, and environmental aspects, offering concrete solutions to combat climate change.

However, it is crucial to ensure that technology is used responsibly and does not harm the environment. The responsible usage of technology is a fundamental consideration, as it can have adverse effects on the environment. The session emphasizes the need to find ways to ensure that technology does not adversely affect the environment itself, highlighting that great power comes with great responsibility.

Young people are recognized as key actors in addressing climate change. The session highlights the crucial role that young people play in combating climate change. Their active involvement and engagement are crucial for driving change and implementing sustainable solutions.

Artificial intelligence (AI) is also identified as a tool that can assist in mitigating and adapting to climate change. AI can optimize electricity supply and demand, leading to energy consumption savings. AI can also aid in developing early warning systems for severe disasters and accurate climate forecasts, contributing to climate change adaptation efforts.

Despite the positive contributions of technology, there are negative impacts that need to be addressed. The production and usage of technology contribute to surges in energy demand and the environmental impacts associated with the hardware life cycle. These concerns highlight the importance of considering the environmental implications of technology.

Collaboration between various sectors is deemed necessary to maximize the potential of technology in combating climate change. Governments, businesses, research institutions, and individuals are encouraged to collaborate and create incentives for sustainable practices and eco-friendly technologies. By working together, a more comprehensive and impactful approach to addressing climate change can be achieved.

The European Union’s twin transition approach, which combines green and digital strategies, is seen as a significant step towards battling climate change. The EU has committed to cutting its climate emissions by half by 2030 and aims to be climate neutral by 2050. This approach demonstrates the potential for combining digital advancements with environmental sustainability.

Transparency is highlighted as a crucial aspect in addressing the environmental impact of digitization. It is suggested that the lifecycle of applications, including design and conceptualization, should be accounted for, with measurement of material consumption carried out independently. Accessible and transparent results would allow for a better understanding of the environmental impact of digitization.

Circular economy principles are advocated as a means of reducing political dependence and promoting sustainability. The adoption of circular economy practices, such as recycling and resource conservation, can contribute to economic stability and security while reducing the negative impacts on the environment.

Equitable access to digital tools is emphasized as a necessary step towards addressing climate change. It is crucial to ensure that all population groups, including older people and structurally discriminated groups, have equal access to digital resources. Additionally, increasing digital sovereignty, which involves individuals having control over their own data, is seen as a crucial aspect of empowering individuals in the digital age.

Implementing technology solutions to combat climate change can be challenging, particularly in regions with a lack of infrastructure, high costs, and a lack of knowledge. These challenges highlight the need for targeted support and investment in these areas to overcome barriers and enable technology adoption for climate action.

Accountability and compliance regarding environmental laws and technology are critical to ensuring that technology initiatives are aligned with sustainability goals. The session raises concerns about the difficulty in ensuring compliance with environmental laws and court sentences. It suggests that supervision bodies and legal systems need to be strengthened to address these issues effectively.

Efforts from all sectors – including the private sector, academia, the tech community, the United Nations, and governments – are called for to find cheaper technology solutions to fight climate change and overcome existing challenges.

The preservation of biodiversity is mentioned as an important consideration in the context of climate change. The threat posed to the Brazilian biome due to temperature increases is highlighted, calling for urgent action to preserve ecosystems and biodiversity.

The power and influence of big tech companies are also scrutinized, particularly regarding the exploitation of data and resources of local citizens. International organizations are urged to work towards curbing the excessive power of big tech companies and preserving the interests of local communities.

Transparency and consumer awareness are seen as essential elements in promoting responsible behaviors in the digital age. It is suggested that if consumers were made aware of the impacts of data centers or unethical data practices, they might change their behaviors and support more sustainable practices.

Standards are recognized as crucial in promoting sustainable digitalization. The European strategy for green digitalization includes the implementation of standards to ensure that digitization aligns with sustainability goals. However, it is acknowledged that standardization bodies should strive for inclusivity and representation, ensuring that all stakeholders can contribute to the development of these standards.

Credibility issues associated with climate change reports are mentioned, indicating the need for effective checks and measures. It is essential that reports on climate change are credible and reliable to guide decision-making and demonstrate progress towards climate goals.

Lastly, the importance of legal and political collaboration is highlighted. It is noted that successful examples exist when politicians and legal teams worked together in areas such as patents and biodiversity aspects. It is emphasized that international agreements and disputes cannot be resolved solely through legal means, requiring the active involvement of politicians.

In conclusion, addressing climate change through technology requires immediate action and collaboration across various sectors. While technology offers potential solutions, responsible usage, transparency, and equitable access must be prioritized. The session highlights the role of young people, artificial intelligence, and circular economy principles in combating climate change. Challenges related to implementing technology solutions, accountability, and the preservation of biodiversity are also recognized. The excessive power of big tech companies, the importance of transparency and standards, and the need for legal and political collaboration are additional considerations in the fight against climate change.

João Vitor Andrade

The provided data highlights the potential of the internet and technology in addressing the global challenge of climate change. The arguments put forward are that these tools can play a crucial role in combating climate change by enabling innovative solutions, facilitating information sharing, and promoting sustainable practices.

One argument suggests that the internet and technology can enable innovative solutions by using artificial intelligence and improved sensors to collect real-time environmental data, such as deforestation, temperature, and air quality. This data can help identify strategies to mitigate climate change.

The importance of information sharing facilitated by the internet and technology is also emphasized. Rapid dissemination of knowledge and best practices can enable individuals, organizations, and governments to make informed decisions and take appropriate action in the fight against climate change.

Technology is also seen as a means to promote sustainable practices. Smart grid technologies, for example, can optimize energy distribution and consumption, reducing waste and making energy systems more efficient and environmentally friendly.

The internet and technology are recognized for their potential to reduce greenhouse gas emissions through virtual meetings and remote work, reducing the need for commuting and business travel. This can lead to a reduction in carbon emissions.

Precision agriculture technologies are also highlighted as important tools in the fight against climate change. These technologies can optimize crop production while reducing the use of water, fertilizers, and pesticides, contributing to reduced greenhouse gas emissions.

Stakeholder collaboration is emphasized as crucial in leveraging the potential of the internet and technology in addressing climate change. Collaboration between governments, businesses, NGOs, and individuals can maximize the impact of internet and technology-based solutions.

In addition, the analysis includes a neutral stance on climate change, suggesting that it is a problem for humans rather than the world. This highlights the need for increased awareness and understanding of the interconnectedness of climate change and its global impact.

There is also a call to rethink the system for distributing energy, focusing on efficiency rather than just production. The use of artificial intelligence to distribute energy efficiently to areas with higher or lower consumption is proposed as a solution for reducing wastage and promoting affordable and clean energy.

Lastly, there is a negative view expressed against the extensive use of fossil fuels in energy production. The contribution of countries like China, with significant coal-based energy production, to higher carbon emissions is highlighted. This underscores the importance of transitioning to cleaner and more sustainable energy sources.

Overall, the data highlights the potential and importance of internet and technology in addressing climate change. Collaboration, innovation, and sustainable practices are emphasized as key to effectively mitigating climate change and creating a more sustainable future.

Igor José Da Silva Araújo

Climate change is a pressing issue of global concern that requires urgent attention. It poses a significant threat to our planet, as evidenced by extreme weather events, changes in rainfall patterns, and rising temperatures. Human behaviour plays a pivotal role in the origin of climate change, with activities such as burning fossil fuels and deforestation contributing to greenhouse gas emissions. Acknowledging the impact of human behaviour is crucial in developing effective strategies to combat climate change.

Technology plays a crucial role in the fight against climate change. Renewable energy sources, such as wind, solar, and hydropower, offer sustainable alternatives to fossil fuels, reducing greenhouse gas emissions and promoting long-term sustainability. Adaptive practices, such as cultivating drought-resistant crops and implementing early warning systems, help communities respond proactively to the adverse effects of climate change.

Taking responsibility and acting now are essential to finding effective solutions to climate change. By doing so, we can mitigate its threats and safeguard the well-being of our planet and future generations. It is imperative that we adopt sustainable practices and utilise technology as allies in combating climate change. By addressing our actions and pursuing resilient solutions, we can make a positive impact and ensure a sustainable future for all.

Rosanna Fanni

The analysis highlights several key points regarding sustainable digitalisation. The first major point emphasises the need for better transparency and assessment of the environmental impact of digitisation. The report suggests that there is a lack of systematic data on the environmental impact, particularly throughout the lifecycle of digitisation. To address this, independent measurements and the accessibility of results are required. This would enable a more comprehensive understanding of the environmental footprint of digital technologies and help to identify areas for improvement.

Another important aspect identified in the analysis is the promotion of more entrepreneurial thinking and a compliance culture in relation to environmental sustainability. The argument is that creating environments where sustainability is viewed as an opportunity rather than a hurdle can drive innovation and economic growth. Furthermore, educational programs and awareness initiatives are seen as essential for fostering a culture of sustainability and ensuring that individuals are well-informed about the importance of sustainable practices.

The analysis also emphasises the need for a legal commitment to sustainability by design and default. This implies that ecological sustainability should be integrated into the design process of digital technologies, and the impact of these technologies should be visible to users. By making sustainability a legal requirement, companies will be compelled to consider the environmental consequences of their products and services, leading to more sustainable outcomes.

The circular economy approach is advocated for dealing with critical raw materials. Efforts should be made to reduce political dependence on countries with large raw material deposits. Moreover, the expansion of recycling practices can contribute to reducing the demand for new raw materials. This circular economy approach is seen as central to ensuring the long-term availability of critical raw materials and reducing their environmental impact.

Transparency and accountability in digital education, particularly with regards to artificial intelligence, is another important point raised in the analysis. Manufacturers are encouraged to provide clear explanations about how these technologies work and the implications they have. Additionally, special consideration should be given to children to ensure that they are prepared for the digital world and that their rights are protected.

The analysis also highlights the importance of equitable digital access for all, including older adults, children, and other structurally discriminated groups. Efforts to bridge the digital divide and ensure that everyone has equal opportunities to access digital technologies are crucial for promoting inclusivity and reducing inequalities.

Furthermore, the analysis suggests the need for increased digital sovereignty and the curbing of the power of big tech companies. It is argued that individuals should have control over their own data and decisions about its use. Additionally, educational initiatives are required to enhance media literacy and awareness, ensuring that individuals are empowered to navigate the digital landscape.

The analysis also highlights the significance of transparency in understanding the impact of big tech companies. More global reporting about tech companies is deemed necessary to inform consumers about their practices and allow them to make informed choices.

In terms of standards, the analysis stresses their importance in the strategy of European sustainable digitalisation. However, there are questions regarding how these standards are produced and whether inclusiveness is being prioritised. It is essential to ensure that standards are developed through a collaborative and inclusive process to guarantee their effectiveness and relevance.

Lastly, the analysis underscores the need for political prioritisation of green and sustainable digitalisation. Without political commitment and support, progress in these areas is unlikely to be achieved. Policy decisions and initiatives should prioritise environmental sustainability alongside digital transformation to ensure a sustainable and inclusive future.

In conclusion, the analysis highlights multiple crucial aspects of sustainable digitalisation. These include better transparency and assessment of the environmental impact, promoting entrepreneurial thinking and compliance culture, legal commitment to sustainability, circular economy practices, transparency and accountability in digital education, equitable digital access, increased digital sovereignty, curbing the power of big tech companies, transparency for consumers, the importance of standards, and political prioritisation of green and sustainable digitalisation. Emphasising and implementing these aspects will contribute to achieving a sustainable and inclusive digital future.

Denise Leal

The analysis covers a range of topics related to climate change, technology solutions, environmental law, biodiversity, ESG reports, and engagement between legal and political entities. One key issue highlighted is the lack of necessary infrastructure and knowledge in certain countries to successfully implement technology solutions. In Latin America and the Caribbean, for example, there is a significant deficit in infrastructure needed to support the implementation of these technologies. Moreover, technology solutions are often expensive, making them inaccessible to many people, and there is also a lack of knowledge and skills needed to effectively work with these technologies. This poses a significant challenge in achieving the Sustainable Development Goals (SDGs) related to industry, innovation, infrastructure, and climate action.

Another argument put forth is the need for cheaper technology solutions to combat climate change. The analysis suggests that there are countries that cannot afford expensive technology solutions, and therefore, more effort should be focused on developing and making available affordable alternatives. This would enable broader adoption of these solutions, fostering real progress in addressing climate change and achieving the SDGs.

The analysis also sheds light on the difficulties in ensuring compliance with environmental protection rulings. One of the main challenges identified is the lack of adequate supervisory bodies to effectively monitor and enforce compliance with these laws. Supervisory bodies are often small and insufficiently resourced, hampering their ability to carry out proper supervision. This raises concerns about the overall accountability and compliance of environmental laws, which is crucial in safeguarding the environment and achieving peace, justice, and strong institutions.

The negative impacts of climate change on biodiversity and species extinction are also emphasized in the analysis. It is highlighted that a significant portion of the Cerrado, a Brazilian biome, is projected to be lost due to climate change, resulting in the potential extinction of various species. Additionally, the analysis suggests that climate change has already caused some species to become extinct worldwide. These findings underscore the urgent need for action to mitigate the effects of climate change and protect biodiversity in order to achieve the SDGs related to life on land.

Regarding ESG (Environmental, Social, and Governance) reports, the analysis raises concerns about their authenticity due to potential inaccuracies and lack of a foolproof verification system. While standards and checks are in place for these reports, there is a notable absence of an efficient method to confirm their truthfulness. This challenges the reliability of ESG reports and calls for improved verification systems to ensure transparency and accountability in responsible consumption and production, as well as climate action.

The analysis also highlights the importance of collaboration between legal and political entities for effective resolutions. Successful examples of politicians and lawyers working together on patents and biodiversity issues are cited, underscoring the need for political and legal teams to align their efforts. This collaborative approach is crucial in achieving the SDGs related to peace, justice, and strong institutions.

Lastly, the analysis acknowledges the value of traditional communities’ successful environmental protection methods. The recognition of their effective methods highlights the importance of incorporating indigenous and traditional knowledge systems in environmental conservation efforts. This insight can contribute to achieving the SDGs related to life on land and underscores the need for respecting and valuing diverse approaches to environmental protection.

In conclusion, the analysis highlights several key challenges and recommendations related to climate change, technology solutions, environmental law, biodiversity, ESG reports, and engagement between legal and political entities. It underscores the importance of addressing these issues to achieve the SDGs and calls for collaboration, accountability, and the incorporation of diverse perspectives in environmental and sustainable development efforts.

Speaker

Artificial Intelligence (AI) has the potential to play a significant role in understanding climate change and mitigating its effects. It can optimize electricity supply and demand, reducing energy waste and greenhouse gas emissions. Furthermore, AI can enhance energy management systems, leading to more efficient resource utilization and a shift towards renewable energy sources. It also enables the development of early warning systems for severe weather events, improving preparedness and response efforts.

AI’s ability to provide accurate climate forecasts and predictions is another key advantage. By analyzing large amounts of data, AI algorithms can identify patterns and trends, allowing for more reliable projections of climate changes. Additionally, AI can predict crop yields and determine suitable locations for planting, contributing to stable food supply despite changing climatic conditions.

However, it is important to recognize the negative environmental impacts of technology proliferation. Rapid advancements in electronic devices and their shorter lifespan contribute to the growing problem of electronic waste (e-waste). Manufacturing electronic components is energy-intensive and water-dependent, and improper disposal of e-waste can have harmful consequences for both the environment and human health.

Therefore, it is crucial to use technology responsibly and consider both its positive and negative impacts. Responsible consumption and production of technology should be prioritised, considering environmental implications throughout the product lifecycle. This includes implementing policies and regulations to reduce e-waste generation, promoting recycling and proper disposal methods, and encouraging the development of sustainable and eco-friendly technologies.

Furthermore, leveraging AI to rethink energy usage and improve energy distribution is essential for achieving a sustainable future. By utilizing AI algorithms and advanced analytics, countries can optimize energy distribution networks, making them more efficient and reliable. This can lead to a significant reduction in energy waste and contribute to the goal of affordable and clean energy for all.

To address the global e-waste issue, urgent actions and strong policies are necessary. This involves engaging communities and giving them a voice in policy implementation and necessary actions. Collaborative efforts between governments, industry stakeholders, and individuals are crucial to effectively tackle e-waste and promote responsible consumption and production practices.

In summary, while AI offers promising solutions for understanding and mitigating climate change, it is essential to approach technology with a balanced perspective. Utilizing AI in energy management, climate forecasting, and agriculture can yield significant environmental benefits. However, negative impacts associated with technology proliferation, such as increased energy demand and e-waste, must be addressed through responsible consumption and production practices. With urgent actions, strong policies, and community engagement, AI and technology can be harnessed to create a more sustainable future.

James Amattey

Technology undoubtedly offers numerous benefits to society, but it also has a negative impact on climate change. The staggering number of devices globally, over 6.2 billion, each equipped with two or more chips that require frequent charging, contributes to significant energy consumption. These devices, such as smartphones and laptops, perform high computational tasks that demand substantial amounts of power, resulting in increased energy consumption and carbon emissions. Despite the transition to USB-C, a more energy-efficient charging technology, concerns over energy consumption persist.

Furthermore, the worldwide Cloud infrastructure for apps adds to the energy demands. Cloud servers, responsible for hosting and processing data for various applications, consume a significant amount of electricity. This consumption originates from the need to power and cool extensive server networks required to handle the vast amount of user-generated data. As our reliance on cloud-based services continues to grow, so does the strain on energy resources and the subsequent environmental impact.

Moreover, electric and autonomous mobility, hailed as a solution to curb fuel emissions, present a new set of energy challenges. Surprisingly, the computational power required to move an electric or autonomous vehicle exceeds that of conventional vehicles running on fuel. This increased computational power demands a substantial amount of electricity to power the intricate systems that enable electric and autonomous mobility.

To address the rising energy demands of electric vehicles (EVs), national-level policy adjustments are necessary. Expanding the charging infrastructure and implementing mechanisms to seamlessly integrate EVs into transportation systems are vital. Governments can play a vital role by providing incentives and support to encourage the adoption of electric vehicles, laying the foundation for a sustainable future.

In conclusion, while technology brings numerous benefits to society, it also poses challenges concerning climate change. The widespread use of devices and the energy demands of cloud infrastructure significantly contribute to energy consumption and carbon emissions. Furthermore, electric and autonomous mobility introduce new energy challenges that require careful consideration. Policymakers and industry leaders must collaborate to balance technological advancements with environmental sustainability, finding innovative solutions to mitigate the negative impact of technology on climate change.

Session transcript

Moderator:
forward. Good morning, everyone. My name is Millenia Mantany, and I would like to welcome you all in this session of climate and technology. In this session today, I’m also joined by my co-moderator, who is online. His name is Igor, and in this session, we’ll also have a diverse of perspective from researchers, advocates, industry leaders, to share insights and explore how technology can be a catalyst of change. Okay, so before we get deeper into our discussion, I’d like to introduce to you our speakers for today, and we have them here. And we have Rosanna, we also have Denise, and we also have Joao, and we have Sakura, but online, we’re also joined with James. Yeah, I would really like to thank and appreciate each one of you for making it here today by joining us in this session. And today’s my first time moderating a session, so I’m not so sure how I feel. I’m excited, and yeah, I hope you guys enjoy. Yeah, so before we move further to this discussion, I would like to maybe at least say something about this session. As we all know that technology, I mean, climate change has been one of the most pressing issues in the world so far, and we have seen the role that technology plays in ensuring climate change is actually addressed. We have seen different ways, like renewable energies and all, but then we’ve also seen efforts that organizations and individuals put to address climate change. One thing that for me I know is that with great power, it comes great responsibility. So the more we use technology to solve the environment, the more we would want to find ways how we can ensure that technology doesn’t really affect us. And yeah, I would also like to invite my co-moderator from online, Igor, if he can, like, add something before we get deeper into our discussion. Can everyone hear me? Everyone hear me? Yes, yes, yes. Thank you.

Igor José Da Silva Araújo:
Good morning. Good night to you, too. Good morning. Good night to you, too. And dear participants, I’m Igor, José da Silva Araújo, rising in a small town in the Brazilian Northeast. I’m a young activist, a law student, and a representative of the Civil Society, the Latin American and Caribbean group. And it’s an honor to be here today to discuss a topic of great importance, climate change and technology implementation. We are currently in a critical juncture in a human story where climate change poses an imminent threat to our planet. Every day we witness the devastation impacts of this change worldwide, from natural disaster to the loss of biodiversity and threats to food security. Our common home, this planet, is undergoing unprecedented climate change. And we don’t need a scientific date to confirm in this moment what we witness daily. Extreme weather events, change in rainfall partners, and rising temperatures are palpable evidence that Earth is not as it used to be. As young activists and representatives from different sectors, we recognize that you represent not only the future, but also the present. We are not just the generation of tomorrow, we are the generation of today. We understand that our current actions have a direct impact on the future that we want to build. So the origins of the climate issues drive us to act now, to take responsibility and the pursuit of the fact solutions. So this is where technology plays a crucial role. And however, we are not alone in this journey. So history has taught us that humanity is capable of overcoming the most complex challenge when we come together and act with determination. Climate technology, including renewable energy such as wind, solar, and hydropower, as well as adaptive practice like drought-resistant crops and early warning systems, are allies in the fight against climate change. They offer hope and concrete solutions to address this global challenge. But are they truly efficient? And is all that we can do in this moment? Our hot table will address fundamental questions like this and aims to stimulate new perspectives from each of you. This session aims to broaden perspectives on the whole of technology and addressing climate change, identify the types of technology and investment needed to achieve our goals, and understand the implications of this environment scenario. So our discussion is based on the principle that while nature reacts to this change, it’s human behavior that plays a fundamental role in this origin. So as we progress in this panel, let us remember that technology is a powerful tool that is how we use it that makes the difference. So we are here not only to discuss the challenge, but also to share concrete ideas and solutions. And most importantly, we are here to inspire action. So this is an opportunity for all of us to learn, share, and collaborate on potential technological solutions that can transform the economic, social, educational, and environmental aspects, and ultimately improve the quality of life worldwide. So finally, I appreciate the presence and interest of each one of you in this vital discussion for our planet. I will be here to provide the support in the online and moderation. And thank you so much for now.

Moderator:
Thank you very much, Igor. Yes, as he said, our main aim of this session is first to raise awareness on the role that technology can play in addressing climate change, but we’ll also be able to make some recommendations on how we can improve policies on climate change. Yes, but then in this discussion today, we’ll have some questions that are going to be guiding whatever that the speakers are going to be presenting. And the first one is, how can we, how can the internet and technologies collaborate to fight climate change? And the second one is, which kind of policies about technology and internet could collaborate on the theme of climate change? And what are the negative impacts of technology in climate change? Yes, so without wasting so much time, I’d like to give the floor to our first speaker, who is Sakura. So she’ll introduce herself, her stakeholder group, where she comes from. Yeah, and one thing to note, I hope you have noticed that on our panel today, we have young people. So I’m so excited to hear from them. Yes, thank you very much. Welcome.

Speaker:
Thank you, Millennium. I’m Sakura Takahashi from Japan. I’m speaking here today on behalf of Climate Youth Japan, which is a youth environmental NGO in Japan. I’m a student studying climate science and geospatial analysis in Keio University. In addition, I have several experiences of being part of youth interventions in the United Nations, such as attending Climate Change Cup, the Asia-Pacific Regional Ministerial Forum of UNEP as a delegate of the Children and Youth Major Group, and serving on OECD Youth Advisory Board 2022. In conjunction with my activities and area of expertise, I’m so excited to talk about the synergy of climate change and technology implementation. I would like to answer the first question and third questions. The first question is, how can the internet and technologies collaborate to fight climate change? So, well, we have various technological ways to tackle climate change-related issues, such as IoT, artificial intelligence, blockchain, climate prediction, and forecasting, and so on. I would like to discuss how artificial intelligence, so AI, can accelerate climate actions from the viewpoint of mitigation and adaptation. In the climate change discussion, we mainly have two approaches, mitigation and adaptation. Mitigation is to reduce greenhouse gas emissions to alleviate climate change. Adaptation is to take measures to adapt the effects of climate change, including reducing risks of adverse effects and exploring new solutions to live healthy in a changing climate world. In terms of mitigation, artificial intelligence can optimize electricity supply and demand. On the supply side, AI algorithms are being developed to optimize electricity supply by reflecting weather conditions and demand-side electricity usage. AI can also be used for building energy management in urban areas where electricity is primarily consumed. For example, the study found that it’s expected to save energy consumption by 9% during the summer season by learning the relationships between operation data of heat source equipment and total energy consumption in the building, and applying an optimization model created from learning results. That’s how AI can contribute to optimization of the supply-demand balance from production to consumption of electricity, contributing to the reduction of GHG emissions. In terms of adaptation, AI can enable us to develop early warning systems for severe disasters and more accurate climate forecast and prediction. Improvements of computing capabilities through supercomputers and the assimilation of global observation data by satellites have enabled more accurate and consistent weather and climate forecasts than were possible several decades ago. This has made it possible to reduce damage by taking early countermeasures in evacuation from extreme weather events and associated disasters. In addition, satellite data and climate models can be used to predict crop yields and determine suitable locations using machine learning, thereby contributing to a stable food supply under ever-changing climatic conditions. In this way, AI can help humans adapt to climate change’s adverse effects and find new opportunities. From these practices, I definitely believe that artificial intelligence can take an innovative role in tackling climate change. And I will move to the third question, which is what are the negative impacts of technology in climate change? So technology, including AI, significantly contributes to our urgent needs to respond to climate change, as stated through previous questions. However, it also has negative impacts on the environment and our life. I’d like to elaborate on this point in terms of energy consumption and the environmental impacts of hardware life cycle. In terms of energy use, the proliferation of electronic devices, data centers, and communication networks has driven a surge in energy demand, primarily by burning fossil fuels. Data centers, which power our digital world, are notorious energy gatherers. According to IEA, global data center electricity consumption in 2022 is estimated at around 1 to 1.3 percent of global electricity demand. Moreover, everyday gadgets like smartphones and laptops, from manufacturing to operation and disposal, collectively add to energy consumption and carbon emissions. In terms of life cycle hardware impacts, the production of electronic devices relies on resource-intensive processes, including mining rare minerals and metals, emitting greenhouse gases and polluting water. Manufacturing electronic components is energy-intensive and water-dependent. Rapid technological advancements lead to shorter product life cycles, resulting in a growing electronic waste, e-waste problem. E-waste disposal can release hazardous chemicals into the environment if not managed properly, exacerbating pollution and heat house risks, especially in developing countries. Additionally, planned obsolescence practices incentivize frequent replacements, driving resource consumption and e-waste generation. As technology has positive and negative impacts, as stated before, on the environment as well as climate, we need to gain literacy to understand both aspects of technology and use it wisely for creating a more sustainable life on the earth. Thank you.

Moderator:
Thank you very much, Sakura. Thank you for your contribution on how AI can be used to mitigate climate change. And next, I would invite James, who is online. Please welcome. The floor is yours.

James Amattey:
All right. Thank you very much. I believe I am very audible. Yes. So my name is James Amate, and I am from Ghana, specifically from the African group. I come from a background of software innovation, where we use software tools to improve daily lives from education down to how we move goods in delivery and logistics. Now I’m going to take this topic from a different angle, and that would be on question three, which is how technology is negatively affecting climate change. Now technology is an enabler of the digital economy that we are in. Fortunately, it’s able to help me to join you all the way in Japan, even though I’m seated right here in Ghana. But also, unfortunately, there are some limitations that this is causing to our climate. Now Sakura mentioned some of them, and I’m going to highlight a few more. Now other than the 6.2 billion devices that exist globally, according to Gartner, there over two chips per device. Now, these chips handle a wide range of processing and computational ability. And these processing and computational ability leads to battery drain, which requires frequent charging. Now, this frequent charging comes from a wide range of tools and mechanisms that have been put in place, including the widely known USB-C, which has been a standard that has been implemented since 2012, I believe, and is currently in the latest model of the iPhone, which was released a few months ago. Now, our research has shown that despite the change to USB-C, there is still a high level of consumptions, that is consumption of energy that’s required to keep phones running because of all the apps that exist today. Currently, the Google Play Store, which has over 1.2 million apps, and all of these apps require computation of one sort to be able to handle whatever processes they have. And these computations usually rely on cloud, which Sakura mentioned. Now, cloud in itself is an enabler of security and allowing the global service exportation. So for example, Uber is made in the US, but here in Ghana, I’m able to use Uber, and that’s because of cloud, but also cloud has its data. And also because of the structure of cloud and how it is the infrastructure and the investment that goes into it, you sometimes realize that it takes a lot more to run these apps than it actually costs to create them. And these involvements sometimes lead to that negative effect. Now, in Africa, where energy consumption is very high, but the production is very low, it sometimes becomes a deficit to the society, which is supposed to benefit, because there are some places where there is, should I say, energy inefficiencies. So to be able to balance the national production to the consumption of users and the requirements of these devices, it sometimes becomes a burden and actually leads to the creation of more energy. And that creation can be a good thing, but sometimes we need to ask ourselves, what is the source of that energy? Fortunately or unfortunately, that’s the most energy relies on fossil fuels. And so there is still that negative carbon defect that is currently going on. Now, because I come from the mobility space, I currently focus on mobility as a domain, that’s electric mobility, autonomous mobility. And this also brings a further, should I say, constraints on the energy produced. Now, previously cars were run on floor and there’s not too much reliance on electronics, but now with EVs and then the birth of autonomous mobility, the computational power that is required to move a car autonomously from one point to another is actually greater than how much fuel it usually costs when cars are just reliable for. So even though we have solved one of the problems that came with mobility that was floors and then the release of carbon dioxide and carbon monoxide through the exhaust, now we have a different problem of trying to understand how much of electric power is required to move mobility, how much of electric power needs to be generated to charge and then move these things, how much of the national grid has to be allocated to drivers who are switching over to EVs and how much of a policy adjustment do we have to have on a national level to be able to accommodate the needs of EVs because EVs are moving from say personal automobiles to now industry level automobiles as high as construction automobiles. And these are going to take a very huge drain on the climate. So hopefully by the end of this talk, we will be able to delve into how we got here and how we can mitigate some of these problems without necessarily injuring the innovation that’s this place. So I hope this gives a little more light on the conversation and thank you for the opportunity.

Moderator:
All right, thank you very much, James, for your contribution. Without wasting so much time, I’ll welcome the third speaker, Joao. Yes.

João Vitor Andrade:
Hi, everyone. I’d like to thank you all to be present here today. My name is João Vitor, I’m from Brazil. I’m a law student and at the moment I’m into the Brazilian Supreme Court like an internship. This is an important topic to discuss today. And I think we have to discuss, we have to bring up into an event like this one, like the EGF and into some events into the United Nations. So I will bring up some ideas about the first question, how the internet and technologies collaborate to fight climate change. According to some global researches, just the Latin America could lose about $17 trillion between 2021 and 2017, 2070. About this topic, I’d like to emphasize that the internet and technology can play significant roles in the fight against climate change by enabling innovative solutions, facilitating information sharing and promoting sustainable practices. Here are some ways technology can help us fight against this problem. I’d like to highlight some, I don’t have much time to talk about them, but I’d like to say some. First one, data collection and analysis. The artificial intelligence can highlight improved sensors that collect real time environmental data, such as deforestation, temperature or quality, which can be used to develop climate monitoring and research. The sensors can be used not just to monitor the data, but can instantaneously warn policy or the responsible entity to struggle problem. Into the Brazil, for example, we have the IMPI, that’s an important institution that have been doing an excellent job, an excellent work into the Brazil and have been helping us, have been helping the government to create some solutions to the country and not just to the Brazil, but to the Latin America. Advanced analytics and machine learning algorithms can process vast amounts of data to identify patterns, trends and anomalies related to climate change, helping researchers and policy makers make informed decisions. The second one, renewable energy integration. Smart grid technologies can optimize the distribution and consumption of energy sources, becoming possible to reduce energy production from fossil fuels as coal and natural gas, reducing carbon emissions. In Brazil, for example, there is a waste of energy annually equal to a 20 million houses consumption in a period of one year. This is a lot more than we can think that is good. Something like we can use this energy, for example, to help the Latin America. And I think this also happens into the Europe and Asia and other continents. Energy management systems and demand response technologies can help balance energy supply and demand efficiently. About the carbon footprint reduction. The AI and the internet can virtual, can help with virtual meetings and remote work made possible by the internet, can reduce the need for commuting and business travel, lowering greenhouse gas emissions. This phenomenon was lived by all of us into the pandemic period, showing us that it’s possible to reeducate society for this new moment of history that demands our effort to achieve a common objective. E-commerce and digital services can replace traditional brick and mortar retail, reducing the environmental impact of physical stores. This is a good option to not just the Latin America, but to the North America, to the Europe, to the Asia and to the Africa. We know that all continents have a lot of stores and we know that all these things can collaborate to the gas emissions like CO, CO2 and others. We can think or rethink about these things to reduce carbon emissions and can collaborate to reduce, to struggle the climate change. About sustainable agriculture, and this is an important point to the Latin America because countries like Brazil, for example, a significant part of the GDP of my country come from the agriculture. Precision agriculture technologies can optimize crop production, reducing the use of water, fertilizers and pesticides, which contribute to reduce greenhouse gas emissions. Internet connected sensors and drones can monitor silo conditions and crop health, enabling more efficient farming practices and collaborating to combat the climate change. This is a suggestion, like I said, not just to the Latin America, but to all countries who have great productions into the agricultural area. And I think we can discuss about this topic because the Brazil have been working on it to solve the problems about the gas emissions into the agricultural production. And this is an important point to discuss to countries like, for example, India and China that have a large productions of many things. About the climate communication and education, social media and online platforms can raise awareness about climate change and mobilize global efforts. We can use the online platforms, like, for example, Telegram, WhatsApp and other ones to show to many people. And this is an important point because in my country, for example, the population of Brazil, something about 80% of the population have access to internet. So we can use these things to achieve the population that many times don’t know about these important topics. So many times we just excuse these important points in places like this one. And the population of the countries, people around the globe don’t know about these things. Many times we are talking about climate change in a place like this one. And many people in Brazil, for example, don’t know about it. And many politicians like the ancient president of Brazil collaborate that things like this one don’t achieve the population. So the next one about the circular economy, and I think I’ll have more points by the time it’s up. So I can say about the circular economy, climate monitoring and early warning systems, transportation and climate modeling and prediction. So collaboration between governments, businesses, research institutions and individuals is crucial to leverage the full potential of the internet and technology in the fight against climate change. Policymakers can also incentivize and regulate sustainable practices and the development of eco-friendly technologies to accelerate progress. So this topic, the climate change, is not a problem of the world, but it’s a problem to the humans. We’re not talking about the planet. The planet will continue to exist, but we are talking about the existence of the humans. If humans don’t treat this problem like an important one, the humanity will finalize his existence into the new years. So we have to talk about this one. We have to think this thing into the governments, into a place like this one. We have to talk about into the colleges. And I think if we do this, if we do this homework, we can reduce and collaborate to the health of the planet. So thank you so much.

Moderator:
All right, thank you so much, João. Because of the time, I’ll advise the next speakers to use a few minutes. Yes, so I’ll welcome Rosana. Yes, please take the floor.

Rosanna Fanni:
Thank you. Thank you very much. I’m also very delighted to be able to speak here today. Yeah, my name is Rosana Fani. I am German and Italian, and I was based in Brussels until just a few weeks ago. And I’m actually today speaking behalf of the German Youth IGF, which I’m very, very happy and honored to represent here today. And we, as the German Youth IGF, have been actually discussing this topic as well. And we were convening together in September as part of our German IGF, so the local IGF that happened in Berlin. And we had an event, so to say, where we discussed the intersection between sustainability and digitalization. So how can the two things go hand in hand? And I will share some of the results that we have discussed and which we wanted to bring to Kyoto, to this IGF, to the global IGF. So I’m very, very happy, and I also thank the entire team that is now still sleeping in Europe for all the work that we have done together. But first, maybe let me share a few words about actually what the European Union is doing in the space of the green and digital. We in Europe have, I think, quite soon understood that the topic is crucial. I mean, Greta Thunberg and the climate movement has actually originated in Europe, as you know, in Sweden. And we feel a big responsibility also because Europe is a big emitter of climate change and climate emissions. And so this is why in 2019, the current European government or the European Commission has come up with very ambitious climate goals that we should achieve. So by 2030, so that’s in six years almost already, we should cut our climate emissions by half. And by 2050, we should be climate neutral, so net zero. That’s very, very ambitious also for the European economy that also relies often still on very traditional and very resource-intense ways of fueling the economy. But we also have understood that we need. to do it. We need to go ahead and so the European Commission has decided that they have a certain strategy that’s called the EU twin transition. And the EU twin transition is basically the combination of green and digital transitions together. So the idea is that only through technology and through data sharing innovation we can actually also make our economies more sustainable and climate friendly. And we have already heard that there are certain contradictions. However to this topic we know that the green and the digital can also clash. As my previous panelists have said for example more technological waste, e-waste and data privacy concerns when more data is shared. We have also difficulties when we do for example new and large language models that consume a lot of energy and so on. And we saw in the German IGF in September we have thought about what could be enabling conditions and what would be needed for policymakers to really enable a more just and digital transition that respects the rights of citizens not only in Europe but also globally. And so we have come up with three different areas where we would present so to say our recommendations. This is firstly the area of ecology or basically the environment. And the second area is the economy. And the third area is the social aspects. So I will start with our recommendations for the ecology or environment. The first point that we have concluded that we need more is better transparency. We think we need better systematic data on the environmental impact of digitization. So what we have already heard earlier. We need to understand better the entire lifecycle of the application. So not the internet that my tablet is using now but actually from the very moment on that the tablet is designed and that it’s conceptualized and built together in a factory. And we need more transparency as consumers about it. We should know how much the materials, the digital devices that we use actually consume. And we should also have the measurements should be taken out, carried out independently. So it should not be the companies themselves that may you know make some numbers nice but it should be independent measurement. And the results should also be made accessible in an accessible form. So not very complex reports that you have to study over hours but it should be very clearly and visible for users. Next point on the ecology is that we want to promote intrapreneurial thinking and a compliance culture. So we argue that we need to create new environments in which environmental sustainability is seen as a chance by startups for example or entrepreneurs. And that it also gives economic advantages instead and also long-term investments instead of something where you have to comply and where you have to tick the checkboxes so to say. And we think that this can happen through educational programs, raising awareness programs and yeah in order to ensure that innovation and sustainability go hand in hand. And the third point of the ecology is that we want a legal commitment to sustainability by design and sustainability by default. So what I already said earlier in the design process already ecological sustainability should be included and also weighed as a factor of importance alongside other economic factors or performance related aspects. So that really consumers can see how much actually this device is sustainable or used sustainable approaches. And this is also sustainable by default. Okay then I will move to the aspects of the economy and there we have two points that we would like to present. The first one is independence and we believe that the circular economy approach should play a really central role in reducing the political dependence on individual countries with large critical raw material deposits. Maybe a little information square for those of you who haven’t really yet heard about critical raw materials. These are rare earths and minerals that are in everyone’s phones and tablets but they are also really crucial for for example solar panels or you know autonomous vehicles. So without those critical raw materials we could not be producing the technologies anymore that we use today and that we are solely reliant on for sustainable energy production for example on solar panels. But the problem is that these critical raw materials are mostly concentrated in a few countries so it’s very hard to get access to them and most countries are very dependent on those countries to allow them access. And so our point is really that we would need more independence and also expand recycling which is another point that I get to. Recycling and other circular economy initiatives where we could then reduce our independence on those countries but already use the critical raw materials that we have been extracting and to in order to really strengthen economic stability and security. When it comes to research funding and we also want to extend funding for the applied circular economy so that also researchers can better conceptualize how the value chain of those materials is and also how maybe new jobs can create it along this need. So last but not least the social aspects because we also believe that sustainability and digitalization should be benefiting all and not just the few privileged ones not only also in Europe but also worldwide. A key concern is still that we need more transparency and accountability in the context of digital education especially artificial intelligence. So we believe that manufacturers should have an obligation to explain actually their products especially also to children so that it’s clear to children I mean to us maybe it’s clear if you see ah this is made by an AI that it’s we know we understand it but it’s still very difficult to explain it to children. And we think it’s really important to prepare children for the digital world and also make them aware that there are risks and challenges. Then we will put forward another point on participation. So we want more equitable access for all population groups including also older people and children and also structurally discriminated groups and I think this is also very much in line with the panel because in Europe we have quite a good access already but if we look worldwide then we need much more and what needs to be done much more in terms of connectivity and enabling people to meaningfully participate in the digital environment. And last but not least we also hope to increase a digital sovereignty so we mean that the internet stays open free and secure and that we can have data sovereignty so that the data is not captured and sold by big tech companies but that individuals can decide over their own data where it’s going and what it is used for. And last but not least also educational projects especially in media and media training and media awareness and also to include the common good in digital policy. Thank you.

Moderator:
Thank you very much Rosanna. Thank you for sharing important points that you were taken from the youth IGF Germany. And lastly I would like to invite Denise.

Denise Leal:
Please welcome. Hi everyone. So I am Denise Leal from Brazil. I am here representing the Latin American Caribbean region and I am happy to say that I am also a former fellow from the program Youth from Brazil. I am seeing some people here that came with the delegation. I am also but today I’m here also representing the private sector. I know I am young but yes I’m representing the private sector. And also I am a researcher at the Brasilia University and my research it’s related with it. I am part of the natural resources law and sustainable development research group. And I would like to add some things that we have researched to this discussion to this topic. The first thing that I would like to say is we know that climate change exists is a problem and we know that we have solutions. Now we have heard a lot about the solutions that it’s possible to implement that we could implement technology solutions that we could use to help in helping solving the problem. But there is an aspect which is do we have the necessary infrastructure, the need infrastructure in every country to implement the solutions? Can we really implement them? When we talk about Latin American Caribbean we don’t have the need infrastructure to implement these technologies. Not all of the technologies. It’s expensive and also there are many people that don’t have the knowledge to deal and to work with these technologies. We need to put more effort on making cheaper solutions cheaper technology solutions to the countries who cannot buy the expensive solutions that well work really well but they don’t are accessible to these countries. Another point that I would like to bring is when it comes to legal disputes, when it comes to technology and environment disputes, what is the end of it? What is the final decision? What do judges decide when it comes to legal disputes? What do we have? So we have researched it in Brazil and it’s a cooperation actually of Brasilia University with Chile, France and also Canada and we’ve noticed that yes we have a lot of litigation on the theme of environment but when it comes to the end we have some good decisions that protect the environment but we have other problems like how can you guarantee that these decisions will really work? What we have in Brazil and in other countries that we have researched is you have a decision, a legal decision that will say that you have to protect the environment that yes this is there is a law saying that you have to protect the environment but in the end there is no way to comply with it like the fiscalization that it’s not easy and this is a huge problem like the control, accountability and compliance of the environment laws and court sentences are very fragile and many times the supervisory bodies are small and incapable of making a true and a constant daily supervision. So I wanted to add to the discussion this important aspect that we have researched in Brasilia University because sometimes we think that okay we have technology and we can implement it we are going to solve the climate change problems but it’s expensive and secondly it’s important to know that it’s hard to keep watching, keep your eye on it and how we don’t have and one of our policies questions is which policies can we make, can we build, can we think about to guarantee that we are really fighting against climate change and implementing technology so I would suggest that more than thinking about new laws how can we make the laws that we already have on environment topics really work so what I think is that we need to have more work, hard work on compliancy and comply with these laws that already exist. I think that everyone, the private sector, civil society, academia, tech community, United Nations and all the country’s governments especially those with the economic possibility and interest should put more effort on helping to find cheaper technology solutions to fight climate change and otherwise there are people in countries who won’t have the possibility of implementing it. To end my speech I think that we talk in a way the end is war, the world is ending for us but the world has already ended for some species. 45% of Cerrado Brazilian biome will end with an increase of 0.7 Celsius degree so it’s not one degree it’s 0.7 it’s less than a degree in 45% almost the middle of the whole biome is going to end with this increase so we are worried with our futures but what about the environment rights like doesn’t this species have the right to exist? Thank you so much and I also want to say thank you to my family and friends who are here and Obrigada. Alright thank you very much to our dear speakers

Moderator:
and I hope all of us have heard what they have presented and I’m actually from this discussion what I have noted mostly is the sense of accountability and responsibility that each one of us has to play to make sure that climate change is really addressed so yeah but since we are like out of time I’d like to open the floor to our participants if you have any contribution

Audience:
any question yes please use the mic behind yes hello everyone I’m Manu I’m from Brazil and I represent here Instituto Alana which is an organization dedicated to the protection of child rights and when we talk about the environment and digital rights it’s very very special to talk about children and I thank you for bringing this point and one thing that I would like to add to this debate is how can we think about the everlasting effects of digital colonization when we are talking about global solutions to problems that we have now and I think a great example it’s what happened earlier this year in Uruguay where Google wanted to build a big big data center and we talked about AI so much in this forum and about solutions that need this kind of infrastructure but the people there were having they couldn’t have water for their own consumption and then we had a government who was privileging the interest of a private company of a global power and that interest in that the interest of the local population so that’s a question like a global solutions are very important but we have everlasting effects of colonization and we are suffering with them and how do we think about digital sovereignty when we think about the solutions and when we think about how can we build in our countries like Brazil for instance and well building solutions that are not actually just serving for the purpose of big global interests and companies who are dominating this economical debate. Thank you very much for your contribution and we’ll move to the next person. Good morning everyone I’m Phelps I’m from Brazil I’m part of the youth delegation from Brazil and my question is about how could we deal better with electronic waste as one globally. As Sakura mentioned electronic devices have life cycles smaller as time goes by and this program of solvency is a really big deal and I can say this because there in Brazil, for instance, we have some local initiatives for recycling and that’s really important for us. Besides that, when it comes to electrical devices, it’s not that simple. Those initiatives do better when we say about paper, when we talk about plastic, but electrical devices, there is another level of treatment. I think I could say that. So lithium and other substances are really nocive to people and to the environment and in the all the environment and when they are used to technologies that even when they are used to technologies that could help us against climate change. So we have some kind sort of a problem right there. We create technologies that could help us against climate change, but use some kind of these substances in them. So we have kind of a cycle there and my question is where sustainability by design can appear in this scenario of high amount of technological waste and as UN says that’s a global issue and global issues are connected and that matters when we talk about climate change. Thank you very much. Hi everyone, sorry for the voice. I am Carla Braga. I am a mentor of the Brazilian Youth Delegation as well. I am Executive Director of the Amazonian Youth Corporation for Sustainable Development and I wanted to understand if we have any successful examples of our experience about facing the impacts of climate change in global south and if possible considering the Amazon region that has used the technology to face the challenge of climate change. Thank you. Thank you very much. Alright, do we have a question online? Igor? Any question online? No, no. Alright, so we’ll, ah, sorry, please welcome. Hi, this is Jasmine from Hong Kong.Asia. So I have, so I just want to respond to what Stanis just mentioned. I agree that like it also depends on how, you know, like each nation’s capacity and each territory, like how do they deal with, you know, like climate change and the problem of, you know, the key is how do we localize the so-called global solution into each, you know, different context. But the thing interesting I find out because, you know, like previous, like day zero, we actually, the .asia, we relaunched our e-commerce internet and that’s here. So we’ve done study about 14 jurisdictions about, you know, like energy consumption, efficiency, and also economy aspect of this jurisdiction. And it’s actually interesting that Hong Kong is actually not in a good position that we thought it is. So like I’m kind of like sad to say that Hong Kong is not in a very good performance status. So I just want to raise a point here. It seems like it’s not just about the capacity. So obviously we do have economic power and also infrastructure to localize the global solutions for, you know, like to tackle the climate change things. But here I just, I also want to get, you know, some inspiration and maybe good case practice from you guys when you have to identify the decision maker, you know, to talk about your agenda and your, you know, your thought about how to tackle with climate change. How do you identify them and how to lobby with them and, you know, negotiate some thoughts that you have as a youth. So I think that’s it, what I want to ask about. Thank you very much. Thank you very much, we, yeah, let’s do that. Hi, I’m Irene from, oh, sorry, what is your name? Hi, I’m Ethan from Hong Kong as well. So I believe that Sakura have just talked about how internets and technology to fight, can collaborate to fight climate changes. So actually I’ve been working on some, some same project, some projects that is related to this topic. And I have just a very short question, is that how can internet of things be harnessed, harnessed to create a more energy sufficient system and reduce carbon emission? And that’s all, thanks. So hi, I’m Irene from IEEE and it is very refreshing to see all these young people. So I think I should put you also in contact with the IEEE young professionals task for climate change. So I, innovation is very close to IEEE and I was doing, analyzing patents for a living for a long time, working also with NP Brazil. So I think we know as a, as a fact that we have enough technology innovation and I think the examples that you mentioned around, which are the win-win situations where it’s about energy efficiency are an easy sell for companies. But the question is, who is taking the accountability about, you know, adopting the other technologies which are very costly? And I wonder where the other, you would have some other thoughts on beyond some incentives that the government’s could give, because the question is, are these enough? And I would like to have your thoughts about what do you think is the importance in, Rosanna mentioned before about the importance of lifecycle assessment. So I was wondering, there is a discussion in Europe about the European Green Digital Coalition, about how do we define, for example, avoided emissions when we talk about net impact. So I was wondering what are your thoughts about systems thinking in that, and what is the role of standards in that? Thank you so much.

Moderator:
Thank you very much. Since we’re out of time, I’ll welcome the speakers, if you can respond to any of the questions asked. Please. Yeah, one minute or second.

Rosanna Fanni:
Okay, thanks a lot for so many questions. I’m glad to hear that we have sparked so many ideas and thoughts. I will just maybe touch on two points, the one point on a colonization and the other point on standards. So first on colonization, absolutely, that’s a huge problem. I think also, especially big tech companies have way too much power, as we know, and there should be, I think, more concerted efforts by the United Nations, by other supranational and international organizations to curb the power of those tech companies. But at the same time, I think, again, transparency is super important, because if consumers would really see the impact of, for example, a Google data center in, why did you say, yes, perhaps there would be, you know, also a mind change from the consumer side and from the recipient side. So I think it’s really important to bring more transparency and also to have more global reporting about those cases, because there’s similar cases with Meta doing the Open Africa ICT project, where they scan biometric data of citizens and use citizens to explore, so to say, the 3D landscape. The second point on standards, I hate actually not mention it, but standards are also one part of the European strategy, of course, and to standardize green digitalization together. I think standards are absolutely crucial, but again, there the question is, how do those standards bodies produce those standards? Is it inclusive enough? Is it also representative of civil society and maybe members that cannot afford to be in those standardization bodies? And definitely, I think, in the end, all these questions that we discuss ultimately political questions that policy makers have to tackle first, and if policy makers do not put their priority on green and digital or digital sustainable digitalization, we will not get anywhere. So I think it all originates in political priorities and making this even more topic, and then through standardization and other measures that I’ve already mentioned. Thank you. Thank you very much,

Moderator:
Rosana. One more contribution, please, and then the rest we can. So thank you for

Denise Leal:
this amount, this large amount of questions. We are happy with your participation. So I’ve noted something here. I wanted to speak about the question of successful examples and lobby, the question of lobby and about the standards. Beginning with this last one about the standards, we have ESG with all those standards, but I think that the reports, they are, we have a lot of lies on the reports. So there is a problem. How can we really read these reports on climate and with the standards that deal with climate change and believe on them? I think that we need to work more on how we can check these reports and how they are made, because the standards are good, but they are there and we don’t have how to check them. And again, we are with the problem of compliance, how we check those things and how can we make them really true, because I think that the standards are good, but we don’t have how, we don’t have a really nice way to check if they are being true in the reports. And the successful examples and also about lobby, I think that we can talk about this too, together. We had some, we have studied some examples when politician and lawyers have worked together in, to create some solutions on patents and biodiversity aspects. So they, we know that when it comes to international treats, we have some problems because you cannot solve things only with legal disputes. You need politicians to help you. So what we have noticed in the international aspects of environment and legal disputes is that when you have these two groups working together, the legal group and the politician group, you might have something, some good example of success in the end. I cannot say that we have a lot of successful examples, like we know there is more traditional communities that have successful examples of how to protect environment, but it’s really small. It’s a thing that we can adopt in our small communities, but if we are speaking in a more big way, looking globally for a solution, we must make our politician and our legal teams work together, like the judge and the politician must work in the same line. They must be aligned, it’s what I think, and what we have noticed in our research. Thank you.

Moderator:
30 seconds, please. Okay, I will answer the question from, I forgot the name of the

João Vitor Andrade:
of the sir of the Hong Kong, but about the electric system, about the consumption and about the distribution, how we can build a better system. I will give the example of my country to answer your question because I think I have more knowledge about that and so I can explain better. The question that I have to think is, how we can build a better system where we can distribute and not just think about the production, but how we use correctly the energy in our countries. Because many countries in Brazil, I include Brazil here, just thinking about the production, for example in my country at the moment we have a discussion about the production of wind energy offshore, and in Brazil we have a lot of debates about this topic, because it’s more cheap to produce energy instead of to rethink the system, to rethink the system to build a better system for a good distribution. So what I think that the countries have to do is, where is in the region of the country we can use AI to do it, what part of the country have a higher consumption, what have a low consumption. So we can use the AI to achieve these numbers and can rethink about it, because if we use it we will not need to produce more energy, but just to distribute correctly into the country. So if we do it, we can reduce, for example, the use of fossil fuels, like coal, and large countries like China, for example, have a great production of energy based on coal. So if we think, if we build a better system to distribute correctly the energy, we can reduce the use of fossil fuels, for example, and you can reduce the carbon emissions and collaborate to fight against the climate change. So I think I don’t have much time, but thank you so much for the question. Thank you very much. Thank you for various questions. I’d like to

Speaker:
answer about e-waste and energy efficient systems. Well, about e-waste, how we can tackle the e-waste, because it’s a global problem. It’s definitely, I totally agree with it, because the e-waste is not the problem that related to the, directly related to the countries, about the e-waste, e-waste production country and consumption countries. So I think the community, the engagement and the policies that support those activities and initiatives at the local levels are important, because if we can create good policies, if the local people or the people on the ground can’t take the actions or provide their voices to the decision-makers, the policies are not implemented. So how we can take these problems seriously and take actions urgently is really important. So at the first big step that we need to, what happens in the other areas in the same world, and also as well as knowledge sharing from the other areas, because even if we have the multifaceted programs in the different part of the world, we can have something, we can learn about something from the latest problems. And also, I think we need opportunities to discuss and learn about the case studies more, because we can get the feel of the centre, how to say that, we can let people be involved in the same programs. And regarding the energy-efficient systems, how the internet harnesses energy-efficient systems, the speaker from the Hong Kong asked, I think that smart grid and the energy consumption and production at the local levels are really important. So some areas in Japan, mainly in the metropolitan cities, we take the local heat management system, and also we are trying to develop smart grid systems that can manage the energy supply and demand in the local, I mean, the specific areas focusing. So the smart grid and the local level, the energy management systems are really important. Thank you.

Moderator:
Thank you very much, Sakura. James, please, one line to close, to conclude your… All right. Since he’s not there, I’d really like to appreciate each one of you for participating and joining us in this session. Thank you very much for being an amazing audience, for asking questions and contributing. See you around. Our speakers will be outside for any questions, any contacts, please let’s meet outside. Thank you.

Rosanna Fanni

Speech speed

157 words per minute

Speech length

2038 words

Speech time

778 secs

Audience

Speech speed

158 words per minute

Speech length

1335 words

Speech time

507 secs

Denise Leal

Speech speed

145 words per minute

Speech length

1326 words

Speech time

549 secs

Igor José Da Silva Araújo

Speech speed

130 words per minute

Speech length

600 words

Speech time

277 secs

James Amattey

Speech speed

133 words per minute

Speech length

933 words

Speech time

422 secs

João Vitor Andrade

Speech speed

152 words per minute

Speech length

1642 words

Speech time

650 secs

Moderator

Speech speed

131 words per minute

Speech length

923 words

Speech time

423 secs

Speaker

Speech speed

134 words per minute

Speech length

1198 words

Speech time

535 secs

Current Developments in DNS Privacy | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

David Huberman

The summary emphasises the importance of DNS privacy, as DNS queries can reveal personal information about individuals. The clear text nature of DNS data until a few years ago made it accessible to anyone. This highlights the urgent need for developing protocols that ensure DNS privacy. The DNS was created in 1983, but privacy-focused developments only began in the last five to six years.

Paul Makapetris, the inventor of DNS, is credited for solving significant issues regarding the scaling and knowledge of all hosts on the internet. Prior to DNS, existing processes were unable to scale effectively. The creation of a distributed system through DNS enabled anyone to access information about hosts and their corresponding IP addresses, greatly enhancing the functionality and efficiency of the internet.

Jeff Houston, the chief scientist of APNIC, is regarded as a highly respected authority in the field of internet engineering. His deep understanding of the internet and its engineering aspects is acknowledged by David Huberman. As a thought leader, Jeff Houston is considered one of the best sources for discussing technical considerations related to DNS privacy.

In conclusion, DNS privacy is crucial due to the potential exposure of personal information through DNS queries. The delay in developing protocols for DNS privacy is seen as a missed opportunity, considering the long history of DNS and the recent start of privacy-focused developments. The invention of DNS by Paul Makapetris is credited for resolving critical issues associated with scaling and knowledge of internet hosts. Overall, Jeff Houston’s expertise in internet engineering is seen as valuable for discussions on the technical considerations of DNS privacy.

Geoff Huston

DNS privacy is an incredibly important issue, as DNS queries can track online activities, and if someone sees your DNS queries in real time, they essentially have access to all your secrets. Manipulations of DNS queries are also possible, as applications believe the first answer they receive. However, the DNS industry has made positive strides towards improving DNS privacy and security. Efforts such as query name minimisation and implementing encryption protocols like HTTPS and QUIC are being employed to protect DNS transactions. Despite these advancements, there is a challenge in balancing the need for an efficient network with the need for privacy in the DNS. Additionally, the technical community is working towards an opaque system that removes attributions in name use, but this may lead to a loss of transparency. The role of ICANN in DNS privacy is uncertain, and applications have gained control over the DNS, leaving traditional infrastructure operators behind. This shift towards application-driven technologies presents challenges for infrastructure operators. Overall, DNS privacy is a critical concern, and while improvements are being made, there are still challenges to address.

Manal Ismail

The European General Data Protection Regulation (GDPR) has had a significant impact on the GTLD Whois landscape. It mandates the reduction of personally identifiable information in registration data, radically changing the landscape. However, implementation of GDPR varies depending on the registry or registrar involved, resulting in a fragmented system. This has introduced several key issues, including increased ambiguity regarding the differentiation between legal and natural persons.

To address these challenges, there is a pressing need for standardized regulations and mechanisms for accessing non-public registration data and responding to urgent requests. However, reaching an agreement on the necessary policy recommendations has proven difficult. For example, the Governmental Advisory Committee (GAC) has found the proposed three-business-day timeline for responding to urgent requests unreasonable.

Another challenge arises from the lack of policy applicable to domain registrations subject to privacy proxy services. The use of privacy proxy protection has increased over time, and governments within the GAC are unsure of how to address this issue. The absence of clear policies in this regard makes it difficult to ensure compliance and protect privacy rights.

Improving the accuracy of GTLD registration data is a prioritized area of work. The GAC principles place great importance on the accuracy of this data, and ICANN is preparing a comprehensive assessment of the activities it may undertake to study accuracy obligations in light of applicable data protection laws and contractual authority.

During discussions, Manal Ismail expressed agreement with Steve and Farzi regarding the significance of data collected during the proof of concept. This demonstrates the recognition of the value of such data in informing decision-making and shaping policies.

Moreover, Manal Ismail believes in the necessity of constructive and inclusive discussions within ICANN’s bottom-up multi-stakeholder model. Despite diverse views, all participants were observed speaking from a public interest perspective. This highlights the importance of finding a balance between privacy and safety while considering the broader societal impact of ICANN’s decisions.

In conclusion, the GDPR has brought about significant changes in GTLD Whois records, necessitating the need for standardized regulations and mechanisms for accessing registration data and addressing urgent requests. The lack of policies applicable to domain registrations with privacy proxy services poses additional challenges. Efforts are being made to improve the accuracy of registration data. It is crucial to recognize the value of collected data during the proof of concept and engage in constructive and inclusive discussions to strike a balance between privacy and safety within ICANN’s bottom-up multi-stakeholder model.

Audience

During the ICANN62 Policy Forum, discussions on data privacy and access covered several crucial points. One speaker highlighted the potential harm caused by publicly accessible personal data of domain name registrants. For 20 years, this sensitive information, including mailing addresses, phone numbers, and email addresses, was available to the public. This raised concerns regarding the potential risks and harm that could arise from such unrestricted access to personal data.

On the positive side, another speaker mentioned the improvement brought by the advent of privacy proxies. This development allowed for increased privacy protection by masking personal information in domain registrations. This was seen as a step in the right direction towards improving domain privacy.

The forum also acknowledged and appreciated ICANN’s focus on DNS privacy. In one of the workshops, ICANN specifically titled it as DNS privacy and emphasized the importance of privacy in addition to access. This recognition highlighted the commitment to address privacy concerns and protect the data of internet users.

Transparency and accountability regarding law enforcement’s access to people’s data were deemed important. It was stressed that governments and law enforcement agencies should be transparent in their requests for access. This would ensure that there are clear processes in place for requesting and granting access to personal data, minimizing the potential for misuse or abuse.

Concerns were raised about the implementation of metrics for requester’s access, particularly when the requesters are from authoritarian countries. Questions were posed regarding the accessibility of data to law enforcement in such countries and the verification process to ensure compliance with ethical standards. These concerns emphasized the need for a robust system that prevents unauthorized access to personal information.

The audience also expressed the need for clarification on who has access to the data and how it is granted. This highlighted the importance of defining and understanding access privileges to ensure that data is only accessed by authorized entities and for legitimate reasons.

The adoption of the Registration Data Access Protocol (RDAP) was seen as a positive development. RDAP is a standardized protocol aimed at improving data access and security in domain registrations. However, concerns were raised regarding data privacy and security under the new protocol. The example of Indonesia was mentioned, where a local law prohibits the disclosure of data, even for legitimate law enforcement interests. This highlighted the challenges of reconciling different data protection regulations and ensuring compliance.

Data ownership was emphasized as a fundamental aspect of data protection and privacy discussions. Registrars were highlighted as having an obligation to comply with the data protection laws of the country whose residents’ data they hold. With potential obligations under multiple data protection laws based on the nationality of residents, the need for clarity and understanding of data ownership became crucial.

The forum also recognized the importance of ICANN, IETF, and IANA in addressing DNS privacy and developing policies. There was an expectation for these organizations to be actively involved in considering the costs and benefits of potential tools and providing guidance on DNS privacy.

Regarding Request Distribution Reporting System (RDRS), concerns were raised about its adequacy as a measure of demand. The need for improvements, such as the ability to allow bulk uploading of requests and retaining requester data for analysis, was suggested. It was proposed to hire a privacy lawyer for an in-depth review to ensure the system’s effectiveness.

The uncertainty of registrar participation in RDRS and its potential impact on requesters’ engagement was highlighted. It was remarked that promises on the operation of RDRS could not be successfully delivered due to the unknown number of participating registrars. A negative initial experience discouraging further engagement was also mentioned as a potential consequence.

Suggestions were made to retain data for evaluation purposes to provide an incentive for requesters to continue participating, despite potential initial disappointments. The low submission of requests indicated that some requesters might be tackling the issue without relying on data, but the importance of data retention for downstream analytics was emphasized.

Making participation in the Expansive Secure Synchronized Access and Disclosure (ESSAD) program mandatory was seen as beneficial. It was recognized that ESSAD could potentially serve as a valuable resource for data gathering and enhance the effectiveness of data access and disclosure.

ICANN’s participation in discussions on DNS abuse was mentioned, indicating a commitment to address and mitigate abuse issues in the domain name system. This participation demonstrated the recognition of the importance of maintaining a secure and abuse-free online ecosystem.

The lack of uptake of encrypted DNS, DNSSEC, and other protocols was highlighted, raising concerns about the security of the internet infrastructure. The need for end-user involvement in the design and implementation of standards was emphasized to ensure better adoption and implementation.

Lastly, the importance of not compromising enterprise cybersecurity through the “going dark” phenomenon was emphasized. Privacy was viewed as deluded without security, and it was emphasized that removing all data without ensuring proper security measures would lead to a worse privacy condition than before.

In conclusion, the discussion at the ICANN62 Policy Forum highlighted the necessity of addressing data privacy concerns while ensuring responsible data access. It underscored the importance of transparency, accountability, and clarity in the process of granting data access, especially for law enforcement agencies. The adoption of protocols such as RDAP and ESSAD were seen as positive steps towards improving data privacy and access. However, concerns regarding privacy, security, and the participation and effectiveness of various systems were also raised, emphasizing the need for continuous improvement and collaboration among stakeholders to ensure a secure and privacy-focused internet ecosystem.

Becky Burr

The discussion revolves around the need to protect privacy in the Domain Name System (DNS), particularly with regards to WHOIS data. WHOIS data contains information about the registrant of a domain name, and access to this data can potentially be misused for phishing, fraud, and suppressing free expression.

To ensure appropriate handling of data, it is important to adhere to fair information practice principles, which include principles of lawfulness, fairness, transparency, and accountability. These principles should guide the way data is dealt with in the DNS.

One notable development in 2018 was when WHOIS data went offline and became accessible only upon request. This change was made to provide better accountability and protection of privacy in the DNS ecosystem.

While ICANN (Internet Corporation for Assigned Names and Numbers) plays a role in supporting and facilitating registrars in their data processing responsibilities, it cannot dictate the outcome of the balancing test that registrars must perform when determining the accessibility of data. The responsibility for data processing lies with the respective registrars.

Queries associated with an IP address can provide information about individual and institutional internet uses. However, it is argued that not all queries associated with an IP address should be public. The public nature of the DNS is essential for resolving queries, but privacy considerations should also be taken into account.

Registrars, who hold the data, make decisions about the release of data based on a variety of circumstances. These decisions are informed by the relevant laws, regulations, and the registrars’ own company policies. The release of data should consider legitimate interests and the privacy rights of the individuals involved.

Data ownership is a complex issue that is fundamental to the discussion of data protection and privacy. Modern data protection laws apply not just to processing data within a country but also to the information about residents of that country. When users register a domain name with a registrar, they agree to the registrar’s privacy policy. Additionally, the ICANN contract requires registrars to make certain disclosures.

Compliance with the law is crucial for registrars. Even if registrars are located outside a particular country, they may have obligations under the law of the country where the resident whose information they hold is located. Therefore, registrars must comply with the applicable laws and regulations governing the processing of data.

In terms of encouraging participation, it is suggested that collecting data for downstream analytics can serve as an incentive for registrars to participate. This data can offer valuable insights into the DNS ecosystem. There is even a suggestion to make participation mandatory for all registrars, as it is seen as important for the overall functioning and improvement of the system.

Finally, there is an acknowledgement of the importance of understanding the needs of requesters through the system. This understanding can help address any issues or concerns and improve the overall experience for all parties involved.

In conclusion, the discussion highlights the importance of protecting privacy in the DNS, specifically in relation to WHOIS data. Fair information practice principles should guide appropriate data handling, and registrars are responsible for complying with relevant laws and regulations. Data ownership and privacy are complex issues that need to be considered in the context of data protection. Encouraging participation and understanding the needs of requesters are also essential for the effective functioning of the DNS ecosystem.

Yuko Yokoyama

ICANN (Internet Corporation for Assigned Names and Numbers) is developing a new service called RDRS (Registration Data Request Service), which aims to simplify the process of requesting non-public GTLD (Generic Top-Level Domain) registration data. RDRS will act as a centralized platform for registrars to submit and receive data requests, benefiting stakeholders such as law enforcement agencies, IP attorneys, and cybersecurity professionals.

RDRS is a voluntary service and ICANN cannot force registrars to disclose data through this platform. The decision to disclose or not lies with the registrars, and RDRS operates as a proof of concept service for up to two years.

Key features of RDRS include the automated identification of domain managers, eliminating the need for requesters to identify them themselves. Additionally, requesters will have access to their past and pending requests within the system.

It is important to note that the disclosure of requested data is not guaranteed by RDRS. Each registrar conducts a balancing test before deciding whether to disclose data, taking into account local laws and other applicable regulations. This ensures compliance with legal regulations and protects individual privacy rights.

Only ICANN accredited registrars have access to the RDRS system. They act as intermediaries between requesters and the platform, holding the registration data and routing requests accordingly.

In summary, ICANN’s RDRS aims to streamline the process of requesting non-public GTLD registration data. It provides a central platform for registrars to submit and receive data requests, benefiting stakeholders such as law enforcement agencies, IP attorneys, and cybersecurity professionals. However, the decision to disclose data is ultimately up to the registrars, considering local laws and regulations. Only ICANN accredited registrars can use the RDRS system.

Session transcript

David Huberman:
you for your patience. Welcome to this workshop. I could use my glasses, thank you. Welcome to this workshop. We are going to be discussing current developments in DNS privacy. And why are we going to be discussing that? Well, when the internet was created in the 1960s, 70s and 80s, we were just trying to engineer solutions that would work, that would allow us to intercommunicate. And in 1983, Paul Makapetris was able to invent the DNS to solve two important problems that we were having at that time. One of them was about scaling. One of them was being able to know what all the hosts on the internet were. And the way we were doing it did not scale at all. And so the DNS was able to fill that gap by creating a distributed system that would allow everyone everywhere to know all the hosts and all the names on the internet and map them to the IP addresses that are necessary for computers to talk to one another. Importantly, it also, okay, thank you. Importantly, it also enabled email, it enabled email at scale, because email before 1983, you had to be a human router, you had to describe all the different steps that an email needed to take in order to reach its destination. The DNS allowed us to scale that. So all you needed to know is where someone was me at ICANN.org. And then on the left side of the app, all you needed to know was my address, david.huberman for the local routing of that email. Okay, so why am I giving you all this history? Because from 1983, until really, five or six years ago, ish, all of the DNS data that everybody in the world used and communicated in their queries was all in clear text, it was all out in the open for everybody to see if you were listening on the wire, or if you were operating a DNS element that was looking at your query. And this is a problem because DNS queries have a lot of information about who we are and what we’re doing. It’s 2023, or back then it was 2017, 2016. And this was not acceptable anymore. Privacy is a right, privacy is a responsibility. So we began to develop solutions to increase the privacy of the DNS. I am very honored and you are very lucky that we are going to hear today from four world class experts, who are going to talk about some of these developments, both historically and contemporarily. Today on the panel, Becky Burr. Online, we have Manal Ismail, we have Jeff Houston, and we will end with some new developments here at ICANN with Yuko Yokoyama. So to begin our session and to set us in a good historical perspective and a good legal perspective, it’s my honor to introduce Becky Burr. Becky Burr is a member of the ICANN board. She’s a world class privacy attorney in Washington, D.C. And most importantly, ICANN is entirely Becky’s fault. So if we could please put presentation one, if we could please put those slides up. Becky, to you.

Becky Burr:
Thank you so much and thank you all for being here. If we could go to the next slide. We’re going to talk about two aspects of DNS privacy. I bet some of you are here because you heard DNS privacy and you thought, okay, this is about IP addresses and queries and things. Some of you heard DNS privacy and you’re here because of WHOIS. We’re going to talk about both of those things. And our hope is that we’ll go through the presentations pretty quickly and have a lot of time for discussion. So we’re going to go sort of way back in the WHOIS way back machine world. In 1998, when the U.S. government issued the white paper on domain name management, it said we need to have an organization that ensures that there are policies out there that require that registrant data, including name and address and contact information, is included in the registry database and available to anyone with access to the internet. Now, the world has changed a little bit and I think if you recall the discussion about access to domain name registration data in NIS2, the European Commission directive, European Union directive, it says something slightly different. It does say that policies should ensure that registrant data is collected, that it includes all of those things, and it should be available to people who have legitimate, people who are verified and authenticated to have legitimate interests in that data. So we’ve come a way down the road in terms of finessing the way we think about access to registrant data to reflect the fact that the world has evolved in terms of considerations about privacy. If we go to just the privacy principles, I’m not going to give you a lecture on data privacy law. I’m just going to tell you that almost all data privacy law is built on fair information practice principles and they’re very fundamental principles that guide how you deal with data in an appropriate way. We’re not going to talk about all of these. This happens to be the formulation of fair information practices that’s found in the GDPR, but they’re all quite similar. The things that we need to talk about in terms of DNS privacy for our discussion today is the lawfulness, fairness, and transparency principle that provides that if you’re processing personal data, it has to be lawful and fair and transparent. And fairness is the issue that we’ll think about here in terms of is the processing harmful, unduly harmful, unexplained, misleading, or surprising to the data subject, and accountability and controllers, the people who make decisions about what data is collected, how it’s processed, what’s it used for. Under modern fair information practice principles, those people are accountable for their use of their processing of data. So let’s just talk about the fairness analysis because I think if we’re all on the same page, we’ll do better here. As I said, it’s about is the processing unduly detrimental or surprising or harmful to a data subject? And you think about it in a couple of ways. What’s the purpose of the processing? Is the processing, is the purpose legitimate? Is it legal? Is it ethical? There’s a necessity component, which is do you need to process this data to achieve the goal that we’ve just decided is legitimate? Is it proportionate? Is there a less intrusive way to get the information you need, achieve the goal that you’ve set out with processing this? And you take those two things and you apply essentially about balancing tests. And you say, given the purpose that I want to balance, that I want to process this data for and the considerations about whether there’s a less intrusive way to do it, how does this balance out? How do my legitimate interests compare to the fundamental privacy rights of the individual data subject? So if we apply that in the context of domain name registrant data, we can go back and talk about the original purpose way back when was really for engineers to resolve routing issues. They needed to be able to get in touch with somebody to resolve an issue. But the function of this data has evolved over time. It’s now used, and this is not a new development. This is really since commercialization on the internet began in 1992. This data can be used to identify and mitigate cybersecurity threats, to fight crime and fraud, to protect consumers, and to protect intellectual property. But it also can be, and most assuredly is being used for marketing, for phishing, for fraud, and for suppression of free expression. So there are some important reasons to process this data. There are also some significant potential for misuse of the data. And when we’re talking about balancing, we think about that. The necessity test is, does the registrant data need to be publicly available for anyone to process? Is there a less intrusive way to address the legitimate interests of cybersecurity threats, crime prevention, and the like? And we saw that in 2018 when GDPR went into effect, WHOIS went essentially offline. It wasn’t published on the internet for anybody to see without any kind of accountability. You had to come and ask for it. And that also was a way of making users more accountable. Because before, nobody knew who was looking at WHOIS information and for what purpose. If you have to ask for it, if you have to provide an email address, there’s more accountability. So considering both those tests, is the access to DNS data proportionate? Is it fair? Is it lawful? The answer to that, I’m sure, is clear as mud to everybody, because it really depends on the context. It depends on so many variables that you can’t have a bright line test. You have to think about this in specific context. So the question then becomes, who gets to decide? And I think it’s useful to focus on this for just a minute. Because remember, we said we’re going to talk about fairness. We’re also going to talk about the accountability principle. And under every data protection law that I’m aware of, the controller, the person who makes the decisions about what data is collected and how it’s used, is responsible and is accountable for applying that balancing test. So in the domain name world, registrars are surely controllers. I don’t think that there’s any question about that. Lots of people will debate whether ICANN is a joint controller or not. I don’t think we need to do that for our purposes. You can decide whatever you want on that. Because whatever the answer is, ICANN can’t determine the outcome of the balancing test for registrars, who are themselves controllers and who must conduct the balancing test themselves. So that puts ICANN in a very difficult position. Because people say to ICANN, why are you not making registrant data available? And the answer is, we don’t get to decide. The registrar who has the data, who is delivering it to somebody in response to a request, they are under the law, accountable for and responsible for making that decision. And even if ICANN was willing to say to every registrar on the planet, if you get fined, no problem, we’ll indemnify you, we’ll take care of you. ICANN still can’t have a policy that says you must do something that you think is against the law. So I just think that we really need this question of who decides has to be fundamental in the way we think about privacy. I didn’t mean to do that. So if ICANN can’t dictate the balancing test outcome, can its policies and can its tools facilitate and support that process? Can we make it easier for people to submit requests? Can we make it easier to ensure that registrars have the information that they need to conduct the balancing test that they’re required to conduct? We’re going to talk about that. Yuko is going to talk a little bit about that in the context of a tool that ICANN is rolling out shortly. Now, the other kind of DNS privacy that we’re going to talk about is the more technical kind. And the DNS, what IP address corresponds to what name, is necessarily public. If that information is not public, you can’t resolve queries. But the queries themselves are not necessarily public. And IP queries that are associated with the IP address of the requesting server can tell you things about individual and institutional internet uses, who’s searching for what. People will differ on how good a tool it is to do that, but there’s no doubt that you can get some information. And there are several technical organizations who are working on this aspect, and Jeff Houston, who is on with us, will talk about that. So I’m going to move on quickly to Manal Ismail, our dear colleague from ICANN, who is going to talk about the government perspective on these issues. And I’m hoping that we will have a lot of time for a discussion, because we almost always can have a pretty lively conversation here.

David Huberman:
Okay. If we could please put the presentation down, thank you. And Manal, if you are using slides, if you would be so kind as to share your screen. Otherwise, if not, please go ahead.

Manal Ismail:
Thank you. Thank you very much, David. I’m not using slides, so I’m good to start. And good morning, good afternoon, and good evening, everyone. I’m sorry to miss the opportunity to be with you all in Kyoto. My name is Manal Ismail. I’m chief expert internet policies at the National Telecom Regulatory Authority of Egypt and former chair of ICANN’s governmental advisory committee, the GAC, and now representing Egypt on the committee. Thank you for inviting me to join this distinguished panel on DNS privacy, and many thanks, Becky, for the excellent setting of the scene. In light of the evolving landscape in DNS governance and the ongoing changes related to access of registration data, governments are striving to strike the right balance between, on one hand, privacy protection and responsible handling of DNS registration data, and on the other hand, ensuring transparency, accountability, and access to accurate and reliable registration data. There are great efforts by ICANN in that respect so far, but I will focus my intervention on four key public policy concerns from a government perspective, of course. Related to, first, reduction of registration data and the differentiation between legal and natural persons. Second, access to non-public registration data and the timeline for for response to urgent requests. Third, the privacy proxy service and fourth and last on accuracy of registration data. So to start with the reduction of registration data and the differentiation between legal and natural persons, as you may all know, and as Becky has highlighted previously registration data were made available through free and public Whois services. Starting the 25th of May, 2018, the European General Data Protection Regulation came into force mandating the reduction of any personally identifiable information which changed radically the GTLD Whois landscape and left ICANN grappling with the potential impact of this on Whois services. Just before that on 17 May, 2018, the ICANN board adopted an emergency measure referred to as Temporary Specification or TempSpec for short, in order to enable registries and registrars to comply with the GDPR while maintaining the existing Whois system to the greatest extent possible. This TempSpec allowed the registries and registrars to redact registration data information unless of course provided with registrant’s consent and required the registries and registrars to provide reasonable access to non-public registration data only based on legitimate interest and for a legitimate purpose. This created the fragmented system with distinct policies depending upon the registry or registrar involved and introduced a number of important issues including distinguishing between registration data of legal and natural persons. And this is to allow for public access to legal person’s data since it does not fall within the limit of the GDPR. The relevant policy development process proposed only a mechanism to facilitate the differentiation for those who wish. So it’s kept merely as an option. Therefore, governments in the GAC urged the development of more precise policies that would protect personal data while publishing non-personal data. Noting that a significant percentage of domain names are registered by legal entities and that some analysis shows that a considerably larger set of registration information was redacted around 57.3% as compared to what is required by GDPR estimated to be only 11.5%. Now moving to access to non-public registration data and the timeline for response to urgent requests. In continuation of community work to develop an accreditation and access model that complies with GDPR, a policy development process was conducted and proposed a standardized system for access and disclosure as said for short where consensus was achieved on aspects relating to accreditation of requesters and centralization of requests. Yet agreement could not be reached on the policy recommendations necessary to provide for standardized disclosure. And the ICANN community is now expecting the rollout of a voluntary proof of concept, the registration data request service which is expected to inform future consideration of the ASAN in terms of demand and I believe Yoko will be speaking more to this. Yet certain public concerns are likely to remain such as the lack of centralization with regard to disclosing data, lack of a mechanism for review of disclosure decisions and worries that the recommendations could create a system that is too expensive for its intended users. Additionally, governments, members of the GAC of course are concerned regarding the timeline for response to urgent requests. And by urgent here, we mean limited circumstances that pose an imminent threat to life, injury, critical infrastructure or child exploitation. The proposed timeline contains improvements of course such as an explicit reference to the general expectation of the response within 24 hours and the requirement to notify the requester if additional time is needed. Yet it allows for not one but two extensions that could bring this timeline up to three business days. And the GAC finds three business days not a reasonable time period for responding to urgent requests and moreover, the use of business days injects uncertainty into the process where the three business days would stretch to seven calendar days depending on diversity of global holidays and work weeks. So in an effort to reach a compromise, there was a proposal that the extension notification must include three things. First, confirmation that the relevant operator has reviewed and considered the urgent request on its merit and determined additional time is needed. Second, rationale for why additional time is needed. And third and last, the timeframe for response which is expected of course, not to exceed two business days from the time of receipt. In a recent exchange, the board requested more information from GAC members on their experience with urgent requests and the GAC confirmed its intention to provide such information and acknowledged ongoing work to gather scenarios and use cases of urgent requests and related experience with contracted parties. Moving to the third point on privacy proxy, governments within the GAC are concerned that there is still no policy applicable to domain registrations subject to privacy proxy services which in effect create a double shield of privacy. The GAC requested that at least the registration data record clearly indicates whether the data is protected by a privacy proxy service or not. Particularly that per a study by Interisle Consulting Group, the use of privacy proxy protection has increased over time from 20.1% in 2013 to about 29.2% in 2020. In addition, lessons learned from the COVID experience indicated that 65% of a sample of domains reported to the FBI had registrant data obfuscated by privacy proxy services. And again, during a recent exchange between the GAC and the board, it was acknowledged that the use of privacy proxy services is increasing and it was suggested that meaningful access to registration data would mean integrating the privacy proxy providers into the system similar to how the registrars are integrated. Finally, on accuracy, in GAC principles on GTLD, who is services issued in March, 2007, the government stressed the importance of accuracy of who is data and who is services and that the who is services must comply with applicable national laws. Also, ICANN by-laws recognize that ICANN shall work with supporting organizations and advisory committees to improve accuracy and access to GTLD registration data, as well as consider, of course, safeguards for protecting such data. In addition, dedicated ICANN review teams have considered levels of accuracy of registration data, where the first team found that only 23% of who is records were fully accurate. And the second assessed that data accuracy rate is still high, as high as 30 to 40%. In response to that, ICANN had put in place who is accuracy reporting system, which has stopped publishing reports because it relied on collecting publicly available data. And in 2021, an accuracy scoping team was formed. However, its work has been suspended given data protection concerns around whether ICANN has legitimate purpose that is proportionate. And the work is currently pending outcome of engagement with the European data protection authorities, as well as negotiations between ICANN and contracted parties. So to conclude, the GAC is currently examining opportunities for advancing work on accuracy of registration data. And ICANN is preparing a comprehensive assessment, I understand, of what activities it may undertake to study accuracy obligations in light of applicable data protection laws and its contractual authority to collect such data. I leave it at this, apologies if I exceeded the 10 minutes and I’ll turn it back to you, David, please.

David Huberman:
Oh, thank you so much, Manal. That was extremely helpful to understand the four key public policy concerns that the GAC has today in this area. There’s a lot to discuss here and there’s a lot of people, I think, who would like to have their voices here, but bear with us for just a few more minutes, please, because we wanna finish setting the table here so we can have a good interaction and a good discussion. I’d like to turn now online to our friend, Geoff Houston. Jeff, are you with us?

Geoff Huston:
Yes, I am, thank you.

David Huberman:
Great, thanks, Jeff. It’s good to hear your voice. He probably doesn’t need an introduction, but just in case, Jeff Houston is the chief scientist of APNIC, the Regional Internet Registry in the Asia Pacific region. Jeff embodies the concept of a thought leader. Jeff is someone who understands the internet and how it’s engineered better than almost anyone in the world. And so to help us understand some of the technical considerations in DNS privacy and putting the discussion that Becky and all have laid out for us in a technical context, Jeff, can you take us through some of this, please?

Geoff Huston:
Yes, thank you. And I must admit, I have to say first off that my background is technical, not policy-based. And so when you say DNS privacy to me, I don’t immediately swing over to the registration issue. I don’t. The massive use of privacy proxies, the corporatization of large amounts of the internet, to my mind, that looks like a minor issue. The burning conflagration, the elephant in this room is actually somewhere else. If I can see your DNS queries in real time for any value of you, you have no secrets because everything you do, everything, even the ads you get delivered to your screen, starts with a call to the DNS. And so if I know what questions you’re asking, when you’re asking it, then you have no secrets from me. And you go, well, okay, that’s pretty bad. But unfortunately, it gets worse because the applications that use the DNS that run on your computer or on your phone are not just naive. They are almost criminally negligent in terms of their naivety because they believe the first answer they get. Not the answer. Any answer that is first that reflects the information in the query, that’s the truth. So if I can see your queries and I can jump in first with the wrong answer, you’re mine, I own you. And you can’t even see that it happened because although the queries and answers are in the clear, the DNS innards are incredibly opaque. No one knows where the answer came from. It just comes. It’s like magic. It’s lightning fast, but you can’t check it. You just believe it. Now, you might say, well, so what? But the issue is that this property of the DNS has been used and abused by many over the years. It’s no surprise that the internet’s capitalism is basically based around surveillance and advertising. The more knowledge that advertisers have about me as a user, then the more valuable the ads that can be sort of splashed in front of my eyeballs, the more money the advertiser makes, the more money everyone else makes about me. So knowing about me is critical. And seeing my DNS is a sure path to actually obtain that phenomenal knowledge. Now, it’s not just advertisers, it’s not just commercialism, it’s public entities. Various public bodies have been caught with their fingers in the DNS till looking hard. Malware, all kinds of criminal activity have also focused around the incredible naivety of the DNS. From the technical perspective, the reality came to the answer that enough was enough. It was time to actually arm the DNS with some level of protection against casual eavesdropping and intervention. And there have been three areas in the last five years that have been radical steps forward in making the DNS more private. And they’re quite effective. The first is stopping the DNS being gratuitously noisy. When I want to resolve a.very.long.name that may.have.bad components, then literally everyone gets to see that’s the name I’m trying to resolve. From the root service to the top levels to the second domain and so on. I’m telling the world of my interest in that particular domain. I shouldn’t be doing that. And as it turned out, there was a protocol error way back from 1983. It seemed like a good thing to do at the time. It’s been a disaster. And so we’re doing now a practice called query name minimization. And little by little, we’re clearing up that particularly important leak. But that’s not the heart of what we’ve changed. You may have noticed recently that almost every web page is now HTTPS, not just HTTP. And if you’re using a number of popular browsers, if you go to something that doesn’t have that magic S, that doesn’t use a secure and authenticated channel, the browser goes, hang on a second, this doesn’t look very good. Are you really, really sure? And more recently, some browsers are going, I’m not going there. It’s not protected. I’m not gonna help you in being silly here. It’s not gonna happen. We’re doing the same in the DNS. And little by little, we’re taking this open, very insecure protocol and transforming it. with the same technology we’ve used to protect the web. We’re using PLS is the name of the protocol, but we’re putting the DNS transactions behind a wall of incredibly good encryption. It’s no longer possible to casually eavesdrop. And we’re going even one step further, because if you think about it, a web page, a DNS query, they all look the same. So why don’t we just put the DNS into HTTP? Why don’t we put it into this new protocol called QUIC and wrap the whole thing up with some pretty heavy, heavy duty encryption? So now there is no possibility to be a casual network eavesdropper. But now we’re thinking about more than this, because the real problem is that I, Jeff, am making the query. Do I really have to? Because in the HTTP world, to make the web even faster, there’s a technology called server push. It says, I know you’re going to go to that web page. I really do. And even if you don’t, bandwidth’s available. Here’s some answers. Here’s some objects in advance. So if you touch, you know, if you click, bingo, instant answer. We can do the same for the DNS. We can pre-provision answers. Here’s the results of your search page. And by the way, all those URLs, here’s the DNS. I never make a query. I don’t get caught asking. It’s not me anymore. I’ve gone dark. So with a little help from DNS security, DNSSEC, and chain validation, we’re within a hair’s breadth of actually taking the user out of the picture and making the entire DNS go dark. Now, that means there’s only a few places that know you. And one of them is what we call an open DNS resolver. Normally, your ISP knows that. But there are a few folk like Google and Cloudflare who are very big in this game as well. And it might be very good to have privacy. But if you’re sharing all your secrets with Google, is that really private? Or is that really the veneer of privacy without the substance? And so we’re now working on even better forms of security and privacy, where who I am and what I’m asking for is split apart. And no one knows the two in conjunction. Apple are playing with this with their Apple private relay systems. We’re now no single party knows what the user is asking for. Nobody. That information is only available to the end user. The interior of the system knows nothing. So over the last few years, we’ve seen astonishing leaps in making the DNS more private to stop this kind of tampering and observation of the DNS. It’s not quite the end of the story, because we’re now starting to use the DNS for things other than simply resolving names. We’re using it for content steering. It’s the new routing protocol. When you actually go to a web page, the answer that you get will be different to the answer that I get. You’re in Japan. I’m in Australia. The answers necessarily might well be different. So the DNS has this tension of to give you good answers, you need to expose a little bit about who you are and where you are. But if you really want privacy, you don’t want to expose anything. And fighting that tension between an efficient and fast network and a private network is actually where the substance of the DNS privacy debate is today. So to my mind, registration is a small fire down in the corner of the roof. And I appreciate there is a bit of a fire, and it’s a problem. But the raging problem is the fact that the DNS kind of makes the internet an incredibly exposing experience to folks that you’ve never met, never will meet, but know all about you. And that is a deeply discomforting view that I think from a technical perspective, there’s a lot of energy going into trying to fix that. Thank you.

David Huberman:
Thank you so much, Jeff. If I may make an aside real quick, I’ve been listening to Jeff talk to us as a community for about 24 years. And most of the time, Jeff is yelling at us. Jeff is very unhappy with us, because Jeff has seen everything that’s broken, and he’s telling us we need to fix it. And what was really nice about the last 10 minutes is Jeff shared with us some of these astonishing leaps that we’re making, and actually achieving some of the goals and fixing some of the brokenness that we’ve had for 40 years. There’s one more piece to this. Jeff gave us some really good input about some of the technical considerations to improving this on the DNS side. But next to me, we are very honored to have Yuko Yokoyama. And Yuko is going to talk to us about the next steps that ICANN is taking in helping advance this conversation. If you would please put presentation to slides, thank you. Yuko, I would like to first introduce you to everybody. Yuko Yokoyama is ICANN’s Program Director for Strategic Initiatives. And Yuko is currently leading two programs at ICANN, the Data Protection and Privacy Program and the DNS Abuse Program. Yuko is fluent in English and Japanese and currently resides in Los Angeles, California. So, Yuko, please, if you would, take the microphone and talk to us about ICANN’s Registration Data Request Service. Thank you, David. Konnichiwa. Yuko Yokoyama desu.

Yuko Yokoyama:
Kidding. Just kidding. My name is Yuko. I’m from ICANN. Thank you for the introduction. Today, I’m going to talk about the tool that ICANN is making. And this tool is going to make it slightly easier for data requester and data holder in exchanging information for the request for non-public registration data in the GTLD space. That would be me. Okay. So what is a registration data? Maybe you guys don’t need to be lectured about what it is, but simply put, it’s a contact information, identifying information of the domain name holder, such as names, addresses, and phone numbers. This information is used in a variety of reasons, right? It could be a law enforcement trying to do the criminal investigation or the IP lawyers trying to hunt down the IP infringement, or maybe it’s just trying to resolve technical issues related to the network within that domain name. So as Becky and Manal has talked about, these domain name registration data, such as we used to call it, who is information we’re trying to now call it a registration data within ICANN, it used to be public to anybody and everybody who wanted that data. But with GDPR and other emerging privacy laws around the world, it is now largely redacted. And if you want that data, you have to jump through the hoop to get that. So how are those people who have legitimate interest to want that information, getting that information, such as law enforcement or IP lawyers or cybersecurity professionals, how are they getting this information? Not easily. They would have to first figure out who owns that domain name, which registrars managing that domain name, and then they would have to find the contact information of that registrar, call them up, figure out their own process and procedures on how they accept the data requests. It may be web form, it may be email, it may be just a simple phone call, who knows. But they’re trying to have to figure out the individual registrar’s method to try to submit their redacted data information requests. So obviously it is not ideal. So as Manal mentioned, through ICANN’s various multi-stakeholder policy development process, they have come up with this thing called System for Standardized Access and Disclosure. This is shortened as SSAD. There was a policy recommendation, 18 policy recommendations related to SSAD, and this SSAD envisioned to have pretty great features. It connects data holders and requesters in a standardized manner. Requesters’ identity is verified and the system accredits them. And there were service level agreements specified and other processing requirements. And lastly, it envisioned a paper use model, so requesters who want the data through this SSAD needed to pay for the usage of the system. That said, when ICANN conducted an analysis of these policy recommendations, it turned out to be very complex and possibly very much cost prohibitive. So we needed to figure out first what the demand is out there in terms of such a centralized system, and if there were enough user pools to sustain this system and the paper use model. So here comes the RDRS, the registration data, yes, registration data request service. So RDRS is a proof of concept service that will be operated for a period of up to two years. It is much simpler than SSAD, where there’s no identity verification or accreditation. It is also free. It can be used by anybody in the world who wants to use the service. They can simply sign up and submit their data requests. As the RDRS is not a result of the consensus policy through ICANN’s process, this is currently a voluntary system, meaning that not every data holder, meaning in this case registrars, are needing to participate. So ICANN accredited registrars can choose to participate to receive the request through the other side of the RDRS, or they can choose not to participate. Another thing to note is that there is no service level agreement. Again, this is because it’s a voluntary system. So why are we building this? As mentioned, we first need to figure out if there’s really a demand for such a centralized system, and if so, what kind of volume and what kind of demand, what kind of user pools there may be. And the data that we can collect through this proof of concept two-year operation, this will inform the future of what we can do about this non-public registration data. If there’s demand, great. If not, also, that’s good to know. And you know, through this exercise, we can potentially get some idea in terms of what kind of tools would really be beneficial to the world. Part of this, as you may all know, ICANN is very much about transparency. So once this service launches, we will be publishing the monthly metric report so that you can all sort of figure out what we’re seeing in terms of usage. So how does RDRS work? So it is a centralized platform, just like SDAD, and it allows submission and receiving of non-public GTLD registration data requests. There’s a standardized form and the ability to upload any sort of attachment to make your case as a requester. This means that you don’t have to make a phone call or you don’t have to figure out who owns this, who manages this domain name. You don’t have to cater to individual registrars’ process. Sounds pretty good. But one thing to know is that registrars will be the one who’s making the determination of whether the requester has legitimate interest and decide whether to disclose the data or not. So let’s talk about data disclosure. It is a heavy microphone. So it is a simplified system. Therefore, all communications between the requester and registrars will be taking place outside of the system, including the data disclosure. Disclosure methods will be based on registrars’ choosing, and system does not and cannot guarantee the disclosure of the data. A disclosure decision is solely lying within the data holder. In this case, it is the registrars. I want to stress this point that ICANN cannot, through contract or any sort of policy, obligate registrars to disclose data in any particular case because the law requires registrars the data holder to do the balancing test, as Becky mentioned earlier. So who can use this service? So obviously, data holder side, this would be the ICANN-accredited registrar who choose to participate. And from the requester side, anybody who wants a non-public GTLD registration data. So it could be, as mentioned, law enforcement agencies, IP attorneys, government agencies, cybersecurity professionals, anybody who may hold a legitimate interest. So it could be beyond those people that I’ve mentioned. Since this is a proof-of-concept service, there are some limitations or restrictions. For example, the data holder side, we are not considering registry operators to be part of this. We’re also not envisioning to utilize this system for CCTLD-related registration data. So that’s something important to note. This is only for GTLD domain names. So I’m not going to go through all these next two slides. It talks about benefits for registrars and requesters. As mentioned, on both sides, it will be a streamlined process, you know, standardized form and centralized platform. And you don’t have to figure out who manages the domain. The system will automatically do that for you. And from the requester side, again the same thing, and there’s also a template feature so that you don’t have to fill out the same form over and over if you are submitting more requests than just once. So both registrars and requesters benefit from this standardized form. It’s easier, less pain, I would say, and it acts like a ticketing system. So you can review your past request, your pending request, and what you may about to submit. So when is this exciting tool becoming available? So the system created with Privacy by Design, it’s nearly completed in terms of development work. The launch date is expected to be later this year, probably late November. In fact, we have already opened up the service to ICANN accredited registrars for their early onboarding so that they can become familiarized with this new tool. And then when the service launches to the general public, the requester pool, they will be ready to receive requests already. So I want to conclude this presentation with this. As you all know, the landscape of the internet privacy has been quickly changing, and it will obviously continue to evolve. And balancing the rights of the data subjects and timely access to domain name registration information is crucial more than ever. ICANN is striving to seek ways to evolve with the ever-changing environment and landscape through our multi-stakeholder bottom-up consensus building model. I’d like to encourage all of you, if you are part of the requester pool. If you ever need registration data within the GTLD domain name ecosystem, then this system is for you. As mentioned, this is a proof of concept service. Therefore, more people utilizing it, more accurate and useful data we can produce, which would lead to a better tool in the future. So please spread the word and be ready to use this system in November. Thank you so much for your time.

David Huberman:
Okay, thank you very much, Yuko. That was a very clear and very succinct explanation of this new tool available to everybody. Okay, you’ve all been very patient. Thank you. While we’ve set the table here, it is now time for questions and answers. We have quite a lot of people online who are watching. And online, if you have questions, you may raise your hand. You may type questions in the chat and our online moderator, Patrick Jones, will read them to us. I am going to start in the room. There are microphones on either end. There are microphones at the table. The first person who has gotten my attention is Farzaneh. So please go ahead and take a microphone and talk to us.

Audience:
Thank you, David. My name is Farzaneh Badi. For 20 years, we published domain name registrants’ personal sensitive data, their mailing address, their phone number, their email for everyone on the internet to have access to. This could lead to doxxing in dictatorships and where LGBTQI is illegal. Could actually lead to persecution. And website owners most of the time don’t know that their private sensitive information was being published. Then the privacy proxy came along and there was some kind of improvement. But I just want to give people, I see a few people who have not been involved with ICANN. But I just want to show the gravity of the issue. But I want to also congratulate ICANN. This workshop is one of the first workshops that ICANN actually titles it as DNS privacy and focuses on privacy and not just access. So this is a major improvement and I’m very thankful for that. But then when we don’t have, then we are now talking again about the issue of access and I have several questions. One is that when we talk about the metrics for the requesters, what sort of metrics are we talking about? Are we going to say how many from law enforcement requested access? And if this is globally accessible to everyone, since it’s free, is it also accessible to law enforcement in some authoritarian country? How can you actually verify that? The other thing that I have seen that the government ask for is the request to have confidential logs. And this is very dangerous. We need the governments and law enforcement to be transparent and I know how ICANN has responded to that request. But we need transparency in law enforcement’s request for access to people’s data, personal, private data and they are people. I don’t know where Manal got the stats about that, like the major part of the registrants or organizations and they are legal, but also there are personal information in legal, in even when they are legal entities, there might be their personal information. It might be their name and family name and that’s a personal information. Anyway, I’m not gonna give you more speech, but I think that there are many aspects that we need to think about and this session has been very, very, and thank you, Becky, for that fabulous presentation. It was very, very inclusive of all the aspects of privacy. Thanks.

David Huberman:
Thank you. Please, sir, gentlemen.

Audience:
Good afternoon. My name is John Sihar Simanjuntak from the ID Registry Indonesia. So my question is, so that the only accredited register that can access the data, it’s just my clarification, I think, actually, and then the question is actually how you can grant it to other requests? Maybe, I mean, this is something that we have to define really carefully because the really big question is, do really I can understand how can be granted the request? I mean, because in each country, maybe such different entity can be different situation. That’s my question, actually. I think the first one is about the clarification about who can access that and how you can grant it. I think you have to definitely define exactly what the meaning to be granted of the access. Thank you.

Yuko Yokoyama:
Thank you for your question. So I’m going to answer your first question, which was the ICANN accredited registrars being the only one who’s using the system. So they’re the ones who hold the data. They’re not the requester side. So requesters come to the system and request the registration data for certain domain names. And if that domain name is managed by the ICANN accredited registrars who use and who participate in this RDRS, then the request gets routed to that registrar and that registrar will conduct the balancing test and determine whether to disclose the requested data or not based on the local laws and other applicable laws. I don’t know, Becky, if you want to add something else.

Becky Burr:
No, I think the second part of your question was sort of what are the circumstances under which somebody would have access to the data. And as I said, first of all, the registrar who has the data and has to make the decision to give it out is going to apply the law that they’re subject to, the law, the regulations and the policies from their company that reflect the law and the policy that are relevant to them. Depending on a huge variety of circumstances that are relevant, they’re going to decide what kind of information they need to make a determination about whether they think the person who is requesting the information has a legitimate interest in that information and whether that legitimate interest is overridden by the fundamental privacy rights of the individual. So they’ll conduct a balancing test. They’ll decide if they have the information they need to make that determination and that decision will be based on and informed by the law that they’re subject to.

David Huberman:
Patrick.

Audience:
Thanks, David. It’s Patrick Jones from ICANN. Wanted to also mention, since we don’t have any remote questions yet in the chat, that one of the other elements that’s changing is that we’re moving to a new, more secure, more standardized protocol called the Registration Data Access Protocol. So in the past and historically, all of this registration data has been delivered through a protocol called WHOIS. And many of the registries are already delivering this registration data through the RDAP protocol. My point to Jeff to see if you might be able to touch on this a bit more as well. I believe all GTLB registries are already doing this. We’ve been going through a contract change process with the generic top-level domain registries to enable them to use the RDAP protocol and many country code registry operators are also using it. So with that, I’ll turn it back to the panel to note that RDAP is something new and we’ll be using this with the system. Okay, did we want to have a quick follow-up here, sir? Please. Yeah, following the explanation actually. So Indonesia, we have already the same similar GDPR actually last year, since last year. And the question is since you have the database can access through the accredited registrar, meanwhile the registrar maybe it’s not the native data. I mean, not the owner of the data. How can the registrar can provide the data while the registrar is not accountable to that data because it’s data maybe crossed globally from the ICANN database. So I mean, in our law, of course, this is not allowed to give data. Even the legitimate interest is the police, let’s say, but since the registrar is not the owner of the data, I think it’s still not allowed. Thank you.

Becky Burr:
So the question of data ownership is so fundamental to the discussion of data protection and privacy that it would take us the rest of our natural lives to discuss it here. So I think we’ll just skip that part of it. If your information is with a registrar in Malaysia, that registrar is certainly subject to Malaysian law. It’s possible that a registrar in another country also has obligations under Malaysian law with respect to your data as well because the way modern data protection laws work is it tends to apply to processing of information about a resident of the country. And it doesn’t only apply to processing within the country. So even if a registrar is outside of the country, they may have obligations under Malaysian law. But, and I am not an expert on the Malaysian data protection law, I can tell you that there are circumstances, the balancing test that we talked about where it would be appropriate or it would be okay for a registrar to disclose that data. And then there are gonna be circumstances where it wouldn’t be okay for a registrar to disclose that data. Just in terms of the issue, when you sign up for, when you register a domain name with a registrar, you will be asked to agree to their privacy policy. Their ICANN contract requires them to make certain disclosures as well. And so there are some contractual provisions that flow from you to the registrar. But the point is, the bottom line is that the registrar has to comply with the law that applies to the processing of that data. And if it’s Malaysian law that governs that processing, then they have to comply with Malaysian law. And if it’s Irish law, that or European Union regulation that applies, they will apply that law to it.

Audience:
Steve. Thanks, Steve Del Bianco. I work with NetChoice, a trade association in Washington, but also very active at ICANN for the last 20 years and in the business constituency. And I am most eager to hear more about Jeff Houston’s elephant. Namely, what would ICANN, IETF and IANA have to do? How would you be involved? I can’t be involved in that element of DNS privacy on the development dissemination of those protocols and standards and what policies would be developed to address it. And then allowing the community to suggest costs and benefits of some of the tools that Jeff has talked about. But I think that elephant is not going anywhere. He’ll wait a little bit. What’s more immediate is within a few weeks, we’ll start the RDRS, turn it on and offer it. And I was part of the group that did the EPDP as well as the small team in RDS and RDRS. And I’d always believed that it would be a false promise to think that a system like that would be an adequate measure of demand. Because the demand that we’re talking about is the demand to solve a problem, a requester like a commercial organization is trying to stop fraud that’s harassing their own consumers and undermining their businesses reputation across a wide variety of an audience. There might be IP attorneys looking to protect their IP, but it’s usually to protect consumers that are getting defrauded. That’s the character of that. We also have security researchers as well as security professionals trying to stop a current attack that’s going on. And then you have law enforcement, which Manal talked a little bit about. Now, historically, WHOIS helped to decrease the time it took and decrease the cost it took to start that investigation of solving the problem. It was only part of, a small part of solving the problem. You’re probably well aware that even before GDPR, we had an increasing proportion of registrations that would go privacy proxy. That was of concern, because it meant that ICANN needs to embrace the privacy proxy providers to accredit and hold them accountable to the standards of performance. And that got interrupted, of course, by the effective date of GDPR fines in 2018. And ICANN’s reaction to that led to a dramatic reduction in the value of using WHOIS to begin to solve problems that often are maybe not rise to the level of urgency that Manal gave us, imminent threat to life, right? Or critical infrastructure. But it’s quite urgent if your customers are being defrauded at the rate of thousands a day, because they’re being directed to another website or a fraudulent Red Cross donation site to take advantage of a new natural crisis. So the demand for RDRS may not be indicative of the demand for an SSAD that achieved some of the benefits. I mean, it’s not a replication of what value SSAD would provide. So there shouldn’t be an assumption that the demand value will transfer. Let me explain why. That the value of a requester submitting a request won’t be sufficient with RDRS to motivate a lot of use. So we’re having to find other ways to motivate the use of RDRS. And that’s a real challenge, because the promised value of RDS, as well as experience value is low. So I’ll reiterate something that I’d like you to consider. There’s still time. There’s still time. before you deploy, to do a few things that will increase the likelihood of value and increase the use. Not only the use in terms of the monthly metrics, but increase the use in a way that gives us the data we need to determine whether asset is worth doing, or determine whether new policies are necessary. Yuko, you’re well familiar with this, but one is to allow a request or to load a bulk, but a batch of requests for multiple domains that might be in use right now with a threat to my customers. And ICANN said no, didn’t want to do that, thought it would delay the release date. And I’m a programmer, so if that’s true and it’s a problem, why not still work on it? Put it on the queue for something that could be announced within a few weeks or months of the release. Make it the second thing that comes out, so it doesn’t jeopardize the release date. And then finally, I would say that retain in detail all of the data that a requester submits even if it turns out that the registrar is not participating. That data is essential to do analysis at any point before we conclude whether there’s demand. Because that analysis would show the quality and the quantity of who’s requesting, why they’re requesting, what evidence they presented, what reason or legitimate reason they’ve offered. And then on the other flip side, were the registrars who did participate, how fast did they respond and how well did they apply the balancing test? Because you can look at the evidence if you retain it. Now, if there’s a concern that you don’t want me to see all that data, that’s fine. Let’s hire a privacy lawyer that ICANN lets look at the data and comes back with a more qualitative analysis as opposed to just a handful of metrics. So there are things that ICANN can do in the four weeks remaining. Start to work on bulk uploads, right? And don’t throw the data on the floor. Don’t throw the data away. If it turns out that I put all the work into formulating requests and I provide all this evidence and screenshots, oh, but the registrar isn’t participating. So we lose the ability to understand what the demand was because that is the measure of demand. The measure of demand is the requests I put in, and if you throw most of them on the floor, you’re not going to get a good answer. Thank you.

Becky Burr:
So as we discussed, I do think there is some value in doing analytics on the data. But I’m a little confused about one thing that you just said, which is I think you’re saying that people are only going to use it if they, they’re not going to, in other words, they’re not going to use it to make requests. They’re going to use it to create the data. I mean, okay, so then why, well, then I’m confused because I guess I don’t understand what you’re saying, what the value, the value that you’re seeing, that you’re talking about is not the return of the data or the ease of the submission. It’s the creation of information about requests.

Audience:
If I could just clarify, Becky. We can make promises about what RDRS will do when it turns on, but none of us know how to effectually deliver on the promises because we have no idea how many registrars, registrars whose domain names are the subject of requests, will be participating on day one. We’re not going to be very clear about who the nature of that is, nor can we make promises about the fairness that’s done in the evaluation of the balancing test. So you make promises, but ultimately it’s the first experience, the first taste the requesters get when they submit their first several requests. And if it turns out that over half of the first batch of requests I put in were for non-participating registrars, you’ve just really diminished the interest of that requester to bother to do any more. So once they get a bad taste, or they get participating registrars that take four days to come back and say, nope, you don’t pass the balancing test, if the first taste is bad, then that requester community, I would like them to stay engaged, Becky. I want them to stay there to provide evidence of demand, but they have to have some assurance that having been disappointed at the actual experience and taste, is there a reason to do another set of requests? You know, I put 10 in, five were non-participating registrars, and the rest took three days to get back, and the balancing test said take a hike. I’m not going to come back to you unless there’s some other reason. So think of it as a two-step reason. First is, maybe I’ll get a good taste back, maybe it’ll actually help me stop an existential attack. But if it doesn’t, for reasons that are outside of your control, if it doesn’t, retain the data necessary to analyze the nature of demand that was there, and that will provide an additional incentive for people to continue to try the requests.

Becky Burr:
Okay, so I mean, I just want to confirm that what you’re saying is that the data, the analysis that, the retention of data for downstream analytics is an incentive for participation. Okay, I get that. I think we have to acknowledge here that the analysis that we just talked about in terms of is there a less intrusive way to accomplish your goal, stopping crime, protecting intellectual property, whatever it is, the only source of information that we have at this point is who’s going to submit requests, who submits requests. What we hear anecdotally is that registrars are not getting very many requests. And the consequence, which I think is a logical conclusion from that, is that requesters are getting, they are attacking the problem in a different way that doesn’t use the data. And that is the fundamental essence of the balancing test. So all I’m saying, and by the way, ESSAD isn’t going to change that outcome at all. So we are very much encouraging all registrars to participate. As you know, the board did suggest that the JNSO Council consider policy development to make participation mandatory, because we know just how important that is. But I do think for this collective data gathering, we have to ask requesters to make their needs known through the system.

Audience:
Edwin, did you want to add to that, please? Edwin Chung here. I just wanted to respond to Steve’s other question. So just a note, Edwin from DotAsia, also serving on the ICANN board, but I’m not speaking in my capacity of board, but as a general participant. But on the first item that you mentioned, the elephant or the burning question that Jeff has, ICANN is, and both the ICANN community and ICANN itself is actively participating in discussing those issues. And I think Jeff can add to that. At the ICANN meetings, actually those were discussed a few years back. You might remember DOA, DOH discussions. That’s part of what I think Jeff mentioned, and Jeff, please add to it. And follow from that, what is called OARC. I don’t know whether you know OARC, but I don’t really know the long version. It’s Operational Research Analysis Coalition, something like that. But that, I think, is also part of the multi-stakeholder model in the work, because I want to emphasize one thing, and the reason why this session is here at the IGF and not at ICANN is because of that broader sense. And this is an issue that is not just an ICANN issue, but an issue that we need to involve other stakeholders for which ICANN is one of them. The Who Is Matter has a slightly smaller element to that, but the other part has a bigger element to that, and it’s closer to things that we talk about, such as DNS abuse. ICANN can do one part of it, but there is a much larger DNS community that needs to do further work on those type of things. So I don’t know whether Jeff wants to add to it, but just a quick answer is that ICANN and the ICANN community are working on that other elephant as well. And hopefully, we’ll shrink or start walking away, but Jeff, you definitely have much more

Geoff Huston:
to say. I have some bad news. I have some very bad news for you. The issue is that once abused, there’s no coming back. And the response from the technical community is heading down a path that, quite frankly, touches upon an addiction in this industry. We are addicted to open DNS data. What if the DNS and its use generated no data whatsoever, nothing, could not see it, nobody, no matter how good their tool? What then? What names am I going to look up for registration if I don’t know what names are being used in the first place? Once you get a totally opaque system, then this entire conversation heads down an entirely different path that very few people are actually prepared to think about right now. But the response from the technical community is to create precisely that picture, that there is no attribution in the use of names whatsoever if we head down this area of abuse, obfuscation and encryption. This is nothing left. So everything that we’ve been thinking about, yes, we know how the DNS works. We understand what DNS abuse is and so on. Once you go down this privacy path to its destination, all the lights go out. It’s dark. And at that point, it’s an entirely different world. And exactly how we’re going to respond as law enforcement, as engineers, as network operators, when the network has all the lights turned off, is a question I don’t think anyone’s actually able to answer. Now, what’s ICANN’s role in all this? I’m not sure ICANN has a role other than be another onlooker into what is either going to be a phenomenal success for privacy or a phenomenal tragedy. I don’t know which either at this point. But I do know the path is inexorable and the answers are certain. We are going to turn the lights off. And that’s just one of those things that’s going to happen. It’s an interesting area to contemplate.

David Huberman:
Thank you, Geoff. We have three minutes left. Andrew?

Audience:
Fantastic. You may as well stay there, Jeff. Because I thought he was looking bored. So I wanted to bring him in. Anyway, two things. The picture you painted, Jeff, before when you touched on encrypted DNS and DNSSEC and so on was all very positive. You didn’t mention, unusually for you, the lack of take-up of many of those. So designing the protocol is interesting, but that’s the start of the solution. It’s a long way short of the end. And I would suggest part of the problem for the lack of take-up is the – so ITF, when it develops standards, there’s little, in reality, no involvement of the end-user community. So maybe we will get better take-up of standards if we bother to involve the end-users in the design process. Because we’re designing things that are either too hard to implement or they’re not interested in implementing. So ticking the box because we’ve got the protocol is not really that interesting in solving this. And then just briefly and finally, to your point on when we get to that destination or everything going dark, if we have a diverse standards community, which includes CISOs, I can tell you from personal experience of talking to them, they will be horrified at that because we kill enterprise cybersecurity when we go dark. And then if we think we’ve got privacy, we’re deluded because at that point, we’ve got no privacy at all because we’ve got no security. So I think the privacy purists, I think, are kidding themselves if they think they get privacy by removing all the data. That’s when they really have a problem. They’d be far worse than it ever was before.

Geoff Huston:
Andrew, just think for a second, and I’ll respond very, very quickly. The biggest tension in the internet is between applications and infrastructure, and the applications have won the game hands down. QUIC is not a transport protocol. QUIC is an attribute of the application. The application designers, and particularly the browser engines, have lost patience with the rest of us. Privacy in the DNS is not a DNS infrastructure problem anymore. It’s what browsers are doing and where applications are heading. They have the money, the agility, the update factory and infrastructure. And so the DNS is being taken away from traditional DNS operators because they’re basically too slow and the job they’re doing is not good enough from the perspective of the application. And that battle for control, round one happened in Firefox, round two will happen in Chrome and Safari. In fact, Apple is probably there already with its own relay, Apple privacy relays. So Andrew, the fight is happening further up the protocol stack, and the application folk who have the money, the agility, and the motive appear to be winning hands down at this particular point in time. The infrastructure folk are being left behind. It’s interesting to think about. Thank you.

David Huberman:
Manal, you have your hand up.

Manal Ismail:
Yes, it’s a more of a general comment and I feel obliged because I’ve been mentioned twice so quickly to agree with Steve on the importance of the data collected during the proof of concept and also to agree with Farzi that there are many aspects to this discussion. And it’s very interesting to see that despite the diverse views, we are all talking from a public interest perspective. So on one side, it’s privacy, on the other side, it’s safety. And I hope we can utilize ICANN’s bottom-up multi-stakeholder model to continue a constructive and inclusive discussion to be able to strike a right balance in that respect. Thank you.

David Huberman:
Well, thank you so much, Manal. That is actually a wonderful way to end it because unfortunately, my friends, we are out of time, even though there are more questions. Thank you very much for coming. I’d like to thank Becky Burr, Yuko Yokoyama, Goeff Houston, Manal Ismail, Patrick. Thank you for getting up so early and being our online moderator. And thank you all for coming. This concludes that session. Thank you. Thank you. Thank you. Thank you.

Audience

Speech speed

168 words per minute

Speech length

3123 words

Speech time

1115 secs

Becky Burr

Speech speed

141 words per minute

Speech length

2559 words

Speech time

1090 secs

David Huberman

Speech speed

173 words per minute

Speech length

1351 words

Speech time

468 secs

Geoff Huston

Speech speed

168 words per minute

Speech length

2135 words

Speech time

761 secs

Manal Ismail

Speech speed

134 words per minute

Speech length

1728 words

Speech time

776 secs

Yuko Yokoyama

Speech speed

147 words per minute

Speech length

1811 words

Speech time

740 secs

Can (generative) AI be compatible with Data Protection? | IGF 2023 #24

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Kamesh Shekar

The analysis examines the importance of principles and regulation in the field of artificial intelligence (AI). It highlights the need for a principle-based framework that operates at the ecosystem level, involving various stakeholders. The proposed framework suggests that responsibilities should be shared among different actors within the AI ecosystem to ensure safer and more responsible utilization of AI technologies. This approach is seen as crucial for fostering trust, transparency, and accountability in the AI domain.

Additionally, the analysis emphasizes the significance of consensus building in regard to AI principles. It argues for achieving clarity on principles that resonate with all stakeholders involved in AI development and deployment. International discussions are seen as a crucial step towards establishing a common understanding and consensus on AI principles, ensuring global alignment in the adoption of ethical and responsible practices.

Furthermore, the analysis explores the role of regulation in the AI landscape. It suggests that regulation should not only focus on compliance but also be market-oriented. The argument is made that enabling the AI market and providing businesses with a value proposition in regulation can support innovation while ensuring ethical and responsible AI practices. This market-based regulation approach is believed to be beneficial for industry growth (aligning with SDG 9: Industry, Innovation, and Infrastructure) and economic development (aligning with SDG 8: Decent Work and Economic Growth).

Overall, the sentiment towards implementing principles and regulation in AI is positive. Although the analysis does not provide specific principles or regulations, it emphasizes the importance of a principle-based framework, consensus building, and market-based regulation. These insights can be valuable for policymakers, industry leaders, and other stakeholders in developing effective and responsible AI governance strategies.

Jonathan Mendoza Iserte

Artificial intelligence (AI) has the potential to drive innovation across sectors, but it also poses challenges in terms of regulation, ethical use, and the need for transparency and accountability. The field of AI is rapidly evolving and has the capacity to transform development models in Latin America. Therefore, effective regulations are necessary to harness its benefits.

Latin American countries like Argentina, Brazil, and Mexico have taken steps towards AI regulation and have emerged as regional leaders in global AI discussions. To further strengthen regulation efforts, it is proposed to establish a dedicated mechanism in the form of a committee of experts in Latin America. This committee would shape policies and frameworks tailored to the region’s unique challenges and opportunities.

The adoption and implementation of AI will have mixed effects on the economy and labor. By 2030, AI is estimated to contribute around $13 trillion to the global economy. However, its impact on specific industries and job markets may vary. While AI can enhance productivity and create opportunities, it may also disrupt certain sectors and lead to job displacement. Policymakers and stakeholders need to consider these implications and implement measures to mitigate negative consequences.

Additionally, it is crucial for AI systems to respect fundamental human rights and avoid biases. A human-centric approach is necessary to ensure the ethical development and deployment of AI technologies. This includes safeguards against discriminatory algorithms and biases that could perpetuate inequalities or violate human rights.

In conclusion, AI presents both opportunities and challenges. Effective regulation is crucial to harness the potential benefits of AI in Latin America while mitigating potential harms. This requires international cooperation and a human-centric approach that prioritizes ethical use and respect for human rights. By navigating these issues carefully, Latin America can drive inclusive and sustainable development.

Moderator – Luca Belli

The analysis delves into various aspects of AI and Data Governance, shedding light on several important points. Firstly, it highlights the significance of comprehending AI sovereignty and its key enablers. AI sovereignty goes beyond authoritarian control or protectionism and involves understanding and regulating technologies. The enablers of AI sovereignty encompass multiple elements, including data, algorithms, computation, connectivity, cybersecurity, electrical power, capacity building, and risk-based AI governance frameworks. Understanding these enablers is crucial for effective AI and Data Governance.

Secondly, the analysis underscores the need to increase representation and consideration of ideas from the Global South in discussions about data governance and AI. The creation of the Data and AI Governance Coalition aims to address issues related to data governance and AI from the perspective of the Global South. It highlights the criticism that discussions often overlook ideas and solutions from this region. To achieve comprehensive and inclusive AI and Data Governance, it is imperative to involve diverse voices and perspectives from around the world.

Moreover, the analysis emphasizes that AI governance should be considered a fundamental right for everyone. It is mentioned in Article 1 of the United Nations Charter and the International Covenants on Political, Civil, Economic, Social, and Cultural Rights. Recognizing AI governance as a fundamental right ensures individuals possess agency and control over their own technological destiny.

Furthermore, the analysis notes that the development of an international regime on AI may take between seven and ten years. This estimate is influenced by the involvement of tech executives who advocate for such an agreement. Due to the complexity of AI and the multitude of considerations involved, reaching international consensus on an AI regime requires ample time for careful deliberation and collaboration.

Lastly, the examination reveals that the process of shaping the UN Convention on Artificial Intelligence could be protracted due to geopolitical conflicts and strategic competition. These external factors introduce additional challenges and intricacies into the negotiating process, potentially prolonging the time required to finalize the convention.

In conclusion, the analysis offers valuable insights into AI and Data Governance. It emphasizes the importance of understanding AI sovereignty and its enablers, advocates for increased representation from the Global South, asserts AI governance as a fundamental right, highlights the time-consuming nature of developing an international regime on AI, and acknowledges the potential delays caused by geopolitical conflicts and strategic competition. These findings contribute to a deeper understanding of the complexities surrounding AI and Data Governance and provide a foundation for informed decision-making in this domain.

Audience

The analysis explores various topics and arguments relating to the intersection of AI and data protection. One concern is whether generative AI is compatible with data protection, as it may pose challenges in safeguarding personal data. There is also an interest in understanding how AI intersects with nationality and statelessness, with potential implications for reducing inequalities and promoting peace and justice. Additionally, there is a desire to know if there are frameworks or successful instances of generative AI working in different regions.

Privacy principles within Gen-AI platforms are seen as crucial, with 17 initial principles identified and plans to test them with 50 use cases. However, the use of AI also raises questions about certain data protection principles, as generative AI systems may lack specified purposes and predominantly work with non-personal data for profiling individuals.

There is a call for a UN Convention on Artificial Intelligence to manage the risks and misuse of AI at an international level. However, the analysis does not provide further details or evidence on the feasibility or implementation of such a convention. Potential geopolitical conflicts and strategic competition between AI powers are also highlighted as potential barriers to developing a UN Convention on Artificial Intelligence.

The “Brussels effect” is mentioned as a factor that may have negative impacts in non-European contexts. Concerns are raised about premature legislation in the field of AI and the need for clear definitions when legislating on AI to ensure comprehensive regulation. The analysis covers a broad range of topics and arguments, though some lack supporting evidence or further exploration. Notable insights include the need for privacy principles in Gen-AI platforms, challenges to data protection principles posed by AI, and the potential hindrances to global cooperation on AI regulation.

In conclusion, the analysis offers valuable insights into the complex relationship between AI and data protection.

Giuseppe Claudio Cicu

Artificial intelligence (AI) is reshaping the corporate governance framework and business processes, revolutionizing society. Its integration is positive, as it enhances strategy setting, decision making, monitoring, and compliance in organisations. However, challenges arise in terms of transparency and accountability. To address this, an ethical approach to AI implementation is proposed, such as the AI by Corporate Design Framework, which blends business process management and the AI lifecycle. This framework incorporates ethical considerations like the human in the loop and human on the loop principles. Furthermore, it is suggested that corporations establish an Ethical Algorithmic Legal Committee to regulate AI applications. This committee would act as a filter between stakeholders and AI outputs, ensuring ethical decision-making. Additionally, there’s a call for legislators to recognise technology as a corporate dimension, as it has implications for accountability, organisation, and administration. By developing appropriate regulations and norms, responsible and ethical use of AI in corporate governance can be ensured. Overall, AI has potential benefits for corporate governance and business processes, but careful consideration of transparency, accountability, and ethics is necessary.

Armando José Manzueta-Peña

The use of generative AI holds great potential for the modernisation of government services and the improvement of citizens’ lives. By automating the migration of legacy software to flexible cloud-based applications, generative AI can supercharge digital modernisation in the government sector. This automation process can greatly streamline and enhance government operations. AI-powered tools can assist with pattern detection in large stores of data, enabling effective analysis and decision-making. The migration of certain technology systems to the cloud, coupled with AI infusion, opens up new possibilities for enhanced use of data in government services.

To successfully implement AI in the public sector, attention must be given to key areas. Firstly, existing public sector workers should receive training to effectively manage AI-related projects. Equipping government employees with the necessary skills and knowledge is essential. Citizen engagement should be prioritised when developing new services and modernising existing ones. Involving citizens in the decision-making process fosters inclusivity and builds trust. Government institutions must be seen as the most trusted entities holding and managing citizens’ data. Strong data protection rules and ethical considerations are crucial. Modernising the frameworks for data protection safeguards sensitive information and maintains public trust.

The quality of AI systems is heavily dependent on the quality of the data they are fed. Accurate data input is necessary to avoid inaccurate profiling of individuals or companies. Effective data management, collection, and validation policies are vital for meaningful outcomes. Strong data protection measures, collection, and validation processes ensure accurate and reliable AI-driven solutions. Developing nations face challenges in quality data collection, but good quality data and administrative registers are necessary to leverage AI effectively.

In conclusion, successful AI implementation in the public sector requires government institutions to familiarise themselves with the advantages of AI and generative AI. Workforce transformation, citizen engagement, and government platform modernisation are crucial areas. Strong data protection rules and ethical considerations are essential. The quality of AI systems relies on the quality of the data they are fed. Proper data management, collection, and validation policies are necessary. Addressing these aspects allows government institutions to harness the full potential of AI, modernise their services, and improve citizens’ lives.

Michael

The analysis examines the issue of harmonised standards in the context of AI and highlights potential shortcomings. It is argued that these standards might fail to consider the specific needs of diverse populations and the local contexts in which AI systems are implemented. This is concerning as it could result in AI systems that do not effectively address the challenges and requirements of different communities.

One of the reasons for this oversight is that the individuals involved in developing these standards primarily come from wealthier parts of the world. As a result, their perspectives may not adequately reflect the experiences and concerns of marginalised communities who are most impacted by AI technologies.

While some proponents argue that harmonised standards can be beneficial and efficient, it is stressed that they should not disregard the individual needs and concerns of diverse populations. Balancing the efficiency and standardisation of AI systems with the consideration of local contexts and marginalised populations’ needs is paramount.

The tension between the value of harmonised AI standards and the disregard for local contexts is noted. It is suggested that the development of these standards may further entrench global inequities by perpetuating existing power imbalances and neglecting the specific challenges faced by different communities.

In conclusion, the analysis cautions against the potential pitfalls of harmonised AI standards that do not take into account diverse populations and local contexts. While harmonisation can be beneficial, it should not be at the expense of addressing the specific needs and concerns of marginalised communities. By striking a balance between efficiency and inclusivity, AI standards can better serve the needs of all communities and avoid perpetuating global inequities.

Kazim Rizvi

In his paper, Kazim Rezvi delved into the important topic of mapping and operationalising trustworthy AI principles in specific sectors, focusing specifically on finance and healthcare. He discussed the need for responsible implementation and ethical direction in the field of AI, highlighting the potential synergies and conflicts that may arise when applying these principles in these sectors. To address this, Rezvi proposed a two-layer approach to AI, dividing it into non-technical and technical aspects.

The non-technical layer examines strategies for responsible implementation and ethical direction. This involves exploring various approaches to ensure that AI technologies are developed and deployed in a manner that upholds ethical standards and benefits society as a whole. Rezvi emphasised the importance of involving multiple stakeholders from industry, civil society, academia, and government in this process. By collaborating and sharing insights, these diverse stakeholders can contribute to the effective implementation of AI principles in their respective domains.

In addition to the non-technical layer, the technical layer focuses on different implementation strategies for AI. This encompasses the technical aspects of AI development, such as algorithms and models. Rezvi emphasised the need for careful consideration and evaluation of these strategies to align them with trustworthy AI principles.

Moreover, Rezvi highlighted the significance of a multi-stakeholder approach for mapping and operationalising AI principles. By involving various stakeholders, including those from industry, civil society, academia, and government, a more comprehensive understanding of the challenges and opportunities associated with AI can be gained. This approach fosters partnerships and collaborations that can lead to effective implementation of AI principles in relevant domains.

Rezvi also discussed the need for coordination of domestic laws and international regulations for AI. He pointed out that currently there is no specific legal framework governing AI in India, which underscores the importance of harmonising laws in the context of AI. This coordination should take into account existing internet laws and any upcoming legislation to ensure a comprehensive and effective regulatory framework for AI.

Furthermore, Rezvi explored alternative regulatory approaches for AI, such as market mechanisms, public-private partnerships, and consumer protection for developers. While not providing specific supporting facts for these approaches, Rezvi acknowledged their potential in enhancing the regulation of AI and ensuring ethical practices and responsible innovation.

In conclusion, Kazim Rezvi’s paper presented an in-depth analysis of the mapping and operationalisation of trustworthy AI principles in the finance and healthcare sectors. He highlighted the need for a multi-stakeholder approach, coordination of domestic laws and international regulations, as well as alternative regulatory approaches for AI. By addressing these issues, Rezvi argued for the responsible and ethical implementation of AI, ultimately promoting the well-being of society and the achievement of sustainable development goals.

Wei Wang

The discussion centres around the regulation of Artificial Intelligence (AI) across different jurisdictions, with a particular focus on Asia, the US, and China. Overall, there is a cautious approach to regulating AI, with an emphasis on implementing ethical frameworks and taking small, precise regulatory steps. Singapore, for instance, recognises the importance of adopting existing global frameworks to guide their AI regulation efforts.

In terms of specific regulatory models, there is an evolution happening, with a greater emphasis on legal accountability, consumer protection, and the principle of accountability. The US has proposed a bipartisan framework for AI regulation, while China has introduced a model law that includes the principle of accountability. Both of these frameworks aim to ensure that AI systems and their designers are responsible and held accountable for any negative consequences that may arise.

However, one lingering challenge in AI regulation is finding the right balance between adaptability and regulatory predictability. It is vital to strike a balance that allows for innovation and growth while still providing effective governance and oversight. Achieving this equilibrium is essential to ensure that AI technologies and applications are developed and used responsibly.

The need for effective governance and regulation of AI is further emphasized by the requirement for a long-standing balance. AI is a rapidly evolving field, and regulations must be flexible enough to keep up with advancements and emerging challenges. At the same time, there is a need for regulatory predictability to provide stability and ensure that ethical and responsible AI practices are followed consistently.

In conclusion, the conversation highlights the cautious yet evolving approach to AI regulation in various jurisdictions. The focus is on implementing ethical frameworks, legal accountability, and consumer protection. Striking a balance between adaptability and regulatory predictability is essential for effective governance of AI. Ongoing efforts are required to develop robust and flexible regulatory frameworks that can keep pace with the rapid advancements in AI technology and applications.

Smriti Parsheera

Transparency in AI is essential, and it should apply throughout the entire life cycle of a project. This includes policy transparency, which involves making the rules and guidelines governing AI systems clear and accessible. Technical transparency ensures that the inner workings of AI algorithms and models are transparent, enabling better understanding and scrutiny. Operational and organizational transparency ensures that the processes and decisions made during the project are open to scrutiny and accountability. These three layers of transparency work together to promote trust and accountability in AI systems.

Another crucial aspect where transparency is needed is in publicly facing facial recognition systems. These systems, particularly those used in locations such as airports, demand even greater transparency. This goes beyond simply providing information and requires a more deliberate approach to transparency. A case study of a facial recognition system for airport entry highlights the importance of transparency in establishing public trust and understanding of the technology.

Transparency is not limited to the private sector. Entities outside of the private sector, such as philanthropies, think tanks, and consultants, also need to uphold transparency. It is crucial for these organizations to be transparent about their operations, relationships with the government, and the influence they wield. Applying the right to information laws to these entities ensures that transparency is maintained and that they are held accountable for their actions.

In conclusion, transparency is a key factor in various aspects of AI and the organizations involved in its development and implementation. It encompasses policy, technical, and operational transparency, which ensure a clear understanding of AI systems. Publicly facing facial recognition systems require even higher levels of transparency to earn public trust. Additionally, entities outside of the private sector need to be transparent and subject to right to information laws to maintain accountability. By promoting transparency, we can foster trust, accountability, and responsible development of AI systems.

Gbenga Sesan

The analysis highlights the necessity of reviewing data protection policies to adequately address the extensive data collection activities of AI. It points out that although data protection regimes exist in many countries, they may not have considered the scope of AI’s data needs. The delayed ratification of the Malabo Convention further underscores the urgency to review these policies.

Another key argument presented in the analysis is the centrality of people in AI discourse and practice. It asserts that people, as data owners, are fundamental to the functioning of AI. AI systems should be modelled to encompass diversity, not just for tokenism, but to ensure a comprehensive understanding of context and to prevent harm. By doing so, we can work towards achieving reduced inequalities and gender equality.

The analysis also underscores the need for practical support for individuals when AI makes mistakes or causes problems. It raises pertinent questions about the necessary steps to be taken and the appropriate entities to engage with in order to address such issues. It suggests that independent Data Protection Commissions could provide the requisite support to individuals affected by AI-related concerns.

Additionally, the analysis voices criticism regarding AI’s opacity and the challenges faced in obtaining redress when errors occur. The negative sentiment is supported by a personal experience where an AI system wrongly attributed information about the speaker’s academic achievements and professional appointments. This highlights the imperative of transparency and accountability in AI systems.

Overall, the analysis emphasises the need to review data protection policies, foreground people in AI discourse, provide practical support, and address concerns regarding AI’s opacity. It underscores the significance of transparency and accountability in ensuring responsible development and deployment of AI technologies. These insights align with the goals of advancing industry, innovation, and infrastructure, as well as promoting peace, justice, and strong institutions.

Melody Musoni

The analysis explores the development of AI in South Africa as a means to address African problems. It emphasizes the significance of policy frameworks and computing infrastructures at the African Union level, which emphasise the message that AI can be used to tackle specific challenges that are unique to Africa. The availability of reliable computing infrastructures is deemed crucial for the advancement of AI technology.

Furthermore, the analysis delves into South Africa’s efforts to improve its computational capacity and data centres. It mentions that South Africa aspires to be a hub for hosting data for other African countries. To achieve this goal, the government is collaborating with private companies such as Microsoft and Amazon to establish data centres. This highlights South Africa’s commitment to bolstering its technological infrastructure and harnessing the potential of AI.

The discussion also highlights South Africa’s dedication to AI skills development, with a particular focus on STEM and AI-related subjects in primary schools through to university levels. This commitment emphasises the need to provide quality education and equip the younger generation with the necessary skills to drive innovation and keep up with global advancements in AI technology.

However, it is also stressed that careful consideration must be given to data protection before implementing AI policies. The analysis asserts that existing legal frameworks surrounding data protection should be assessed before rushing into the establishment of AI policies or laws. This demonstrates the importance of safeguarding personal information and ensuring that data processing and profiling adhere to the principles of transparency, data minimisation, data subject rights, and campus limitation.

Moreover, the analysis sheds light on the challenges faced by South Africa in its AI development journey. These challenges include power outages that are expected to persist for a two-year period, a significant portion of the population lacking access to reliable connectivity, and the absence of a specific cybersecurity strategy. This underscores the importance of addressing these issues to create an environment conducive to AI development and implementation.

Additionally, the analysis points out that while data protection principles theoretically apply to generative AI, in practice, they are difficult to implement. This highlights the need for data regulators to acquire more technical knowledge on AI to effectively regulate and protect data in the context of AI technology.

In conclusion, the analysis provides insights into the various facets of AI development in South Africa. It emphasises the significance of policy frameworks, computing infrastructures, and AI skills development. It also highlights the need for prioritising data protection, addressing challenges related to power outages and connectivity, and enhancing regulatory knowledge on AI. These findings contribute to a better understanding of the current landscape and the potential for AI to solve African problems in South Africa.

Liisa Janssens

Liisa Janssens, a scientist working at the Dutch Applied Sciences Institute, believes that the combination of law, philosophy, and technology can enhance the application of good governance in artificial intelligence (AI). She views the rule of law as an essential aspect of good governance and applies this concept to AI. Liisa’s interdisciplinary approach has led to successful collaborations through scenario planning in military operations. By using scenarios as a problem focus for disciplines such as law, philosophy, and technology, Liisa has achieved commendable results during her seven-year tenure at the institute.

In addition, there is a suggestion to test new technical requirements for AI governance in real operational settings. These settings can include projects undertaken by NATO that utilize Digital Twins or actual real-world environments. Testing and assessing technical requirements in these contexts are crucial for understanding how AI can be effectively governed.

In summary, Liisa Janssens emphasizes the importance of combining law, philosophy, and technology to establish good governance in AI. She advocates for the application of the rule of law to AI. Liisa’s successful engagement in interdisciplinary collaboration through scenario planning highlights its effectiveness in fostering collaboration between different disciplines. The suggestion to test new technical requirements for AI governance in real operational environments provides opportunities for developing effective governance frameworks. Liisa’s insights and approaches contribute to advancing the understanding and application of good governance principles in AI.

Camila Leite Contri

AI technology has the potential to revolutionise various sectors, including finance, mobility, and healthcare, offering numerous opportunities for advancement. However, the rapid progress of innovation in AI often outpaces the speed at which regulation can be implemented, leading to challenges in adequately protecting consumer rights. The Consumer Law Initiative (CLI), a consumer organisation, aims to safeguard the rights of consumers against potential AI misuse.

In the AI market, there are concerns about the concentration of power and control in the hands of big tech companies and foreign entities. These companies dominate the market, resulting in inequality in AI technology access. Developing countries, particularly those in the global south, heavily rely on foreign technologies, exacerbating this issue.

To ensure the proper functioning of the AI ecosystem, it is crucial to uphold not only data protection laws but also consumer and competition laws. Compliance with these regulations helps ensure transparency, fair competition, and protection of consumer rights in AI development and deployment.

A specific case highlighting the need for data protection is the alleged infringement of data protection rights in Brazil in relation to ChatGPT. Concerns have been raised regarding issues such as access to personal data, clarity, and the identity of data controllers. The Brazilian Data Protection Authority has yet to make progress in addressing these concerns, emphasising the importance of robust data protection measures within the AI industry.

In conclusion, while AI presents significant opportunities for advancement, it also poses challenges that require attention. Regulation needs to catch up with the pace of innovation to adequately protect consumer rights. Additionally, addressing the concentration of power in big tech companies and foreign entities is crucial for creating a fair and inclusive AI market. Upholding data protection, consumer rights, and competition laws is vital for maintaining transparency, accountability, and safeguarding the interests of consumers and society as a whole.

Session transcript

Moderator – Luca Belli:
All right, we are almost ready to go. It’s almost five past five. Should I give you a heads up to start? We can start. We are already online. OK, fantastic. Good afternoon to everyone. My name is Luca Belli. I’m professor at FGV Law School, where I direct the Center for Technology and Society. And together with a group of friends, many of whom are here with us today, we have decided to create this group, this coalition within the IGF, called the Data and AI Governance Coalition, where, as you might imagine, we are discussing already data and AI governance issues, and with a particular focus from the global south perspectives. So the idea to create this group was born some months ago during a capacity building program that we have at FGV Law School. It’s called the Data Governance School LATAM, which is itself the sort of academic spin off of a conference. We host CPDP LATAM. You might know the European one. There is also a Latin American one that we host in Rio every July. And so after these three days of intense discussions on data governance and AI in March, actually in April, at the end of April, we figured out that it was good to keep on maintaining this very good interaction we had, and even try to expand them to bring new voices. Because one of the main, let’s say, critiques that emerged is that frequently these discussions about data governance and AI have a novel representation of global north, if we can say so, ideas and solutions, and the severe under-representation of global south ideas and concerns, and even solutions sometimes. So the idea was precisely to start to discuss how to solve this. And as many of us have a research background or are interested in doing research, we decided to draft this book that we managed to organize and print in record time. But I have to also to disclaim that this is a preliminary version. So if you want actually to give us feedback on how to improve it, or in case anyone is interested in proposing some additional very relevant perspective, we might have missed. For instance, we know that the only region that is still a little bit poor in the book is Africa. The others are very well covered. And we are going to actually create the form. If you tape in your browser bits.ly slash DAIG, like Data and AI Governance, DAIG23 in capital letters, you will arrive directly on the form where you can also download for free this book. If you are allergic to Google Forms, which is something that may absolutely happen, you can even use another mini URL, bits.ly slash DAIG2023, where there is the direct downloading option of this from the IGF website without having to fill any form on comments. But if you want to provide us comments, actually we are here to hear them. The book deals with three main issues, AI sovereignty, AI transparency, and AI accountability. I’m not going to delve into the transparency and accountability part, because we have a very large set of very good speakers that will explore the various details of these topics from very different perspectives. I’m just going to say two words on the first topic, AI sovereignty, which is actually an application, an implementation of what I have been working with some colleagues from another project, the Cyber BRICS project, with regard to digital sovereignty over the past years. And the fundamental teachings of the past years have been of two types. First, there are a lot of different perspectives on digital sovereignty. A lot of people see this as authoritarian control or protectionism. But also, there are a lot of other perspectives, including those based on self-determination and the fact that both states or local communities or individuals have the right to understand how technology works, develop it, and regulate it. And there is nothing authoritarian in all this. And actually, it’s a right of all peoples in the world, according to Article 1 of not only the Charter of the United Nations, as we are in the United Nations context, but also the International Covenant of Political and Civil Rights, and the International Covenant on Economic, Social, and Cultural Rights. So it’s a fundamental right of everyone here to be the master of your own destiny, if you want, in terms both of social rights, governance, but also technology. And so the fundamental reflection of the first part of this book is about this. How do you achieve this? And in the chapter I’ve authored, I identify what I call the key AI sovereignty enablers that are eight key elements that go from a stack, an AI sovereignty. is stuck. They go from data. So you have to understand how data are produced, harvested, how to regulate them. So data, you have algorithms, you have compute, you have connectivity, you have cybersecurity, you have electrical power. Because something that many people don’t understand is that if you don’t have power, you cannot have AI at all. You have to have capacity building, which is short of transversal. And last, but of course not least, you have to have AI governance framework based on risks, which are the main thing that we are actually trying to regulate. But I think that if we only regulate AI through risks, we only look at the tree and we miss the forest. Because there are a lot of other elements that interact and they are interrelated. So that is, in a nutshell, the first chapter. I was very honored to have Melody and her co-author, Ceaseless Nile, that was one of the former directors of the South African regulator to draft a reply on this framework with regard to South Africa. There is another one with regard to India. And then there are a lot of other very interesting issues analyzed by our distinguished speakers of today. So without missing any more time, I would like to pass the floor to the first speakers. We have in this first slot of speakers, we have some more general perspective and we delve into the generative AI part. And then we delve again, we zoom out into other transparency and accountability, more general issues. So I would like to pass the floor to Armando. I’m not going to list all the speakers now. I will present them each one by one because there are a lot. So first we have Armando Mazueta, that is Director of Digital Transformation at the Ministry of Economy of the Dominican Republic. Please Armando, the floor is yours.

Armando José Manzueta-Peña:
Well, thank you, Luca, for the presentation. I’m more than thrilled to be present here and to share with you some important insights regarding AI. how governments, for example, are trying to use AI, specifically Gen AI, to modernize their infrastructure and provision of public services. Well, how to begin with this? Well, a few technologies have taken the world by storm, the way AI has over the past few years. That’s something that’s a reality. Not even the blockchain revolution had had this much impact on the world as AI had. And its many use cases have become a topic of public discussion, not just for the technical community or the so-called tech bros, but say all people has been discussing on how to implement AI one way or another. And generative AI in particular has a tremendous potential to transform society as we know it for good, give our economies a much needed productivity boost and generate public and private value, potentially in the trillions of US dollars in the coming years. Well, the value of AI is not limited to advances in industry and retail alone. When implemented in a responsible way, where the technology is fully governed, data privacy is protected, and decision-making is transparent and explainable, AI has the power to usher a new era of public services. Such services can empower citizens and help restore trust in public entities by improving workforce efficiency and reducing operational costs in the public sector. On the back end, AI likely has the potential to supercharge digital modernization in by, for example, automating the migration of legacy software to more flexible cloud-based applications or accelerating mainframe applications modernization, which is one of the main issues most government has. Despite the many potential advantages, many governments are still grasping on how to implement AI and gen AI in particular. In many cases, public institutions around the globe face a choice. They can either embrace AI and its advantages, tapping into technology potential to help improve the lives of the citizens they serve, or. They can stay on the sidelines and risk missing out on AI abilities to help agencies more effectively meet their objectives. Government institutions early develop solutions leveraging AI and automation over concrete insights into technology’s public sector benefits, whether modernizing the tax collection system to avoid fraud and predict trends, or using automation to greatly improve the efficiency of the food supply and production chain, or to better detect diseases before they occur and prevent major outbreaks, such as the pandemic that we had before. Other successful AI deployments reach citizens directly, including virtual assistants and chatbots to provide information to citizens across many government websites, apps, and messaging tools. Getting there, however, requires a whole-of-government approach focused on three key main areas. The first one is workforce transformation, or digital labor. At all level of governments, from national entities to local governments themselves, public employees must be ready for this new AI era. While that can mean hiring new talent like data scientists and developers, it should also mean providing existing workers with the training they need to manage AI-related projects. The goal is to free up time for public employees to engage in high-value meetings, creative thinking, and meaningful work. The second major focus must be citizen engagement. For AI to truly benefit society, the public sector must need to put people in front and center when creating new services and modernizing the existing ones. There is potential for a variety of uses in the future, whether it’s providing information in real time, personalizing services based on particular needs of the population, or hastening processes that have a reputation for being slow. For example, anyone here has ever to field paperwork or has to suffer doing impossible lines or queues just to receive documentation or have to—that must be repeated at several institutions just to receive the same service. that they need. And the thing is, most of the governments, for example, don’t have interoperability or any sort of services just to exchange information freely. And it’s something that, with AI and other related infrastructures, that’s something that we could be solving very quickly. The third one is the government platform modernization. And governments are regularly held back by true transformation, by legacy or ancient systems that are tightly coupled with workload rules that require substantial effort and cost to modernize. For example, public sector agencies can make better use of data by migrating certain technology systems to the cloud and infuse it with AI. Also, AI-powered tools hold the potential to help with pattern detection in large stores of data and be able to write applications. This way, instead of seeking hard-to-find skills, government institutions or agencies can reduce their skill gap and tap into the evolving talent. Last but not least, no discussion of responsible AI in the public sector is complete without emphasizing the importance of the ethical use of the technology throughout the lifecycle, design, development, use, and maintenance, something which most governments have promoted for years, to put it simply. Along with many organizations that belong in the health care industry or the financial sector, for example, government and public sectors must strive to be seen as the most trusted institutions because it holds most of the citizens’ data, one way or another. So if the citizens don’t trust the governments, how they can even trust all the institutions that exist in the same nation? That means that humans should be able to continue to be at the heart of the services delivered by government while monitoring for responsible deployment by relying on these five core aspects for trustworthy AI. Explainability, fairness, transparency, robustness, and last but not least, privacy. When we talk about explainability, it means that an AI system must be able to provide a human interpretable explanation for expeditions and insights to the public in a way that does not hide behind technical jargon. In government, there are many trends and many conversations regarding algorithmic transparency because that is the major aim, to reveal what’s in the black box and so for everyone to see how an AI system works and how it was built so we understand how it provides this insight and how it deploys and how it functions. The second one is fairness, that an AI system’s ability to treat individuals or groups equitably depending on the context in which the AI system is used, countering biases and addressing discrimination related to protected characteristics such as gender, race, age, and other status. Transparency, an AI system’s ability to include and share information on how it was being designed and developed and what data from which sources have fed the system, which is something that I previously mentioned with explainability, which is something that is closely related to it. Robustness, an AI system must be able to effectively handle exceptional conditions such as abnormal abilities in input to guarantee consistent outputs. And last, privacy is basically the ability to prioritize and safeguard consumers’ privacy and data rights and addressing existing regulations in data collection, storage, and access and disclosure, which is why it’s important that besides implementing AI, we also should be consistently improving, modernizing the frameworks that entice us everything related to data protection because if we don’t have those rules in place, there is the possibility that many people, not just in the private sector but also the government, use the data that is stored in the government databases to do harm, to use it as a political weapon. and many other things. So it’s important that we have strong data protection rules in place so the data isn’t used against the same citizens that the government is there to protect and to serve. Just to conclude, if AI is implemented in a way. I’m going to ask you to conclude quickly because we have a lot. OK, just a quick conclusion. If we implement AI, including all the traits mentioned above, it can help both governments and citizens alike in new waves. We can generate public value, but in a way that allows all the citizens to benefit from it and to build a future that we all want to live in. Thank you.

Moderator – Luca Belli:
Thank you very much, Armando. And thank you very much for giving us this initial inputs on what is the ideal that governments should strive for when they have to automatize their system and implement AI. And now I would like to give the floor to Gbenga, that might have a more critical perspective and less ideal. And it’s very good to have both these perspectives to try to synthesize our own opinion. Please, Gbenga.

Gbenga Sesan:
It’s like you framed my conversation already. I’m glad we’re having a lot of conversations around AI. This is my second panel on AI today. Thankfully, this is more focused on generative AI and data protection. But I think one of the advantages of having such conversations over and over is that you get to tease out all the points and ask the questions. And what I want to do is to very quickly, so that you don’t have to say I should conclude, I want to speak very quickly to three things. One is in terms of policy. The other is in terms of people. And if I have more time of my six minutes, I’ll conclude on practice. And by policy, I mean that we already, in many cases, have data protection regimes in many countries. There are countries that still don’t have data protection regulation. Of course, this presents an opportunity for them to have this conversation within the context of massive data collection and processing for AI. But for those who have, it means that this is also a chance to have a review. And I say this as an African who is excited that now finally the Malabo Convention has been ratified by as many countries, so it’s in force. But also concerned that it happened so late that Malabo Convention, the text of the Malabo Convention is to say the least outdated. And of course, there have been calls for reviews. There are countries that are literally just ignoring the fact that they have more recent policies on the subject. So I think in terms of policy, we need to have a conversation about how to make sure that existing data protection policies are useful as we have this conversation about massive data collection and processing. People are putting in their data, it’s being processed. And that takes me to my second point of people. I work in civil society and that means that much of my work is centered on people. And it means that when we have all these conversations over the last year, I mean, November 30, oh, actually, it’s just a month away. So November 30 is the birthday of CHAT-GPT, as everyone knows. So it’s been one year. And there’s been a lot that’s happened since then. But at the center of all this is people, the data owners themselves. I’ll give a very simple example. When CHAT-GPT came, a lot of people were just typing and typing because don’t forget many times, the reason why people engage with either social media or new platforms or new technology. which is the way we do, is that for many people, it’s literally magic. You know, you put in where you’re going, and then the map tells you how to get there. And it tells you there’s going to be traffic. And it’s almost like magic. But the problem is that many times, because people don’t understand that when they put in their data, that’s the input that is being processed. The output is what you get. But the input is also important. So I think in terms of people, we need to have a conversation around demystifying AI, which is one of the reasons I’m glad we’re having all these conversations over the last two or three days, for people to understand, you know, when I put in data, I’m training the system. When I ask questions, the response I’m getting is based on what input has already been given. Of course, that goes to the need for, and we talked about that a bit earlier today, that in modeling AI, we need to make sure that diversity, this is not about tokenism. This is real diversity. Otherwise, we’re going to build systems that don’t understand context. I’m going to cause more problems than solving things. And finally, it’s on practice. And I think this is where the data protection commissions come in. Hopefully, data protection commissions that are independent already understand the need to have conversations with various stakeholders. And the practice is, what happens if something goes wrong when I’m using, you know, any platform or system that is powered by artificial intelligence? You know, someone shared an article with me a few days ago. It was supposed to be an article about myself, but I read the article and I was confused. Because at the beginning, it was accurate, and then it gave me a master’s degree that I don’t have from a school I haven’t attended. And then it said I was on the UN I-level panel on digital cooperation, which is very close. Because, you know, I’m on the IGF leadership. panel, but not the one on digital cooperation. And this is quite tricky. And this, by the way, is one area of criticism from me to say that, what happens when I use this and something goes wrong? Who do I talk to? And I think this is one place where people, institutions, that already answer questions with data protection can come in. So I’ll close it here and say that it’s really important that we center this on people. But apart from saying that, there’s a need to review policy when necessary. People are the center of this. And when it comes to practice, what do I do when something goes wrong? Who do I talk to? We need to demystify this black box. Fantastic, Beng.

Moderator – Luca Belli:
I really like this trilogy of policy, people, and practice. Actually, while you were speaking, I was thinking that, in the best case scenario in most countries, we have some sort of policy, but the people part is almost inexistent. Even in the most in the country that have data protection, for instance, for 50 years, like in Europe, most of people would not be aware of their rights, let alone in the developing world. And the practice part is still something pretty much non-existent everywhere. All right, on this initial energy and optimism, let’s get to the third speaker of this first round, Melody Mussoni. Please, Melody, the floor is yours.

Melody Musoni:
Good afternoon, everyone. Thank you, Luca. I’m happy that you are bringing these issues around data protection and how laws and how that can help with regulating AI. And I’ve been following a couple of discussions around AI policy and regulation. And I keep on wondering, what exactly do we want to regulate here? Because when we look at law, it is quite vast. There are different areas of law. Are we looking at it from a direct perspective? intellectual liability, criminal liability, are we looking at intellectual property issues, data protection, there is a myriad of issues that I think when we have these discussions around AI policy and regulation, we need to keep that at the back of our minds on what exactly do we want to regulate, are we regulating the industries, are we regulating the types of partnerships that we may end up having, or it’s just going to be specifically data protection. And I’m sure some of our speakers will speak on the limitations that we have with data protection laws. And coming to my section on the chapter we wrote on South Africa, what we did was we looked at the case framework that Luca spoke about earlier, looking at how these key AI enablers can actually apply within the South African framework, and hopefully that can also be replicated across Africa and other African countries. And I’m just going to touch on four important key findings from the research that we have already conducted for South Africa. And the message that we are getting throughout is that there is the need for AI made in Africa to solve African problems. So when you go through some of the policy frameworks at the African Union level, for example, the digital transformation strategy, looking at the data policy framework, that is the message we keep getting across, that there is that urgency for Africa to start looking into AI and innovation to actually develop African solutions or homegrown solutions to deal with African problems. And then the second key point I want to emphasize in looking at South Africa is the issue of computational capacity and data centers and building the data in cloud market in Africa. So you understand, of course, that with AI development. government, it would depend more on the availability of computing infrastructures to host, to process, and to use data. And with South Africa, what we have noticed is that there are efforts to actually improve on its computational capacity. There have been discussions about having as many data centers within the country as possible, the private sector, the likes of Microsoft, Amazon. They’ve been actually working closely together with government to make sure that there are data centers on the continent in South Africa. So the vision for the country is not just to have data centers in South Africa to cater for businesses and government in South Africa, but also to become a host or to attract other African countries to actually host their data within South Africa. And there was a draft policy that was published sometime in 2020 called the National Data and Cloud Policy, and that policy seemed to actually point towards a direction where South Africa wants to locally own, to make sure that locally owned entities are active in the data market and promoting local processing and local storage of certain types of data. And as you can imagine, like with data localization, it’s something that is not so popular. So there have been clash back from different stakeholders. And now, as I understand, there have been an update on that draft policy. It’s yet to be finalized, the updated version. It’s yet to be released. But what we anticipate is we want to see this revised data and cloud policy to focus more on better regulation of foreign-owned infrastructure instead of indigenizing all existing infrastructures. while also promoting our public-private partnerships. And the third point I also want to speak on, which also supports this notion of AI sovereignty for Africa and for South Africa in particular, is the commitment towards AI skills development. So there is, again, what we are getting from going through the fragmented policies is that South Africa is hoping to build its own pool of AI experts to research and develop AI-driven solutions to address some of the problems that it has. And there are different programs, starting from basic primary education level all the way through to university levels, which are focusing on STEM subjects as well as AI-related subjects. Of course, the question would be, how long are these initiatives going to be actually implemented? Most of them, they are still strategies and they are still plans that are yet to be actually implemented. So it’s still a long process. And the last point I also want to point out is they still need to have an AI strategy. The country doesn’t have a clear AI strategy or an AI policy, but I would like to say or to think that it’s important for countries to first prioritize, like Kibenga said, data protection issues before you rush to have an AI strategy or an AI policy or law in place. So starting from what are the low-hanging fruits? We have data protection laws. Are they adequate enough to address some of the data processing activities? Do we have cybersecurity, cybercrime laws? To what extent do they cover issues like deepfakes if someone is going to commit a crime and they are using AI technologies? To what extent do the existing legal frameworks that we have, are they adequate? Are these legal frameworks addressing some of these issues? And of course, just to finalize, there are of course challenges that the country and other African countries are facing and likely to face in development of AI systems and even with data processing. Issue of power outages, unreliable power supply in South Africa, it’s now a very big problem. Almost every day there are electrical outages and load shedding and it’s been said that it’s going to run for a period of two years. So imagine you rely on electricity and already the amount of time you spend online is going to be cut short because there is no electricity. So that’s also a challenge that the country is facing. The second challenge, I think it applies to all other digital projects, issue on meaningful connectivity. Yes, there have been massive deployment of different digital infrastructures. Now we are moving to 4G and 5G, but still about 16 million people are still unconnected in the country. And then also the need again for stronger cyber security. So there are laws on cyber crime, there are laws on protection of critical infrastructure, but there is still no strategy specifically to deal with cyber security. And also coming at the last point on implementation of the laws that we have, especially data protection laws, there’s always going to be that challenge that our data regulators will not have the capacity and even the expertise to understand some of the AI tools that are in place to be in a position to actually assist with implementation and enforcement of the laws. So those are my thoughts.

Moderator – Luca Belli:
Thank you very much, Melody, and also for stressing how these issues are interconnected. And many of the most relevant ones are infrastructural issues, particularly, I would like to stress something that you mentioned about… compute and cloud computing there are actually three main corporations that have almost 70% of the cloud computing market Google Cloud AWS and Microsoft Azure then a little bit of Chinese corporation a little bit of Huawei and a little bit of Alibaba but then basically the entire world relies on five corporations to do AI and generative AI so that is that is a huge challenge because even if you want to if you want to find an alternative it’s some an investment that takes ages it’s ten years investment in the best case scenario to have something minimally reliable and no government is in charge for ten years or has the vision to do something in ten years so it’s it’s really something that is worth thinking about all right let this now the moment for the first break for questions so we can take two questions and then we will get into the second segment of the session so if you have questions you could line yes you can raise your hand you could there is a mic there for question so if we can take two and we have a quick round of replies and then we get into the second segment then we will take more questions and then we’ll take more question at the end all right so we have one there and yet I see two hands there so if you could use that mic and introduce and explain mention who

Audience:
you are thank you very much hi my name is Shuchi I work for nationality for all which is basically an organization that deals with nationality rights my background is not really in AI which was why I was so interested in this conversation because I really wanted to understand the question that this panel really proposes whether it can whether generative AI can be compatible with data protection and I understand the challenges that we’ve all been speaking about and those have been deeply insightful but for the for the second phase of this panel I would be super interested to actually know if there are sort of frameworks or if there’s any sort of ways that we actually have if this has basically worked in regions because again My background isn’t in AI. So I was really curious to know, because it’s very in line with statelessness and nationality.

Moderator – Luca Belli:
Yes, in the second, be sure that in the second segment, we will speak about this. So that was the quick reply to your question. So maybe if we can have another one, an extra one, if there is. This was a very fast reply. So another one, yes.

Audience:
Hi, my name is Pranav. I’m a technology law and policy professional. And I also had the opportunity of contributing to this report with a paper on generative AI, thinking about privacy principles. And the gentleman, the speaker, also mentioned about why there is need to ensure data protection within Gen-AI platforms. And my question is from everyone on the panel and in the room is, what are some of the key privacy principles at a normative level that should be ensured so that these Gen-AI platforms can comply with? And I have teased this question with identifying 17 of them in my paper. And this is just the first step, to seek inputs at this global forum. And then I would like to test those principles by deploying it on around 50 use cases and then make it better. So if at a normative level, you have any ideas that these are some of the key principles that should definitely be there, that level of consensus building would be really helpful for our people. Thank you.

Moderator – Luca Belli:
Fantastic. And yes, let me also mention that we have 24 chapters here with almost 30 authors. So given the time constraints and also space constraints, we were not able to have everyone. We plan to have webinars where everyone can present and have feedback. If you want, anyone else who wants to have, or even has an answer, actually. Even if you want to, we want to have a conversation here in this segment. So if anyone from the audience also wants to give a reply, you are very welcome to reply. And then we will have feedback from this panel.

Audience:
Thanks a lot, Luca, to give me the floor. And thanks to. to the previous speaker. First, I would like to thank you. It’s very glad to hear from the southern countries the voice. And that’s very important. As we got the problem of AI and data protection, that’s a very big question. And I have worked hard on that problem. It is quite clear that AI put into question a certain number of data protection principles. And I would like to have your feeling there about. First, the question of finality, question of purpose. Normally, you must have a determined purpose. And with generative AI system, you have no more the possibility to have a specified purpose. The second problem is the question of minimization. It is quite clear that it is totally contrary to the AI functioning. AI functioning is working on big data. You do not know, a priori, which kind of data will be interesting and pertinent for achieving your purpose. Another problem, and you have mentioned that, is the problem of explainability. It is very difficult to make AI system explainable, because it is quite clear that there is no logic. As Vint Cerf said, it is quite clear that you are working on correlation and not on a certain logic. And so you have no logic at all. I have other problems, but we might come back on this issue. It is quite clear that, as we got the problem of personal data, it is quite clear that AI are working more and more on non-personal data. And they are using that for profiling people. So it would be absolutely needed that data protection legislation will enlarge its scope.

Moderator – Luca Belli:
All right, these are very good questions. Do we have initial replies from the panel? Melody, yes, you can go first.

Melody Musoni:
I agree with you. We have more questions than we have answers. So looking at the protection of personal information of South Africa, we provide a framework to say when it comes to automated decision-making processes and profiling, these are the conditions that have to be met, and then looking again at the basic data protection principles on transparency, data minimization, data subject rights, campus limitation, it’s, the principles are there, but I think application, that’s where the problem is, because we don’t have any data, but I think application, that’s where the problem is, because it’s much easier to say, okay, in this context, this is the principle on processing of personal data. You need to know the purpose, you need to be very transparent, when it comes, especially with facial recognition technologies, we need transparency, can data subjects exercise their rights? So in principle, in theory, the principles apply, but then when it comes to practice, and especially with generative AI, I think we have more questions, and that’s why I was saying even with our data regulators, there is that level of expertise from someone with more technical knowledge on how that actually, the technical side of AI, that can be translated into the legal side. In my opinion, there is more questions than answers.

Moderator – Luca Belli:
Armando is an answer.

Armando José Manzueta-Peña:
Actually, I don’t have, like I said, like you said, there are many questions that are still to be solved, to be answered regarding AI uses, but I think it applies to most systems. the use of data, and any system, depending on any platform or any technology, its quality will depend on the quality of the data itself that the system has been fed on. And if we don’t have, as you said, the public protections in place, and we don’t have the data that is properly collected, and it’s properly minimized, then the system will, of course, will do a profiling of the person, of the company, or the subject itself in a way that doesn’t necessarily translate into the realities or provide a solution to solve a certain problem. So in that case, besides having strong data protection rules, there should be also strong data collection and data validation regarding the quality of the data itself in order for AI or any system to provide a proper solution or actually of use of any help at all. And that’s the main challenge that we as a government has, especially in that part, in the developing nations, because having data of good quality, good administrative registers, it’s the main issue that we’re facing right now, just to give this any use.

Moderator – Luca Belli:
Okay. This provides us a very good segue to the second segment of the session. So let me give the words to the regulator. We have Jonathan Mendoza, who is Secretary for Data Protection at the National Institute for Transparency, Access to Information, and Protection of Personal Data of Mexico. Please, Jonathan, the floor is yours.

Jonathan Mendoza Iserte:
Thank you, Luca. Good afternoon. How are you? I want to thank the organizers for bringing this topic to the table, especially Luca Belli, a leader in the Latin American region. Data governance and trust has become a crucial topic, and we find ourselves at a critical juncture in the history of technological advancement. Artificial intelligence is rapidly evolving, offering boundless potential for innovation growth and improvement in our daily lives. But in the same way, we must also recognize the challenges it poses for its regulation, ethical use, and the importance of promoting AI transparency and accountability. In the Latin American region, steps have been taken toward regulating artificial intelligence. However, we must remember that the region is very diverse and has technological deficiencies that only allow access to technology for some sectors and groups of the population, therefore closing the digital crest is a primary task. Even though there are some exercisers that are part of the efforts to regulate artificial intelligence, there needs to be a full instrument dedicated entirely to it. In 2019, the members of authorities of the Ibero-American Data Protection Network issued general recommendations for processing personal data in artificial intelligence. Also in the region, it seems to be a trend closer to the ethical use of technology, but how could we ensure that algorithms are fair if they are not accessible to public scrutiny? How can we balance the ethical design and implementation of AI? Artificial intelligence can contribute enormously to the transformation of development models in Latin America and the Caribbean, to make them more productive, inclusive, and sustainable. But to take advantage of its opportunities, and minimize its potential threats, reflection, strategic vision, regional and multilateral regulation and coordination is required. According to the first Latin America Artificial Intelligence Index in 2023, Argentina, Brazil and Mexico are regional leaders in participation in international spaces to influence the global discussion on AI. In the global context, according to the McKinsey Global Institute, the use and development of AI in multiple industries will bring mixed economic and labor results. 2030 estimations of AI are $13 trillion will be the impact of AI in the global economy. 1.2% will be its contribution to the annual gross domestic product globally. $15.7 trillion will be the additional income to the global GDP. 45% of the benefits of AI will be for finances, healthcare and the automotive sector. As Chris Newman, Oracle’s principal engineer said, as it becomes more difficult for humans to understand how AI tech works, it will become harder to resolve inevitable problems. In our interconnected world, multilateralism plays a key role because AI knows no borders and international cooperation is not just beneficial but imperative. We must ensure that AI respects fundamental rights with a human-centric approach, abiding biases. The paper I co-authored with my colleagues Nadia Garbacio and Jesús Sanchez is a proposal to start a debate on AI in the Latin American region. We propose the creation of a dedicated mechanism that contributes to AI-related matters. Cooperation and strategic alliances with the Organization of American States will help us achieve this goal. To facilitate the implementation of this proposal, it is suitable to create a committee of experts that analyze and agrees on the importance and urgent need to contribute through non-binding mechanisms to the situation regarding the use and implementation of existing and yet to be developed disruptive technologies given the risk they could imply in the private life of users. The objective of this committee of experts must be built on goodwill and on the exchange of knowledge and good practices that promote international cooperation based on multilateralism and the opportunities that it offers us to strengthen the protection of human rights, joining efforts with other international organizations that have also spoken out on the matter, as well as with groups of economic powers that have shown their concern about this panorama of the new digital age. The work of this committee will be based on a mechanism that will seek to analyze the specific cases, issue recommendations, provide follow-up, and develop cooperation tools. Let’s be part of the conversation to maximize the benefits of AI for our societies while minimizing its potential risks. We must remain committed to fostering international cooperation as well as strengthening these efforts to ensure that AI serves humanity’s best interests.

Moderator – Luca Belli:
Thank you very much, Jonathan, and let me also stress that ENI has been doing a lot of excellent work, both in terms of attempts of policy experimentation and international cooperation in trying to put forward some recommendations on how to work and regulate with generative AI. Staying in the Latin American region, I would like to ask Camila. has also been one of the minds behind the construction of this group since April to provide us a quick overview of what’s happening in Brazil.

Camila Leite Contri:
Perfect. Thank you so much, Luca, for the invitation, for the creation of the group, for all the amazing job that you do in FTV, and also a pleasure to be here with you. Considering that I’m from Brazil and also from a consumer organization in Brazil, I would like to focus on that. We are talking about data privacy, but as Melody mentioned, we are not only talking about data privacy. We have several other rights that we have to consider. So I’m going to talk first about the general risks that we can talk when facing the challenges of generative AI. Second, about the laws that might interconnect on that, focusing on data protection, but also on consumer protection. And also talk a little about the Brazilian context in terms of legislation and ways ahead, too. AI has lots of possibilities. And for example, EDAC works in financial services, in mobility services, in health. And all these areas can benefit from AI and generative AI. But as we can see, it has two sides. We have both an opportunity and a challenge on dealing with that, especially because innovation goes in a speed that regulation does not follow. So that is why it’s important also to think about current legislations that have to be applied when we are facing that. Some general risks that you are tired to hear, like we have issues related to power. We have issues related to wrong output on the use of this technology to manipulate people on bias, discrimination, privacy, vulnerabilities. And we also have a challenge in here coming from a global south country. And it’s a table of global south in here, which is dependence. So we are talking about how to protect people from that. And we rely on other countries, on other technologies. And how we can do that, how can we build the sufficient power on that? So it’s a great challenge that, obviously, I don’t have an answer. But I hope that we can build on that. Also, one important thing is the techno-solutionism that this kind of technology bring. Because when we do that, we disregard the context. And that is the reason that I want to talk more about Brazil. But before talking about Brazil and the different laws, I would also like to bring the issue of concentration of power. Once we are talking about generative AI, of course, we think about CHEP-GBT. But we are not only talking about CHEP-GBT. We depend not only on foreign companies when we are talking about the global south, but rely on big techs. And we know that these big techs can bring lots of solutions, but lots of abuse, considering that they dominate the market. So that is why it is important to consider not only data protection law that, of course, is extremely necessary, but also consumer law to protect people in the end. We are putting people in the center. And these people are also consumers. We are all consumers. And competition law also to face that. So first law that is important that we have to comply in its existence, and we have to enforce that, is competition law. The second one is data protection, as we are mentioning. And to develop on that, I will talk about a case in Brazil that was brought by a really known organization in Brazil, which is a really known person in Brazil, which is Lucca Belli, about this. And third of all, also consumer rights have to be respected. We are talking about transparency. We are talking about access to information, which is basically consumer traditional. rights Beyond that we have also IP IP law, of course copyright and But I’m going to to focus on that Okay talking about Brazil. Brazil is a huge market not only in terms of Market in general, but also on AI so Brazil is the fourth country on the use of chatty PT so it’s a concern that we have to to consider on that and Since is a since it is a concern I’m going to spend a little more time talking about the petition that was presented to the data protection authority in Brazil by Luca about about Not complying with the data protection law in Brazil and of chatty PT not complying on the law I’m gonna focus on the on the rights that the that was That was requested on this petition which is to know the identity of the controller of the data This is a minimum thing to know The second one is to access all the personal data that have respect to the person that is affected So this is about self self determination and as Luca mentioned, this is not only data protection, right? But this is a human right in the end The third one is the right to have access to clear and adequate Information on the criteria and the procedures used on the formulation of the automatic response these three topics can Luca brought it but everyone is affected by that not only in Brazil but also other countries and Also this this kind of complaint could have been brought also by the consumer authority Because we are talking about access to information in the end So this is a provocation also for you Like we have to think on how we can advance on that not only in Brazil, but other countries Unfortunately, I have some bad news that the data protection authority didn’t go forward with this this process, which I think It’s not only sad, but it’s an absurd And I hope that we can the authority can advance on that because it’s an important issue But nowadays the data protection authority is bringing a consultation on sandbox of AI But when we bring cases like this when Luca bring a case like this, they don’t advance on that. I Don’t know why Second context that Jonathan also brought me just ask you to wrap up in one minute. Okay, just one minute the the network of authorities in in the Ibero-american region also is focusing on chat GPT issues of legal hypothesis XR exercise of rights and transfers of data, which is interesting because the data protection authority in Brazil is also present on that and We have to comply with existing laws, but we can also advance on future frameworks as you were mentioning so In Brazil, we have a bill also on that and we hope to advance on that But meanwhile, we have we have to comply with existing laws. Thank you. Sorry for extending. Thank you very much

Moderator – Luca Belli:
Just a very brief comment because that case that she was mentioning that concerns me personally It’s it’s very also frustrating to say that even when there are laws in place and rights in place when there are every law has needs to have elements of flexibility not to Not to to be able not to regulate Technology in a way that is to strict and and allow the advancement of technology But when there are clauses of flexibility like what is an adequate information about how your data are processed or what is in Adequate information about the criteria according to which your data are utilized to train models That is the moment where the regulator has to enter into the game because adequate anything is adequate is the favorite word of lawyers together with reasonable because you can charge hefty prices and fees to your clients to debate what is adequate or reasonable. But the role of the regulator is precisely to say to enterprises, to people, what is adequate and what is reasonable. And it’s a little bit frustrating when the regulator don’t do it. And they find also that some very curious practices of data scraping by some corporations are maybe considered as adequate or reasonable because those are very hard to believe and to think as reasonable and adequate practices. Anyway, not to get into very personal matters. I would like to ask if our online panelists are online. Can you hear us? I would like to ask if Wei Wang is connected. Wei, can you hear us? Sure. OK, so actually, we have an example where generative AI has actually already been regulated in China that has just issued some specific recommendations on it. And so it’s quite interesting to understand what is the situation in China with regard to regulation of generative AI and data protection. So please, Wei, the floor is yours.

Wei Wang:
Thank you so much, Luca, as always. And thank you for having me today, at least virtually. Yes, and it’s very cool to meet quite a few new and older friends, at least virtually. Yes, and as per the content of our report, I think I’m supposed to share some Asia perspectives on regulating artificial intelligence in the first place. Since I came back from Latin America to Asia, yes, I have attended quite a few events, both online and in-person. I happen to find that quite a few, I mean, Asia jurisdictions are cautious in regulating AI. They prefer to let ethical framework to go first rather than making how to go come first. And they also prefer minor steps where what we call precise regulation. For example in Singapore the governance model prefers a light touch and a voluntary regulatory approach for a I basically the aims to use a I as a tool for economic growth and improve and improving quality of life. But they also acknowledge that Singapore might have to adopt to existing global frameworks instead of creating new regulations in isolation. So this is sort of I mean global least perception. I distinguish those Asia jurisdiction from others like you you Brazil you can the United States. Do you always. I think as all of us know you and the Brazil are adopting comprehensive acts or abuse at a UK model is based on a pro innovation idea so far at least while the United States seems to stick to the liberal market idea. Still I got my contract. China has a sector specific approach instead. For instance in the areas of recommendation and I present this is technology and a generator via as Luca has mentioned. So as some from the FPF I mean the future private firm argued that data protection authorities are becoming sort of default regulators for a I in this time gap. In the case of China the PIPO it’s personal information protection law as well. Articles for a double 24 27 55. They are clearly relevant to regulating automated decision making and official recognition. And then the newly established in Turin measures on generative I basically highlights the importance of ensuring the use of data and the underlying models from legitimate sources in compliance to the revolution laws and regulations as regards IP and the data protection. But. way. But things are it seems are becoming I mean more interesting as quite a few to reduction are considering a big change in this sort of regulatory model. For example in both states and China as you may be already aware of the recent proposed bipartisan framework for a U.S. act advocates for a regulatory focused on legal accountability and consumer protection proposing a license and regime administered by an autonomous oversight entity. Similarly in China a research group on the Chinese Academy of Social Science of which I’m currently to invite member as well drafted a model. I love proposing a negative least based and risk based approach to governing. I there are some similarities with the U.S. act but there are also some nonsense as well. But but I mean generally the model law introduced the principle of accountability catalyzing the entities along the value chain at assigned duties or responsibility in terms of retention disclosure at a manual assistant to data disclosure or data sharing with an institutional intent of fostering a transparent system. That being said some of the jurisdictional perspectives are reaching a conscience as we got the AI governance. But this also requires more continued comparative studies for example about more to go both sides and approaches. Those new development basically highlights the response of jurisdiction to address those challenges of AI with the focus on accountability principles were tailored obligations and a proactive technology design. You’ve been probably coming. I mentioned that the technos solution is a solution isn’t. But it’s still essential to seek an implementable where up up the operationalizable. I’ve mentioned in our chapter sort of about the reason lies lies about. requiring a sort of a long-standing balance between adaptability and the regulatory predictability to ensure effective and end-to-end governance within the dynamic AI landscape. We will definitely keep coming across the question of regulation versus innovation. And I think our DC is a perfect place to achieve this goal, I believe. So in this regard, I look very much forward to continuing the collaboration within and beyond the group in the near future. Okay, I think that’s all from me today. Thank you for having me here virtually today. Yes, I will hand it back to you, Luca.

Moderator – Luca Belli:
Thank you very much, Wei. And actually now we have, this is a good segue to enter into the last speaker of this segment, the Smriti Parshira from India. Smriti, can you hear us? Smriti, are you connected? Yes, I can hear you. Yes, so Smriti can bring us a little bit of, is going to broaden a little bit our perspective with some concrete cases from India, and then we can expand on this in the last segment. Please, Smriti, the floor is yours.

Smriti Parsheera:
Thanks so much, Luca, and hello to everyone in the room and online. So as Luca mentioned, I’m gonna really be a little broader than the topic which is suggested, which is more specific to generative AI. And my intervention in this book talks about the question of transparency and the interpretation of what really transparency should mean in the AI context. And this is a term which is well regarded now, well accepted in most AI strategies. India also has a AI strategy, and it talks about the principle of transparency among others. It’s also a principle that’s reflected in different ways in data protection law. So India very recently has adopted its data protection law. And the philosophy of transparency does come about when you think about processes like notice and consent, access to information, correction, redress facilities. So all of this does speak in some way. to transparency and very often in the AI context, transparency is connected with explainability and accountability. And what I do in this intervention is really, I say that when we often think about transparency in the AI context, it’s often the tools or even the discussions are very much about the technical side of transparency. So it’s about algorithmic transparency, transparency of the model itself, but the paper argues that we really need to step back and take a broader lens because we know that there are a number of actors who are involved typically in any AI implementation. And therefore transparency like every other principle you see in AI principles should permeate through the entire life cycle of the project. And in this paper, I specifically identify three layers and this is mostly in the context of, large scale public facing applications. And I take the case study of one such application in India in the context of facial recognition systems for entry into airports, which is something which is being seen across the world and many other countries you see similar system. And the argument of the paper is that, there are at least three layers of transparency that you need to think about. The first is policy transparency. So it’s about how did this project come about? Is there a law backing it? Who are the actors involved? Which government departments, ministries took this decision through what open and deliberative process? The second is about technical transparency, more well understood questions about a transparency of the model, what kind of data was used? Who designed the code? What does the code do? How well does it work, et cetera. And the third is about operational and organizational transparency, which is really about which is the entity which is finally giving effect to this. How does the system work on the day to day basis? What are the kind of failures that you’re seeing? What is the kind of accountability mechanisms that exist for this entity? And who exactly is it answerable to? Is it answerable to the parliament, to the public? Like what are the mechanisms for transparency within this body? And then I apply this in the paper. I’m not gonna go into great detail due to paucity of time into. the findings, but there are three, you know, broad observations that I made. One is that transparency in the policy sense cannot just be about imparting information to the public about the existence of such systems. It has to be a bit more deliberative about, you know, why we are bringing this, should we bring this in the first place, et cetera. The second point is about, you know, there is a culture of third parties working with the government, either as philanthropies, as think tanks, as consultants. There is the need for transparency, not just about who developed the code and whether we were transparent in the procurement process, but even how did these ideas come about and there is need for transparency, you know, at a deeper level. And finally, tools of transparency. Very often, if you have entities outside of the public, private sector, nonprofit bodies running these systems, then will the, you know, tools of transparency, which are in the form of right to information laws, for instance, in India, apply to these entities? And we see in this particular case study, which I study here, that the design does not enable the application of, you know, transparency and public disclosure, which a public body would be faced with in this particular structure. So I’ll stop with that. And people in the room, I would love to hear your comments if you have it later. Thank you, Luca.

Moderator – Luca Belli:
Fantastic, Smriti. Now, we have to do, to have a series of actions in the next five to 10 minutes, because we will have the possibility for participants to ask questions. At the same time, we will have the speakers of the initial two rounds that will move to the first row of chairs, and the speakers of the last round that will move to this part of the table, because for organizational purposes, we have to, speakers have to be here. So if you have questions in the room, please, this is the moment for you to ask questions using the mic there. We have questions from, oh, yeah, sorry. So let me also thank Shilpa Singh, that is our remote moderator, and you can take the mic and ask the question from the…

Audience:
participants. There’s a question from Mr. Amir Mokaberi. He’s from Iran and his question is that, could shaping the UN Convention on Artificial Intelligence help to manage its risk and misuse at international level? Do geopolitical conflicts and strategic competition between AI powers allow this? What is the

Moderator – Luca Belli:
role of IGF in this regard? That’s a very, very open question. I don’t know if the new set of panelists has any ideas on this. My personal take is that it will take a lot of time before having international agreement on any international regime on AI and that is precisely the reason why many tech executives, or at least some of them, may be advocating for having an international regime because they know very well it will take between seven and ten years to be developed and maybe start being slightly meaningful. So it’s, I don’t know if we have other opinions here in the panel on international organization. I think actually this is a very good connection with Michael Karanikola’s paper because he’s really, actually coincidentally, he’s the first speaker of this last slot and coincidentally he has written an excellent chapter in this book about this, exactly this topic. So Michael, no one better than you can reply to this and start presenting your paper, please.

Michael:
Thank you and I’ll start by echoing the other panelists and thank you, thanking Luca. I’m amazed at how quickly this has come together and with such a great group of authors. So my paper focuses on emerging transatlantic frameworks for AI that are being developed under the auspices of a handful of powerful regulatory blocks, namely the US, the EU, and China, and examines the implication of this trend for the emerging AI governance landscape. Gonna have to go through this very quickly so I won’t go too deeply into the paper. But just in response to the question about the potential UN framework, I discussed the OECD framework as well as these different structures. And I think there’s a broader tension between the value and benefits and efficiencies of harmonization, right? And the tendencies of harmonized standards, whether it’s at the UN level or the Brussels effect or the California effect or whatever, to trample over important local contexts, not only in terms of the needs of populations being impacted by AI, but also in terms of how, at a more basic level, how harms are framed and the assumptions and prioritizations that are inherent in any legislative framework. And I argue in my paper that there is a challenge in terms of trying to develop a harmonized structure that it’s going to fail to take into account diverse populations, particularly when the people that tend to have a seat at the table in the early development of these standards tend to be from wealthier parts of the world. So I explore that tension. I think that it’s, I’ll caution by saying that it can also be overly reductionist to view this dynamic purely in global north and global south terms, that there are a lot of different dimensions to this. But ultimately, I say that as frameworks begin to coalesce into transnational standards, it’s important to query whether they actually represent the needs and concerns of those on the sharpest edge of technological disruption, and whether the development of these standards and the harmonization of these standards. has the potential to further entrench inequities on a global scale. So that’s a two-minute version of my paper. And I’m happy to chat further, folks, have questions.

Moderator – Luca Belli:
Fantastic, Michael. Also for actually having provided both a reply to the question and the presentation of your paper. I guess you have also a question. Yes, so I think we can do this. We can take this question, and then go through the presentation. And then it will be the first question to be replied at the end of the presentation. OK, yes, please go ahead.

Audience:
I just wanted to build on what Michael just said. My name is Michael Nelson. I work at the Carnegie Endowment for International Peace in Washington. And one of my colleagues is Anu Bradford, who wrote the book The Brussels Effect, and now has a new book on digital empires that covers some of the same territory. I look forward to spending more than two minutes with your ideas. Anu and I have a friendly debate about whether the Brussels effect sometimes becomes the Brussels defect. One part of it is what you just said. Other countries are taking European language designed for a European legal system and putting it in a place where it doesn’t really work. But a more important problem, particularly with the AI Act, is they’re writing law that’s, I think, way too premature. They haven’t even really got a definition of what is AI. I’m a physicist. I’m not a lawyer. But when I was working on Capitol Hill, the first thing we did was get the definitions right, not just defining what you’re regulating so you can have a box, but defining what you’re not going to regulate. So I guess my question for anybody who wants to take it is, how do we avoid this problem of imposing these aspirational goals on a vague field of technology that will be totally different in 18 months?

Moderator – Luca Belli:
Thank you very much, Michael, for this. Excellent comment. As we have started with 10 minutes of delay, we might have a margin of 10 minutes at the end, because I saw already a, yes. So we can rush with the last round of very quick presentations. So the next one will be Kamesh Shikhar.

Kamesh Shekar:
Thank you so much, Luca. And so, yeah, so I guess we have very few time to rush through the paper. But our intervention, our chapter, talks a little bit or answers some of the questions that the first panel spoke about also. So I’ll very briefly touch upon the three things that we do in our paper and what’s the background to that. As we all know that there is already a lot of buzz around the uncertainty over the AI regulations and AI technologies itself. And just a response to that, we still see a lot of frameworks happening at the various levels, right? Like you use the strategy documents and legislations cropping in here and there. But one question, very important question that we try to answer through our chapter is that, OK, tomorrow we bring a framework and we say that AI developers have to follow a certain amount of principles. Will everything become fine, right? And that’s where our paper comes in and asks, what about AI deployers? What about impact population who also interface with the technology at this moment, right? So AI technology used to be B2B. But now it’s also B2C, the generative AI technologies where we also interface with it and use it. So it’s that specific, that’s this very specific question that we’re trying to ponder over and suggest a framework called as a principle-based framework at the ecosystem level where various responsibilities are divided across various stakeholders within the ecosystem such that collectively. or collaboratively, we can make the entire ecosystem of artificial intelligence utilization safer and responsible. So how we went about doing this is that first thing is we tried to like map the impact and harm across the AI lifecycle. At the different stages, let me give you an example and that makes it very clear, exclusion, okay? So if we talk about exclusion as the end impact of whatever is happening, adverse implication, it just doesn’t happen because one particular aspect has gone wrong. There are various aspects which come together at the different stages of the AI lifecycle and across this AI lifecycle, we all know that there are different players involved. So all of these implications come together and make the exclusion happen. So we went about actually mapping that and this also answers, kind of resonates with Melody’s point on what is the liability, where the liability or the responsibility lies with, right? Like we need to understand who and what they do. After doing this, obviously the organic progression is like what’s the principles that everybody has to follow. And this also answers somebody’s question from the online is that like the consensus building in principles. We have a lot of principles available out there on AI, right? So, but we need to now start having a conversation. Hey, you have those principles and I also think that this is the principle I resonate with. So I think that’s the starting point. Maybe that’s an answer to the question also starting for a point for like at the international level, everybody coming together and like discussing about something collaborative and like legitimate for the international level. So we map all the principles and then like the third point is like operationalization, which is also like spoken. In operationalization, like what we went about doing is that like very specific, you know, gap that we are trying to fill is bring out the differences in the principle at the different like, you know, stages. And show that, like, hey, when we talk about, giving an example again, like, human in the loop as a principle, we keep talking about it. But at the operationalization level, when we come to planning to designing stage, human in the loop means differently, right? Like, it means you have to engage with the stakeholders or, like, you know, you have to bring the impact population into the room and et cetera and stuff. But same principle means differently at the, like, you know, other levels. So that is what the difference that we bring. Thirdly, you know, final point, like, before I conclude, is that is also now it’s the impact. We’ve mapped the principles, operationalization, and also now it’s implementation, right? And that goes ultimately to your governments. So there what we try to do is that, like, you know, look at a little bit from also like somebody mentioned, the last speaker mentioned that, like, you know, there is a market in Brazil for generative AIs. That’s the case for any developing country. So we need to balance that approach and see, like, not necessarily regulation has to be, you know, compliance based, right? Like, it can also be market based. How can we enable the market? So we are trying to, like, look in that way and, like, how to operationalize this framework into that market based mechanism where there is a value proposition which the, you know, businesses see. So this is what, like, we do in the paper. Yeah. I can take more.

Moderator – Luca Belli:
Fantastic. So at this time, let me thank all the last set of panelists for being very concise because I know that we have time constraints and I know that our tech support, they are so kind to give us five or ten extra minutes to finalize. And so let me give the floor now to Kazim Rezvi for his very short presentation. Thank you.

Kazim Rizvi:
Thank you so much. So I think just moving on from what Kamesh was talking about and, you know, we have two papers as part of this brief. And the second paper actually looks at, you know, mapping and operationalization of trustworthy AI principles. So while what we are doing… what Kamesh is saying in terms of the first paper is to sort of just come up with all the principles. Here we are sort of looking at certain sectors where we have to sort of look at understanding the synergies and conflicts with respect to these AI principles and how they’ll play out. So what we try to do over here is basically look at two areas. One is the finance and finance sector, and the second is a health care sector. And for these two sectors, we sort of come up with certain principles which we believe are critical for operationalization and to make sure that you are deploying trustworthy principles on the ground. So the paper has adopted an approach where it has looked at the technical and non-technical layer of AI. Within the technical layer, there is basically looking at different implementation solutions and how do you integrate these solutions with the responsible AI framework which we are developing. And the non-technical layer is basically sort of exploring strategies to sort of look at responsible implementation and ethical directions, et cetera. And all of this has been done through a multi-stakeholder approach. So we’ve advocated for a multi-stakeholder approach towards mapping and operationalization of AI principles. I think that’s something which we’ve been very clear about because we believe that you need a different set of stakeholders. You need the industry, the civil society, the academia, the government, et cetera, to sort of come up and look at how these principles will be operationalized for these two particular sectors. So we’ve spoken to experts in these sectors. We will also be sort of looking at certain discussions and see if some of these principles can be implemented effectively. And also to look at domestic coordination of regulation. So what we’ve also identified that AI, there is no specific act or law which governs AI. in India. So we have tried to come up with some sort of principles where you have the privacy law, you have a IT law which regulates the internet, you have different other laws which are coming up. How do they all work together? And how do they harmonize with each other with respect to regulation of AI in the future? So at one level, we are talking about domestic coordination. We’re not saying that, look, you have to regulate it sort of very stringently. But existing internet laws, how can they be harmonized? And the second is around international coordination. And I think that’s where, even previously, what Kamesh was talking about. And this is something which we’ve looked at is, at a global level, can we come up with some sort of models or frameworks to identify implementation? And then looking at these two sectors, health care, what is required, what are the principles which are key for health care which may not be necessary for finance sector? So that kind of mapping and operationalization is what we are doing right now. And then we’re also sort of looking at alternative regulatory approaches. So we are talking about market mechanisms, private-public partnerships, even looking at consumer protection within the developers. And how do we ensure safety, et cetera? So I think that’s something which we’ve looked at as well. And the idea is to look at deployment and implementation and testing it with one of these two sectors.

Moderator – Luca Belli:
The technical support is telling me that we have to move fast without breaking things. And so let me pass the floor to the last two speakers that will very fast expose their brilliant papers. So Claudio, Chico, you have a presentation. We have a very last presentation, and then a very last other presentation. If we can have this online, can we have? of the presentation. Maybe in the interest of time, let me. Yes, we have a presentation. Excellent.

Giuseppe Claudio Cicu:
So konnichiwa to everyone. And thank you, Mr. Thank you, Professor Bailey, for the introduction. I will very quickly dive deep into the relationship between artificial intelligence and corporate governance. Because as we all can see, artificial intelligence is reshaping our social, economical, and political environment, but also the corporate governance framework and the business processes are being affected by this technological revolution. Indeed, we are hearing about, for example, an appointment of artificial intelligence as directors that, legally speaking, I really doubt about it, but it’s happening in this time. So I have the feeling that we are going toward a new form of corporate governance that I have labeled as a computational corporate governance model, where artificial intelligence is an auxiliary instrument, or can maybe substitute the directors in the main function of corporate governance bodies, like, for example, strategy setting, decision making, monitoring, and compliance. So I have put a question to myself. We are going toward a technologization of the human being. I’m afraid of it. So as we know, we have a lot of problem in this kind of revolution. For example, the main problems that I’m working on in my paper is about the transparency. the accountability problems. And so for this reason, I tried to create a framework to allow the corporation to implement artificial intelligence in an ethical way in the corporate governance and business processes. My proposal that I named AI by Corporate Design Framework is grounded on the business process management, the field of management that can allow to analyze, to improve the processes in the corporation. And it is just posed with the AI lifecycle. And I divided both of them in seven steps to combine them to control artificial intelligence and enhance the principle of human in the loop and human on the loop. Of course, this model is also grounded in a human right, a global AI framework. And it’s based also in a privacy by design principle that states that it’s better prevent than react. Under the corporate governance, and quickly, I’m going to conclude, I propose a creation of a new committee, the Ethical Algorithmic Legal Committee, composed by a mix of professionals, like, for example, not only directors, but consultants that can act as a filter between the stakeholder and the output of the artificial intelligence. And also, I conclude with asking, not only to myself, but also to you, if it’s not the case that the legislators start to think about that technology as a corporate dimension, as happened in Italy, for example, with reference to accountability, organization, administration, my answer is yes. I think that is the time.

Moderator – Luca Belli:
Thank you. Fantastic and thank you very much for doing this excellent and detailed presentation in literally three minutes. Excellent. So we have now the final one. The one by Lisa, the last but of course not least. Please Lisa the floor is yours.

Liisa Janssens:
Thank you very much. My name is Lisa Janssens and I will very shortly explain where I am from because that’s also connected to the paper that I have written. I’m a scientist at the Department of Military Operations at the Dutch Applied Sciences Institute and I have a background in law and philosophy and I combine those two disciplines in my projects and I work together with mathematicians, engineers and I’m very proud to say that because it’s very difficult to work actually interdisciplinary together. So my job at the Institute, I’m now there for my seventh year and since two years it actually works out to work together and how I’m doing that, I found a way how to work together. So scenario planning, scenarios, military theater scenarios can be a platform that you meet each other from different disciplines. You stay on your own discipline but you can meet each other in one focus point of problems and how to solve problems from the technical point of view and how to connect those two. For example rule of law mechanisms because I am trying to seek for new requirements from the from the point of view of rule of law tenets because we can find an agreement within the United Nations but also in the European Union and also in the USA that the rule of law matters and is very important to adhere to. So the rule of law for me is about good governance and if I connect it to AI it’s about good governance of AI. How do we do that? So, I am looking for new technical requirements informed from multiple disciplines, law, philosophy and technology and I found a way how to work together and that is a scenario that is like a very good informed operational scenario and you can even test the new requirements. For example, that’s very ambitious but we are going to try to do that in a NATO project via Digital Twins or even maybe a real setting, an operational test environment. Thank you.

Moderator – Luca Belli:
Fantastic. Fantastic. And so now, as everyone has been so patient to stay here until the end of the day, it’s 6.36 so you all deserve a free complimentary copy of the book and the first that will run here will deserve it. The other ones will have a free access PDF that you can download already on the page of the Data and AI Governance Coalition. I repeat, you can also use the mini URL bit.ly slash DIG23 or DIG2023. You have both. You can use the form to give us feedback. You can speak with us now to give us feedback. We can have a drink now together so that we can give us feedback. All feedback is very welcome. And thank you very much. Really thank you very much especially to the – I don’t want to diminish the importance of the first two sets of panelists but this last one has been fantastic and thank you a lot to the technical teams. You are excellent and you have done tremendous work. Thank you very much.

Armando José Manzueta-Peña

Speech speed

171 words per minute

Speech length

1779 words

Speech time

625 secs

Audience

Speech speed

167 words per minute

Speech length

1002 words

Speech time

361 secs

Camila Leite Contri

Speech speed

182 words per minute

Speech length

1243 words

Speech time

410 secs

Gbenga Sesan

Speech speed

187 words per minute

Speech length

1035 words

Speech time

333 secs

Giuseppe Claudio Cicu

Speech speed

133 words per minute

Speech length

509 words

Speech time

230 secs

Jonathan Mendoza Iserte

Speech speed

134 words per minute

Speech length

791 words

Speech time

354 secs

Kamesh Shekar

Speech speed

188 words per minute

Speech length

956 words

Speech time

305 secs

Kazim Rizvi

Speech speed

182 words per minute

Speech length

704 words

Speech time

232 secs

Liisa Janssens

Speech speed

153 words per minute

Speech length

373 words

Speech time

146 secs

Melody Musoni

Speech speed

158 words per minute

Speech length

1579 words

Speech time

600 secs

Michael

Speech speed

167 words per minute

Speech length

428 words

Speech time

154 secs

Moderator – Luca Belli

Speech speed

168 words per minute

Speech length

3484 words

Speech time

1244 secs

Smriti Parsheera

Speech speed

224 words per minute

Speech length

890 words

Speech time

239 secs

Wei Wang

Speech speed

157 words per minute

Speech length

839 words

Speech time

321 secs