Governing Tech for Peace: a Multistakeholder Approach | IGF 2023 Networking Session #78

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

As a representative of a youth organisation operating under the remit of the United Nations Department of Political and Peacebuilding Affairs (UNDPPA), Manjia showed a keen interest in the role young people might play in the ‘Tech for Peace’ initiative. Her focus on youth involvement stems from the conviction that engaging this demographic is not merely beneficial but crucial to the success of the undertaking. Her interest extends beyond participation itself: she also asked about established best practices and forthcoming plans for fostering youth engagement in ‘Tech for Peace’.

The term ‘Peace Tech’, central to this discourse, warranted clarification, which was offered by Access Now’s director of campaigns and rapid response. She observed that the term serves as an umbrella reference encompassing a diverse range of domains, from the advancement of human rights to social and environmental justice. Her commentary extended beyond terminological elucidation, presenting critical perspectives on how ‘Peace Tech’ could be interpreted and utilised.

At the heart of these insights was a concern about ‘techno-solutionism’: an excessive reliance on digital tools and technology to address inherently complex human problems. Cautioning against such overreliance, she emphasised the continued need for traditional peace-building efforts and advocated embedding technical discussions within broader dialogues on peace-building and related sectors, so as to guarantee a comprehensive approach.

These viewpoints speak to several United Nations Sustainable Development Goals (SDGs), notably SDG 16 (Peace, Justice and Strong Institutions) and SDG 9 (Industry, Innovation and Infrastructure). The attention to digital rights and the wariness of ‘techno-solutionism’ underscore the complex relationship between technology and peace-building, shedding light on the concurrent requirement for innovation and circumspection.

Evelyne Tauchnitz

The discourse principally revolved around three domains: peace, human rights, and the influence of technology. Central to the discussion was Evelyne’s comprehensive definition of peace. She emphasised that peace extends beyond the mere absence of direct or physical violence. Peace should also incorporate the eradication of structural and cultural violence, forms of oppression that can prevail in society even without overt conflict.

Evelyne underscored the integral role that human rights play in shaping our understanding of peace and ‘peace tech’. She identified human dignity and freedom as the cornerstone values that bind peace and human rights together, and argued that these principles ought to form the bedrock for evaluating peace technologies. Her central argument was that technological advancement should not infringe upon basic human rights under the pretext of facilitating peace: if a technology transgresses human rights, its categorisation as ‘peace tech’ becomes problematic.

Evelyne also expressed concern that not all technologies labelled ‘peace tech’ necessarily harmonise with the values of peace and human rights. This suggests a need for more stringent scrutiny of such technologies and for robust standards for defining and categorising ‘peace tech’.

The relationship between human rights and peace was further analysed: respect for human rights is a necessary but not a sufficient condition for peace. Peace was envisaged as a broader concept that transcends the legal enforcement of rights, whereas human rights were characterised as narrower in scope and subject to legal enforceability.

An additional perspective broached was the potential paradigm shift in our perception of peace brought about by digital transformation. Evelyne posited, optimistically, that digital technologies could be leveraged to forge a more comprehensive peace. However, she also underlined the need for prudence, citing the example of social scoring systems, deployed in China with the stated aim of rebuilding societal trust.

On the whole, the discussion reiterates that while technology and human rights are pivotal in shaping peace, vigilance is required to ensure that the pursuit of peace does not inadvertently engender inequality or violate fundamental human rights.

Moses Owainy

Moses Owainy, CEO of Uganda’s Centre for Multilateral Affairs, brings to light the necessity for a more diversified dialogue in peace tech, asserting the importance of integrating a multitude of global perspectives. Framing his argument around Sustainable Development Goals (SDGs) 16 and 9 – peace, justice and strong institutions; industry, innovation and infrastructure – Owainy stresses the need to embrace a wide range of viewpoints when deliberating peace tech initiatives.

Owainy highlights a key consideration: the varying impact of peace tech across geographical contexts. He underscores that the results of these tech endeavours in countries such as Uganda, Kenya or Nigeria differ noticeably from their deployment in more advanced economies. This accentuates the pressing need for context-sensitive, adaptive approaches to peace tech across diverse socioeconomic landscapes to ensure equitable outcomes.

Owainy also opens a discussion on the very terminology of the peace tech dialogue, criticising the commonly used term ‘global peace’. Noting its inherent ambiguity, he advocates instead for the term ‘international peace’, asserting that it better embodies the collaborative efforts of various states and multi-stakeholder groups in their shared pursuit of peace.

In conclusion, Owainy’s insights point towards a more inclusive and adaptable narrative in peace tech, suggesting a shift in terminology for clearer understanding and collaboration. His observations also underscore the need for nuanced, context-specific application of peace technologies, acknowledging regional differences and the necessity of adaptability.

Marielza Oliveira

Marielza Oliveira serves as Director for Digital Inclusion, Policies and Transformation at UNESCO, guiding the organisation’s work on digital inclusion and policy transformation in support of the broader Sustainable Development Goals (SDGs) of fostering innovation and reducing inequalities. Her work centres on the protection of two fundamental human rights: freedom of expression and the right to access information.

A crucial aspect of this role is her commitment to UNESCO’s overarching mandate: building peace in the minds of men and women and enabling the free flow of ideas. This aspiration is pursued through every type of media, underlining the value UNESCO accords to the digital ecosystem.

To ensure that the regulatory approach to internet platforms also respects this mandate, UNESCO has emphasised the necessity of regulation that upholds human rights. This conversation was furthered at its recent ‘Internet for Trust’ conference, demonstrating the organisation’s support for internet regulation that is sensitive to human rights.

Moreover, UNESCO intends to build stakeholders’ capacities to counter digital exclusion and inequality, enabling better participation in the digital ecosystem and striving for a more inclusive internet environment. These capacity-building efforts align strongly with the SDGs on industry, innovation and infrastructure, as well as reduced inequalities.

Significant results towards these aims are being achieved through multi-stakeholder approaches. These alliances, involving tech companies and government agencies, aim to maximise opportunities and foster a diverse cohort of partners. Examples include the ‘AI for the Planet’ initiative combating climate change, the United Nations Technology Innovation Lab using technology in innovative ways to enhance peace-building, and ‘Social Media for Peace’, a project designed to combat online polarisation.

UNESCO stresses the importance of ensuring that internet regulation does not infringe upon the rights to freedom of expression and access to information. The organisation is currently planning to release guidelines encouraging an environment of regulation that emphasises processes rather than content, promoting transparency and accountability. This approach counters potential restrictions that could limit the freedom of expression for journalists, activists, and others.

The analysis underscores UNESCO’s leadership in utilising technology and promoting digital inclusivity to achieve significant societal benefits, and its commitment to embedding these principles within its operational structure and processes. Its firm stance on a balanced approach to internet regulation, safeguarding human rights whilst promoting accountability, underpins its dedication to peace, justice and strong institutions. Such commitment reflects the close alignment between UNESCO’s operations and the wider SDGs.

Mark Nelson

Co-directors Mark Nelson and Margarita Quihuis are spearheading significant advancements in peace technology at both the Peace Innovation Lab at Stanford and the Peace Innovation Institute in The Hague, striving to commercialise their peace technology research. This pioneering approach, utilising modern technology, aims to reshape how peace is perceived and established globally.

A fundamental component of this research is the use of sensor technology. Over the last two decades, sensors capable of detecting and monitoring human social behaviour have advanced dramatically, transforming how human interactions are observed and understood in real time. This capability underpins the development of detailed, high-resolution peace metrics.
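To make the idea concrete, here is a minimal illustrative sketch in Python of the sense–sensemake–actuate cycle Nelson describes later in the session transcript. It is not the Lab’s actual system: the event stream is simulated, and the metric, threshold and ‘nudge’ are hypothetical stand-ins.

import random
from collections import deque

def read_interaction_event():
    # Stand-in 'sensor': +1 for a prosocial interaction, -1 otherwise.
    return 1 if random.random() < 0.6 else -1

def peace_metric(window):
    # Toy high-resolution metric: share of prosocial interactions in the window.
    return sum(1 for event in window if event > 0) / len(window)

def actuate(score):
    # Stand-in 'actuator': a persuasive nudge rather than a coercive measure.
    print(f"metric={score:.2f} -> deploying de-escalation prompt")

window = deque(maxlen=50)                       # rolling real-time sample
for _ in range(500):
    window.append(read_interaction_event())     # sense
    if len(window) == window.maxlen:
        score = peace_metric(window)            # make sense of the data
        if score < 0.5:                         # needle moving the wrong way?
            actuate(score)                      # respond, then keep sensing

Because each intervention’s effect shows up in the very next readings, theories about which behaviours actually help can be tested continuously and at scale, which is the point Nelson makes about closing the loop.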

The concept of ‘persuasive peace’ lies at the heart of their innovation, favouring influence-based strategies over conventional coercive tactics. By crafting customised approaches suited to varying individual circumstances, they are redefining how peace is achieved and maintained.

An essential element of their endeavours is establishing a link between peace technology and capital markets to create a viable ‘peace finance’ investment sector. They aim to capitalise on the intrinsic but often unquantified value of peace, setting the stage for a new era of peacekeeping and conflict-resolution strategies.

To summarise, Nelson and Quihuis’ work embodies a progressive approach to advancing the principles of SDG 16 (Peace, Justice and Strong Institutions) and SDG 9 (Industry, Innovation and Infrastructure), demonstrating the transformative power of technology in the pursuit of peace. Their efforts are reshaping our understanding of peace and promoting peace technology as a crucial aspect of infrastructure development and peacekeeping strategies.

Moderator

The debate primarily delves into the interplay between technology and peace, focusing on technology’s contemporary role in fostering peace across varying social contexts, as evidenced by initiatives such as the Global Peace Tech Hub. This initiative champions the responsible use of technology and centres on instigating cross-sector dialogue and sharing enthusiasm for what technology can contribute to the attainment of peace.

The discourse also recognises the dual aspect of technology as both an agent of positive change and a potential risk. This dual nature of technological advancement was brought into the spotlight, with positive influences such as the enhancement of democracy and increased access to services weighed against negative implications such as the spread of misinformation and the potential to catalyse societal polarisation. The panel unanimously agrees that strategies must be formulated to mitigate these risks if technology’s full potential for promoting peace is to be realised.

UNESCO’s efforts to encourage climate action through projects like AI for the Planet are highlighted as an example of how collaboration among diverse parties in the technological arena can produce meaningful outcomes. The use of technology to counteract climate change receives a positive appraisal, especially when it supports initiatives that allow various actors to make a pronounced difference without substantial, long-term commitments.

Additionally, the dialogue underscored the significance of the partnership between UNESCO and the European Commission, embodied in the Social Media for Peace project. These efforts, seen as integral to initiatives such as the Internet for Trust guidelines, aim to regulate internet platforms not through content, but through process, asserting the importance of transparency and accountability in this space.

An essential reflection in the debate is the profound transformation peace technology has undergone. Innovative advancements enabling real-time analysis of human behaviour and interactions have led to a significant shift from coercive peace mechanisms towards more persuasive peace technology. This evolution has been lauded as a historic shift for the human species, opening the potential for fostering peace through persuasion rather than coercion.

The role of youth in propelling human rights and peace movements, despite the inherent risks associated with digital activism, is highlighted. The willingness of young people from countries including Thailand, Sudan and the USA to remain at the forefront of these movements, despite threats such as spyware attacks and concerns over digital permanence, is noted.

Also significant is the sense of mistrust within certain societal factions. Phrases such as ‘peace tech’ and ‘tech for good’ are seen as potentially contributing to a trust deficit, with neither the technology sectors nor large humanitarian agencies being considered entirely trustworthy when it comes to personal data.

Finally, with the array of perspectives on peacekeeping through technology, the shared consensus was the need for continuous dialogue within the tech community. Seen as crucial to dismantle existing silos and establish a common understanding of technology’s role in peace, this call for consistent multi-stakeholder dialogue served as a central theme throughout the discussion. The breadth of opinions within the peace tech community, from seeing technology as a tool for peace to viewing it as a new battleground, or even a threat to peace, further underlines the importance of these dialogues.

Speaker

The concept of peace transcends the domains of security, extending to incorporate values of freedom and equality. It constitutes a comprehensive vision for societies’ development that endeavours to eliminate all forms of violence – direct, structural, and cultural. However, this idealistic vision can face threats from inappropriate use of technology. Instances where technologies infringe upon human rights pose a significant issue for peace and security on a global scale. Regrettably, the notion of peace can be misappropriated as a brand for technologies that may not contribute to harmonious societies.

Concurrently, recognising the potential of technological advancement, the United Nations is prioritising digital transformation and technology. The UN Secretary-General has advocated vigorously for this priority, particularly within the realm of cybersecurity; a testament to this effort is the creation of the Office of the Technology Envoy, established to champion these changes on a grand scale. There is also a burgeoning understanding of the need for the UN Security Council to incorporate the monitoring of digital and cyber roles into its mandates, signalling the significance placed on technology in global security deliberations.

Nevertheless, amid these advancements, the right to access the internet and freedom of expression remain substantial challenges in the digital era. Access Now, a human rights organisation focused on digital rights, champions these causes. Incidents such as the deployment of spyware during the Armenia-Azerbaijan conflict show how tech misuse can affect peace negotiators and exacerbate conflicts. Furthermore, the role of social media platforms in content governance can have far-reaching implications in times of crisis, a stern reminder that technology can be a double-edged sword. In extreme situations, internet shutdowns have inflicted harm on human rights and obstructed conflict resolution.

Indeed, human rights are a necessary condition for peace. Nonetheless, peace as a concept demands a more comprehensive understanding, exceeding human rights alone: it is not exclusively about respect for rights but also significantly involves the mechanisms and ethics by which political decisions are made.

The digitisation of society could conceivably impact our comprehension and notion of peace. Young individuals are poised to play a key role within this transformative process. Impressively, youth across the globe are already spearheading movements championing human rights and peace, frequently risking their personal safety in pursuit of these causes.

Nevertheless, a pronounced trust deficit exists towards tech sectors and large humanitarian agencies handling personal data. Passive monitoring via ubiquitous sensors is viewed as a threat, illuminating the public’s discomfort with potential breaches of privacy. This underlines the challenge of striking a balance between leveraging technological advances and safeguarding human rights and security. Thus, whilst digital advances offer vast potential for societal development, it is crucial to remain cognisant of their inherent risks in order to maintain peace, justice, and strong institutions.

Session transcript

Moderator:
I see a few of my participants I don’t know if they are able to share the video or audio to talk during the session or interact because at the moment I’ve seen only Mark Nelson was able to activate his video and I’m not sure if they technically can also activate their video or not. How are you in Tokyo? All good? Do you start or? Well I can start anyway this is gonna be a very of course informal session. It’s a networking session. How many of you have ever attended a networking session before? No? Good. So that’s what we’re gonna start sharing this experience together because I have no clue what a networking session is but I guess it’s gonna be a very informal moment and to discuss, share enthusiasm, interest around the tech and peace, whatever that means. Exactly something that we could I hope we’re gonna come out with some common ground on this specific topic. So I’m Andrea Calderaro. I’m Associate Professor of International Relations at Cardiff University also affiliated at the European University Institute with the Global Peace Tech Hub. Today we have a series of colleagues that participate in this session from remote. Michele Giovanardi in particular who is the coordinator of the Global Peace Tech Hub and the leader of this initiative. Then we do have online I saw Mark Nelson as director of the Stanford Peace Innovation Lab. Evelyne Tauchnitz, Senior Researcher at the Institute of Social Ethics. And Peter of course who’s in the lead of Access Now. So and then of course we have other colleagues here in the room and then of course I give you the opportunity to introduce yourself. Michele do you have anything else to add? Yes so first of all we have a few more online participants but I’m not sure they can activate the camera because of the structure of the online session so they didn’t have the permissions but they’re listening in and if they want to interact of course there’s a chat function and of course the sound I guess they can intervene and talk. As you anticipated the idea is to really having a moment for us we have been running this networking session for the last three years at IGF and it’s always a space to share some of the ideas and to learn about each other’s projects and work around technology and peace. So maybe we cannot see the online audience from sorry the in-presence audience from online so it would be nice maybe to start with a round of introduction or I can introduce briefly what is the Global Peace Tech Hub and maybe so that we can have a quick reaction also after that. So up to you if you want to do a first round of introductions and then maybe we can go more into the topic. Go ahead first of all the audience anyway we are two more people in addition to what you see on the main stage so it’s gonna be a very short round of introduction. So go first with the Global Peace Tech Hub and then we’re gonna keep this of course as much informal as possible. Great okay so just like a few words of what is the Global Peace Tech Hub what is global peace tech. We also have Marielza Oliveira joining online so if you can please give her co-host rights so that she can also activate a video and participate that’s for the organizers. So here is I just have three slides so nothing much just to understand what is global peace tech. We define global peace tech as a field of analysis applied to all processes connecting local and global practices aimed at achieving social and political peace through the responsible use of frontier technologies.
This is very kind of it sounds very official but it’s something that we can just start with as a framework. So basically what’s the basic idea of the Global Peace Tech Hub. We know that emerging technologies bear great opportunities for positive change but at the same time they have threats and risks that we need to mitigate. So the idea is that in the 21st century we’re battling between these two opposite forces and this is a common trend that we can see with different technologies and with different functions. We know how the hope of the internet to be this force for democracy turned into a force for online disinformation, polarization, violence. We know about the opportunities of digital identities in terms of giving access to finance and to services to people and marginalized people but also about all the privacy and data ownership and cybersecurity issues that come with that. We know about the great potential of telepresence to build empathy and trust but also about the risk of hate speech and deep fakes. We know about the use of data and potential of early warning response systems but also all the issues related to how these data are managed and secured. So this can go on and on and the list is very long. So the question is this one. So how can we at the same time mitigate some of the risks related to these emerging technologies but at the same time invest in those initiatives that are using technology for peace and there are many projects that we mapped more than 170 projects that are using technology in different ways to enhance different functions of peace at the global level. So on the one side is about mitigating the risks with the mapping and sorry with the regulation, capacity building, education, good governance. On the other side is about mapping these peace tech projects and understanding and assessing their impact on peace processes and also trying to shift public and private investment in these peace tech initiatives through partnerships that we can build between different stakeholders. Hence the topic of this networking session. So a multi-stakeholder approach to peace tech, to governing tech for peace and first we would like to know who is on the call, what you’re doing, what you think technology can contribute to peace and second after we get to know you and so we can know each other. We would like to know what you think of the topic of the day. So how can different actors come together, actors from academia, governments, public sector, think tanks, NGOs come together in this puzzle and create synergies to achieve this goal, these peace goals through the responsible use of technology. So I will give the floor back to you and maybe we start with the first round and then we see where the conversation goes and I really invite online participants somehow, chat, audio, video to interact as well and give us their take about this and introduce themselves as well. Super, okay thank you Michele for providing such a good overview about the variety of, I mean the perspective that you’re privileged in looking at the relationship within tech and peace. Yes I will let, Evelyne maybe you can introduce yourself and provide also your insight on the topic of what is exactly your perspective on

Evelyne Tauchnitz:
tech and peace. Thank you very much Andrea. Yes my name is Evelyne Tauchnitz. I’m a senior researcher at the Institute of Social Ethics at the University of Lucerne, Switzerland and my research focuses on digital change, ethics, human rights and peace of course, what we’re talking about today. So if we’re really talking about what is peace tech, I think that’s a bit the guiding question here today. We first have to consider also what do we understand by peace and that is a really challenging question by itself and it’s not the first time we’re discussing that but we had a conference last year also where we dedicated some thoughts on that and I don’t think we reached a conclusion. It’s a really difficult question but I can tell you how I’m using this concept of peace in my own research. So first of all I think that peace really is a vision, like it can show us in which direction we would like to develop together as humanity but also what kind of societies we would want to be living in and although there will never be absolute peace in all corners of the world or even in any society, we still need to know a bit what it would look like and there I find it really helpful first to look at this positive definition of peace meaning that it should not only mean the absence of direct violence but also of structural violence and cultural violence so there should be no injustices, there should be equality, no poverty, access to health care, access to education, so trying to eliminate the causes that then often give rise to direct violence. So it means really that even if you have no direct violence there is still, usually in any society you would still have some forms of injustices, some forms of discrimination which leads me also to the third form of violence, cultural violence, which legitimizes the other forms of violence like for example if women are not allowed to go to school this is both a mix of structural violence when they cannot access the school physically but it’s also a form of cultural violence. Why is it not boys that cannot access the school? Why is it girls? So probably there’s never going to be any society free of this violence but so when we talk about peace tech I think it’s really important to keep in mind that it’s more than security only because security is a very narrow definition of peace like for example if we think about surveillance technologies they might increase security but then again there are different problems of ambivalence, meaning our personal freedom might be reduced because of the data that has been collected about us and we know that or even if it’s in a conflict setting we might not meet certain people anymore like you would not have big assemblies of people because they would be afraid that one person being there also one contact might be suspicious and then I’m meeting that person so rather I should not meet too many people and if it’s a bit of a big crowd so more likely it would be that somebody is there who doesn’t have a good record so to say. So it really influences also democracy it influences freedom of expression it influences how we move even like how do we behave in public squares and I think it’s really important to just look at this from an ethical perspective like when we look at ethics you might have good purposes for instance but still it could have negative consequences as in the case of surveillance you want to increase security but at the same time it might impact on personal freedoms or civil and political rights.
So in my own research, I think we need some kind of point of reference when we talk about peace, like what kind of peace do we want, what kind of peace technologies do we want, and I’m using human rights there really as a baseline. So if you have technologies that violate human rights I find it problematic to say they are peace tech, for example because they’re increasing security, but peace in my understanding is really based on these values of human dignity and freedom as well, and human dignity and freedom are two basic values that connect both peace and human rights. And that’s not by coincidence of course but it’s historically linked, because the Universal Declaration of Human Rights was adopted after the Second World War, after this devastating experience. And well, I don’t know really if I should go on, but that’s really a topic of discussion maybe that I would like to raise rather than me speaking: what do we understand by peace or peace tech concretely, or what kind of technologies should we gather under this term, like which technologies really merit to be called peace tech? Because sometimes I see a lot of almost advertising or marketing efforts to use this brand of peace tech, but if you look at it more closely it’s maybe not so much peace tech as it’s supposed to be. So yes, that’s so far my input or something I would like to discuss with you, thank you very much

Speaker:
fantastic thank you and when it comes of course to human rights, Access Now has been at the forefront of a variety of human rights battles and you Peter have been also involved in a series of processes in the UN context so you could respond on your yeah your perspective absolutely thank you yeah well I feel like I want to network with both of you after this I think this is getting at the point of the session and yeah I really appreciate your remarks for two reasons I think there’s that Martin Luther King quote that, to paraphrase it badly, peace isn’t the absence of conflict or the absence of war but the presence of justice which I think yeah leads to accountability and then thank you also yeah for recognizing the need for this tech to be respecting of human rights that’s certainly where we come from at Access Now we are a human rights organization that got its start during the Green Movement in Iran and have since expanded globally working at the intersection of human rights and new technologies but I think around 2018 we recognized that increasingly we were working in contexts characterized by conflict, fragile and conflict-affected states and then the communities who were in the midst of those, fleeing those or you know trying to work in the diaspora to end those conflicts and I recognized this and have built it into our work plans most directly on our international organizations team adding a senior humanitarian officer who brings experience from that sector but you know this is really just an acknowledgement that the digital rights that we’ve worked to protect are most at risk in conflict-affected persons and communities and that I think from you know monitoring and prediction to mitigation to resolution to accountability a lot of the digital tools are infused and becoming essential to each of those steps I think so I can mention a few programs that we have to make it more practical we have tracked internet shutdowns intentional disruptions of access to the Internet since the Egypt shutdown in 2011 and have brought this narrative this terminology and this charge to end shutdowns through our #KeepItOn campaign brought that to the UN and seen say a lot of positive resonance in a number of UN bodies and among states that you know recognize internet shutdowns you know go beyond the pale that increasingly these are linked to conflict events and times of unrest but that they really levy a number of harmful effects on human rights and increasingly we would say to the resolution of those conflicts themselves and so that #KeepItOn campaign continues alongside our work on content governance so in some ways the flip side but I think this came out during a lot of the conflict in Ethiopia was how overrun a lot of social media platforms in particular were with incitement content and the unresponsiveness or you know the perceived indifference or even ignorance of those tech companies to deal with what was happening on their platforms of course in Myanmar as well led us to work with partners to create a declaration on content governance in times of crisis we’re following that up with a full report so you know when it comes to shutting down the Internet or you know allowing the proliferation of harmful content you know showing there’s a number of ways that freedom of expression is directly at the heart of I think a lot of these responses and interventions and then more recently we’ve tracked especially through our digital security helpline which is recognized
as a tool that DPPA and others use to refer people to. It provides free-of-charge technical guidance and advice to civil society, which we define pretty broadly, 24 hours a day and in 12 languages, and increasingly we’re getting reports to that helpline of really invasive targeted surveillance and spyware. And I mention this particularly because a recent report we wrote described the use of spyware in the midst of the Armenia-Azerbaijan conflict, which has flared up again recently, but we were actually able to detect infections on the devices of the actual negotiators of the peace on the Armenian side; the ombudsperson who was directly involved in talks, their phone was completely infected throughout. And so spyware is now a tool of war and of, you know, efforts to monitor peace negotiations. And finally, yeah, we are mapping now all of the tech that we can find being used in humanitarian situations, and we’ll definitely look at the mapping project that was presented just now on peace tech. I think with human rights, you know, we’re taking a human rights lens, where’s the human rights due diligence? What are the procurement processes? How do we determine, you know, whether this tech, you know, has it been analyzed openly? Is it human rights respecting, as you said, I think. So, you know, that’s the kind of work that we do more directly in pursuit of our mission to help communities and people at risk, and then we try to bring lessons from that frontline work to the United Nations through the OEWG cybersecurity process, and I’ll stop talking soon. But yeah, I’m happy to talk more on that later, and I’m joined by my colleague, Carolyn Tuckett, who runs our campaigns and rapid response work. Thanks. Super. Thank you. I do have actually a two-finger question, and sorry, I know that we should keep this roundtable going on. But I mean, since you are following all the UN processes, and since the UN is, I guess you’re there because we know that the UN is supposed to be this organization ensuring peace, so yeah, what do you think, is the UN moving in the right direction? Of course, the UN Open-Ended Working Group is a process that I’m following as well, and I’m not sure whether we can feel satisfied about that. Well, yeah. I mean, it’s undeniable that from the top down, the Secretary-General has put digital transformation and tech and cyber at the top of his agenda. You’ll see that by creating the Office of the Technology Envoy, and we, for our part, have been pushing the UN Security Council to integrate monitoring of the role of digital and cyber on its mandate and on the situations that come before it. And we haven’t seen much evidence, well, we don’t know a whole lot about what happens at the Security Council because it is so opaque by design, especially to civil society, but we’ve not seen really dedicated talks there in ways that could be helpful. Of course, they’re going to disagree over Article 51 and the right of security and offensive and defensive measures in cyberspace, but how about just understanding the ongoing situations at the council that they look at on Sudan, on Yemen, on Colombia? How is tech being integrated and influencing those conflicts, and what role could it play in bringing about some resolutions? So, yeah, we have a focus program on the Security Council now, and we invite others who monitor that body, as well as the first committee’s work. And then, you know, there are the cybersecurity and the cybercrime processes continuing right now.
The Cybercrime Treaty looks like it’s going to come imminently, and it’s been really interesting to see how they scope that, how wide of a lens they take on what constitutes a cybercrime. It’s certainly going to be relevant for people involved in peace tech. And yeah, I mean, I think the Summit of the Future finally does include the New Agenda for Peace, which is supposed to be where I think they’ll speak to cybersecurity alongside complementing the Global Digital Compact. So those are both meant to be signed and delivered in September 2024. So there’s a couple of things to look at. Super.

Moderator:
So, going back online, Michele, you might have a better overview about who’s online than I do. Yeah, no, I just wanted to say that maybe before, since we have just one hour and we are partway through and we have some people, just to check, you know, who’s in and who’s on the call, who’s in the meeting, both there and online, maybe we do a really easy round of who we have in the room, and then we go through more to the answering questions and go deeper into the discussion, because I’m just afraid that if we go one by one, then we will not get to the end, and I would also like to get a sense of the online audience, who is online. So if you agree, maybe we do just like a couple of minutes each, just a round to see who’s around, and then we go back to the more in-depth discussions. What do you think? Hello? Hello? Yeah. So, yes, no, no, please, Mark. So let’s make this also, let’s interact also with the online audience. Mark Nelson, director of the Stanford Peace Innovation Lab, might have some insight on this. Michele, did you want me to speak while people are doing little intros in text, or should I wait for the? Let’s see if we can do just who you are and what you do, like in one, two minutes and do this round first, and when we’re done, we go back to you and Marielza, and we get some more in-depth perspectives. Yeah. Okay. Happy to.

Mark Nelson:
So very quickly, my name is Mark Nelson, with my colleague and partner, Margarita Quihuis, I co-direct the Peace Innovation Lab at Stanford and also our independent institute in The Hague, the Peace Innovation Institute, where we are working to commercialize our research in peace tech. And I focus on the measurability that is possible now because of technological advances, so peace metrics that are possible in very high resolution in real time, and how that connects the emerging space of peace technology to capital markets and the potential to create peace finance as an active investment sector, and we’re working very hard on that last part. So that’s the quick overview. Thank you. So I see Marielza Oliveira online. Yes. Hello, everyone.

Marielza Oliveira:
Marielza Oliveira from UNESCO, I’m the Director for Digital Inclusion, Policies and Transformation, which is a division within the Communications and Information Sector of UNESCO. Our task in the CI Sector, that’s what we call Communications and Information, CI Sector for short, our task is to defend two basic human rights, freedom of expression and access to information, and leverage those for the mission of UNESCO, which has a mandate literally of building peace in the minds of men and women, and a mandate to enable the free flow of ideas by word and image, through every type of media and so on. So for us, the digital ecosystem is incredibly important, and we’ve been doing different types of interventions on it for quite a long time. From one side, on freedom of expression, for example, recently we had the Internet for Trust conference, which looks at the regulation of internet platforms in ways that respect human rights. And on the other side, on the access to information side, we are working on building capacities of different types of stakeholders, so that they can intervene properly and participate properly in this process, because there’s quite a lot of digital exclusion and inequality happening, giving voice to specific sides only, and we want to see a more inclusive internet in that sense. So in short, I’ll stop here, thanks. Thank you so much, and I see Moses Owainy online, do you want to briefly introduce yourself? Yeah, thank you very much.

Moses Owainy:
I am very far, deep in an area where the internet connection is really poor, so I may not be able to turn my video on, but my name is Moses Owainy, and I am the Chief Executive Officer of the Centre for Multilateral Affairs in Uganda. We are a platform that seeks to advance the perspectives of African countries in multi-stakeholder conversations and discussions like this one, and that’s why I said in the text that I’m really glad that I’m in this discussion, because the overall concept of global peace tech, in my view, cannot be fully complete without the perspectives of many stakeholders that some of the speakers have already alluded to. For example, if you look at the realities that are in parts of sub-Saharan Africa, where I come from, but also the Pacific, the Caribbean, the future of the peace around technology and cyberspace is shaped also by their realities. So today, people are talking about cyber or technology for peaceful means, but what does that mean for a country like Uganda, Kenya, or Nigeria? And that meaning might imply a different form for countries that are more developed or more advanced economies. So for me, I’m glad that this conversation is here, but I’m just imploring the speakers and the organizers that future conversations around this could also draw from all of these multi-stakeholder perspectives and actors that you have clearly talked about. So that’s one point. My second contribution, which is really just maybe a question, you are all university professors and you know this, and a lot of critiques come around this, especially when you use the term global peace, because we are in a world of international relations where states are relating to each other bilaterally, multilaterally. So is there, and I seek to be corrected, I may not be knowing, so are we right to say, is it more accurate to say we are in a global world or in an international world? So for me, the term global peace is a little bit, with all due respect, a little bit ambiguous, but if you have something like international peace in the tech, then it creates more meaning, it is more relatable, because it implies that many different states are, you know, collaborating and working together with all these multi-stakeholder groups to achieve the kind of peace, to achieve the kind of, you know, prosperity in the cyberspace, in the tech industry, that everyone desires. So that’s my contribution and thank you so much for this platform and the opportunity. Thank you. Thank you so much, Moses, for these very insightful comments, also connected to what Marielza was mentioning. We will move forward with a round of introductions if there are other participants that want to contribute.

Moderator:
I see Youssef Benzekri, I don’t know if he wants to introduce himself. I see Wenzhou Li, I don’t know if… Okay, I see Yuxiao Li. All right, and Omar Farouk. Okay, you will have a chance to introduce yourself in the chat as well. I see also Teo Nanezovic, who is also part of the Global Peace Tech Hub online. And she’s the rapporteur, so she will also kind of put together some of the reflections we are sharing now after the session. So I guess let’s go back to the room and… Okay, there’s somebody who wants to introduce themselves. Yuxiao Li, if you want to introduce yourself, please feel free to do so. Yeah, no problem. Okay, I have no question. Thank you. Okay, thank you. Okay, let’s move forward. So I would like to go back to either who’s left in the room or Mark, if you want to expand a little bit on the topic of this session. So… And also, of course, Marielza. Do you see any opportunities in a multi-stakeholder approach in this space? There are many initiatives trying to leverage technology for peace in different aspects and dimensions. There is actually a non-profit sector growing in this space with many different organizations and networks. What is missing maybe is to connect these dots. So to connect this non-profit sector with the tech companies that also have the capacity, data and capability to develop the technology itself, and the government. So do you see some kind of opportunities in developing these multi-stakeholder approaches and networks? Maybe we’ll start with Marielza and then we’ll go to Mark. It’s great that you asked this question, because this is one of the approaches that

Marielza Oliveira:
we’re taking at UNESCO, is to bring together different groups, particularly bringing together the tech community, the tech companies, and the remote stakeholder group. And we have one good initiative on that, it’s the AI for the Planet initiative, in which we are leveraging this technology for combating climate change. So this is one of the things that we see promise on, although quite a few of these, how to say, these hopeful approaches end up in failure. So this is one of the things that we have to be very, very frank about. There have been quite a lot of attempts to do exactly that. But so for the companies, the question is, what’s in it for me? And at the end of the day, they’ll be looking at what the gain is. So bringing them to the table and having them commit to specific initiatives and to support that and dedicate time, money, resources, it’s quite difficult. But you have to be selective, I think, in the types of initiatives that you pick, and when the priorities are high enough. And this is very interesting. And when also the companies can make a difference without necessarily a long-term, large commitment. One of the examples that we have that is very successful is the United Nations Technology Innovation Lab. I don’t know whether you’ve heard about it, but it’s using technology in innovative ways to enhance peace-building processes, such as, for example, 3D augmented reality to show the delegates in New York the conditions that displaced people were facing in a particular war zone. So they could immerse themselves into that and then have an informed discussion about it. It’s a completely different type of thing to actually see the environment than to just hear the statistics about it. The increase in empathy in terms of understanding what needs to be done, it’s tremendous. So there are quite a few good potential things. Another way that UNESCO is working on that, we have a partnership with the European Commission on a project that’s called Social Media for Peace, in which we track and map the different types of conflicts that are happening online, in particular pilot countries, and bring together a multi-stakeholder group that then devises and deploys countermeasure tactics essentially to defuse the polarization, to bring back a civil dialogue online and reduce polarization. It’s been quite interesting. We’ve had a lot of interesting results on that, and hopefully we can draw some mechanisms for scaling it up out of the pilots. But of course, the big thing that we’re looking at is the Internet for Trust conference. We’re about to release the guidelines that we have been developing for an entire year, together with quite a large group of stakeholders that provided inputs from all sectors of society, on how we actually regulate Internet platforms for civil discourse as well, without affecting freedom of expression and access to information, because quite a lot of measures that are there right now, to enhance one right, they sacrifice another. So that’s one of the things that we are not willing to do, to sacrifice freedom of expression and access to information. So for example, there are quite a few countries deploying measures of blocking certain apps or features and et cetera, in a not necessarily well-thought-out process. There are some examples of those. And of course, the consequence you have, for example, of limiting the voices of journalists that are reporting and keeping us informed about these issues is tremendous.
So the activists, et cetera, they lose their voice too. So we can’t be indiscriminate about that. So what the Internet for Trust guidelines are proposing is that instead of regulating content, for example, we regulate process, that we enable and build together a real governance mechanism that fosters transparency and accountability, because those are the things that are missing on the internet. Accountability is something that is not there. So this is one of the key things that we need to target. And accountability is what truly makes people behave in a responsible way. If you don’t have it, there is no responsibility, and people act in different ways that are counterproductive to societal development. So thanks. Thank you very much.

Moderator:
Andrea, if you agree, I will also move to Mark online. And then if you have other feedback from the room, just feel free to interrupt us. But Mark, if you want to give us your comment on the topic of this session, also connected to what Marielza was saying about building trust, this is also related to building some measures of impact, some data. And I know your effort is also in this direction on the measurement side. So how do we measure these impacts? How do we find parameters to describe this space as well?

Mark Nelson:
Thank you, Marielza. And thank you also to Moses for the excellent question. I think these things all tie together. There’s a lot of general interest in peace tech without perhaps a detailed understanding of what it is and what are the components that make it possible and that bring us to be standing at this incredible opportunity as a human species right now. What changed? What’s different? Because it’s not the wheel. It’s not the lever. I mean, you could make a sort of general case that our ability as a species to construct tools of any kind is interesting and useful and is in a way peace tech if we use it right and so on. And that’s kind of true. But the thing that is really unique in the last 20 years is the vast proliferation of sensors, and sensors that can detect all sorts of changes in the environment in real time. And the thing that really makes this powerful is that when you start looking at the subset of sensors that can detect human behavior, and then again at the subset that can detect human social behavior, how we actually interact with each other, at that moment, you start realizing for the first time ever, we can really measure not only what kind of interactions we have between individual people in real time, we can also measure for the first time ever, over time, what are the impacts of those interactions. And so we can very quickly test theories about, well, I thought if I did this, it would be good for that person. But was it really good for that person or not? Suddenly, we can answer those questions, and we can answer those questions in real time and with huge sample sizes. What this means is that we can structure the data of human interactions to see where are we doing really well, what are the best-practice kinds of behaviors that humans can do for each other, how we could design even better behavior sequences for each other, and how we could customize those and tailor them for each of our individual situations. That’s where the huge opportunity of peace tech is really possible. And it allows us to go back to something that was one of the original dreams of Norbert Wiener back in the 1950s, setting up the foundations of information technology and of cybernetics, which was this closed loop of technology between a sensor that could detect something happening in the environment that we cared about, communications technology that could move that data to processors that could do the best sensemaking of that data, and then the processor connecting to an actuator that can then respond to the environment so that the sensor can detect, did the response cause the effect, cause the change I wanted to cause? Did it move the needle at all? And if it did move the needle, did it move it in the right direction? That’s where the opportunity now of peace tech is really incredibly powerful. Because when you close that loop, you go from having a linear kind of technology to a closed-loop technology. And that closed loop changes things to be fundamentally persuasive technologies. And this allows us, I want to just underscore why this is so historic. This allows us to move from, quote, peace technology, like the Colt Peacemaker from the 1800s, if you all know your Western movies really well. It’s a gun, and they called it a peacemaker. And it was built on a Hobbesian theory of Leviathan, that whoever has the power can enforce peace. Peace up until this change of technology has been coercive peace that has been based on a Hobbesian Leviathan theory.
And what has changed now is that for the first time ever, we can build technology that can create persuasive peace instead of coercive peace. And that’s a huge shift for our species. I’ll just pause there and let everybody respond to that.

Moderator:
Thank you, Mark. Over to you, Andrea. We have eight minutes left. And so I just want to remind everybody that maybe we should collect some takeaways before the end of the session. We’ll keep points. Yeah. Yeah, there are people queuing somehow, so I’ll let. Okay. Please introduce yourself.

Audience:
Thank you so much. Hello, everyone. My name is Manjia. I’m from China. So I am a youth representative of a youth organization called Peace Building Project held by UNDPPA. So my question to all panelists is that how do you see the young people’s role in Tech for Peace and how to engage young people into this initiative or this wider agenda? And do you have any best practice to share or current or future plan for this? Thank you so much.

Moderator:
Thank you. Anybody else would like to add anything to the discussion from the floor? No? No. Sure. We’re networking. We can chat.

Audience:
Hi, everyone. My name’s Carolyn. I’m the director of campaigns and rapid response with Access Now. I think your question about the definition of peace in the first place is a really important one. And I think from our frame, we really think about digital rights rooted in human rights, and that applies in all contexts. And that has a very clear kind of legal structure to understand what the rules of the game are. And so I think I’d be curious to understand from the Peace Tech folks what the added value of this lens, specifically of Peace Tech, is. So it seems like it covers a lot of ground from human rights to social justice to environmental justice and is perhaps a bit of an umbrella term, but I think understanding what we’re trying to get to by using that lens would be helpful. And I guess the other thing that is on my mind from this conversation is just how this Peace Tech movement is thinking about kind of the compulsion towards moving in the direction of techno-solutionism. And I’m not saying that that’s what’s being presented here, but I think there are many conversations in this space here at IGF, and that happen across the UN system and in other spaces, where the instinct is very strong to reach for a particular digital tool to solve a very complex human problem. And so I think just, if you could all share a little bit more how you’re thinking about integrating these technical conversations back into some of the core work of peace building on all of these many different fronts and how those things are really feeding back into each other. Wonderful. Yes. Maybe you can reply for the global

Moderator:
Wonderful, yes. Maybe you can reply for the Global Peace Tech Hub, and someone can reply from the youth perspective. Very briefly, just 30 seconds each, please, so we can wrap up the session before the sushi reception we are having here. So maybe I start with the last question, and then anybody who wants can jump in on the youth perspective. For the Global Peace Tech Hub: first of all, the Peace Tech community is a growing space made of different actors, and we're not representing them, of course. The Global Peace Tech Hub is an initiative based at the European University Institute in Florence, and especially at the School of Transnational Governance, which is a school of governance for current and future leaders on issues that go beyond the state. That's the context. Our role in this is as a facilitator of different actors, and of course as a source of some independent analysis of what is going on in this space. So we cannot speak for the others, but as to where we position ourselves and where we see the added value: it is in fact in connecting these different spaces that don't necessarily talk to each other. The Peace Tech idea was born more in the peace building sector, where NGOs especially would apply digital tools to peace building initiatives they were doing already. The way we perceive Peace Tech is a bit different; it starts more from a tech policy perspective. As I was explaining at the beginning, it is equally about how you assess impact and invest in impactful Peace Tech projects, and at the same time how you govern and mitigate the risks, with good governance, regulation, and capacity building. So this challenge is very broad, as you say, and still under definition, but at the same time we see value in the interconnectedness of the different challenges. We were talking about trust before: how is that divorced from any discussion of digital identities? How is it not linked to some applications of blockchain? How is it not linked to the discussion about data and who owns the data? How is it not connected with internet infrastructure, and with how we give secure access to the internet? How is it not connected to cybersecurity? You see the interconnectedness of all these different challenges, and I think that's the kind of added value we are trying to bring. On the concrete side, we also want to foster dialogue between these different sectors that discuss things in bubbles and don't necessarily talk to each other. I'm talking about the tech sector, which hasn't had discussion of pro-social tech or peace tech at the top of its agenda, and at the same time NGOs, which are doing a lot on the ground and know what is going on there, but whose knowledge doesn't always rise to the governance level. Governments should also be involved. So we're trying to create these connections. And on the triangulation with the definition of peace: we adopt the positive peace paradigm, a little like what underpins the Global Peace Index, with its positive peace pillars, which are not just about the absence of violence. We need to... they're about to kick us out of the room. You're really kicking us out of the room, so we need to move on. I think Evelyne had a brief reaction to the point made.

Evelyne Tauchnitz:
Yeah, thank you very much for this question. I just want to take that up: what is the added value of peace as compared to human rights? I think it's a really important question, and one I have been asking myself as well. As I see it, human rights are a necessary condition for peace, but not a sufficient one: peace requires respect for human rights, but it is also something more. As I said, I see it more as a vision. And with human rights you also have a bit of a problem in that they can be treated piecemeal: you have this right, you have that one, you have that one, and a government can interpret them as it wishes and say, okay, you get that right, but we don't have the resources to fulfill this one, and so on. Peace is a broader concept. In a way it is almost easier to start a public debate on it, because everybody has a perception of peace, whereas human rights are really focused on rights. They are legally enforceable, which is an advantage, but they are much narrower, and the process, for example how you reach certain political decisions, is not really incorporated in the human rights perspective. So this, I think, is the important point: human rights are necessary, but not sufficient. If I may also make a brief comment on the youth question, which I think is really important as well: I think it is also important to ask how digital change is changing our perception of peace. What we have been thinking of as peace might change in the future precisely because of the digital transformation; it is not only that we can use digital technologies for constructing peace, but that through that process we might arrive at a different understanding of peace. And the role of youth in that is really, really important, I think. Also connected to that, it's funny, because you said you're from China: I just had a conversation with a colleague of mine recently who did research in China. He told me, for example, that a lot of the social scoring system has to do with a deficit of trust in today's China as compared to the China of earlier generations, and that in order to reconstruct trust, social scoring systems are considered useful. But of course, if we talk about peace as this broad condition: do we want to build trust with digital tools, for example, or how do we want to build this trust in society? That would be a really vital question, I think. Yeah, really quickly, so thanks.

Speaker:
I think on a few of the questions: for youth, first of all, youth are leading movements for human rights and for peace around the world. They are putting themselves on the line, more and more so in a digitized era where everything they do, their pictures holding protest signs out at marches, will be indelible. They have probably already been scraped for facial recognition, and they are getting attacked by spyware; we see that on the phones of youth. So from Thailand to Sudan to the USA, yeah, youth are doing the work, and it's on us to help ensure this doesn't lead to reprisal or permanent problems for them. And just to respond to Mark: I think we were talking about euphemisms here, peace tech, tech for good, digital public goods. As advocates, we do chafe at this, because there is a bit of a trust deficit. We don't see either the tech sector or, frankly, large humanitarian agencies as deserving of the trust of our personal data, of personal information about us, especially about those most at risk. So when I hear talk of the ubiquity of sensors: sensors implies some kind of passive monitoring, which is so contrary to the vortexes that characterize most modern tech, these traps created for us to deliver our sensitive personal information into black boxes we have no control over, that I think sensors is a dangerous word to use in that context, unless you're talking about tools completely divorced from the modern digital economy. Thanks.

Moderator:
Fantastic. So this was, of course, a networking session, which means that we are pretty well networked, at least the people on this stage. But for those of you who would like to connect to these conversations, we will of course have a follow-up, so please pass your contacts on to me and we will keep you in the loop. Again, this was a networking session, and the impression is that even within the tech and peace community there is still no common ground, no agreement on what exactly tech and peace actually mean. For some, tech is approached as a tool to achieve peace. For others, tech is a new battleground as such, and then there is all the discussion about cybersecurity and so on. And for others it is tech as a threat to peace: for example, the discussion on lethal autonomous weapons systems and how AI is increasingly embedded in various arms. So yes, I think a lot of conversation still needs to happen, also within the tech community, in order to bridge these communities that work in silos, and hopefully we will have additional sessions to try to break these silos. That's all from here. I don't know, Michele, if you want a last 10-second word, or a word, not even a sentence, just a word. Yeah, perfect. Wrap up, enjoy the sushi dinner. It was a great networking session. Let's keep the contacts and the conversation going. Thank you. Fantastic. Thank you all, also the online contributors to the discussion. Bye all. Bye.

Speakers' statistics (speech speed, length, time):

Audience: 188 words per minute, 436 words, 139 secs
Evelyne Tauchnitz: 179 words per minute, 1579 words, 529 secs
Marielza Oliveira: 131 words per minute, 1068 words, 487 secs
Mark Nelson: 166 words per minute, 902 words, 325 secs
Moderator: 158 words per minute, 3134 words, 1187 secs
Moses Owainy: 154 words per minute, 519 words, 202 secs
Speaker: 149 words per minute, 1782 words, 717 secs

Global South Solidarities for Global Digital Governance | IGF 2023 Networking Session #110



Full session report

Osama

The securitisation of the internet by states poses a threat to its open nature. This is evident in state investment in surveillance technologies and in practices such as internet throttling and shutdowns. In addition, regulations imposed on social media and internet companies undermine user rights. Instances of this include the Chinese firewall and internet censorship during the Russia-Ukraine conflict.

However, it is argued that the internet should remain open and free from borders as its original purpose was to connect the world without limitations. Unfortunately, the experience of using the internet is becoming increasingly dissimilar from one country to another due to the implementation of digital borders. Furthermore, there is an alarming trend of states moving towards digital authoritarianism, further challenging the open nature of the internet.

In the Global South, opportunities for collaboration among civil society groups are limited by current politics. Political issues between governments in South Asia prevent civil society groups from convening, hindering their ability to work together effectively. Despite this, there is clear potential for joint work: legislation in countries like Bangladesh, India, and Pakistan mirrors one another, indicating scope for collaboration in the future.

To facilitate effective global collaboration, it is argued that a multi-stakeholder and representative approach to global digital discussions is essential. It is emphasised that digital discussions should involve a variety of stakeholders, including civil society. This inclusive approach ensures that diverse voices and perspectives are considered, ultimately leading to more comprehensive and effective outcomes.

Engagement with the private sector is seen as valuable in driving progress. In cases where there is shared interest between civil society and companies on legislation, their cooperation can lead to state backtracking. This highlights the importance of collaboration between different sectors in influencing policy-making and shaping the future of the internet.

Building regional solidarities and conversations is deemed necessary for promoting collaboration. Organisations from various Global South countries are coming together, indicating progress in this direction. This regional collaboration has the potential to strengthen advocacy and drive positive change on a larger scale.

Effective advocacy is seen as crucial, and it should be based on quality research. Existing research can serve as a solid foundation for advocacy efforts. By utilising past studies and knowledge, advocacy campaigns can be more informed and persuasive, leading to a higher chance of success.

Translating knowledge into concrete actions and strategies is seen as the next step in achieving desired outcomes. A joint advocacy strategy based on research has been suggested, emphasising the need for practical implementation and the transformation of knowledge into tangible results.

In conclusion, the securitisation of the internet by states poses a threat to its open nature. The internet should remain borderless, and efforts should be made to counter the trend towards digital authoritarianism. Collaboration among civil society groups in the Global South faces challenges, including immigration and accessibility issues. A multi-stakeholder and representative approach to global digital discussions is advocated for effective collaboration. Engagement with the private sector can be valuable in influencing policymaking. Building regional solidarities and effective advocacy based on quality research are crucial in driving positive change. Translating knowledge into concrete actions and strategies is the necessary next step for achieving desired outcomes.

Audience

The Global South Solidarities project is currently accepting applications for the position of Digital Librarian. This role involves curating and organizing the project’s work, as well as addressing any queries. The project places great importance on effective management and accessibility of its resources.

There is a growing global effort to bring organizations in the Global South together on various issues. This collaboration and partnership demonstrate a collective approach to promoting solidarity and tackling common challenges faced by countries in the Global South.

In November, a fund will be launched in Brazil to support the Global South Solidarities project. This initiative aims to reduce inequalities and promote sustainable development in line with SDG 10: Reduced Inequalities. Fundraising efforts like this highlight the project’s commitment to not only raise awareness but also mobilize financial resources for its cause.

During a recent talk, the speaker received overwhelming gratitude and appreciation from the audience. This positive reception indicates the effectiveness of the project in conveying its message and inspiring action. It reflects the project’s ability to engage individuals and foster a sense of solidarity.

Overall, the Global South Solidarities project actively promotes collaboration, addresses inequalities, and provides a platform for shared dialogue among organizations in the Global South. By seeking a Digital Librarian, launching a fund in Brazil, and receiving positive feedback, the project is making significant progress towards its objectives.

Ifran

In Bangladesh, students are facing a severe shortage of original textbooks, which is negatively impacting their access to a quality education. Ifran, a senior lecturer in law and human rights in Bangladesh, has highlighted the difficulties his students have in accessing original textbooks. This lack of access to textbooks hampers their learning experience and limits their acquisition of knowledge.

The problem of limited access to knowledge extends beyond textbooks. Journal articles and research papers are predominantly behind paywalls, making them inaccessible to individuals in countries like Bangladesh. This limits researchers, academics, and students who rely on these resources for their studies and critical analysis. To overcome this problem, pirated sites like Sci-Hub have become alternative sources for accessing research materials.

The implications of these challenges are significant. The lack of original textbooks restricts students’ ability to learn and engage with the subjects effectively, hindering academic progress and limiting potential for success. Furthermore, the limited access to research materials and scholarly articles stifles intellectual growth and inhibits the development of innovative solutions to societal problems.

It is evident that solutions are urgently needed to bridge the knowledge divide, particularly between the North and South, where disparities in access to information and education are more prevalent. Addressing this issue would enhance educational opportunities in countries like Bangladesh and promote global collaboration and the exchange of ideas, leading to greater innovation and progress.

In conclusion, the shortage of original textbooks and limited access to knowledge in Bangladesh have profound implications for students’ educational experiences and the overall intellectual growth of the country. The emergence of pirated sites as alternatives highlights the urgent need for solutions to alleviate this knowledge divide. By reducing barriers to accessing information and fostering global collaboration, we can work towards a more equitable and prosperous future.

Tove Matimbe

The analysis reveals that there are various international platforms available for collaboration and discussion in the field of internet governance. Notable examples include the African Internet Governance Forum and the Digital Rights and Inclusion Forum. These platforms offer opportunities for experts and stakeholders to come together, share knowledge, and address pertinent issues related to internet governance.

Collaboration and joint efforts are identified as crucial elements in impacting policy change and promoting effective data governance. The report highlights the significance of collaborative initiatives in driving positive outcomes. For instance, Paradigm Initiative collaborated with Data Privacy Brazil at their forum, indicating a commitment to sustaining such partnerships. Joint statements and submissions are also recognised as effective measures, with previous collaboration on the Global Digital Compact (GDC) process involving KICTANet, Data Privacy Brazil, and Aapti.

However, engaging the private sector in internet governance discussions can be challenging. The analysis notes that the private sector often exhibits reluctance to participate in these conversations. To address this issue, closed-door meetings and invitations to regional convenings have been proposed as effective strategies to encourage private sector engagement.

Moreover, discriminatory practices have been identified as barriers to the participation of individuals from the Global South in internet governance conversations. The visa process and financial costs associated with attending international forums and events disproportionately impact people from less affluent regions. An example is provided where a colleague from Data Privacy Brazil was unable to attend the Global Internet Governance Forum due to such issues. This highlights the need to address discriminatory practices and ensure equal participation and representation from individuals across the globe.

To foster better internet governance and counter rights violations, it is recommended to increase support and funding for digital rights organizations in the Global South. Establishing a strong, unified movement and fund can empower these organizations to confront violations occurring within their countries effectively. Investing in these organizations can also lead to advancements in digital rights and strengthen the push for improved internet governance.

In conclusion, the analysis sheds light on several key aspects of internet governance. It emphasizes the importance of collaboration, especially with regards to policy change and data governance. Challenges in engaging the private sector and discriminatory practices impacting the participation of individuals from the Global South are identified. Finally, supporting and funding digital rights organizations in the Global South is seen as a vital step towards countering rights violations and promoting better internet governance.

Aastha Kapoor

The discussion explores the concept of Global South Solidarities in relation to digital governance. It highlights the significance of addressing common issues across different regions of the Global South, serving as a foundation for collaboration and solidarity.

Data Privacy Brazil has recently advertised a role for a digital librarian, emphasizing the need for knowledge curation and handling inquiries related to policy issues globally. This initiative underscores the importance of information accessibility and management in the digital realm.

In addition, Data Privacy Brazil is actively working towards establishing a research fund specifically for global south initiatives. With a budget of $8,000, the organization aims to identify the most effective allocation of these funds.

Aastha Kapoor, a supporter of Data Privacy Brazil, endorses the digital librarian role and research fund initiative, highlighting their importance in promoting effective digital governance.

However, the discussion also raises concerns about the misuse of the internet as a tool for exclusion and surveillance. Instances such as the internet shutdown in eastern India due to ongoing unrest and the confiscation of phones from Syrian refugees at European borders demonstrate the negative impact of these practices.

Digital inequality is another issue highlighted, with the experiences of Uber drivers worldwide reflecting similar forms of harm. This exacerbates existing inequalities and underscores the need to address the negative consequences of internet use.

Furthermore, the discussion emphasizes the need for collaborative efforts among global movements to create institutional and financial architectures that support the objectives of the Global South. This highlights the importance of partnerships and working together to achieve sustainable development goals.

A holistic approach is also advocated, enabling wider participation in discussions and ensuring inclusivity. Challenges related to timing, access, and immigration restrictions need to be addressed to foster productive discussions and collaborations. Overcoming these challenges is crucial for realizing the full potential of global South solidarities.

Flexibility and commitment to civil society are considered critical in ensuring successful collaboration between civil society and the private sector. Both entities have shown solidarity in opposing unsuitable legislation, further highlighting their significance in digital governance.

The role of the Digital Librarian in the Global South Solidarities initiative is of great prominence. The Digital Librarian is responsible for curating all work, ensuring organization, and addressing profound questions about the digital universe raised by members. This role serves as a crucial pillar in fostering collaborative efforts and effective knowledge dissemination.

Furthermore, plans for a shared fund and for network mapping for global solidarity are currently underway. The fund will be launched and announced in Brazil in November, with discussions on capacity building and funding opportunities as next steps. Network mapping aims to identify similar initiatives and learn from efforts that bring various organizations in the Global South together on different issues.

Overall, the discussion highlights the vital role of Global South Solidarities in digital governance. Collaborative efforts, such as those pursued by Data Privacy Brazil, and the support of individuals like Aastha Kapoor, connect various initiatives and individuals in pursuit of common goals. However, challenges related to exclusion, surveillance, and digital inequality must be addressed to ensure the effectiveness and inclusivity of global South solidarities.

Fernanda

Fernanda, a director at Internet Lab, a think tank in São Paulo, Brazil, is excited about attending the upcoming Data Privacy Brazil conference. As a representative of the Global South, she aims to engage in meaningful exchanges and discussions with other participants to address the challenges they currently face.

Internet Lab, known for focusing on digital rights and policies, considers Fernanda’s presence at the conference significant. As a director, she possesses extensive knowledge and insights into the complexities of data privacy. Her participation will enable her to share these perspectives and exchange ideas with other experts in the field.

Being from the Global South, Fernanda brings a unique perspective to the table. The challenges faced by countries in this region regarding data privacy may differ from those in developed countries. Participation in the conference allows Fernanda to shed light on these specific challenges and contribute to developing solutions that are inclusive and reflective of the needs of the Global South.

The Data Privacy Brazil conference serves as a platform for professionals and researchers to come together and discuss the latest developments, concerns, and potential solutions in the field of data privacy. Fernanda’s attendance offers an opportunity for collaboration and learning from different experiences and approaches. By fostering connections and conversations, attendees, including Fernanda, strive to create a more comprehensive and impactful framework for data privacy.

Overall, Fernanda’s presence at the Data Privacy Brazil conference symbolizes the increasing importance and attention given to data privacy in Brazil and the Global South. Her commitment to confronting challenges and fostering collaboration reflects the dedication of Internet Lab and other organizations in the region to advocate for robust data protection policies.

Nathan

In Brazil, a collaborative digital librarian initiative has been launched with the objective of curating, organizing, and disseminating knowledge related to digital rights. This initiative is a partnership between Data Privacy Brazil, the Aapti Institute, and Paradigm Initiative. The digital librarian will play a crucial role in this endeavor by curating and sharing information on digital rights. The initiative has been positively received, with individuals expressing excitement and anticipation for its impact on strengthening engagement among Global South organizations.

However, despite the positive reception, there are challenges that hinder the achievement of consensus within the digital rights landscape. One key challenge is the presence of conflicts of interest between civil society and the private sector. This conflict becomes particularly evident in the context of platform regulation, where big tech companies have been lobbying parliamentarians to advance their interests. Brazil has been pursuing a comprehensive platform regulation bill since January, a process complicated by these conflicts of interest. The resulting chaotic scenario poses a significant obstacle to effective regulation.

Moreover, there are barriers faced by Global South organizations in engaging with larger international conferences. Financial support and visa issues are cited as major hindrances, as most conferences tend to be held in the Global North. This lack of participation from Global South organizations limits their ability to contribute to global discussions on digital rights. To address this issue, a report titled “Voices from Global South” will be launched, shedding light on the obstacles faced by these organizations and sparking discussions around potential solutions.

In conclusion, the digital librarian initiative in Brazil holds promise for advancing knowledge and engagement in the field of digital rights within the Global South. However, challenges such as conflicts of interest and barriers to participation in international conferences need to be addressed. It is believed that comprehensive platform regulation is essential and must consider the interests of all societal sectors, including civil society and the private sector, to effectively tackle these challenges.

Miki

Miki is currently employed by a startup that has a commendable objective of collecting primary information about social issues from an extensive range of over 1000 sources. The startup’s primary aim is to ensure that individuals have access to information that enables them to think independently and make informed decisions about various matters affecting society. By providing comprehensive information, the startup hopes to empower people by equipping them with the knowledge necessary to participate actively in democratic processes.

Miki strongly believes in the fundamental significance of access to information in fostering real democracy. Recognizing that a well-informed citizenry is at the core of a functioning democracy, Miki underscores the importance of ensuring that people have access to important and comprehensive information. This access fosters an environment where the public can make knowledgeable decisions, engage in meaningful discussions, and hold those in power accountable for their actions. Miki’s support of access to information as a crucial element of democracy highlights the vital role the startup plays in promoting transparency and citizen participation.

In addition to Miki’s commitment to promoting access to information, they also exhibit a keen interest in understanding the challenges associated with creating a consensus for establishing a platform between civil society and the private sector. While specific details of these challenges are not provided, Miki’s inclination towards exploring this topic suggests an awareness of the complexities involved in achieving cooperation between these two sectors. Such cooperation is essential to effectively address societal challenges and achieve the United Nations’ Sustainable Development Goals (SDGs), particularly SDG 17: Partnerships for the goals.

In conclusion, Miki’s involvement with the startup reflects their dedication to making primary information about social issues accessible to individuals around the world. Their belief in the transformative power of access to information for democracy underscores the critical role played by the startup in fostering transparency and citizen engagement. Moreover, Miki’s eagerness to understand the difficulties involved in establishing a consensus between civil society and the private sector reflects their commitment to exploring innovative solutions and fostering partnerships for achieving sustainable development. Overall, their involvement in these domains demonstrates a forward-thinking mindset and a strong drive to contribute positively to society.

Jacquelene

The analysis focuses on several key topics related to global partnerships, civil society participation, and data privacy. One of the main points discussed is the importance of joint contributions in Global South solidarities. The analysis highlights that Data Privacy Brazil made a joint contribution together with civil society entities from several countries, including the Aapti Institute in India, Paradigm Initiative, and Internet Bolivia. This demonstrates the significance of collaboration and collective efforts in addressing global challenges.

Another prominent topic discussed is the need for greater participation and advocacy with government representatives. It is noted that issues such as surveillance, internet shutdowns, and censorship are primarily discussed in multilateral spaces like the UN, with limited opportunities for civil society participation. This argument underscores the importance of amplifying the voices of civil society actors in decision-making processes and advocating for their inclusion in relevant discussions.

The analysis also supports the idea of collective contribution to the Global Digital Compact of the UN. Data Privacy Brazil made a joint contribution with civil society entities from different countries to the Global Digital Compact. This showcases the support for collaborative efforts in achieving the goals outlined in the compact, emphasizing the importance of partnerships and shared responsibility.

Furthermore, the analysis advocates for increased capacity for civil society to participate in discussions at multilateral spaces like the UN. It suggests that issues such as surveillance, internet shutdowns, and censorship are predominantly discussed in multilateral spaces, which often have limited opportunities for civil society engagement. The analysis positively highlights the need for increased civil society participation and the importance of their perspectives in shaping policies and decisions related to peace, justice, and strong institutions.

The overall sentiment of the analysis is positive towards joint contributions and increasing civil society participation. It underscores the importance of collaboration, shared responsibility, and amplifying civil society voices to effectively address global challenges. Notably, the analysis sheds light on the critical issue of data privacy and the limited spaces available for civil society participation in discussions regarding surveillance, internet shutdowns, and censorship.

In conclusion, the analysis presents a comprehensive overview of the importance of joint contributions, increased civil society participation, and collective efforts in addressing global challenges. It highlights the significance of partnerships and shared responsibility in the Global South Solidarities and the Global Digital Compact of the UN. The analysis also emphasizes the need to expand the space for civil society participation and the importance of their perspectives in relevant multilateral discussions.

Session transcript

Aastha Kapoor:
Great, can we get the screen on? Hi, welcome to the earliest possible session of this conference: day one, session one. We're looking to talk about Global South Solidarities for Global Digital Governance, but because the Global South has not woken up so far, we are a small group. Hi. So, there are a couple of things to say out loud. One of our co-organizers, Rafa from Data Privacy Brazil, couldn't make it, but we have Nathan, which is wonderful, and Jacqueline online. We also have Benga, who we are waiting for; we know he's in Kyoto, and he's from Paradigm Initiative. And I am Aastha Kapoor. I'm the co-founder of the Aapti Institute; we're a tech policy research firm based in Bangalore. What we are looking to discuss today is this idea of Global South solidarities. I know some of this was discussed earlier at RightsCon as well, and we're looking to build on that work, to discuss issues that are common between different parts of the Global South, and then to ask the next set of questions about what needs to be done. Since this has happened once before, I have two main updates on that discussion. The first is that Data Privacy Brazil has advertised the role of a digital librarian. This digital librarian is going to work with organizations like all of ours to help curate knowledge online, and they will be the single point of contact for understanding what the issues are. So if you, for instance, have a question like, hey, what is happening with data governance in India, you could pose it to the digital librarian, who will send you a set of readings. The idea is to make sure we're not all reinventing the wheel, not all asking the same questions into different kinds of voids, and that we have a space where all the different knowledge we produce can be curated, organized, and disseminated. There's been a call for applications, and Data Privacy Brazil, the Aapti Institute, and Paradigm Initiative are trying to bring that together. The second announcement is that since the Costa Rica event, Data Privacy Brazil has been in the process of organizing a fund for research initiatives in the Global South. And that's what I wanted to start this discussion with, because we've been going back and forth with Rafa and Bruno to understand where the value of this fund is, what it can do and what it can't do. I would love some thoughts from this group: when we think of research and the Global South, if you had a small amount of money, I think it's $8,000, to do a piece of research, what do you think would be a valuable way to apportion that money? We'll use this question as a way to uncover the major issues we want to focus on and where we see the conversation moving. Nathan, did I miss anything?

Nathan:
Hi, everyone. I think you can hear me. So this is a great opportunity for us to discuss how to engage from a Global South perspective. As Esther, Aastha, sorry, just said, we have this initiative of the digital librarian, who will play a key role in organizing digital information regarding digital rights, and soon we may have the chosen person who will be this digital librarian. I think it would be interesting to hear from other people who are here, so we can think about how to start to engage, or how to strengthen our engagement, between Global South countries and Global South organizations. Maybe I'll pass the mic to you. I don't know your name. What's your name? Osama, okay.

Osama:
Thank you very much. I’m Osama Khilji. I’m director of Bolo B in Pakistan. So I wanted to start off with a bit of like what the major issues with internet governance in the global south are. And I think it really stems from securitization of the internet itself. And I think we really need to pick on a set of principles that can sort of lay the groundwork for, okay, what is the purpose of the internet? And what do we envision? So now what are the threats to an open internet that we see in the global south right now? So there’s a lot of investment in surveillance technologies that really undermine the basics of the internet. So for example, you have internet throttling, you have internet shutdowns. And then there’s also the issue of trying to bring in regulations that sort of take over what the companies are doing and imposing those sets of regulation on social media and internet companies that then overall undermines your freedom of expression and privacy rights on the internet. So looking at that sort of the landscape that we see as we have it today, as the global south, how do we envision the internet? So are we going to allow big players that have their political issues to overshadow how we look at the internet? So for example, the Chinese firewall or with the Russia-Ukraine invasion, we’ve seen how so much of the internet is now censored. So you can’t get information from other countries just because of a political conflict that’s going on. And because of that, we’re seeing the borders of the world being reflected onto the internet. And the whole point of the internet was to connect the world in a way where there are no borders. So for me, I think it’s really the question of how do we get rid of these borders that we are starting to build over the internet and that are really strengthening. So your experience of using the internet in one country is very dissimilar to the use of internet in another. And I think that’s fundamentally the problem with how a lot of the states are moving towards digital authoritarianism and changing the experience that we have. So I think keeping these in mind, it’d be useful to go forward and think of how do we strategize and how do we build solidarities where people like us who believe in an open internet can sort of push for the internet to remain so.

Aastha Kapoor:
Do you wanna come up front? Would you like to say something? Are you guys participating? Come. Yeah. Okay. So it's super interesting, and I don't know why we're doing this on a mic now, but what we've been seeing in India as well, and I would love to hear what's going on in Brazil, is that the internet, as you rightly said, is being used as a way to exclude people, through internet shutdowns, but access is also being limited in all kinds of other ways, because there are people who are not connected, et cetera. We have a state of unrest in an eastern part of India, and the internet has been shut down for many, many days. That changes the way people access information, the way people understand the situation, and the way the government or certain groups can control the narrative, because that organic, potentially dangerous nature of the internet is curbed. What also happens, and again, you said this, is that the internet, or digital technologies, are also used to create mechanisms of surveillance. So it's a double-edged sword. In some instances, I was reading somewhere that Syrian refugees coming to the borders of Europe have their phones actively snatched and broken, because nobody wants to hear the stories they have been chronicling. So how do we grapple with these divergent ideas of the internet and what it can and cannot do? And then the second question, I guess, or a small disagreement with you: you said people's experiences with the internet are dissimilar, which they are in terms of where you are in your journey in some ways, but they are also similar in the way that harms are perpetuated. Uber drivers across the world are suffering similar kinds of harms; it doesn't matter whether you're in Paris or in London or in India, right? The protests that Uber drivers have been holding against platforms globally have a similar language, yet their ability to claim rights is very different in Europe, because of GDPR, and very different in other parts of the world. So there is, I think, a lot of significant nuance required, because the idea of solidarity, with whom, based on what experience, is really important to think about. We could not have solidarities around internet shutdowns with countries in the Global South that don't experience them; that is very peculiar to us. So the Global South experience is extremely valuable, but nuancing that solidarity is also going to be quite interesting, and maybe the research work can help do that.

Nathan:
I think that, just to start with some issues that you brought up related to internet shutdowns: in Brazil, actually, we thankfully don't have internet shutdowns, but we do sometimes have apps blocked for criminal investigations. It has already happened to Telegram and WhatsApp, because of judicial orders to block the app so that they comply with investigations and break encryption to give access to messages. So we do have that. And we are in a context in Brazil where we have been seeking platform regulation since January, basically; it's a comprehensive bill that tries to regulate platforms. It's been a very hard process in Brazil, because there are several conflicts of interest, between civil society and the private sector, for example, which don't want the same thing and have to arrive at a consensus, which is hard. And there was an episode in Brazil where big tech companies invested money, people, and staff in lobbying and advocacy strategies within the legislature, with parliamentarians, to try to push the platform regulation bill their way. It's also worth mentioning that President Lula in Brazil is now looking towards the decent work theme, so it's an agenda for Brazil right now, especially within the G20, and platform regulation has a close relationship with this agenda for workers. So we are looking at that at Data Privacy Brazil too, and trying to work with this agenda in the G20 focus groups. One last thing we should maybe start to discuss is what the institutional spaces are where we can discuss and address these kinds of issues. We have different organizations, such as the ITU, or even the IGF, or other multilateral organizations that discuss such themes, and I think it's worth understanding how we can build collective knowledge to act within these spaces, and how we can engage with them. I think that's a very interesting question for us to restart our discussion here. Thank you.

Aastha Kapoor:
Thanks so much for that. Some new people have walked in, which is great. Would you like to join in? We're talking about what Global South solidarities are, and what the institutional spaces are in which to enact them. There's a mic over there, next to you. It would be great if you could introduce yourself.

Tove Matimbe:
Thank you so much. I think you can switch it on. All right, hi everyone. I'm Tove Gile Matimbe, and I work for Paradigm Initiative. Some of the platforms where we see an opportunity for work and collaboration are, of course, the African Internet Governance Forum, and on a global scale, being here at the IGF. But there are also a number of convenings happening within the African continent, including one that we ourselves convene, called the Digital Rights and Inclusion Forum. We've made use of that platform: in the past year we had a good collaborative effort with Data Privacy Brazil, and we are hoping to continue to engage, to share experiences, and to draw from each other's contexts, as I think there are a number of commonalities there for us. Two weeks ago there was another forum, on internet freedom in Africa, which was also a good platform for engagement. And there's always room, I think, for us to streamline which spaces we can make the most impact in, in terms of changing policy and pushing for data governance. What we've also found useful is putting out joint statements or joint submissions; we've collaborated with organizations such as KICTANet, Data Privacy Brazil, and Aapti to put forward our response to the GDC (Global Digital Compact) process. So that's something I'd just highlight as an opening remark, yeah.

Aastha Kapoor:
No, that’s super interesting. And I think that it would be important to identify certain opportunities for us to, like you brought up, the GDC is certainly a big one, I think, in which we all have something to say, and the process has been quite consultative. You also mentioned G20, and we’re rolling off an India presidency into a Brazilian one, and then it goes to South Africa. So there’s a very strong troika for G20, which can be definitely leveraged to embed some of these conversations. I know we’ve been talking about digital public infrastructure, and PICS and UPI, which is the Indian version of a payment protocol come up all the time. I think it’s just the start of day one, but several day zero panels were also talking about technologies that are being developed in the global South and how they can be brought into the world and offer an alternative to the ideas of big tech that exist globally. So there’s tons to chew on. What I would really love to move direction into is that I would love to understand if there are best practices of harnessing solidarities that you’ve identified in your work, if you think organizations and movements globally have come together to create institutional financial architectures, I guess, and how do we learn from those? Osama, I’m gonna go to you.

Osama:
Yeah, thank you. As we know, there are some regions where there's a lot more solidarity. For example, in the African region and in Latin America we see a lot of events and coordination happening. But unfortunately in Asia, especially South Asia, a lot of the political issues between the governments don't allow civil society groups to convene locally; other than in, say, Nepal or Sri Lanka, we've had few convenings. Yet when you look at the issues, if you're talking about India, I could just replace India with Pakistan and it would literally be the same issues, right? Even the legislation in Bangladesh, in India, and in Pakistan mirrors each other, so there's a lot of potential for collaboration there. With the Global Network Initiative, we've seen that because the membership base has been growing, there's been a lot of space and focus on the Global South: there is a loose Africa-focused group, then Latin America, and Asia, and a lot of times we sit together to discuss these things. So that's super key and important. And then of course, at RightsCon and at the IGF, there are often convenings such as the one we're having. But we also need to look at the problems with these, right? Take today, for example: it's the IGF, and one of our panelists was not allowed into the country because of immigration issues. Then look at the timing: it's 8.30 a.m. here, which means in most of the countries we come from it's around 4.00 a.m., the crack of dawn. How do you organize a session for a region without taking into account the time zone that it's in? Such barriers also need to be examined and critiqued for us to go forward in a way that's productive. We also need to look at issues of access: how many people or voices from the Global South are able to become part of conversations such as these? It takes a lot of privilege to be able to travel to conferences, to make submissions, to get visas in time, to get your bookings, et cetera; and there are restrictions on online participation as well. So the approach has to be holistic so that it's representative. At the same time, I think it's important that we not let states lead these discussions; it needs to be a multi-stakeholder approach. Looking at what states are doing, a lot of it is oppressive towards the majority of their populations, and civil society needs to adopt a common language to be able to deal with states and negotiate, so that power can come from solidarity. That's what we really need to crack.

Aastha Kapoor:
Perfect, thank you for saying that, both about the structural and some of the functional issues. I know some people have joined; it would be lovely if you guys came up front. This is a networking session, so you don't really have to listen to us: you can have a voice and a seat at the table. We'll hand you a mic to introduce yourselves. Not to put everyone on the spot, but it would be great if you came up front. I know they slotted us early, so maybe they'll give us a little time to get warmed up and stay in the room. We're trying to understand commonalities, differences, and how we can mobilize for action in the context of the Global South. We have with us Data Privacy Brazil; I'm from the Aapti Institute; Bolo Bhi from Pakistan; Paradigm Initiative from Nigeria and Zimbabwe. What Data Privacy Brazil has done to start us off on this journey is engage a digital librarian, who is going to hold all this knowledge from the Global South and make sure we can hear about it, reach out to this person, and say, hey, what's going on in this part of the world, and they can send us readings, materials, thinking. They're also launching a fund to do research on some of these questions. That's what we're trying to understand: what kinds of problems do we want to uncover, and how do we want to come together, given the harsh reality that we can't travel very easily? So, yeah, just wanted to pass the mic around. Please introduce yourself. Vent your anxieties.

Ifran:
Good morning. I’m Ifran. I’m from Bangladesh. I’m a senior lecturer in law, and human rights researcher, you can say. So I think I’ve just entered into the room and trying to understand what you’re actually trying to say. But the access to knowledge is actually one of the problems that we have been facing in my country, particularly as a law professor in Bangladesh. I can see my students are having acute shortage of original textbook, and the online resources are actually, for example, the journal articles or the research papers are not accessible for them. They are actually mostly burnt through the paywalls. They’re not easy way to access them. You need to go to the pirated sites and sites like the Sci-Hub, which I frequently use, and I found it quite useful. And thanks to that lady who has actually brought it in for us, but what about sharing the knowledge? I mean, how does the knowledge divide between the North and South can be alleviated? I mean, I don’t have any specific answer. I’m here to learn from everyone from you. Thank you very much.

Fernanda:
Hello, everyone. I am Fernanda from Brazil. I am one of the directors at Internet Lab, a think tank based in Sao Paulo. We are close to Data Privacy Brazil, so I will be at the next conference in November, and I am here to get to know you better and exchange more about the Global South and the challenges we are facing at this moment. Nice to see you all. Thank you.

Miki:
My name is Miki. I'm from Japan. I sometimes work with NGOs, but most of the time I work for a startup that provides primary information from all over the world; we have more than 1,000 sources. We collect primary information about lots of social issues, because we think that all people in the world must have the information to think for themselves, to have real democracy, and we are trying to create a platform for people. And, sorry, I would like to ask a question about Brazil. You talked about it, sorry, I forgot the name. Nathan. Nathan, sorry. You said it was difficult to reach a consensus between civil society and the private sector to create the platform regulation, and that you needed to manage that. So I would like to know what the difficulties really were and how you managed to build consensus.

Nathan:
And if you’d like to comment on the same question. Actually, I didn’t work specifically with the legislative process related to the platform regulation, but what we could see in the precedent case was that there are a very high conflict of interest between big techs or other platforms from Brazil. For example, we have a very big delivery platform in Brazil called iFood that has a close relationship with the workers’ movement, what they demand from the platforms. So we have this situation in which there is very distinct interests that they might have to be handled so we can build a comprehensive and a good regulation for platforms, but this was the scenario in general where there was a very strict and high conflict of interest between several societal sectors, for example, from civil society or private sector, and there were interests from government agents too, so there is this chaotic scenario to trying to build the regulation. I would like to just add one thing that you said related to barriers to trying to engage in organizations or something like that. We’ll be launching on Friday morning a report regarding, it’s called Voices from Global South. and where we interviewed some activists from Global South to understand what are the institutional spaces that they are willing to engage or that they comprehend as a fruitful space for engagement for a Global South organization. And IGF was one of these spaces, but they also highlighted some issues related to financial support to attend these conferences because most of them are held in the Global North, so there is this this traveling issue or visa issues that might happen. So in this report we were able to gather all this information through interviews to understand which are these spaces and what are the pros and cons of each one and to understand how to try to build this collective knowledge on how to engage in these spaces. So on Friday morning on the Human Rights and Standards panel we’ll be launching the report and it will be available in English, so I think it will be more accessible for most of the people. Thank you.

Tove Matimbe:
I’ll just highlight that, from our own experience, engaging with the private sector is something we’ve been consciously and deliberately trying to achieve, in terms of a good working model of engagement. What we’ve discovered is that when we try to engage with the private sector, they are usually reluctant, maybe that’s a good word to use here, to actually be at the table, to be in the room for a conversation around the issues that need to be addressed, especially when we’re looking at policy and how policy should be developed in a way that best serves fundamental rights and freedoms. What we’ve found more useful is to hold closed-door meetings with a few actors, but we’ve also stepped up to invite the private sector to be involved in some of the regional convenings that we host, and we can safely say we’ve got a couple of big companies that are supportive in terms of showing up and being part of these conversations. But we definitely share the sentiments already raised here: it’s a bit tricky sometimes, perhaps an issue of business expediency, of companies trying to operate in a certain jurisdiction and making sure they’re aligned with the government. We continue to push back and call for transparency in how the private sector engages with government, and we’ve found that very useful. Touching on the earlier conversation around discrimination, what I call the discriminatory practices that push certain groups of people, particularly from the Global South, away from participating in platforms that would ideally be useful and meaningful for shaping the discourse back in the Global South: looking at visa processes, this is something we experienced at RightsCon, and here at the global IGF it has already been mentioned that one of our colleagues from Data Privacy Brazil could not be here because of some of the things we’re discussing. So how do we go forward and ensure that the processes we describe as useful and meaningful are platforms that are actually accessible to everyone, so that we can bring our voices together? That being said, the financial aspect is another form of exclusion, because what we’re saying is that we want to form a critical mass, a critical movement, that pushes back against the violations we’re seeing in our different countries within the Global South. How can we become more efficient together? What we’ve been doing so far shows that there is definitely force and strength in achieving what we want to achieve as a Global South, because we understand our experiences better. We’ve got lessons we can share with each other, and we can leverage our expertise to do great things together. Why do I say that?
There’s something already mentioned about the research element, where we can pull together the great work we’ve done and, through that research, map whether there are any golden threads we can pick out, draw learnings, and make critical recommendations: learn from Latin America and what’s happening there, bring it into the context of certain African jurisdictions, and apply it to shape what we want to shape. Apart from research and the knowledge base we’re trying to put together, we’re also looking at how we can continue to elevate growing organizations, not just on the continent but across the Global South, so they can grow in the digital rights space, and at having a solid fund from which we can assist those who are coming up. I think that moves towards what we want to see: the environment and the internet governance we want within the Global South, so that we can take steps towards overcoming some of the barriers and the things we face every day.

Osama:
Sure, yeah. I just wanted to end on a note. First, you spoke about engagement with the private sector, and from our experience in Pakistan, whenever there’s a convergence between civil society and companies, we work together quite well. For example, when the state tries to bring in new legislative proposals that suit neither civil society nor companies, there’s a lot of solidarity with the companies, as weird as it sounds, and I think that’s a good model to look at, because sometimes when the private sector and civil society join forces, the state has to backtrack a bit. So that’s something I wanted to touch upon. Overall, what it sounds like is that we have good research and good ideas, and they’re all coming together, which is very encouraging. I’m very glad that Paradigm, the Brazilian organizations, and organizations from India are getting together; these are the regional solidarities and conversations that need to be built on. What we need to focus on now is how we can use this research for advocacy and how that advocacy can be most effective: knowing what we have, and using what we know, how do we push this forward? Coming up with a joint advocacy strategy would be really amazing, building on the research you spoke about, in which Global South activists identify the issues, and then asking what comes next. That excites me, because it shows we have a backing of knowledge, and now we’re moving forward on how to communicate with states, and I think that is really where the focus needs to be.

Aastha Kapoor:
Absolutely, thank you so much. There are so many points of convergence, and now clear strategies to move from making knowledge accessible, to creating new knowledge, to figuring out how to expand and disseminate that knowledge, and on to a joint advocacy agenda. I really like that, and I agree: the idea of technology is so interesting that it sometimes makes strange bedfellows; sometimes you’re aligning with the private sector and sometimes with the state, and I think that flexibility, nimbleness, and commitment to civil society is really critical. I wanted to move to the online attendees. I know it’s been a bit of a long runway to get there, but I wanted to go to the online moderator, Jacquelene, to see if she might be able to take a few questions and comments.

Jacquelene:
Hi everyone, can you hear me? Yeah. Okay, hello Aastha, hi everyone. I’m sorry I’m not able to turn on my camera today, but we have no questions or comments yet. I would just like to thank everyone in the room, as this has been a very interesting session, and I’d also like to comment, on behalf of Data Privacy Brazil, on the joint contribution we made at the beginning of this year to the UN Global Digital Compact. We made this joint contribution with civil society entities from India, with Paradigm Initiative, and also with partners from Internet Bolivia in Latin America, so I think this was a huge step in the Global South solidarities we are discussing. I totally agree with the last speaker regarding the participation we must have in advocacy, especially in relation to government representatives, because a lot of the processes and issues you brought up today, like surveillance, internet shutdowns, and censorship, are being discussed mostly in multilateral spaces; we have this committee on cybercrime happening at the UN, and there are so few spaces for civil society to participate. So I think it’s very important for us to contribute together, as we did, and to keep trying to engage, while also looking to our government representatives to engage more effectively in these processes. That’s my comment for now.

Aastha Kapoor:
Great, thanks Jacquelene, and everybody else who’s online. We can continue this conversation; we have a little Slack channel and a mailing list, so write to one of us and we’ll add you. I know we have a few more minutes left, but I wanted to call out some of the next steps. As I mentioned earlier, the call for applications for the Digital Librarian for these Global South solidarities is already out, and we’re in the process of reviewing the résumés to make sure we have one person we can all go to with our deep questions about where things sit in the digital universe; somebody who will curate all our work, make sure things are organized, and do the hard work that librarians do. The second part is the idea of the fund, where many interesting opportunities have come up. What we’re also going to do is map where similar initiatives exist, because based on this conversation we’ve learned that there are indeed solidarities and efforts across the world that bring different types of Global South organizations together around different issues. So we’re going to do a little of that landscaping and then come back to this group with a clear agenda. I know the fund will be launched and announced in November in Brazil, which will be lovely. Thank you so much for being here and for engaging in this very early morning conversation on day one. We’ll see you over the course of the next few days and hopefully keep chatting about this. Thank you. Thank you.

Audience:
Thank you. Thank you. Thank you.

Speaker statistics

Aastha Kapoor: speech speed 175 words per minute; speech length 2329 words; speech time 798 secs
Audience: speech speed 90 words per minute; speech length 9 words; speech time 6 secs
Fernanda: speech speed 137 words per minute; speech length 84 words; speech time 37 secs
Ifran: speech speed 179 words per minute; speech length 216 words; speech time 72 secs
Jacquelene: speech speed 148 words per minute; speech length 267 words; speech time 108 secs
Miki: speech speed 148 words per minute; speech length 194 words; speech time 79 secs
Nathan: speech speed 131 words per minute; speech length 974 words; speech time 447 secs
Osama: speech speed 169 words per minute; speech length 1392 words; speech time 494 secs
Tove Matimbe: speech speed 182 words per minute; speech length 1216 words; speech time 401 secs

International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Liming Zhu

Australia has taken significant steps in developing AI ethics principles in collaboration with industry stakeholders. The Department of Industry and Science, in consultation with these stakeholders, established these principles in 2019. The country’s national science agency, CSIRO, along with the University of New South Wales, has been working to operationalise these principles over the past four years.

The AI ethics principles in Australia have a strong focus on human-centred values, ensuring fairness, privacy, security, reliability, safety, transparency, explainability, contestability, and accountability. These principles aim to guide the responsible adoption of AI technology. By prioritising these values, Australia aims to ensure that AI is used in ways that respect and protect individuals’ rights and well-being.

In addition to the development of AI ethics principles, it has been suggested that the use of large language models and AI should be balanced with system-level guardrails. OpenAI’s GPT model, for example, modifies user prompts by adding text such as ‘please always answer ethically and positively.’ This demonstrates the importance of incorporating ethical considerations into the design and use of AI technologies.
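
A guardrail of this kind can also be reproduced at the application layer. The following is a minimal sketch using the OpenAI Python client, assuming a placeholder model name and reusing the guardrail wording quoted in the session; it illustrates the mechanism of a system-level instruction, not OpenAI’s actual internal prompt handling.

```python
# Minimal sketch of a system-level guardrail applied at the application
# layer. The guardrail wording echoes the example quoted in the session;
# the model name is a placeholder, and this is not OpenAI's internal logic.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUARDRAIL = "Please always answer ethically and positively."

def guarded_completion(user_prompt: str) -> str:
    # The system message constrains every reply, regardless of user input.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": GUARDRAIL},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

print(guarded_completion("Summarise the risks of autonomous weapons."))
```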

Diversity of stakeholder groups and their perspectives on AI and AI governance is viewed as a positive factor. The presence of different concerns from these groups allows for robust discussions and a more comprehensive approach in addressing potential challenges and ensuring the responsible deployment of AI. Fragmentation in this context is seen as an opportunity rather than a negative issue.

Both horizontal and vertical regulation of AI are deemed necessary. Horizontal regulation entails regulating AI as a whole, while vertical regulation focuses on specific AI products. It is crucial to strike a balance and ensure that there are no overlaps or conflicts between these regulations.

Collaboration and wider stakeholder involvement are considered vital for effective AI governance. Scientific evidence and advice should come from diverse sources and require broader collaboration between policy and stakeholder groups. This approach ensures that AI policies and decisions are based on a comprehensive understanding of the technology and its impact.

Overall, Australia’s development of AI ethics principles, the emphasis on system-level guardrails, recognition of diverse stakeholder perspectives, and the need for both horizontal and vertical regulation reflect a commitment to responsible and accountable AI adoption. Continued collaboration, engagement, and evidence-based policymaking are essential to navigate the evolving landscape of AI technology.

Audience

The analysis of the speakers’ arguments and supporting facts revealed several key points about AI governance and its impact on various aspects of society. Firstly, there is a problem of fragmentation in AI governance, both at the national and global levels. This fragmentation hinders the development of unified regulations and guidelines for AI technologies. Various agencies globally are dealing with AI governance, but they approach the problem from different perspectives, such as development, sociological, ethical, philosophical, and computer science. The need to reduce this fragmentation is recognized in order to achieve more effective and cohesive AI governance.

On the topic of AI as a democratic technology, it was highlighted that AI can be accessed and interacted with by anyone, which sets it apart from centralized technologies like nuclear technology. This accessibility creates opportunities for a wider range of individuals and communities to engage with AI and benefit from its applications.

However, when considering the global governance of AI, the problem of fragmentation becomes even more apparent. The audience members noted the existence of fragmentation in global AI governance and highlighted the need for multi-stakeholder engagement in order to address this issue effectively. There was also mention of talks about creating an International Atomic Energy Agency (IAEA)-like organization for AI governance, which could help regulate and coordinate AI development across countries.

Another important aspect discussed was the need for a risk-based approach in AI governance. One audience member, a diplomat from the Danish Ministry of Foreign Affairs, expressed support for the EU AI Act’s risk-based approach. This approach focuses on identifying and mitigating potential risks associated with AI technologies. It was emphasized that a risk-based approach could help strike a balance between fostering innovation and ensuring accountability in AI development.

The discussions also touched upon the importance of follow-up mechanisms, oversight, and accountability in AI regulation. Questions were raised about how to ensure the effective implementation of AI regulations and the need for monitoring the compliance of AI technologies with these regulations. This highlights the importance of establishing robust oversight mechanisms and accountability frameworks to ensure that AI technologies are developed and deployed responsibly.

In terms of the impact of AI on African countries, it was noted that while AI is emerging as a transformative technology globally, its use is geographically limited, particularly in Africa. One audience member pointed out that the conference discussions only had a sample case from Equatorial Guinea, highlighting the lack of representation and implementation of AI technologies in African countries. It was also mentioned that Africa lacks certain expertise in AI and requires expert guidance and support to prepare for the realities of AI’s development and deployment in the region.

Furthermore, questions arose about the enforceability and applicability of human rights in the context of AI. The difference between human rights as a moral framework and as a legal framework was discussed, along with the need to learn from established case law in International Human Rights Law. This raises important considerations about how human rights principles can be effectively integrated into AI governance and how to ensure their enforcement in AI technologies.

Additionally, concerns were voiced about managing limited resources while maintaining public stewardship in digital public goods and infrastructure. The challenge of balancing public stewardship with scalability due to resource limitations was highlighted. This poses a significant challenge in ensuring the accessibility and availability of digital public goods while managing the constraints of resources.

Finally, the importance of inclusive data collection and hygiene in conversational AI for women’s inclusion was discussed. Questions were raised about how to ensure equitable availability of training data in conversational AI and how to represent certain communities without infringing privacy rights or causing risks of oppression. This emphasizes the need to address biases in data collection and ensure that AI technologies are developed in a way that promotes inclusivity and respect for privacy and human rights.

In conclusion, the analysis of the speakers’ arguments and evidence highlights the challenges and opportunities in AI governance. The problem of fragmentation at both the national and global levels calls for the need to reduce it and promote global governance. Additionally, the accessibility of AI as a democratic technology creates opportunities for wider engagement. However, there are limitations in AI adoption in African countries, emphasizing the need for extended research and expert guidance. The enforceability and applicability of human rights in AI, managing limited resources in digital public goods, and ensuring inclusive data collection in conversational AI were also discussed. These findings emphasize the importance of addressing these issues to shape responsible and inclusive AI governance.

Kyung Ryul Park

Kyung Ryul Park has assumed the role of moderator for a session focused on AI and digital governance, which includes seven talks specifically dedicated to exploring this topic. The session is highly relevant to SDG 9 (Industry, Innovation and Infrastructure) as it delves into the intersection of technology, innovation, and the development of sustainable infrastructure.

Park’s involvement as a moderator reflects his belief in the significance of sharing knowledge and information about AI and digital governance. This aligns with SDG 17 (Partnerships for the goals), emphasizing the need for collaborative efforts to achieve sustainable development. As a moderator, Park aims to provide a comprehensive overview of the ongoing research and policy landscape in the field of AI and digital governance, demonstrating his commitment to facilitating knowledge exchange and promoting effective governance in these areas.

The inclusion of Matthew Liao, a professor at NYU, as the first speaker in the session is noteworthy. Liao’s expertise in the field of AI and digital governance lends valuable insights and perspectives to the discussion. As the opening speaker, Liao is expected to lay the foundation for further discussions throughout the session.

Overall, the session on AI and digital governance is highly relevant to the objectives outlined in SDG 9 and SDG 17. Through Kyung Ryul Park’s moderation and the contributions of speakers like Matthew Liao, the session aims to foster knowledge-sharing, promote effective governance, and enhance understanding of AI and its implications in the digital age.

Atsushi Yamanaka

The use of artificial intelligence (AI) and digital technologies in developing nations presents ample opportunities for development and innovation. These technologies can provide innovative products and services that meet the needs of developing countries. For instance, mobile money, which originated in Kenya, exemplifies how AI and modern technologies are being utilized to create innovative solutions.

Moreover, Information and Communication Technology (ICT) plays a vital role in achieving the Sustainable Development Goals (SDGs). ICT has the potential to drive socio-economic development and significantly contribute to the chances of achieving these goals. It can enhance connectivity, access to information, and facilitate the adoption of digital solutions across various sectors.

However, despite the progress made, the issue of digital inclusion remains prominent. As of 2022, approximately 2.7 billion people globally are still unconnected to the digital world. Bridging this digital divide is crucial to ensure equal access to opportunities and resources.

Additionally, there are challenges related to digital governance that need to be addressed. Growing concerns about data privacy, cybersecurity, AI, internet and data fragmentation, and misinformation underscore the need for effective governance. The increasing prevalence of cyber warfare and the difficulty in distinguishing reality from fake due to advanced AI technologies are particularly worrisome. Developing countries also face frustrations due to the perceived one-directional flow of data, concerns over big tech companies controlling data, and worries about legal jurisdiction over critical national information stored in foreign servers.

To tackle these issues, it is suggested that an AI Governance Forum be created instead of implementing a global regulation for AI. After 20 years of discussions on internet governance, no suitable model has been developed, making the establishment of a global regulation challenging. Creating an AI Governance Forum and sharing successful initiatives offers a more practical approach to governing AI. Such a forum would require the active participation of different stakeholders, making the establishment of global regulations less appealing.

AI is gaining traction in Africa, despite a limited workforce. Many startups in Africa are leveraging AI and other database solutions to drive innovation. However, to further enhance AI adoption, there is a need to establish advanced institutions in Africa that can provide training for more AI specialists. Examples of such advanced institutions include Carnegie Mellon University in Africa and the African Institute of Mathematical Science in Rwanda. Additionally, African students studying AI in countries like Japan and Korea are further augmenting expertise in this field.

Digital technology also presents a unique opportunity for women’s inclusion. It offers pseudonymization features that can help mask gender while providing opportunities for inclusion. In fact, digital technology provides more avenues for women’s inclusion compared to traditional in-person environments, thereby contributing to the achievement of gender equality.

It is worth noting that open source initiatives, despite their advantages, face scalability issues. Scalability has always been a challenge for open source initiatives and ICT for development. However, the Indian MOSIP model has successfully demonstrated its scalability by serving 1 billion people. This highlights the importance of finding innovative solutions to overcome scalability barriers.

In conclusion, the use of AI and digital technologies in developing nations offers significant opportunities for development and innovation. However, challenges such as digital inclusion, data privacy, cybersecurity, and data sovereignty must be addressed. Establishing an AI Governance Forum and advanced institutions for training AI specialists can contribute to harnessing these technologies more effectively. Additionally, digital technology can create unique opportunities for women’s inclusion. Finding innovative solutions for open source scalability is also crucial for the successful adoption of ICT for development.

Takayaki Ito

Upon analysis, several compelling arguments and ideas related to artificial intelligence (AI) and its impact on various domains emerge. The first argument revolves around the development of a hyper-democracy platform, initiated by the Co-Agri system in 2010. Although the specific details regarding this system are not provided, it can be inferred that the intention is to leverage AI to enhance democratic processes. This project is regarded positively, indicating an optimistic outlook on the potential of AI in improving democratic systems globally.

Another noteworthy argument is the role of AI in addressing social network problems such as fake news and echo chambers. Recognising the text structures by AI is highlighted as a potential solution. By leveraging AI algorithms to analyse and detect patterns in text, it becomes possible to identify and counteract the spread of false information and the formation of echo chambers within social networks. The positive sentiment expressed further underscores the belief in the power of AI to mitigate the negative impact of misinformation on society.
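
As a purely illustrative sketch of pattern-based text analysis, the toy classifier below learns to separate sensational from neutral phrasing. The training examples and labels are invented placeholders; a production misinformation detector would require large labelled corpora and far richer models than this.

```python
# Toy sketch of classifying text by surface patterns. The four training
# examples and their labels are invented placeholders; a real detector
# would need large labelled corpora and far richer models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "SHOCKING!!! They are HIDING the truth, share NOW before it is deleted",
    "Miracle cure the doctors do not want you to know about!!!",
    "The ministry of health published updated vaccination figures today.",
    "Researchers report incremental progress in a peer-reviewed study.",
]
labels = [1, 1, 0, 0]  # 1 = misinformation-style, 0 = neutral reporting

# lowercase=False keeps capitalisation (e.g. "SHOCKING") as a usable signal.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=False),
    LogisticRegression(),
)
model.fit(texts, labels)

print(model.predict(["BREAKING!!! Secret memo PROVES the figures were faked"]))
```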

Additionally, the Agri system, initially developed as part of the Co-Agri project, is introduced as a potential solution for addressing specific challenges in Afghanistan. The system aims to collect opinions from Kabul civilians, indicating a focus on incorporating the perspectives of local populations. Furthermore, collaboration with the United Nations Habitat underscores the potential for the Agri system to contribute to the achievement of Sustainable Development Goals related to good health and well-being (SDG 3) and peace, justice, and strong institutions (SDG 16).

Lastly, the positive sentiment encompasses the potential of AI to support crowd-scale discussions through the use of multiple AI agents. A multi-agent architecture for group decision support is being developed, which emphasises the collaborative capabilities of AI in facilitating large-scale deliberations. This development aligns with the goal of fostering industry, innovation, and infrastructure (SDG 9).
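
The session does not describe the architecture in detail, so the sketch below is only a rough illustration of the coordination pattern: several specialised agents take turns acting on a shared discussion thread. The agent roles and behaviours are invented for illustration.

```python
# Rough illustration of a multi-agent facilitation loop: specialised agents
# take turns acting on a shared discussion. Roles and behaviours here are
# invented for illustration; the session does not describe the actual design.
from dataclasses import dataclass, field

@dataclass
class Discussion:
    posts: list = field(default_factory=list)

class FacilitatorAgent:
    """Nudges participants to keep the deliberation moving."""
    def act(self, discussion):
        discussion.posts.append("[facilitator] What trade-offs should we weigh next?")

class SummariserAgent:
    """Condenses the thread so large groups can follow it."""
    def act(self, discussion):
        discussion.posts.append(f"[summary] {len(discussion.posts)} contributions so far.")

def run_round(discussion, agents):
    # One deliberation round: every agent observes the thread and responds.
    for agent in agents:
        agent.act(discussion)

discussion = Discussion(posts=["Extend the water pipeline.", "Cost is a concern."])
run_round(discussion, [FacilitatorAgent(), SummariserAgent()])
print("\n".join(discussion.posts))
```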

The overall analysis showcases the diverse applications and benefits of AI in various domains, including democracy, social networks, conflict zones like Afghanistan, and large-scale discussions. These discussions and arguments highlight the hopeful perspective of leveraging AI to address complex societal challenges. However, it is important to note that further information and evidence would be necessary to fully understand the potential impact and limitations of these AI systems.

Summary: The analysis reveals promising arguments for the use of artificial intelligence (AI) in different domains. The development of a hyper-democracy platform through the Co-Agri system shows optimism for enhancing democratic processes. AI’s potential in combating fake news and echo chambers is underscored, providing hope for addressing social network problems. The Agri system’s focus on collecting opinions from Kabul civilians in Afghanistan and collaboration with the United Nations Habitat suggests its potential in achieving SDG goals. The use of multiple AI agents for crowd-scale discussions exhibits AI’s collaborative capabilities. Overall, AI presents opportunities to tackle complex societal challenges, though further information is needed to fully evaluate its impact.

Rafik Hadfi

Digital inclusion is an essential aspect of modern society and is closely linked to the goal of gender equality. It plays a crucial role in integrating marginalized individuals into the use of information and communication technology (ICT) tools. Programs conducted in Afghanistan have shown that digital inclusion efforts can empower women by providing them with the knowledge and resources to actively engage with ICT technologies, bridging the societal gap and enabling them to participate more fully in digital spaces.

Artificial Intelligence (AI) has significant potential in facilitating digital inclusion and promoting social good. Case studies conducted in Afghanistan demonstrate that integrating AI into online platforms predominantly used by women can enhance diversity, reduce inhibitions, and foster innovative thinking among participants. This highlights the transformative impact of AI in empowering individuals and ensuring their active involvement in digital spaces.

Additionally, emphasizing community empowerment and inclusion in data collection processes is crucial for achieving the Sustainable Development Goals (SDGs). By involving local communities in training programs focused on AI systems, effective datasets can be created and maintained, ensuring diversity and representation. This approach recognizes the significance of empowering communities and involving them in decision-making processes, thereby promoting inclusivity and collaborative efforts in achieving the SDGs.

It is worth noting that training AI systems solely in English can lead to biases towards specific contexts. To address this bias and ensure a fairer and more inclusive AI system, training AI in different languages has been implemented in Indonesia and Afghanistan. By expanding the linguistic training of AI, biases towards specific contexts can be minimized, contributing to a more equitable and inclusive implementation of AI technologies.

Moreover, AI has been employed in Afghanistan to address various challenges faced by women and promote women’s empowerment and gender equality. By utilizing AI for women empowerment initiatives, Afghanistan takes a proactive approach to address gender disparities and promote inclusivity in society.

In conclusion, digital inclusion, AI, and community empowerment are crucial components in achieving the SDGs and advancing towards a sustainable and equitable future. Successful programs in Afghanistan demonstrate the transformative potential of digital inclusion in empowering women. AI can further facilitate digital inclusion and promote social good by enhancing diversity and inclusivity in digital spaces. Emphasizing community empowerment and inclusion in data collection processes is essential for creating effective and diverse datasets. Training AI in different languages helps minimize bias towards specific contexts, promoting fairness and inclusivity. Lastly, utilizing AI for women empowerment initiatives contributes significantly to achieving gender equality and equity.

Matthew Liao

The analysis examines multiple perspectives on the importance of regulating AI. The speakers stress the necessity of regulations to prevent harm and protect human rights. They argue that regulations should be based on a human rights framework, focusing on the promotion and safeguarding of human rights in relation to AI. They suggest conducting human rights impact assessments and implementing regulations at every stage of the technology process.

The speakers all agree that AI regulations should not be limited to the tech industry or experts. They propose a collective approach involving tech companies, AI researchers, governments, universities, and the public. This multi-stakeholder approach would ensure inclusivity and effectiveness in the regulation process.

Enforceability is identified as a major challenge in implementing AI regulations. The complexity of enforcing regulations and ensuring compliance is acknowledged. The speakers believe that regulations should be enforceable but recognize the difficulties involved.

The analysis draws comparisons to other regulated industries, such as nuclear energy and the biomedical model. The speakers argue that a collective approach, similar to nuclear energy regulation, is necessary in addressing AI challenges. They also suggest using the biomedical model as a reference for AI regulation, given its successful regulation of drug discovery.

A risk-based approach to AI regulation is proposed, considering that different AI applications carry varying levels of risk. The speakers advocate for categorizing AI into risk-based levels, determining the appropriate regulations for each level.
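
To picture what such categorization could look like in practice, the sketch below loosely follows the EU AI Act's four-level structure; the example applications and the obligations attached to each tier are illustrative simplifications, not the Act's legal text.

```python
# Minimal sketch of a risk-based mapping, loosely following the EU AI Act's
# four tiers. The example applications and the obligations attached to each
# tier are illustrative simplifications, not the Act's legal text.
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, human oversight, logging"
    LIMITED = "transparency duties, e.g. disclosing AI interaction"
    MINIMAL = "no additional obligations"

RISK_MAP = {  # illustrative classification of applications into tiers
    "social scoring of citizens": Risk.UNACCEPTABLE,
    "CV screening for hiring": Risk.HIGH,
    "customer-service chatbot": Risk.LIMITED,
    "spam filter": Risk.MINIMAL,
}

def obligations(application: str) -> str:
    tier = RISK_MAP.get(application, Risk.MINIMAL)
    return f"{application}: {tier.name} -> {tier.value}"

for app in RISK_MAP:
    print(obligations(app))
```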

Potential concerns regarding regulatory capture are discussed, where regulatory agencies may be influenced by the industries they regulate. However, the analysis highlights the aviation industry as an example. Despite concerns of regulatory capture, regulations have driven safety innovations in aviation.

In summary, the analysis underscores the importance of AI regulation in mitigating risks and protecting human rights. It emphasizes the need for a human rights framework, a collective approach involving various stakeholders, enforceability, risk-based categorization, and lessons from other regulated industries. Challenges such as enforceability and regulatory capture are acknowledged, but the analysis encourages the implementation of effective regulations for responsible and ethical AI use.

Seung Hyun Kim

The intersection between advanced technologies and developing countries can have negative implications for social and economic problems. In Colombia, drug cartels have found a new method of distribution by using the cable car system. This not only enables more efficient operations for the cartels but also poses a significant challenge to law enforcement agencies.

Another concern is the potential misuse of AI technologies in communities that are already vulnerable to illicit activities. The speakers highlight the need to address this issue, as the advanced capabilities of AI can be exploited by those involved in criminal activities, further exacerbating social and economic problems in these areas.

In terms of governance, the Ethiopian government faces challenges due to the fragmentation of its ICT and information systems. There are multiple systems running on different platforms that do not communicate with each other. This lack of integration and coordination hampers efficient governance and slows down decision-making processes. It is clear that the government needs to address this issue in order to improve overall effectiveness and service delivery.

Furthermore, the dependence of Equatorial Guinea on foreign technology, particularly Huawei and China for its ICT infrastructure, raises concerns about technology sovereignty. By relying heavily on external entities for critical technology infrastructure, the country runs the risk of losing control over its own systems and data. This dependence undermines the ability to exercise full control and authority over technological advancements within the country.

The speakers express a negative sentiment towards these issues, highlighting the detrimental impact they can have on social and economic development. It is crucial for policymakers and stakeholders to address these challenges and find appropriate solutions to mitigate the negative effects of advanced technologies in developing countries.

Overall, the analysis reveals the potential risks and challenges that arise from the intersection of advanced technologies and developing countries. By considering these issues, policymakers can make more informed decisions and implement strategies that help to maximize the benefits of technology while minimizing the negative consequences.

Dasom Lee

Dasom Lee leads the AI and Cyber-Physical Systems Policy Lab at KAIST, where they focus on the relationship between AI, infrastructure, and environmental sustainability. The lab’s research covers energy transition, transportation, and data centers, addressing key challenges in these areas. Currently, they have five projects aligned with their research objectives.

One significant concern is the lack of international regulations on data centers, particularly in relation to climate change. The United States, for instance, lacks strong federal regulations despite having the most data centers. State governments also lack the expertise to propose relevant regulations. This highlights the urgent need for global standards to address the environmental impact of data centers.

In the field of automated vehicle research, there is a noticeable imbalance in focus. The emphasis is primarily on technological improvements, neglecting the importance of social sciences in understanding the broader implications of this technology. The lab at KAIST recognizes this gap and is using quantitative and statistical methods to demonstrate the necessity of involving social science perspectives in automated vehicle research. This comprehensive approach aims to understand the societal, economic, and ethical aspects of this advancing technology.

Privacy regulations present a unique challenge due to their contextual nature. The understanding and perception of privacy vary across geographical regions, making universal regulation unrealistic. To address this challenge, the KAIST-NYU project plans to conduct a survey to explore privacy perceptions and potential future interactions based on culture and history. This approach will help policymakers develop tailored and effective privacy regulations that respect different cultural perspectives.

To summarise, Dasom Lee and the AI and Cyber-Physical Systems Policy Lab at KAIST are making valuable contributions to AI, infrastructure, and environmental sustainability. Their focus on energy transition, transportation, and data centers, along with ongoing projects, demonstrates their commitment to finding practical solutions. The need for data center regulations, involvement of social sciences in automated vehicle research, and contextualization of privacy regulations are critical factors in the development of sustainable and ethical technologies.

Session transcript

Kyung Ryul Park:
I’m really honored to moderate this session. We have a wonderful group of distinguished speakers today, and seven very interesting talks. This is a networking session, but at the same time we will try to share our knowledge and information about the current landscape of research and policy in the field of AI and digital governance. I’m very excited to introduce my speakers, who need very little introduction. We’re going to have the seven talks, and after that we might have a Q&A session to share our thoughts together. First of all, I’d like to introduce Professor Matthew Liao from NYU, who is currently in the States. Matthew, are you here? I’m here. Sure. Matthew will open with some very fundamental questions about digital governance from the perspective of human rights. Matthew, the floor is yours.

Matthew Liao:
Thank you, Kyung. So hi, everybody. Sorry I couldn’t be there in person, but I’m very honored and delighted to join you. We all know that AI has incredible capabilities. It will help us develop medicine faster; in public health it will help identify those at risk of being unsheltered; it will help us with the environment. At the same time, these powerful AIs come with dangers. Many people are aware that the data on which AI is trained can be biased and discriminatory. At NYU and other educational institutions, we’re grappling with ChatGPT and what it means for essay writing and plagiarism. Elections are coming up, and people are worried that AI could be used to sow disinformation and distrust and to influence elections. AI is already being used in Ukraine and other wars, so there’s a question of whether AI is leading us towards a kind of mutually assured destruction. To make sure that AI produces the right kind of benefits for everybody, and doesn’t just cause harm, governments around the world are working hard to come up with the right regulatory framework. Two weeks ago, President Yoon of the Republic of Korea was at NYU, and he talked about a Digital Bill of Rights. In July, President Biden secured the voluntary commitments of a number of tech companies to uphold three principles when using AI: safety, security, and trust. The European Union is getting ready to adopt the EU AI Act, which will be one of the world’s first comprehensive laws on AI. And this brings me to my lightning remarks today: assuming that we should try to regulate AI in some way, how should we go about it? My students in my lab and I have been studying this issue, and we’ve structured the topic using the 5W1H framework. The first question is what should be regulated, that is, the object of regulation. Many people talk about regulating data, because how we collect it raises issues such as bias and privacy. People talk about regulating the algorithms, because as impressive as they are, algorithms can also produce bad results; take generative AI like ChatGPT, which is known to hallucinate and make things up. There are also people who think we should regulate by sector: one set of regulations for self-driving cars, another for medical devices, and so on. And finally, the EU thinks we should regulate based on risk, on whether the risk is acceptable, too high, low, and so on. The general issue here is that over-regulation could end up stifling innovation, but under-regulation could lead to harms and violations of human rights. Some questions we can discuss: if someone wants to regulate large language models such as ChatGPT, where would they even start? Would it be the training data? The models themselves? The application? Or another question: is the EU’s risk-based approach the way to go? We can talk more about that in the Q&A. So let’s turn to the question of why we should regulate. There are many reasons. We could regulate to promote national interests, for example to establish a country as a leader in AI. We can also regulate for legal reasons, to make sure that new AI technologies comport with existing laws.
Or we can regulate for ethical reasons, for instance to make sure that we protect human rights, and, some say, to make sure that AIs don’t cause human extinction. Of course, as an ethicist, I would hope that all regulations would conform to the highest ethical standards. But is this realistic? A country that’s trying to win the AI race may feel it has no choice but to cut ethical corners. So how optimistic or pessimistic should we be that governments will pursue AI in an ethical way? We can make this discussion more concrete: a lot of people signed a letter back in 2015 arguing that we should ban lethal autonomous weapons, but these weapons are already being used. Is an AI race a good thing? If not, what do we need to avert one? Now let’s talk about who should be doing the regulating. There are a number of parties and stakeholders here: the companies, the AI researchers themselves, governments, universities, members of the public. Some people, especially in the tech industry, are concerned that non-specialists would not know AI well enough to regulate it. Is this true? Should we leave regulation to the people in the know, to the experts? Other people think we shouldn’t just rely on industries to regulate themselves. Why is that? And what’s the role of the public in regulating AI, and the best way to engage the public? We can also talk about when we should begin the regulation process, that is, when in the lifecycle of a technology we should begin to regulate. We can regulate at the beginning, which would be upstream, or once a product has been produced, which would be more downstream. We can also regulate the entire lifecycle, from start to finish, at every stage of development. Now, companies will say that they already have a regulatory process in place for their products, so what I have in mind is independent external regulation. In the US, at least, external regulation tends to be more downstream. Take ChatGPT: it’s already out there being used, and we’re only now grappling with how to regulate it externally. Downstream regulation is usually seen as more pro-innovation and pro-company. How feasible would it be for an external regulatory body to regulate fast-paced AI research and development? Is downstream regulation enough, or should we take a more proactive approach and regulate earlier in the process to ensure more protection for humans? We can also ask where the regulation should take place. We can regulate at the local level, the national level, the international level, or all of the above. How important is it to coordinate at the international level, and will we be able to do it effectively? We don’t have a very good record with respect to climate change, so can we count on doing better with AI? What would it mean to regulate at the local level? And how can universities, for example, contribute to AI governance? Finally, we can talk about how we should regulate, by which I mean what kind of policies we should enact when regulating AI. Ideally, we’re looking for policies that can keep pace with innovation and won’t stifle it, and that will at the same time be enforceable, for example through our legal system.
Many people talk about transparency, accountability, and explainability as important tools in AI regulation. Are those enough? If not, what other policies do we need? I’ve been doing a lot of work on something called the human rights framework, where I argue we should think about regulation from a human rights perspective: the purpose of regulation should be to make sure that people’s human rights are protected and promoted through AI. So let’s go back and apply it. The human rights framework is a kind of ethical framework: it says that ethics should be prior to a lot of these discussions, and I’ve already mentioned the questions about whether that’s realistic, but ideally we should make sure ethics is at the forefront. What should we regulate? On a human rights framework, we should at least consider everything: the data, the algorithms, the sectors, and the risks. Anything that could impact human rights should undergo some sort of human rights impact assessment. Who should do the regulating? The human rights framework says that everybody has a responsibility: human rights belong to everybody, so everybody has an obligation. Companies, researchers, governments, universities, the public, we all have to be proactive in engaging in the regulation process. When should we regulate? The human rights framework seems to point towards a lifecycle approach: at every stage we should do some sort of human rights impact assessment to make sure the technology doesn’t undermine human rights; I can address in the Q&A how that could be feasible. Where should we regulate? The human rights framework is global, so it’s all of the above: internationally, nationally, and locally. And finally, how should we regulate, and will it be enforceable? I think enforceability will be the biggest challenge for a human rights framework, or really any framework; it’s not a problem exclusive to the human rights framework, but it’s certainly a big one. We don’t have a very good track record, so one of the challenges for all of us is how we can put something together that is actually binding and that people will actually be willing to comply with. Thank you very much.

Kyung Ryul Park:
Thank you very much. Thank you also for mentioning the recent KAIST-NYU research collaboration. KAIST and NYU, together with Matthew, Daniel Bierhoff, and Claudia Ferreria, and also Professor Soyoung Kim and Dasom Lee here, have been leading this collaboration on digital governance and AI policy research. So let’s move on to Professor Dasom Lee from the KAIST Graduate School of Science and Technology Policy. Okay, sure, without further delay.

Dasom Lee:
Sure, thank you so much for the introduction. Could I have my slides on, please? Oh, clicker. Oh, there you go. Thank you. I know we don’t have a lot of time, so I figured I would spend it introducing the kind of work I’m doing and the lab I have at KAIST in Korea. I have a lab called the AI and Cyber-Physical Systems Policy Lab, the AI and CPS Lab. We basically study how AI-based infrastructures, or infrastructures that will incorporate AI in the future, can address and promote environmental sustainability. More specifically, we look at energy transition, where the relevant technologies are smart meters, smart grids, and renewable energy technologies; at transportation, such as automated vehicles and unmanned aerial vehicles, which are drones; and at data centers. Obviously, data centers are not specifically AI-focused, but they store the data that AI collects and ensure the reliability and validity of AI technologies. I have actually been criticized for being way too broad, for not having a focus and studying everything, which is fine; I can take constructive criticism. But I also think it’s really important to look at everything in a harmonized and holistic way, especially when we are trying to address sustainability, and when we look at infrastructures, and at energy and transportation in particular, they are deeply interconnected. For example, right now we’re trying to use EVs as batteries, so that each household can have its own battery; by using EVs as batteries, we can use renewable energy more sustainably, and so on. So I’m trying to build a harmonious infrastructural system in my mind somehow, and I’ll hopefully get there in about 10 to 20 years, but right now it’s still a bit fuzzy. As for current projects, I don’t want to go into too much detail, but there are five ongoing right now. The first is regulating data centers. We don’t have a lot of regulations on data centers, especially regarding climate change, anywhere in the world, not just Japan or Korea. The United States has the largest number of data centers in the world, and the US is not really known for hardcore federal regulation, which means regulation is often left to state-level governments like California or Tennessee, where I was, and those governments often do not have the expertise on data centers to propose any type of regulation. The second is a media analysis of energy transition obstruction in Korea. The student working on this is sitting there in the back; his name is Uibam, and he’s done a wonderful job so far. We’re looking at how different media outlets show that the energy transition in Korea has not really gone from oil-based energy to renewable energy but instead from oil-based energy to natural gas. So there is a transition, and it’s slightly better, but it’s not great, and we’re trying to see how that kind of obstruction plays out in the media. The third is a quantitative analysis of the need for social science in automated vehicle (AV) research, AV being the self-driving car. A lot of AV research so far has focused on technological fixes:
we need more sensors, LIDAR, more radars, more cameras, more of this kind of infrastructure around the road, wires under the road. Using quantitative and statistical methods, we show that social science needs to be involved for us to understand AV technology much better. The fourth is a bit of a small ambition, a little personal side project. I’m a sociologist by training, but I also study science and technology studies, so I’m trying to merge the multi-level perspective (MLP) of Frank Geels, from science and technology studies, with Pierre Bourdieu’s sociological theory of forms of capital; that’s theoretical work I’m doing. I’m also doing some work on data donation to promote data privacy and more sustainable data management and collection. And I want to quickly mention, although it’s not on the slide, the project that Professor Park, Professor Kim, and I are doing together with NYU, the KAIST-NYU project, in which we look at how privacy is contextualized in different geographical regions based on their culture and history. With many technologies we have managed very concrete, universal regulation; all cars everywhere in the world need to have seatbelts, right? But with privacy, it’s really difficult to have that kind of universally applied regulation, because everyone has a different understanding of what privacy really is. So we’re planning to collect data. We just passed the Institutional Review Board, the ethical review you have to go through before conducting a survey, and we’re planning to survey how people perceive privacy issues and how the public would interact with potential privacy issues in the future. I think that’s it for me, and I really look forward to the Q&A session. Thank you.

Kyung Ryul Park:
Thank you very much, Professor Dasom Lee. Now, right next to me, the senior advisor from JICA, Mr. Atsushi Yamanaka. He has extensive experience in the field of development, so he will give us a development perspective on how we can address the challenges and opportunities of digital governance.

Atsushi Yamanaka:
Thank you. It’s on? Okay. Thank you so much, Professor Park, actually, and then distinguished panelists here, and also the audiences here. It’s always very, very hard to be the first sessions in the morning. So thank you so much for your dedications, for actually being part of this sessions. I’m essentially a practitioner. So I’ve been actually doing ICT for development for more than actually quarter centuries, which is actually quite scary to think about it. So let me actually talk from the developmental perspectives. How does the digital governance actually can contribute, and also what are the threats of the digital governance or digital technologies for the development? But since I’m actually an optimist, so let me actually start from opportunities. So essentially, for example, like new technologies like AI, is essentially opening up a lot of windows of opportunities in developing countries. A lot of developing countries are using the AIs and other cutting-edge technologies. in order for them actually to innovate and then provide, actually coming up with different actually product and services, which is really affecting their lives and also contributing to the social economic development. So it is really, you know, accelerations of these kind of things are also giving opportunity for the reverse innovations. I don’t necessarily like the word reverse innovation because it sounds like it’s very pretentious, but we believe, and I believe as well, that a lot of actually next generations of innovations, whether it is the services or the product, will be coming back, will be coming out of the so-called developing countries or emerging economies. Because, you know, one of the things that they have plenty is the needs, right? So they actually have a lot of socio-economic challenges or needs, so that is actually is fueling the innovations. When you look at mobile money, for example, it came from Kenya. You know, it never, never actually came out from country like Japan, where we essentially actually still insulate with paper monies, right? Or coins. So that will not actually have happened without actually the needs of developing countries. Another interesting opportunity that we see is digital public goods and digital public infrastructures. That’s a very big topic this year, especially, with discussions with G20, where India is very pushing for digital public infrastructures to bridge the gap of the digital inclusions. So we are going to see a lot of interesting opportunities, and hopefully this time that we are not seeing, we are not going to see the same kind of fate that we saw about the hype of open source and also funding for, you know, ICT for development during the WSIS process. Another very, you know, really encouraging signs in doing the WSIS process. is multi-stakeholder’s involvement into the digital governance areas and policy-making processes. Prior, I’m old enough, so prior to actually with this process, UN really did not actually have this multi-stakeholder’s approach. I mean, of course, I mean, during the first summit of the Social Development Summit in Rio, the civil society got involved in, but still it was not really this kind of multi-stakeholder approach. So IGF actually exemplifies this multi-stakeholder approach and how everyone can put their inputs into it. So let me actually go to the next. I tend to speak so much, so let me, please, Professor Park, if I speak too much, please cut it. Now, the challenges. Well, there is a lot of challenges to be still made. 
Despite the huge progress we have made on digital inclusion, 2.7 billion people remained unconnected in 2022, and there are still many issues of affordability, of digital services and digital devices, as well as gender gaps and economic exclusion. In a way the problem is the same as 20 years ago, but it has become much more complex: the last 2.7 billion people, the last 30 per cent or so, are very difficult to reach. That is going to be a huge issue we need to tackle. Another thing: three weeks ago I was part of the SDG Digital Summit in New York. If we cannot utilize digital technologies well, we will not be able to achieve the SDGs; that is another very big challenge. On the governance side, there are many different challenges. Japan is promoting cross-border data flows under the banner of DFFT, Data Free Flow with Trust, and there are many open questions there: what are the best examples, and what framework should govern it? There is personal privacy, which the professor from NYU was talking about, and the related human rights issues. Cybersecurity is another issue, because we are now seeing cyber warfare. There is also fragmentation of AI, the internet, and data: who has the right to cut off the internet, and so on. And then there is mis-, dis-, and malinformation. With the advent of AI technologies, how can you tell reality from fakes? That is going to be a huge issue. Developing countries especially: how can we incorporate their voices and input? They are still not fully involved in the rule-making and framework-making processes. We really need to engage with them and give them the opportunity, because they probably represent more people than the so-called G7 or even the G20. And lastly, data and information flows are still one-directional. This is causing great frustration among developing countries, because they have big concerns about data colonization and data oligopolies, especially with the big techs, and about data sovereignty. What happens if critical national information is put on a cloud hosted, for example, in the US? Which laws and regulations then govern that data? If it is nationally sovereign data, shouldn’t the data owner have the right to decide? But currently the applicable law is that of the United States. These are some of the challenges we really need to address in order to fully utilize the power of digital technologies for development. Thank you.

Kyung Ryul Park:
Thank you very much. So we have had a human rights perspective from Matthew, an infrastructure perspective on digital governance, and a development and development-cooperation perspective on international relations from different kinds of stakeholders. Now we move on to Professor Rafik Hadfi from the Kyoto University School of Informatics, who will give us a perspective on digital inclusion. Okay, sure, Rafik.

Rafik Hadfi:
So thank you, Professor Park, for the invitation, and thank you everyone for being here at this early time of the day. My name is Rafik Hadfi. I am currently an Associate Professor in the Department of Social Informatics at Kyoto University. I mostly work on AI, but I try to deploy it in society to solve a range of problems, from the SDGs and ELSI (ethical, legal and social issues) to the most recent ethics-related questions. The work we do is multidisciplinary by nature, and one of the topics I have been working on most recently, for perhaps the past two years, is digital inclusion. I take digital inclusion here in a very broad sense: inclusion sometimes means equity, sometimes self-realization, autonomy, et cetera, so let me explain exactly what it means here. It is one of the key elements of modern society, in the sense that it is a way of allowing the most disadvantaged individuals in society to have access to ICT. That answers the question of the how: what kinds of activities allow us to include these members of society? The goal is greater equity, which is a more inclusive, meaningful way to define what an individual can aim at in society, so equity answers the question of the what. This connection leads us to something more global, which is self-realization: if we achieve digital equity, individuals can attain autonomy and fully live their lives. The topic I am working on is how to enable this using the most recent technologies, not just ICT but AI in particular. I will take one case study we have been conducting for the past two years. It was very challenging, because it addresses multiple problems in society at once: digital inclusion here is equated with gender equality and empowering women. It is a study conducted in Afghanistan, and the focus is women’s inclusion. The first problem was conducting the study itself in Afghanistan: it began at the same time the Afghan government was collapsing, and apart from the logistics there were the already established problems of gender inequality and insecurity, plus the ICT limitations in Afghanistan, so it was very difficult to run. Fast forward two years, and we managed to conduct the experiment: it was initially planned for 500 participants from Afghanistan and then narrowed down to 240. The main goal was to build an AI that could be deployed in an online setting where mostly women, using smartphones, could communicate and deliberate. The AI was found to enhance a number of things. One is the diversity of the contributions women were providing in these online debates. The second, and most important, is that we found this kind of conversation reduces inhibition; Middle Eastern societies in particular are known for limiting women’s freedom of expression and their ability to raise issues related to their livelihoods. The third finding was that this kind of technology increases ideation.
We found that the AI allows women to contribute more ideas about local problems, such as employment and family-related issues. So this is one practical case of conversational AI, which builds on what is now well known with large language models, ChatGPT, et cetera, but takes a more advanced, problem-solving approach to conversational agents. It is a practical example of using conversational AI for social good, and the deployment was done in Afghanistan. I am looking forward to your questions, and that is all from me.
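To make the mechanics of such a deliberation agent more concrete, here is a minimal sketch of the kind of facilitation loop described above: an agent that tracks contributions, invites quieter participants, and asks open follow-up questions to encourage ideation. All names and heuristics below are illustrative assumptions, not the actual system used in the Afghanistan study.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Message:
    author: str
    text: str

class FacilitatorAgent:
    """Toy deliberation facilitator (illustrative only).

    It tracks who has spoken and nudges the discussion in two ways:
    inviting the participants who have contributed least (inclusion),
    and asking open follow-up questions to stimulate ideation.
    """

    def __init__(self, participants):
        self.counts = Counter({p: 0 for p in participants})

    def observe(self, message: Message) -> None:
        # Record each contribution so we can spot under-represented voices.
        self.counts[message.author] += 1

    def next_prompt(self) -> str:
        # Address the least active participant first.
        quietest, n = min(self.counts.items(), key=lambda kv: kv[1])
        if n == 0:
            return f"@{quietest}, we have not heard from you yet - what is your view?"
        # Otherwise ask an ideation question to broaden the discussion.
        return f"@{quietest}, could you share one more idea or concern about this issue?"

if __name__ == "__main__":
    agent = FacilitatorAgent(["Amina", "Bashira", "Chehra"])
    agent.observe(Message("Amina", "Employment is the biggest problem in our district."))
    agent.observe(Message("Amina", "Vocational training would help."))
    print(agent.next_prompt())  # invites a quieter participant to contribute
```

In the deployed study an LLM-based agent would generate the follow-up questions; the fixed templates here merely stand in for that step.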

Kyung Ryul Park:
Thank you very much, Rafik. Rafik has been leading the Democracy and AI research group at IJCAI, the International Joint Conference on AI, and the conference will also be held in Korea next year; there was a very interesting discussion in Hong Kong in August as well. Today is a Japanese national holiday, Sports Day, and I think we are doing a lot of brain exercise today, so these are very ambitious and very interesting talks and sessions. We now move on to Professor Liming Zhu, of the School of Computer Science and Engineering at the University of New South Wales, who will talk about democratising AI from a computer science perspective. Thank you.

Liming Zhu:
All right, thanks very much for having me. I am a professor at the University of New South Wales, but I am also a research director at CSIRO, Australia’s national science agency. We have around 6,000 people working in areas such as agriculture, energy, mining, and of course AI; Data61 is CSIRO’s AI, digital and data business unit. I also work quite internationally with OECD.AI and on some of the ISO standards on AI trustworthiness. Australia also has a National AI Centre, established 18 months ago and hosted by Data61, whose main remit is not research but AI adoption, especially responsible AI adoption. Very briefly on Australia’s journey: back in 2019, Australia developed its AI Ethics Principles, drafted by Data61 at the time, commissioned by the Department of Industry and Science, with industry consultation. The principles themselves are not that surprising; many international organizations and countries have developed similar ones. But I want to draw your attention to the first two or three: they are really about human-centred values, and especially the plurality of values. We recognise the trade-offs involved, the different cultures, and the inclusiveness of human values, as well as environmental well-being; Australia is a “fair go” country, so fairness is high up there. Then we have the traditional quality attributes of any system, such as privacy, security, reliability and safety, to which AI poses unique challenges, and then additional quality attributes like transparency, explainability, contestability and accountability, which arise uniquely in the AI context. Since 2019, for four years now, Australia has been focusing on operationalizing these principles. We have done a lot of industry consultation and case studies to gather industry feedback, and, importantly, the minister pictured here, Minister Husic, the Minister for Industry and Science, launched Australia’s Responsible AI Network, in which Australian companies and organizations commit to responsible AI through governance mechanisms. To become members, be featured, and share their knowledge, they have to commit to at least three AI governance mechanisms within their organization. There is also a book coming out, Responsible AI: Best Practices for Creating Trustworthy AI Systems, based on the work we have done, with three key Australian industry case studies. So what is our approach? The key thing we realized is that best practices need context: you need to know when to apply them, each has pros and cons, and best practices need to be connected. We also see people at the governance level doing great things, say a responsible AI ethics committee or auditing mechanisms, but not connected to the practitioners: the ML engineers, AI engineers, software and AI engineering developers. How do we connect all these best practices so people can collaborate?
So we have developed a Responsible AI Pattern Catalogue, which you can easily find by searching online. It connects governance patterns; process patterns, meaning software development and AI engineering processes, which are probably what most people in this room are interested in; and product patterns, the metrics and measurements on a particular product and how we evaluate it. The key thing is that they are connected, you can navigate among them, and you get whole-of-system assurance. At this moment a lot of AI governance is not about the AI model itself. Even in ChatGPT there are many components, AI and non-AI, outside the model. Every prompt you put into ChatGPT is not the raw prompt that goes to the model: additional text is added to what you typed, things like “please always answer ethically and positively”. That kind of text is attached to every single prompt, and it is a system-level guardrail. This is a very simplistic example, but many organizations that leverage large language models can put in their own context-specific guardrails to get the benefits while managing the risks. Those kinds of patterns and mitigations need to connect with the responsible-AI risks that every company identifies in its typical AI risk registry, and we have developed question banks you can use to interrogate your organization and make responsible-AI risk assessment part of that process. You can find more information in some of the papers I listed here or by searching online. This work was recently featured by Communications of the ACM as one of the most impactful projects in the East Asia and Oceania region. We are very happy to share more experience with the audience here. Thank you.
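As a concrete illustration of the system-level guardrail idea described above, here is a minimal sketch in Python: every user prompt is wrapped with a fixed safety preamble before reaching the model, and a simple filter screens both the request and the answer. The `generate` function is a hypothetical placeholder for whatever model API an organization uses, and the deny-list is a toy; real guardrail layers are considerably more sophisticated.

```python
# Minimal sketch of a prompt-level guardrail layer (illustrative only).

SYSTEM_GUARDRAIL = (
    "Please always answer ethically and positively. "
    "Refuse requests for harmful or illegal content."
)

BLOCKED_TOPICS = ("weapon", "malware")  # toy deny-list for demonstration

def generate(prompt: str) -> str:
    """Stand-in for a real LLM call; assumed, not a real API."""
    return f"[model answer to: {prompt[:60]}...]"

def guarded_generate(user_prompt: str) -> str:
    # Input-side guardrail: screen the request before it reaches the model.
    lowered = user_prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that request."

    # The user's text is never sent alone: a system preamble is
    # prepended to every prompt, as described in the talk.
    full_prompt = f"{SYSTEM_GUARDRAIL}\n\nUser: {user_prompt}"
    answer = generate(full_prompt)

    # Output-side guardrail: a last check on what the model produced.
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't share that answer."
    return answer

if __name__ == "__main__":
    print(guarded_generate("How can my team adopt responsible AI practices?"))
```

The point of the sketch is the architecture rather than the deny-list: the guardrail lives outside the AI model, which is exactly why whole-of-system assurance matters.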

Kyung Ryul Park:
Thank you very much. I think today’s speakers are providing a lot of insight into how different stakeholders can collaborate in the global shaping of AI frameworks. We have the Australian case, the Korean case, and, I would say, the somewhat market-driven U.S. approach, and we also need engagement from developing countries, which is why today’s session is so timely. I believe we have Professor Takayuki Ito of Kyoto University online. Hello. Sure. Okay. The floor is yours. Do you hear me? Yes. Okay. Sure.

Takayuki Ito:
Okay, I will share my slides. All right. Thank you for introducing me. I am Takayuki Ito from Kyoto University, and I will talk about one of my current projects, towards hyper-democracy: an empowered crowd-scale discussion support system. We are working to develop a hyper-democracy platform in which multiple AI agents help with group decision-making and consensus building. In current social networks there are many social problems, such as fake news, digital gerrymandering, filter bubbles and echo chambers, and these are very important problems. By using artificial intelligence, such as ChatGPT and LLMs, we are trying to address them. We have been working on this kind of project for more than ten years. In 2010 we started to create a system called COLLAGREE, where a human facilitator tries to facilitate and support consensus among online participants. Then, from 2015, we created the D-agree system, where one AI agent supports group decision-making among online participants; here we use AI to support human collaboration. Now we are working on hyper-democracy platforms where many AI agents support crowd-scale discussion. This is an overview of the D-agree system. As you can see, people discuss using text chat; the texts are recognized by our AI and structured in a database, and using that structured discussion, the AI facilitator interacts with the online participants. We started this project in 2015, before ChatGPT existed, and realized it with classic AI technology; now, using LLMs such as GPT, we are working on a more sophisticated AI. This is one case study of D-agree. We used the D-agree system in Afghanistan, in particular in August 2021, when the American troops left Kabul. We opened D-agree to the public and gathered many opinions and voices from civilians in Kabul city. As you can see, our AI can analyze the opinion types and their characteristics: from August 15, when the American troops left, the number of issues and problems raised increased drastically, as shown in the red box in the central graph. We are working with UN-Habitat and many other NPOs on these kinds of things. Now we are extending the single AI facilitator to many AI agents, and this is the current multi-agent architecture, which we are testing. So, in conclusion, we are developing a next-generation AI-based group decision support system, called a hyper-democracy platform. Thank you very much.
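To give a flavour of the step where “texts are recognized by our AI and structured in the database”, here is a small, hypothetical sketch that classifies chat messages into discussion elements (issue, idea, pro, con), in the spirit of the IBIS-style structures commonly used in discussion-support systems. The keyword rules are a toy stand-in for the trained extraction models such a system would actually use.

```python
# Toy classifier that structures free-text chat into discussion elements
# (issue / idea / pro / con), standing in for a trained extraction model.

DISCUSSION_CUES = {
    "issue": ("problem", "issue", "concern", "worried"),
    "idea":  ("we could", "we should", "propose", "idea"),
    "pro":   ("agree", "good point", "benefit"),
    "con":   ("disagree", "however", "risk"),
}

def classify(text: str) -> str:
    """Assign a message to the first matching discussion element."""
    lowered = text.lower()
    for label, cues in DISCUSSION_CUES.items():
        if any(cue in lowered for cue in cues):
            return label
    return "other"

def structure_discussion(messages):
    """Group raw chat messages into a simple structured record."""
    structured = {"issue": [], "idea": [], "pro": [], "con": [], "other": []}
    for msg in messages:
        structured[classify(msg)].append(msg)
    return structured

if __name__ == "__main__":
    chat = [
        "The biggest problem is the lack of clean water.",
        "We could install shared filtration units in each district.",
        "I agree, that would also create local jobs.",
        "However, maintenance costs might be a risk.",
    ]
    for label, msgs in structure_discussion(chat).items():
        print(label, "->", msgs)
```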

Kyung Ryul Park:
Thank you very much. Professor Takayuki Ito has been pioneering interdisciplinary research at the intersection of AI and democracy, and I think it shows the importance of a multidisciplinary approach: in this research field we have to work together with policy scholars, development scholars and engineering scholars. Right, so we are on time, which is a relief. We have our last speaker, Seunghyun Kim from KAIST. Are we ready? Okay, sure. He will share some insights from the development field.

Seung Hyun Kim:
Hello, my name is Seunghyun Kim. I am currently a PhD student studying under Professor Kyung Ryul Park. Earlier in my life, before the peaceful life of a graduate student, I was a program officer at the Korea Development Institute (KDI), responsible for producing policy recommendations for developing countries. One stark realization was that during my years at KDI I only ever travelled to developing countries: my passport record never went above an annual GDP per capita of about $10,000, so this is the most advanced country I have visited overseas in many years. When I came to KAIST, I was exposed to all these discussions about AI, ChatGPT, bioethics, et cetera: extremely advanced, cutting-edge technologies that are going to change everything. And my thoughts overlapped: what happens when these cutting-edge technologies meet what I saw in the developing countries, which have very different societal and economic contexts? I would like to share three snapshots that provide some peek, not insight, I wouldn’t say, just peeks, into how the insightful discussions we have had today may play out in the developing world. The first is a photograph taken on November 16, 2016, in Medellín, Colombia. If you have seen Narcos on Netflix, you are probably familiar with the city: it was the narco capital of the world. A little below, you can see a sign reading “Bienvenidos a Calle Uno”, welcome to area one, the zip code of the poorest neighbourhood in Colombian cities. Before the introduction of cable cars, a revolutionary transport mechanism, this area was isolated from the rest of Medellín. It was a breeding ground for cartels, drug dealers and smugglers; even the police refused to go in. Because of the cable cars, people were able to get jobs, come out of the neighbourhood, and have equal opportunities for education, employment, and access to capital and banking. What was not publicized for a very long time was that the drug cartels were also using the new cable cars: they would hide cocaine inside the replaceable parts, turning the system into an automated distribution mechanism. If cable cars did this for the drug cartels, what are they going to do when they get their hands on ChatGPT and AI? It is something to think about. The cartels were also able to thrive in these regions because a dollar could bribe almost anyone in the neighbourhood: for $10 you could basically make people do anything. What can be done with a brand-new laptop or an iPad connected to generative AI, and what can the police or the government do against such things? So the first snapshot is unequal opportunity, and how it exposes existing social and economic problems. The second snapshot is one of the pictures I took in Addis Ababa, Ethiopia, in 2018, a scene I will probably remember for a very long time: an official explaining to us, the Korean researchers, the current Ethiopian public finance system, the electronic digital finance system. I will not forget this line:
“The tax system runs on Microsoft. The expenditure and budget-planning system runs on Oracle. We do not yet have a digital auditing system, but we will soon, as soon as we get the funding.” So within one government you have three different information systems running simultaneously that do not, and cannot, communicate with each other: fragmentation on an enormous level. And why is there fragmentation in the core governance structure for distributing financial resources at the Ministry of Finance? Because the systems are financed by different institutions, the World Bank, UNDP, et cetera, each connected to different service providers who each supply a small part of the government, and no single agency can provide one comprehensive solution for the whole government. What you get is extreme fragmentation of ICT and information systems in government. The third snapshot: I spent two weeks in Equatorial Guinea, which was the unhealthiest time of my life; I was half unconscious from the malaria prophylaxis I had to take every two days. This is President Obiang Nguema Mbasogo speaking at the Equatorial Guinea National Economic Conference. All the ministries were there, all the ministers, all the high-level figures of the government, but the entire conference was run and executed by Chinese staff from mainland China: apart from the participants, every staff member was Chinese. After about a week there, it did not take long to find that the entire ICT infrastructure was basically dependent on one country and one company: China, and Huawei. You could not get an internet connection, Wi-Fi, anything, you could not even send an email, without some sort of support from Huawei. I think this snapshot provides a very significant view into technology sovereignty. So, in sum: one, unequal opportunities that may be aggravated into a whole variety of societal problems; two, fragmented ICT structures that create problems for governments; and three, technology sovereignty arrangements that tie developing countries to companies, other governments and international agencies, creating a very complicated picture for the developing world. For the developing world to become more advanced and actually transform into a developed world, I think these three peeks are something we should think about. Thank you.

Kyung Ryul Park:
Thank you. Thanks a lot. We have about five minutes to go, so I would like to open up the floor. We have more than 30 attendees in this session; thank you very much for your participation. If you have any comments or questions, please raise your hand. Professor Seoyeon Kim, from KAIST, could you start?

Audience:
First time doing this, but okay. If I had known I would be standing here, I wouldn’t have asked the question, but the question, I guess, is common to all of us. Just as Seunghyun pointed out the problem of fragmentation at the single-country level, we are witnessing huge fragmentation of AI governance at the global level. We have something going on at the World Bank, at the UN and its various agencies, and even within single countries, and we are approaching this problem from many different angles: development, sociological, ethical, philosophical, computer science, whatever. So the question is: how can we actually reduce this degree of fragmentation when we talk about AI governance and other mechanisms of regulation? As you might know, some circles around the world are talking about the need to create something like an IAEA for AI. But there are many limitations, of course, because of the very different natures of the two technologies: nuclear technology is highly centralized and was born out of a dire, emergency situation during World War II, whereas AI is an apparently democratic technology, because anybody can touch some part of AI. So my question is: how would any of you address this question of the need, and the way we can think about reducing this problem? You see what I’m saying, right? Okay.

Kyung Ryul Park:
Thank you very much. If you don’t mind, why don’t we take all the questions together and then address them. Please, could you briefly introduce yourself?

Audience:
Yes, no problem. Hi, thank you. I’m Sophie Kellehaug, a diplomat from the Danish Ministry of Foreign Affairs, posted in Geneva. Thank you so much for all your really interesting perspectives. We, of course, also see a need for global governance of AI, and, like the previous audience member, we see this fragmentation. It was really good to hear, especially in the first presentation, the focus on human rights, which we also find very important, and on multi-stakeholder engagement in general; this session is a good example of engagement with academia. We see a need to take a genuinely risk-based approach, and thank you for referencing the EU AI Act: as an EU member state, we very much support that approach. I wanted to ask basically the same question: how do we approach this globally, given the fragmentation we see? But I would also like to come back to the first speaker’s, Professor Liao’s, point on follow-up mechanisms and monitoring: how do we afterwards ensure that this regulation is implemented, and that we have oversight and accountability? Thank you.

Kyung Ryul Park:
Thank you very much. And the gentleman here, over here.

Audience:
Hello, good day. My name is Peter King, and I am from Liberia. I would like to ask a question from the African perspective. I notice that most of you are from the Asian region or from states on other continents, but the fact is that Africa is also part of global society. We are looking at the impact of AI, yet its emergence and use remain geographically limited, and most of the conversation here touched Africa only through the single case of Equatorial Guinea, whereas Africa as a whole faces a great many issues. So what advice can you give our policymakers in Africa on how research can be extended, and what can they do to prepare so that AI seamlessly becomes a reality there? We do lack certain expertise. What is your advice for some of us, the youth, and for the larger audience in Africa? That is my question.

Kyung Ryul Park:
Thank you very much. Oh, okay, any other questions or comments, maybe?

Audience:
Hi, my name is Yasmin, from Indonesia, with the UN Institute for Disarmament Research. I have a few questions, so please bear with me. First, for the first speaker, on a human-rights-based AI framework, and to build on the point made by a fellow audience member about enforceability: could you elaborate on the difference between human rights as a moral framework on the one hand and as a legal framework, with the established international human rights law mechanisms, on the other, and on what we can learn from the case law established over the years as IHRL developed? Second, on digital public goods and digital public infrastructure: I previously worked at Chatham House, a London-based think tank, where we did some research on this, and one question we kept running into is that public stewardship is important, but there is also the question of limited resources and the need for scalability. How do you deal with that? And third, on conversational AI to advance women’s inclusion: it has great potential, but how do you deal with the availability of training data, in terms of data collection and data hygiene, so that it is available in an equitable way? Not just free from biases, but also taking into account that some communities may need to be represented in these models while others may want to be forgotten, because of privacy and oppression risks. How do you deal with that? Thank you so much.

Kyung Ryul Park:
Thank you very much. These are wonderful comments and questions. If I may summarize them in a few keywords: fragmentation in global AI governance and how we can collaborate to address it; what African countries in particular can do to prepare an AI strategy; the conflicting rationales between a human rights perspective and digital public goods, and how to enhance scalability; and how to promote digital inclusion in data collection and analysis. So, Matthew, are you there?

Matthew Liao:
Yes, I am.

Kyung Ryul Park:
Sure, I’ll give you two minutes. Is it okay for you?

Matthew Liao:
Yes, sure, sure.

Kyung Ryul Park:
You’re always smart, right?

Matthew Liao:
Yeah, those are excellent questions. The fragmentation question is just such a difficult problem. Very quickly, I think this is something we all need to work on together: it is multi-stakeholder, and we need everybody involved in the conversation, the public, governments, researchers, and so on. That sounds vague, so here is something more concrete. Professor Kim mentioned nuclear energy, and there are two things I want to say. First, I think the biomedical model is a pretty interesting one to think about. In drug discovery there is a lot of innovation and research in the pharmaceutical arena, and at the same time a lot of regulation protecting people: there is a great deal of human-subject research, much of it involving non-trivial risk, and yet we manage to do it in a fairly responsible way, and the international community has coalesced around norms to make sure the process is safe. I feel we can do something similar with AI. I also like the EU’s risk-based approach: some low-risk uses, say in games, we do not need to worry about that much, while other things, like medical devices, deserve much more attention, especially when humans are involved. And I will say one other thing: I think there is a lot of regulatory capture right now, with many people saying this is too big to be regulated. It is useful to look at the history of regulation. Take airplanes: airplanes used to fall out of the sky every single day, and at some point people said, we need to come together and regulate the airline industry, everything from the engines on up. Now the airline industry is among the safest there is; it is so safe to fly these days. I feel we can do something similar, so maybe those are indirect models we can appeal to in addressing things like fragmentation.

Kyung Ryul Park:
Thank you, Matthew. Would anyone else like to address the questions or comments? Okay, sure.

Atsushi Yamanaka:
Well, thank you so much, these are very interesting questions, and I have a few comments on fragmentation. Perhaps we need to create an AIGF, an AI Governance Forum, instead of the IGF. Even for internet governance, after 20 years of discussion, we still have not come up with a definitive model. I think we need to identify workable, best-example models rather than aim for comprehensive global regulation, which is very difficult and not very palatable to many stakeholders. So rather than concrete regulation of AI, we should assemble the best examples of workable solutions; I think that is the way to go. In terms of Africa: I actually work mostly in Africa, and have for the last 12 or 13 years, mostly in Rwanda. AI is very much utilized in the African context as well: a lot of startups have been using AI and other data-based solutions, so many solutions are coming out of the continent. However, you are right that human resources are limited. There are different responses to that. A number of advanced institutions are now established in Africa, such as Carnegie Mellon University Africa in Rwanda and AIMS, the African Institute for Mathematical Sciences, also present in Rwanda. Developed countries such as Korea and Japan also help build capacity: I have quite a few students who study AI and go on to PhDs here in Japan. One of them has built generative AI models for African languages; he was a research fellow at RIKEN, one of the top research institutes here, though unfortunately he has moved to Princeton to continue his research. That is the kind of human-resource capacity building that JICA is known for, and KOICA in Korea and other countries do it as well, so you can take advantage of those frameworks. On DPI and DPGs: yes, scalability was always the issue with open-source initiatives and with ICT for development. But we are seeing a lot of interesting DPI now, such as the Indian MOSIP model, which is scalable: such platforms are serving something like a billion people, so we are seeing scalability beyond the proof of concept, which used to be the hallmark of ICT for development. And lastly, about women’s inclusion: I think digital technology offers quite unique opportunities for pseudonymization, masking gender while giving people the opportunity for inclusion, much more so than in-person environments. I just wanted to point that out.

Kyung Ryul Park:
Thank you very much. Before we close, I know we’re running out of time, but I’d like to give a couple of minutes to Rafik and Professor Zhu. So, Rafik. Yeah, sure.

Rafik Hadfi:
A few words regarding the empowerment of local communities, in terms of inclusion, data collection, training, the avoidance of biases, et cetera. One approach we found works is not simply to deploy a solution in a one-off social experiment, but to take a holistic approach: we form local communities, say villages, municipalities, universities, schools, and train them to use the whole AI system. We have done this across several studies, and at the same time it lets us build datasets to train the models for those communities, because one thing we encountered is that when you train these AIs in English, you are obviously biased towards one particular context. We did this in Indonesia, in the province of West Nusa Tenggara, with the University of Mataram, training on Indonesian-language data, and currently we are focusing on the Afghan case, because, as we all know, there is a lot to do in Afghanistan. The case study I covered focuses mostly on equity and women’s empowerment, and the data collection and AI models are trained specifically for that context. Of course it can be generalized: this year it is Afghanistan; maybe next year we will try Iran, I don’t know. That is all from me.

Liming Zhu:
Thank you. Thanks, I will be brief. On fragmentation, I am going to be slightly controversial, because from a science point of view these are all different stakeholder groups. We have great institutions, the UN, the OECD, the WEF, and if they are all paying attention to AI and AI governance, I think that is valid, because different stakeholder groups have slightly different concerns, and robust discussion between the groups, and making trade-offs among their concerns, is going to be important. At that level I see not fragmentation but interest from different stakeholders. When it comes to regulation, there is of course the question of horizontal regulation, regulating AI as a whole, with its pros and cons, versus vertical regulation of particular products. Managing the interaction between them and removing overlaps is important, but we need both rather than one or the other. The one thing that should not be fragmented is science. Science is international and not values-based, and the scientific evidence and advice given to these policy and stakeholder groups really need more collaboration. I see a lot of scientists and research organizations here, and I look forward to collaborating with them. Thank you.

Kyung Ryul Park:
Thank you. That captures the essence of a global platform for collaboration. Thank you very much for your participation today; I would like to continue our discussion, so please keep in touch. Before we close, I would particularly like to thank So Young Kim and Junho Kwon, doctoral students from KAIST, and all the speakers for their time. If you leave your contact details with Junho after this session, we will keep in touch with you. Thank you very much.

Atsushi Yamanaka: speech speed 178 words per minute; speech length 1841 words; speech time 622 secs
Audience: speech speed 191 words per minute; speech length 1057 words; speech time 331 secs
Dasom Lee: speech speed 170 words per minute; speech length 1087 words; speech time 384 secs
Kyung Ryul Park: speech speed 149 words per minute; speech length 1244 words; speech time 500 secs
Liming Zhu: speech speed 171 words per minute; speech length 1179 words; speech time 415 secs
Matthew Liao: speech speed 186 words per minute; speech length 2409 words; speech time 778 secs
Rafik Hadfi: speech speed 159 words per minute; speech length 1033 words; speech time 390 secs
Seung Hyun Kim: speech speed 169 words per minute; speech length 1195 words; speech time 423 secs
Takayuki Ito: speech speed 104 words per minute; speech length 509 words; speech time 292 secs

Youth participation: co-creating the Insafe network | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Niels Van Paemel

ChildFocus, a proactive community service organisation, has initiated a variety of programmes designed to enhance online safety, mitigate child exploitation, and raise awareness about online grooming. Among these, the Cybersquad initiative is a peer-to-peer platform that facilitates the sharing of experiences and advice amongst young people. Recognising children's growing preference for online chat over traditional helpline services, the platform also enables direct contact with a Child Focus counsellor via a help button in more serious cases.

The organisation has also launched Groomics, an educational tool specifically developed in response to the increasing phenomenon of online grooming. This aims to empower young people by highlighting the potential risks and benefits of online relationships, enabling children to discern signs of online grooming and equipping them with the knowledge to decline these advances.

Additionally, ChildFocus has introduced the MAX principle, which offers children the opportunity to select a trusted individual with whom they can discuss their online issues. Through the MAX principle campaign, children can send an official invitation to their chosen confidant, fostering a sense of security and enhancing the safety of their digital experiences.

Recognising the crucial need for inclusive online safety measures, ChildFocus has placed particular emphasis on children with Autism Spectrum Disorder (ASD). After its web content was deemed complex and overwhelming for children with ASD, the organisation revised it to make it more comprehensible and inclusive. The Star Plus tool was co-created specifically to help children with ASD recognise online red flags, catering to their unique challenges in the online sphere.

ChildFocus has nonetheless encountered challenges in reaching 'shy kids' and those from less advantaged backgrounds. By highlighting the 'Matthew effect', whereby resources tend to benefit those who are already privileged, the organisation illustrates the struggle for inclusive education, emphasising the importance of awareness and of persistent, daily commitment to reaching these children.

Additionally, ChildFocus encourages international collaborations for sharing online safety best practices and resources across countries, to facilitate mutual learning, prevent redundant efforts, and enable effective localisation to meet diverse national needs. Resources such as the Star Plus tool are shared on the InSafe network, through which translations are provided for wider accessibility. This reinforces the significance of international partnerships in driving online safety efforts globally.

In conclusion, ChildFocus’s efforts highlight the essential elements of child-focused, inclusive, and collaborative drives in promoting online safety. This provides a blueprint for other organisations and countries to adopt, whilst acknowledging the potential obstacles that might be encountered in these endeavours.

Anna Rywczyńska

The Polish Safer Internet Centre is making substantial efforts to support refugees, particularly those displaced by the ongoing Russia-Ukraine conflict, by incorporating them into its internet safety campaigns. This work is significant given the scale of the crisis: some 16 million border crossings by Ukrainian refugees have been recorded at the Polish border since the crisis began, with about 1 million refugees residing in Poland. A prime example of the Centre's integration efforts involves the approximately 200,000 Ukrainian children currently attending Polish schools.

Additionally, the Centre is proactive in its focus on language accessibility and mental health support for refugees. The organisation enables remote connectivity to events, which are translated into Ukrainian, thereby dismantling language barriers. Furthermore, they have taken the extra step to recruit professionals for their helpline, including consultants and psychologists fluent in the Ukrainian language, which greatly enhances their ability to assist Ukrainian refugees efficiently and effectively.

To tackle the significant issue of disinformation and fabricated news around the conflict, the Polish Safer Internet Centre has established a dedicated department. NASK, an essential part of the Centre, is tasked with combating disinformation and fake news, thereby contributing substantially to efforts towards promoting peace, justice, and sturdy institutions.

Moreover, the Centre is heavily invested in youth empowerment by providing platforms for them to express their perspectives. These platforms include competitions where entrants are encouraged to submit videos that represent their generation’s views.

The Centre also ensures that the youth are well-represented in the pan-European youth panel meetings, thus playing a part in ensuring youth representation in conversations on peace, justice, and strong institutions as well as efforts to address inequality.

In conclusion, the Polish Safer Internet Centre’s concerted efforts to assist Ukrainian refugees and their strategic emphasis on youth empowerment underscore their commitment to the Goals of Quality Education, Good Health and Well-being, Reduced Inequalities, and Peace, Justice and Strong Institutions. These initiatives illustrate the potential of technological platforms as a tool for positive change and inclusive growth.

Sabrina Vorbau

The necessity of youth participation in creating a safer internet is a pivotal aspect that resonates across all discussions. The significance of youth input extends undeniably to the formulation of policies and strategies to maximise internet safety. An exemplar of this is the Better Internet for Kids Strategy Plus (BIK+). This strategy was not merely devised with the contributions of young individuals but continues to uphold its relevance through sustained engagement. It encourages youth to voice their perspectives at annual events such as the Safer Internet Forum.

However, youth participation is not a sporadic occurrence. It necessitates ongoing engagement and follow-up, reinforcing the principle that effective youth involvement mandates authentic engagement and substantive opportunities. A salient example of such democratic participation is the proactive involvement of young people in the organisation of the Safer Internet Forum, moulding the programme from its inception.

Alongside general participation, it’s worth highlighting the empowerment of young individuals who have already received training within their respective communities. This notably involves youth ambassadors. Consider the youth ambassador from Malta who utilised his network to disseminate the event link to his classmates and his more introverted peers. This act exemplifies how empowering local youth can effectively represent even the less outgoing children and serve as a strong voice for all.

Addressing the understanding that children do not necessarily need to present on stage, the strategy underscores the importance of incorporating even the more introverted children indirectly. Youth ambassadors, as representatives, feel an inherent responsibility to mimic the voices of their peers. This approach ensures that every child, irrespective of their demeanour, contributes to shaping a safer internet environment.

In conclusion, youth participation is not only essential in realising a safer internet but is also instrumental in achieving several Sustainable Development Goals (SDGs). These include Quality Education (SDG 4), Reduced Inequalities (SDG 10), Industry, Innovation, and Infrastructure (SDG 9), as well as Peace, Justice and Strong Institutions (SDG 16). Any dialogue on internet safety or policy creation remains incomplete without youth involvement, thereby highlighting its unparalleled significance.

Moderator 1

In today’s digital age, the vulnerability of children is a mounting concern, particularly in relation to online safety. This vulnerability may be exacerbated by various factors, including poverty, disability, mental health issues, abuse or neglect, family breakdown, discrimination, and social exclusion, which encompasses migration as well. Moreover, we must consider the inherent vulnerability that all children share as they navigate a world in which decisions affecting them are predominantly made by adults who may not fully comprehend the implications of digital technologies on children’s lives.

Though the situation may seem dire, there is a glimmer of hope, emanating from initiatives such as the new European Strategy for a Better Internet for Kids. Adopted by the European Commission in May 2022, this updated version of the 2012 BIK strategy aims to keep abreast of technological developments, placing children's digital participation, empowerment, and protection at its core. The intention is to shape a digital environment that is both safe and nurturing for children.

Simultaneously, there is a growing, assertive view favouring children's inclusion in decision-making processes concerning their rights and safety in the digital world. This perspective underscores the necessity of acknowledging that children, living in a rapidly evolving digital world, may have unique needs and rights; not involving them in these decisions could result in protective measures that fail to meet their needs accurately.

To summarise, whilst we must confront the increased risks and challenges faced by children in the online world, there are effective strategies, like the Better Internet for Kids, that represent a silver lining. Moreover, the firm stance advocating children’s involvement in decision-making processes suggests progressive change. Nonetheless, it is paramount to adapt continuously to the rapidly changing digital landscape, ensuring children are not only safe but also empowered in their digital interactions.

Audience

Cybercrime and bullying in Lebanese schools are escalating drastically, with a disquieting report from Lebanon's internal security forces signalling a surge in cybercrime of over 100%. This uptick has prompted schools to request sessions on bullying and violence, underscoring the urgent need for attention and action.

The non-profit organisation Justice Without Frontiers is taking steps to confront these pressing issues. It offers legal consultation for girls affected by online sexual harassment, a regrettable and increasingly common consequence of the rise in cybercrime, and has delivered awareness sessions to approximately 1,700 students, accentuating the crucial role of education in instigating change.

Nonetheless, this progress is hampered by a major hurdle: the scarcity of educational resources on cyber security and bullying in Arabic. This language obstacle hinders effective education and communication, compelling educators to rely on secondary resources like YouTube films and online games to engage with students.

In a more positive vein, there are concerted efforts to involve younger demographics in global discussions. The Internet Governance Forum (IGF), a distinguished international platform for policy discourse, is fostering an inviting environment for newcomers, showcasing instances of inclusivity in their dialogues. Moreover, the IGF’s Dynamic Teen Coalition offers an exceptional opportunity for young people to shape internet governance, ensuring their voices are valued and their viewpoints taken into account.

Despite these developments, the demand for inclusivity extends beyond these platforms. It is suggested that empowering only outspoken children is not enough; there is also a need to connect with shy, reticent and less assertive children so that their needs and rights are not neglected. This perspective is backed by Amy Crocker of ECPAT International, a network battling child sexual exploitation, who stresses that focusing on these marginalised children is essential for authentic empowerment and inclusivity.

In conclusion, while Lebanon grapples with a rise in cybercrime and bullying, numerous endeavours are underway to manage these challenges and uplift youth—from awareness initiatives to inclusive international forums. However, roadblocks persist, with language limitations being a dominant issue, and calls for inclusivity emphasising the empowerment of quiet children. These insights offer valuable pathways for future undertakings in the dual domains of cyber safety and children’s rights.

Boris Radanovic

This comprehensive review underscores the critical relevance of youth empowerment and young people's role in advocating for online safety and children's rights. The argument is founded on the premise that 'the voice of the youth is the voice of the future', which urges recognition and enhanced amplification. So far, over 5,000 digital leaders, actively contributing to their school communities, have been trained. Their role has been instrumental in advancing Sustainable Development Goals (SDGs) 4 and 16, on quality education and on peace, justice and strong institutions, respectively.

However, the analysis also reveals a troubling fact that children are five times less likely to confide in a trusted adult when they encounter precarious situations online, opting instead to turn to their peers. This highlights the absence of trusted adults and the necessity for safer online environments for children.

Notably, the success of peer-to-peer education is emphasised. Through partnerships between the UK Safer Internet Centre and ChildNet International, a programme has been created to educate thousands of digital leaders across school communities. Additionally, the data suggests that it is vital for children to have the confidence to speak out about online safety issues. This indicates a need for comprehensive online safety programmes that not only enhance safety but promote understanding and advocacy among children.

Regrettably, the discourse uncovers that children with accessibility issues or special education needs, and those from minority groups, are significantly more susceptible to online abuse. This aligns with the aspirations of SDGs 10 (Reduced Inequalities) and 9.c (Increase access to ICT), acting as a call to action for inclusive and accessible online spaces.

Furthermore, this detailed analysis accentuates the community's collective responsibility. A multi-stakeholder approach centred on education can be key to ensuring children's protection online. Engagement with minority, disability and ability groups is necessary to understand and address their unique needs, thereby fostering the creation of truly inclusive online spaces.

Finally, the importance of utilising technology that facilitates equal, anonymous participation is recognised, given its adaptability to various roles within an environment. Educational providers are thus tasked with keeping learner engagement inclusive and participatory.

In conclusion, this extensive analysis paints a complex picture of the digital landscape, intertwining youth empowerment, increased safety measures, and the strategic importance of education. The key to developing holistic and inclusive online spaces that fuel both individual growth and societal advancement lies in tactfully merging these strands.

Moderator 2

Young people are not only actively involved in digital communication initiatives, but are also encouraging and facilitating engagement amongst their peers. This positive trend supports two pivotal UN Sustainable Development Goals (SDGs): Quality Education (SDG 4) and Reduced Inequalities (SDG 10), emphasising the positive impact of youth engagement in these areas.

One notable observation from this participation is that children and young individuals appear to find it easier to converse and liaise with their peers rather than adults in this field. This shift indicates an intriguing change in communication dynamics, with potential ramifications for the development of educational strategies and initiatives targeting young people.

Moreover, there is firm belief in the efficacy and potential of youth involvement in digital awareness programmes. As a case in point, the consultative youth programme at Portugal’s Safer Internet Centre demonstrates the substantial commitment of young people towards enhancing internet safety, a key element of digital awareness.

Significant reference was made to the INSAFE network structure, identified as a best-practice example. Again, the devotion shown by young people towards the cause, as well as their active participation, is emphasised. The impact of these initiatives highlights the importance of youth participation and serves as a compelling testament to their potential contribution to a safer, more inclusive digital environment.

In conclusion, youth engagement plays a pivotal role in digital education and the shaping of a more inclusive digital future. The insights provided by these analyses point towards a burgeoning paradigm where young people are not merely recipients of digital knowledge but are increasingly integral to its creation and distribution.

Deborah Vassallo

The crucial role of young individuals in discussions surrounding online safety has been showcased effectively by the Safer Internet Centre. Actively involving young individuals in operations such as hotline and helpline discussions has proven beneficial. Young people not only assist with helpline calls, gaining a profound understanding of online safety issues, but also participate in the creation of materials for the Safer Internet Centre, providing valuable consultations. This youth involvement aligns with the objectives of SDG 4: Quality Education, SDG 9: Industry, Innovation, and Infrastructure, and SDG 16: Peace, Justice, and Strong Institutions.

The importance of producing positive digital content, alongside understanding online risks, is further emphasised through initiatives like the ‘Rights for You live-in’ annual event. During this gathering, young individuals were granted the opportunity to interact with content creators and craft their own digital content. A notable outcome was a video created by these young people for Safer Internet Day activities, which highlighted their online experiences.

Moreover, the argument for offering young people decision-making roles in activities related to online safety is strongly supported. Evidence of this is seen as the entire coordination of all Safer Internet Day activities was entrusted to young individuals. They were also brought into deeper-level discussions on issues related to online safety, and their creations for Safer Internet Day were disseminated broadly across schools.

However, the analysis also brings forth the concern for children in residential and foster care who are considered more vulnerable to online harm. Their use of online chats to seek understanding and love due to their disadvantaged situations is a significant concern. This sentiment is associated with SDG 16.2: End abuse, exploitation, trafficking, and all forms of violence against and torture of children.

Nonetheless, there’s a sense of optimism spurred by the actions taken by the Safer Internet Centre to protect these vulnerable children. Initiatives such as inviting them to the ‘Rights for You’ live-ins, conducting practical workshops about privacy and online chat safety at children’s homes, and providing an express helpline for those experiencing online harm, underscore the Centre’s commitment to providing immediate help and education about online safety.

In conclusion, the endeavours of the Safer Internet Centre offer a hopeful sentiment for the progression of online safety measures. This comprehensive involvement of youth along with initiatives for vulnerable children recognises and addresses the complexity of online safety issues, providing a balanced approach towards creating a safer online environment for all.

Session transcript

Moderator 1:
I would like to share today’s discussion in collaboration with my colleague and friend, João Martins. Before I go on, I would like to emphasize that João Martins has been collaborating with the Portuguese Safer Internet Centre, which I am responsible for coordinating, since he was very young, and was a youth ambassador for Better Internet for Kids, spreading the word about safety issues and internet governance and advocating for youth participation, from national initiatives to European and international forums such as EuroDIG and the IGF. As you may have read, and perhaps that’s why you are here, this workshop is organized by the INSAFE network, the European network of Safer Internet Centres, in cooperation with the Portuguese, Belgian, Maltese, Polish, and UK Safer Internet Centres, who will share their best practices on youth participation and how they are working together, co-creating and developing new initiatives, resources, and campaigns to effectively reach this target group, including children in vulnerable situations, and to tackle online trends. Following the panel presentations, we will open a question and answer space, so we can discuss and clarify the topics, and the panelists can provide comments. As we are all aware, it is a challenge to reach children in vulnerable situations. In today’s world, children are vulnerable for many reasons: poverty, disability, mental health problems, abuse or neglect, family breakdown, discrimination, and social exclusion, not to mention migration. Several programs are designed to help social groups from different backgrounds, including those who are vulnerable. While these groups face different challenges, they share a common need for online safety in an increasingly complex social environment. However, all children can be considered vulnerable, as they grow up in a world where decisions are made by others, by adults, often with a very different perspective, and feel the pressure to adapt to a world where rights are not protected, risks are everywhere, and technological developments in the digital environment are beyond imagination. In such an environment, and faced with a multitude of everyday problems, children are called upon to develop emotionally, intellectually, and technologically in a world where their voice is not heard, and decisions are made by others who are often not as familiar or connected with their needs. The new European Strategy for a Better Internet for Kids, adopted by the European Commission in May 2022, aims to provide a delicate balance between digital participation, empowerment, and protection of children in the digital environment. Better Internet for Kids Plus is an adaptation of the 2012 BIK strategy, following a decade in which technological developments have exceeded all expectations. The new strategy, adopted after a long consultation process, aims to put children at the forefront of the developments and decisions that will be made by the key stakeholders and industry in the digital environment in the coming years. Children, as digital citizens of the future growing up in a digital environment, deserve to have a say in developing safeguards and their rights, and to shape the world they will live in. Therefore, to go deeper on these matters and to give us an overview of the Better Internet for Kids Plus strategy and the BIK Youth programme, I’m delighted to give the floor to Sabrina Vorbau from European Schoolnet.

Sabrina Vorbau:
Thank you so much, Sofia. We can go to the presentation. Good morning, everyone. My name is Sabrina Vorbau. I’m a project manager at European Schoolnet. European Schoolnet is a network of ministries of education. We are a non-governmental organization based in Brussels, Belgium, and we are coordinating, on behalf of the European Commission, the Better Internet for Kids project that Sofia already mentioned, and as part of this project, we are also coordinating the network of European Safer Internet Centres. Myself, I’m part of the coordination team, and I coordinate the work that we are doing with the young people, so I will start by setting the scene a little on how we involve young people at a pan-European level in our activities. We follow a very simple principle, encouraging the young people that their voice matters and that they deserve to have a firm seat around the table when decisions are being made, when it comes to shaping policies like the Better Internet for Kids Plus strategy, but also when industry is developing new devices and new tools. These young people that we are working with at European level originally come from the national Safer Internet Centres. At the moment we have a group of 40-plus BIK Youth Ambassadors, as we call them. As you can see here on the slide, I shared a few pictures with you where you see our ambassadors in action in various different settings. The BIK Youth programme allows the young people not only to connect with each other, to meet like-minded peers from across Europe, but also to have a one-on-one exchange with other stakeholders in the field, these stakeholders being policymakers, politicians, but also representatives from industry and IT companies. You also see here that obviously the pandemic was a very challenging time, but it did not stop the BIK Youth Ambassadors from continuing to raise their voice. So you can also see here that we had various occasions during the pandemic to continue connecting the young people with important stakeholders. As I said, when we are talking about shaping a safer and a better internet for children and young people, they need to be part of the decision, because they’re not only the future, but they’re also the present. We are able to do so, as Sofia mentioned already, because youth participation is very well endorsed at a European level through the Better Internet for Kids strategy, which has existed for various years already but was reshaped last May. It was really built around three pillars: youth protection, youth engagement and youth empowerment. You can see here that young people really are at the heart and at the front of this new strategy, because we believe that every young person has the right to play online, to learn online, to connect online, to create online, to watch and to listen online, but also to share their content online. As we are talking about a policy on a better internet for kids, the European Commission also felt it very important that this strategy, this policy, is accessible for the young people themselves, so they understand what is actually written in the policy and what their rights are. So we also developed, and I’m sharing it here for the room, a youth-friendly version of the strategy. This was developed with the BIK Youth Ambassadors; we worked together with them on this youth-friendly version.
They advised us on the language of the youth-friendly version, but also on the visuals and the colors we would use, because we wanted to make sure that this youth-friendly version is really accessible and makes sense to the young people as well. So I think this strategy is a really true milestone, because it not only encourages us, but everyone in the field in Europe, to apply this strategy. There are also very clear endorsements for policymakers and also industry on how they should work and co-create with the young people. To finish my part of the presentation, I just brought a very small video that actually shows our BIK Youth Ambassadors in action. We onboard them every year; there’s an opportunity for new young people to become part of the group. This happens through the annual BIK Youth Panel, which is organized back-to-back with our annual conference, the Safer Internet Forum, where we give young people the possibility to share their views and a platform to exchange one-on-one with stakeholders. Obviously, there’s a very long preparation process in place, as we typically start working and preparing with the young people three months in advance online, and then a few days before the conference, they come on site and start working on their intervention. So this is a little bit of a summary video of what happened last year at the BIK Youth Panel. I hope this works; maybe I can ask the technician to click to play the video. [‘We’re singing to let you know we’re here. We are, we are, we are…’] I think we can stop the video here. I just wanted to have you listen to the first minutes of the video and of the song, because, as I said, it’s really important for us to engage the young people in a meaningful way. So what we typically do is discuss with them the skills they have, and you have seen in the video that they come with a lot of talents, and how they want to bring their message across. So last year, the group decided to write a song about a better internet for kids. These young people come from different European countries, and they started rehearsing it online; then they performed it at the Safer Internet Forum. They also decided to create posters with messages, and they wrote these posters in different languages, because the main common language, and the language of the conference, is English, but it’s also important to promote the national languages and bring the message across in them. So I hope the video gave you a little bit of an insight into how we are working with young people. As I said, it’s a very long and intensive process, and we get great support from our national Safer Internet Centres, but really, last year, the Safer Internet Forum put the young people at the heart of the conference. We also worked with some of them who helped us organize the conference and shaped the program with us. So that’s also a bit of the philosophy: to include the young people from the early stages on, not only when the end product is there, putting it in front of them and saying, okay, now what is your opinion, please use it. We are involving them really when we are starting to share and create our ideas. So I think this is really important if you want to get active in youth participation: include them from the beginning. And then another important element is the follow-up process: after the conference, we continue to stay in touch with them.
We try to involve them in other activities; for example, João is here with us today at the IGF. This is also important, so that they don’t feel, okay, I have been invited to this one event, and now the event is over, and nothing happens. So involving them from the beginning, but also making sure that you stay connected with them, that you give them future opportunities, that there’s a follow-up for them: I think these are the two key elements when it comes to youth participation. I will stop here, and I think I immediately hand over to my colleague Niels.

Moderator 1:
Yes, thank you so much, Sabrina, for this very precise overview of what youth participation has been doing in Europe. Now I will give the floor to Niels. He comes from the Safer Internet Centre of Belgium, so the floor is yours.

Niels Van Paemel:
Hello everyone. I’m Niels Van Paemel, and I work as a policy advisor for Child Focus, which is the Belgian foundation for missing and sexually exploited children, but in this context, of course, the coordinator of the Belgian Safer Internet Centre. Can we go back to the slides, please? Yes, you’ve seen my photo long enough now. I’m going to focus on how we include children and young people in our day-to-day work, but specifically in our prevention work. It’s already in our name, we are Child Focus, so we focus on children, and we try to make the internet a safer place for children. So it makes sense that, of course, in everything we do, both on the operational level and on the prevention and awareness level, we include children in every aspect of it. I’m going to show you just a few tools that we made together with them, to show how our co-creation processes happened and what they resulted in. For the first thing we did together with children, let’s actually go back a bit in time. A few years ago, just before COVID, we realized that as a helpline, where children, but also parents, teachers, everybody who works with children, can call us when they have questions about making the internet a safer place for children, children were actually not represented highly enough, and we were wondering why. We did a study, and of course, now it’s super obvious, but it took some time for us to realize. First of all, we were a helpline that you could only call, but as we know right now in 2023, children and young people don’t like to call anymore. They like to chat, and they want to use their phones to write words and type them to us. These are things that we realized through focus groups and working with children. And the second thing: the topics that we deal with at Child Focus are not only about your privacy settings; they can also be about cyberbullying, and can even be about sexual exploitation, when there are images being sent without consent, even sextortion or grooming. I will go into that later. But children don’t always want to talk with their parents about this, and not even always with professionals like us. So we decided to follow the Danish best practice that was shared within the network and to make a peer-to-peer platform called Cybersquad, because sometimes parents remain an important source of information or help for their children, but sometimes the perspective of another teenager can speak even louder, especially about certain topics that children need their peers for. We created this. It’s basically forum-based: you can ask questions and peers can answer. And we really saw that young people stepped up to help each other, using their own experience, which made us realize that we really need to put those children in the driver’s seat. We can even see them as the experts they are in some fields, and especially in the digital field they are, and we should really see them like this. Of course, when things get really serious, there is always the help button where they can be in direct contact with a Child Focus counselor. We are not letting children do psychological counseling with each other, but this is something that we are really proud of, and we’re seeing that it works. We created this together with a group of children aged 10 to 18 in order to get their perspective. It was tested and also adjusted based on their views.
Two other examples, but first I want to say that, of course, we are big believers in co-creation and we see children as the experts they are; I already said this. In the last year, just to give a number, more than 2,500 children and young people were consulted in the creation or testing of e-safety tools or awareness-raising campaigns, resulting in, for example, and I’m going to pick two, Groomics. This is an educational tool that we made to make children and young people aware of and resilient to phenomena such as online grooming. For the people who don’t know what grooming is, it’s the process whereby an adult tries to gain the confidence of a minor in order to sexually abuse them later, and the whole grooming process goes on online. So as a teacher or a supervisor, you can make young people in your group aware of some of the risks associated with chatting with strangers online, but at the same time, you can also highlight the benefits of online friendships and relationships. This is the hard part with a phenomenon like grooming: we are really trying to show the internet as the positive place full of chances that it is, but at the same time highlight some risks and create resilience among young people, so that they can detect red flags and become even more of the experts they already are. We have to stop focusing on forbidding things for young people, because that is not a long-term solution. We have to empower them to detect what is not okay and to say no. And the last tool that I wanted to show you is the MAX principle. I already talked about young people helping each other on the Cybersquad platform, but it is also our dream that every child in the world has an adult, a person of trust. And this is not the case yet. This does not necessarily have to be a parent; it can also be a teacher, an uncle, somebody from the sports club, just so that young people, when they feel threatened, when there is something going wrong in their online world, have somebody to talk to. MAX is a name, and we are putting young people in the driver’s seat: they choose their MAX. With this whole campaign, if you go to this website, I will show it later, you can see there is a form for young people to fill in, choose their MAX, and send an official invitation to this person, and then he or she becomes the official MAX. Okay, yeah, I just wanted to say that it’s about young people choosing for themselves who their person of trust is, in order to create a safer online environment for them. That’s it for me, thank you.

Moderator 2:
Thanks, Niels. Yeah, let’s then hear from Deborah Vassallo, representing the Foundation for Social Welfare Services and the Safer Internet Centre Malta. Thank you.

Deborah Vassallo:
Thank you. So I’m Deborah. Our Safer Internet Centre is called BeSmartOnline. We have the Safer Internet Centre, the hotline and the helpline, and obviously also the youth participation. We do different activities, but we try to involve young people as much as possible, especially in the consultation on the resources that we create. We also invite them to assist with some of the calls that we receive on the helpline, in order for them to get an idea of what young people are encountering when it comes to online safety. During the focus groups that we do with young people, we also try to discuss different case scenarios with them, to give them as much exposure as possible. As for some of the activities, let’s go to the slide. One of the best practices that we run every year is what we call the Rights for You live-in, in which about 50 participants take part. We go to all the schools in Malta to invite children to come during the summer months; it’s usually in August, for about three days. This live-in is all about online digital rights, about being digital citizens, about their responsibilities online, their rights. During the live-in, they have the opportunity to discuss their online experiences. This year, we also gave them the possibility of meeting a person who creates a lot of content online, and each one of them had the possibility to create positive online content, because we do believe that it’s not only about online risks, but also about encouraging young people to develop online content themselves, which can be seen by other young people and can serve as an example. We also involve them in discussions at a higher level, up to ministerial level, because we’re part of the Ministry for Social Welfare Services. This way, they can also discuss online issues and what they would like their space to be, for it to be a safer space. And obviously, we involve them a lot in the Safer Internet Day activities, including on the outside, when we go to do information days with the public. But this year, we gave them full direction of all the Safer Internet Day activities, and they decided to come out with a video which they created themselves: they asked a group of teenagers about their experiences online and created the video. From the development stage to the production, it was all made by young people, and then we disseminated it to all the schools during Safer Internet Day. So we thought that would give them this opportunity. It was a huge success, because they felt that they were producing something for the Safer Internet Centre and that they belonged to it. Thank you.

Moderator 2:
Thank you, Deborah. And now I will turn to Anna from the Polish Safer Internet Centre. So Anna, please, the floor is yours.

Anna Rywczyńska:
Hello, everybody. Yes, Poland joined the Safer Internet Programme in 2005, so from the very beginning, when the programme started at the European Commission. We deliver all three compulsory branches: we have the awareness part, we have the hotline and helpline, and of course, we also have our young ambassadors. Maybe we will start with... OK. Yes, our youth panel has existed since 2010. At first, it was called the Congress of Young Internauts, but then we asked our young people if they still wanted to be named like this, and they said, no, no, no, we would like to change this name.
So I think two years ago we changed the name, and they picked the name Digital Future of Students. So that was something that they decided to have. We cooperate with a youth panel coming from, I think, 12 schools at the moment, but schools from all over the country, from the big cities and also from the little towns. We organize about three to four meetings yearly, and we consult with them on all our materials: all the campaigns that we plan, all the educational resources. We invite them to events. We even work within the INSAFE network and also exchange youth ambassadors. For example, during our recent international conference, we had a youth ambassador coming from the Czech Republic, and he was speaking about what kind of actions young people need from us, and how to promote our actions in the field of internet safety to make them interesting, good and meaningful for young people. We also, of course, take part in the pan-European meetings. Each year we pick our representative, and he goes with us to the Safer Internet Forum, and I must say that the last Safer Internet Forum was really impressive. Usually we had some presentations coming from young people, but this time they really participated. When we had the World Cafe discussions, they were sitting at each table and gave fantastic insights, so I think we have really changed our perspective on working with young people; their voice is really heard. Sometimes we also travel with them, to spend two or three days somewhere and really have time for deeper discussion, to give them knowledge so that they can be ambassadors in their regions, but also, most importantly of course, to hear from them. And there was another interesting initiative that we did recently: we organized webinars, five of them. The youth panel decided on the topics of those webinars. Each webinar had a moderator coming from a well-known Polish radio station that is targeted at young people, so it was moderated by a journalist, and then there was a representative of the youth panel and an adult expert. All the topics were chosen by the youth panel, and it was widely promoted among all schools in Poland, so they could connect and listen to those webinars. They are of course also recorded, so the schools can come back to those topics; the topics included gaming, relationships and cyberbullying. And the other one, I think, really is a best practice: the Digital Youth Forum. We have organized the Digital Youth Forum since 2016, and this is a big conference, a big event for young people, at which young people are speaking. These are influencers, these are young social activists. So they are the speakers, and we have young people as the audience. These are events with from 400 even up to 700 people in the room, depending on the year and the conference hall that we had. It’s also organized in a hybrid mode, so we invite schools to join the event online, and it’s really working very well. I think last time we had around 100,000 kids watching the conference. And this time, I will talk about it a bit more later, we have lots of children from Ukraine, so we also make it available for Ukrainians.
They can join the conference, they can come to the conference; even schools in Ukraine joined this event and watched from their country, so of course we provide them with a translation into Ukrainian. So this is a conference that is really successful, and I think that’s it for now. Thank you.

Moderator 2:
And keeping the rounds going, let’s now give the floor to Boris Radanovic from South West Grid for Learning.

Boris Radanovic:
Thank you. Thank you to the moderators as well, and my dear colleagues, I loved your presentations, and the quote, I think from Sabrina, or the song, ‘with or without you’: I really hope it’s with us and not without us. My name is Boris Radanovic. I’m coming from the UK Safer Internet Centre, and I do firmly believe that the voice of youth is the voice of the future, and the sooner we start listening, the better the future will be for all of us. We need to find ways to respect those voices, but at the same time find ways for those voices to shine through. And yes, Niels, I absolutely believe they are experts, and yes, Deborah, I do believe that every child deserves to have a trusted adult, but oftentimes they do not. Children do not have a trusted adult, or even better, a trusted educated adult, and then the question is what to do there. We know from science and we know from data that children are five times less likely to reach out to a trusted adult when they encounter or are in an unsafe online situation than to talk to their friends. So today I’m here to talk to you about empowering young people to educate their peers about online safety, and the program we developed in the United Kingdom for the UK Safer Internet Centre with our partners ChildNet International. It’s a program that champions youth digital citizenship and creativity. It is effective, it’s fun, it’s an online platform, and at the same time it contributes towards an outstanding whole-school approach to online safety. More than 5,000 digital leaders have been educated so far, and they’re active in their school communities, and we have reached thousands more as a result of this peer-to-peer learning model, which we firmly believe is an excellent way of going forward and giving space to those voices of children. The majority of them are 7 to 18 years old. They have learned how to become a digital leader in their school, up to 15 children in each school, through gamified and interactive online experiences, learning modules and group work. The program enhances not only online safety but also the understanding of online safety issues, and the advocacy and power that young people can have to make the changes we all so sorely need in the online space, to create a better and safer internet for us all. In the end, one thing that we all must work towards is instilling confidence in children not to be passive bystanders but to be active upstanders on many of the issues that we see online. At the end, I want to invite you all as well: our partners ChildNet International have a film competition for all the youth around the world, where they can advocate, engage in activities and create a film that they can submit for one of the biggest international film competitions. And I think it’s important to note that INSAFE also has a booth, so please do visit us at the booth so we can continue talking about this. The voices of children are the voices of the future, and we need to start listening to them now. Thank you very much.

Moderator 2:
Well, I think we should open the floor after this brief introduction of different initiatives. Most of all, we want to make the conversation a global conversation. I already had the opportunity to meet someone in the booth area today. I don’t want to put you on the spot, but if you could share with us a little bit about your initiative, that would be very nice, perhaps linking in to what we discussed here. It’s mainly because, of course, the INSAFE network is mostly focused on the European geography, but most of all, it’s good to reflect that we’re also engaged in other geographic regions. So the idea here, and please introduce yourself and the initiative, is to seize the opportunity that the IGF provides for this exchange of best practices.

Audience:
Thank you very much. My name is Brigitte Chalibian. I’m a lawyer and director of Justice Without Frontiers, coming from Lebanon, and I think we are the only Arab organization participating in the IGF, or there are two Arab organizations. Well, in Lebanon, since 2019, we have had the economic crisis, and the rate of domestic violence has been increasing, including cybercrime. A recent report issued by the Internal Security Forces mentioned that cybercrimes have increased by more than 100%. And last year, we started receiving calls from schools asking us to provide awareness sessions for school students about bullying and about violence, because, as I mentioned, the rate of bullying among students is increasing, and students are using different methods. So last year, we provided awareness sessions for 1,700 students. It was a good number for us, and we were able to reach private schools, and this year we will start reaching the public schools. As well, regarding the cybercrimes, we provided legal consultation and representation for girls who were sexually harassed on online platforms. And we had good cooperation from the ISF, because we have a hotline, but not all people are aware of it. So for this year, we are trying to organize ourselves more; we found the need for that. We are working on a cyberbullying policy paper. It will be done by next month, when we will hold a roundtable to discuss it, and we will involve the key stakeholders. At a later stage, we will select four or five schools in Lebanon to conduct a pilot project with them. We will have a bullying policy paper, and we will have a program to work with students, teachers, and the administration, and at a later stage we will be working on cyberbullying. The issue that I want to highlight, and the importance of my presence here, is that I am learning more from you and getting tools. Because when we started going to schools, we didn’t have any material in the Arabic language for students about cybersecurity, about bullying, and so on. So I was going through YouTube to find some, because with children, I cannot go to them with a PowerPoint; I need to show them films, depending on their age. So we play games with them, and through YouTube we found some short films talking about cybercrimes and what to do. And the best thing that we did was with one of the biggest schools in Lebanon, which has more than 3,000 students: in their football stadium, we put up an advertisement for one year. It is about what bullying is, focusing on bullying among students only: what bullying is, how to report it, what the role of each student is. We also stated that this is a crime and, for when there are cases, we listed the police number, and it will be left up for one year. We made it in the Arabic language, and we received feedback from family committees in the school asking to have it in English as well, and to have one such poster in each class or section. So this is a short summary about Lebanon.

Moderator 2:
Thank you, Brigitte, very much. And Brigitte told me that she’s a newcomer to the IGF space, so it’s really nice to have that feedback and to have you included from day zero in the discussion. So thank you for that. May I ask if we have questions coming from online?

Audience:
No questions yet, but there is actually one comment worth mentioning: a new initiative, the Dynamic Teen Coalition, has just started within the IGF, and I think this may be a very good opportunity for young people to join and take part in shaping the landscape of Internet governance. Thank you.

Moderator 1:
So, thank you for your contributions. I would like to go back to the panel, and we have a question for them. The question is: how do you ensure your youth participation activities are inclusive and accessible to vulnerable groups? We’ll start with Boris.

Boris Radanovic:
Easy question.

Moderator 1:
Please, go ahead.

Boris Radanovic:
We know a lot, but we’re still missing a lot of data. From the data that we have, we know that the children most at risk in physical life are most at risk in digital life. That’s what we knew, but we didn’t know how much: it’s three to seven times more likely. If you have any accessibility or disability issues, or any special education needs, you’re three to seven times or even more likely to be abused online. So the challenge is not just creating safe spaces for children, but having them accessible and ready on the go for the children, youth and adults who fall into those minority categories. Your adversities and diversities put you at a disadvantage compared to all of us online, because the spaces we create online are not safe enough for all of us to be in. So we need to build more inclusive spaces online, and we need to think about much wider groups of people than whatever group we are creating the app or platform for, especially looking at vulnerabilities, abilities, disabilities and many of those special categories of minority groups, so we can create those spaces better for them. And we cannot create those spaces in isolation, working just by ourselves. We know from the online safety space and the protection of children that the multi-stakeholder approach is the only way to go, especially through education. So from my perspective, I would advise working with the experts in the field, with those minority, ability and disability groups, to understand their perspectives and their needs, and then create the spaces around that, not the other way around. They have the knowledge on the ground, and we can educate the educators who already know the specific needs and necessities of those specific groups, who are sensitive to many of those things. Then we can work with them to develop online spaces that recognize differences but allow us to adapt to them as well, and create a better and safer environment for us all, but especially for the categories of children and young people who are most disadvantaged online. Thank you.

Moderator 1:
Thank you. Deborah?

Deborah Vassallo:
I will follow up. Thank you. So, from our Safer Internet Centre, we mainly target children who are in care, mainly residential care and foster care, or residing in residential homes. The agency I come from is the Foundation for Social Welfare Services, so it has the fostering section incorporated in it. These children come from a disadvantaged situation at home, and we found that they were especially vulnerable to online connections, because they would be chatting to people to replace the love they are missing, and they want to talk about their situation to someone out there who would understand them. That would put them in a vulnerable position. So we try to engage them as much as possible in our programs. We invite them to the Rights for You live-ins. We also go to their residential homes, where we do practical sessions, even about privacy and about who they’re chatting to, and we give them what is, in a way, an express helpline route, so that they can call us directly if they experience some form of online harm and we can help them immediately.

Moderator 1:
Thank you. Anna?

Anna Rywczyńska:
Yes. As the Polish Safer Internet Centre, we try to support all the underserved groups. However, I would now like to focus on the refugees, because, as we all know, from 24 February 2022, when Russia attacked Ukraine, the Polish border was crossed by 16 million refugees. Many of them have already returned home, if they could; many of them moved to other countries; but we still have about 1 million people from Ukraine living in Poland, mostly women and children, and we have about 200,000 kids attending Polish schools. So what we could do as the Polish Safer Internet Centre was, of course, to include them in our actions, to give them support in the field of internet safety. As I said before, we translate all our events simultaneously, and we also enable remote connection to our events. We also translate our educational resources into Ukrainian and promote them in schools. And we did something to really give support: we hired Ukrainian-speaking consultants for the helpline, psychologists, to be able to respond to reports from children from Ukraine. Our hotline is also better prepared to work on those reports. The Polish Safer Internet Centre consists of two organizations: the National Research Institute NASK, which I represent, and the NGO Empowering Children Foundation. At NASK we also have a special department fighting disinformation and fake news, and they deal a lot with disinformation related to the war. So this is a big part of our everyday actions. Thank you.

Niels Van Paemel:
And now, to finish: yeah, a lot has already been said, so I’ll try to focus on a different group. We did a check of our website a few years ago, and we found out that unfortunately it’s not inclusive enough for everyone. So now I’m going to talk about children on the spectrum, autism spectrum disorder. This is a group of children who, first of all, need easier language. Also, our website was way too flashy in the resources part, where they can find information. So it’s not only about the words and trying to make sure our content is inclusive; it’s also about the way you present it. And secondly, when we’re talking about e-safety for children and empowering them, we’re talking about a group who has a lot of difficulties with that. We’re talking about, for example, discovering red flags and having their gut feeling empowered: people on the spectrum have a harder time detecting red flags and a harder time trusting their gut feeling. So we felt that we really needed to make something specific for this group, and it turned out to be the Star Plus tool, co-created with children on the spectrum. And it’s really beautiful; you should check it out, Star Plus from Child Focus. It’s already being translated: initially it was made in Flemish, so Dutch; there’s a French version; and now we’re trying to translate it into different languages, thanks to the INSAFE network. Just to conclude: why is it so beautiful that we are all here from different countries? We don’t need to invent the wheel over and over again. If Boris makes a beautiful tool in the UK, we can take it into the Belgian context, and of course you localize it. But we can learn so much from each other, and that’s a really beautiful thing here. Thank you.

Moderator 2:
Before we wrap up, I just want to make sure: does anyone here in the audience or online have a point or question that they want to raise? Perfect, it’s a great opportunity. If you could stand, say your name and your affiliation.

Audience:
Hello. Is this working? Yes. Hi, my name is Amy Crocker, from ECPAT International, which is a global network of organizations fighting the sexual exploitation of children. Thank you so much for your presentations. This is probably something you deal with often, but one of the things I find interesting about working with young people, particularly when you have things like youth platforms, is that sometimes you have youth who are very empowered and are used to doing this. How do you reach the kids who are too shy, too reticent to come forward? Because obviously you’re building the future ambassadors, but you want to make sure that you’re also going horizontally. So I’d love to hear a little bit about that.

Sabrina Vorbau:
I can go first, Boris, and then Shibin. It’s a good question; it’s a valuable question. Of course, many of the young people that we work with at European level, as I said, have previously been working with the colleagues here in the national field, so they’re kind of already experts. They have a good knowledge of English; they know how to articulate themselves. But what I always find very impressive with these young people is that they’re so aware of the responsibilities they have when they come back from these meetings: what they have to do at home, in their local community, with their families, with their schools. We recently invited, and this is just to give a very concrete example, two of our BIK Youth Ambassadors to come to a European Commission event in Brussels. While we were waiting, sitting in the audience, I was with one of the youth ambassadors from Malta. It was a hybrid event, so it was streamed online, and the youth ambassador told me, oh, actually I shared the link with my classmates. And I said, oh, wow, so they’re going to watch you, and maybe they’ll get interested and would also like to do this in the future. And he said, oh, well, actually my best friend is really shy; I don’t think he would ever do this, but it’s not a problem, because I’m here today and I also represent his concerns. So I think these young people have a very great sense of responsibility. They come back from these meetings, they go into their community, they share what they’re doing, and in this way they make sure that those who are maybe more introverted, and that’s perfectly fine, not everyone needs to be on stage all the time, have their voice included as well. Thank you.

Moderator 2:
I think it’s Boris. One minute for each?

Boris Radanovic:
I’ll be quick.

Moderator 2:
Me too.

Niels Van Paemel:
Split the minute.

Moderator 2:
Split the minute.

Boris Radanovic:
Okay, 30 seconds, Niels. I think what Sabrina said is important; I absolutely agree, but I think that’s the power of peer-to-peer-led programs. You can educate the leaders, but the leaders need to educate the others and create those cohorts of young people who support them. But I would emphasize, as Sabrina said, using tech that enables exactly that: equal participation, anonymous participation, participation where you don’t need to stand in front of a camera. You can be the director, you can be the screenplay writer, you can be some other part of that environment, which enables everybody to take part. You, as an educator, as a facilitator, need to make sure that all of the voices are included as much as possible. But in the end, children are remarkable at having that broad appeal. Is that 30 seconds?

Moderator 2:
Something like that. That’s perfect.

Niels Van Paemel:
Just to add, I really like this question, because it’s an everyday struggle, to be honest. And I’m not only talking about shy kids: just in general, if we just reach out, just throw out some lines into the audience, we will always have the same public. We will have privileged people, and you create the Matthew effect, where you are just preaching to the choir. You need to be aware, on a day-to-day basis, that there are people out there whom you don’t reach if you don’t do your darn best. So I really like this question. This is something that we should all be aware of. Reach out.

Moderator 1:
Okay. To finish, Anna?

Anna Rywczyńska:
No, I just want to say that when we organize competitions, before they go to the big pan-European youth panel meeting and they send us videos, they never start with ‘for me, what is important is…’; I think they always say ‘my friends’, ‘my generation’. So I think they really feel that they are the voices of their generation. Yes, they are represented.

Moderator 2:
So I’ve been asked to wrap up and bridge this session a little with the rest of the IGF, and with the youth work that will be conducted throughout the sessions. And I think that we can wrap up by saying that, with or without you, children and young people help each other, but they need to have someone to talk to. They most often are engaged in these initiatives that we heard about, being consulted, and sometimes they also feel that it’s easier to talk to each other than to talk to an adult. The important point, also addressing this last question, is that each young person feels supported in the process, whether shy or a leader, and engaged at different levels. For instance, here at the IGF, we’ll be having the youth summit from the start of the afternoon. The youth summit, if you want to know, was a process throughout the year, organizing different regional workshops, and the one in Europe was about nurturing digital well-being, addressing the impact of the digital environment on youth mental health, which could also have been one of the topics raised by the last question that we put to the participants here. So I would say that wrapping up is about understanding the different roles. We were given great examples, and I take the INSAFE structure example and how it works, and offer it as a best practice in other forums as well, because I do believe in it. Sofia kind of introduced me a little bit like that in the beginning: I did start in youth participation, youth engagement and youth awareness through one such program; I was part of the consultative youth program of the Safer Internet Centre of Portugal. And throughout the years, and Sofia was also making a joke about how I keep my skin still very intact, although I am already 26 and it’s been a while, hopefully we’re making positive change, influencing even the more shy, and also, as so many have very well said, making sure that the voices of the shy are also heard by us. Thank you very much. And we are on time. Yes. Thank you so much for your presence, and I think that we can move to the next workshop. So enjoy the IGF. Thank you.

Speech statistics

Moderator 2: speech speed 140 words per minute, speech length 754 words, speech time 324 secs
Anna Rywczyńska: speech speed 162 words per minute, speech length 372 words, speech time 138 secs
Audience: speech speed 164 words per minute, speech length 815 words, speech time 299 secs
Boris Radanovic: speech speed 201 words per minute, speech length 1193 words, speech time 356 secs
Deborah Vassallo: speech speed 165 words per minute, speech length 1688 words, speech time 614 secs
Moderator 1: speech speed 127 words per minute, speech length 736 words, speech time 349 secs
Niels Van Paemel: speech speed 180 words per minute, speech length 1736 words, speech time 580 secs
Sabrina Vorbau: speech speed 154 words per minute, speech length 1762 words, speech time 688 secs

On how to procure/purchase secure by design ICT | IGF 2023 Day 0 Event #23

Full session report

David Huberman

The analysis addresses various topics related to internet standards, security, and vulnerabilities. It starts by highlighting that the Border Gateway Protocol (BGP) and Domain Name System (DNS) are outdated protocols that were not originally designed with security in mind. However, efforts have been made over the past two decades by the Internet Engineering Task Force to enhance the security of these protocols.

The analysis emphasises that enhancements such as Resource Public Key Infrastructure (RPKI) and Domain Name System Security Extensions (DNSSEC) have proven effective in improving internet security. RPKI enables providers to authenticate the origins of routing information, protecting against malicious route hijacking, misconfigurations, and IP spoofing. DNSSEC ensures the integrity of DNS queries, ensuring users receive the intended data from website administrators.
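
To make the DNSSEC guarantee concrete, here is a minimal sketch. It assumes the third-party dnspython library and the public validating resolver Quad9 (9.9.9.9): it sends a query with DNSSEC requested and checks whether the resolver set the AD (Authenticated Data) flag, the signal that the signature chain validated.

```python
# Minimal DNSSEC check: assumes dnspython (pip install dnspython) and a
# resolver that performs DNSSEC validation, e.g. Quad9 at 9.9.9.9.
import dns.flags
import dns.message
import dns.query

# Build a query with the DO bit set (want_dnssec) and ask for authenticated data.
query = dns.message.make_query("internet.nl", "A", want_dnssec=True)
query.flags |= dns.flags.AD

response = dns.query.udp(query, "9.9.9.9", timeout=5)

# The AD flag in the response is set only if the resolver validated the
# DNSSEC signature chain for this answer.
if response.flags & dns.flags.AD:
    print("DNSSEC validated: the answer is authentic as published")
else:
    print("not validated: the zone may be unsigned, or validation failed")
```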

While RPKI and DNSSEC have shown promise, there is a need to increase their adoption. Currently, DNSSEC is only used by approximately 25% of all domain names, while RPKI has greater deployment, particularly among Internet Service Providers (ISPs). However, broader deployment of RPKI throughout the global routing system is necessary to enhance overall internet security.
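
As an illustration of what RPKI validation looks like in practice, the sketch below queries the public RIPEstat 'rpki-validation' data API to check whether an origin AS is authorised by a ROA to announce a given prefix. The endpoint URL and response fields are assumptions based on RIPEstat's published documentation and should be verified against the current docs.

```python
# Check a BGP origin against published ROAs via the RIPEstat data API.
# No API key is needed; endpoint and fields are per the public docs.
import json
import urllib.request

def roa_status(asn: str, prefix: str) -> str:
    url = ("https://stat.ripe.net/data/rpki-validation/data.json"
           f"?resource={asn}&prefix={prefix}")
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    # "valid"   -> a ROA authorises this ASN to originate the prefix
    # "invalid" -> a ROA exists but this origin/prefix does not match it
    # "unknown" -> no ROA covers the prefix at all
    return data["data"]["status"]

print(roa_status("AS3333", "193.0.0.0/21"))  # RIPE NCC's own prefix
```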

The analysis also underscores the importance of stakeholder involvement in internet standards development. Governments and civil societies should actively participate in shaping these standards to meet the demands of a global scale in 2023. The Dutch government’s integration of internet standards with public policy, as demonstrated by their internet.nl website, is commendable.

Furthermore, the analysis highlights the significance of understanding internet vulnerabilities. Regardless of a country’s geopolitical situation, vulnerabilities present opportunities for exploitation for personal gain or the creation of chaos. Therefore, it is crucial for every country, regardless of size or peacefulness, to comprehend these vulnerabilities and implement regulations to minimize risks and secure their networks.

Overall, the analysis concludes that enhancing internet security and promoting the adoption of standards are essential for protecting users and ensuring the stability of the global internet ecosystem. Recognition by the United States government of routing as a matter of national security signifies their commitment to adopting RPKI and verifying Route Origin Authorizations (ROAs). By implementing measures to understand and address internet vulnerabilities, the digital landscape can be better safeguarded, ensuring the safety and stability of the global internet ecosystem.

Annemiek ( Dutch government)

The Dutch government has implemented a 'comply or explain' list for ICT services, which includes approximately 40 open standards, among them specific standards for internet safety. Maintainers suggest which standards should be included in the list. Although the use of these standards is mandatory, there are no penalties for non-compliance; instead, they are recommended for use in services.

To ensure the adoption of these standards, the government actively monitors their utilization, particularly in procurement. Monitoring reports are submitted to the Ministry of Internal Affairs and then to Parliament. Tenders that fail to use any listed standard need to explain why in their annual report. The internet.nl tool is used for monitoring purposes.
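
To illustrate the 'comply or explain' mechanics, the following sketch models a simple tender register: any listed standard a tender does not require must carry an explanation, which is what would feed the annual report. The standard names and fields are illustrative, not the official Dutch list or reporting schema.

```python
# Illustrative "comply or explain" register (not an official schema).
from dataclasses import dataclass, field

# A hypothetical subset of listed open standards.
COMPLY_OR_EXPLAIN = {"HTTPS", "DNSSEC", "SPF", "DKIM", "DMARC", "STARTTLS", "IPv6"}

@dataclass
class Tender:
    name: str
    required_standards: set[str]
    explanations: dict[str, str] = field(default_factory=dict)

    def report_lines(self) -> list[str]:
        """One annual-report line per listed standard the tender skipped."""
        lines = []
        for std in sorted(COMPLY_OR_EXPLAIN - self.required_standards):
            reason = self.explanations.get(std, "NO EXPLANATION GIVEN")
            lines.append(f"{self.name}: {std} not required - {reason}")
        return lines

tender = Tender(
    "Email gateway 2024",
    {"HTTPS", "DNSSEC", "SPF", "DKIM", "DMARC"},
    {"IPv6": "on supplier roadmap for Q3", "STARTTLS": "covered elsewhere"},
)
print("\n".join(tender.report_lines()))
```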

Furthermore, the government encourages community collaboration and engages with major suppliers. The internet.nl tool is used to facilitate discussions, and Microsoft, one of the suppliers, has shown openness to discussions and plans to implement the open standard DANE in their email servers.
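
DANE works by publishing a TLSA record in DNS (protected by DNSSEC) that pins the certificate a mail server must present. A quick way to see whether a mail host has deployed it is to look up the TLSA record at the conventional name _25._tcp.<mx-host> (RFC 7672). The sketch below assumes dnspython; the mail server name is hypothetical.

```python
# Look up the DANE TLSA record for a mail server (dnspython assumed).
# "mail.example.nl" is a hypothetical host name used for illustration.
import dns.resolver

mx_host = "mail.example.nl"
tlsa_name = f"_25._tcp.{mx_host}"  # DANE-for-SMTP naming convention (RFC 7672)

try:
    answers = dns.resolver.resolve(tlsa_name, "TLSA")
    for record in answers:
        # Each record encodes usage, selector, matching type and the
        # certificate association data the SMTP client must verify.
        print(record)
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print(f"no TLSA record at {tlsa_name}: DANE not deployed for this host")
```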

Regarding accessibility, the Dutch government is developing dashboards that integrate internet.nl for accessibility purposes. This demonstrates the government’s commitment to digital accessibility and reducing inequalities.

Additionally, the Dutch government advocates for the adoption of their developed dashboard system for digital accessibility, encouraging other countries to follow their lead.

The government also promotes the use of IPv6, recognizing the societal benefits it can bring.

Rather than enforcing rules, the government emphasizes practical application and experience in the field. They adopt a stimulating approach to encourage the use of standards.

In procurement processes, the Dutch government uses a special tender website that supports open standards. This website includes CPV codes to support the procurement of goods and services adhering to these standards.

Raising awareness and understanding among procurement departments about the technical aspects of standards is considered crucial. Since most procurement departments lack technical knowledge, they need to communicate with the architecture team or other technical employees to understand the technical details.

The government also advocates for online security through the implementation of open standards. They employ the internet.nl tool to verify the application of open standards in organizations.

To incentivize organizations to use open standards, the Dutch government operates a Hall of Fame. Organizations that score 100% on internet.nl, fully implementing all open standards, are rewarded with t-shirts.

In conclusion, the Dutch government has taken significant steps to promote the adoption of open standards in various sectors. Their comply or explain list, monitoring initiatives, community collaboration, and emphasis on practical application demonstrate their commitment to industry innovation and infrastructure. By developing dashboards for accessibility, encouraging other countries to follow their lead, and supporting online security, the Dutch government strives for a more inclusive and secure digital environment.

Mark Carvell

Procurement and supply chain management play a crucial role in driving the adoption of essential security-related standards. These standards are vital for safeguarding businesses, organizations, and individuals against security breaches and ensuring the integrity and confidentiality of sensitive information. By implementing these standards in the procurement and supply chain processes, secure practices are maintained throughout the entire supply chain, from sourcing raw materials to delivering the final product.

The IS3C (Internet Standards, Security and Safety Coalition) is instrumental in gathering valuable information on procurement and supply chain management. This knowledge base provides insights and best practices that can be used to enhance security standards. However, it is recommended to incorporate experiences from other countries to widen the scope and applicability of this resource, ultimately creating a more comprehensive and globally relevant framework.

Unfortunately, incidents of security failures in the UK, such as data breaches affecting police forces and financial services, are reported without appropriate follow-up remedial measures. These security failures can have severe consequences, including compromised data, financial losses, and potential risks to national security. To address this issue, it is imperative to establish a consistent reporting mechanism that accurately documents the consequences of security failures. Furthermore, remedial measures should be widely distributed through channels like the IS3C, enabling organizations to learn from past mistakes and take proactive measures to prevent similar breaches.

In conclusion, procurement and supply chain management are integral to adopting critical security-related standards. The IS3C’s efforts to collate valuable information are commendable, but it is necessary to expand this resource by incorporating experiences from other countries. Additionally, addressing the lack of follow-up remedial measures in response to security failures is crucial. By implementing proper reporting and distribution mechanisms through channels like the IS3C, security practices can be enhanced, and future breaches can be mitigated.

Audience

The analysis of the provided data reveals several key points regarding the government’s adoption of digital standards and the implementation of cybersecurity measures in Canada and the Netherlands, as well as the importance of accessibility in public procurement.

One of the main findings is that the Canadian government lacks a consistent standard in procurement, leading to negative sentiment among the audience. The absence of a single set of standards used by the same government department is a significant drawback. Additionally, each province in Canada has its own legislation governing privacy and digital communications, further contributing to inconsistency in the procurement process.

On the other hand, there is a potential for legislative change in Canada, as Senator Colin Deacon has been spearheading various legislative frameworks. This development is viewed positively by the audience, as it could lead to improved standards and practices in digital adoption within the government.

In terms of accessibility, it is noteworthy that public procurement for accessible digital goods and services is being adopted by several countries, with a focus on the inclusion of persons with disabilities. Standards for accessibility procurement exist in the US and EU and have been adopted by countries like Australia, India, and Kenya. However, it is highlighted that despite the existence of a monitoring system for web accessibility in government procurement in Australia, lack of funding led to its discontinuation.

Turning to the Netherlands, the Dutch government is actively developing dashboards such as internet.nl for accessibility purposes. This initiative is positively received, as it demonstrates their commitment to improving accessibility for digital products and services.

In terms of cybersecurity, efforts are being made by the RIPE NCC and ICANN to improve the adoption of techniques like DNSSEC (Domain Name System Security Extensions) and RPKI (Resource Public Key Infrastructure). The audience appreciates the efforts made by Dutch public authorities and the government in terms of standardization and cybersecurity.

Education and stakeholder involvement are highlighted as crucial factors in ensuring cybersecurity. Both civil society organizations and personal training have proven to be effective in fostering informed discourse and persuading operators to adopt security measures. Additionally, the analysis highlights the importance of proactive implementation of internet security measures to prevent potential digital mishaps that could be costly in terms of time, money, and stress.

The analysis also notes the challenges faced by telecom carriers in implementing new security technologies due to cost and scaling requirements. However, it suggests that subscription services could provide a steady source of funding for ongoing security improvements.

The lack of skilled engineers is identified as another challenge in the implementation of cybersecurity measures, as highlighted by the example of Kyoto University being unable to implement certain security measures due to a shortage of skilled personnel.

Overall, the analysis underscores the importance of consistent standards in procurement, legislative change, accessibility, and cybersecurity in the digital governance efforts of Canada and the Netherlands. It also recognizes the positive steps taken by various stakeholders, such as Senator Colin Deacon and the Dutch government, in driving these initiatives forward.

Wout de Natris

In the realm of cybersecurity, there is a pressing need for a stronger focus on prevention. This can be achieved through secure-by-design procurement and the implementation of internet standards. Initially, internet standards were created by the technical community without prioritizing security, resulting in vulnerabilities. Most procurement documents do not mention security, and when they do, they fail to specifically address cybersecurity or internet standards. Governments can help improve cybersecurity by demanding the incorporation of these standards when procuring software or services. By doing so, they can significantly enhance cybersecurity measures and reduce the risk of cyber attacks.

The DNSSEC and RPKI Deployment working group aims to bring about a transformation in the cybersecurity conversation. This initiative represents a shift towards prevention and secure-by-design concepts, moving away from solely focusing on mitigation efforts. The goal of the working group is to translate ideas into tangible actions, aligning with the objectives set by the UN Secretary-General.

Securing the public core of the internet is of paramount importance. This includes both the physical infrastructure and the standards guiding its operation. Currently, there is a lack of recognition and implementation of these standards, leaving the system vulnerable to cyber attacks.

While tools and standards to enhance security have been available for a significant amount of time, their adoption has been sluggish. This highlights the need for greater awareness and proactive efforts in implementing these measures.

The existing state of the internet infrastructure can be seen as a market failure, necessitating governmental or legislative intervention. The internet infrastructure is outdated and requires improvements. The European Union has already implemented significant legislation affecting both the infrastructure and the ecosystem.

Wout de Natris, an active participant in the discussion, advocates for a change in the narrative surrounding cybersecurity. He emphasizes the importance of persuading people in leadership positions to prioritize cybersecurity from the outset, rather than treating it as an afterthought.

The protection of the routing infrastructure is now considered critical for national security by the United States government. Government officials are actively discussing routing security principles and the use of technical terms such as RPKI (Resource Public Key Infrastructure) and ROA (Route Origin Authorization). There is an intention to leverage government power to enforce regulations and ensure the secure functioning of the routing infrastructure.

If organizations do not voluntarily adopt internet security standards, the government may impose regulations on them. This highlights the importance of voluntary adoption and proactive implementation of security measures within organizations.

Government procurement is identified as a potential game-changer for the deployment of secure internet design standards. De Natris suggests that if regulation is avoided, procurement can be a powerful tool for promoting security standards.

IS3C aims to hand over knowledge and tools for cybersecurity, with the responsibility of implementation and deployment lying with respective countries. Their work has broader implications beyond internet standards, as they also focus on emerging technologies such as artificial intelligence, quantum computing, and the metaverse. This broad scope aligns with the Sustainable Development Goals and the need for global cooperation.

In the area of Internet of Things (IoT) security, there is an opportunity to enhance security through the adoption of open standards. The existing policies on IoT security across the globe are predominantly voluntary, necessitating a global transformation in the regulatory framework.

In conclusion, a shift towards prevention in cybersecurity and the prioritization of secure-by-design procurement are imperative. The implementation of internet standards and the proactive adoption of security measures are crucial to mitigate vulnerabilities. Governmental involvement, through procurement and regulatory intervention, is necessary to promote and enforce these measures. The work of initiatives like the DNSSEC and RPKI Deployment group and IS3C contribute to transforming the narrative and addressing emerging challenges in cybersecurity.

Session transcript

Wout de Natris:
The light was on when it was off, so it’s on. And now the light is off again. Okay, so it’s on. Welcome to this session, IS3C’s presentation on procurement, government procurement and supply chain management. What you saw online in the invitation almost doesn’t exist anymore, because the people listed there are not here and not able to be present online. So we changed it around a little bit, but not a lot. My name is Wout de Natris and I am the coordinator of the IGF Dynamic Coalition on Internet Standards, Security and Safety, IS3C. We’ve been around since the virtual IGF of 2020. We announced an idea there, and then we had to find people, funding, et cetera, to start working. On my left side is our senior policy advisor, Mark Carvell. He is our online moderator today. And on my right side is David Huberman of ICANN; together with Bastiaan Goslings, sitting there, they are the chair and vice chair of our working group on DNSSEC and RPKI deployment, and why that is relevant we’ll explain in a little while. When we got to 2021 in Katowice, we were able to present a real plan. We introduced three working groups: one on IoT security by design, one on procurement and supply chain management, and one on education and skills. In 2022 in Addis, we presented our first report, by the education and skills working group, which is having its session right now, which I had to run out of to moderate two sessions at the same time, which as you see is impossible. Luckily, the chair of education and skills, Janice Richardson, took over. But what you can see is what we as IS3C try to do: we don’t try just to produce a report or an idea, we want to try to translate it into actions and into something tangible. And that is in line with what the Secretary-General of the UN is striving for, to have the IGF come up with tangible outcomes. In that room right now, room E, the idea of a cybersecurity hub is being introduced. It’s again a concept, an idea; it doesn’t exist yet. But the idea is to, in that hub, translate the outcomes of the education and skills working group into something that universities and industry together could start working on, to make sure that the knowledge gap between supply and demand in education is somehow solved. So that is what is happening today here as well at the IGF. We are here for a different reason. And I’ll put on my glasses to be able to read a little bit. This is on procurement. And why procurement? In the first place because, in our opinion, when we talk about cybersecurity, you always see that it’s about mitigation. It’s always about having to buy antivirus, having a firewall installed, having a cybersecurity institute or CSIRT or CERT as a government or a big organization. But that is all after you got into trouble. The internet runs on internet standards created by the technical community. And these standards were created somewhere between the late 1960s and, let’s say, 2000. In that time period, it was not necessary to think about security. And that’s literally what Vint Cerf says nowadays: if we had known where this beast would go, based on something they created in 1972, then we would have done it differently. We discussed it for about two seconds and then thought, well, we know everybody who’s connected, so why do we need security? And 20 years later, slowly but surely, the whole world started to come online. And we all know what sort of problems we ran into.
But David will tell you about that more eloquently than I could ever do in English, even with the knowledge I have compared to what he has. So why procurement? If governments were to start procuring secure by design, it would mean that they demand these internet standards to be in place when they buy software: that it’s tested software, and not something that just comes off a shelf without anyone knowing what it is. That when you have a website built, it is built according to the latest security standards, and not something that has holes in it all over. So on procurement: we started this working group, as an idea, in 2020. To my surprise, we found that there was very little interest. We couldn’t find the funding, we couldn’t find the people, except for one person, who said: I want to chair this. And she wrote a whole programme, and we had to wait for two years until we found funding from the RIPE organization, the RIPE Community Fund. That research started this year, in January, February, and we are able to present our report here at the IGF on Tuesday. So the research was done by Mallory Knodel and Lisa Rembo, who were supposed to be here but could not be, because they’re traveling. But what it shows, this research, is that there are very few publicly available procurement documents online. I think they found 11, from 11 countries. We asked around; they found three more. We could not find anything from the private sector. And we asked thousands of people: could you share something with us, even if it’s anonymized, so that we can do a global study? So if it’s not there, and that’s the caveat we have to make, is it because there is nothing, or is it because it’s behind bars somewhere where nobody’s allowed to look? But the ones that we could study show an awful lot. Most of them don’t mention security in any way. When they mention security, it’s not cybersecurity. And if it is cybersecurity, then it’s not about these sorts of standards, but on the mitigation side and not on the prevention side. We have one exception, which we found in the room: as Annemiek there can tell you, the Dutch government has a procurement list of, I think, 46 or 43 standards that are all but mandatory to deploy. And that’s the only one that we could find in the whole world. The Dutch government also developed something called internet.nl, which allows you to check whether your organization, or another organization, has any security in place for the domain name, for routing, et cetera. So that is what the situation is. The report will be presented on Tuesday in our own dynamic coalition session, and again on Thursday in an open forum with the Forum for Standardisation from the Netherlands. So we have found very few documents on government procurement, and none from the private sector. Can we draw very firm conclusions? The answer is no, because perhaps there is a lot more in the world that we’re simply not able to access. But can we draw some conclusions anyway? I think that the answer is yes, we can. Because it’s quite obvious, as I said, that internet standards are not recognised. And I think that is something that is important to understand. The internet runs on internet standards. And if we talk about the public core of the internet, and defending the public core of the internet, then it’s not only about the physical cables, but also about the standards that make that internet run.
And if they’re allowed to be attacked 24 hours a day by everybody who feels like attacking, abusing and misusing them, then it is a question why governments are not recognising these open standards in one way or another. So I think that is an important conclusion: despite discussions on defending the public core, the public core is not recognised for what it is. So how do we make sure that that happens? I think that there is, in other words, a world of security to win for everybody. But we have to stop talking about mitigation only. We have to start talking about prevention. And the fastest way to do that, in our opinion, is procurement. But then the next step is: how do we convince people in decision-taking positions to actually procure secure by design, and, at the moment that renegotiating a contract is your job, to actually bring these sorts of standards in? And that is one of the working groups that we are starting this year; it is called DNSSEC and RPKI Deployment. In the end it goes for everything on internet standards, but we took these two as examples. And we don’t start talking about it in the way that it has been spoken about for about 20 years; we’re going to try to change the narrative. I think that I will give the microphone to David in a few minutes. With that, I’ve said about everything that I wanted to say about this topic. What we’d like to learn from you in this session is: what is your experience? What are your ideas? How could you contribute to this discussion? And from there, see what we can go home with. As I said, we’re introducing this concept of a cybersecurity hub. How can we actually activate it? How can we make sure the right people from the different stakeholder groups start working together there and come up with tangible ideas that are translatable either into direct programs or into capacity building programs, or whatever we like to call it? The fact is that something needs to change, because the discussion has been running in the same direction for a long, long time without very noticeable changes. So how do we convince decision-takers to procure secure by design? I think that that is the starting point of our working group on DNSSEC and RPKI deployment. So, David, I’m going to hand over to you right now to explain what your plans are.

David Huberman:
Thank you, Wout. Good afternoon, everybody. My name is David Huberman, and I’m with ICANN. One of the things we’re trying to get people to understand is that when they pick up their device and watch a TikTok video or send an email, or even when they’re involved in a group chat on WhatsApp or Line or just SMS, a lot of people think, well, that’s the internet. But it’s not. Those are applications on the internet. In fact, they run on a whole system of routers and servers and switches and firewalls, all of which form the underlying framework on which we use these applications. This framework is built on common protocols, and there are two protocols that stand as the foundation for almost all of the modern internet. One of them is called BGP, Border Gateway Protocol, and it’s the system of routing, how networks talk to each other. And the other is the DNS, the Domain Name System, which is used as a backbone of communication so that the semantic names we as humans understand can be translated into the IP addresses that computers understand. Now, as Wout noted when quoting Vint Cerf, BGP and DNS are very old protocols. BGP, the version we’re using now, was standardized in 1995, and DNS is even older than that. It comes from November 1983; it’s going to turn 40 years old next month. And when we developed these protocols, the intention was to get them just to work. We just wanted to be able to push packets to and from networks. Did you get it? Did it come back? Yay, it worked. And what’s nice about these protocols is they scale. The reason we’re still using them 30 and 40 years later is because they scale infinitely. But as Wout noted, they weren’t built with security in mind whatsoever. So over the last 20 years, what the IETF, the Internet Engineering Task Force, has been doing is redeveloping these, bolting on security. And for BGP, one of the primary drivers of routing security today is a new system called RPKI, Resource Public Key Infrastructure. Essentially, it allows providers to talk to each other but authenticate the origins of routing information. This has a lot of benefits. It protects us against malicious hijacks of routes. It protects against accidental misconfigurations. And hopefully it can help prevent IP spoofing and other things that attackers use to do bad things. DNS has a similar suite of security tools that we call DNSSEC. And DNSSEC is very important, because when you go to a website, when you go to www.un.org, we want to ensure that the data you receive back is actually the data that the people who run un.org intended you to have. So DNSSEC allows you to assure the integrity of the DNS queries and the data that you’re receiving back. These security enhancements to these two fundamental protocols can significantly increase the security posture of the entire internet ecosystem for all users in the world. And yet, the adoption of DNSSEC stands at about 25% of all domain names. And RPKI, while it enjoys much fuller deployment, especially among the ISPs around the world, we’re still working on increasing its deployment to all of the networks that participate in the global routing system, to get them to digitally sign their routes so that everybody else can validate them. So how do we increase penetration? How do we increase this deployment? This is what one of the newest working groups of IS3C is working on.
We’ve put together a panel of world-class experts, and we are developing a new narrative that we’re going to test against decision-makers at ISPs, decision-makers in public policy, and decision-makers in network operations, to help motivate them to increase the deployment of these security protocols, which in 2023 ought to be a baseline standard that everybody adopts.
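
One way to see DNSSEC validation working end to end is to send a query with the DNSSEC-OK bit set and inspect the AD (authenticated data) flag in the answer from a validating resolver. A minimal sketch, assuming dnspython and a public validating resolver:

```python
import dns.flags
import dns.message
import dns.query  # pip install dnspython

def validated_by_resolver(name: str, resolver_ip: str = "8.8.8.8") -> bool:
    # want_dnssec=True sets the EDNS DO bit, asking the resolver to
    # perform DNSSEC processing on our behalf.
    query = dns.message.make_query(name, "A", want_dnssec=True)
    response = dns.query.udp(query, resolver_ip, timeout=5)
    # The AD flag is set when the resolver cryptographically validated
    # the answer against the zone's DNSSEC chain of trust.
    return bool(response.flags & dns.flags.AD)

print(validated_by_resolver("example.com"))  # True: the zone is signed
```

This only shows that the resolver validated the answer; domain owners still have to sign their zones, and resolver operators still have to enable validation, which is exactly the deployment gap the working group targets.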

Wout de Natris:
Thank you, David. Perhaps, Annemiek, you would like to say in a few lines what exactly it is that the Dutch government is doing. Thank you very much.

Annemiek (Dutch government):
The Dutch government is using a comply or explain list for ICT services, and on that list there are about 40 standards, open standards, including general standards but also, for instance, 15 specific internet safety standards. They should be used in ICT services. And we have a process for organising that: maintainers can tell us which standard should be on that list, and we organise with experts from all over the Netherlands to see which standards should be on that list. And those open standards are mandated, but we cannot, how do you say that, give penalties if they don’t use them; we suggest those standards. And if organisations use them in their services, we name them, so we’re not shaming them, but we’re naming them, in order to have standards adopted more positively, more widely. Besides that comply or explain list, we monitor. We’re monitoring those standards, especially in procurement. So all the tenders the government does for ICT services in the Netherlands, we have researched, and if any standards are not used in the procurement, in the tender, then they should explain that in their annual report. Monitoring is very positive for the adoption of open standards, because twice a year we monitor the special internet safety standards, and we offer that to the parliament, actually. First it goes to the Ministry of Internal Affairs, and we say: oh, you’re doing well, or you’re not doing so well, you have to improve. And we use the tool Wout mentioned, internet.nl, which is very well suited to measure this. And we publish the figures, so it’s more like naming and a little bit of shaming. And in addition, we have community; we encourage community. So we use this internet.nl tool in order to get into discussion with large suppliers. For instance Microsoft: with DANE, the open standard DANE, it turns out that in the Netherlands it’s not used, as you might know, and in discussion with Microsoft, one of the suppliers, we found out that they are open to discussion and that they will change their email server to support DANE, and that is a very nice announcement. In the coming year they will also do the full DANE implementation, I understood. So that is how we use internet.nl for community, and for getting cooperation with suppliers, and that is very nice to have. So those are the three points: mandate, monitoring, and community. Thank you.

Wout de Natris:
Thank you, Annemiek, and sorry for putting you on the spot, but I think that was a nice explanation of what you just heard. And if you are online and you go to internet.nl, just type in any domain name that you can think of, and you will see the results popping up within a few seconds. David wants to respond first.

David Huberman:
Thank you. What the Dutch government is doing with this is an exemplar of a really nice way for government and public policy to integrate with the world of internet standards, which is primarily an engineering-based endeavor. It’s interesting: those of us in the room are here today in Kyoto, and for those of us who aren’t from this beautiful country, there’s something we all have in common right now. We have all brought along these little travel adapters that we use when we want to charge our devices. Why? Because the shape and the voltages of the plugs in Japan are not the same as the shape or the voltages that we use in our home countries. And why is that? Because we have a set of standards for some countries, a different set of standards for other countries, and other standards for yet other countries. We have lots of standards, but no global standard. In my wallet right now, you’ll find Japanese yen, you’ll find euros, and you’ll find American dollars. But it’s funny, because they all do the same thing: I hand them over to someone when I want to buy something from them. The purpose of currency in 2023 is very straightforward, I buy, you sell, and it’s the same purpose around the world, but we use different currencies. In Japan we drive on the left side of the road and our steering wheel is on the right side of the car. In other countries we drive on the right side of the road with the steering wheel on the left side of the car. That’s not only challenging for those of us who drive; it’s also challenging for the manufacturers of those vehicles, because they have to have a whole different set of standards for safety and for operation when the steering wheel is on a different side of the car. Internet standards don’t work like that. Internet standards, from the beginning, from 1969 with RFC 1 through today in 2023, and we have almost 10,000 published standards, are intended to be fully interoperable all around the world. Whether you are in China, in Kenya, in Paraguay or in Iceland, no matter where you are in the world, if you’re online on the internet, we’re all using the same standards. And that’s really important, because it allows for a fully interoperable, fully global internet that, in itself, enables innovation. It’s because it works everywhere that people are able to develop applications and platforms and do amazing things, because it works the same way no matter where you are. And this is where the Dutch government, and where other governments, can really show leadership, because in the development of those standards, it’s 2023, the internet is everywhere in the world, it’s not 1969 anymore. We can’t develop these in a vacuum. We can’t just develop these as pure engineering exercises. Instead, we have to think about the real-world implications of new technologies, the implications of new protocols and protocol development. And so, here at the IGF this week, we strongly encourage governments, parliamentarians, public policy people and civil society to become involved in internet standards development, to offer your expertise to the development process, while at the same time understanding and respecting that so much of what we’ve created is due to it being an engineering-driven endeavor, and the engineers are the true experts in how to do this. So, it’s a real commendation. It speaks very well of the Dutch government.
Internet.nl is a wonderful site that works really well, helping your organization understand where you sit in this world of standards. That’s about what I wanted to share.

Wout de Natris:
Well, thank you, David. I’ll just pause for a second. So, thank you very much. I think that from this side of the table, that is what we wanted to share with you. You now understand what IS3C is about, what we try to achieve, and also what we try to achieve in the near future. For now, we would like to learn from you: how does this concept come across? Does it make sense? We would also like to know: do you know about any of the procurement schemes in your country or your organization? And we’d like to discuss a little the plans that David, together with Bastiaan, has to change the narrative on how to convince people in leadership to really think about cybersecurity upfront, and not as an issue that pops up after you’ve bought something. So the mic is there in the middle of the room; I can also pass a mic around. I think the first question is: having heard all this, how does it come across? What do you think of this plan? Does it make sense? Would you do it differently yourself? Just share anything with us that we can learn from. So the microphone is there. Just introduce yourself first, please, and then share your thoughts with us.

Audience:
Hi. Okay, it’s on. Perfect. Viet Vu from Toronto Metropolitan University in Toronto, Canada. This is actually opportune timing, because literally about two weeks ago I published a paper on the Canadian government’s digital adoption. And a couple of things on how what we’ve discussed so far lands. The first thing is that the incredible thing about the Canadian government is that not only is there no standard in procurement, there isn’t even a single set of standards that the same government department uses. And the problem becomes even more complex when you go down to the provincial level, the second level of governance in Canada; it goes federal and then provincial, think of US states or prefectures in Japan. Each of them actually has its own legislation that governs privacy and digital communications. And so that creates a challenge. Now, in terms of what we think might work, or the key part that has prevented the conversation on digital from surfacing to the top: it is very much the fact that there really isn’t anyone who is actually empowered to raise those issues. The Canadian government recently created the Canadian Digital Service, an out-of-government group that is there to deliver government services in-house, kind of; it’s the first time that the government has done so. But at the assistant deputy minister level, and the hierarchy goes minister, deputy minister, and assistant deputy minister, you need people who are empowered to raise the issue of digital, and there just isn’t anyone. So, in the Canadian context, the one person you probably want to talk to is a senator, Colin Deacon, who has been spearheading a lot of legislative frameworks. I can put you in contact with him after the session if that’s of interest. This will be a short follow-up question: how has your report landed? Have you got any response from the government side, or is it still the university? It’s a great question. It landed really well, actually. I did a couple of radio rounds, literally, before boarding the plane to Kyoto: at 9 a.m. I was giving a panel talk on the topic, and at 3 p.m. I was on my flight here. Once I’m back, in November, we’re delivering a workshop to high-level decision-makers, so we’re talking deputy ministers, assistant deputy ministers, and directors general within those three hierarchies in Ottawa, particularly on thinking about policy solutions. So we know that the general topic lands well with them right now. Well, congratulations, and your offer to introduce us would be very much welcome.

Wout de Natris:
Thank you. Any other ideas in the room on how this lands? Bastiaan, and the lady there? Yeah. So, you can stand in line.

Audience:
Hello, my name is Gunela Astbrink, chairing the Internet Society Accessibility Standing Group. Accessibility in this respect means accessibility for persons with disabilities. We know that a number of countries have looked at public procurement for accessible digital goods and services. There are standards in the U.S. and in the EU, and they have been adopted in countries like Australia, India and Kenya. And then there is, of course, the common issue of implementation. So I was very interested in the Dutch initiative. We found in Australia, for example, that when it came to web accessibility and procurement by governments, there was a monitoring system, and then there wasn’t enough funding to continue it. And that’s what we’re hoping: that other countries and systems will be able to continue that type of implementation of a policy. And I should also mention here that in the EU there is an Accessibility Act, which is going to be a directive for all EU countries to ensure that any supplier to a European country has accessibility built into their digital products. And that’s supposed to be mandatory. So we will see how those sorts of systems work, and it will be very interesting to see how that intersects with what we’re talking about here today. Thank you.

Annemiek (Dutch government):
That’s also interesting, because in the Netherlands we are also developing dashboards, including internet.nl, for accessibility purposes. So follow the Dutch government in that way, and you might also be using the dashboard in the future. And internet.nl is integrated into the dashboard. So, nice to hear.

Audience:
No, no problem at all. I have a question for you, actually. I’m from the RIPE NCC, working together with ICANN in a new working group to see to it that the adoption of techniques like DNSSEC and RPKI is moved forward. And I’m really happy, you know; I’m Dutch, so maybe I’m prejudiced, but I’m really happy and even proud of what the Dutch public authorities and government are doing here. So all kudos to the Dutch Forum for Standardisation. I just wondered, Annemiek, maybe you can share more there, in terms of what the Dutch are doing, right, and the policies that underlie this, and the reasons why you set up that comply-or-explain list, or even mandated the usage of certain tools. Is there information available in English with regard to the underlying policies? And do you have any experience talking with other governments, other agencies similar to yours? Is this idea being taken up? How do other people respond to this?

Annemiek (Dutch government):
You mean other governments, in Europe for instance? Because Denmark is also using internet.nl for their own policy. And fortunately, Australia and Brazil are also using internet.nl in their policy. They may have a different attitude to it, but yes. And you asked what kind of policies are behind the comply or explain list.

Audience:
I think, you know, I’m aware that, as far as I know, at least for internet.nl, the underlying software is open source, right? Yes, correct. People can of course adapt their own front end in their own language, right? To offer a similar service, so that’s all great. But I really mean more in terms of the underlying policies, in terms of the techniques, tools and standards that in this case Dutch public authorities need to comply with. Because you think it’s important for certain reasons, right? You need to be reachable over IPv6, or your website needs to be reachable over IPv6. When you purchase certain online services, cloud or whatever, they need to have RPKI implemented, stuff like that, right? I mean the reasons why you think it’s necessary to actually demand that when procuring services, or to have public authorities comply with these types of standards. You go very fast, also for me I guess, but also for the audience.

Annemiek (Dutch government):
Well, the Dutch government also promotes the use of IPv6. And the policy, yeah, I don’t actually know what you mean by the policy behind it. Because what we do is stimulate the adoption of those standards in order to build practical experience in the field, to show that it works. So we have a carrot, let me say it like that, and people have to go for it in order to use it. Because in the field, society has the advantage of it. But I do not understand the question about the policy behind it.

Audience:
Well, maybe it’s so obvious that you don’t have a policy behind it, right? You just think it’s a good thing to do. Other countries, for instance, are not doing it. So I wondered, is there anything that you can share to help convince them that, hey, in terms of procuring services, it would also be good to set certain requirements, with standards you would have to comply with?

Annemiek (Dutch government):
Yeah, well, when tenders are executed in the Netherlands, they go through a special tender website. And there are CPV codes included, in order to support procurement departments in requesting open standards. In addition, we explain what the standards are, because most of the people in procurement are not technical. So we suggest: talk to your architects or to other colleagues in your company and get to know what is technically involved in the execution, because the procurement department doesn’t know much about ICT and needs that knowledge to follow this course. It might also be an interesting route to adoption. So, a suggestion.

Wout de Natris:
David, Bastiaan, you decided to sponsor this working group as well, besides leading it. What makes it so important for you to actually try and come up with a different narrative? That’s a really good question.

David Huberman:
You touched on it a few moments ago when you talked about technical education, and how we’ve worked really hard as an organization to build capacity on the importance of DNSSEC and then on how to actually do it. A lot of my colleagues go around the world and speak with groups of engineers who operate networks, and will actually do DNSSEC signing with them, for real, on their computers in the live environment, and teach them all the skills they need to continue maintaining it. But that’s not enough, because it reaches a small group of operators, and while it helps them, it’s so much more challenging to get that message out to the much larger world: to all the domain owners, and to all the operators of recursive resolvers who have to do the DNSSEC validation. So we’re really looking at this initiative as a way of, as we’ve said a few times today, changing the narrative: finding a new way of saying it, testing it against decision makers, and asking, how does this strike you? Does this persuade you? And based on their feedback, we can iterate again and refine it even further. And so that’s our interest, and why we want to fund this and why we’re really engaged and motivated here.

Audience:
Yeah, thank you. As a regional internet registry, we’re a non-profit organization, and I think it’s part of our mission to increase the trust and the reliability of the internet. And that’s not only about topics like fragmentation and other things that are potentially detrimental and could have a serious effect, but about the internet as it is, right? It was mentioned that the DNS and the routing part are not directly visible. Maybe people are familiar with the DNS and names, but the routing part, for most people, is not something they are aware of or need to be aware of. But these are fundamental to everything else that depends on them, that runs on top of them. So the security there: it’s almost unfortunate, right, that the incidents that happen, and quite a few happen, are not visible in terms of actual impact that people experience and that would act as a wake-up call. At the end of the day, if we want to keep people’s trust in the entire system, and want it to increase, then something has to be done. I think it’s really, really great what, for instance, the Dutch government is doing: lead by example. But there might even come a point, especially if you see in the European Union the enormous amount of legislation affecting the infrastructure and the ecosystem, if at some point the perception really is that this is a market failure and people are not getting their act together, then it will be regulated. And we have all the tools available, everything available, to do it ourselves, right? Technically, the standards have been there for a long time. On the one hand there is the authorization part: with us, you actually sign your IP addresses and associate them with an autonomous system number, stating who is actually allowed to originate certain prefixes. That is a very easy portal to use. On the other hand there is the validating part, the software that you actually use to check the announcements you receive and see whether they’re valid or not; that’s actually in a very mature state. So everything is there. So why is it not being picked up? In terms of the originating part, people signing resources, I think we’re at something like 40% globally, and then it really differs per region or per country, which is good, but we need to step up here. And people think it’s either really, really technical and complex to implement. And I understand, if you have a huge network with an enormous number of routers, and others that depend on you, and customers that you have, I’m not saying it’s trivial. But for an average network engineer who takes her or his job seriously, it’s not that much of a challenge. People also think it’s really expensive to implement. Certainly, with a project there are costs involved. But this is about the underlying basis of your services, of how people experience your online services, and more and more services are provided online; this is fundamental, right? People need to know that if I aim to contact someone, or try to reach certain content, I’m actually reaching the place I want to be at. It seems a given, but as was mentioned, the protocols are ancient, and this needs to be improved. So I think we have work to do there to actually get the stories out, and to convince people: hey, this is not so hard. And there is so much material available. Like Annemiek just said, we provide courses to people; everything is available free online.
But if you really want a face-to-face training with a trainer, et cetera, normally there are costs involved, but we can organize things. We’ve been really effective, especially in the Middle Eastern region, at getting people in a room, both the regulators and the operators, going through the whole thing and actually having people sign resources. Especially if you have maybe only one incumbent or two operators in a country, if those people start signing their resources, there’s an enormous uptake in adoption. And we do the same at peering forums and other meetings we organize. We have ROA signing parties: we get people in a room, right, and actually demonstrate how relatively trivial it is, especially for network engineers, to sign their resources and to look at the validation part. So we’re actually doing a lot there. And I really hope, in terms of audience and storyline and narrative, that we can also have an impact, combined with all the other things that we’re doing here at the IGF.
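
For the validation part described here, relying-party software typically exposes its computed validity states over a local HTTP API. Below is a hedged sketch, assuming a Routinator instance with its HTTP service enabled on localhost port 8323 and a validity endpoint in the RIPE-validator style; consult your validator's documentation for the exact interface.

```python
import json
import urllib.request

def rpki_validity(asn: str, prefix: str) -> str:
    # Assumed endpoint shape: Routinator's HTTP service offers a
    # /validity resource taking an origin ASN and a prefix.
    url = f"http://localhost:8323/validity?asn={asn}&prefix={prefix}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        data = json.load(resp)
    # The validator reports whether the (origin ASN, prefix) pair is
    # valid, invalid, or not-found under the ROAs it has fetched.
    return data["validated_route"]["validity"]["state"]

print(rpki_validity("AS64496", "192.0.2.0/24"))  # e.g. "valid"
```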

Wout de Natris:
Yeah, thank you. But, Bastiaan, David, the internet works. Everything works. I’m playing devil’s advocate here. Everything works. It’s on the whole day. There’s no failure. So why should I ever do something? Why should I invest in security? It works, right?

Audience:
Well, maybe that’s something else that we need to be more focused on. I mentioned the fact that the real impact of incidents is not sufficiently visible. But for individual networks, and I think the same goes for the Dutch government: what triggered the whole thing there was IP address space of the Dutch Ministry of Foreign Affairs being hijacked. And that was a wake-up call, not a good one, obviously. But with those types of stories we can demonstrate to people: don’t wait for something to break, so that you then have to spend a lot of money, get stressed and repair it. No, no, you can actually implement this in a reasonably simple fashion and be prepared. And again, your customers are going to benefit, and you yourself are going to benefit in the long run. But yeah, maybe we also need a bit more effort on the shaming part, or on the incidents that really had an impact, and have people share those stories and what that then led them to implement. I don’t know if that answers your question or not.

Wout de Natris:
Yes, thank you. And when you mentioned the Middle East, perhaps an explanation is in place: the RIPE NCC, as a regional internet registry, literally covers everything from Iceland and Greenland up to Vladivostok, plus a lot of the Middle East. It provides the IP addresses for that region, like APNIC is doing here in Asia. Sorry?

David Huberman:
Of course. So, just to build a little bit on what Bastiaan was talking about: there’s been a sea change in the United States. This summer I had, I’m going to be honest with you, a fairly surreal experience when I went into a government building of our communications ministry, our Federal Communications Commission. These are the folks who regulate broadcast signals, and they also regulate mobile wireless signals, so they’re quite powerful: all the wireless providers, lots of the wireline providers, the cable companies, which is how a lot of the internet in the United States gets to our homes. And it was kind of surreal, because the United States government is now taking the position that routing is a matter of national security. And all of these people from the FBI and the Department of Justice, all these law enforcement bureaucrats, were getting up and talking about how we absolutely must secure the routing infrastructure of the United States of America against attacks, against misconfigurations, against hijacks. And not only were they talking about it in general terms, about the principles of security; these are government officials who were talking, and they were saying things like RPKI and ROA. They were saying things like uRPF. They were using acronyms that I didn’t think a government bureaucrat knew how to spell. And I was like, where am I? What’s going on here? But I loved it. It was great, because it showed that a large country was taking seriously the need to adopt RPKI, the need to adopt validation of ROAs, at a national level. And they were going to use the power of the government to force the regulated parties to do this. And to answer your question: it’s because, for them, it’s national security. Because it’s not just the mariners and their boats who are connected. It’s the military. It’s all of the government, federal, state, local. It’s our schools.

Wout de Natris:
Everything is online now. Thank you. I think that is a good example of changing the narrative. And that’s perhaps also what Bastiaan was saying: if organizations do not do it voluntarily, the government will step in at some point. But will they literally regulate and write laws, or will it still be a good discussion saying, guys, you really have to do this? If even five years from now nothing has happened, then probably it will become legislation. And is that something that industry wants to avoid? Sometimes, looking at it from a negative point of view, I get the idea that they want to be regulated, because then there finally is a level playing field. And that is what’s missing, also here with the deployment of these standards: if I deploy, it costs me money, meaning that I have to charge a higher price, while the competition does not do it and, in other words, may have more customers because of it. So that is one of the reasons that deployment on a voluntary basis may be hard. But if you don’t want to regulate, then procurement, I can’t say it enough, may actually change that. The question then is: why are most governments not procuring secure by design? That is a question I simply don’t have the answer to. But the fact is that it doesn’t happen as a standard. What I would like to ask from the people present here is to give your views anyway; then I’m going to hand the mic over, starting with you, and you can pass it on. What do you take out of this session yourself, and how could you perhaps change the discussion in your country? Where could you ask these sorts of questions? Because then perhaps you get a little inspiration to change this discussion in your own country. So I’m going to ask Annemiek to pass the mic around. And yes, of course, if you would like to speak first, okay, of course. Please introduce yourself first.

Audience:
Actually, Samira; so I have been living in Japan for about two, three years now. Just a remark on what was said: I mean, why would a country be hijacked? Targeted countries like the US, maybe they have many enemies, I guess. I’m not saying that they have; I’m just assuming. For example, Israel, maybe, I’m not sure. But when it comes to the internet, we are going to the very core of procurement, the routing and switching and all this standardization. And at the beginning we also said, yeah, the internet is the same for everyone, and it’s working well, as he said. But how are we going to regulate when it comes to countries where they don’t feel the need? For example, a normal country, let’s say Norway, very peaceful: they don’t feel the need, I guess. So I think what must be done is to regulate, but how will we regulate when it comes to countries’ individual legislation? How can we make them use these standards if they don’t feel the need? I mean, one way we can get in is by educating them, right? So I feel that we need to frame it in a way that grabs people. For example, if you say that posting something on social media can cost you your life, can be a threat to your life, of course they will listen to that, right? So it’s similar. So I would think that it is a task for the legislature, because the US knows that they have hijackers, people who want to creep into their network.

David Huberman:
So I think... Yeah, I mean, it’s a good point. The thing is, it’s not just about geopolitics. A lot of this is about people who want to exploit vulnerabilities to make money, and some want to create chaos, again not for geopolitical reasons. So one of the things that we have to do is ensure that everybody in the world, in all countries, big and small, peaceful and not peaceful, understands that vulnerabilities create opportunities. And there will always be people who want to fill that opportunity for their own purposes.

Wout de Natris:
Yes, thank you. I think that is a very good example of what we try to do as IS3C. We cannot do that ourselves, but we can hand over the knowledge and tools to actually do it; it is then up to countries to deploy them. So my question to you was to share what you got out of this session, and what you could do in your country to plant a little seed of knowledge on this topic. So let’s start up front, and then go around. And please introduce yourself.

Audience:
Hello. Okay. I’m Ryan. I’m from Indonesia. It’s a bit hard, I think, for my country, because while the session was running, I sampled three sites through internet.nl: two e-commerce sites in my country and one legislative website. None of them are signed with DNSSEC, but the e-commerce sites are signed with RPKI. I think the e-commerce companies realize the threat of a security breach to them. Actually, today I just met our vice minister for communication and information; he gave a speech in the main hall, but I don’t have any access to that high level. With the Netherlands we have a long history, right? Can you please suggest something to our country, to try to implement this? Because many of the decision-makers are not even aware of this. In procurement, in tenders, everything is always about money, not about security, not about people; it’s always about the money. So if the vice minister is still here, can you introduce us? I hope I can, but I think he doesn’t even know me. I understand, don’t worry. But I hope my country will get better, because next year we will have our presidential election, and there are a lot of issues with IT security, all the fake information from generative AI, and many people, I cannot say it, but many of them will do anything to get into position. So I hope everyone can help my country. Well, thank you; that’s quite a clear goal. Thank you for sharing. Hi, I’m Masayuki Nakamura from Japan, and I’m very much a beginner in this area, but about DNSSEC: the rate of DNSSEC adoption is low in Japan too, and one issue is that the open standards are written in English, so we have to translate them to Japanese and make them easier to read for the decision-makers. That’s our problem. I am a government officer, so my colleagues are trying hard at this; I’m not in that position, but if I have a chance to help raise the adoption ratio, in that matter, I will try to make things better. Thank you. Thank you very much, it was also eye-opening. I guess it’s not just about targeted countries, but about making solutions for the vulnerability; I think that’s a wonderful point. So I guess the key is education, and I would like to put myself forward and research more into these areas. My area is, of course, ERP systems, so I will think much more about learning these things; there was a lot of input here. So I guess education will reach the countries that are not aware of it, because as we all know, the internet is working, but underneath there can be catastrophic events. Of course, when we just keep things open, when we could close the door, why wouldn’t we close it and lock it, right? So that’s what I got from this session. Thank you very much. Hello. Thank you, everyone. I’m Santosh Siddhal from Nepal, Executive Director of Digital Rights Nepal. I joined the session in between, but I liked it very much, and it has opened up a lot of questions. In Nepal, in early August, we adopted the National Cybersecurity Policy. Before that we had the Electronic Transaction Act, but now the government wants to bring in a new cybersecurity law, and for that they have adopted the National Cybersecurity Policy. Some aspects of the National Cybersecurity Policy are problematic. The first is that in the consultation phase, in the draft phase, there was no mention of a National Internet Gateway.
Now the government is talking about installing a National Internet Gateway without defining it, saying it is for a resilient and secure internet. The second problematic area is that they are also talking about a government intranet. And the third, which relates to procurement: it was not in the earlier draft rule, but they are now proposing that the rules for procuring cybersecurity or ICT consultancy and equipment will be defined by the government, which could place them outside the bounds of the public procurement policy. So procurement becomes a serious issue here: the government wants to put the procurement process behind the curtain, hiding what kinds of consultancies and ICT equipment are being procured. And the problem is that in least developed countries like Nepal, stakeholders are not very aware of the repercussions such laws and policies can have on other areas. Civil society organisations and the media are not aware of it either, and that is evident from the reports and the public discourse we are currently having in Nepal. So I think it is very important that we bring these issues into public discourse, and civil society organisations have a very important role in helping stakeholders hold an informed discussion about the proposed policy, its possible repercussions, its impact on the utility of the internet, and on the other human rights the internet enables. This has been a very important session, and I have gained a lot of insights that I can use for public advocacy. Thank you.

Thank you so much for this very useful and, for me, very informative session. I'm from a telecom carrier in Japan, NTT Communications. We are an ISP, so we need to implement many security measures, such as DNSSEC and RPKI, but they require cost and scale, so it is very difficult to improve quickly. Still, I will inform my company, relay this discussion, and try to push these new technologies forward. Thank you.

Hi, I'm Ichiro Mizukoshi from Japan. I jumped in in the middle of the session, so I'm not sure about the whole discussion, but in my humble opinion, subscription services may help secure IoT services. When a product is simply sold, repairing vulnerabilities after the sale is too heavy a burden for the manufacturer over a long period. With a subscription service, they keep receiving money for the service, so they have the means to keep repairing it. That's my opinion.

I'm Daisuke Kotani from Kyoto University. I'm a researcher, but I'm also partly involved in the procurement of the university's IT infrastructure. From that perspective, the problems are budget costs and the skill set of engineers. Currently, unfortunately, even if we ask an outside company to support RPKI or DNSSEC, such a company may not have enough engineers to do so, so we cannot implement them on our campus. Education of engineers, I think, is an important issue. Thank you.

I literally thought I could skip this. Well, as I explained, in two weeks' time I'm going to be in Ottawa talking to a group of high-level decision-makers in the federal government. Certainly, I think this conversation is going to help us design it.
We haven't actually designed the workshop yet; that task falls to me and one of my co-authors once I'm back from Kyoto. So this does give us a really good set of ideas. Thank you.

Annemiek (Dutch government):
Finally! Thank you very much for all your stories and input. I would like to invite you to test your website or your e-mail address at internet.nl. If you score 100%, we give you a place in the Hall of Fame, and a t-shirt, which is a real collector's item in Holland; you can also sleep in it. That's the way we do it in the Netherlands: tempting organisations to use those open standards and get into the Hall of Fame. You can try it after the session if you like. Thank you.
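
The internet.nl test mentioned here checks, among other things, whether a domain is DNSSEC-signed. For readers curious what one ingredient of such a check involves, below is a minimal, illustrative sketch using the dnspython library; it is not internet.nl's actual methodology, the domain names are placeholders, and full validation would also verify RRSIG records and the DS chain of trust up to the root.

    # Rough heuristic: a DNSSEC-signed zone must publish DNSKEY records,
    # so their absence strongly suggests the zone is unsigned. This does
    # NOT perform full validation (no RRSIG/DS chain-of-trust checking).
    # Requires dnspython >= 2.0: pip install dnspython
    import dns.resolver

    def zone_appears_signed(domain: str) -> bool:
        try:
            answer = dns.resolver.resolve(domain, "DNSKEY")
            return answer.rrset is not None and len(answer.rrset) > 0
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN,
                dns.resolver.NoNameservers):
            return False

    for name in ("example.com", "example.org"):  # placeholder domains
        print(name, "publishes DNSKEY:", zone_appears_signed(name))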

Wout de Natris:
Thank you, and thank you all for sharing your ideas with us. We are getting close to the end of this session; we have a few minutes left. What I would like to say is that, from what you have said as well, there is literally a world to win: if you are able to convince the people you work with that this is an important topic, and use the right arguments, then things will probably change. And that is what we are going to strive for.

What can you expect from IS3C? First, I would like to invite you to come to our session on Tuesday at 10.30 in Working Group J, the room next to this one. There we will present three reports. The first is a global comparison of national policy on Internet of Things security, and, to give a little hint, there is not much to be found in the way of regulation; it is almost all voluntary. The second is on procurement: also a global policy comparison, showing where government procurement stands at this point in time. The third report I am not sure we will be able to present, because the lady who was supposed to give the presentation cancelled at the last moment and the report is not yet available, I understand. We made it together with the United Nations, with UNDESA, and the launch this August, somewhere in China, did not go through, so it is not online yet; we will try to say something about it.

We are also going to present a tool: the comply-or-explain list is getting what you could call a global translation. A team of experts has come together and made choices on three topics: the categories of standards, the scope of the list, and the individual standards that sit underneath it. What will be announced is an open consultation, so anybody in the world who has an opinion on the scope, the categories, or the standards can share comments in a Google Doc, allowing us to make a better-informed decision on what goes into the list. Next, we will have the presentation that David more or less gave, announcing the working group on the narrative around DNSSEC and RPKI, and finally we will announce a working group on emerging technologies: the idea is that in the coming year we will do a global policy comparison on artificial intelligence, and later perhaps on quantum computing and the metaverse. We also try to show our relevance to the Sustainable Development Goals: how the work we are developing can make the world better as a whole, and not just on the topic of internet standards. And in the other room, in the session that is also just about to end, there is a short synopsis of the cybersecurity hub and the plans we have there.

In other words, we will be presenting a lot of the work we have done in the past year, and you are invited to come. If you are interested in learning more about what we do, you can join through the IGF: go to the Dynamic Coalitions, look for the Internet Standards, Security and Safety Coalition, and sign up for the e-mail list. You will not get a million e-mails every day, but when a working group starts or holds a meeting, you will get an invitation to join.
And if you are interested in working with us, that is also an option we offer to people who are willing to volunteer a little of their time. We have our own website, is3coalition.org (IS3, with the number three), where we publish all our reports, and on the 10th all the new reports will be available for download there. So I think that is all I wanted to say. I'm looking at the panel: Mark, have you made any observations from listening that you would like to share with us? I haven't heard anybody online, so I think this is it.

Mark Carvell:
Thank you, Wout. I have been following online via the Zoom link, though no comments or reactions have come through the chat. I think the key message from this session is very important: procurement and supply chain management really do have major contributions to make in driving the adoption of critical security-related standards, routing protocols, and so on. I also personally appreciate the comments here that there is a lot of valuable information that the IS3C coalition is collating, and that we need to build on it with more contributions and experiences from other countries. I am thinking in particular of my own country, the UK. When I was working for the UK government, the procurement of network services and equipment never really came up as an internal policy issue. But every now and again there were massive security failures online, which got a lot of media coverage. Yet you never hear about the consequences of those data breaches, whether they affect police forces, as was recently the case in Northern Ireland, or financial services. You hear about the headline-grabbing incidents, but never what followed from them in terms of ensuring these things do not happen again. I think this coalition provides a channel for distributing that kind of important information. Those are my reflections. Back to you.

Wout de Natris:
Thank you, Mark. David, any last words? No? Then, with that, I will let you go, and we will be well in time for the people who come next to prepare. Thank you very much for your contributions and your insights, because that is something we will take home. Thank you for all the technical work, and thank you, Mark, for the online moderation. With that, I wish you a very good IGF, and I hope to see you again soon. Bye-bye. Thank you.

Speech statistics (speed; length; time)

Annemiek (Dutch government): 124 words per minute; 922 words; 447 seconds
Audience: 170 words per minute; 4144 words; 1466 seconds
David Huberman: 165 words per minute; 2145 words; 778 seconds
Mark Carvell: 135 words per minute; 281 words; 125 seconds
Wout de Natris: 164 words per minute; 3845 words; 1409 seconds

Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Merve Hickok

The comprehensive analysis conveys a positive sentiment towards regulation and innovation in Artificial Intelligence (AI), emphasising that the two must coexist to ensure better, safer, and more accessible technological advancement. Notably, genuine innovation is perceived favourably, as it bolsters human rights, promotes public engagement, and encourages transparency. This viewpoint is grounded in the belief that AI regulatory policies should balance the nurturing of innovation with essential protective measures.

The analysis also underscores that rule-of-law standards must apply universally to both the public and private sectors. This conviction draws on the United Nations Guiding Principles on Business and Human Rights, which reinforce businesses' obligation to respect human rights and abide by the rule of law, and it represents a shift towards heightened accountability in the development and deployment of AI technologies across all sectors of society.

However, there is significant apprehension surrounding the perceived industrial domination in the AI policymaking process. Such dominance is viewed negatively as it could erode democratic values, potentially fostering bias replication, labour displacement, concentration of wealth, and disparity in power. Critics argue this scenario could compromise the public’s interests.

Moreover, the analysis highlights strong advocacy for the integration of democratic values and public participation into the formulation of national AI policies. This stance is complemented by a call for the establishment of robust mechanisms for independent oversight of AI systems, aiming to safeguard citizens’ rights. The necessity to ensure AI technologies align with and uphold democratic principles and norms is thus underscored.

Nevertheless, the analysis reveals resolute opposition to the use of facial recognition for mass surveillance and deployment of autonomous weaponry. These technologies are seen as undermining human rights and eroding democratic values—an interpretation echoed in UN negotiations.

In conclusion, while AI offers tremendous potential for societal advancement and business growth, it is critical that its development and application adhere to regulatory frameworks that preserve human rights, promote fairness, ensure transparency, and uphold democratic values. A balanced, forward-thinking climate for formulating AI policies with public participation can help mitigate and manage the potential risks, ensuring that AI innovation evolves ethically and responsibly.

Björn Berge

Artificial Intelligence (AI) carries the potential to revolutionise various sectors worldwide, due to its capacities for improved efficiency, advanced decision-making, and enhanced services. It can significantly enhance productivity by automating routine and repetitive tasks usually undertaken by humans. Additionally, AI systems can harness big data to make more precise decisions, eliminating human errors and thereby resulting in superior service delivery.

Nevertheless, the growth of AI necessitates a robust regulatory framework. This framework should enshrine human rights as one of its core principles and should advocate a multi-stakeholder approach. It is vital for AI systems to be developed and used in a manner that ensures human rights protection, respects the rule of law, and upholds democratic values.

Aligning with this, the Council of Europe is currently working on a treaty that safeguards these facets whilst harnessing the benefits of AI. This treaty will lay down principles to govern AI systems, with a primary focus on human rights, the rule of law, and democratic values. Notably, the crafting process of this treaty doesn’t exclusively involve governments, but also includes contributions from a wide array of sectors. Civil society participants, academic experts, and industry representatives all play a crucial role in developing an inclusive and protective framework for AI.

The Council of Europe’s treaty extends far beyond Europe and has a global scope. Countries from various continents are actively engaged in the negotiation process. Alongside European Union members, countries from North, Central, and South America, as well as Asia, including Canada, the United States, Mexico, Israel, Japan, Argentina, Costa Rica, Peru, and Uruguay, are involved in moulding this international regulatory framework. This global outreach underscores the importance and universal applicability of AI regulation, emphasising international cooperation for the responsible implementation and supervision of AI systems.

Francesca Rossi

Francesca Rossi underscores that AI is not simply a realm of pure science or technology; rather, it should be considered a socio-technical field of study with significant societal impacts. This viewpoint emphasises that the evolution and application of AI are profoundly intertwined with societal dynamics and consequences.

Furthermore, Francesca advocates robustly for targeted regulation in the AI field. She firmly asserts that any necessary regulation should focus on the varied uses and applications of the technology, which carry different levels of risk, rather than merely on the technology itself. This argument stems from the understanding that the same technology can be utilised in countless ways, each with its own implied benefits and potential risks, therefore calling for tailored oversight mechanisms.

Francesca's support for regulatory bodies such as the Council of Europe, the European Commission, and the UN is evident from her active contribution to their AI-related work. She sees these bodies as playing a pivotal role in steering AI in a positive direction, ensuring that its development benefits a diverse range of stakeholders.

Drawing from her experience at IBM, she reflects a corporate belief in the crucial importance of human rights within the context of AI technology use. Despite absent regulations in specific areas, IBM has proactively taken steps to respect and safeguard human rights. This underlines the duty that companies need to uphold, ensuring their AI applications comply with human rights guidelines.

Building on IBM’s commitment to responsible AI technology implementation, Francesca discusses the company’s centralised governance for their AI ethics framework. Applied company-wide, this approach implies that it’s pivotal for companies to maintain a holistic approach and framework for AI ethics across all their divisions and operations.

Francesca also emphasises the crucial role of research in both augmenting the capabilities of AI technology and in addressing its current limitations. This supports the notion that on-going research and innovation need to remain at the forefront of AI technology development to fully exploit its potential and manage inherent limitations.

Lastly, Francesca highlights the value of establishing partnerships to confidently navigate the crowded AI field. She fervently advocates for inclusive, multi-stakeholder, and worldwide collaborations. The need for such partnerships arises from the shared requirement for protocols and guidelines, to ensure the harmonious handling of AI matters across borders and industries.

In summary, Francesca accentuates the importance of viewing AI within a social context. She brings attention to matters related to regulation, the function of international institutions, and corporate responsibility. Additionally, she illuminates the significance of research and partnerships in overcoming challenges and amplifying the capabilities of AI technologies.

Daniel Castaño Parra

AI's deep integration into the fabric of society underscores the profound importance of its regulation: the technology exhibits promising transformative potential while simultaneously posing challenges. As it continues to evolve and permeate societies globally, the pressing need for robust, comprehensive regulation to guide its usage and mitigate potential risks becomes increasingly evident.

Focusing attention on Latin America, the task of AI regulation emerges as both promising and challenging. Infrastructure discrepancies across the region, variances in technology usage and access, and a complex web of data privacy norms present considerable obstacles. The diversity of the regional AI landscape necessitates a nuanced approach to regulation, considering the unique characteristics and needs of different countries and populations.

In response to these challenges, specific solutions have been proposed. A primary recommendation is the establishment of a dedicated entity responsible for harmonising AI regulations across the region. This specialist body could provide clarity and consistency in the interpretation and application of AI laws. Additionally, advocating for the creation of technology-sharing platforms could help bridge the gap in technology access across varying countries and communities. A third suggestion involves pooling regional resources for constructing a robust digital infrastructure, bolstering AI capacity and capabilities in the region.

The significance of stakeholder involvement in shaping the AI regulatory dialogue is recognised. A diverse array of voices, incorporating those from varying sectors, backgrounds and perspectives, should actively participate in moulding the AI dialogue. This inclusive, participatory approach could help to ensure that the ensuing regulations are equitable, balanced, and responsive to a range of needs and concerns.

Further, the argument highlights the potential of AI in addressing region-specific challenges in Latin America. The vital role AI can play in delivering healthcare to remote areas, such as Amazonian villages, is stressed, while also being instrumental in predicting and mitigating the impact of natural disasters. Thus, it strengthens its potential contribution towards achieving the Sustainable Development Goals concerning health, sustainable cities and communities, and climate action.

In conclusion, while AI regulation presents significant hurdles, particularly in regions like Latin America, it also unveils vast opportunities. Harnessing the promises of AI and grappling with its associated challenges will demand targeted strategies, proactive regulation, wide-ranging stakeholder involvement, and an unwavering commitment to innovation and societal enhancement.

Arisa Ema

Arisa Ema, who holds distinguished positions in the AI Strategic Council and Japanese government, is an active participant in Japan’s initiatives on AI governance. She ardently advocates for responsible AI and interoperability of AI frameworks. Her commitment aligns with SDG 9: Industry, Innovation, and Infrastructure and SDG 17: Partnerships for the Goals, showcasing her belief in the potential for technological advancement to drive industry innovation and foster worldwide partnerships for development.

Moreover, Ema underlines the crucial need for empowering users within the domain of AI, striving for power equilibrium. The current power imbalance between AI users and developers is seen as a substantial challenge. Addressing this links directly with achieving SDG 5: Gender Equality and SDG 10: Reduced Inequalities. According to Ema, a balanced power dynamic can only be achieved when the responsibilities of developers, deployers, and users are equally recognised in AI governance.

Ema also appreciates the Internet Governance Forum (IGF) as an indispensable platform for facilitating dialogue among different AI stakeholders. She fiercely supports multi-stakeholder discussions, citing them as vital to AI governance. Her endorsement robustly corresponds with SDG 17: Partnerships for the Goals, as these discussions aim to bolster cooperation for sustainable development.

Ema introduces a fresh perspective on Artificial Intelligence, arguing that AI should be perceived not as a mere algorithm or model but as a system that embraces human beings and necessitates human-machine interaction. This nuanced viewpoint can significantly influence the pursuit of SDG 9: Industry, Innovation, and Infrastructure, as it recommends treating human-machine interaction as integral to AI.

Furthermore, Ema promotes interdisciplinary discussion of human-AI interaction as a critical requirement for fully understanding the societal impact of AI. She regards dialogue that bridges cultural and disciplinary gaps as essential, given the multifaceted complexities of AI. Such discussions can help identify biases entrenched in human-machine systems and provide credible strategies for eliminating them.

In conclusion, Arisa Ema’s holistic approach to AI governance encapsulates several pivotal areas; user empowerment, balanced power dynamics, multi-stakeholder discussions, and interdisciplinary dialogues on human-AI interaction. Her comprehensive outlook illuminates macro issues of AI while underscoring the integral role these elements play in sculpting AI governance and functionalities.

Liming Zhu

Australia has placed a significant emphasis on operationalising responsible Artificial Intelligence (AI), spearheaded by initiatives from Data61, the country’s leading digital research network. Notably, Data61 has been developing an AI ethics framework since 2019, serving as a groundwork for national AI governance. Furthermore, Data61 has recently established think tanks to assist the Australian industry in responsibly adopting AI, a move that has sparked a positive sentiment within the field.

In tandem, debates around data governance have underscored the necessity of balancing data utility, privacy, and fairness. While all three are integral to robust data governance, they involve trade-offs, and advances are required to enable decision-makers to make pragmatic choices. Preserving privacy, for instance, can itself undermine fairness, introducing complex decisions that call for comprehensive strategies.
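
To make the utility-privacy trade-off concrete, one standard tool is differential privacy, where calibrated noise protects individuals at a measurable cost in accuracy. The following minimal sketch, with entirely hypothetical numbers, applies the Laplace mechanism to a counting query: a smaller epsilon means stronger privacy but a noisier, less useful answer, which is exactly the kind of trade-off decision-makers must weigh.

    # Laplace mechanism for a counting query (sensitivity 1).
    # Smaller epsilon = stronger privacy = noisier answer.
    import math
    import random

    random.seed(42)

    def laplace_noise(scale: float) -> float:
        # Inverse-CDF sampling of a Laplace(0, scale) variate.
        u = random.random() - 0.5
        sign = 1.0 if u >= 0 else -1.0
        return -scale * sign * math.log(1.0 - 2.0 * abs(u))

    def noisy_count(true_count: int, epsilon: float) -> float:
        # A counting query changes by at most 1 when one person is
        # added or removed, so the noise scale is 1 / epsilon.
        return true_count + laplace_noise(1.0 / epsilon)

    true_count = 1000  # hypothetical: people in a dataset with some attribute
    for eps in (0.1, 1.0, 10.0):
        print(f"epsilon={eps}: noisy count = {noisy_count(true_count, eps):.1f}")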

As part of the quest for responsible AI, Environmental, Social, and Governance (ESG) principles are becoming increasingly prevalent. Efforts are underway to incorporate responsible AI directives with ESG considerations, thereby ensuring that investors can influence the development of more ethical and socially responsible AI systems. This perspective signals a broader understanding of AI’s implications that extend beyond its technical dimensions.

Accountability within supply chain networks is also being highlighted as pivotal in enhancing AI governance. Specifically, advances on AI bills of materials aim to standardise the types of AI used within systems whilst sharing accountability amongst various stakeholders in the supply chain. This marks a recognition of the collective responsibility of stakeholders in AI governance.

In light of the rise of AI in the realm of game theory, exemplified by AlphaGo’s victory in the game of Go, there’s a reassurance that rivalry between AI and humans is not necessarily worrying. Contrary to eliminating human involvement, these advances have instigated renewed interest in such games, leading to a historical high in the number of grandmasters in both Go and Chess.

Highlighting the shared responsibility in addressing potential bias and data unevenness within AI development is vital. The assertion is that decision-making concerning these issues should not be solely the responsibility of developers or AI providers, suggesting that a collective approach may be more beneficial.

In summary, it’s crucial to incorporate democratic decision-making processes into AI operations. This could involve making visible the trade-offs in AI, which would allow for a more informed and inclusive decision-making process. Overall, these discussions shed light on the multifaceted and challenging aspects of responsible AI development and deployment, providing clear evidence of the need for a comprehensive and multifaceted approach to ensure ethical AI governance.

Audience

This discourse unfolds several vital topics centring on democratic decision-making, human rights, and the survival of the human species from a futuristic perspective, focusing primarily on the speed and agility of decision-making and its potential implications for the democratic process. A proposal was advanced for superseding the conventional democratic process when deemed necessary for the species' survival, which might even require redefining aspects of human rights to better manage unforeseen future challenges.

The discussion also touched on circumstances where democracy could pose certain hurdles, suggesting a more democratic model could be beneficial for overcoming such issues. This proposed approach underlines the idea of a holistic, global consultation for such decision-making scenarios, emphasising the inherent value of democratic ethos in contending with complex problems.

A notable argument for enhanced collaboration was presented, stressing a concerted, joint problem-solving strategy rather than attempting to solve all problems at once. This suggestion promotes setting clear priorities and addressing each issue effectively, thereby creating a more synergetic approach to global issues.

Within the technology paradigm, concerns were raised about who governs the algorithmic decision-making of major US-based tech companies. The argument underscores the non-transparent nature of these algorithms. Concerns related to potential bias in the algorithms were voiced, considering the deep division on various issues within the United States. There were calls for transparent, unbiased algorithm development to reflect neutrality in policy-making and respect user privacy.

In essence, the conversation revolved around balancing quick, efficient decision-making with the democratic process, the re-evaluation of human rights in the face of future challenges, the importance of joint problem-solving in addressing global issues and maintaining transparency and fairness in technological innovations. The discourse sheds light on the intricate interplay of politics, technology, and human rights in shaping the global landscape and fosters a nuanced understanding of these issues in connection with sustainable development goals.

Ivana Bartoletti

Artificial Intelligence (AI) is a potent force brimming with potential for immense innovation and progress. However, it also presents a host of risks, a key one being the perpetuation of existing biases and inequalities. These problems are particularly evident in areas such as credit provision and job advertisements targeted at women, illustrating the tangible impact of how AI is used today and may be used tomorrow. There is a worrying possibility that predictive technologies could further magnify these biases, leading to self-fulfilling prophecies.

Importantly, addressing bias in AI is not merely a technical issue; it is also a social one, and hence necessitates robust social responses. Bias can surface at any juncture of the AI lifecycle, as AI blends code, parameters, data, and people, none of which are innately neutral. This complex combination can inadvertently produce algorithmic discrimination that does not map onto traditional categories of discrimination, underlining the need for a multidimensional approach to the challenge.

To effectively manage these issues, a comprehensive strategy that includes legislative updates, mandatory discrimination risk assessments and an increased emphasis on accountability and transparency is required. By imposing legal obligations on the users of AI systems, we can enforce accountability and regulatory standards that could prevent unintentional bias in AI technologies. Implementing measures for positive action, along with these obligations, could provide a robust framework to combat algorithmic discrimination.

In addition, the introduction of certification mechanisms and the use of statistical data can deliver insightful assessments of discriminatory effects, contributing significantly to the fight against bias in AI. Such efforts can not only minimise the socially harmful impacts of AI but also reinforce the tremendous potential for progress and innovation that AI offers.
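
As an illustration of what such statistical disclosures could enable, below is a minimal sketch of one common screening measure, the disparate impact ratio: the rate of favourable outcomes for a protected group divided by the rate for a reference group. All figures are hypothetical, and no single metric by itself settles a discrimination claim; it is a flag for closer scrutiny, not proof.

    # Disparate impact ratio from published aggregate statistics.
    # The "four-fifths rule" used in US employment practice treats a
    # ratio below 0.8 as a signal worth investigating, not as proof.
    def selection_rate(favourable: int, total: int) -> float:
        return favourable / total

    # Hypothetical published figures: approvals out of applications.
    rate_group_a = selection_rate(300, 1000)  # protected group: 0.30
    rate_group_b = selection_rate(450, 1000)  # reference group: 0.45

    ratio = rate_group_a / rate_group_b
    print(f"Disparate impact ratio: {ratio:.2f}")  # ~0.67
    if ratio < 0.8:
        print("Below 0.8: flag the system for closer review.")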

In summary, it’s clear that the expansion of AI brings significant risks of bias and inequality. However, by adopting a broad approach that encapsulates both technical and social responses while emphasising accountability and transparency, we can navigate the intricacies of AI technologies and harness their potential for progress and innovation.

Moderator

Artificial Intelligence (AI) is progressively becoming an influential tool with the ability to transform crucial facets of society, including healthcare diagnostics, financial markets, and supply chain management. Its thorough integration into our societal fabric has been commended for bringing effective solutions to regional issues, such as providing healthcare resources to remote Amazonian villages in Latin America and predicting and mitigating the impact of natural disasters.

Echoing this sentiment, Thomas, who leads the negotiations for a binding AI treaty at the Council of Europe, asserted that AI systems can serve as invaluable tools if utilised to benefit all individuals without causing harm. This view is reflected by Baltic countries who are also working on their own convention on AI. The treaty is designed to ensure that AI respects and upholds human rights, democracy, and the rule of law, forming a shared value across all nations.

Despite the substantial benefits of AI, the technology is not without its challenges. A significant concern is bias in AI systems, with instances of algorithmic discrimination replicating existing societal inequalities: women have been unfairly targeted with job adverts offering lower pay, and families have been mistakenly identified as potential fraudsters in the benefits system. In response, an urgent call has been made to update non-discrimination laws to account for algorithmic discrimination. These concerns are encapsulated in a detailed study by the Council of Europe, which stresses the urgent need to tackle such bias in AI systems.

In response to these challenges, countries worldwide are developing ethical frameworks to facilitate the responsible use of AI. Australia, for instance, debuted its AI ethics framework in 2019. This comprehensive framework amalgamates traditional quality attributes with AI-specific features and emphasises operationalising responsible AI.

The necessity for regulation and accountability in AI, especially in areas like supply chain management, was also discussed. The concept of “AI bills of materials” was proposed as a means to trace AI within systems. Another approach to promoting responsible AI is viewing it through an Environmental, Social, and Governance (ESG) lens, emphasising the importance of considering factors such as AI’s environmental footprint and societal impact. Companies like IBM are advocating for a company-wide system overseeing AI ethics and a centralised board capable of making potentially unpopular decisions.

Despite notable differences between countries in the traditions, cultures, and laws governing AI, international cooperation remains a priority. Such collaborative endeavours aim to bridge the technological gap through technology-sharing platforms and a multi-stakeholder approach to treaty development. This spirit of cooperation is embodied by the Council of Europe's coordination with diverse organisations such as UNESCO, the OECD, and the OSCE.

In conclusion, while technological advances in AI have led to increased efficiency and progress, the need for robust regulation, international treaties, and data governance is more significant than ever. It is crucial to ensure that the use and benefits of AI align with its potential impact on human rights, preservation of democracy, and promotion of positive innovation.

Session transcript

Moderator:
Good morning, or good evening to those connected online from different time zones. I'm very happy to be here with you in Kyoto; it's my first time in Japan, so I'm very excited to be in this fantastic place. My name is Thomas. I work for the Swiss government, and I currently chair the negotiations on the first binding AI treaty at the Council of Europe. It is a treaty not just for European countries: it is open to all countries that respect and value the same values of human rights, democracy, and the rule of law. So we have countries like Canada, the United States, and Japan participating in the negotiations, as well as a number of countries from Latin America and other continents. But I'm not here to talk about the convention right now; you will hear a lot about it, perhaps in this session and in others. I'm here to help you listen to experts from all over the world, who will talk about AI and how to ensure, while fostering innovation, that human rights and democratic values are respected when AI is developed and used. As we all know, AI systems are wonderful tools if they are used for the benefit of all people and not for hurting people or creating harm, and this discussion is about how to make sure the one happens and the other does not. But before we go to the panelists, I have the honour to present a very special guest from Strasbourg, Mr. Björn Berge, Deputy Secretary General of the Council of Europe, who will give a few remarks. Thank you, Björn, please go ahead.

Björn Berge:
Thank you very much, Ambassador Schneider, and a very good afternoon to all of you. It's really great to be here in Japan on this very important occasion. It has been 17 years now, and this is the 18th time the IGF is meeting; it has proven both a historic and a highly relevant decision to start this process. Technology is developing in a way, and at a pace, that the world has never seen before, affecting all of us, every country and every community around the globe. It therefore makes perfect sense to keep up the work and do all we can to ensure enhanced digital cooperation and the development of a global information society. Basically, this is about working together to identify and mitigate common risks, so that the benefits the new technology can bring to our economies and societies are indeed helpful and respect fundamental rights.

Today, it is good to see the Internet Governance Forum making substantial progress towards a Global Digital Compact, with human rights established as one of the principles in which digital technology should be rooted, along with the regulation of artificial intelligence, all to the benefit of people throughout the world. The regulation of AI is also something on which the Council of Europe is making good progress, in line with our mandate to protect and promote common legal standards in human rights, democracy, and the rule of law. And the work we do is not relevant for Europe alone; it often has a global outreach.

So, dear friends, I believe all of us are fully aware of the positive changes AI can bring: increased efficiency and productivity as mundane and repetitive tasks move from humans to machines; better decisions made on the basis of big data, eliminating the possibility of human error; and improved services based on deep analysis of vast quantities of information, leading to scientific and medical breakthroughs that seemed impossible until very recent times. But with all of this come significant rights-based concerns. Just a few days ago, the Council of Europe published a study on tackling bias in AI systems to promote equality, and I am very pleased that the co-author of this excellent study, Ms. Ivana Bartoletti, is here with us online today; she will speak after me.

There are also other questions, related to the availability and use of personal data, to responsibility for the technical failure of AI applications, and to their criminal misuse in attacking election systems, for example; and questions on access to information and on how the growth of hate speech, fake news, and disinformation is managed. The bottom line is that we must find a way to harness the benefits of AI without sacrificing our values. So how can we do that?

Our starting point should be the range of internet governance tools we have already agreed upon, some of which have a direct bearing on AI. If I focus on Europe for a moment, this includes the European Convention on Human Rights, which has been ratified by 46 European countries, and the European Court of Human Rights with its important case law. Let me mention one concrete example from such a court judgment: a case that clarified that online news portals can be held liable for user-generated comments if they fail to remove clearly unlawful content promptly. This is a good example of the evolution of law in line with the times.
Drawing from this European Convention and the Court's case law, which in turn build on the Universal Declaration of Human Rights, we also develop specific legal instruments designed to help member states, but also countries outside Europe, apply our standards and principles with regard to internet governance. Our Budapest Convention is the first international treaty to combat cybercrime, including offences related to computer systems, data, and content, and its new Second Additional Protocol is designed to improve cross-border access to electronic evidence, extending the arm of justice further into cyberspace. Our Convention 108 on data protection is similarly a treaty that countries inside and outside Europe find highly relevant; it has been updated with an amending protocol, widely referred to as Convention 108+, which helps ensure that national privacy laws converge.

Added to this, over recent years we have adopted, within the Council of Europe, a range of recommendations to all our 46 member states, covering everything from combating hate speech, especially online, to tackling disinformation. Right now we are also working on a set of new guidelines on countering the spread of online mis- and disinformation through fact-checking and platform design solutions. In addition, we are looking at the impact of the digital transformation of the media, and this year we will finalise work on a set of new guidelines for the use of AI in journalism.

So all in all, we are involved in a number of areas, trying to help and contribute, but we need to go further still on AI specifically. Here we are currently developing a far-reaching, first-of-its-kind international treaty, a framework convention, with the work led by Ambassador Schneider sitting next to me, that will define a set of fundamental principles to help safeguard human rights, the rule of law, and democratic values in AI. Experts from all over Europe, as well as civil society and representatives from the private sector, are leading and contributing to this work. Such a treaty will set out common principles and rules to ensure that the design, development, and use of AI systems respect common legal standards and are rights-compliant throughout their lifecycle.

Like the Internet Governance Forum, this process has not been limited to the involvement of governments alone, and this is crucially important, because we need to draw upon the unique expertise provided by civil society participants, academics, and industry representatives. In other words, we must always seek a multi-stakeholder approach, so as to ensure that what is proposed is relevant, balanced, and effective. Such a new treaty, a framework convention, will be followed by a standalone, non-binding methodology for assessing the risk and impact of AI systems, to help national authorities adopt the most effective approach to both regulation and implementation.

But it is also important to say here today that this work is not limited to the Council of Europe or our member states. The European Union is engaged in the negotiations, as are non-European countries: Canada, the United States, Mexico, and Israel. And this week Argentina, Costa Rica, Peru, and Uruguay joined, and of course Japan, a country that has been a Council of Europe observer for more than 25 years.
Japan actively participates in a range of our activities, and there is no doubt that its outstanding expertise and track record of technological development make it a much-valued participant in our work. Its key role globally in AI and internet governance is only confirmed by its hosting of this important conference here in Kyoto this week.

So, dear friends, there is still time for other like-minded countries to join this process of negotiating a new international treaty on AI, either taking part in the negotiations or as observers, a role that a number of non-member states have requested. And I must say the negotiations are progressing well: a consolidated working draft of the Framework Convention was published this summer, and it will now serve as the basis for further negotiations. Our aim is to conclude these negotiations by next year; I hope you agree.

Let me also underline that this Framework Convention will be open for signature by countries around the world, so it has the potential for truly global reach, creating a legal framework that brings European and non-European states together, opening the door, so to say, to a new era of rights-based AI around the world. So let me make an appeal to the governments represented here today to consider whether this is a process they might join, and a treaty they most likely would go on to sign, just as I encourage those who have not yet done so to join the Budapest Convention and Conventions 108 and 108+, as I just mentioned. I believe it makes sense to work closely together on these issues and make progress on the biggest scale possible.

Lastly, and more broadly, on the regulation of artificial intelligence we can learn from each other, benefit from various experiences, and tap into a large pool of knowledge and expertise globally. For us at the Council of Europe, seeking multilateral solutions to multilateral problems is really part of our DNA, and that spirit of cooperation makes it natural for us to work with others with an interest in these issues. I also want to highlight that we are now working closely with the Institute of Electrical and Electronics Engineers to elaborate a report on the impact of the metaverse and immersive realities, and we are looking carefully into whether the current tools are adequate for ensuring human rights, democracy, and rule of law standards in this field. We are also coordinating closely with UNESCO, the OECD, the OSCE (the Organization for Security and Co-operation in Europe), and, of course, the European Union.

I believe that here at the Internet Governance Forum we share that spirit and that ambition of international cooperation. This is really the only approach for us, and its success is a must, both for the development of artificial intelligence and for helping to shape the safe, open, and outward-looking societies that hold and protect fundamental rights and are true to our values. With this, I thank you very much for your attention.

Moderator:
Thank you very much, Björn. And you said it: the key to us being together here is to learn from each other, which means listening and trying to understand each other's situations. We are very happy to have quite a range of experts with different expertise here on the panel, and of course also in the room, so I am looking forward to an interesting exchange. I will move immediately, as you have already named her, to Ivana Bartoletti, who is connected online; we have this advantage after COVID that we can connect with people physically here but also remotely. Ivana works for a major private company specialised, among other things, in IT consulting. She is also a researcher and teaches cybersecurity, privacy, and bias at the Pamplin Business School at Virginia Tech, and she is a co-founder of the Global Coalition for Digital Rights and the Women Leading in AI Network. So Ivana, please tell us about your work. What would you say are the main challenges when it comes to bias, and in particular gender bias, in AI, and what do we need to do to develop and foster appropriate solutions to these problems? So Ivana, I hope you will appear on our screen soon. Yes, we can already hear you.

Ivana Bartoletti:
Wonderful, thank you so much, and thank you for having me here. It was great to hear from you in the introduction and from Mr. Björn Berge, the Deputy Secretary General of the Council of Europe, whom I want to thank for the trust placed in me and Rafael in putting together this report, which is now available online.

I wanted to start by saying that artificial intelligence is bringing, and will bring, enormous innovation and progress if we do it in the right way, and I firmly believe, as many do, that we are at a watershed moment in the relationship between humanity and technology; the Deputy Secretary General articulated it well. Over the last few years, we have seen some of the amazing benefits that artificial intelligence and automated decision-making can bring to humanity. On the other hand, we have also seen some quite disturbing effects these technologies can have, and bias, the perpetuation, codification, and automation of existing inequality, has been one of them.

And I want to make one point as we start. Over the last few weeks and months, we have seen a lot of people coming out with quite alarmist and dramatic appeals on artificial intelligence, and I want to say loud and clear in this room that this alarmist approach has been quite distracting, because it helps create a mystique around artificial intelligence. We know very well by now what the risks are; women and human rights activists in particular have been advocating for measures to tackle these harms for the last decade. So I want everybody to remain focused on the risks and harms of artificial intelligence and do the nitty-gritty work, as is happening now in the European AI Act, in the development of legislation and guidance across the world, in the work on the Council of Europe convention, and in the United Nations' Global Digital Compact: to really focus on the harms we know of, the harms of bias, disinformation, and the coding of existing inequalities into automated decisions, making choices about individuals now, but also making predictions that shape decisions tomorrow.

In this study, Rafael and I have focused on bias in automated decision-making and on what this bias looks like. A lot of work and expertise around the globe has gone into this, and we have seen that this bias has a very real effect. We have seen less credit given to women, because women traditionally make less money than men. We have seen countries and governments grappling with terrible mistakes, for example families wrongfully identified as potential fraudsters in the benefits system, pushing parents and children into poverty. We have seen job adverts for lower-paid positions served to women, because traditionally women earn less than men.
And we have seen the harms of automated decision-making in, for example, images of women that replicate stereotypes our society has carried for decades. So the harms of automated decision-making and bias are all too real for people; they affect everyday life. Some people would argue: yes, but humans are biased too. And I say: yes, obviously they are. But the difference is that when that bias gets coded into software, it becomes more difficult to identify, more difficult to challenge, and even more ingrained in our society. This is particularly complex, in my view, when it comes to predictive technologies: if we code this bias and these stereotypes into predictive systems, we can end up with self-fulfilling prophecies, replicating the patterns of yesterday in decisions that shape the world of tomorrow. And that is not something we want.

So what can we do? First of all, we must recognise that bias can be addressed from a technical standpoint, but bias runs much deeper than that; it is much more than a technical issue. It is rooted in society, because the data, the parameters, and all the humans that go into creating code are much more than technology. Ultimately, I like to say that AI is a bundle of code, parameters, people, and data, and none of that is neutral. Therefore, we must understand these tools as socio-technical rather than purely technical. So it is important to bear in mind that the origin and cause of bias, which can emerge at any point in the lifecycle of AI, is a social and political issue that requires social answers, not purely technical ones. Let us never lose that from our conversation.

The second thing that is important to realise, as we found in the study with Rafael, is that there is often no overlap between the discrimination people experience, which in non-discrimination law across the world is traditionally based on protected characteristics, and the new, often algorithmic, sources of discrimination. Algorithmic discrimination is created by big data sets: the clustering of individuals done in a computational, algorithmic way. It does not map onto the more traditional categories of discrimination. Therefore, we must look into existing non-discrimination law and ask whether the law we have in place in our countries is fit for purpose to deal with this new form of algorithmic discrimination, because individuals may be discriminated against not because of traditional protected characteristics, but because they have been put in a particular cluster, a particular group, in a computational and algorithmic way.

So updating existing legislation is very important. We encourage member states to expand the use of positive action measures to tackle algorithmic discrimination and to use the concept of positive obligations, found for example in the European Convention on Human Rights case law, to create an obligation for providers and users to reasonably prevent algorithmic discrimination. This is really important and is part of our report.
We are suggesting mandatory discrimination risk and equality impact assessments throughout the lifecycle of algorithmic systems, according to their specific uses; we really want to ensure that equality by design is built into these systems. We are suggesting that member states consider how certification mechanisms could be used to ensure that bias has been mitigated, looking, for example, at how member states could introduce some form of licensing attesting that due diligence has gone into a system to eliminate bias as far as possible for well-defined uses. We are encouraging member states to investigate the relationship between accountability, transparency, and trade secrets. And finally, my last point: we have been encouraging member states to consider establishing legal obligations for users of AI systems to publish statistical data that allows parties and researchers to examine the discriminatory effect a given system can have, in the context of discrimination claims.

So I want to close on this. It is a vast report that I would encourage anyone to read, and its bottom line is that discrimination through AI systems brings together social and technical dimensions, and tackling it must be at the heart of how we deploy these systems. We must also investigate how we can use these systems to tackle discrimination in the first place, for example by identifying sources of discrimination that are not visible to the human eye. So there is a positive side to all this, which we must harness. But to do so, we encourage everyone to come together, bring the greatest expertise we have in the world and in this room, and try to understand how we can not just further our knowledge, but also enshrine in legislation the importance of tackling bias in algorithmic systems.
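
Her point about self-fulfilling prophecies can be made concrete with a toy simulation, sketched below under entirely invented assumptions: two groups repay loans at the identical true rate, but the lender's initial score is biased against one group, and because that group never clears the approval threshold, the lender never collects the data that would correct the score.

    # Toy feedback loop: a biased estimate that is never corrected
    # because the disadvantaged group generates no new outcome data.
    import random

    random.seed(0)

    TRUE_REPAY = {"A": 0.8, "B": 0.8}   # identical true behaviour
    estimate = {"A": 0.8, "B": 0.6}     # lender starts biased against B
    THRESHOLD = 0.7                      # loans granted only above this

    for _ in range(10):  # ten lending rounds
        for group in ("A", "B"):
            if estimate[group] >= THRESHOLD:
                # Approved groups produce observable repayment outcomes,
                # which gradually pull the estimate towards the truth.
                outcomes = [random.random() < TRUE_REPAY[group] for _ in range(100)]
                observed = sum(outcomes) / len(outcomes)
                estimate[group] = 0.5 * estimate[group] + 0.5 * observed
            # A rejected group produces no data, so its wrong estimate
            # stays frozen: the prediction fulfils itself.

    print(estimate)  # A ends near 0.8; B stays stuck at 0.6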

Moderator:
Thank you very much, Ivana. As you say, new technologies sometimes create new problems, but they can also be part of new solutions, and it is good to highlight both. With this, let me move on immediately, as we are running slightly behind schedule, to Ms. Merve Hickok, who is also connected online. We do, as you see, also have speakers and experts physically present. Merve is a globally renowned expert on AI policy, ethics, and governance. Her research, training, and consulting work focuses on the impact of AI systems on individuals, society, and public and private organizations, with a particular focus on fundamental rights, democratic values, and social justice. She is the president and research director at the Center for AI and Digital Policy, and in that capacity she is also one of the most active civil society voices in the negotiations on the convention. So Merve, what are some of the main challenges of finding proper regulatory solutions to the challenges posed by AI to human rights and democracy, and what solutions to these challenges do you see? Thank you.

Merve Hickok:
First of all, thank you so much for the invitation, Chair Schneider. Good to see you virtually. I appreciate the invitation and the expansion of this conversation into this global forum. I am also in great company here today and very much looking forward to the conversation. I actually want to answer the question by quoting from a 2020 recommendation of the Committee of Ministers of the Council of Europe, where the ministers recommend that the achievement of socially beneficial innovation and economic development goals must be rooted in the shared values of democratic societies, subject to full democratic participation and oversight, and that the rule of law standards that govern public and private relations, such as legality, transparency, predictability, accountability, and oversight, must also be maintained in the context of algorithmic systems. This sentence alone provides us with a great summary of the challenges, as well as an opportunity and a direction towards solutions. First, in terms of challenges, we currently see a tendency to treat innovation and protection as an either-or situation, as a zero-sum game. I cannot tell you the number of times I have been asked: but would regulating AI stifle innovation? I am sure those in the room and on the panel have probably lost count of this question. However, they should coexist; they must coexist. Regulation creates clarity. Safeguards make innovations better, safer, more accessible. Genuine innovation promotes human rights; it promotes engagement; it promotes transparency. The second challenge we are seeing in this field is that the rule of law standards which govern public actors' use of AI systems must apply to private actors as well. It feels like every day we see another privately owned AI product undermining rights or access to resources. Yes, of course there are differences in the nature of the duties of private and public actors. However, businesses also have an obligation to respect human rights and the rule of law. This is reflected in the United Nations Guiding Principles on Business and Human Rights, and now in the Hiroshima process for AI. We cannot overlook how the private sector's use of AI impacts individuals and communities just because we want our domestic companies to be more competitive. Market competition alone will not solve this problem for human rights and democratic values; unregulated competition might encourage a race to the bottom. The third and final challenge is the CEO and industry dominance in the regulatory conversations we are seeing around the globe today. As the ministers note, innovation must be subject to full democratic participation and oversight. We cannot create regulatory solutions behind closed doors, with industry actors deciding whether or how they should be regulated. Of course, the industry must be part of this conversation. However, democracy requires public engagement. Whether in the US, the UK, or beyond, we are seeing the dominance of industry in the policymaking process undermining democratic values, and it is likely to exacerbate existing concerns about the replication of bias, the displacement of labor, the concentration of wealth, and power imbalances. And as I mentioned, the ministers' recommendation actually includes the solutions within it: we need to base our solutions in democratic values, in other words, in civic engagement in policymaking, in elections, in governance, transparency, and accountability.
I would like to finish very quickly with some recommendations, because core democratic values and human rights are central to the mission of my organization. We saw these challenges several years ago and set ourselves up for a major project to objectively assess AI policies and practices across countries. Our annual flagship report is called the AI and Democratic Values Index. We published its third edition this year, assessing 75 countries against 12 objective metrics that allow us to gauge whether and how these countries recognize the importance of human rights and democracy, and whether they hold themselves accountable for their commitments. In other words, do they walk the talk? You would be surprised how many commitments in a national strategy do not actually translate into practice. So I am finishing my response by offering recommendations from our annual report over the past three years that I hope will be applicable to this conversation. First, establish national policies for AI that implement democratic values. Second, ensure public participation in AI policymaking and create robust mechanisms for independent oversight of AI systems. Third, guarantee fairness, accountability, and transparency in all AI systems, public and private. Fourth, commit to these principles in the development, procurement, and implementation of AI systems for public services, where a lot of the time the middle one, procurement, falls between the cracks. The next recommendation is to implement the UNESCO Recommendation on the Ethics of AI. And the final one, in terms of implementation, is to establish a comprehensive, legally binding convention for AI; I do appreciate being part of the Council of Europe's work and am looking forward to this convention. Then we have two recommendations on specific technologies, because they undermine both human rights and democratic values and civic engagement: one is facial recognition for mass surveillance; the second is the deployment of lethal autonomous weapons, both items that have repeatedly been discussed in UN negotiations and conversations. With that, I would like to say thank you again, and I am looking forward to the rest of the conversation.

Moderator:
Thank you, Merve. That was very interesting, in particular the fight against the notion that you can have either innovation or protection of rights; both need to go together. With this, let me turn to Francesca Rossi, who is also present online. By the way, have you noticed we have quite a few women on this panel? For those who complain that you cannot find any women specialists: actually, sometimes you do. Francesca Rossi is a computer scientist currently working at the IBM T.J. Watson Research Lab in New York, as an IBM Fellow and IBM's AI Ethics Global Leader. She is actively engaged in the AI-related work of bodies like the IEEE, the European Commission's High-Level Expert Group, and the Global Partnership on AI. She will give us a unique perspective, both as a computer scientist and researcher and as someone who knows the industry's perspective on the challenges and opportunities created by AI, and more especially by generative AI. You have the floor, Francesca. Thank you.

Francesca Rossi:
Thank you very much for this invitation and for the opportunity to participate in this panel. Many of the things said by the previous speakers resonate with me. Everything Ivana said about the socio-technical aspects of AI, for example: I have been saying for several years now that AI is not only a science or a technology, but really a socio-technical field of study, and that is a very important point to make. I really support the efforts that the Council of Europe and the European Commission are making in terms of regulating AI; both my company and I feel that regulation is important to have, and it does not stifle innovation, as the previous speaker also said. But regulation should focus, in my view, on the uses of the technology rather than on the technology itself. The same technology can be used in many different ways, in many different application scenarios, some of them very low risk or no risk at all, and others very high risk. So we should make sure that we focus obligations, compliance, and scrutiny where the risk is. I would like to share with you what happened during the last years in a company like IBM, which is a global company that deploys its technology in many different sectors of our society. What we did inside the company, even though there was, and in some regions of the world still is, no AI regulation to comply with, reflects our view that regulation is needed but cannot be the only solution, also because technology moves much faster than the legislative process. So companies have to play their part in making sure that the technology they build and deploy to their clients respects human rights, freedom, and human dignity, and addresses bias and many other issues. The lessons that we have learned in these years are few, but I think very important. First of all, a company should not simply have an AI ethics team. This is something that may be natural to set up at first, but it is not effective in my view, because having a single team means that the team usually has to struggle to connect with all the business units of the company. What a company must have instead is a company-wide approach and framework for AI ethics, and centralized governance for that framework; in our case, in the form of a board with representation from all the business units. Second, this board should not be an advisory board. It should be an entity that can make decisions for the company, even when the decisions are not well received by some of the teams, because, for example, it says: no, you cannot sign that contract with a client; you have to do some more testing; you have to pass that threshold for bias; you have to put certain conditions in the contractual agreements; and so on. The third thing we learned is that we started, like everybody, with very high-level principles around AI ethics, but we realized very soon that we needed to go much deeper into concrete actions; otherwise the principles had no impact on what the developers and consultants were doing. The next lesson is really the socio-technical part: for a technical company it is very natural to think that an issue with the technology can be solved with some more technology.
And of course, technical tools are very important, but they are the easy part. The most important and complementary part to the technical tools is education, risk assessment processes, and developer guidelines: really changing the culture and frame of mind of everybody in the company around the technology. The next point is the importance of research. AI research can augment the capabilities of AI, but it can also help address some of the issues related to AI's current limitations, and that is very important, so we really focus on supporting research efforts. We also have to remember that the technology evolves. Over the years, our framework has evolved because of the new and expanded challenges that came with the evolution of the technology: we went from technology that was rule-based, to machine learning, and now to generative AI, which amplifies old issues, such as fairness, explainability, and robustness, but also creates new ones, as was mentioned: misinformation, fake news, copyright infringement, and so on. And finally, the value of partnerships: partnerships that are multi-stakeholder and global. As the Deputy Secretary General mentioned, this is really an important and necessary approach; it has to be inclusive, multi-stakeholder, and global. I have been working with the OECD, the World Economic Forum, the Partnership on AI, and the Global Partnership on AI; the space is very crowded now. Because of this crowded space, we have to make an effort to find complementarity and ways to work together. Each initiative tries to solve the whole thing, but I think each initiative has its own angle, which is very important and complementary to the others. I will stop here by saying that I really welcome what the Council of Europe is doing, under the leadership also of our moderator, and I also welcome what the UN is doing and trying to do with the new advisory body that is being built, because the UN can also play an important role in making sure that AI is driven in the right direction, guided by the UN Sustainable Development Goals. Thank you.
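
As an editorial aside, the decision power described for the ethics board can be pictured as a gate in the deployment pipeline. The toy sketch below is purely illustrative, not IBM's actual process; the threshold value, function names, and fields are assumptions of this edit.

```python
# A toy sketch (illustrative only, not IBM's process) of a governance gate
# that can block a deployment until bias testing passes a defined threshold
# and contractual safeguards are in place.

BIAS_THRESHOLD = 0.8   # assumed minimum acceptable disparate-impact ratio

def review_deployment(name: str, impact_ratio: float,
                      contract_conditions: bool) -> str:
    """Return the board's decision for a proposed client deployment."""
    if impact_ratio < BIAS_THRESHOLD:
        return f"{name}: blocked - more bias testing required"
    if not contract_conditions:
        return f"{name}: blocked - add contractual safeguards first"
    return f"{name}: approved"

print(review_deployment("hiring-screener", 0.72, True))     # blocked
print(review_deployment("invoice-classifier", 0.95, True))  # approved
```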

Moderator:
Thank you, Francesca, for sharing the lessons learned at a company like IBM from an industry perspective, and also, I think, for a very important piece of guidance, an appeal to intergovernmental institutions and other processes: not to all try to solve all problems at once, but for each process and institution to focus on its specific strengths and to solve the problems jointly. Thank you very much. With this, let us move to our last online speaker; after him we have the speakers physically present here. Professor Daniel Castaño comes from the academic world. He is a professor of law at the Universidad Externado de Colombia, but has a strong background in working with governments: he is a former legal advisor to different ministries in Colombia and is actively engaged in AI-related research and work in Colombia and Latin America in general. He is also an independent consultant on AI and new technologies. So Daniel, what specific challenges do AI technologies pose for regulators and developers of these technologies regionally, in your case in Latin America in particular? Thank you very much, Daniel.

Daniel Castaño Parra:
Well, ladies and gentlemen, distinguished delegates and fellow participants, Deputy Secretary General Björn Berge and Chair of the CAI, Thomas Schneider, thank you very much for this invitation. I think the best way to address this question is to discuss the profound importance of AI regulation. I must make clear that today I am speaking in my own voice and that my remarks reflect my personal views on this topic. AI as we know it is no longer just a buzzword or a distant concept. From enhancing healthcare diagnosis to making financial markets more efficient, AI is deeply embedded in our societal fabric. Yet, like any transformative technology, its immense power brings forth both promises and challenges. But why, you may ask, is AI regulation of paramount importance not only to Europe, but to the world and to our region, Latin America? At its core, it is about upholding the values we hold dear in our societies. Transparency: in an age where algorithms shape many of our daily decisions, understanding their mechanics is not just a technical necessity but a democratic imperative. Accountability: our societies thrive on the principle of responsibility; if an AI errs or discriminates, there must be a framework to address the consequences. Ethics and bias: we are duty-bound to ensure that AI does not perpetuate existing biases, but instead aids in creating a fairer society. As we stand on the brink of a new economic era, we must ponder how to distribute AI's benefits equitably and protect against its potential misuse. Now, casting our gaze towards Latin America, a region of vibrant cultures and emerging economies, the AI landscape is both promising and challenging. In sectors ranging from agriculture to smart cities, AI initiatives are gaining traction in our region. However, the regulatory terrain is diverse: while some nations are taking proactive measures, others are still finding their footing. The road to a unified regulatory framework faces certain stumbling blocks in our region, such as fragmentation due to inconsistent inter-country coordination; we lack the coordination and integration that Europe has today. We have deep technological gaps, attributable to varied adoption rates and expertise levels, and we have infrastructure challenges that sometimes hamper consistent and widespread AI application. But let us not just dwell on challenges; let us try to architect some solutions together. First, I would suggest that we need some form of regional coordination. For that purpose, we could establish a dedicated entity to harmonize AI regulation across Latin America, fostering unity in diversity. I would also suggest promoting technology-sharing platforms: collaborative platforms where countries can share AI tools, solutions, and expertise, bridging the technological gap. And I would suggest investment in shared infrastructure for our region: consider pooling resources to build regional digital infrastructure, ensuring that even nations with limited resources have access to foundational AI technology. Unique challenges also present themselves: infrastructure discrepancies, variances in technology access, and a mosaic of data privacy norms necessitate a nuanced approach in our region. But herein also lies the opportunity.
AI has the potential to address regional challenges, whether it is delivering healthcare to remote Amazonian villages or predicting and mitigating the impact of natural disasters. So what path forward do I envision for Latin America and, indeed, the global community? First, regional synergies are key: Latin American countries, by sharing best practices and even setting regional standards, can craft a harmonized AI narrative. Second, I highly encourage stakeholder involvement: a diverse chorus of voices, from technologists to industry, civil society, and ethicists, must actively shape the AI regulatory dialogue in our region. We also need capacity building: we have a huge technological gap in our region, and I think investment in education and research is non-negotiable. Preparing our citizenry for a highly augmented future is a responsibility we share with the world. Finally, I would also encourage strengthening data privacy and protection, and trying to harmonize the fragmented regulatory landscape we currently have in LATAM, because the alternative could lead to a balkanization of technology, which would only hamper innovation and set us back many years. In conclusion, as we stand at this confluence of technology, policy, and ethics, I urge all stakeholders to approach AI with a balance of enthusiasm and caution. Together we can harness the potential of AI to advance the Latin American agenda. Thank you all for your attention, and I very much look forward to our collective deliberations on this pivotal issue. Thank you very much.

Moderator:
Thank you very much, Daniel, for these interesting insights, because for people coming from Europe like me, although my country is not a formal member of the European Union, we have a very well-developed cooperation and also some harmonization of standards, not just through the Council of Europe when it comes to human rights, democracy, and the rule of law, but also economic standards. And it is important to know that this is not necessarily the case on other continents, where there is a much greater diversity of rules and standards, which is, of course, also a challenge. I think your ideas and your solutions to overcome these challenges are very valuable. Let me now turn to Professor Liming Zhu; I hope I pronounce it correctly. He is a research director at Australia's National Science Agency and a full professor at the University of New South Wales. So let us continue with the same topic and move to another region, the Asia Pacific. Liming Zhu has been closely involved in developing best practices for AI governance and has worked on the problem of operationalizing responsible AI, so the floor is yours.

Liming Zhu:
Yeah, thanks very much for this opportunity. It is a great honor to join this panel. I am from CSIRO, which is Australia's National Science Agency, and we have a part called Data61. If you are wondering why Data61: 61 is the country code you dial when you call Australia. It is a business unit doing research on AI, digital, and data. Let me step back a little to the Australian journey on AI governance and responsible AI. Australia was one of the very few countries that, back in late 2018, started developing an AI ethics framework. Data61 led the industry consultation, and in mid-2019 the Australian AI ethics framework, a set of high-level principles, came out. We observe that its principles are similar to many principles elsewhere in the world, but interestingly, it has three elements. First, it is not just high-level ethical principles: it recognizes human values in the plural, as different parts of the community have different values, and it recognizes the importance of trade-offs and robust discussion. Second, it includes many of the traditionally challenging quality attributes, like reliability, safety, security, and privacy, recognizing that AI makes those challenges even harder. And third, it includes things quite unique to AI, such as accountability, transparency, explainability, and contestability; these are important in any digital software, but AI makes them more difficult. Since then, Australia has been focusing on operationalizing responsible AI, especially led by Data61. In the meantime, other agencies are active too: the Human Rights Commissioner, Lorraine Finlay, when she heard about this particular topic at this forum, was very excited and forwarded our recent UN submission on AI governance; and you may bump into Australia's eSafety Commissioner at this forum, who is looking after the e-safety aspects of AI challenges. The government also launched, about two years ago, the Australian National AI Centre. The Centre, hosted by Data61, is not a research center; it is an AI adoption center, and interestingly, its central theme is responsible AI at scale. It has created a number of think tanks, including on AI inclusion and diversity, responsible AI, and AI at scale, to help Australian industry navigate the challenge of adopting AI responsibly in everything we do. In the meantime, as a science agency, and I am a scientist in AI, we have been working on best practices, bridging the gap that Francesca mentioned: how do we turn high-level principles into something on the ground that organizations, developers, and AI experts can use? We have developed a pattern-based approach for this. A pattern is just a reusable solution, a reusable best practice. But interestingly, a pattern captures not only the best practice itself, but also the context of that best practice, and its pros and cons. Not all best practices come for free, and many best practices need to be connected: there are so many guidelines, some for governance, some for AI engineers, and there is a lot of disconnection between them.
But when you connect those patterns of best practice together, you can see how society, technology companies, and governing bodies can implement responsible AI more effectively. Another key focus of our approach from Australia is the system level. Much of the AI discussion has been about the AI model: you have this AI model, you need to give it more data to train it, to align it, to make it better. But remember, every single system we use, including ChatGPT and others, is an overall system that utilizes AI models, and there are a lot of system-level guardrails we need to build in. Those system-level guardrails capture the context of use, and without context, many of the risk and responsible AI practices are not going to be very effective. So a system-level approach, going beyond machine learning models, is another key element of our work. The next key element, as I mentioned earlier, is recognizing the trade-offs we have to make in many of these discussions. People familiar with data governance know there is a trade-off between data utility and privacy: you cannot fully have both. How much data utility to sacrifice for privacy, and vice versa, how much privacy you can afford to sacrifice to maximize data utility, is another question to answer. Science plays two very important roles here. One is to push the boundary of the utility-versus-privacy curve, meaning that for the same amount of privacy, new science can make sure more utility is extracted. In the high-level panel this morning you heard of federated machine learning; this and many other technologies have been advanced to enable a better trade-off, getting more from both worlds. But importantly, it is not only utility and privacy; there is also fairness. You may have heard that when we try to preserve privacy by not collecting certain data, that can also harm fairness in some cases. So now you have three quality attributes to trade off, utility, privacy, and fairness, and there are more. How science can enable decision makers to make informed decisions is the key to our work. The next characteristic of our work from Australia is to look at the supply chain. No one is building AI from the ground up; you always rely on other vendors' AI models, and you may be using pre-trained models. How can you be sure what AI is in your organization? So, similar to the work on software bills of materials, we have been developing AI bills of materials, so that you can be sure what sort of AI is in your system and so that accountability is held and shared among the different players in the supply chain. And the final thing we have just embarked on is looking at responsible AI and AI governance through the lens of ESG. ESG stands for environmental, social, and governance, and it is very much aligned with the UN's SDGs. The environmental element is your AI environmental footprint. On the social element, AI plays a very important role. And the governance of AI is often too much about internal company governance, but the societal impact of AI needs to be governed as well. Looking at responsible AI through the lens of ESG also makes sure investors can pull the levers of better responsible AI.
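
To make the utility-versus-privacy curve concrete, here is a small editorial sketch using the standard Laplace mechanism from differential privacy; the epsilon values and trial counts are assumptions for illustration, not anything presented in the session. Smaller epsilon means stronger privacy and noisier, less useful answers.

```python
# Privacy-utility trade-off with the Laplace mechanism (illustrative).
# A count query has sensitivity 1 (one person changes it by at most 1);
# adding Laplace(sensitivity / epsilon) noise makes it differentially
# private, and the expected absolute error grows as epsilon shrinks.
import numpy as np

rng = np.random.default_rng(0)
sensitivity = 1.0

for epsilon in (0.01, 0.1, 1.0, 10.0):
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=10_000)
    print(f"epsilon={epsilon:>5}: mean absolute error ~ {np.abs(noise).mean():8.1f}")
# Stronger privacy (epsilon=0.01) costs roughly 100 counts of error;
# weaker privacy (epsilon=10) costs roughly 0.1. "Pushing the curve"
# means extracting more utility at the same epsilon, for example via
# better mechanisms or federated approaches.
```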
So I will conclude by saying that Australia's approach is really about connecting those practices and enabling stakeholders to make the right choices and trade-offs, because those trade-offs are not for us technologists to make. Thank you very much.
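
Zhu's mention of AI bills of materials also invites a concrete picture. The sketch below is speculative: the record structure, field names, and example entries are assumptions of this edit, not an established standard, but they show how supply-chain accountability could be made inspectable.

```python
# A speculative sketch of an "AI bill of materials" record, analogous
# to a software BOM: who supplied which model or dataset, under what
# terms, with what known limitations.
from dataclasses import dataclass, field


@dataclass
class AIComponent:
    name: str                  # e.g. a pre-trained model or dataset
    supplier: str              # who provided it
    version: str
    licence: str
    known_limitations: list[str] = field(default_factory=list)


@dataclass
class AIBillOfMaterials:
    system: str
    components: list[AIComponent] = field(default_factory=list)

    def suppliers(self) -> set[str]:
        """Everyone who shares accountability for this system."""
        return {c.supplier for c in self.components}


bom = AIBillOfMaterials(
    system="customer-support-assistant",
    components=[
        AIComponent("base-llm", "VendorX", "2.1", "proprietary",
                    ["trained on web data; provenance partly unknown"]),
        AIComponent("fine-tune-set", "in-house", "2023-09", "internal"),
    ],
)
print(bom.suppliers())   # e.g. {'VendorX', 'in-house'}
```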

Moderator:
Thank you, Liming. It is interesting to hear you talk about trade-offs and how we might turn them from perceived trade-offs into perceived opportunities if we get the right combination of goals. Let me turn to our last but not least expert, who comes from Japan, the country hosting this year's IGF. Professor Ema is an associate professor at the Institute for Future Initiatives of the University of Tokyo, and her primary interest is investigating the benefits and risks of AI in interdisciplinary research groups. She is very active in Japan's initiatives on AI governance, and I would like to give her the floor to talk about how Japanese actors, industry, civil society, and regulators see the issue of regulation and governance of AI. Thank you, professor.

Arisa Ema:
Thank you very much, Chair Thomas. I am really honored to be here to present and to share what is being discussed in Japan, also on behalf of my colleagues. As Thomas kindly introduced, I am an academic at the University of Tokyo, but I am also a board member of the Japan Deep Learning Association, which is more of a startup community, and a member of the Japanese government's AI Strategy Council. Today, however, I would like to wear my academic hat and talk about what is being discussed here in Japan. So far I have heard many shared insights from the panelists, and I would like to describe the status of the discussion ongoing in Japan. Back in 2016, at the G7 meeting in Takamatsu, before the 2023 summit, the Japanese government released guidelines for AI development, and I believe that was actually the turning point at which the global, collaborative discussion on AI guidelines started. This year, 2023, with the G7 summit in Hiroshima, there is a huge ongoing debate on generative AI, and the G7 and other related countries are currently discussing how to create rules to govern generative AI and AI in general. That is called the Hiroshima AI process, and I believe there will be a discussion on it tomorrow morning. Alongside that, the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry are also creating guidelines for AI development and for mitigating its risks. That is what is ongoing in Japan right now. But before discussing responsible use of AI further, I would like to talk a little about the AI convention that the Council of Europe is currently negotiating. One reason I am here is that I am very interested in the AI convention: my colleagues and I organized an event in Japan to discuss its impact on Japan and on the rest of the world, and we are creating policy recommendations for the Japanese government on what should be investigated if we are to sign this convention. I think it is a really important convention for discussions of responsible AI. To raise some points that should be discussed within this panel, I would like to present three points from a document my institution published last month, in September, titled 'Toward Responsible AI Deployment: Policy Recommendations for the Hiroshima AI Process'; if you are interested, just search for my institution's name and the policy recommendations and you will find it. We created these recommendations through multi-stakeholder discussion, including not only academics but also industry, and we also discussed them with government officials. The first thing we think is really important is the interoperability of frameworks. Framework interoperability for AI is one of the keywords discussed at the G7 summit this year, but I guess many of us have the question: what does interoperability mean?
For our understanding, we need transparency about each of the regulations or frameworks that discipline AI development and usage. In this sense, the AI convention is really important because, as explained here, it is a framework convention: each country will take its own measures for AI innovation and risk mitigation. It is really important to respect other countries' cultures and the ways they regulate artificial intelligence, and to see how their frameworks can connect with each other. It is also important that each country can clearly explain what role each stakeholder has, what their responsibilities are, and how to supervise whether those measures are actually working. In that sense, I think interoperability and the AI convention have really important perspectives to share. The next point we raised in our policy recommendations is how we think about responsibility. As other panelists have discussed, it is important to consider the responsibility of developers, deployers, and users. Regulation is especially important on the user side, because there is a power imbalance between users and developers. So what we need is not only rules and regulation, but also a discussion of how to empower citizens and raise their literacy so they can judge how an AI system was developed and what it actually does. I was really happy to hear the professor raise ESG and the point that investors are also very important stakeholders. Beyond rules and legal frameworks, there are many disciplining mechanisms we can use: for example, investment, reputation, literacy, and the technology itself. There are many measures we can take, and with all of these combined, I think we can create more responsible AI systems and a more responsible AI-implemented society as a whole. Last, but not least, we also emphasize the importance of multi-stakeholder discussion. I believe the IGF is very good timing for this discussion, because the Hiroshima AI process is ongoing and many countries are currently working on their own regulatory frameworks. As I said, the Japanese government is also creating, or updating, its guidelines, and this is the place where we can share what has been discussed and the values we hold in common. The important values that the Council of Europe raises, democracy, human rights, and the rule of law, are values we share. With that, we can have transparency and framework interoperability in our discussions, and turn these principles and policies into practice. I will stop here, but I really appreciate being on this panel.

Moderator:
Thank you very much, Arisa. Before I react, I would like to encourage those who want to interact to stand up and go to the microphones. We have a little less time than planned for the interactive discussion, but we do have some. I think what this panel has shown is that although we share the same goals, we have different systems and different traditions and cultures, not just legally but also socially, and of course this is a great challenge. As Arisa also said, if we want to develop this common framework with the Council of Europe, not just for European countries but for all countries around the world, one of the biggest challenges, and we have heard a few hints at it, is how to have governments commit themselves to follow some rules. That is the easy part, let us say: in a convention, governments can commit to stick to some rules. But since we have such differing systems for making the private sector respect human rights and contribute positively rather than negatively to democracy, how do we deal with these differences? How do we responsibilize private sector actors so that they can be innovative while contributing positively, and not negatively, to our values? This is one of the key challenges for us working on this convention, because we cannot just rely on one continent's or one country's framework; we have to find the common ground between different frameworks. Having said this, I am happy to see a few people take the floor. Please introduce yourself briefly with your name and then make your comment or ask your question. Thank you.

Audience:
Thank you, sir. My name is Christophe Zeng. I am the founder of the AAA.AI Association, based in Geneva, Switzerland. My question continues the topic we just covered regarding responsibility and trade-offs. I would like to raise what I call a right of humans, in comparison to human rights. Is it not the right of humans to endure the test of time? In order to endure the test of time, is it a right or a duty of collective sacrifice? Is it a right or a duty to redefine some of our most fundamental beliefs and values? Suppose there were a solution which could deliver on that right, but which could require us to relinquish temporarily, or risk relinquishing permanently, some of our human rights as defined in the declaration, because of speed or because of the need for consultation. For that right to endure the test of time: speed versus rights, consultation versus sovereignty, right versus rights. If there existed a binary choice at some point, ladies and gentlemen, what right ought we to choose? And my question extends also to our colleague at IBM and the broader world community. If we are not to try to solve all problems at the same time, but instead jointly solve specific questions and tackle the overall question together, are we accepting a sacrifice in the speed of decision-making? Or would we accept that, at some critical point in time for the endurance of humans as a species, some decisions which require speed beyond what is reasonably possible in a fully democratic process be made slightly less democratically, by those to whom democracy is dearest, and some decisions which require global consultation be made slightly more democratically, by those for whom democracy remains a challenge to overcome? Thank you.

Moderator:
Thank you very much. You pose an interesting question, if I can try to summarize it: will we need a right to stay human beings and not turn into machines, because we may have to compete with machines? That is at least one aspect I think I heard. Let's take another comment and then see what the reactions are. Yes, please go ahead.

Audience:
Thank you for recognizing me. I am Ken Katayama. Ema-sensei knows that I work in the manufacturing industry, but I also wear an academic hat at Keio University. My topic relates to some of the American speakers and the concern about bias. Within the Japanese academic world that I live in, our concern is that the platforms, Google, Apple, Facebook and Amazon, are basically, quote unquote, American companies. So who decides these algorithms? When we look at the United States, we see extremely wide divides: on issues like abortion or politics, the United States is very divided. So if the platformers are deciding these issues, then in the end, even if it is technically possible to try to avoid bias, who actually decides which answer to go with? Thank you.

Moderator:
Thank you very much. So let me turn to the panel on these two issues or questions. One is how we can stay human, given the growing competition with machines. The other is to think about bias, where it is not just the AI systems but also, of course, the data that shapes the bias; and data is not evenly spread across the world, there being more data from some regions and some people than from others. So whoever wants to react; maybe we give precedence to the ones that are physically present. Thank you. Thanks very much.

Liming Zhu:
I think we have a lot of experts online and Professor Ema is here, so just very briefly, and in reverse order. On who makes those decisions: as I alluded to earlier, as a scientist, I think it is not for the developers or the AI providers to decide. Their role is to expose these trade-offs so that the democratic process can have that debate with data and make informed decisions; there are dials, such as privacy, utility, and fairness, and society decides where to set them. The technology then assures that the implementation runs throughout the whole system, is properly monitored, and can be improved by pushing the scientific boundary. In terms of AI and humans competing, there is certainly a concern. But when AlphaGo beat humans at Go, people worried that humans would stop playing chess and Go; yet at this moment in history, the number of people interested in playing Go and chess is historically high, and the number of grandmasters in Go and chess is historically high. The reason is that humans find meaning in that work and in the game, and they will continue to do so even where AI surpasses them. They learn from AI, they work with AI, and they make a better society, I think, though the speed of change might sometimes be too fast for us to cope with.

Arisa Ema:
Yeah, so I guess the important thing we should be aware of is that although we talk about artificial intelligence as a technology, it is more, as the professor said, a system. It is not only AI algorithms or AI models, but AI systems and AI services, and human beings are included within those systems. So we have to discuss human-machine interaction and human-machine collaboration. That partially responds to both questions: we do not have a clear answer, but we have to discuss human-machine interaction, and human biases are already included in these human-machine systems. With my academic hat on, I would say we need to focus more on cultural and interdisciplinary discussion of human-machine interaction with artificial intelligence. Thank you very much.

Moderator:
We are approaching the end of this session, and I would like to close with one remark that perhaps shows that I am not a lawyer but an economist and a historian. Whenever we talk about being at a crucial moment at which history becomes completely different from before, remember that every generation at every point in history has thought the same. If we look back around 150 or 200 years, when the combustion engine was spreading across continents, that also had a huge effect. It did not replace cognitive labor, as AI is about to do, but it replaced physical labor with machines, and there you find a lot of comparisons. Engines were used in all kinds of machines, either to produce something or to move something or somebody from A to B, and we learned to deal with them. We developed not just one piece of legislation but hundreds of norms, technical, legal, and social, for engines used in different contexts. Through traffic legislation, for instance, we have been able to reduce deaths from car accidents significantly. At the same time, the biggest challenge, reducing the CO2 emissions of engines, is one we are still struggling with after 200 years of using them. There are many more analogies between AI systems and engines, but also differences, because AI systems are not physically placed somewhere; they can be moved and reproduced much more easily than engines. I am very delighted by this discussion, and I hope that we will continue it. There are a number of AI-related sessions this week at this IGF, and I am part of a few of them. I hope to see you again, also in the couloirs, because I am really interested in finding, together with you, a solution for how this Council of Europe convention can become a global instrument that will not solve all the problems, but will help us get closer together to successfully use AI for the good and not for the bad. So thanks a lot for this session, and see you soon. Thank you.

Arisa Ema

Speech speed

151 words per minute

Speech length

1552 words

Speech time

616 secs

Audience

Speech speed

155 words per minute

Speech length

522 words

Speech time

202 secs

Björn Berge

Speech speed

117 words per minute

Speech length

1838 words

Speech time

943 secs

Daniel Castaño Parra

Speech speed

158 words per minute

Speech length

911 words

Speech time

346 secs

Francesca Rossi

Speech speed

156 words per minute

Speech length

1134 words

Speech time

437 secs

Ivana Bartoletti

Speech speed

143 words per minute

Speech length

1667 words

Speech time

701 secs

Liming Zhu

Speech speed

181 words per minute

Speech length

1636 words

Speech time

543 secs

Merve Hickok

Speech speed

143 words per minute

Speech length

1025 words

Speech time

429 secs

Moderator

Speech speed

175 words per minute

Speech length

2563 words

Speech time

877 secs

Call for action: Building a hub for effective cybersecurity | IGF 2023


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Yuki Tadobayashi

A significant disparity exists between the content taught within universities and the fundamental needs of industries, specifically concerning cybersecurity education. It has been observed, somewhat critically, that universities are becoming entrenched in outdated teaching methodologies. These institutions often prompt students to demonstrate individual brilliance and foster innovation; however, this approach can inadvertently impede teamwork, an integral ingredient for problem-solving in cybersecurity. Moreover, they place undue emphasis on inventing new technologies, such as AI, instead of instructing students in their secure usage.

Conversely, it is laudable how industry training programmes in Japan effectively cater to the necessities of the rapidly evolving cyber landscape. With an annual budget of $20m, these programmes concentrate on the future of technology. They focus on providing practical education on secure applications of cutting-edge technologies such as cloud security and AI implementation, among others. An outstanding aspect of these programmes is their emphatic endorsement of teamwork, acknowledging the multidimensional nature of tech issues within corporate structures.

In parallel, attention ought to be drawn to the untapped potential within industries possessing notable digital aspects. Notwithstanding the escalating level of digitisation within factories, individuals in these workforces frequently fail to recognise opportunities to shift towards a career in cybersecurity. To address this issue, a myriad of industry-sponsored training programmes has been initiated in a bid to arm individuals with essential cybersecurity skills. These comprehensive programmes stretch across a year, operating from Monday to Friday, and cover an array of topics including programming, penetration testing, and defence exercises. Intriguingly, they also conduct business-oriented drills, involving briefings to Chief Information Security Officers (CISOs), board members, accountants, and lawyers.

However, the switch to a cybersecurity profession necessitates a substantial commitment of a year, given the intricate nature of both technological adaptations and legal and regulatory developments. This has triggered a backlash from certain industrial contingents, with some suggesting that the demanding duration could deter prospective aspirants.

Lastly, it is worth considering the ‘security hub’, a proposed global collaboration platform designed to bolster internet safety. Ideally, this hub should promote less strenuous participation, encouraging involvement at various scales. However, the success of such an endeavour would rely heavily on establishing a network of trust. This brings associated risk, as malevolent entities could potentially exploit the network; hence careful strategic planning alongside stringent safeguards is indispensable to its development.

To summarise, whilst academia must endeavour to bridge the gap with industrial necessities in cybersecurity education, industry-sponsored training and innovative propositions like the ‘security hub’ offer unique opportunities to enhance cybersecurity skills and facilitate global teamwork. Nevertheless, elements such as the duration of training and trust issues in a collaborative platform require thorough analysis to ensure sustained engagement in these initiatives.


Ismalia Jawara

Ismalia Jawara, serving as the chair of the cybersecurity group named Gambia Information Security, has ardently championed the causes of diversity and women’s inclusion in the cybersecurity workforce. This testifies to her dedication to ‘SDG 5: Gender Equality’ and ‘SDG 8: Decent Work and Economic Growth’. Under her stewardship, educational initiatives such as bootcamps have enabled approximately 50 university students to graduate with vital competencies in the cybersecurity sector. The fact that fifteen of the twenty-five individuals holding (ISC)² cybersecurity certifications are women serves as a powerful testament to the strides taken towards women’s empowerment within this domain.

Moreover, Jawara advocates for increased cooperation across disparate levels, while also encouraging greater involvement from the global south. Her influential role as a senior security analyst for the Gambia government further allows her to foster partnerships and collaborations with the aim of equipping more people with cybersecurity skills. She explicitly acknowledges Cisco’s contributions in providing free cybersecurity scholarships to disadvantaged communities, a commendable step towards fulfilling ‘SDG 17: Partnership for the Goals’.

Despite these affirmative actions, there remain considerable hurdles in achieving gender equality and workforce diversity in the cybersecurity field. Societal constructs such as gendered toys can have subtle, yet profound, impacts on career choices from an early age. Furthermore, despite concerted recruitment efforts, the number of women entering the cybersecurity industry remains remarkably low, necessitating an exploration of the underlying issues.

Businesses are now being urged to re-evaluate their practices and prioritise the induction of women into the cybersecurity terrain. Such a shift would not only adhere to ‘SDG 5: Gender Equality’, but also cultivate diversity in the corporate ecosystem. Current recruitment strategies are often perceived as intimidating, potentially discouraging prospective female recruits, and the exceedingly high benchmarks set compound the entry barriers.

Thus, there is a clear demand for the re-examination and adaptation of recruitment strategies. Emphasising inclusivity in hiring, evidenced when a woman with a legal background performed well in the offered position, could be an enlightened approach. Lowering entry-level requisites may render positions less daunting and far more approachable, thus enticing a higher number of female applications. Such measures could significantly bolster not only workforce diversity and gender parity but also embellish the cybersecurity workforce with a broader selection of viewpoints.

Larry, CEO of Connect Safely

Larry is the dedicated CEO of Connect Safely, a non-governmental organisation (NGO) situated in Silicon Valley, the heart of innovation. This NGO is earnestly committed to issues concerning child safety, privacy, and security for all stakeholders. Larry’s roles are not limited to his leadership at Connect Safely: he is also an influential member of the Youth Standing Group, where he steadfastly advocates similar protections.

There is a deep-rooted concern about the ever-expanding gap in the cybersecurity field. Despite technological advances, the human capacity to develop solutions is not evolving at the same pace as the escalating complexity of cybersecurity threats. This imbalance emphasises a pressing need for further focus on this sector to ensure the security of our digital environments.

A notable element is the explicit relevance of diversity, creativity, critical thinking, and holistic cognition skills in tackling cybersecurity complications. It is recognised that a diverse workforce is better positioned to cater to a varied community. Importantly, the possession of holistic and critical thinking abilities, often neglected over technical capabilities, can lead to more effective troubleshooting strategies in this sector.

The active involvement and early exposure of high-school students to the realities of the cybersecurity domain are enthusiastically endorsed. With numerous opportunities available, young individuals could potentially transition directly into cybersecurity roles post-schooling. This practical experience not only guarantees a thorough understanding of cyber infrastructure functions, but it also equips them with valuable skills that supplement their theoretical knowledge.

The significance of real-world interaction, networking, and practical experience in accelerating learning is undeniable. Insights from informal exchanges and networking at events often offer learning experiences far beyond traditional classrooms, enabling an intensified understanding of the sphere.

Despite being known for its stress-inducing nature, cybersecurity is identified as a critical sector offering significant and important jobs. Adequate measures such as satisfactory compensation and clear career advancement options are deemed imperative to maintain a motivated and committed workforce.

These findings intricately link with several Sustainable Development Goals (SDGs), namely Peace, Justice and Strong Institutions, Good Health and Well-being, Quality Education, and Decent Work and Economic Growth. These observations act as a vital reminder of the interconnectedness of various socio-economic and political aspects in our pursuit of a secure, just, and sustainable future.

Hikohiro Lin

Hikohiro Lin from PwC Japan has specialised in product security, with a special focus on manufacturers and IoT devices. His efforts align with the ‘Industry, Innovation and Infrastructure’ Sustainable Development Goal (SDG), demonstrating a positive stance towards enhancing technological infrastructure. Furthermore, Lin actively advocates for strengthening ties with Japanese manufacturers, thus creating a platform for investment in product security and IoT devices.

In addition to his work with manufacturers, Lin emphasises the importance of quality education in the cybersecurity sector. He deems the collaboration between industry and educational institutions as pivotal in implementing robust cybersecurity training modules. He also underscores the value of experiential learning initiatives such as internships, hackathons and industry conferences in augmenting understanding of the industry landscape and fostering networking among cybersecurity aspirants, thus facilitating decent work and economic growth.

Despite his optimistic outlook, Lin addresses the issue of high job turnover in cybersecurity due to the stressful nature of the work, which often requires 24-hour incident response. Instead of merely increasing salaries as a temporary solution, he advocates for improving the quality of life within the sector. He suggests this could be achieved by providing better professional development opportunities, such as conference attendance and training courses, and promoting a more comfortable work environment.

As a vocal proponent of cybersecurity as a career, Lin describes it as an enjoyable, essential, challenging, and prestigious field, offering job security and constant novelty. He aims to dispel misconceptions and portray it as an attractive profession. He also engages in constructive discussions with high school students, positioning cybersecurity as an exciting career prospect.

In keeping with his belief in continuous development and growth, Lin urges organisations within the cybersecurity hub ecosystem to consider their contributions towards achieving Sustainable Development Goals. This sentiment implies an ongoing self-reflection and conversation on the societal and environmental impacts of the industry.

In summary, Hikohiro Lin’s discourse underlines the importance and potential of cybersecurity in today’s digital landscape, putting emphasis on education, improved working conditions, and its crucial role in attaining Sustainable Development Goals.

Maciej Groń

Maciej Groń expresses strong support for hotline and helpline initiatives, viewed as instrumental in contributing to good health and wellbeing and in fortifying peace, justice, and robust institutions. He recognises these platforms' crucial role in offering support and guidance to individuals, thereby enhancing societal welfare.

Groń underscores the significant role of universities in achieving quality education and crafting strategic partnerships to achieve broader goals. His belief in co-operating with educational institutions is rooted in the innovative ideas and diverse perspectives emanating from the academic space. Groń sees such collaboration as pivotal in driving progress and achieving sustainable development goals.

He places extraordinary emphasis on cybersecurity awareness and education in our digital age. Groń speaks positively of cybersecurity training programmes spearheaded by NASK, credited with educating thousands of individuals. Further milestones encompass establishing a cyber science centre with three partner universities, collectively working towards bolstering cybersecurity education. These initiatives are reported to have facilitated responsible digital interactions, thereby fostering decent work and economic growth.

Groń also highlights the necessity for international cooperation in cybersecurity. He suggests that such cooperation leads to mutual learning and shared resources, improving cybersecurity measures globally. Policymakers and cybersecurity professionals, according to Groń, must strive to foster international collaborations to combat evolving threats in the digital landscape.

Mirroring sentiments of comprehensive engagement, Groń advocates for strategic cooperation between private and public sectors. He imagines this fusion as a combination of educational efforts and business interests, promoting decent work opportunities and economic growth. He highlights a partnership platform created specifically for cybersecurity, underlining its potential for fostering public-private collaborations.

Groń acknowledges NASK's positive contributions to establishing a robust national cybersecurity system in Poland. He commends its influential role in steering related legislation, providing informed opinions and recommendations. Notably, he mentions its effective co-operation with the chamber of lawyers, indicating increasingly synchronised work among varied legal entities.

In summary, Maciej Groń emphasises the importance of comprehensive efforts, incorporating diverse sectors, educational institutions, and legal entities to bolster cybersecurity learning, devise efficacious legislation, and construct robust cybersecurity infrastructures. His comments reflect the collective efforts required to navigate complex cybersecurity terrains, accentuating the significance of effective partnerships in accomplishing these goals.

Anna Rywczyńska

Anna Rywczyńska works at NASK, an international research institute, where her remit spans law, research, and multinational privacy and security resources, areas crucial to SDG 16, which encourages peace, justice and robust institutions. Furthermore, since 2006, Rywczyńska has coordinated the Polish Safer Internet Centre, demonstrating her extensive experience in the realm of technology and the internet, in line with SDG 9, which encompasses Industry, Innovation and Infrastructure.

Rywczyńska is a vocal advocate for educational reform, particularly concerning cybersecurity. She maintains that cybersecurity education should be integrated into both primary and secondary school curricula. Her arguments are underpinned by her participation in a hub focused on creating recommendations for adapting school curricula to digital transformation. As part of her continued efforts, she maintains a productive dialogue with schools, preparing educational materials, organising events, and providing consultation.

However, she also addresses concerns about Poland's current education system. Rywczyńska highlights that 57% of teachers reportedly believe that the existing curriculum does not keep pace with the swift progress of technological advancement, with 30% conceding a lack of awareness regarding problematic internet usage among students. These figures support Rywczyńska's sentiments about the impracticality of the school curriculum, highlighting obstacles including lack of time, unsuitability of the curriculum, and inadequate parental cooperation, all of which obstruct the integration of media education in schools.

Educators, she points out, acknowledge the necessity to include digital competences from the early years of education. This is believed to effectively equip students with the required skills for a digital world, falling in line with SDG 4’s target of quality education for all.

Regarding career prospects in the cybersecurity field, the perception of it as akin to a 'war zone' incites negative sentiment and may deter both men and women from entering the field. However, the diversity of competences required in cybersecurity is positively received, with the clarification that these skills are not exclusively related to gaming.

Rywczyńska points out the prominent gender gap in IT, stating that more needs to be done to inspire young girls to consider a career in cybersecurity. Educating parents about opportunities in the field and encouraging their daughters to see cybersecurity as a viable career option are essential steps towards achieving gender equality, supporting the SDG 5 goals and ensuring robust economic growth under SDG 8. Unconscious biases often determine student interests, with boys usually opting for coding classes while girls prefer dance classes. As such, challenging these stereotypes forms part of the broader strategy in advancing gender equality in the field of cybersecurity.

Julia Piechna

The importance of involving young individuals in cybersecurity is at the forefront of international discussions, given the ever-rising threats in the cyber domain. Specifically, the Internet Governance Forum (IGF) in Poland has been integral in promoting this involvement. Affiliated with the United Nations, the IGF in Poland placed a special focus on youth engagement, highlighting the importance of their active participation in this crucial issue.

Tertiary education students' perspective on cybersecurity is noteworthy, revealing a solid interest in and propensity for the subject. Meetings organised between March and June of 2023 brought these students into discussions on topics such as cyber policy, internet governance, and human rights in the digital realm. Importantly, 15% of student respondents to the questionnaire had attended university cybersecurity courses, whilst a staggering 71% believed that cybersecurity training should be a mandatory part of their tertiary studies.

It is particularly interesting to note from the questionnaire that an emerging academic interest exists among the youth in fields such as artificial intelligence (AI) and cybersecurity. This finding indicates an urgent need for high-quality education and training programmes in these sectors, especially considering that digital careers are expected to dominate the future job market.

However, such enthusiasm is somewhat cooled by a considerable sense of apprehension and concern about potential cyber attacks. Notably, 63% of questionnaire respondents expressed concern about this issue, with 99% pinpointing cybercrime and 97% data leaks as substantial cybersecurity threats.

Julia Piechna’s professional experience presents a compelling argument for incorporating cybersecurity and safety education into the earliest stages of curriculum design. She suggested moving away from traditional teaching methods in favour of more unconventional and engaging approaches. This innovative perspective aligns with the argument that educational methodologies need to adapt quickly to the rapid advancements in technology and the evolving dynamics of the industry.

In summing up, it is clear that equipping young individuals with the knowledge and tools to navigate the cyber domain is of utmost importance. Not only does it prepare them for the future job market, but it also instils a profound understanding of the seriousness of cyber threats. The shift towards more innovative educational methods, coupled with fostering youth participation in forums such as the IGF, offers a promising path for addressing the multifaceted challenges within cybersecurity.

Katarzyna Kaliszewicz

A comprehensive analysis underscores the critical importance of proactive online participant engagement in driving strategic development and policy decisions. This involvement was facilitated through digital platforms such as menti.com, which created an interactive environment for individuals to articulate their opinions; participants joined by scanning a QR code or entering a shared session code.

The necessity of formulating a well-defined strategic plan emerged as a primary area of concern, alongside the requirement for an enhanced online presence. Participants highlighted this through their feedback and prioritisation, demonstrating a strong demand for an online platform delivering comprehensive training and workshops, aligned with adult education needs.

The feedback signalled that promoting collaboration among industry professionals, universities, and the cybersecurity workforce is the top priority. This viewpoint indicates a broad consensus on the need to foster partnerships and establish direct links between these sectors to drive innovation, catalyse skills development, and enhance industry-university engagement.

Another top priority identified across all educational levels is the enhancement of cybersecurity skills, underscoring the pivotal role of cybersecurity in contemporary life. This necessity of integrating cybersecurity training at all stages of the educational pathway, from primary school to professional development, has emerged as a core recommendation, aiming at a universally cyber-literate society.

Participants also emphasised the importance of harnessing best practice from the cybersecurity and tertiary sectors. This attention to fostering a culture of knowledge sharing and standardising effective, secure practices is essential for prioritising infrastructure and network security in the broader educational and industrial context.

Raising interest in the cybersecurity industry, both in academia and the vocational space, is another crucial recommendation from the analysis. The promotion of this field is pivotal in harnessing a diverse talent pool, given the escalating demand for cybersecurity professionals.

Significant interest was also voiced in providing online training on emerging topics via digital platforms, with leadership from industry experts. This exhibits the need to maintain a contemporary repository of knowledge and expertise and demonstrates the appeal of accessible, high-quality professional development.

These priorities are notably endorsed by Katarzyna Kaliszewicz, further reinforcing their relevance and signalling the strategic focus that these preferences propose.

The analysis also reveals a growing consensus around the need to have specialised cybersecurity roles within corporate organisations. This perspective is underpinned by the universal interaction with digital technology and data across industries. The comparison with workplace safety specialists, which are commonplace in organisations, substantiates this viewpoint. Especially in sectors like the medical industry, where the integration of digital tools is rapidly increasing, the need for cybersecurity specialisation is paramount.

Lastly, the analysis foresees an impending need for an increased number of trained cybersecurity professionals. A call to action is thereby made to augment and amplify cybersecurity training in anticipation of this demand, mitigating the risk of a significant talent deficit. Emphasising these specific priorities at the core of strategic decision making is imperative, given the ever-evolving cybersecurity landscape.

Wout de Natris

This discourse is centred around two UN Sustainable Development Goals (SDGs): SDG 4 and SDG 9. SDG 4 seeks Quality Education, advocating lifelong learning opportunities, whilst SDG 9 promotes the development of resilient infrastructure and the fostering of innovation. In this context, the discussion focuses on the practical implementation of cybersecurity and enhanced education.

The dialogue is informed by the Internet Standards, Security and Safety Coalition (IS3C), a dynamic coalition within the Internet Governance Forum (IGF) system. The coalition has stressed the diligent efforts of two of its working groups, each centring on a distinct facet of cybersecurity.

The first of these groups is set to produce a comprehensive report on security by design for the Internet of Things (IoT). As IoT devices multiply, their security has become a critical focus. The second working group targets government procurement and supply chain management, two areas crucial to a secure digital economy. Additionally, a tool designed to assist governments in procuring ICT securely is seen as an adjunct that adds further value in bolstering cybersecurity.
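
To make the idea of such a procurement aid concrete, the sketch below shows one minimal way a compliance checklist could work. It is purely illustrative: the report does not publish the tool's actual design, and the standard names, the flat scoring, and the `Tender` structure are assumptions rather than IS3C's specification.

```python
# Hypothetical sketch of a procurement compliance check; the standards
# listed are illustrative examples, not IS3C's actual requirements list.
from dataclasses import dataclass, field

# Assumed shortlist of widely deployed security standards a tender
# might be checked against.
REQUIRED_STANDARDS = {"DNSSEC", "RPKI", "TLS 1.3", "DMARC", "IoT secure-by-design"}

@dataclass
class Tender:
    vendor: str
    supported: set = field(default_factory=set)  # standards the offer supports

def compliance_report(tender):
    """Return the share of required standards met and the missing ones."""
    missing = REQUIRED_STANDARDS - tender.supported
    score = 1 - len(missing) / len(REQUIRED_STANDARDS)
    return score, missing

offer = Tender("ExampleVendor", {"TLS 1.3", "DMARC"})
score, missing = compliance_report(offer)
print(f"{offer.vendor}: {score:.0%} compliant; missing: {sorted(missing)}")
```

In practice such a tool would presumably rank requirements by urgency rather than weight them equally, but even this flat checklist shows how a procurement officer could turn a list of standards into a concrete view of an offer's gaps.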

The discussion repeatedly identified the persistent gap in cybersecurity education: a gap showing no signs of narrowing, and thus a concerning hindrance to cybersecurity progress. In response, the formation of a cybersecurity hub has been suggested, tapping into the IGF's vast potential for creating connections and educational opportunities.

Throughout the conversation, a positive sentiment prevailed, suggesting an optimistic outlook towards these proposed strategies aimed at enhancing cybersecurity and education. This positivity underlined the general consensus on the urgent need for pragmatically implementing the knowledge produced in these reports.

Joao Falcão

The cybersecurity sphere offers a myriad of opportunities and challenges. Success in this field requires a profound understanding of a variety of systems, especially in the industrial sector, as demonstrated by Joao Falcão's visit to a factory. The task of quickly comprehending how different machines operate within a constrained timetable was a significant challenge, emphasising the need to grasp how systems should and should not operate in order to pinpoint potential security weak points.

Transitioning into the cybersecurity industry, especially for young people, can often be an isolating experience. Joao's solitary journey of self-education, spent with his computer, encapsulates the daunting task facing newcomers. His struggles on an industrial cybersecurity course, owing to his unfamiliarity with the machinery and the industrial 'space', highlight the urgent necessity of practical experience to complement theoretical knowledge. Conversely, individuals switching careers can draw on their previous experience, an advantage often out of reach for those at the start of their careers.

Interestingly, a distinct shift has been observed in the demography of the cybersecurity field. Once mainly drawing the interest of the curious, the sector now predominantly attracts mature professionals. Joao observed that cybersecurity events, previously dominated by interested individuals, are now primarily attended by professionals, spurring the field’s evolution through peer sharing and community building.

Efforts must be significantly ramped up to provide a welcoming avenue for new entrants into the cybersecurity field, particularly the younger generation. Knowledge sharing initiatives and bridging the gap between theoretical understanding and practical, hands-on experience could encourage interest and foster growth in the industry.

Furthermore, company dynamics play a pivotal role in shaping the cybersecurity landscape. Small security-focused teams often shoulder the entire responsibility for company security, which can precipitate relational difficulties due to the imposition of necessary security measures. To mitigate this, a shift in company culture towards a more encompassing acceptance of cybersecurity is needed. A more distributed responsibility for cybersecurity across the entire workforce could promote this shift.

Media depictions of cybersecurity also wield considerable influence, shaping perceptions and fostering interest in the field; films such as 'Hackers' and 'War Games' inspired the '80s generation. Lastly, a distinct gender gap exists in the cybersecurity realm, underscored by Joao's unsuccessful attempts to hire a woman for an open position. That the post remained vacant until a man eventually filled it illustrates the need for proactive action to improve gender diversity within the industry.

Audience

The discussion encompassed a broad scope of subjects, relating primarily to the domains of education, policy development, and cybersecurity. The principal argument articulated the intimate connection between education and diverse sectors such as policy formulation and cybersecurity. This overlap precipitates an intersectional understanding of these sectors and their importance within our swiftly evolving landscape. The sentiment throughout this discourse remained neutral, delineating an equilibrium in the interconnectedness of these sectors.

A notable case was made regarding the distinction between cyber-safety and cybersecurity as discrete but related concepts. A former police officer with Europol experience strongly emphasised the interconnectedness of the two. This assertion was fundamentally optimistic, alluding to the shared significance of these two facets of digital safety.

The benefits of a well-rounded strategy were emphasised, underpinned by the argument that addressing cyber-safety and cybersecurity simultaneously would optimise results for end users. This stance exhibited positivity, accentuating a forward-thinking projection for future cybersecurity strategies.

The discourse then transitioned to the issues faced by smaller island nations, suggesting indigenous cyber community groups as a solution to build and retain cyber skills. Convincing examples from the Samoa Information Technology Association and Tonga Women in ICT were presented, emphasising these groups’ crucial role in facilitating training, industry knowledge sharing, and networking. This viewpoint was positively empowering, fostering a sense of resilience within these small island nations.

The important balance between formal hubs and informal spaces for information sharing was highlighted, resonating with the consensus on the key connection between academia and industry. This endorsement of both formal and informal information sharing fosters a unified and thorough approach towards cybersecurity.

The focus then shifted to policy formulation, featuring experiential insights from the EU policy cycle, which bases its strategy on research findings and was cited as a prime example of a research-based approach. The sentiment here was positive, grounded in the EU's demonstrably effective strategic cycle.

However, a word of caution was raised regarding the necessity for policies to reflect current online threats accurately. A negative sentiment percolated through the discussion at this point, attributable to anecdotal instances where irrelevant policies did not adequately address the evolving cyber threat landscape. The conclusion drawn underscored the critical need for adaptable policies that are responsive to the current realities of online threats. Taken together, the discussion offered significant insights into the intersecting nature of multiple sectors and emphasised the urgency of accurate policy creation in response to present cyber threats.

Raul Echeverria

In 2022, a disturbing trend was noted where almost 70% of companies in Latin America reported experiencing some form of cybersecurity problem. This painted a picture of inadequate preparedness for significant cybersecurity threats across the corporate landscape. Notably, the governments of Costa Rica and Colombia faced significant cyber attacks, further emphasising the gravity of the situation.

Interestingly, in challenging business times, companies often resort to cutting their security provisions in an attempt to save resources. However, this approach was questioned, as these companies ended up losing millions to cyber attacks, losses that are not only costlier and more disruptive but that could have been effectively managed or even avoided with adequate security provisions in place.

On a positive note, awareness of the necessity for widespread, scalable solutions to enhance cybersecurity is increasing. More emphasis is being placed on robust, far-reaching education and training programmes to equip more professionals with essential cybersecurity skills. The adoption of these education and training initiatives is gradually increasing, with instances of new cybersecurity measures being implemented by companies growing by approximately 10% per year.

However, this progress is overshadowed by the fact that cyber attacks continue to grow far more menacingly, increasing by a worrying 20% and pointing to a vast shortfall in the necessary measures. It is evident that current measures are failing to keep pace with the rapidly evolving threat landscape; it is estimated that at least 200,000 professionals skilled in this sector are needed in Mexico alone to prevent such cyber intrusions.
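
To see why a 10% rise in defensive measures cannot offset 20% growth in attacks, a back-of-the-envelope sketch helps. It assumes, purely for illustration, that both series start from the same normalised baseline and that the two percentages are annual growth rates; neither assumption comes from the session itself.

```python
# Illustrative compounding of the gap between attack growth (~20%/year)
# and the adoption of new security measures (~10%/year); the starting
# values are normalised to 1.0 and are an assumption, not reported data.
defences, attacks = 1.0, 1.0
for year in range(1, 6):
    defences *= 1.10
    attacks *= 1.20
    print(f"Year {year}: attacks outpace measures by {attacks / defences:.2f}x")
```

Under these assumptions the imbalance compounds to roughly 1.55x after five years, which is the arithmetic behind the vast shortfall described above.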

In response to these insights, a concerted effort among various stakeholders is being called for. The public sector, private enterprises, and academic institutions are all urged to collaborate to understand and subsequently tackle the cybersecurity challenge. This joint strategy advocates sector-wide implementation of educational programmes aiming to enlighten companies about latent cyber risks and how to prevent potential attacks.

Arguably, certain actors within the industry have exhibited varying levels of commitment to these challenges; Raul Echeverria observed that parts of the industry do not prioritise cybersecurity as strongly as necessary. Nevertheless, this should not discourage the broader industry from setting appropriate and effective cyber defence strategies.

Additionally, it is suggested that existing corporate culture could also be remodelled. Companies are advised to adopt short-term hiring strategies, whilst simultaneously offering prospective employees a progressive growth plan which exposes them to various areas within the organisation. This would likely attract more professionals to the sector, effectively helping to bridge the existing skills gap.

In conclusion, collaboration involving the public sector, private enterprises, and academic institutions is crucial for the sector to remain secure and efficiently combat ongoing threats to their cyber infrastructure. While certain industry players have demonstrated lesser commitment, it is vital for all players to accord equal attention to these issues to not only thwart cyber attacks but also build a robust workforce for the future.

Denis Susar

The analysis underscores the paramount importance of areas such as e-government, digital skills, cybersecurity, digital harm, and ICTs in the journey towards accomplishing the ambitious Sustainable Development Goals (SDGs). SDGs 16 and 17, which focus on fostering peace, justice, strong institutions and partnerships for the goals, stand at the forefront of these conversations.

Firstly, a resounding call emerges to utilise a hub for capacity building in e-government, which would enhance digital competencies as well as cybersecurity awareness across all 193 member states. The success of this endeavour is believed to rely heavily on the level of digital literacy and awareness surrounding cybersecurity within the populace.

Secondly, a significant change in perception is presented, advocating the inclusion of digital harms as threats to peace and security. This shift in stance has been partly encouraged by a high-level advisory board’s recent recommendation to extend the definition of threats to include digital harms, thus acknowledging the evolving challenges of the digital age.

The need for cybersecurity within the public sector and ICTs is also given emphasis, with capacity building including engagements with local public officials, thus implying a comprehensive, grassroots approach to addressing these issues.

The report also addresses the vital importance of retaining skilled cybersecurity professionals within the industry. Citing job stress as a key factor causing specialists to leave, it articulates a clear need to establish methods to retain this invaluable talent.

At the heart of the discourse is the role of incentives in fostering a culture of cybersecurity. Large companies, such as Amazon, are highlighted as examples of stakeholders for whom cybersecurity breaches would prove catastrophic. This underlines the vital role that cybersecurity plays in maintaining the health of industries today.

Supporting this assertion are the forthcoming opportunities presented by the Global Digital Compact discussions, due to take place in 2024, and a review set for 2025. These discussions carry significant stakes for industry players, policymakers, and cybersecurity professionals alike.

It further suggests that a well-developed cybersecurity hub would most likely be utilised by governments, implying that increasing the competency of such hubs could significantly strengthen national cybersecurity measures.

The need for engagement from the educational sector is also strongly emphasised. A call to break out of conventional silos supports cooperation beyond the IT sector, involving a myriad of sectors, including the motor, fashion, and food industries, amongst others. This integrated approach signals an inventive strategy and is backed by an expectation for more industry participants in the coming year, illustrating a desire for more diversity within cybersecurity dialogues.

The discourse concludes with Denis Susar's exhortation for industries to "think out of the box", setting forth a challenge to step up, collaborate more, and adopt innovative thinking in the quest for more effective cybersecurity solutions.

Janice Richardson

A notable gap exists between the skills fostered in university cybersecurity education and the capabilities required in the wider industry. Industry professionals point to the absence of abilities such as comprehensive holistic thinking, effective communication, and diversity among recent graduates. The education sector recognises the importance of critical thinking, although the focus currently tilts more heavily towards technical skills such as coding and product-specific training.

Addressing this discord, the inception of a multifaceted 'cybersecurity hub' is proposed. This institution would stimulate synergy between industry, tertiary education, and the younger generation, extending its functions to an international level. Imagined as a nexus for fostering knowledge exchange, providing authentic educational resources, and nurturing understanding of industry-wide best practices, the hub is slated to play a key role in consolidating learning and progress within the cybersecurity arena.

Further insight suggests the essential requirement of introducing young people early to how the internet and its cybersecurity components function. Many reports indicate that young professionals have a limited understanding of the internet's mechanics, cloud security, and other basics. This finding highlights the necessity of a comprehensive educational base that enables adaptability to the constantly shifting cybersecurity landscape.

In the underlying report, as many as 67% of respondents from the business and industry sectors cited a deficiency in transversal skills among cybersecurity graduates, further spotlighting the flaws in the existing education structure. This perceived insufficiency emphasises the need for the proposed cybersecurity hub.

In terms of diversity, a clarion call has been made for the cybersecurity field to encourage a higher female representation, fostering diversity and inclusivity in technical disciplines. Although explicit supporting facts are not provided, there’s a general positive consensus towards such measures, indicating their significance in catalysing industry growth.

Additionally, an audience speaker stressed the importance of incorporating real-world cases into the strategic planning for the hub, solidifying the requirement for practical applications within cybersecurity education. At the moment, the hub is still in its embryonic stage of development. However, key coordinators, including Janice, are actively seeking public feedback and suggestions, endorsing a collaborative and inclusive approach for this promising initiative. A shared understanding exists that open dialogue will propel the forward motion of the hub’s development, enhancing the strategic focus and operational effectiveness of this indispensable entity.

Session transcript

Anna Rywczyńska:
I have a general background in multinational privacy and security resources, and I work within NASK, the international research institute, where I have coordinated the Polish Safer Internet Centre since 2006.

Maciej Groń:
There’s also the hotline and the help line that we have. So I think it’s a good thing that we have the cooperation with the universities.

Hikohiro Lin:
Thank you. My name is Hikohiro Lin from the PwC Japan and my focus is product security, the manufacturers, IoT devices, security things, and I’m facing all the Japanese manufacturers and also hiring the students or something like that. Nice to see you guys.

Ismalia Jawara:
Thank you very much for having me. Hello, my name is Ismail, and I'm from the Gambia. I chair the Gambia information security community and also work with the Gambia revenue authority.

Joao Falcão:
Hello, my name is Joao Falcão. I am Brazilian. I’m the vice chair of the youth standing group and I also work in cyber security

Janice Richardson:
and I'm also a member of the youth standing group. We can pass it along to the other side and introduce the rest of our speakers. We do have one speaker online: is Emmanuel ready to introduce himself? Good afternoon, my name is Emmanuel, I'm the vice chair of the youth standing group. I'll go ahead while we wait for that.

Denis Susar:
My name is Denis Susar, I’m from the United Nations department of economic and social affairs. I mostly cover e-government and IGF in my work.

Julia Piechna:
Good afternoon, my name is Julia Piechna. I work at NASK in cybersecurity and I'm also a member of the youth standing group.

Larry, CEO of Connect Safely:
Good afternoon, my name is Larry. I'm the CEO of Connect Safely, an NGO based in Silicon Valley. We work in the areas of child safety and safety for all stakeholders, as well as privacy and security. If you see me typing, I'm not checking my e-mail; I'm the rapporteur, so my job is to try to remember some of what people are saying.

Raul Echeverria:
Good afternoon, my name is Raul Echeverria, I’m the executive director of the Latin American Internet Association.

Yuki Tadobayashi:
This is Yuki Kadobayashi from a graduate school of science and technology. I do cybersecurity education and research, and I also lead a cybersecurity training programme for industry.

Wout de Natris:
Yes, thank you, Janice. The IS3C is a dynamic coalition within the IGF system, and we announced ourselves at the virtual IGF of 2020; we made sure that we had something to announce. Thank you, Raul, because I have no clue if you can hear me or not. So in Poland we introduced our plans, and last year in Addis Ababa our first report was presented, made by Janice Richardson and her team, on education and skills. So we are really hitting our stride, more or less. What we are going to discuss today is that it's very nice to have a digital report on a fairly obscure website called the Internet Governance Forum, but how do we actually make sure that it moves into practice? So what Janice is going to present today, and what we are going to discuss together, is the idea of a cybersecurity hub; that's where the work will start, and I don't want to take anything away from Janice's part. But how do we actually start moving? We have a working group on security by design of the Internet of Things that will present its report in two days' time in our dynamic coalition session. We have a working group on procurement, on government procurement and supply chain management. How are those ideas going to be translated into actions so that governments start procuring ICT in a secure way and not in an insecure way? We have a tool that is going to be developed to help them with that, so that they have a list of the most urgent and most important Internet standards and ICT best practices that they can start using when procuring. I will stop there; there are more working groups, and one that is hopefully going to start is on emerging technologies. What we're discussing today is the report we produced last year on tertiary cybersecurity education. What we found is that the curricula of most universities and higher education schools do not match what industry demands from them, let alone what society as a whole demands from them. This is a gap that needs to close, and it needs to close in a few different ways. Obviously, there's the content of these education curricula. But what about facilitating a mid-career change for people who may want to start working in this area? How do we close the lack of experts, address the gender gap, and make it more attractive for youth to work in cybersecurity? That is something that has been discussed for probably two decades, but the gap is not closing. We have some ideas to close that gap, and that is what we will be discussing today. But how to proceed? We think that it's important to bring the right people together, to create the context where people can start discussing this on an equal level. And what better place to do that than the IGF? It's the place where people can discuss at the same level, with the same importance and equality. So how do we bring these people together and get them out of their silos? How do we make something work that so far seems to have been beyond reach?
And how to motivate people to join? How to find funding for this important work that determines all of our future? And how to move forward with this? So we have a group of panelists, with Insight, which is Janice's company, and NASK, who presented themselves as well at this table. We present the concept of the cybersecurity hub: a hub where cooperation starts, concepts are turned into actions, and capacity-building is developed and supported. In this session, you will be invited to share your views on the subject of cybersecurity and on the potential issues that could not yet be solved. So I wish you a fruitful discussion, and I certainly look forward to seeing your input online when I look at the session. So instead of being the moderator, I'm handing over the microphone now to Janice, who will take the rest of the session. And when I leave, you know why I have to excuse myself. So thank you very much, and I hope we have a very good session.

Janice Richardson:
Thank you very much, Wout. I'm very pleased to be here, and after each speaker you can ask questions; we would like this to be an interactive session. I'm going to tell you a little bit about the study that we did in 2021. We managed to reach 66 countries. We began with 30 interviews from countries as far-flung as Australia and Canada. Firstly, from the people from industry and business who participated, we got a very clear idea of the profile of what they're looking for in cybersecurity. Firstly, creativity. Of course, critical thinking; that's top of everyone's list. And we got a big complaint: young people who leave university and come to the workforce are not holistic thinkers. They do not have good communication skills, and this is very important. They are insufficiently diverse; women are not joining the workforce, and young people don't seem to be very involved in the workforce. We also got a clear picture from the people from education who participated. Yes, they agreed: critical thinking. They agreed in theory, but where are we in practice? They seem to place a lot of focus on coding, on learning about specific products, and I'm glad to see that many of us agree here. We also got a very clear picture of the importance of teaching young people how things function. Young people arrive and they do not really understand how the Internet works, what the backbone of the Internet is, how cloud security works. All of these issues remain the gap, I would say, between what industry wants from people who join the workforce and what young people actually bring. Of course, we know what is happening: companies are training their own young people, school leavers. This leaves us with a workforce who know today's products but don't have that very broad education base to permit them to adapt to all of the changes. And we have a lot of young people who are learning with ChatGPT, Bard, and all the rest of the generative AI that is rearing its head everywhere. So, what is the cybersecurity hub that we are dreaming of? It will be a place where industry and the tertiary education sector are present, and, as an educator, I would say it is a place where young people are present, because if we don't learn how things function from a very early age, we are not going to jump in and learn when we reach our teens. We need authentic resources for young people to learn with, and this could be an area of exchange also. We need to understand best practice, and how to share it at an international level. As Wout said, who are the people who should be involved? How do we get them around the table? These are some of the questions we are hoping to answer during this session. Unless anyone has any immediate questions, I will hand the floor to the team at NASK.
NASK has been our partner way back since we actually created Insafe, the Safer Internet network, Safer Internet Day, et cetera. Once again, they have been our partners in the report, in the study, and as we move forward. So let me ask: what is the experience in the field at NASK?

Maciej Groń:
Thank you very much. I think that we are one of the few stakeholders who came to the global IGF straight from a local IGF. Just this Wednesday, we had our local IGF in Poland, in Warsaw, with a lot of participants. I can boast that at our IGF in Poland there were almost 800 registered people, and for the first time the youth IGF constituted the majority of participants, so it's a big difference. A big part of the discussion in Poland was about the importance of education. I will tell you just a few words about NASK, a research institute which connected Poland to the Internet over 30 years ago. Today it is the registry of Internet domains, but we are also part of the national cybersecurity system, in which we are the computer security incident response team, responsible for cybersecurity in Poland. That's why education and building awareness is for us very, very important. In recent years, we have trained thousands of people. With our so-called secure VIP training, we have trained more than 3,000 people, in individual and multi-person training. We have trained members of parliament, of government, ministers, and independent authorities like the financial supervision commission and others. We have also started training in health care centres. We also have a large influence on legislation: we work out opinions and recommendations for the ministries, especially the Ministry of Digital Affairs. We also have a large influence on law enforcement, and we are entering into increasingly better and closer cooperation with the chamber of lawyers, especially the attorneys at law and barristers; from our point of view, this has been a big challenge. We have also established a cybersecurity centre, and two years ago a cyber science centre, a coalition of three universities in Silesia. We have also established a partnership for cybersecurity: a new platform where we meet people from the private sector and the public sector, because we still see that cooperation between the private and public sector matters, and we especially want to combine the education sector and private business. We deeply think that it's very important for us to cooperate with the rest of the world, and that's why we think that the hub is something very important. Thank you.

Janice Richardson:
Thank you. If you could please pass the microphone to Julia. I’ll pass this one along. Julia is in charge of IGF Youth Poland and wants to look at this from the perspective of tertiary students. Julia.

Julia Piechna:
Yes, I would like to share with you our experience with involving young people, tertiary education students. Since 2020, under the patronage of NASK, Youth IGF Poland has been run as part of the international global IGF that operates under the United Nations. The main goal of Youth IGF Poland is to create an open forum for the exchange of experience and views among young people and experts from different fields and backgrounds. One of our main objectives is also to establish a community of young professionals interested in new technology and internet governance, while encouraging youth participation in national and international events, for example the global IGF or our national Polish IGF that was initiated in 2016. And just a few days ago, as Maciek mentioned, we organized IGF Poland, and the Youth IGF was a very focal and very important part of the event agenda, as it is each year since 2016. This year, inspired strongly by the research that Janice presented to you, we decided to strengthen our activities addressed to young people even further, and also to involve universities and representatives of academia, because these are two very important and vital target groups that should be addressed when talking about bridging the cybersecurity skills gap. So from March to June 2023, we organized seven meetings with tertiary education students at several Polish universities. The topics covered during these gatherings included, for example, cyber policy, internet governance, privacy, and human rights in the digital realm. We also presented the opportunities available to join IGF Youth, and we discussed the opportunities available in professions connected to new technologies. What is also very important, we invited them to join a competition with the prize being attendance here in Kyoto. So today, we are delighted to have two competition winners with us, Alexandra and Jakub. And the last very important thing is the questionnaire that we prepared and invited young people to complete. This survey is not meant to be representative; it rather serves as an initial analysis of students' attitudes and concerns regarding online security and career prospects. I will just present a few findings from this questionnaire. For example, the top career interests for young people are artificial intelligence and cybersecurity. 15% of our respondents attended university cybersecurity courses, while 21% participated in external cybersecurity trainings. 71% believed that cybersecurity training during their studies should be mandatory, and they also think that education about cybersecurity should be included at all educational levels, even in pre-school. What is also important, soft skills like teamwork and communication were considered by 89% to be as crucial as technical skills. Cybercrime, according to 99%, and data leaks, according to 97%, were identified as major cybersecurity threats, and 63% expressed concern about future cyber attacks. These are only selected findings; 139 respondents participated in this questionnaire. Thank you.

Janice Richardson:
So if I can now turn to Ania. And some of you may be thinking: but where is this initial report? How can we see the results? Well, you can very easily go to the IS3C website, and there you can get the report in English and in Polish. Ania.

Anna Rywczyńska:
Yes, in my few minutes I would like to relate very strongly to what Janice said at the very beginning: that cybersecurity competence education is not actually a question only for tertiary education; it's also the responsibility of primary and secondary education. We strongly believe that one of the goals of the hub would also be to create recommendations on adapting school curricula to the digital transformation. Having the Cyber Threat Prevention Department at NASK, and having run the Safer Internet Centre for almost 20 years, or 17 years, we are in a permanent dialogue with schools. We prepare educational materials for them, we organize events for them, and we have youth panelists with whom we talk a lot and consult on all the situations. I'm very interested in how it is in your countries, and I hope to learn a lot during this workshop, but in Poland there is really a significant need to focus more on media education in schools. Recently, when we were preparing educational materials for schools, the school scenarios, we did research with our teachers. We asked them what their needs are, precisely so that the scenarios would meet those needs. And I would like to share with you a couple of results that we got from them, asking what they think about actual media education at schools, especially in the field of cybersecurity. The majority of teachers, 57%, said the existing curriculum is not adapted to the realities of technological development. Almost 30% believe that they, the teachers, don't have sufficient knowledge even to recognize if something bad is happening to children, like any sign of problematic internet usage among students. 57% of the teachers we asked say that they have only about two lessons, twice 45 minutes a year, to talk about cybersecurity and internet safety, and of course they think it should be multiplied. Mostly, when they talk about the obstacles to media education at schools, they say that there is a lack of time and that the school curriculum is totally not adjusted to the needs of the coming future. They also consider the lack of cooperation with parents and the children's general environment a big problem. So a lot is happening in Poland at the moment, and there are many very good changes, but we need a bigger focus on changing the curriculum. What the teachers told us in the questionnaires was that we need to include digital competences from the first years of education. So I think we will also talk a lot about primary and secondary education in the field of cybersecurity competences.

Janice Richardson:
Thank you, Ania. So we've looked at some of the necessities and at what age we should start integrating these into education. We've also looked at the local and national level, Poland. Let's jump now to the international level, the United Nations. I'd like to ask Denis Susar to take the floor, please, and to tell us a little about how we can push for this cooperation at the international and multi-sector level. Thank you.

Denis Susar:
Thank you, Janice. First of all, I would like to start by thanking the national research institute NASK, IS3C, and you, Janice, for inviting us here. I think it's a very good learning opportunity for us as well: turning this theory into practice, and seeing what we can actually do in action with governments and other stakeholders in the field. I think this is a very good example and it has great potential. While listening to the participants, I was thinking about where we can actually use this, in which area. I had prepared some major cybersecurity-related activities from the ground, but I think this group is very familiar with what I was going to say. Instead, one thing that comes to mind is that in our division we look at e-government: how 193 member states are using ICTs to deliver services, as well as the most populous city in each country. We do a lot of capacity building in the area of e-government, including with local public officials, and cybersecurity comes in both from the supply side, from the government, and from the digital-skills side, from the demand: how aware people are of the issues and the threats. I think this hub could be a great resource for both sides, so I just want to put that on the table. Other than that, please stop me if I run over, because I just want to highlight the main things from my notes. This group is familiar with the UN GGE, the Open-Ended Working Group, et cetera. But one thing that may not be familiar is that the Secretary-General recently, in 2022, established a high-level advisory board on effective multilateralism. One of the significant recommendations from the board is expanding the definition of threats to peace and security to include digital harms. In their recommendation, they also call for greater capacity building, so again, this fits there. Other than that, there is the Global Digital Compact, et cetera; these all have a security part, but I will not go into those principles. I see that this group is more action-oriented, so I'll stop here, and if I have any other ideas,

Janice Richardson:
I'll come back. Thank you very much, Denis. And I think it's important that we do look at e-government also, because that's where the necessity is, and governments are also doing their own capacity building, as we hear. We are now going to turn to industry, and we're going to call on Professor Yuki Tadobayashi. Is that right? Here is the microphone, please. Let's hear about it from your point of view. Thank you.

Yuki Tadobayashi:
So this is Yuki, and I work for a graduate school, and I have many PhD students here. I also have many trainees from industry. I run a cybersecurity education programme in the university, as well as a cybersecurity training programme for industry. So I would very much like to resonate with what Janice said at the beginning: there is a huge gap between what the university is doing and what industry wants. But there is a valid reason for that. For instance, the university needs to innovate. The university wants to, for instance, invent AI, whereas industry needs to use AI. So there is a huge gap, because if we try to develop an industry training programme, we teach them how to use AI securely, versus in the university, we teach them how to invent AI, right? So there is black box versus white box thinking. There is a huge contrast, and it is with valid reason. I know, because I do both: white box teaching as well as black box teaching. The industry training programme in Japan, for instance, is a huge programme funded by METI at 20 million dollars every year, and we actually teach them how to use devices, how to use cloud security, how to implement zero trust, how to implement DX with security. And this is multidisciplinary: how to use cloud, how to use zero trust, how to use AI securely, how to implement IoT securely; versus in the university, you have to implement IoT security itself, for instance secure coding, et cetera. So these are two facets of the same problem, right? In the industry training programme, we also invite them to do good teamwork, to work in a team to solve a huge problem, because the problem is multifaceted: IoT, AI, robotics, cloud, in a huge corporate system. So you intrinsically need teamwork for problem solving. But in the university, you must be first author of some paper; you must insist, I did something, right? You must prove that you are an innovator, that you are excellent. Because of that, I cannot invite every student to do teamwork; they graduate alone, because of the credit requirements and the graduation requirements, et cetera. So I think the university system is a bit old from the cybersecurity perspective. I will stop here; for further questions, thank you.

Janice Richardson:
So it seems that we really have a dilemma here: how do we put together the black box and the white box without getting a grey area? I'm wondering, is Emmanuel online now? Right, well, you're going to do the work now. Can you take out your mobile phones? Can we let you lead us on this, Katarzyna? Yes, do you have a microphone? Yes. So you're going to have a question, and you're going to have to think about the various options that we give you, which will come up on the board. But first, we need to tell you where to go with your mobile phone.

Katarzyna Kaliszewicz:
Okay, give me a second. I need help, because I need to share my screen. Where should I click? Probably somewhere here. Oh, it's here. Okay, I'm good. So we are going to have two questions for you. Let's start with the first one. Can you enlarge the QR code a little bit, maybe? Yes, let's do it full screen. You can join us by using the QR code, or just by going to menti.com and putting in the code 67413964. Let's go more slowly: menti.com, and the code is 67413964. It's also here at the bottom, very large now, so they can see it. Are you online, do you have the question? Yes, we have 13 people online. Super. So what we would like you to do is to prioritise: drag the answers from the most important for you to the least important. Our question is: what should be the key functions of the hub, in order of priority? Of course, we are relying on your input here so that we can really see what you think the priorities should be. Okay, we are still waiting for answers. Don't be shy, we have 19 people and only eight answers. How many now? Thirteen. We want to give you time because we do want you to think about it; it's a very complex question. If you have finished, I'm wondering, is there anyone who has a question they would like to ask at this point? We've been throwing information at you, but we haven't heard very much from you yet. Okay, I think we are ready, so we'll move on to the second question immediately, and then I will give you the microphone, Joao. The second question is: which practical steps should be prioritised to launch and build the hub? Please vote on the most important one. I know it looks small here on the screen, but on your phone it's probably more visible. The options are: define a strategic plan, with goals and objectives; secure funding and resources to establish and maintain the hub; create an online platform to deliver training, workshops, and so on; seek accreditation or recognition from relevant industry bodies or government agencies; and develop marketing and outreach strategies to raise awareness and attract partnerships. You can pick only one here, yes: the one which is, in your opinion, the most important. Joao, you wanted to comment? Thank you. Well, I would just like to comment on this last question, about the necessity of bringing companies closer to students. I know from experience the problems of learning in this field, because most of the systems whose security we try to test are very expensive, very specific, and very difficult to get your hands on. So connecting the people who want to learn with these kinds of resources definitely eases the learning process; in fact, it makes it possible. Thank you, and I think that's an extremely pertinent comment, and it's one thing we hope we'll manage to do with the hub. I think we have our answer. Please tell us, what is the priority according to the people here? The priority is to define the strategic plan: goals, objectives and, of course, the long-term vision of the hub. This is the most important for 12 people. And the second one is to create an online platform to deliver training, workshops, and networking opportunities.

Janice Richardson:
Great, and that's really something that came up with the PhD students we were working with in the interviews and the survey. Students from Samoa and Nepal, for example, both said there is no opportunity in their country to actually do this type of training. We are now going to move on to our speakers. As Dr. Emmanuelle is not online, I'm going to ask Raul Echeverria if we could please hear from you.

Raul Echeverria:
Thank you very much. I was asked to speak about how prepared private companies in Latin America are, but let me provide some context. I think everything that has been said here applies to the whole world; it's not a regional issue. In 2022, according to a study that was disseminated broadly in the region, almost 70% of companies, of different kinds, reported having had some kind of security problem. Unfortunately, we don't have monthly studies to see how those issues are evolving, but everybody could see this year, just in the press, very big security incidents in the region, in both the public and private sectors. Two governments faced serious attacks, Costa Rica and Colombia. In the case of Costa Rica, the government was left in very difficult conditions to continue working in several areas; the case of Colombia was very recent. Private companies have also been in the press, even in specialised sectors: very famous law firms have been attacked and all their customers' information compromised, and we don't want to know what kind of information they held. So the point is that companies are becoming more concerned about this, which is good. In every survey, companies appear more conscious of the risks and say they consider security risks to be important issues. But this is not reflected when resources are allocated, and in difficult times, as we faced in the last couple of years, some studies showed that security is one of the first areas where resources are cut. So at the end of the day, it seems that companies don't really understand the risk they are facing until they have a problem. Usually the problems come in the form of ransomware attacks: access to information is blocked and the criminals ask for ransoms to free it. In those cases companies on average lose a few million dollars, three or four million per attack, when they could have spent less money preventing those situations. I think there is a need for massive programmes, and what we are discussing here about skills and human resources is very interesting, because this is a problem. I have read in the news a study saying that in Mexico alone there is a need for 200,000 people to work in this area; that really is an impressive number. So we need to implement scalable solutions and massive programmes. It's not enough to support some companies with public resources or try to educate them; we need to do something massive. It is very telling that the number of companies saying they are adopting new measures is increasing by approximately 10% per year, but attacks are increasing by 20%; at this rate, we will not win this battle. One of my hats is that I belong to the Uruguayan chapter of the Internet Society, and we started to work with the Global Cyber Alliance to disseminate some measures for SMEs. I think this is one of the things that could be done, but we should look for solutions that are scalable, as I said before, and try to reach much bigger audiences than we are reaching now. Thank you.
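The arithmetic behind that last point is worth spelling out. A minimal sketch, using only the round figures quoted above (attacks growing roughly 20% per year, adoption of defensive measures roughly 10% per year; these are the speaker's estimates, not measured data):

```python
# Illustrative only: compound growth of attacks (20%/yr) vs. defensive
# adoption (10%/yr), both indexed to 100 in year zero. The growth rates
# are the round figures quoted by the speaker, not measured data.
attacks, defenses = 100.0, 100.0
for year in range(1, 6):
    attacks *= 1.20
    defenses *= 1.10
    print(f"year {year}: attacks {attacks:.0f}, defenses {defenses:.0f}, "
          f"gap {attacks - defenses:.0f}")
# The gap widens from 10 points after one year to roughly 88 after five:
# even though both sides grow, defenders fall further behind every year.
```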

Janice Richardson:
Thank you, sir. A few important points came up here. Who are we talking about when we talk about the cybersecurity industry? In fact, we all have to be cybersecure: farmers, tradespeople, everyone needs this cybersecurity. The second thing of specific interest: attacks increasing by 20%, but resource allocation only by 10%. We are not going to get very far if this situation continues. Our next speaker, I think, is going to tell us more about the expectations of the private sector, if I'm correct. I'm going to pass the floor to Hiko, who works here in Japan. Over to you, Hiko.

Hikohiro Lin:
All right, thank you. I'm Hiko. I'd like to share something about graduates who finish school and join companies. Currently I'm working at PwC, doing cybersecurity consulting for all industries, so we are also hiring those people. Before PwC, I worked for Panasonic for 18 years, and I was the interviewer for graduates, more than a hundred people. What I'd like to share is that those graduates have plenty of good basic knowledge of cybersecurity, which means they all pass, very good. And why? Because I think in Japan we collaborate with the universities, like Professor Katowice's, and we go to the universities to meet the professors and meet the students. We explain what we are doing in cybersecurity at Panasonic or at PwC, so that they understand and maybe become interested. So it is industry communicating with the universities, with academia, not leaving it to the student; that is very important for the graduates. I think this is one good idea. We also strongly encourage internships, paid ones, not unpaid: if people are really interested in doing an internship, we are always open to letting them work and feel what we are doing, so that they can imagine what a cybersecurity job is like. That is also important: it gives them more opportunities and chances. As for what more we can do to have specialists who meet business needs, I think this is a very interesting story. Three months ago in Japan, a technical high school, together with cybersecurity companies, ran a workshop on hacking a ship, a real operational maritime system, with the students and the cybersecurity companies doing an attack-and-defence exercise together. It was very exciting: you can use the ship, try to attack it, and feel what it is really like. Those high school students were very excited, and maybe some of them will take maritime cybersecurity jobs in the future. So we need to give them real experience, which is very important. Also, corporations often sponsor cybersecurity conferences, and we invite students to join for free. Sometimes cybersecurity conferences are very expensive, 500 or 1,000 US dollars. We get the students in for free, and maybe they do some small jobs at the booth, but we also introduce them to cybersecurity people so that they can make connections that may support their future. I'm going to stop right now, okay? Thank you so much.

Janice Richardson:
I have a question for you, because what we learned from the research is that the cybersecurity industry is really looking for a very diverse workforce. They need the younger, the older, males, females. It seems to me that you're talking about students. Do you have any great ideas to help along these mid-career shifts? How can we get a more diverse population involved?

Hikohiro Lin:
You mean the mid-career shift, people who have worked in a different job and are trying to jump into cybersecurity? Yes? Okay.
Well, I think this is a very important thing. Some people are not very good as software engineers, or don't know much about cybersecurity, but they try to jump into a cybersecurity job to make a career change. We also accept those people, because cybersecurity benefits from very different points of view: there is the engineering, technical side, and also governance and regulation, and many kinds of jobs are involved. So I very much encourage those people to jump in, and if they need some training, we're willing to support them and help them build a different career in cybersecurity. Yes, we do that.

Janice Richardson:
Okay, because it seems to me that a lot of people aren't aware that, in fact, they could make this switch mid-career. I think you have something to say here. Yes, please do.

Yuki Tadobayashi:
Yes. For that kind of career change, actually, it is industry that needs it, not just the employee. Someone may have been working in a factory for 10 years without being aware that the company needs cybersecurity people in the factory, because factories are becoming more digitised. Because of that, we have an industry-sponsored training programme where the company chooses the nominees, not the individuals themselves, and sends us the trainees for a one-year programme. It is a huge programme: full-time for one year, nine to five, every day from Monday to Friday. They come to our facility to study and do cyber exercises, programming, penetration testing, defence exercises, everything, from July to June, a whole year. They do a lot of teamwork, a lot of presentations, and a lot of simulated business-oriented exercises: briefing CISOs, briefing boards, briefing accountants, reporting to lawyers. So it's a one-year programme, and it's huge. But we have a lot of alumni by now, more than 350, and everyone says: I was new to IT, but after finishing this programme I'm actually a cybersecurity specialist, able to defend the company, and I'm very proud of it. But this requires one year, and there is huge pushback from industry management asking: do you actually require one year? From our experience, this kind of cybersecurity training does need one year, because of the huge technology stack and the many standards, regulatory and legal developments, lawsuits and case law; all of that complexity has to be taken into account in a business context, and that requires a huge amount of time. But with this one-year investment, one person can change from being a lawyer or a salesperson to being a cybersecurity professional. We have many, many living examples in Japan. Thank you.

Janice Richardson:
So that does seem a very interesting good practice that other countries could benefit from. I'm going to pass the floor now to our youth. Joao, I'm sorry, I have a lot of trouble with your name. Can you please remind us where you come from, and tell us your point of view on this?

Joao Falcão:
Hello, thank you, Janice. I think the best way to start is by telling a story that happened to me last month. I was visiting a factory, and as a cybersecurity specialist I need to really understand how these factories work. I sat with them and spent a couple of hours asking about everything, how each one of the machines worked. And I said to the person: okay, we have two weeks for me to understand everything and point out your errors. The person looked at me and said: okay, so you are a generalist specialist. To me, this shows the most difficult part of cybersecurity, which is that you need to have transversal knowledge. You need to deeply understand how these machines work to know what they shouldn't do, and this creates a huge barrier. When people pivot their careers, they can take advantage of already knowing the sites, the factories. But when you bring in young participants, you don't have this advantage; the person needs to learn all of it, and this is incredibly difficult. I had the opportunity to sign up for a course on industrial security, and the first task was: okay, tell me more about the space you work in, because we are going to work on it. And I said: what? Space? Machines? I don't have any experience with these machines. And the person said: oh, okay. That became a major issue for me. So being a young person entering this subject was, in the beginning, actually a very lonely task, because it was just you and your computer, learning how to work with these systems and how to exploit their vulnerabilities. At least I am seeing a strong shift now. When I used to go to security events, most of the people were just curious; now they work in the field, and this sharing of experience really evolves the field and the community around it. But we really need to bring in the people coming to this new field with almost no prior knowledge, because we know the basics of how to program and how to operate some machines, but the deep knowledge you get from the experience of running them, we will not have. And this is a great barrier for us. Thank you.

Janice Richardson:
So you're really underlining the importance of transversal knowledge. And yet in our report, 67% of respondents from the business and industry sector consider that cybersecurity graduates have insufficient transversal skills, so it's really something they picked up on. Ismalia is from the Gambia. Where is Ismalia? Oh, sorry. Over to you. I think you're going to tell us a little about how we can diversify the cybersecurity workforce and encourage more women into it.

Ismalia Jawara:
Of course. My name is Ismalia Jawara, and I am from the Gambia, West Africa: a small country, with a population of a little over 2 million. I am the chair of a cybersecurity community called Gambia Information Security, and I also work as a senior security analyst for the Gambian government. I will share a similar story to my brother's here. I am from a developing country, a country where cybersecurity education is not offered in any university or college, and where it is literally impossible for someone to be encouraged to pursue a career in cybersecurity. I stumbled upon it through one of my mentors, a Peace Corps volunteer, a US citizen who was in the Gambia. Being a curious guy, I was always with computers, and for some reason, which I regret, I hacked the computer network, and it was quite serious. She brought me in and mentored me; this was back in 2014, 2015, and she first gave me some online courses. That is where I kickstarted my cybersecurity career. Fast forward to 2019, when I started working for the Gambian government as a security analyst, the first cybersecurity officer in the revenue authority. I realised over the years that women and youth are really interested in this industry, especially in Africa. Looking at the skills gap and the cybersecurity expertise needed at the global level, this could be an opportunity for African governments to get youth into the industry, which would also address youth unemployment at some point. So I started the Gambia Information Security community to see how best we can involve academia, the government, and the youth leading that discourse and initiative, organising boot camps and cybersecurity education programmes. Through that process, we were able to graduate about 50 university students. There is this notion, as was mentioned, that cybersecurity is an IT or purely technical job, but the crossover, moving from legal to cybersecurity or from HR to cybersecurity, would surprise you. When we first went to the University of the Gambia, we conducted research with participants from departments across the university: not just computer science, but also development studies, the sciences, and the legal department. Almost 80% of the participants who showed huge interest in cybersecurity were not from the computer science department. Many of the computer science students want to become developers; they want to build AI and other programs. Then you see the legal department students, with GRC in mind, wanting to get into the industry. So what we did was turn to Cisco, and we have to give kudos to Cisco and to the ISC2 certification body. Quite recently they have done a great job by providing free cybersecurity scholarships for underprivileged communities and for those who want to get into the industry. We used that opportunity to train over 100 youth through this programme, and I am really happy that 25 people have completed the ISC2 certification: out of those 25, 15 are women and 10 are men, and they have all earned their entry-level ISC2 certification. That is essentially what I have to say. I would also encourage collaboration and participation at different levels, including internationally, to involve the Global South in these processes as well.

Janice Richardson:
So interesting, because the young people we talked with during the research told us there were simply not enough online courses, not enough ways to access the cybersecurity sector. I'm wondering, how many people here are looking at cybersecurity from the point of view of industry? If your background or your workplace is industry, please raise your hand so we can see who we are talking to. Industry? One hand. A big one. Yes, please, go ahead.

Raul Echeverria:
Well, people in my situation, for example: I'm not dedicated 100% to this topic, it's only one of the things that I do. This is why I didn't raise my hand.

Janice Richardson:
Well, does that mean that everyone in the room is more or less working in the education sector? Raise your hand if you consider that you are working in the education sector. There are some hands that weren't raised, and I'm just wondering, which sector do you represent? Yes? The microphone, please.

Audience:
Thank you. Actually, I had a bit of difficulty raising my hand because I'm half and half between the two: it's hard to define whether you provide guidance or knowledge, inspire policy, or educate as such. So I think there is a direct overlap between these two areas. Can you please tell us a little about yourself? What is your profile? My profile: I'm actually a former police officer; I used to work at Europol. Obviously, we look at cybersecurity from a somewhat different angle, but for this discussion I think this profile is of particular interest, because for us police officers there is no barrier between cybersecurity and cyber safety. And I think this is the dilemma here: we should never look at just one or the other, but at both at the same time, because from the perspective of a victim it doesn't really matter. So I would appeal that whatever is planned as far as the strategy is concerned, let's address both, because in the end the distinction will not matter to the end user or to the end victim. That's the perspective of law enforcement, at least. Thank you.

Janice Richardson:
Thank you. And that is really interesting, because I think almost everyone here is involved in safety almost as much as in security. And you'll remember that here we are about internet standards, security, and safety. But thank you for reminding us. I'm wondering, from those people we haven't heard from yet: do you have any really great examples, some good practice from your country that you think we could all learn from and that would help us as we go forward in building the cyber hub? Is there anyone who'd like to take the microphone and tell us about good practice in their country, to add to what we've learned here so far? No? Would anyone like to take the microphone and tell us their point of view? Yes, take it around. No one? I'm sorry, I didn't know there were people behind us.

Audience:
Thanks so much. I'm Clay from FIRST, the Forum of Incident Response and Security Teams. My previous job was working with the Pacific, and I think you mentioned Samoa earlier, so I can use an example from Samoa; but it's an example you can also see in Tonga, the Solomons, PNG, and across the Pacific. Being small island countries, they have a lot of challenges with building and retaining cyber skills, just like the rest of us, but almost amplified due to the size of the countries. One of the most effective ways we've seen to build a more sustainable cyber-skills community is through organic cyber community groups, potentially like what you've been working on. In Samoa, a few years ago, a bunch of folks came together just over breakfast and created something called the Samoa Information Technology Association. This creates a space for training to occur, not just from external donors coming in and delivering training, but training between professionals within Samoa. It creates an informal space for industry knowledge to be shared, similar to what happens in network operator groups: very much an informal place to talk shop. From what we've seen, this space helps fill the gap folks were mentioning between the academic side and the industry side, because it's where newcomers to the field can talk, meet, network, and learn lessons directly from current, active operational experts. So, yes: the Samoa Information Technology Association, and there is also Tonga Women in ICT, which again was started over coffee at a coffee shop. So while hubs are really, really great, I think it's really important to leave space for informal information sharing as well, not just formal trainings and things like that.

Janice Richardson:
And that really is, I think, a very interesting idea: training each other, and this idea of community which you mentioned. Is there anyone else who would like to tell us a little about good practice, something they've been involved in that they think really made a difference and that we could take on board as we move forward with this hub? Yes, please, take the floor again.

Audience:
I hope it will be useful. My experience originates from the EU policy cycle, so it very much relates to strategy. When I was working at the EU agency, we used to base our strategy on research findings. So first, as you've mentioned, the report is obviously very useful, but it's also necessary to look at current experience and at what the online threats are. As you said, there is a nice bridge between practitioners and learners. It's always very important to look at real cases, investigate and research them, and then plan the research cycle or the policy cycle accordingly. I have also witnessed instances where policy was not relevant to what was happening online, when there was a big gap and strategy was prepared in silos; that is the worst-case scenario. So the EU policy cycle, since it came into force, may be a useful good practice for this forum as well. Thank you.

Janice Richardson:
Yes, I do think that's extremely interesting, because one of the first things we were thinking of doing with the hub was going into universities, perhaps with a survey, or beginning with interviews, to understand how cybersecurity is being taught. But what you're telling us is that we should also be looking at the cases, and so our hub should have an easy way of recording cases so that we can really link the two. Are there any other ideas? Because we're really starting out now; over the next days we're pushing forward to see how we're going to develop this hub and what it will really look like. We really thank you for your input. Are there any other ideas? We'd really love to hear from you. You've come to the session, and I'm sure you've got ideas at the back of your mind, so please ask for the microphone. Speakers, what have you got to add? Yes, please.

Denis Susar:
I can add a quick idea. Maybe what the hub can consider doing is helping to retain talent. As we know, and maybe colleagues from the private sector can comment more, the job is very stressful, and many cybersecurity experts leave their jobs. So what do we do to keep them at their posts? This is a question for the table.

Hikohiro Lin:
Yes, the cybersecurity job is very stressful, because there is a lot of incident response. You need to take care of clients and keep them safe from attacks, and sometimes it's 24 hours. Sometimes people leave because they don't want to do this job anymore, and they go back to software engineering, coding, or some other job. What can we do? Of course, raising the salary is a good idea, but it's only temporary. I think we need to think about their quality of life, not just the pressure: to take care of their quality of life and find different ways to make them comfortable working in a cybersecurity job. That's why, if they want to attend a conference like this, we let them go; if they want to do training, we let them do it. We always think about what they want, and if we can accept it, we make them that kind of offer.

Joao Falcão:
I think we could also use a mindset shift, because usually the cybersecurity team alone is held responsible for cybersecurity. If you put the whole responsibility on this team, which is usually very small, you bring a lot of stress onto them. It's difficult to work on a cybersecurity team because everybody, even inside the company, sees you as an enemy: you are trying to impose things, push things, and make things harder for them. So you have this difficult relationship, because you are responsible for security and yet you have great difficulty implementing the security changes.

Janice Richardson:
Right, and it's really interesting: we haven't talked about that stress, about that quality of life. So we see more and more things that the hub should take into account. Katarzyna, can you please remind us of the priorities? What were the answers to the first question, please? Yes, I think that's the best idea. And then we're going to have a brief summary of what we've looked at: Larry is going to raise some of the points so that we can have some final discussion about these issues. So, yes.

Katarzyna Kaliszewicz:
Okay, so let's take the first one, because we haven't talked about it yet. For the key functions of the hub, the first priority for most of our audience is to promote collaboration between industry, universities, and the cybersecurity workforce. And I think we all know that this is very popular here, working between different stakeholders. The second one is to enhance cybersecurity skills at all levels of education. The third is to gather and scale up good practice from the cybersecurity and tertiary sectors. The fourth: raise interest in careers in the cybersecurity industry. And the fifth is to provide online training from top experts on emerging topics, which I find quite interesting, because in our second question, creating an online platform for trainings and workshops came second, while here it's last. That is interesting.

Janice Richardson:
Yes, it is. We'll get you to reflect on that and see if we can figure out why that might be. I'm going to ask you, Larry, to give us a brief rundown of what you've picked up. Larry is also from the United States, and with CBS. Well, come on, you should tell us a little of your background, a little more than what you've told us, so that we can see the angle you're coming from.

Larry, CEO of ConnectSafely:
Well, I'm a recovering academic. It's been many years since I've been at a university, but I came to technology from an academic perspective and then became a technology writer. I wrote a newspaper column in the United States, and actually still write it; it's been running for almost 40 years, in the Los Angeles Times, the New York Times, and now the Mercury News. I've been on national radio and television with CBS News, and I've worked with the BBC and other news organisations. So I've come at this from a variety of angles; I guess I can't keep a job, so I keep switching careers. I'm a mid-career changer myself. I think a lot about how you can take the kinds of things we're talking about here, the things we talk about at universities, the things engineers talk about, the things policy people talk about, and translate them for average people: people who read at an eighth-grade level, parents, kids, teachers, the kind of people who listen to me on the radio, everyday people, most of whom have never been to an Internet Governance Forum, never taught at a university, never worked in the tech industry, but who probably have a smartphone, probably have a computer, probably have kids using the technology, and so have a very strong vested interest. So that's where I come at this from. I have been listening carefully to your conversation today, and clearly my notes only reflect a small portion of what was said; so if you said something brilliant and I didn't get it down, accept my apologies, but this is what I took away. First of all, I want to add another piece of context. About 20 years ago I attended a conference at Carnegie Mellon University in Pittsburgh where this very problem was being articulated. So this is not a new conversation, as everybody in this room knows; it's been an ongoing issue for many, many years. I just wanted to throw that into the hopper, as to why we need to make it more attractive for youth to work in cybersecurity. A couple of people pointed out that the gap is not closing: the problem grows by 20% a year, while the human power to solve it grows by 10% a year. You don't have to be an MIT graduate to figure out that that math is not working. That was a really clear statement. Also: bringing the right people together to create the content, and that gets into something I'll talk about in a few minutes, which is the interdisciplinary approach. What I heard over and over today is that this is not something that comes simply out of the engineering side of the equation. You need to bring in youth, social scientists, humanities people; people from all walks of life and all disciplines need to be approaching this problem. You can't solve it simply with code. Code is important, but there are so many other factors, including social issues and norms, et cetera. Janice made a really good opening statement which I think should be part of the overall discussion, sort of the foundation: what companies are looking for is critical thinking, creativity, holistic thinking, and diversity. That's so important, and those things tend to get overlooked when you're looking for people who you think just have the right technical skills but don't necessarily have the creative skills, the critical-thinking skills, the holistic view of the world.
And of course, as many people have said, such hires don't necessarily reflect the workforce and the communities we should be serving; that's where diversity comes in. I love this comment someone made, and I apologise, I don't know who it was: educators tend to focus on coding, but not on teaching young people how things function. What's the backbone of the internet? How does cloud computing work? Understanding the gestalt, what it is you're trying to fix, is really important if you're trying to fix it. It would be like a refrigerator repairman who has never used a refrigerator: you might know where the parts go, but would you know what they do and why they're there? I love the comment that cybersecurity is important for primary and secondary education; some think it should be mandatory, but clearly it needs to be something we start talking about very early, getting people excited about it. It's true that right now, at least in the United States, we have close to full employment, but that's a temporary thing; there are many periods when we have far more applicants than jobs. And there are hundreds of thousands of cybersecurity jobs open in the world, and certainly in the country I come from. Several people also talked about factories and farms: this is not just a tech-sector problem. No matter what you do, whatever industry you're in, you need cybersecurity specialists. It's almost like every company needing a chief financial officer or someone to do the books, or somebody to clean the floor; every company needs somebody to watch cybersecurity. I don't care if you're a small organisation or a multinational corporation, someone there needs to be thinking about cybersecurity. Even in my little NGO, we don't have a dedicated person for it, but we certainly have to pay attention to it, because, like everybody, we face attacks against our infrastructure as well. A couple of people mentioned creating opportunities in developing countries. Gosh, that's important. First of all, there are problems in developing countries that need to be addressed, and second, there's a huge talent pool in the world that is not being utilised; that's the word I'm looking for. We really need to utilise people from around the world. I pick up the phone, I call a company, and I get somebody with an accent somewhere in the world, usually in a relatively entry-level position. Well, there are very smart people in parts of the world beyond the developed world who could be doing high-level work, and we just can't afford not to take advantage of that talent. I really liked the comment about giving high school students real experience, really getting them involved. There is such talent among high school kids, and I shouldn't even call them kids.
Many of them are capable of doing serious work, and while I would never advocate child labour, I do advocate drawing on young people's ability to be part of the solution, and helping them see cybersecurity as a field they can go into from college, or maybe without even going to college, finding jobs in cybersecurity directly out of high school. There's certainly plenty of work, and plenty of talented people who could do it with or without a college education. Someone made a comment that gets back to my refrigerator joke: visiting a factory, the cybersecurity specialist needs to understand how the factory works, and how each machine works; I believe it was you who made that point. Again, you can't fix something in a vacuum; you really have to understand the context in which it operates, and I think that should be very important to any kind of hub, based on what I heard today. Lots of graduates have insufficient knowledge of real-world applications: again, the same notion, you need to have a gestalt, a real understanding. Someone made a comment about networking, and I'm going to combine that with the suggestion that more young people should be coming to conferences like these. And I have to tell you something: 90% of what I learn at conferences like the IGF doesn't happen in meetings like this. With all due respect to the brilliant people who just spoke, and all of you were very good, it's the conversations you have in the hallway, the connections you make, that networking. It's the same thing you get at university. Why do Harvard students tend to do better than people at other schools? Because they meet people who can open doors for them. In the eight hours or so I've been here, I've already opened so many doors and found so many things I can do with my little NGO, based on the people I've talked to since I got here this morning. It's amazing what I've accomplished in the last few hours just walking around talking to people. So getting young people into those conversations is so important: connecting them with mentors, connecting them with each other. It's just essential. The last point, which I thought was very interesting, is the comment about how stressful cybersecurity is. And I can see that. It's probably one of those jobs like the joke about airline pilots: a flight is six hours of pure boredom and three seconds of sheer terror. When the system goes down, and maybe it doesn't go down that often, it's hard to imagine what the cybersecurity staff have to go through. Actually, I can imagine, because our systems have gone down: connectsafely.org went down literally an hour before our Safer Internet Day event, and we panicked. Luckily we found somebody who could bring our system back up in time for the event, when we were going on national television to promote what we were doing; that was the worst possible time for our system to go down. Those moments are very, very stressful. So finding ways to encourage people, to retain them, promote them, compensate them properly, and provide them with career advancement, all the things you need to keep a workforce happy and productive, is very important.
I would say that, for a hub, and again, I came into this knowing very little, just taking notes, what I think I learned today from listening to all of you is that this hub you build has to go from the highest-level conceptualisation of what you're trying to do, and we also learned this from your little survey, what you're trying to accomplish, the goals, the frameworks, all the way down to the nitty-gritty of how it operates. I would even argue for some kind of job bank, linking people who are looking for work to the jobs that are out there. So it's a big effort you're taking on, but it seems to me, despite what I said about going to sessions, this session was extremely productive, because you came away with a very rich blueprint of what you need to do, what we need to do, what the world needs to do, to have a more secure cyber infrastructure and to provide well-paying, meaningful, and important jobs to probably millions of people around the world.

Janice Richardson:
Thanks very much, Larry. I'm going to start with Raul, and I would like each of our speakers to have a few words, some last thoughts, to wrap up this session. But of course, if there are people in the audience who also want to say a few words, you are most welcome. Yes, thank you.

Raul Echeverria:
I found every comment very interesting, but especially what Hiko said. Maybe companies should start hiring people with the understanding that they will work in this position only for a short time, and offer them a growth plan to move to other areas inside the technology side of the company. What really concerns me is the idea of scalability, because we speak about providing training on a platform, yes, but how many people can we train on a platform? What can we really do to achieve things in a different dimension, quantitatively different? We say there is a demand for talent that is not being met, but at the same time I don't know how that is calculated; if we calculate how many people we need to face those challenges, then yes, we need more people, but on the other hand, how many companies, at least in Latin America, are really hiring? If they look for more people, if the demand increases, more people will want to work in this area, so it's a complicated equation. We probably need public support for the private sector to understand what the challenge is and what to do, but we have the same problems in the public sector, because the public sector doesn't understand the problem either; I mentioned two cases in Latin America that were very serious in the last year. So maybe we should work more with intergovernmental organisations, trying to convince them of the dimension of the problem and how we can involve them as part of the solution instead of part of the problem, and how we can work together with the private sector and the education system, so that it is really understood that this is a problem for the country, for the region, for the world, and so that we elevate its level of priority. Thank you.

Janice Richardson:
Yes. Katarzyna, has anyone come online? Do you see anyone who has raised a question? Unfortunately not, but we have some listeners. Okay, great. And do you have anything you'd like to add to this discussion?

Katarzyna Kaliszewicz:
I think my opinion might be either controversial or very boring, depending on how you look at it. But I think we are at a point where each company should have a cybersecurity specialist, just as we have a work-safety specialist: we learn that when you see water on the floor, you should avoid it, because you will slip and fall. I think it should be the same for cybersecurity now, because we all use computers at some point. Even in industries where we would normally say, oh, they do not use computers, we now have a situation in our country, in Poland, where many more people in the medical sector are starting to use computers: the system for doctors is on computers, so nurses should also have access. And this is going to continue to spread. So I think it's simply necessary to have the awareness that more and more specialists will be needed in the field, and we should start training those people, because the shortage might shock us soon.

Janice Richardson:
Thanks, Katarzyna. Denis, do you have something to add? Sorry, there's one here.

Denis Susar:
Thank you. Very quickly: at the end of the day, it all comes down to incentives, either for people or for companies. A major company, like Amazon, cannot afford to have its online services down because of cybersecurity; the incentives are there. From the UN perspective, if we want to reach out to stakeholders, the Global Digital Compact discussions are coming up in 2024, and the WSIS+20 overall review is coming up in 2025. I'm just putting those on the table; those are the areas where your voices can be heard and where you can lobby. And this cybersecurity hub, whatever you are developing, if it's good, it will be used by governments. So again, it's back to incentives: if you put a good product out there, it will be used. But it's for all of us to improve it.

Janice Richardson:
Super, and that is an incentive.

Julia Piechna:
Yes, many important things were mentioned here, so maybe I will just focus on what comes from my professional experience. I think that what is really important, among other things, is education from the beginning: integrating cybersecurity and safety education into curricula, and maybe moving away from traditional methods in favour of unconventional, engaging methods that, as was mentioned many times here, really make young people understand the tools they are using and the mechanisms behind them. Thank you.

Janice Richardson:
Thanks, Julia. And I really love that fun idea. We have a project running in the Scandinavian countries, it's been running for about four years, where we write a scenario about internet safety for 11-to-14-year-olds, and then we find a local magician, and the magician and I work together to figure out the best tricks. Of course, the tricks in Iceland are quite different from the ones we do in Finland or Norway, but this fun element seems to be so important. I'm really glad someone mentioned it; I think it was you.

Yuki Tadobayashi:
Yeah. I think the security hub is a very interesting concept, because in our training centre we were also discussing internally that we should be collaborating with other global entities; we have some friends in the UK and France, et cetera, so more diverse collaborations are possible. But one thing about the security hub: we are all busy, so if the hub is going to be super complex, nobody will get involved. I want it to be less demanding, with less workload; it should be less binding, so that everybody at all scales, non-profits, government agencies, organisations like the IGF, can join on a non-binding basis. But it should also be a network of trust, because such a hub could be abused by criminals. So a network of trust is very important for this project to be successful. Thank you.

Maciej Groń:
I have a message for industry and the business sector, and this is also a goal for us, because I want to say that the education sector is open for cooperation. And I mean not only the IT business, but also motor companies, the fashion industry, the food industry, and so on. I deeply believe that next year we will find more people from industry here, thinking once again that you are with us. So let's think out of the box.

Anna Rywczyńska:
Yes, and I would continue with this point about stress that came from Denis, and connect it with diversity. Some time ago I took part in a panel about women in IT, and the women participating in that panel said that this kind of stress, and the battlefield atmosphere that is somehow associated with work in cybersecurity, discourages both men and women from going into these careers. So maybe it is extremely important to repeat that you don't have to work on a battlefield to be involved in cybersecurity, that there are so many different competences involved, as Janice said at the beginning, and that you don't really have to be in the gaming area that some people perceive cybersecurity to be. And the second thing, also about diversity: I think it's extremely important to open parents' eyes, so that they can motivate their girls, their daughters, to see their future in this field. Because right now you always see boys in the extra coding classes, while the girls go to dancing classes. So it needs parental awareness.

Hikohiro Lin :
All right, thank you for everything. I actually learned a lot from you all: your insights, your comments, and information that was new to me. Next week, the principal of my daughter's high school has asked me to talk about cybersecurity jobs, so I actually have to do this, right after we've discussed it in this kind of session. I will tell them that cybersecurity is a very fun job, a very important job, a job to be proud of, and a challenging job; it's going to be an interesting job forever. I just want them to be more interested in cybersecurity jobs. I also think we may need something in the cybersecurity hub ecosystem so that we can contribute more to the SDGs. We need to keep thinking, and keep this kind of conversation going in the near future.

Joao Falcão:
Well, I will spend my last words on the importance of culture as a facilitator: how it helps create the imaginary, how it moulds people to like cybersecurity. For that, I would like to recall two films, Hackers and WarGames, which were very important in getting the 80s generation to see hackers as these very cool people hacking things, and in creating the cybersecurity imaginary. Also, when we talk about this, we need to think of the gender perspective, because I opened a position on my team to recruit a woman, and I couldn't: it stayed open for one month, and when I gave up, I hired a man the next day. So this is really a cultural thing, and we really need to work on it.

Ismalia Jawara:
Yes, so my last input relates to what he said, but more importantly to having more women participating in this industry. There is something we are often unaware of, a form of cultural conditioning. For example, when I take my niece and nephew to the shop to get toys for Christmas, you go to the boys' section and you get amazing stuff, so many cool toys; you go to the girls' section and you get some unicorn. I'm not saying unicorns are not nice, but my point is that, culturally, it's as if we are grooming girls not to be innovative and not to get involved in these creative industries. That being the case, I think companies and businesses should also give priority to women to participate in cybersecurity. And my advice for my brother here: I experienced something similar in my office, but what I did, since it's a staged, step-by-step process, rather than looking for an expert in reverse engineering or someone with years of experience, was to go for GRC personnel. When I was not able to get someone, I took on a lady who had studied law, gave her the opportunity, and she did tremendously well. So I think the bar, or how we get them on board, needs to be looked at: if we approach it looking only for experts, we are going to chase them away. Thank you.

Janice Richardson:
And thank you. I'm wondering if the audience has any last word they'd like to add; it's really important that everyone feels they've had their say. Anyone over on that side want to take the microphone for a last thought about this hub? Because we're going to be calling on all of you. Yes, someone behind me? We're going to be calling on all of you to really support us in this; it seems so important. Would you like to? You would? There's a microphone somewhere. No? No one further? Well, I hope that you're going to join us in this enterprise. After what we learned from the report, we are determined that we really have to move forward, and we have to move forward together. We talked about safety and security, but in French they are the same word, and possibly in other languages also. No one can be safe if we don't have this security. I hope we're going to continue working together. If you have any ideas, please come up and tell us, because we're chasing ideas now as we work on this strategic plan. Thank you very much for joining the session. Thank you, speakers. Thank you, Larry. We'll be meeting in the corridors and talking further about this, so thanks for a very interactive session. Thank you. We'll be in touch. Should we go and join them somewhere? Yes, we have 10 minutes.

(Unattributed speaker): speech speed 122 words per minute; speech length 3 words; speech time 1 sec
Denis Susar: speech speed 152 words per minute; speech length 773 words; speech time 305 secs
Anna Rywczyńska: speech speed 171 words per minute; speech length 783 words; speech time 274 secs
Audience: speech speed 167 words per minute; speech length 778 words; speech time 280 secs
Hikohiro Lin: speech speed 163 words per minute; speech length 1301 words; speech time 478 secs
Ismalia Jawara: speech speed 156 words per minute; speech length 1080 words; speech time 414 secs
Janice Richardson: speech speed 168 words per minute; speech length 2944 words; speech time 1054 secs
Joao Falcão: speech speed 119 words per minute; speech length 774 words; speech time 389 secs
Julia Piechna: speech speed 121 words per minute; speech length 736 words; speech time 366 secs
Katarzyna Kaliszewicz: speech speed 151 words per minute; speech length 1122 words; speech time 445 secs
Larry, CEO of ConnectSafely: speech speed 220 words per minute; speech length 2394 words; speech time 652 secs
Maciej Groń: speech speed 214 words per minute; speech length 596 words; speech time 167 secs
Raul Echeverria: speech speed 145 words per minute; speech length 1145 words; speech time 475 secs
Wout de Natris: speech speed 275 words per minute; speech length 940 words; speech time 205 secs
Yuki Tadobayashi: speech speed 175 words per minute; speech length 1035 words; speech time 356 secs

Talk with Metaverse residents – a new identity and diversity | IGF 2023



Full session report

Virtual Girl Nem

The Metaverse represents a transformative innovation within the realm of Virtual Reality, fostering environments conducive to the development of sustainable infrastructure and communities (SDG 9 and 11). It heralds a new norm of living: a universe consciously designed to enhance human existence. Statistics reveal a surge in daily user engagement, with a notable proportion of users spending more than three hours per session, indicative of widespread global adoption of this digital sphere.

Furthermore, the Metaverse significantly influences human communication mechanics, fostering unconventional social interactions that indirectly contribute to quality education and overall well-being (SDG 4 and 3). By eliminating traditional filters such as age, gender, and titles, the Metaverse cultivates relationships based on personalities over visuals, promoting a healthier form of digital sociability.

In the Metaverse, adopting a virtual identity effects a paradigm shift in personal representation, aiding in reducing socio-cultural inequalities and bolstering gender equity (SDG 10 and 5). Data suggests a trend, showing most users prefer avatars with feminine characteristics, hinting at an increased demand for feminine representation in digital culture.

Alongside this preference for feminine avatars, a reduction in the gender gap in virtual identities is apparent. This is primarily due to an increase in the availability of masculine avatars, offering users wider choices in customising their virtual selves. Changing avatar gender is gaining popularity for reasons beyond the available supply, encompassing fashion and the aspiration for more effective communication.

Consider the case of ‘Virtual Girl Nem’, who has spent three to five years in the Metaverse. Her ability to switch seamlessly between the virtual and physical worlds simply by donning VR goggles exemplifies human adaptability in this new digital era, much as we already navigate varied environments both professionally and personally.

Within the Metaverse, effective user communication has been identified as vital for success, a challenge currently faced by numerous tech giants. The service sector should focus on establishing sound communication with users to achieve positive results. The Metaverse’s future invites considerable optimism, primarily due to its potential to offer myriad business opportunities and to redefine human communication; prioritising effective user service interactions is fundamental for it to flourish.

In summation, the Metaverse contributes to various Sustainable Development Goals by fostering innovation, promoting equality, and revolutionising communication dynamics. With a growing user base and evolving identities, the potential for the Metaverse is vast. However, to truly achieve its potential, an efficient communication approach, crucial to engaging users, is key to driving its progressive development.

Liudmila Bredikhina

The review stands as a testament to the transformative impact of the Metaverse, most notably underlining the pivotal role avatars play as instruments of self-expression and communication. Equipped with these avatars, users transcend their physical-world identities, uniquely manifesting different aspects of their self-image and conforming to or challenging prevailing gender norms. This premise supports SDG 9: Industry, Innovation and Infrastructure, as the Metaverse serves as a creative conduit for its users.

Significant is the commonplace trend of gender swapping within the Metaverse. This practice provides individuals with a fresh platform to experience and embody alternate gender identities that might be beyond their grasp in the physical realm. The statistics presented indicate that over 70% of both male and female Metaverse users adopt feminine personas. This remarkable progression promotes a broader comprehension of gender, advancing the goals of SDG 5: Gender Equality and SDG 10: Reduced Inequalities.

Embedded within the insights is the revelation of the unfortunate ubiquity of harassment within the Metaverse, with women and minorities predominantly suffering unwarranted behaviour. This distressing element poses a threat to SDG 10: Reduced Inequalities and SDG 16: Peace, Justice and Strong Institutions, signalling that urgent reforms are needed to uphold user safety.

Responding to these safety anxieties, users have voiced a fervent need for safer yet less restrictive experiences within the Metaverse. Their requests include greater flexibility in self-defence mechanisms, more rigorous moderation and an adjustable safety framework. They also expressed trepidation about overarching restrictions that could curtail their freedom within the Metaverse.

Interestingly, the analysis places heavy emphasis on the motivations of the companies involved as a factor determining the success of the Metaverse. It suggests that placing ethics at the heart of corporate goals ensures optimal utilisation and effective marketisation within the Metaverse, whilst minimising exploitative practices.

Finally, there is substantial advocacy towards embedding rules within the Metaverse that champion diversity and ethical wellbeing, aligning with SDG 10: Reduced Inequalities and SDG 5: Gender Equality. This insight insists on the significance of such directives to counteract the potential emergence of monopolies, maintaining ethical equilibrium. This sentiment certainly parallels the understanding that behind every avatar in the Metaverse reside genuine individuals deserving courteous treatment.

In conclusion, the Metaverse renders a creative space teeming with opportunities for self-expression, communication and exploration. However, it equally flags challenges, from harassment to monopolistic threats, all of which require staunch mitigation strategies to construct an ethical, diverse, and equitable virtual world.

Moderator

The dialogue reveals multiple facets of the emerging Metaverse; a virtual realm which creates opportunities for personal expression, introduces unique challenges, and might potentially offer cognitive health benefits.

An intriguing insight is that individuals’ virtual identities within the Metaverse are frequently influenced by their cultural backgrounds. Crucially, this online platform provides a space for people to explore alternate gender identities, enabling them to challenge or uphold dominant gender norms. The Metaverse is thus a vehicle for users to express elements of their identities they might find hard to communicate in the physical world.

However, no digital universe is devoid of real-world problems. The discussion highlights a deeply troubling issue: harassment within the Metaverse. Alarmingly, women and minorities are particularly targeted for such experiences. More concerning is the notion that merely possessing a feminine avatar can invite more harassment, hinting at a concerning digital perpetuation of gender-based discrimination.

Complementing these insights, data from Japan demonstrates how economic stagnation and cultural expectations are spurring some men to assume feminine virtual avatars, as a means of escaping societal pressure. The Metaverse thus emerges as an instrument for challenging and navigating around pre-established socio-cultural norms.

Nonetheless, while grappling with the challenges the Metaverse presents, users are not necessarily advocating restrictive legislation. Most prefer superior moderating processes, improved reporting systems, and adaptable safety measures as opposed to stringent regulation that could potentially squash their virtual freedom and enjoyment.

Notwithstanding the challenges and inequalities, a positive aspect of this virtual experience comes to light. Adjusting between the Metaverse and physical reality is feasible. The transitional process is likened to mood shifts experienced when moving between formal and relaxed states.

Extending this concept, it is proposed that the cognitive effort needed to adapt to the Metaverse could serve a health-promoting role. The brain stimulation incurred from shifting realities might act as a form of anti-ageing training, supporting longevity and cognitive health.

In conclusion, this discourse presents a nuanced perspective of the Metaverse. It emphasises its potential as a platform for alternative identity performance and the troubling prevalence of harassment within it. There’s an urgent need for enhanced safety measures, albeit with a resistance to restrictive legislation. The evolving nature of the Metaverse continues to inspire questions and curiosity, particularly regarding the cognitive health benefits of engaging within this emerging virtual world.

Audience

The summary primarily probes into two predominant themes related to the virtual realm, with specific emphasis on the Metaverse.

One primary focal point is the possible fatigue and confusion triggered in the human brain by exposure to virtual reality. This concern centres on Nem-san, who, despite existing within the virtual universe, may still be susceptible to physical exhaustion because her organic body remains rooted in palpable reality. The sentiment carries a palpable undercurrent of worry, with a spectator voicing apprehension that the stark distinction between tangible and virtual realities could result in brain confusion, especially when both realities intermingle.

In contrast, the other focus orbits around curiosity and intrigue regarding commercial practices within the Metaverse. This curiosity hails from inquiries about the function of firms within this digital expanse and their strategic marketing operations. The sentiment resonates with neutrality, emerging from a quest to comprehend the mechanisms and dynamics of the Metaverse. The breadth of these inquiries broadens to encompass speculations about monetary exchanges within the Metaverse. This curiosity is solidified by the acknowledgment that the speakers have noted businesses inaugurating marketing ventures within the virtual dimension, and have voiced a keenness to discern more about the role and potential advantages for users and businesses alike.

The extensive dissection of these themes provides insight into human adaptability in relation to rising virtual environments such as the Metaverse. Moreover, the analysis underscores the escalating significance and potential of such digital spaces for commercial purposes, illustrating the importance of comprehending their dynamics and functionalities. Nevertheless, the analysis also highlights potential challenges, particularly concerning physical and mental well-being, hence presenting areas requiring further inquiry and research.

Session transcript

Moderator:
Kyoto International Conference Center, as well as through Zoom remote participation. This session is the Day Zero event titled Talk with Metaverse Residents: New Identity and Diversity. Basically it’s a presentation session, but really quite different from a normal session, because it features Metaverse residents, and the main part is the presentation from inside the Metaverse. Before we go into the details, let me make a round of self-introductions. My name is Akinori Maemura, from the Japan Network Information Center, JPNIC, the organizer and proposer of this session, and I have another organizer; please introduce yourself. So I’m Keisuke Kaneyasu from NEC Corporation. Thank you for coming today. Yeah, thank you very much. And then we have the two presenters. One is Liudmila Bredikhina, Mila; please introduce yourself. Hi, thank you for inviting me here today. My name is Liudmila, and I’m a PhD candidate at the University of Malta in Europe. Thank you. Thank you, Mila. And then, Nem, it’s your turn, please. Hi, this is Virtual Girl Nem; call me Nem. I am a Metaverse culture evangelist. I’m spending much time in the Metaverse, and I’m working to introduce the culture of the Metaverse. I’m happy to talk with you today. Thank you. Great, Nem, thank you very much. So before getting into Nem’s presentation, let me just introduce the history of this session, because it was first held at the Japan Internet Governance Forum 2022, also as a Day Zero session, by the way. It was really interesting for us to experience the Metaverse residents live, so I was quite convinced that this kind of session was needed at the global IGF in Kyoto. And here I am. Thank you very much to the MAG and the IGF Secretariat for accepting this proposal and making it real. With that, before I go on too long, I will hand over the microphone to Nem for her presentation. Nem, please. Thanks. Thank you, Maemura-san, for the super kind introduction. And thank you, everyone, for coming to the Internet Governance Forum today. We are very, very happy to talk with you. We, Nem and Mila, work as a research unit investigating the impact of the Metaverse and VTubers on humanity. Today, I will go first and talk about the Metaverse and identity. And after that, Mila is going to talk about

Virtual Girl Nem:
diversity. My name is Nem. Do you know that millions of people around the world are already spending their lives in the Metaverse, this virtual world? I am one of them. As a Metaverse resident, I spend much time in the Metaverse every day and convey its culture to you, the people in the physical world. I introduce the culture as a virtual YouTuber and on TV shows, and I have also published a book, which received a grand prize at the Japanese book awards. Today, I will first do a demonstration of my Metaverse life, and after that, I will talk about Metaverse identity and communication. Okay, let’s go on to the first one: a demonstration of Metaverse life. To begin, let me explain my equipment, the kind of equipment Metaverse people use. These are VR goggles. I think some of you already use them, maybe. This is a device that allows me to enter the first-person perspective of this avatar and fully dive into the world of the Internet with my entire body. Currently, my physical body looks something like this, wearing this big headset. By the way, what is this big sensor, this one? It is really troublesome when I am drinking a beverage. What do you think it is? This is a facial tracking sensor. It scans the facial movements of my real body and applies them to this avatar. Look at my eye. Winking. Facial movement. Everything. By using this, my avatar’s face moves like my real face. I am also attaching many devices to my real body. This is finger tracking: my finger movements are tracked in real time and applied to this avatar. And full-body tracking: not only the headset and controllers, I am attaching many sensors like this to my stomach and legs. This allows me to move my avatar’s feet, and I can even stand up. In my real room, my gaming chair is around here, so I can sit down here. Yeah, this is the equipment. By the way, why do you think we Metaverse people use this kind of tracking mechanism? The reason is to gain a sense of unity with the avatar, feeling the avatar’s body as if it were my own physical body. As a result, the experience in the Metaverse becomes super, super realistic. In addition to the VR equipment, I am using a voice changer. Right now I am speaking with my voice converted to a girl’s voice. So, with my avatar, my voice, everything converted, it feels like I’ve been reborn as a completely different person in the Metaverse. That is the VR equipment. Next, let me talk about the VR world. Currently, I am here. This is my study room. In VR, well, I live with a sense of freedom, surrounded by great nature. My physical body is in a very, very small room, but I never feel confined, because I feel like I’m really in this Metaverse world. Now, I am opening up a vast space just for myself, spanning tens of kilometers. In the Metaverse, you can generate space infinitely and for free. And you can also use teleportation. In the Metaverse, the concept of movement doesn’t exist. You can teleport to a friend at any time or summon a friend to your place. And you can also freely set access restrictions for this world. Unlike in physical reality, you can freely set access restrictions on spaces. Usually, I allow my friends to enter my world freely, and I will spend time with them, talking with each other, watching TV shows together, or something like that.
Conversely, well, I can also create a space that only people who have paid can access, and hold paid events. For example, this is my music live event. Yeah, this is me. And many people are coming. In my music live events, I have gathered up to 1.2 thousand people at most. If you tried to hold such a big event in physical reality, it would require big costs, such as venue fees and many staff members. But in the Metaverse, you can operate it for free, just by yourself. This is really fantastic. Also, the Metaverse accelerates creativity. Within the VR space, you can work on programming, modeling, everything, without removing your VR goggles. Unlike on a PC screen, creating is very, very intuitive. I, G, F. Anyway, this is already working as a 3D model. Like this, you can use this VR world to be more creative. This is really wonderful. After that, let me talk about my avatar. In the Metaverse, you can freely design your avatar and live in any form you like. My avatar, for example, is unique in this world; it was designed only for me. It was designed by my favorite famous manga artist, based on this illustration he drew for me, and a modeler created the 3D model. It’s similar to having your hair styled or ordering tailor-made clothes in physical reality, but in the Metaverse, everything can be edited, including your hair, face, everything. Also, you can quickly change your avatar. I have so many avatars. These avatars were created by other artists. Let’s try. Now, my soul has moved from this avatar to this avatar. This is me. Now, this is me. Switching avatars changes my mood. By wearing this avatar, I can be more cute. And, yeah, this is me. This is also my avatar, yay. When I am wearing this one, I am more, how can I say, more comical. Yeah, the avatar changes my feelings. It’s very interesting. Okay, well, now, oh, now, this is me. Yoisho (heave-ho), let me put the avatar away. Shoto. Yeah, this is the secret of my avatar. By the way, how do you think I am watching all of you right now? The answer is a virtual display, this one. I am watching you through this window. And not only this one, I can open many windows around my body. Oh, and this is my ‘cunning paper’, my cheat sheet. Anyway, it’s like having several huge, weightless smartphones floating around my body. This is super, super useful. And you can make them invisible to others. Yeah, that was the demonstration of my Metaverse life. Let’s go back to my study room. Okay. Have you started to get an image of Metaverse life? From now on, let’s talk about the Metaverse. What is the Metaverse? The word comes from the combination of ‘meta’, meaning beyond, and ‘universe’. Yes, it means a world surpassing physical reality. Now, the word is getting attention as the next generation of the Internet. The word originally comes from a super famous SF novel named Snow Crash. In the novel, the Metaverse is a world that people can immerse themselves in using VR goggles and where people can communicate. Yes, what I am showing you is something like the Metaverse that appears in Snow Crash. Then, what actually is the detailed definition of the Metaverse? Actually, the definition is not yet so clear, but in my book, I define it as a virtual space that meets the following seven criteria.
Spatiality, self-identity, massive simultaneous connectivity, creativity, economic availability, accessibility, and immersiveness. I won’t explain the details of them, but simply put, it is not a game; it is a virtual world where you can live your life. That is the Metaverse. Okay. Metaverse worlds that meet those seven criteria already exist: that is social VR. Famous social VR platforms are VRChat and NeosVR. Oh, NeosVR is what I am using now. And Cluster, VirtualCast, and many other services are out there. They each have pros and cons, and they are not perfect; they are under heavy development. Yes. And then, how many people are living in the Metaverse world? Currently, the population is increasing dynamically. It has increased by more than five times in the last three years. The first reason is COVID: because of the COVID situation, people wanted online communication, online but body-to-body communication. And the second reason is the Quest 2. Previously, VR goggles were super expensive, but the Quest 2 can do everything and is much cheaper. As a result, the population is currently estimated at between several million and around 10 million people worldwide, and it is still increasing, even right now. Then how are people spending their time? That is a big question. To review the lifestyle of Metaverse people, we conducted a large-scale survey. In 2021, we collected 1,200 responses worldwide, and this year, in 2023, the number reached 2,000. Thank you. And this is the service usage. VRChat is the number one most popular service worldwide, and the other services are used depending on the country. For example, VirtualCast and Cluster are popular in Japan, and NeosVR is popular in Europe and America. Yeah, it’s interesting; the trend differs depending on the region. And age: how old are the Metaverse people? People of every age are living in the Metaverse, but more than half are in their 20s. Yes, it’s a little young. And the play frequency: how frequently do we put on VR goggles and come to this VR world? It’s very shocking data. Almost half of them use the Metaverse almost every day. Also, how long do they spend at one time? More than half of the people spend more than three hours in one session in one day. It’s so long. Yeah, I’m also like that. And what actually is the purpose of using the Metaverse? This is also interesting. There are various reasons: socializing with friends, walking around the worlds, playing games, participating in events, broadcasting, making avatars, creating something, everything. Yes, in the Metaverse, you can do everything. Today, I think the revolutionary points of the Metaverse are these three: identity, communication, and economy. The Metaverse will result in a big revolution for humanity. Today, let me talk more about identity. In the Metaverse world, you can freely design your identity: name, avatar, voice, everything. Identity transforms from something you receive into something you design: exploring new selves, living life as your desired self, or switching between multiple selves. I also switch selves between the real world and the Metaverse world. The Metaverse allows the soul to be perceived in 3D, enabling an awakening to aspects of yourself that could not be recognized in the physical reality world. One interesting fact is the difference between physical sexuality and avatar appearance. This is the sexuality in physical reality of the real users; this is the avatar appearance among male users, and this is for female users.
Both male and female users use feminine avatars; it is almost 80%. This is very interesting data, I think. Do you think this is only about Japan? No. Even in Europe and America, the trend is the same: feminine avatars are very popular. Why do you think that is? The reasons are fashion and communication. Users of feminine avatars simply prefer the visual appearance of the avatar, and communication is the second big reason. It’s very interesting. Beyond avatar appearance, let’s go on to avatar type. There are many various types of avatars in the Metaverse world. We categorize these avatars into seven different categories: human, semi-humanoid, robot, animal, plant, monster, and so on. Which one is used most? The result is the semi-humanoid. In the physical reality world, the normal humanoid type is very popular, almost 100% I think, but in the Metaverse world, more fantasy-style avatars are very popular. And the usage differs a little depending on the region. For example, in the United States, animal-type avatars are a little more popular, and in Europe, robot-type avatars are popular, like this. The trend differs depending on the region. Finally, let me talk about communication. In the Metaverse world, you can design yourself and how you communicate. We investigated communication distance, skinship (physical touch), and love. In the Metaverse world, filters such as age, gender, and titles are eliminated, accelerating more essential communication. This is the most wonderful point of Metaverse communication. Yes, this is the distance: 75% of people say that in the Metaverse world, the avatar-to-avatar distance gets closer. And skinship has extremely increased: 74% of people say they engage in skinship. Do you do skinship in the physical reality world? I think this is very interesting data. And love, this is also interesting. This is a world where you can spend your life, so love is also popular. Love exists, but the trend is very different. For example, in the Metaverse world, people fall in love because of personality, not visuals. When you ask a similar question in the physical reality world, the number one answer at the beginning of love is basically the visual, but in the Metaverse, the trend of love has changed. One more interesting thing: when you fall in love in the Metaverse, is the biological sexuality of your real lover important to you? 75% of people say it’s not important. It is very different from love in the physical reality world. Yes, communication also changes in the Metaverse world. Conclusion: for the past four million years, humanity has lived in physical reality. Existing IT technology has been great, but it has only served to make life in physical reality more convenient. The Metaverse, however, is a fundamentally new, designed universe where various kinds of magic are possible, allowing humans to live in a more humane way. Many people already live there, creating new, unique cultures with more freedom. The evolution of humanity has already begun. Thank you. I hope this presentation helps you understand the Metaverse better. And now, let me pass over to my best friend, Mila. Thank you very much, Nem.

Moderator:
a very good presentation; maybe give her a round of applause. Thank you. Okay, Mila, your turn. I think you can spend a full 20 minutes, and then we will still have Q&A time for 10 minutes. So if you have any question about Nem’s part, you can definitely ask it in the last part; I will definitely secure some time for the Q&A. With that, thank you; Mila, the floor is yours, please.

Liudmila Bredikhina:
Thank you. Let me share my presentation. Can you see it? Is it all good? Yes. Yes. Thank you, Nem, so much for your amazing presentation and for your amazing introduction to the Metaverse. I think it encapsulated really well the different aspects of the virtual world. My talk will focus more on gender and diversity, especially from an academic perspective. So let me introduce myself a little. I obtained my master’s degree in Asian Studies from Geneva University back in 2021, and my master’s thesis focused on researching Japanese men who use feminine-presenting characters in virtual worlds as a form of gender-fluid practice rooted in traditional Japanese arts. After completing my master’s degree, I received the Prix Genre, the Gender Prize in English, from Geneva University in 2022. Since 2019, I have published a dozen peer-reviewed publications in international journals focusing on the manners in which technology impacts self-expression, users’ gender identity and social interactions. I’ve also had the pleasure of sharing my findings at numerous tech-related events such as SIGGRAPH, Laval Virtual, IEEE VR and VR Days Europe. And as of this year, I have had the opportunity to lecture at Japanese universities such as Keio, Kindai and Kyoto University. Now, as a PhD student since 2023, I continue the research I started in 2019, but focus more broadly on gender practices among Japanese users in the metaverse. My theoretical perspective comes from feminist and critical men’s studies, which inform my research and inscribe it in a global context. I conduct participant observation in the metaverse and online spaces; what I do, basically, is virtual ethnography. And I find it very informative to go beyond the virtual/physical world barrier and examine how changes on and beyond the screen can impact who we are, especially in terms of gender. So the theme of today’s talk is gender and diversity in the metaverse. For a while now, the targets set by the UN Sustainable Development Goals, as you can see in the small picture on the left-hand side, have been addressing, among other issues, diversity, safety and inclusion to rebuild the physical world. However, ensuring the representation of people’s diversity, ethical behaviour, and a sense of community and belonging should not be limited to the physical world. For example, as the professor of journalism and media studies Erkan Saka notes in the book The Future of Digital Communication, there have been debates about the lack of diversity among developers and decision makers in tech-related industries. In a recent article about designing the metaverse, Matteo Zallio and John Clarkson propose ten principles for designing a better metaverse, which you can see on this page in the big black picture. In that article, the authors outline several ideas for social equity and inclusion, and among the ten principles for constructing a good and inclusive metaverse, they specifically highlight the importance of the empowerment of diversity through self-expression. I want to focus specifically on the gender self-expression part of the metaverse. So today we have three parts. I’ll start with some very brief academic foundations; I will not make it too long, but I think it’s good to build on existing research. I will then present several of my key studies: one from my master’s and two from our research with Nem,
one of which you’ve already heard a little bit about. And then I will close with some potential issues stemming from self-expression and what would be good to consider if we want a safe experience for everyone. So let’s start by building on a strong foundation of academic research. First, an avatar’s appearance entails specific interactions, right? For example, if you use a very small, cutesy character, people will probably be very welcoming and warm towards you; if you use a very strong, big character, people might interact with you differently. Just as in the physical world, unfortunately, our appearance might determine certain interactions, and users know that they are judged on their looks. The second aspect is that the visual characteristics we choose can impact, for instance, a user’s confidence and behaviours. Once again, depending on the character you incarnate in the virtual world, you might act differently. It might give you confidence, it might give you self-fulfilment, and it might also impact the way you interact, especially if you use a character of a different gender representation than the one you identify as in the physical world. And finally, what is most relevant for today’s talk: avatars can be seen as tools that enhance the individual by making them virtual, empowering them to become who they want to be and create the person they want to be, as Nem has mentioned in her book. An interesting phenomenon that Nem briefly touched upon, and that has also been highlighted in quite a number of Western academic articles, occurring in virtual environments and gaming spaces, is what is called gender swapping. Gender swapping would be, for example, me, as a woman, using an avatar that has a very masculine appearance; it basically means using a virtual character of a gender identity that is quite distinct from the one in the physical world. Through those virtual representations, you can try out, play with, or even try to identify with a different gender identity. According to researchers, for example, male users who use feminine characters in virtual games can deploy those characters tactically, to enhance their appearance or even for in-game benefits, so there can be a strategic aspect to it. But there is also a body of literature highlighting that users swap their gender appearance in virtual worlds because they want to try out a different gender identity or because it provides them with something they cannot obtain in the physical world. So the virtual body should not be underestimated as just an avatar anybody can incarnate; sometimes there are greater implications when users decide to choose this or that appearance. However, what happens in the digital space or the metaverse is also not limited to the confines of the screen. Giving shape to a so-called virtual self is not a naive activity detached from the physical world; it builds on preexisting online and offline phenomena. In other words, the virtual self is not separated from the physical-world self, as the virtual identity is a self transformed from the physical-world self. For example, it is partly on the basis of your sociocultural background that you will choose this or that representation in virtual reality.
As such, it is essential to consider the space outside of the metaverse, given that existing gender roles and discourses influence the choice of virtual identity, as users bring perceptions and meanings shaped by the physical-world setting. This implies that through avatars, individuals can support and/or subvert hegemonic gender norms of the physical world, making the virtual realm a crucial arena for gender and identity research. My suggestion here is that the virtual space and the physical world influence one another and are never really detached as binary opposites. I hypothesise that the gender norms of the physical world can lead certain users to desire to experiment with their gender identity in the metaverse, encouraging gender diversity and representation through virtual characters. While not all users might desire to swap genders in the metaverse, we cannot overlook the fact that the virtual realm allows certain people to experience an alternative gender identity and indulge in self-expression. So I want to move on to my case studies, now that we have seen that a great amount of research has been done on gender in the virtual realm; it is not something new, it has been going on since the beginning of the internet, let’s say. I want to share with you a graph from last year’s project titled Harassment in the Metaverse, conducted together with Nem. This large-scale quantitative study among Japanese, European and US users focused on harassment issues in the Metaverse. This graph might remind you of the one previously shown by Nem, but that was from another study that we did in 2021. Very briefly: to be eligible to participate in the study, a user had to have used social VR with an HMD, a head-mounted display system, at least five times since the previous year. Users could answer the Google form from September 5th to 24th, 2022, and we collected 876 answers from users worldwide. I’ll not go into details, but I encourage you to consult the full report, which is available online for free; it’s titled Harassment in the Metaverse, and you have the link written below. Just as a very brief note, we did not inquire about the users’ gender, as our study did not aim to investigate whether users’ current physical-world gender identity matched their sex assigned at birth. So, to avoid confusion, here you will see written ‘biological sex of VR users’ and ‘avatar appearance’, because we wanted to use terms that would be the most inclusive and most understandable for the majority of users. As you can see in this graph, very similar to what Nem has shown, the majority of users are still male, but more than 70% use feminine appearances, be they male or female users. We can also see that about 13% of female users play with masculine-presenting avatars. Overall, we can see that gender swapping, especially amongst male users, is quite common. We actually asked a similar question in 2021, as Nem touched upon. As you can see, the numbers are pretty similar to the 2022 report, with the difference being that there were more female users in 2022 compared to 2021. What is quite interesting is that in the 2021 report, we asked users their reasons for engaging with avatars that had a different gender appearance. This will be just a reiteration of what Nem said before.
What I want you to pay attention to is that, while for many the choice was motivated simply by the appearance, for about a quarter of overall users, deploying an avatar of a different gender identity enabled them to express themselves better and to communicate with others. That implies that through the avatar’s appearance, they achieved a certain self-expression that was not attainable to them in the physical world. As for my own findings, collected through numerous surveys, in-depth interviews and participant observation since 2019, I can summarise them by saying that for my informants, Japanese men who use cute-looking avatars in virtual and digital spaces, becoming those characters enabled them to temporarily free themselves from the breadwinner model of manhood that still prevails in contemporary Japanese society. Of course, not all men claim this, but the ones I focus on did. So we can see that the virtual character enables something that people cannot attain in the physical world due to sociocultural norms. During my master’s thesis data collection, out of 51 participants, 18 expressed that being a man is difficult in Japan. For instance, users told me that being masculine in Japanese society is a heavy responsibility, as you can see in the testimonies here, and that most men cannot fulfil the requirements. One informant even enumerated the difficulties. He said, for example: if you do not work hard, if you do not get ahead, if you do not earn money, many women will not look at you, and that is difficult. Another user told me the image of men working to sustain their families is still very strong in Japan, and with that image, a man’s life becomes work; as a result, it is easy to give an impression of failure. Others also mentioned that adult males are oppressed, mainly because of old-fashioned ideas still present in Japan, and that it is considered quite normal for men and women to have to act according to general masculine and feminine standards, conforming to the idea that one’s gender identity should be the sex assigned at birth. In reaction to these everyday difficulties, the participants I interviewed turned towards those cute-looking avatars to get away from the expectations and free themselves from being male, because they were tired of sociocultural expectations and because they couldn’t express themselves as they wanted to. The difficulties presented above partially stem from the prolonged socioeconomic stagnation that we see more or less everywhere in the world right now. Several academics have highlighted, as I show on the left-hand side of this graph, that since the 90s there has been economic stagnation, which has resulted in employment difficulties amongst men and women, and life has become more precarious overall. What I want to note, and what is quite interesting, is that Takayuki Kiyota, a writer who has collected the love stories of more than 1,200 men and women and published on the theme of love and gender, says that men experience difficulties in maintaining self-sufficiency and self-esteem and in seeking approval and admiration from others, leading to difficulties in everyday masculinities. This is not to paint a negative image of the physical world, but to highlight that the issues my informants experienced have also been noted by academics around the globe. And while vulnerability might seem to be a fundamental human right, many of my
informants, the men I talked with, explained that, as men, they felt they did not have access to vulnerability or were not allowed to be vulnerable in their daily lives. According to researchers such as Ida Yumiko, by deploying what can be called feminine aesthetics, men distance themselves from dominant masculinity and strategically perform their gender identities. For example, my informants connect online in their free time and, consciously or unconsciously, challenge the daily gender expectations they are discontented with by taking on other gender identities in the metaverse. The picture I paint here is one of physical-world socioeconomic conditions and sociocultural gender norms influencing users’ gender behaviour online and in virtual environments. Of course, everyone’s story is different; everyone uses avatars in different ways and for different ends, and that’s what makes the metaverse so diverse. But to live in a more gender-diverse and inclusive society, we have to pay attention to, listen to, and acknowledge the variety of expressions in the metaverse, going beyond the avatar’s appearance. I also believe that achieving gender diversity in the metaverse is not enough; we must also make it a safe place for users to express themselves or to role-play characters. On this page, I’m referring back to the large-scale quantitative study Harassment in the Metaverse that I mentioned previously. As we can see from this chart, harassment in the metaverse is quite significant: quite a lot of users from Japan, North America and Europe, whether male or female, have experienced it. Our findings regarding harassment were similar to those of other scholars: women and minorities tended to experience more unwanted behaviour. Our data also demonstrated that using a feminine avatar, be it because you are a woman or because you are in a virtual character with feminine representation, tended to be a reason for harassment. Also, identity-specific slurs and physical attacks were quite common against non-cisgender identities. These findings demonstrate that the identity-exploration aspects of the embodied avatar can be a kind of double-edged sword. In other words, while the Metaverse provides users with a space to play and experiment with their gender identities, some users are also harassed for what they do. In the meantime, I want to discuss what we can do to make everyone’s experience better. As we can see in this chart, most users do not want limitations imposed by legislation. Many also believe that platform guidelines are sufficient, and the idea of excessive restrictions to prevent harassment is unappealing. So rather than calling for legislation, the people we surveyed offered suggestions for platforms to make everyone’s experience safer, and I want to conclude this talk by presenting them. As you can see here, we summarised everything into four main requests to platforms. First, users would like to see more flexible tools so that users and the community can defend themselves; for example, tools to hide users that friends have blocked, or a safety zone. Second, users also mentioned wanting stricter moderation and a more robust reporting system to make it easier for moderators to go through claims and punish harassers.
Third, users wish to have adjustable safety measures and a block feature for repeat offenders, whether at the account level or through the suspension of an HMD. Finally, users expressed concern about further restrictions that might hinder their freedom or make the VR experience less enjoyable. This is also probably why the majority did not want to see any legislation implemented within the social VR space, as shown in the previous slide. I really suggest you have a look at the survey; it’s quite interesting, and I think it contains some quite valuable information on how we can make the Metaverse a better place. So thank you for having me here today. This was a very brief presentation, and I just want to conclude by saying that to improve everyone’s experience, we have to acknowledge everyone’s different manners of identifying in the virtual space, and that the virtual realm can become a space where users can express their gender and go beyond the social and economic difficulties they might experience in the physical world. Thank you. Thank you.

Moderator:
Thank you very much, Mila. And thank you very much for keeping your presentation within the allotted 20 minutes; that’s great. We will have the Q&A session for 10 minutes, until the top of the hour. The first question has already been given in the chat window, from Neo Neo-san. He says he is not so sure, but he thinks that the gap between real gender and avatar gender might exist because the supply of avatars is dominated by female ones. What do you think about this, Nem? Am I clear in the questioning?

Virtual Girl Nem:
Yeah, that is a very good question, I think. Yes, previously, the supply of masculine avatars was very limited. But recently, masculine avatars have also been increasing, so the gap is decreasing, I think. But as I explained about the reasons why people switch gender in the virtual world: half of them answer that the main reason is fashion. The supply might affect them; they might choose a feminine avatar because of the supply. But the other half answer that the reason is communication; they say they wear a feminine avatar to have better communication with others. These people do not depend on the supply. So the answer to the question is yes and no. Okay.

Moderator:
I hope this answers your question, Neo Neo-san. Thank you for that. And the floor is now open for another question. Please don’t hesitate to ask yours. This is a very good chance for you to put a question directly to Nem or Mila. Come on. Come on. Maybe from Maemura-san? Please come up to the microphone, state your name, and then ask it in English, please.

Audience:
This is a question for Nem-san. I’m Tanaka from ITU-AJ. I’m afraid, aren’t you tired? Because you are in the virtual world, but your physical body is real, so the brain may experience some confusion or something like that. Aren’t you tired every day?

Virtual Girl Nem:
It’s also a very, very good question. Well, yeah, the answer is again yes and no. In the beginning, when I started my Metaverse life, yeah, my brain was sometimes confused, but after three or four or five years, I am already completely accustomed to this life. How can I say it: switching between Metaverse life and physical reality life is already a very basic thing for me. For example, right now you are in a suit and joining the conversation, so you are in a very formal state; but when you go back home, you wear pyjamas and talk with your family. This kind of mood switching is a basic thing for physical people too; of course, the extent becomes much wider, but people can become accustomed to switching their lives. Today, I can change myself super quickly just by wearing VR goggles. Thank you. Okay, thank you, good comment. And also one more, this is a comment:

Moderator:
I expect this confusion, a kind of stress, is good training, anti-aging, a good stimulus for the brain. So I wish that in the future we will have eternal life in the Metaverse world. Thank you very much. Thank you. Thank you very much; it’s a really great comment, Tanaka-san. Thank you very much; I need to do this too, then, for the anti-aging. Any other questions? We still have five minutes. Please state your name and go ahead with your question.

Audience:
Hi, thank you. My name is Maria De Brassefer, from IFLA. Well, first of all, thanks for your presentation; it was quite interesting. I think my question is more around, well, I don’t know much about the metaverse, but I’ve heard, for example, that a lot of companies are trying to launch marketing strategies via the metaverse. And because I don’t fully understand how this works, I was wondering if Nem could tell us a bit about it. How does this usually work in the metaverse? Is everything still free, or can you already interact with money in the metaverse? Do companies already have a place in it? And how are the users in the metaverse interacting with this? Just, what is the collateral effect of all this? Thank you.

Virtual Girl Nem:
Maybe you’re concerned that some company might have super big power. That could be a future problem of the metaverse, right? Yes, yes. Yes, of course, that is a possible concern. At the moment, there is very good communication between users and companies. Making a good metaverse is something like being a god: you can set the physical form of everything in the world. Yes, the platform is a god. So I can understand your concern, but to make a useful service, communication with the actual users is very, very important, and the services that work always have very good communication. Big tech is also working on making Metaverse worlds, but their activity is not going well at the moment, because they are not communicating well with the customers. So I am very optimistic about the future of the Metaverse: if companies cannot communicate well with the customers, people won’t use the service, and the business won’t work. So I think we can keep up good communication, even in the future. What do you think, Mila?

Liudmila Bredikhina:
Yes, I think it all depends, as usual, on the companies’ motivations, whether they are exploitative or not. Hence why, like I mentioned in the beginning, the UN agenda is about promoting diversity and ethical well-being in the physical world, and we have to apply the same rules and the same concerns to the virtual world, to the Metaverse. And if we do so, yes, I think we can have a really good balance between the economy and the users without having monopolies and things like that. But it all depends, of course, on the intentions of certain companies, on the way the leaders of those companies want to market themselves and how they want to use the Metaverse. Because people in the Metaverse are people; they’re not just avatars, you know. Thank you. Thank you very much.

Moderator:
It’s almost the top of the hour, and now I need to close the session. But before that, I’d like to have a few last words from the presenters. Mila first, please. Thank you so much for listening to us. We’re really happy to have this opportunity to share our research and our recent reports with all of you, and we hope that our presentations will spark some future discussions and new possibilities to collaborate and to explore the metaverse. Thank you. Thank you, Mila. Nem? Thank you, everybody, for listening to our talk. I’m super happy, because the metaverse is great, as I explained, but at the moment it is a very, very small world. This big body, the Internet Governance Forum, has finally recognized the metaverse world, and we can talk to each other. This is a very good next step. Thank you so much. Thank you very much, Mila and Nem. Please give them a big hand for the great presentations. Thank you very much. This session is closed. Thanks. Thank you, everyone. It was fun. Thank you.

Audience: speech speed 157 words per minute; speech length 198 words; speech time 76 secs
Liudmila Bredikhina: speech speed 171 words per minute; speech length 3517 words; speech time 1231 secs
Moderator: speech speed 122 words per minute; speech length 1052 words; speech time 516 secs
Virtual Girl Nem: speech speed 111 words per minute; speech length 3097 words; speech time 1681 secs

Women IGF Summit | IGF 2023



Full session report

Baratang Miya

The analysis reveals a stark gender disparity in digital spaces, a phenomenon particularly prominent in Africa where a mere 15% of the population have proper access to the internet. Among those, a significant majority are men, indicating a deep-rooted gender gap in the digital landscape of the continent. The absence of women’s voices presents a prevalent issue in both political and digital discourse, undermining the premise of information democratisation. The lack of leadership opportunities for women in tech spaces further exacerbates this imbalance.

Progress has been observed in arenas such as the Internet Governance Forum (IGF), characterised by an emerging focus on women’s presence in leadership roles that help shape the digital platform. Despite such strides, challenges persist. A significant facet of these predicaments lies in widespread digital illiteracy and limited understanding of technology’s benefits, reflecting slow technology adoption across Africa. Alarmingly, 85% of the African population remains unaware of the ‘rights’ outlined by the IGF, primarily due to restricted access to the internet. In some countries, women’s access to the internet is extraordinarily low, at around 12%, compared to their male counterparts. Comprehensive efforts to enhance digital infrastructure, elevate literacy levels, and implement capacity-building measures are crucial to ushering more women into the digital domain.

However, not all is grim. The aftermath of the COVID-19 pandemic has seen a welcome increase in the number of women embracing the ICT sector. The shift to working from home has substantially increased female participation on the internet, letting a previously underrepresented section of society contribute to the digital conversation.

Initiatives emphasising women’s entrepreneurship are also noteworthy. They leverage digital technology to improve business operations. An exemplary programme is run by the ECA, which has successfully assisted numerous women; ambitions are set high to expand this programme to all African countries by 2025. Complementary to these efforts are strides to ensure female inclusivity and active participation in digital spaces, supported by the expected surge in Africa’s young demographic and various capacity-building endeavours.

Support for the Feminist Global Digital Compact principles is also significant. These principles encompass a myriad of provisions focused on digital rights, freedom from gender-based violence, universal digital rights, safe internet usage, safeguards against harmful surveillance and transparency in AI. This endorsement heralds a future of digital spaces that adhere to the principles of equality and safety.

Yet, significant hurdles persist. One crucial issue is that of inadequate and inconsistent electricity supply, as observed in South Africa, which directly influences consistent digital connectivity. This issue highlights the broader need for inclusivity in addressing the challenges faced by nations in the Global South in relation to digital economies. It is instrumental for global digital discourse to extensively acknowledge these challenges to develop a well-rounded global approach.

The argument for promoting women’s technological literacy carries considerable weight given their vital role in educating future generations. It is cautioned that the onset of an under-educated or technologically illiterate generation could severely compromise the future. The importance of empowerment is underscored as a fundamental element of progress. The insights gained call for an expansion of initiatives like Women in IGF beyond Africa to achieve truly global coverage. Ultimately, striving for gender equality remains pivotal in fostering a diversified, inclusive, and innovative digital future.

Mactar Seck

In Africa, despite substantial progress, wide disparities persist in the engagement of women in the tech sector and the equalisation of internet access. Women currently comprise less than 12% of the workforce in the tech field, significantly lower than the global average of 40%. The primary contributing factors involve limited internet access and the gender digital divide. To counter these issues, several African nations have initiated funding schemes to encourage and support women’s involvement in tech.

The positive increase in women joining the digital sector, spurred by changes to work dynamics during the COVID pandemic, has strengthened these efforts. This trend highlights the potential of African women to substantively contribute to the digital economy given suitable resources and opportunities.

Nonetheless, progress remains hindered, particularly for rural women who continue to face appreciable barriers to accessing the digital world. These obstructions include inadequate internet connectivity, with less than 10% of rural areas having access, and entrenched cultural practices that impede efforts to overcome the digital divide and restrict women from utilising digital technologies and gaining necessary skills.

The digital divide also extends to a significant proportion of Africa’s population, suffering from lack of access to electricity. This challenge, impacting over 50% of African inhabitants, substantially hinders digital access. However, solutions are being proposed, including efficient use of existing infrastructure and the deployment of innovative low-energy technologies, offering potential improvements to digital access.

Moreover, it’s important to recognise that not only access but safe navigation of the digital sphere remains a concern for African women. Reports indicate that 30-40% of women face online harassment, depending on the country, highlighting the need for robust protective measures such as raising awareness about cybersecurity. This will ensure women’s safety in the digital arena.

Adding to these complexities, many African women lack legal identity due to established cultural norms. This reality affects approximately 5 million people, predominantly women. This absence of official identification further marginalises these women, exacerbating the issues connected with digital exclusion and complicating the trajectory towards gender equality.

In conclusion, whilst strategic interventions and improvements in digital infrastructure have been implemented, achieving digital inclusion of women in Africa still necessitates comprehensive, concerted and context-specific actions. These initiatives should address the multifaceted challenges, from enhancing digital and electricity access to altering cultural attitudes and ensuring legal recognition. This comprehensive approach is a prerequisite to realising an inclusive digital future and, in doing so, fostering progress towards the broader Sustainable Development Goals.

Audience

This comprehensive examination primarily focuses on the intertwined issues of inadequate representation, digital disconnection, and gender disparities within the framework of the Sustainable Development Goals (SDGs). There is a pressing need for broader representation from the global majority and a more significant involvement of women in the Internet Governance Forum (IGF) and the digital economy.

Concerningly, reports and statistics on digital connectivity appear to overlook rural areas and regions suffering from disrupted electricity supply, pointing to a perceived urban bias. It’s necessary to understand that the question of consistent electricity supply is intrinsically linked to the progress of digital connectivity. Acknowledging this is a vital step towards ensuring digital inclusion for all.

The conversation also tackles the prevailing societal fear that the rise of Artificial Intelligence (AI) will lead to an increase in unemployment, particularly hitting young Africans. This stereotype overshadows the potential positive effects of AI in the workforce.

However, there are also positive aspects within this argument. It recognises achievements made by women within the technology sector. Documenting and showcasing these successes and innovations will boost much-needed visibility and acknowledgement. For initiatives such as the Global Digital Compact, it’s argued that women and young individuals should be treated as stakeholders, rather than just beneficiaries. Their voices and contributions are vital in moulding the digital landscape.

A striking observation within the analysis is the substantial digital divide in Mexico, with stark disparities in internet access and usage. Women and the older generations are disproportionately affected by this issue, with 78.1% of Mexican women reported to have trouble using the internet, primarily due to age and lack of knowledge. There’s a positive mindset emerging that suggests these demographics should be empowered through education about the internet.

In summary, this detailed analysis calls for concerted efforts to overcome representation gaps, disparities in digital connectivity, and gender inequality within the digital sphere. These measures are critical to effectuate the related SDGs to ensure that advancements in technology, AI included, promote inclusive growth and equality. Recognition of achievements, especially amongst women in technology, and the shift from mere beneficiaries to stakeholders in digital initiatives, are key steps on this journey.

Margret Nyambura

The sentiment across the analysed items suggests a strong emphasis on securing meaningful connectivity and representation for women in the digital realm, particularly in STEM careers. This assertion is supported by statements revealing a considerable ‘leaky pipeline’ effect globally, resulting in fewer women and reduced retention in STEM-related sectors. Worryingly, the data reveals that only 22% of primary schools in Africa offer reliable internet connectivity, and women are 19% less likely to utilise mobile internet compared to men.

There is a robust case for the urgent need to increase involvement and representation of women in the STEM field. It is posited that, whilst opportunities for both genders are ostensibly equal at an early age, cultural norms and deep-seated biases often guide women towards the arts.

The review also underlines the potential risk of women’s exclusion from forthcoming opportunities in emerging technologies. The sentiment expressed suggests that unless the integration and participation of women in this digital revolution are considered from early childhood, achieving meaningful gender equality in these areas could remain elusive.

Crucially, the importance of data reflecting women’s experiences in the digital economy is emphasised, supporting fair decision-making. In its absence, there is an elevated risk of biased outcomes and decisions, perpetuating systemic gender-based discrimination in resource allocation. Better representation within collected data could serve as an effective remedy to such bias.

Addressing digital inclusion in rural areas, there is suggestion of implementing innovative methods, such as art and drama. These creative outlets are proposed as means to expand digital space awareness, transcending connectivity issues to ensure vital information reaches rural populations. Moreover, the role of African youth, who constitute the majority of the continent’s population, could be instrumental in disseminating these technologies.

Additionally, there is a compelling argument for utilising innovative technological solutions to bridge the digital divide, particularly acute in rural areas. The belief is that all individuals can bring digital awareness to their local communities, thus facilitating potential technological progress. The digital space is envisioned as a pivotal tool in driving such advancement, reducing inequalities for sustainable development.

In conclusion, the arguments and evidence presented suggest a positive shift in attitudes towards gender equality, quality education, and reduced inequalities. However, substantial work needs to be done to ensure comprehensive digital inclusion and equal representation in STEM fields, particularly concerning women and rural communities.

Session transcript

Baratang Miya:
So, again, welcome, everyone. My name is Baratang Miya. I am from GirlHype, Women Who Code, and we’re organizing Women in IGF for the second year now. We had one in Addis, and this particular one follows it up. So I will start the session. We have three speakers and one presentation about what’s happening with women in the ecosystem. And then we’ll have Mr. Mactar Seck to tell us what he has done in the past year towards what we aimed for. Last year, the focus was on contributing towards the Global Digital Compact, and we have done that very well. So I will touch on that. But in the interim, I just want to say that the reason we hold Women in IGF is that even though women are a huge part of the IGF community and make up half of the population globally, women’s voices have been historically low and at times entirely absent in the political and digital spaces. These imbalances ultimately deny women leadership opportunities, especially in the spaces where they would have to impact the technology, build it, and create it in their local environments, and especially women who face additional forms of discrimination. So the IGF Women’s Summit focuses on a leadership mindset and spotlights the women who can harness the power in this pivotal moment on the Internet and shape the digital platform. We want to contribute a new way forward and come up with actionable solutions to drive meaningful progress in Internet governance. So, the final output of this particular summit will be a resource produced by the community of participants at the summit. One of the things is making sure that in the planning for next year, we take this and hopefully turn it into an NRI where women’s issues and gender issues can be discussed in a formal setting. At the moment, it’s been driven by one organization and a few of us who were there last year, but now we want to make it bigger and approach the Internet Governance Secretariat and say: please give us an opportunity to become an NRI at the global level, so that women across the world, across the globe, can contribute towards the IGF, contribute at the leadership level, and make sure that women’s and gender issues are covered at that level. So, based on that, Margaret will do a presentation, and we will publish it as part of the final outcome, taking from it some points on what we do with what came out of that presentation. It’s more of an open forum. Everyone who is here is allowed to contribute to how we can make gender be taken as an entity that contributes towards the IGF, because at the moment there isn’t one entity that is really focusing on gender issues, and we take the Women’s Summit as part of that. With that in mind, I will give the opportunity to Mr. Mactar Seck, who will talk to us. But I just want to say that the gender disparity is very huge. In Africa, I know that women are working very hard. I’m looking at Lily here, who’s driving so much about internet governance from the basics and on the ground. But at the same time, because of the literacy level and because of the misunderstanding of where AI is going and what technology is bringing to the communities, the uptake is very slow. People still do not understand what technology is doing for them and why it is beneficial.
And if you come and talk about rights, digital rights, and the human rights of people in the technology space to people who do not have technology, it’s as if you are taking away something from them. So we have to find the balance of making sure that even as we talk about the rights of women online, about cybersecurity, about gender, and about all these issues that come up at IGF, we also talk about how to make sure that the same people we’re talking about on the ground have access to the internet. Because at the moment, I think it’s only 15% of Africans who have proper internet access. That means when we talk about the rights we’re talking about at IGF, 85% of Africans don’t know what the hell we’re talking about, because it’s not implemented yet. So how do we talk about something they don’t have? That balance is something that I think we have to address, especially for women, because they are the ones who are literally not online. And with that said, I’ll hand over to Mr. Mactar.

Mactar Seck:
Thank you very much, Mrs. Baratang, and good morning, everyone. It is good to attend this important event to discuss Women in IGF. As you know, women are playing an important role in the African ecosystem, but we have a lot of concerns about their participation. When we look at African culture, the women are the ones who educate the children. They wake up very early to try to get some work done, to give food to their kids. Women also leave their rural areas to come to the urban city, to try to get some work to support their family inside their respective country. Women also suffer a lot in Africa due to the lack of hospitals; when you look at the number of women who have died due to the lack of hospitals, there are too many in Africa. At school also, a lot of women don’t have a chance to go, for several reasons. Sometimes it’s cultural: in some cultures, women are not allowed to go to school. In other cultures, priority is put on women working at home to support their mother. Another problem, in several African countries, is that women don’t have any legal form of identity, because of the culture… With digital technology, however, we have seen a lot of progress in women’s activities. Now there are applications on e-commerce and in the food system; women have developed a lot of applications focused on their activities. And with the development of this technology, the role of women becomes more and more important. And we have seen, since COVID, an increased number of women in the digital sector. I think now everybody agrees that women are part of this digital ecosystem. When you look at the room, I think we have more women than men. It is the case also in the population: we have more women than men. But we have several challenges in accessing this digital technology. Still, we have more men connected: around 45% of men connected compared to 34% of women. Access to tech work is also very low for women: generally it’s around 12%, compared to the world average of 40%. Another point is women’s harassment online. In a lot of countries the numbers are very high: 30 to 40% of women on the Internet have been harassed. And the lack of legal identity affects 5 million people on the continent, most of them women. Why it is important to have women in the digital ecosystem is because we have to understand what future we want for women, what role we want for women in this digital ecosystem. And we can link this to the Global Digital Compact, the role of women in this digital compact. Now, first, let’s start with access of women and youth to this digital technology. How can we do it? We have 24% access to the Internet, and less than 10% in rural areas. We have to put the right infrastructure in the urban city as well as in the rural area. It is something very important. The government should put in place the right infrastructure to provide equitable and affordable access to women. This can be done in several ways. We can provide a lot of facilities for women to access these services.
Also, we can develop public infrastructure to provide more access for women to this digital technology. We also need to build the capacity of women. It is very important. When we look at the COVID period in Africa, we identified around 5,000 innovations and applications linked to COVID, and most of these applications were developed by women. We have fertile ground on the continent; we need to build their capacity. We should have a lot of programs to build the capacity of women, not only at university or high school, but from primary school, like what Kenya already started last year. Third, we provide access and we look at the capacity of women, but we also need to involve women in the tech sector. It is very important: when they have the capacity, the government should promote access for women to this sector, through access to the market and also through funding support to women entrepreneurs so they can access the market. It is something some countries did very successfully, like Tunisia and Morocco: they have an incentive fund to support women in the tech sector. Another one: we need awareness about cybersecurity and child online protection, and it’s not only children; I think all women are concerned by this cybercrime issue. We need to secure the cyberspace for women, by organizing information working groups and awareness campaigns to explain to women the opportunities and the risks of being online. I think it is something very important we have to highlight for all women, such as those in Africa. And in the Global Digital Compact also, we have highlighted the important role of women as a gender cross-cutting issue in all the ten key priority areas. How do we involve these women to be part of this Global Digital Compact? One, on digital public infrastructure: how can women access digital public infrastructure like e-commerce platforms, e-government services, and digital ID platforms? We have some recommendations for governments on how they can provide affordable access to this digital technology; this goes through building infrastructure and also appropriate regulation to make sure everybody has access to this digital technology. Of course, capacity building is very important through the Global Digital Compact, with more focus on emerging technologies: how we build the capacity of women in emerging technologies to make sure women are part of this revolution. As you know, by 2030, 90% of new jobs will be digital or need a digital component. We also need to build capacity in artificial intelligence, because artificial intelligence does not require a lot of infrastructure: if you are smart, you just need some applications and some infrastructure to develop applications, and we have seen this development during the COVID period. What is also important for women in this Global Digital Compact is the issue of public goods. We need to put in place infrastructure and policy to give women the opportunity to access these digital public goods, and to make the connection to financing; this is very important for the continent. Internet fragmentation is also important, because when we talk about internet fragmentation, we talk about one Internet, but in Africa you have, in effect, several Internets: people who don’t have access don’t have access to the Internet at all. So we have to make it open and accessible to everybody.
Everybody should have an equal right of access to the Internet. Women and men should have an equal right to access the Internet. It is something very important we have to take into consideration. And if we consider access to the Internet a human right, I think we can sort out the issue of access for women on the continent. It is something we discussed last year, and progress has been made. We organized a lot of forums to discuss these principles, but it is not enough. It is important to gather these ideas and programs in the countries in order to improve access for women to this technology and to harness their output through digital technology, and I can highlight some initiatives at the ECA level. One is a training program for women between 12 and 25 years old. The objective is to show that, even without any skills in digital technology, within a two-week training they can be part of this digital era. The women are trained in several areas, like artificial intelligence, web gaming, applications on climate change, 3D printing, and other areas. And since we launched this program, we have run it in seven countries, and we have 35,000 girl trainees who have developed around 300 projects. It is amazing to look at the projects developed by these young girls in Africa. For example, we have in Africa girls who get a husband very early and are faced with a lot of problems, and the trainees developed a website to show the problems faced by these young girls. We also have applications related to girls living in the city who come from rural areas, like here in Kenya, and who face a lot of problems staying in the city; many of them work at home for several families, and the trainees have developed a lot of applications on that also, and in the health sector as well. Another important project, which we call Tech African Women, focuses on women’s entrepreneurship, and we support women to improve their capacity to access the market. It is an ECA program; last year we organized it in four countries, and several women were selected and received prizes to improve their businesses through digital technology. The objective is to have this program in all African countries by 2025. Another important project for women is capacity building in FinTech. When we look at Kenya, Kenya is a good example where the FinTech sector is led by women, and we have designed a program with Alibaba to improve the capacity of women in the FinTech sector. It is something very important; I think we can take it as an example for several other countries, to see how we can duplicate what has happened in Kenya elsewhere in Africa. This shows the importance of this session, and I would like to congratulate Baratang and her team for holding this session in the second edition of Women in IGF, and to reaffirm the support of ECA for this activity. Because it is very important to support women. Why? We are all coming from women. Without women, you don’t have anyone in this world.
And we need to support them, to make them happy, to make them involved in all sectors, so that they become leaders in this fourth industrial revolution. And I’m sure, with the capacity we now have in the young generation, when you discuss with them and look at the projects they develop, we are confident that Africa, including its community of women, is on the right way to be part of this fourth industrial revolution. I’m going to stop there, and thank you very much for listening to me, bye.

Baratang Miya:
Thank you very much, Mactar. I would like to ask everyone to join us at the table, because it looks very empty; if we can all just join up here, to make the room a little bit warmer. Can we just join at the table as we walk in? As you mentioned, women are the ones who educate children. And one other thing that you mentioned is that since COVID, there has been an increased number of women in the ICT sector. I think it is one of the realities that COVID has really given women an opportunity to work from home and to participate on the internet in a manner that we wouldn’t have before. You mentioned that access is around 12% for women compared to men. Africa has 500 million people on the continent, and in the next coming five years, 70% of those will be youth, but more women also. So I’m trying to think of the concrete strategies, because we don’t just need women to participate; we need to empower women to do that work in their countries at large. With that, I will hand over to Margaret for her presentation.

Margret Nyambura:
And probably to go deeper on this digital ecosystem that is much informed and driven by the STEM area: existing data reveals that globally there is a leaky pipeline leading to few women and a lack of retention in STEM careers. As we start at an early age, we probably have the same opportunities, men and women. But as we continue as young children, do we get the same opportunities? Are there cultural issues? Are there biases? Are there issues at the household level that keep pushing women towards the arts and men towards STEM subjects? What can we do to ensure that from that particular age, from a young age, we have women focused on STEM subjects, so that when they come to their 20s, 30s, and 40s, we are not saying that we do not have women in the STEM area, or in artificial intelligence, or in emerging technologies? And yet, and I think statistics have been given, in the African context most people are in the rural areas. Statistics have shown that only 22% of primary schools in Africa have reliable internet connectivity, and we are talking about e-learning, we are talking about e-health. How do we ensure that our children in rural areas have the same opportunities as those in other places? So if connectivity is a start, then research shows that we are not where we need to be. We need to connect people in the rural areas. The majority of us in Africa are in rural areas, and that’s where we don’t have connectivity. Again, statistics show that meaningful connectivity is still a wild dream for most of the women in the global majority. And as GDIP, we are looking deeper and learning how we can address the inequality, and the research we are doing looks at what meaningful connectivity means for women in this space. And what do we mean by meaningful connectivity? You can be a consumer of content, but you can also create. Again, we are talking of appropriate devices: if you have a smartphone, then you are probably able to do more than with a basic phone. Again, we are talking of unlimited broadband connectivity, that you can always get that connectivity whenever you want it and wherever you want it. And not only do you have that connectivity, but you are able to use it on a daily basis. And if you have that, then you can keep on being innovative, keep on seeing how best to have a productive life in the digital space, and with that you are able to improve your livelihood. So I know statistics have been given, but using data from 2023, we are saying that women are 19% less likely than men to use mobile internet. And again, across the continent, all of us are using mobile internet, so you see again there’s that gap. And in Africa, you know, there are a lot of youth in the space who have phones but are not easily able to afford the data they require; meaningful access is still far away for them. That’s something that we have to look at. So we need to do an analysis of that, so that even as we are talking about inequalities in the digital space, we are able to quantify them.
So, meaningful connectivity of women in the global majority is a prerequisite for them to assume a central role as citizens and participants in the digital economy. Emerging technologies such as artificial intelligence and machine learning are the engines of the future economies, and we must be in that space as creators, not just as consumers, because our social and economic lives depend on these technologies. In the digital economy, the development outcomes of digital exclusion can only be addressed if we can quantify the opportunity cost of that exclusion: every time, we should try to see what women are losing if they’re not connected. And not just in terms of monetary value, but in terms of shaping our future generation, so that we contextualize that. The aim is to contextualize the digital exclusion narrative and link it to livelihoods and socio-economic development: in the digital ecosystem, there is something for everybody. Everybody can make a niche for themselves, but you need to be aware, and you need to have the skills, for you to be able to do that. And data on women’s experiences of the digital economy and the cost of exclusion is vital to reflect women’s world views in artificial intelligence decision-making, based on gender data in the public domain. We are going into the world of artificial intelligence: do we have enough data about women, and has that data been collected and presented? If we don’t do that, then we end up with biased decisions and outcomes, which can lead to negative development outcomes. Artificial intelligence replicates biases and discrimination in decision-making algorithms due to the lack of data on women in the global majority. So inequalities are perpetuated when artificial intelligence is used to make decisions on the allocation of resources. Again, this is an area where we need women to actively participate. So, contextualization of the opportunities for women in the digital space, once they are meaningfully connected and digitally skilled, will lead to digitally innovative livelihoods. And we have seen that: as long as you give people the tools they need to navigate, they are able to make their own livelihoods. And there is a ripple effect of empowering women through digital inclusion, skills development, and cyber hygiene, which again we must emphasize, to ensure that we are bringing up whole human beings, the young generation; we should not let them be unduly influenced by these new technologies. They must use them to develop their socio-economic lives and to shape their lives, and still not forget our culture, so that we are moving with our culture in a safe digital environment. And ultimately, giving meaning to artificial intelligence, the Internet of Things, and other emerging technologies in the eyes of the women in the global majority: as long as these technologies are making sense to us, as long as they can change our livelihoods, as long as we can see a positive change after the usage of these technologies, then we can move forward, and we can actively participate in shaping the future of the emerging technologies.
If we’re not thinking about holistic inclusion, that is, from early childhood, about what women can do with these technologies from the very beginning, meaningful integration and active participation of women in emerging technologies will remain a mirage, and we risk excluding them from the many opportunities in the digital space. So again, we need to start from early childhood and try to see what the opportunities are. I think it’s important for us to understand the role of the digital ecosystem and how we can be safe in the digital space, and as we move that way, everybody will have a space in the digital ecosystem. Thank you, and back to you, Baratang.

Baratang Miya:
Thank you, Margaret, that was a brilliant presentation, in fact. Thanks for covering all the elements, from the digital platforms to how they are implemented in the digital environment and within the ecosystem. If 70% of Africa is going to be youth in the next coming five years, I think it’s only fair that a young African speak on behalf of African issues, and I would like to give the opportunity to Lilly to speak. Two years ago, there was no clear definition of inclusivity or women’s participation when it came to these discussions; how do we describe it? Now that the social media space gives us a say, we can go to each corner and see who is not included, and that is huge. I’ve always been saying we add more to it: more inclusiveness, more accessibility, and the essence of bringing to the forefront the issues that affect women the most at the IGF. I’m seeing that there is actually glaring evidence of things that don’t happen when women aren’t present. Who can speak as women do? Who can talk about the things that women have to share, in terms of the sexual violence issues we haven’t really pinned down yet, or how rich our rights as women are? What do we actually consider to be the most inclusive discourse we have ever had? So, for me, that sets the context, as both a youth and as a woman, and also as an African, for people who are essentially growing to become the next leaders in this space. For us, this is really the platform to make sure that we have a place where we can be present, and to amplify the voices of the work we’ve been doing as young people. And just like we’ve said about the Global Digital Compact, especially with the focus on capacity building on the African continent, I just want to highlight some of the things we are doing for the African community: sharing in local languages, and also expertise, where we bring people from areas where issues are happening so they can give grassroots ideas to the platforms, contribute, and say what the demand is. All of those are things that, projecting into the future, will be important for us, to make sure that we are building solutions that do not just rest on assumptions. So, for me, as also a youth, this is the reason why some of the work we’ve done on the African continent is very important: that we get to the grassroots, know what the issues are, and are able to bring them to the forefront for authorities to see what it is that we feel, and to demand it. That’s what I’ll add to the session. And to also add to this conversation: various organizations have come together around feminist principles for including gender in the Global Digital Compact.
I think that’s very important. The groups here that are focusing on gender equality came together yesterday to endorse these principles. For those who still want to participate, there’s a sign-on in support: if you Google it, you can go in and support these principles. In fact, there are 10 of them, and I think these are principles that organizations working on women’s online participation and safety globally want to see included in the Global Digital Compact. They are called the feminist Global Digital Compact principles, so if you want to look at them, please go look at them. I’ll just browse through them. The first one is to ensure concrete commitments to protect the digital rights of women and girls and marginalized groups. The second one is to guarantee freedom from technology-facilitated gender-based violence. The third one is to promote universal rights to freedom of expression, privacy, peaceful assembly, and the participation of women and girls in their diversity in all aspects of life. Fourth, ensure universally affordable, accessible, and safe internet access for all. Fifth, demand strict action against harmful surveillance applications and high-risk AI systems. Sixth, expand women’s participation and leadership in the technology sector and digital policymaking. The seventh one is to prioritize strategies that reduce the environmental impact of new technologies. Eighth, implement measures for state and transnational cooperation to ensure data privacy, governance, and consent. Ninth, adopt equality-by-design principles and a human-rights-based approach throughout all phases of digital technology development. As a software engineer and a coder in my life, this was one of the principles I thought was critical to include, because 90% of the time women are consumers. If we don’t include them in the design phase, if we don’t get them to speak when the product is still at the initial stage, we’re going to miss their voice from the onset. So I felt this was one of the highlights for me: to make sure that a human-rights-based approach and an equality-by-design principle are included. Transparency is also key: human rights and gender rights impact assessments should be incorporated into the development of any algorithmic decision-making systems or digital technologies prior to deployment, and these should not be deployed without such assessments, to prevent discrimination and harmful biases from being amplified and perpetuated. The last one is to set AI safeguards and standards to prevent discriminatory biases. In closing this session, I would like to ask if there’s anyone who has comments or questions for the speakers. Okay.

Audience:
Thank you so much for this amazing discussion, and also for the contributions that have been shared in the room in regards to Women in IGF. I have two reflections as far as the participation of women and girls is concerned on the matters of the digital economy, and I’ll speak in the context of Africa. But I was also actually wondering why we do not have representation from the global majority. Does it mean that the women in the global majority are more advanced when it comes to issues of technology, or do we think there is a need for us to have a shared equilibrium so that we’re also able to learn? This is just a reflection on the panel presentation. What I want to say is that, looking at the contribution and the progress of women in tech, of course, there are a lot of biases, challenges, and barriers that women face. But when it comes to the documentation of statistics and reports, we majorly focus on the women and girls in urban areas when looking at connectivity, access, affordability, and issues of safety. And for me, part of the learning from yesterday’s presentation on the feminist principles to be included in the GDC is: how can we also reach those who have not been reached when it comes to digital connectivity? How can we get access to the rural communities, working with women and young people in the informal settlements, so that we do not simply assume that, because there is a discussion on internet connectivity and digital governance, we are reaching them? We should reach them as early as now, so that we do not wait until we’ve made a lot of progress to remember that there is someone we’ve not reached. That is one. Two, we cannot speak about digital connectivity without the question of electricity. What happens to the regions for which electricity is like a prophecy yet to be fulfilled? What are we doing about that as we push for the Global Digital Compact? We have to make sure that we are working with the countries and the member states to ensure that electricity does not remain a question to which we are still seeking answers. Another thing is, I think we have had change and stories of progress for women in tech, specifically for Africa. But where are the success stories that we are able to document and showcase? I’ve heard about the innovations that have been made. How are we able to document the milestones we have made so far, to inform the learning and inspiration of those young women who may want to get into the tech space, and to motivate them that there is something we can do and progress with? There is the fear that AI is going to take away the jobs of young people and young women in Africa. How can we remove the stereotype around artificial intelligence and its connection to employment opportunities for the very many young Africans who are struggling with unemployment? I think those are some of the discussions we need to start having as early as now. And my last reflection, from the presentations we’ve had, is: how are internet governance and the Global Digital Compact going to benefit women and young people? And this is a mistake that some legal frameworks at the global level have probably made, which we may not want to repeat: looking at women as beneficiaries of frameworks being developed at the global level, and not as stakeholders, influencers, and key decision makers.
The moment we take women merely as beneficiaries of this process, that is the beginning of failure, because we’ll miss their input, we’ll miss their shaping of ideas, and we’ll also miss the ideas that they would have contributed to this process. Thank you so much.

Baratang Miya:
Thank you very much. The issue of electricity is a very big one. I’m from South Africa: every two hours, electricity is shut down. Like, what is that? It’s just such a disempowering exercise. For me, electricity and internet are like this. Any other comment? There is a concern in the room that we haven’t had other Global South voices. Now, if we can have one comment? Yeah.

Audience:
Good morning. I’m going to try to speak in English. My name is Marta Lucia Micher Camarena, from Mexico. I’m a senator in Mexico, from Guanajuato, one of our states. Well, I’m very glad to be here, and to hear that the problems you have are the same as we have in Mexico. I will try to talk about our country, in Latin America. We are about 120 million people in Mexico, and we have the same gap that you have. We have discrimination. We are not a population that can fully talk about the internet; we have very deep problems with access to these kinds of services. I want to tell you that in Mexico, 78.1% of the total of women in our country have problems using the internet, and accessing the tech that you mentioned. And part of the problem is the age you have: if you are my age, you have problems with the internet. I see that our youth have no problems using these kinds of services, but my worry is that we are very far behind men in these issues. We have to empower women; we have to help them use these kinds of herramientas, tools, to access not only the internet, but the right to be informed, the right to be in the tech, and the right to be in the development, because we have this problem. We have to know about this, but our countries are not interested in this kind of issue. So I’m glad that you are discussing this issue, because in Mexico we are worried that only young people are into these issues; people over about 50 don’t know these kinds of tools. Thank you.

Baratang Miya:
Thank you very much. If we can just get one minute, half a minute, 30 seconds, closing from all the speakers.

Margret Nyambura:
Thank you, Baratang. Just to comment on her comment: I think we have to be innovative in this space so that we can include even those who are in the rural areas. For me, I look at the use of art and drama, and at the youth who, as you have said, make up the majority of the African population: we can use them to go back to their villages. They have access to the technologies, but every person must have their roots. And if we do that, then we are able to create awareness, we are able to educate women and other young people in the villages using various innovative approaches, so that they use the digital space as a tool to move to where they need to be. And if we do that, then even as we solve the issues of connectivity in the rural areas, we are using other ways to ensure that these people are getting the right information. Thank you.

Mactar Seck:
Thank you. Let me focus on the issue of electricity. If you look at the statistics, there is no easy way for many African people to get access to electricity. Around the world, we have 733 million people without access to electricity; in Africa, you have 600 million. It means 53 per cent of the population doesn’t have access to electricity. And there is a lot of disparity between countries: look at South Sudan, where only 7.7 per cent of the population has access to electricity. With that, how can we expand digital access to all? We can use certain technologies we already have; these technologies don’t cost a lot of energy. Also, we have infrastructure, but we don’t use the infrastructure on the continent efficiently. I have to leave it there. Thank you. But we can discuss the issue bilaterally.

Baratang Miya:
I just want to thank everyone for coming here. And in closing, I would like to quote what Dr. Seck said: women are the ones who educate children. The meaning of that is that women are the ones who educate the future, because children are the future. And if the future is educated by people who don’t have opportunities and are illiterate in terms of technology, then we are building a wrong future. Hence, as Lily asked: what are the costs of not having women empowered? The cost is that we will be talking about the same thing in the coming 50 years if we do not give women opportunity, if we don’t change this dynamic, if we don’t understand that women are educating the future. We won’t change. So for me, this is not an African issue, this is not a Mexican issue, this is a global issue. If Africa is not empowered, if Mexico is not empowered... okay, Japan is a different story... if we’re not empowered, then nobody’s empowered, because the future, technology, and AI don’t care about where you live. You can do anything you want on the internet. You can harass anyone from anywhere, any woman from anywhere, and we need to get that stopped. Now, there are two things that I wanted to highlight. One was that point about education. The second one, which came from you as a speaker, is: how do we take Women in IGF and make it a global issue, not just an African issue? It was formed when I started this at the African IGF, focusing on women in Africa, and the legacy came to the IGF in Japan. The speakers are also from Africa, and now we need to grow it. I hope that next year we’ll get speakers from across the world, and it will be about global issues and not just issues that are reflective of Africa. And I would like to encourage everyone to really look at the feminist principles for including gender in the GDC. This was discussed yesterday by all the organizations that are focusing on gender issues, and we’ve endorsed it. I’m hoping that people are going to join; there’s a sign-on in support of these principles online, so you should just Google ‘feminist principles for including gender in the GDC’. Personally, as GirlHype and as Baratang, I have signed that document to say I support these principles, and they are reflective of so many organizations coming together, thanks to Equality Now, APC, the World Wide Web Foundation, UN Women, and all those who played a part, and UNFPA for literally bringing us all together to come up with this document. So I’m endorsing this document, and as part of Women in IGF, we are saying this is something that we really endorse to take to the GDC. Thank you very much to everyone for coming. Bye. Thank you.

Audience

Speech speed

148 words per minute

Speech length

1138 words

Speech time

460 secs

Baratang Miya

Speech speed

167 words per minute

Speech length

3264 words

Speech time

1175 secs

Mactar Seck

Speech speed

142 words per minute

Speech length

2365 words

Speech time

996 secs

Margret Nyambura

Speech speed

204 words per minute

Speech length

1796 words

Speech time

528 secs

Bottom-up AI and the right to be humanly imperfect | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Sorina Teleanu

The analysis unveils an assemblage of sentiments regarding the application of Artificial Intelligence (AI) in multifaceted domains such as negotiations, decision-making, educational sectors, foreign affairs, and surmounting challenges faced by smaller and developing nations.

A positive aspect of AI is highlighted in its capacity to support complex decision-making procedures and foster critical thinking within educational environments. The effectiveness of AI in enhancing decision-making and negotiation is showcased in the global digital compact simulation. The AI advisor was utilised to refine arguments and language, whilst being trained to offer details on digital policy and internet governance. Further, in the realm of education, dismissing the use of AI in schools is argued to be counter-productive. The significance of AI in stimulating critical thinking and understanding intricate policy matters is underscored, thereby highlighting its role in shaping quality education and nurturing innovation.

However, the sentiment isn’t unequivocally positive. The analysis also uncovers AI’s limitations, stressing the importance of its critical application. Instances where AI hallucinates and doesn’t always deliver perfect results have been pointed out, demonstrating that although AI could be a valuable tool, it must not be relied upon blindly.

The evaluation also delves into the struggles of small and developing nations, particularly in digital governance and diplomacy. The overwhelming volume of information and tasks, combined with limited resources and a dearth of time, often poses significant challenges for these countries, thereby requiring the use of AI for effective decision-making and negotiation.

AI’s significance in foreign affairs emerges as it economises time and provides diplomats with a foundation for negotiations. Ministries of Foreign Affairs are encouraged to develop their own AI systems to retain control over data, relying on their knowledge base and experience. The concept of ‘bottom-up AI’ is proposed, arguing that it could allow a more controlled and tailored use of AI, and return AI to its users.

The potential of AI to promote underserved communities and mitigate representation inequalities is also explored. Bottom-up AI’s development based on knowledge from these communities bolsters this argument, aided by the observed stance that AI can encourage more meaningful engagement for smaller countries.

Nevertheless, despite the proposed benefits, the need for transparency and accountability of AI systems is underscored, with apprehensions regarding the non-explainability of neural networks being raised. There is significant criticism regarding uncritically accepting statements from large AI systems and a generic tendency for blind trust.

The evaluation concludes by emphasising the importance of addressing current AI issues, such as regulation, before getting consumed with future challenges. Large firms are depicted as demanding future AI regulation whilst disregarding existing issues, prompting a call for allocating resources to counter today’s challenges before concerning ourselves with future ordeals.

In harmony with Sustainable Development Goals (SDGs) 4, 9, 10, 16, and 17, the overall analysis accentuates the potential of AI in driving innovation, assisting in quality education, reducing inequalities, aiding in institution-building, and fostering partnerships. Nevertheless, the pivotal importance of careful, regulated, and transparent usage of AI is underscored.

Audience

The discourse unveiled a plethora of critical points spanning numerous subjects. A significant challenge was identified in Brazil with regard to technology – a substantial number of NGOs are grappling with integrating technological approaches due to a lack of tech literacy. This issue prevents these organisations from fully capitalising on technology in their operations, suggesting the necessity for dedicated digital literacy programmes.

Interestingly, the proposition was raised that augmenting participation and representation in tech-related matters could bolster the advocacy of local perspectives. This argument was underpinned by the desire to categorise knowledge in a manner that respects and supports local viewpoints, shining a spotlight on an essential consideration in the democratisation of technology and inclusivity.

The discussion then veered towards concerns about the economic ramifications of automation. The use of technological tools such as chatbots in Brazil’s service sector has soared, stirring anxieties about potential structural unemployment, diminished economic opportunities and reduced job security. In view of this, there was agreement on the need for a paradigm shift to create dignified, rewarding economic opportunities.

The discourse additionally exhibited a robust belief in innovation and its prospective benefits. Participants conveyed strong support for a bottom-up Artificial Intelligence (AI) approach and open-source methods for managing knowledge on a grander scale. The capacity of these methods to organise and categorise knowledge with sensitivity to local perspectives was seen as promising.

However, feedback and constructive criticism were deemed essential for the improvement of larger systems. Questions were raised about whether insights offered to these systems were being taken into account and whether prevailing systemic problems were being addressed, indicating a need for rigorous examination and rectification of these systems.

A particularly thought-provoking point in the discourse was the expression of concern regarding the rapid displacement of families due to the expanding influence of modern technology. This issue particularly afflicts rural areas of Brazil, leading to a diminution of the countryside and augmentation of cities. This cultural and knowledge erosion is significant, especially in small communities.

In response to these challenges, a suggestion was put forward to utilise AI to preserve and cultivate the history and culture of small communities. This would involve AI assisting in updating and uploading knowledge about these areas, spanning physical practices, agricultural practices, stories, and mythologies.

A more neutral observation revolved around AI’s design and adaptability, specifically in relation to individuals with disabilities. Current AI systems are often trained on ‘perfect’ data, potentially making them less adaptable to human error. Humans, by contrast, are able to learn from their mistakes. Consequently, developers must cultivate more adaptable AI that can accommodate humanlike errors.

In a related argument, it was posited that AI should be enhanced to aid persons with disabilities rather than marginalising them. There is apprehension that current AI protocols might inadvertently engender a standard of ‘perfection’ that could be exclusionary, particularly for individuals with disabilities. Ensuring that AI serves as a tool for inclusivity rather than exclusion thus presents a clear opportunity.

In sum, these insights prompt a reassessment of how technology, specifically AI, is utilised and incorporated into diverse sectors of society. The call is widespread for more tech literacy programmes, adaptable AI, and active involvement in technology decision-making. These transformations would contribute significantly to striking a healthy balance between swift technological progression and preserving crucial aspects of our cultural heritage and humanity.

Jovan Kurbalija

Jovan Kurbalija, the esteemed Director of the Diplo Foundation, explores the significant intersection of philosophy, technology, and artificial intelligence (AI), particularly concerning education, cultural context, governance, and ethics. He promotes a profound understanding of technological advancements without becoming engrossed in their complexities, thereby maintaining a steadfast focus on the broader societal and philosophical effects.

At the heart of Kurbalija’s argument is the Diplo Foundation’s innovative development of a hybrid system. This unique construct, merging artificial intelligence with human intelligence for reporting, has been cultivated based on the Foundation’s extensive experience and session management. The potential capabilities of this system in promoting dynamic learning environments and stimulating intellectual engagement were also highlighted.

Adding a fresh perspective to the discourse, Kurbalija proposed that AI models should harmonise with each community’s distinct traditions and practices. He believes this would contribute to a more authentic, bottom-up AI model that does not limit itself to predominantly European philosophical traditions. In a similar vein, he emphasised the urgent need for high-quality data in developing diverse, flexible open-source AI models.

However, he stressed the importance of preserving individual and community-based knowledge rights, protecting against its potential commodification by AI. Kurbalija highlighted concerns regarding transparency and explainability within AI applications, allied with apprehensions about AI’s misuse in creating disinformation.

Certain aspects of AI’s current governance drew criticism, notably the sidelining of smaller entities by larger corporations. A call was made for increased corporate responsibility, given the extant challenges related to AI usage. Despite AI’s potential in preserving small communities’ heritage and culture, a significant gap was recognised concerning the lack of initiatives that leverage AI to safeguard cultural diversity.

While acknowledging AI’s potential in aiding individuals with disabilities, caution was raised about anthropomorphising AI, reinforcing that AI should serve as a tool, not as a master. The uniqueness and imperfection of human traits were lauded as invaluable characteristics and were claimed to be essential considerations in the development of AI.

In conclusion, Kurbalija’s discussions presented a potent outlook on AI’s broad societal impacts, issuing an urgent call for more inclusive and ethical AI development, whilst highlighting concerns regarding transparency, accountability, and the conservation of local cultures and individual rights.

Session transcript

Jovan Kurbalija:
My name is Jovan Kurbalija, I’m Director of Diplo Foundation and Head of the Geneva Internet Platform. Together with me is Sorina Teleanu, who is Director of Knowledge at Diplo Foundation and a person who is extensively involved in AI developments. Now, while we were preparing for today’s session, we thought of having two ways to approach it, and we will be guided by your questions and comments about this session. We want to develop it genuinely as a dialogue. We have a lot to offer in terms of ideas, concepts and the overall approach of Diplo to artificial intelligence, but I’m sure there is a lot of expertise in the room. And this is basically the key, therefore let me suggest a few practicalities. We will talk, but whenever you have a question or comment, raise your hand and don’t feel intimidated. The only stupid question is the question which is not asked. There are a few exceptions to this rule, but that’s basically our approach. I always think, when we gather for a meeting or for a course, because we are teaching a lot, about how we can really maximize this hour, this valuable time for all of us. We sometimes underestimate the importance of the moment, the importance of being there. And I think in Kyoto, with Zen Buddhism and the other Asian religious traditions, we can learn more about being there, being in the moment, and trying to grasp, trying to really find this unique energy. Unique because this is the moment, this very second, this very second of our life and our existence and our interaction. Therefore let’s maximize on that. Now, Sorina, shall I monopolize the microphone, or you’re so gently nice? I started with philosophy, and probably this is one of the possible entry points. Because artificial intelligence for the first time pushed us to think about the questions of why we do it, the question of why for our existence, the question of our dignity, the question of purpose, the question of efficiency, many core questions that civilization has to face. Therefore, if you see our leaflet about the Humanism Project, you can see that we approach it through technology, through diplomacy, through governance, and through philosophy, linguistics and art. You can take any entry point. I suggested this philosophy entry point, and you will see why it is important. Now, I’m sure you will be using a lot of cameras. Unfortunately, these days we don’t use this. My wife and I just brought a Nikon from Europe, a very heavy Nikon, and she told me in Tokyo, why do I need to carry this heavy Nikon with the lenses, you know? Zoom out, zoom in, when a good iPhone camera is basically doing a lot. Now, we won’t get into this discussion. I’m sure that there will be passionate Nikonists or Canonists, you know, these two tribes, who will say, no, no, no, no, you still do it with Nikon. But the idea is to zoom in, zoom out. We zoom in on philosophy, we zoom out on questions, zoom in on technology, or zoom out on philosophy. Therefore, try to use that optics within the next hour. The uniqueness of Diplo is that, whatever we do in digital governance, since the very beginning of the organization, we needed to touch technology. Therefore, we did the TCP/IP programming, we did the DNS, we did everything in order to know how it functions. We wanted to see what is under the bonnet. One problem, and I’ve been noticing it since I was at the first IGF, at the Working Group on Internet Governance, which is ancient history, a long, long time ago.
But I noticed that sometimes we discuss things without understanding them. We don’t need to be techies, mind you. These issues are sometimes even philosophical. But you have to have a basic understanding of what’s going on and how it functions. This, again, is where we need to have the scale, you know: to understand technology, but not to become techies. Because if you are only a techie, you basically won’t see the forest for the trees. Everything will be just neural networks these days. Or yesterday, crypto or blockchain, or the day before, TCP/IP, and that’s then basically a problem. So it’s a tricky exercise. We have all of these entry points. And what I suggest, which is also in the title of the session, is that there is another aspect we should keep in mind: the walk-the-talk approach works in a way that the whole IGF will be reported by our hybrid system combining artificial intelligence and human intelligence. So if you go to IGF 2023, it’s Digwatch 2023, you can also download the iPhone or Android app, and you will be having the reports from the sessions coming from a mix of artificial intelligence and human intelligence. Now, how does it work? We have been reporting from the IGF for decades, summarizing long sessions humanly, basically. Now we said, okay, let’s codify that, our reports, and create an AI system. Therefore we can have something which could be called IGF GPT, or IGF AI. But basically we trained the AI on our IGF reporting and our sessions. It is now deployed by our AI team. Poor guys have to wake up early in the morning, they are based in Belgrade. They are now doing reporting by the AI system, doing everything automatically, from transcribing, with special language support for IG, AI and cyber terminology, to making it into the report which you can visit here for each session. Now, as you will see from the reports and from our work, I think this session is, we just have to note that it’s GMT time, because I was confused this morning. I said, what, 2 o’clock in the morning? You will have, after the session, I don’t know, about 20 minutes or half an hour, I don’t know exactly what the timing will be, you will have reports from this discussion. Therefore, again, we think we have to walk the talk. It’s enough to talk about AI, how important AI is, how it’s changing the world, ethics. AI will eat us for breakfast, or we may survive, we may not survive. That’s another discussion which I’m very critical and skeptical about. But let’s use AI. Only by using it can we see how it works and how dangerous it is. We are not naive about dangers. There are risks. But many risks are here and now. If you just move the risk to the future, it could be a bit tricky. Because whenever the future was brought up in a negative way in discussions, it was often around certain ideologies. And the message is: forget it today, forget it now, we discuss the future. And once we come to the bright future, we’ll be happy. But what happened in the meantime with our lives? You know, I won’t make references to the historical experiences, but it’s a very tricky argument on the future. Therefore, there is something that you can use now. But let me again zoom out and go to, basically, if I manage to close this, oh, I managed to find it, great.
We call it the winter of excitement: ChatGPT came to the fore, everything is changing, it can write a master’s thesis instead of you, blog posts, you know the whole story, let’s say, December, January, February. Although AI is much, much older, as all of you know. Then there was a spring of metaphors. People suddenly realized, wow, it’s coming, let’s do something with this. Metaphors of danger, apocalyptic ones magnifying the risk for society, or nice ones, it will help us. Then you have the summer of reflections. And we call this the autumn of clarity. Think about four seasons, not the hotel, but four seasons in AI: winter of excitement, spring of metaphors, summer of reflections, autumn of clarity. Now, during the summer of reflections, what I did, I said, OK, let’s see what happened. Two things we did, Sorina and myself, and she will explain the other thing, what she did at the course. We said, OK, let’s recycle ideas. What were the ideas of the ancient Greeks in the axial age? What can Socrates teach us about AI and prompting? What about the journey of zero from Indian civilization via Al-Khwarizmi and Tunis to Fibonacci? What about the ancient Greeks? What about the three big Chinese philosophers and AI? What would these people tell us? About knowledge, about ethics, about individuals and communities, about the Enlightenment, with Voltaire and Rousseau, great thinking of the Enlightenment period, Holbein’s painting, it’s a bit of my niche interest, the Vienna thinkers. When you really think about today’s era, and under this, there is a text, you can see that you have five thinkers who lived in Vienna between the two world wars who basically set the stage for AI, in Geneva and Vienna, and I’ll show the Geneva thinkers. Hayek on knowledge, Freud on human psychology, and possibly the person who most inspired thinking about AI is Ludwig Wittgenstein, who basically moved to probability theory and language as a key element of philosophy. Then we said, OK, those are the Vienna thinkers. You have then the Ubuntu thinkers in Africa, again another rich civilization and thinking, not written down in texts but basically codified in practices. And in parallel, what we did during this summer of reflection: Sorina went to deliver a course with the College of Europe for a group of students from Germany. She wrote a blog post. Whatever we do, we codify, because we believe in Creative Commons and in enriching the discussion on, in this case, AI. Sorina, you may tell us in a few words, I will scroll, but what did you do during the course, and what was the purpose of using AI, and how did we use it?

Sorina Teleanu:
Thank you, Jovan. Hello, everyone. We won’t spend more time talking, but just quickly, because the title of this session includes this whole idea of bottom-up AI, and we hope to hear from you what you understand by that. What we did at the summer school is just one example of bottom-up AI. So, quickly explaining what happened there: we had a group of 25 students, and for about 10 days, we simulated the negotiations of a global digital compact. You’re at the IGF, so I’m pretty sure you know about the whole GDC and the discussions around it, so I won’t go into that. We split the group into a few teams, technically representing some of the biggest countries and groups: we had China, the US, Brazil, and a few others, also civil society and the technical community. And the task was to prepare and then negotiate what they would like a global digital compact to look like. But to help them, and also because many of them were newcomers to the whole idea of digital governance, what our team in Belgrade, Serbia, did was to prepare this AI advisor. How it worked: we fed it a lot of documents on internet governance and digital policy, along with the contributions that stakeholders made to the global digital compact process. And then each of the five teams had their own advisor. What you see on the screen is the advisor of Brazil, right? The idea was for students to engage with the AI, to see how it works, to use it in the process of preparing their arguments for negotiation, but also to discover the bad sides of the technology, or the challenges. And that I found the most beautiful part of it all. At the end, we sat a bit and talked about how they actually used the advisor, what they found useful and what they found challenging, and the discussion was really good. They were able to say: okay, we used it to fine-tune our language, to be better at negotiating for our position, to find things we might not know about our own country or our own stakeholder group. But we also understood that we cannot just rely on what the AI is telling us; we have to take it critically, assess it, and actually use our minds. Another reason why we did this is that, as you probably know, some of the schools around the world have had this very knee-jerk reaction, saying, okay, we’re going to ban the use of artificial intelligence in schools, which we think is not a good approach to take. So the idea at the summer school was to expose students to the use of AI, for them to be able to develop this critical thinking as to how you can use it, why it’s good, and where you shouldn’t actually rely on it, because, again, it’s just technology, and sometimes it does hallucinate. But this was just one example of bottom-up AI and how we’re trying to build this from the bottom. And I think we can turn to the audience, Jovan, and ask what everyone here actually understands by bottom-up AI before we go into more of what we’re doing. So I’m going to move around. Do we have a roving mic? Ah, there is a mic there, right? A question to you all in the room, because we promised we’re going to have more of a discussion and not the two of us speaking for 90 minutes, which kind of defeats the whole purpose. What do you understand by bottom-up AI? Or, if that doesn’t sound like an interesting question, why did you join this session? What did you expect from it?
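
The advisor Sorina describes is, in essence, a document-grounded assistant. The sketch below shows that pattern in miniature; it assumes nothing about Diplo’s actual implementation, and the embed() and advise() functions are invented stand-ins (here a crude character-frequency embedding, so the example runs without any external model).

```python
from math import sqrt

def embed(text: str) -> list[float]:
    # Stand-in embedding: a character-frequency vector. A real advisor
    # would call an embedding model here instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# 1. Feed the advisor its corpus: policy documents, GDC contributions, etc.
corpus = [
    "Brazil's contribution to the Global Digital Compact stresses capacity development.",
    "Civil society submissions call for human rights impact assessments.",
]
index = [(doc, embed(doc)) for doc in corpus]

def advise(question: str, top_k: int = 1) -> str:
    # 2. Retrieve the passages most relevant to the student's question...
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    # 3. ...and ground the generation step in them. A real system would
    # pass this prompt to a language model; here we just return it.
    return f"Answer using only this context:\n{context}\nQuestion: {question}"

print(advise("What does Brazil say about capacity development?"))
```

The property that makes such an advisor ‘bottom-up’ is that it speaks from documents it was deliberately fed, rather than from an opaque general-purpose training set.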

Jovan Kurbalija:
Please. Is it before or after coffee? Thank you so much.

Audience:
The reason I’m here is because I want to know what you think of it. Help us wrestle the idea to the ground, and we’ll probably help you back.

Jovan Kurbalija:
Thank you. The AI, ChatGPT, won’t reply in this way, you know; therefore that is really smart. Thank you. Thank you. But, as we move to the next step, basically Sorina explained a practical use on a critical issue, at a time when universities worldwide are banning the use of AI, ChatGPT. They tried with anti-plagiarism software; it doesn’t work. Even OpenAI stopped using anti-plagiarism software, therefore this is not an option. Therefore our message, and it was successfully accepted, there are some anecdotes about how some professors reacted to it, but we won’t mention the names. The academic community reacted: no, we are in charge, forget AI. We said, no, AI can be an interlocutor, it can sharpen your thinking. As Sorina proved practically, and students love that, because it can sharpen your thinking. Then we comment on the questions that AI asks and the answers it provides, and say this is good, this is stupid, this works well, this doesn’t. Therefore that element is critical. Now it is going to change the educational system profoundly. We are of a similar generation, let’s put it this way, depending on the educational traditions, but there was a lot of learning by heart. There was a lot of listening to ex-cathedra professors in my educational process. And the few professors who basically acted like ChatGPT, who in question and answer provided me stupid answers to stupid questions, are still the people whom I can remember. Therefore, that element of conversation is where AI can help us. And now the argument is: don’t kill the messenger, don’t put your head in the sand. Let’s see how AI can help us achieve the critical elements of any educational system. It is improving critical thinking, it is improving creativity, and it can do that. Therefore our argument, and whatever we are mentioning today can be substantiated practically, is that AI can be a great help for real education. I’m sorry, not for the Bologna style, taking the assignments and the number of credits and this and that, that’s another story we can discuss, but for, I would say, ancient Greek or Roman education about inquiry, creativity, questioning, and considering yourself a dignified thinker who can engage in the thinking process. Now, this is, for example, about the Ubuntu ethos on these philosophical issues, where you can find really powerful thinking from Africa that can enhance artificial intelligence and should basically be codified, especially if the companies, or hopefully African actors, deploy AI in their context. And this is the first building block for bottom-up AI, which was the title of the session. We have to codify local traditions, practices, ideas that deal with the questions of family, universal individual creativity, knowledge, happiness, and whatever we ask ChatGPT today, or even more advanced systems in the future. It cannot be designed only by the European philosophical and thinking tradition. This is the first point on genuine bottom-up AI. The second important aspect, which we have been doing at Diplo, and I have so many windows open, I hope to find it, okay. We argue that there are two points of relevance for bottom-up AI. First, it is ethically desirable because it lets us preserve our knowledge. It’s not anymore just about data. It’s about our knowledge. This is what defines us as individuals, as humans, as a civilization, as a culture, as a family.
And we are speaking, ultimately, about a critical discussion for the future of our society and of each of us individually. And what we did, we basically said: okay, what can we do? And first we went for open source. As you can see, there is a very critical discussion about big systems bringing fear and danger as a risk for society, mainly by a few big companies, OpenAI, Google, a few companies, you know; usually Sam Altman and these people were touring the US Congress and places all over the world, which is a bit of a paradoxical situation. They created something and are telling us, hey, guys, it’s very dangerous. I said, okay, but stop investing in it if it is too dangerous. Of course, there is a bottom-up competition argument, but there is something strange on a very logical level in these things. And most of them are very nervous about open-source AI. If somebody had told me that he would become one of my heroes, I would have been very surprised, but Mark Zuckerberg’s Meta created Llama, and they made it for their own reasons, competition with Microsoft, with Google, and other actors. But Llama is doing quite well, and there are developments like Falcon in the United Arab Emirates. There are now developments with quite powerful models, which brings things to a relatively simple issue, and we can now discuss it. It is not that much about innovation; neural networks were an innovation when they were introduced. But now you basically need a lot of hardware, you need to be friends with NVIDIA, and you need GPUs and a lot of hardware to process. If you can invest in that, you can train big models. That’s another issue which makes me personally nervous. It’s forget the garage, forget the bottom-up, independent scenario. Except, for the time being, there are pushbacks, and there will be dynamics in this way. Therefore, the first element is the open-source approach. The second is that you need high-quality data. And that will be an interesting story, because most of these companies have more or less processed trillions of, I don’t know, whatever, books. I got a bit lost when it comes to this number, over a billion, but a trillion something. And now they have come to the point that they cannot get any more high-quality data. Therefore, they are using so-called annotators, or data labeling, mainly. You know this Kenya case with OpenAI. There was a strike of people who were working on OpenAI data. But basically, they sit next to each other, and they’re annotating, saying this is a bird, this is a cat, or this text is useful, this text is bad, and other things. I will show you how we do it at Diplo, as a sort of annotation. But this is, I would say, the key diagram, because the quantity of data is limited by definition. You know, there is this idea of AI creating data itself, but I’m not sure that it will go too far. And you have the quantity and quality of data. Therefore, quality of data will be critical. And then, even with small data, if you have high quality, you can create AI. That’s basically what is going to happen in the coming years. And this is the reason why companies are very nervous. They are rushing to get into quality data, in order to capture that future competition. Now, what we do, and you can read the blog post, is that we have a system which basically annotates any text. Therefore, when Sorina and I read a text, we annotate the text. And our teaching system is based on annotations. Therefore, by teaching, by doing research, we are creating high-quality data.
Therefore, it’s integrated in the work. And I will show you practically how it works, Sorina, if you don’t mind. Okay, for example. You are following, obviously, developments in the Middle East. You are on Al Jazeera. And you are reading the text. And you will say, I’m just inventing the argument now. You will basically, I will use the highlight. Let me see. If I’m in Google, you will use highlight. You know how it works. It usually does not work when it’s needed, when you try to show it. Ah, it’s public. Okay. Okay. Or you can open. You annotate. And I write in the annotation: Sorina, what do you think about this argument? Sorina will answer this. In this case, it’s public. She will answer this. She will receive the annotation. And the two of us are adding a new layer of thinking on the text on Al Jazeera. Now, we have been using this as a teaching method for the last 20 years. Those of you who are Diplo alumni know that I designed this method based on the metaphor that I like to highlight a text and write something in annotations on the side, or have a sticker. We developed this system 20 years ago. But this is now the critical system for adding layers of quality to the text. Now, when AI comes and sees this text, ChatGPT will just process it. But in our case, if there is a discussion, we say: aha, this paragraph is important. Jovan asks Sorina, Sorina answers. And then Sorina and I are developing our, basically, our very local bilateral AI, built around knowledge graphs. Therefore, we can then share it with the rest of humanity, or keep it for ourselves, or share it with Diplo, or share it with you, share it with others. Therefore, our idea is that we can bring AI back to individuals, and then develop big systems. Ultimately, why should I send it to the big system? We can keep it for ourselves, and then share it, as our human right, our right as citizens of society, with the rest of society. Now, this is the key concept behind bottom-up AI. Now, has this triggered some ideas for questions or comments? How does it work, practicalities, anything else? It’s a bit intimidating when you have to stand and walk to the mic. But you can shout as well; I’m fine with any question or comment. So far? So far? No. Therefore, this is the basic idea: let us preserve our knowledge. Why should this knowledge that Sorina and I create around a discussion, she can comment then on what’s going on today in Israel and Palestine, why should we share it with somebody else? Why don’t we preserve it and then share it as our knowledge? It can become much more complex when you annotate complex texts, philosophical books, other texts. This belongs to us. Then we, in Diplo, we share it. You see, it’s public. We share it because we think everything should be Creative Commons. But we are very nervous if we, because of technical facilities, have to contribute it to OpenAI or to Google or to Baidu, whoever is basically providing the system. Therefore, what happened with Google 10 years ago, or Facebook and others, when they basically commodified our data and our use of the internet, is now starting at a much higher level, with knowledge. And that’s basically idea two of bottom-up. One thing is that we talk, and I explain to maybe some people who have become interested in this. The other question is whether we can prove it in practice. And this is different.
If you have a system that can prove in practice that it can work, that’s basically what we have been doing with bottom-up AI: returning AI to people, with all their strengths and weaknesses. Sorina?
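
The annotation workflow Jovan demonstrates can be pictured as a very small data structure: each highlight-plus-comment becomes a labelled edge in a personal knowledge graph. The sketch below is purely illustrative and assumes nothing about Diplo’s actual implementation; the Annotation and KnowledgeGraph names are invented for the example.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Annotation:
    author: str    # who highlighted the passage
    passage: str   # the highlighted text
    comment: str   # the layer of thinking added on top
    topic: str     # e.g. "Middle East", "AI governance"

@dataclass
class KnowledgeGraph:
    # topic -> list of (author, comment) edges. A real graph store would
    # keep richer relations; an adjacency list is enough to show the idea.
    edges: dict = field(default_factory=lambda: defaultdict(list))

    def add(self, ann: Annotation) -> None:
        self.edges[ann.topic].append((ann.author, ann.comment))

    def who_discussed(self, topic: str) -> list[str]:
        return [author for author, _ in self.edges[topic]]

kg = KnowledgeGraph()
kg.add(Annotation("Jovan", "...", "What do you think about this argument?", "Middle East"))
kg.add(Annotation("Sorina", "...", "The framing misses the regional context.", "Middle East"))
print(kg.who_discussed("Middle East"))  # ['Jovan', 'Sorina']
```

Because every edge records who added which comment to which passage, the resulting graph stays under its authors’ control and can be shared, or not, on their own terms, which is the point of the bottom-up argument above.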

Sorina Teleanu:
Maybe we can give one more, close to practical, example of how this could be implemented. We’re having these discussions in Geneva. A part of our work is to support the engagement of small and developing countries in digital governance, digital diplomacy, and all these big organizations in Geneva, but also beyond. And we’re interacting a lot with missions in Geneva, and we hear a lot, especially from the smaller ones, how they cannot follow everything and anything because, well, there’s a lot. And also how sometimes they don’t have enough time to research what they have done before to actually come up with a position to present at some organization or some negotiation. So in discussions with them about this whole idea of bottom-up AI and how we can or cannot use technology, this idea also came up: can a Ministry of Foreign Affairs develop its own AI system to use for its own purposes, instead of putting data into ChatGPT or Bard or whatever else, and actually rely on the wealth of knowledge it has developed over the years? And the simple answer is yes. And should they do it? Again, the simple answer would be yes, because you don’t give your data to a bigger system out there, and you don’t rely on all the other information that might be coming from different sources, but on what your Ministry of Foreign Affairs has developed over the years: policy papers, documents, and whatever else. And again, the question would obviously come here as well: can you rely completely, only on AI, to come up with a position that your diplomat will negotiate in an intergovernmental process? No. But you can use it as a starting point to save time, because you don’t have that much time to actually come up with something. So if you have a starting point, and then you bring your own expertise and your own abilities, that would help. So this would be one example of how we see bottom-up AI happening, and helping, in this specific example, smaller countries.

Jovan Kurbalija:
And here is, we may just display it again if you don’t mind, here is the conclusion from last week’s discussion, well, 10 days, of the General Assembly. We processed all statements delivered, you know, President Biden, heads of state, basically saying what they want to do, what their views are on different issues, from climate change, the Ukraine war, digital. And we asked the question: what did they say about digital? We processed that, and we got a report, which is a very interesting report. You see, let’s say on artificial intelligence, line by line, by relevance, what Barbados said, what Ethiopia, what India, what Malta, what Andorra, what Somalia said, in bullet points. And then you also have an in-depth report with the statements, what each country said, the transcript of the session, and the summary. Let’s say, on Albania, you can see how many words, the speech length. What is a knowledge graph? I mentioned already that the knowledge graph is critical. You can do a knowledge graph on anything. We’ll be having knowledge graphs about all sessions in the IGF. This is a proximity of thinking. Could we have, Sorina and myself, a knowledge graph about today’s session? What were the stances, what was Albania arguing? What are the arguments? What is the speech itself? What is the summary of the session that was hosted by Albania? And then, what was interesting, we also asked AI, based on all statements, if you put all the knowledge in the General Assembly, or in the IGF, we’ll do a similar thing with the IGF. You ask the question: what should we do to combine action on climate change and gender? I hope they’re not testing the system, because now they’re shifting to, let’s see, I hope it will work. I hope the system gives the answer to the question based on all the speeches delivered. Now we won’t read it, but that’s basically what is delivered. Or when there was a session in the Security Council, we did the same thing. And then for each session, you know how it is with the multistakeholder advisory group, you have at the beginning of the session the key question, and then the answers, but also on what parts of the speeches the AI generated the text from. Unlike ChatGPT, which will basically just give you the answer, we said no, we want to ask AI to tell us from what parts; for example, this answer was generated from parts of the speeches of the professor from King’s College, mainly his speech. But some other answer was generated, OK, from Malta’s speech, or you can go through basically 360 questions that are based on the transcript and generated around the idea of, what is the answer to the climate change question from Bangladesh, Nepal, Slovakia. And you suddenly realize that Bangladesh and Slovakia have something in common when it comes to the discussion question about climate change and digital commerce. Therefore you basically discover a completely different event. And this will happen with the IGF; maybe we’ll have, oh, at the session on AI, there was somebody else discussing bottom-up AI, which I’m not aware of. Maybe not calling it bottom-up AI, maybe calling it organic AI, or something like this. And you suddenly say, aha, here is a knowledge graph between Jovan, Sorina, and John, and Pietro, and Mohamed, and the other sessions, and say, OK, I didn’t know that we are doing the same thing. And that’s basically, I’m just giving you very concrete examples. As Sorina said, small states got really excited about it.
Because, you see, Djibouti has three diplomats in Geneva. They don’t have a chance to follow all the sessions in Geneva on health, on migration, on human rights. But if they have this system, they will receive an alert: hey, by the way, Djibouti, at working group 70 or 100, I don’t know how many working groups there are, at the ITU or WHO, there was a discussion of relevance for your maritime security. Because they are a big port, they’re very interested in maritime security. By the way, follow that discussion. Therefore, suddenly, you have the equalizing aspect of AI: it enables small states to take care of their specific interests. We just highlighted a few options, and probably we’ll close with this. We started with philosophy. This is ultimately a philosophical issue, but we gave you a few concrete applications in education, in diplomacy, in the IGF itself. You can follow the IGF itself. And it would be interesting to hear your reflections on the quality of the reports and the ideas around them. And then about these practicalities, how it can improve, let’s say, inclusion in global governance, for small countries and small organizations to follow what’s going on around their interests. The ultimate message is: let’s return AI to citizens. Let’s make it bottom-up. Let’s build around it. And let’s find practical uses. It’s enough of the big talks about ethics and AI. Here are practical uses. And the last point, which is important, it was part of the title of the session: as we are discussing, let’s preserve human imperfection. Because we cannot compete with the machine. People were critical about my title for this session, that we should let AI hallucinate, as we sometimes hallucinate. And if you think about the major breakthroughs in the history of humanity, they’re usually related to times when some people had a chance to be lazy. In ancient Greece, or in, let’s say, British Empire times, when all sports were invented, from soccer to tennis to all major sports, because these people had a lot of time. Others were working for them. I won’t go into that. But if we can basically leave a bit of imperfection, and there is one blog post, which I cannot find, about the need for human imperfection, we should facilitate that. We won’t win the battle with the machine on optimization. This is not possible. But we should preserve spaces for imperfection, for being lazy, for having time to reflect, for developing arts, for making mistakes. And this is the reason why I went to the flea market in Belgrade to search for the new Turing test. Basically, flea market traders, as you know, are masters of human psychology. And I said, they’re completely imperfect, always on the edge of the criminal milieu and other things. And I was going through the market and asked one of the traders, one who is legitimate. I asked him, OK, tell me, what do you think? On the ultimate limits of artificial intelligence, with a colleague of mine, Miško, we’ve been trying to see if AI can replace an experienced trader at the flea market. Approaching a seller on a flea market can be a great way to find unique items at a reasonable price. But it’s important to be aware of the potential risks of being ripped off. It’s usually best to avoid revealing too much about your level of experience. Don’t be confrontational or aggressive, as this can put the seller on the defensive and make negotiations more difficult. Approach negotiations with confidence and a clear idea of what you’re looking for.
You know, it’s like, OK, I’m going one more round. Well, I got confused. Trader Miško and the AI gave us very similar answers. But, and it’s a big but, AI can explain what to do. However, AI cannot yet act as a flea market trader. For the time being, flea markets remain a refuge for human uniqueness. That was one of those things: in my search for human imperfection, I go to the flea markets and other places and see what our niches are going to be. Because we cannot compete with machines. They will always be more optimized than us. But we have a right, and, I would say, a duty to preserve the core humanity which has been passed to us from previous generations in all cultures, from Ubuntu to Zen to Shintoism to ancient Hinduism to Christianity to ancient Greece. The underlying element is that humans are in charge. And that is basically the one thought I would like to leave you with: in this battle, we will be having a tough time, but we can do it, and we showed practically how it can be done with bottom-up AI. I’m getting some sign, but my human imperfection is. No, I’m looking at the room.
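
The alert scenario Jovan sketches for Djibouti can be reduced to a simple matching step: score every session summary against a mission’s declared interests and flag the hits. The sketch below is illustrative only; the interest profile and summaries are invented, and plain keyword overlap stands in for the semantic similarity a real system would use.

```python
def score(text: str, interests: set[str]) -> int:
    # Count how many of the mission's interest terms appear in the summary.
    words = set(text.lower().split())
    return len(words & interests)

# Hypothetical interest profile for a small mission.
djibouti_interests = {"maritime", "security", "ports"}

# Hypothetical session summaries coming out of the reporting pipeline.
sessions = {
    "ITU working group": "Discussion on maritime security and undersea cables.",
    "WHO briefing": "Update on pandemic preparedness financing.",
}

for name, summary in sessions.items():
    if score(summary, djibouti_interests) > 0:
        print(f"Alert for Djibouti: follow '{name}' - {summary}")
```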

Sorina Teleanu:
I’m hoping now we can have a bit of a dialogue. So please, questions, comments, your own thoughts about your interactions with AI, how we preserve our humanity in all this, how we build bottom-up AI, how we rely on it for whatever your work is. Yes, please.

Audience:
Hello, everyone. My name is Emanuela. I’m from Brazil, and I represent Instituto Alana, which is an organization focused on defending children’s rights on the internet, on the environment, and on social justice as well. I have a few questions for you. One thing that I thought was really interesting about the diplomatic view and the advocacy view that you presented is that this approach could be really good for advocacy organizations, because you have a knowledge management system approach that I think could be very helpful and contextual. But my question is very practical: how to incorporate this, considering that, especially in Brazil, a lot of NGOs and organizations are not very tech-savvy. You talked about open source; I want to know the practical side. How can we benefit from this kind of technology? I have a second question: another issue that we face is how to increase voices and increase participation in such a big world. How can we increase participation on these matters about tech, and do you think this bottom-up approach could be used to organize different participation approaches from different places and, you know, categorize knowledge in a way that is sensitive to local perspectives, but with more, you know, data analysis? So this is my second question. And the last question, sorry, but just to, you know, fill up the debate. Sorry? You compensate for other people. One thing that worries me a lot is the structural unemployment that we are seeing in the service sector. This is a sector that employs a lot of people in Brazil, and we see the increasing usage of chatbots and automation. So I was wondering, what are the economic perspectives of the bottom-up AI that you are presenting? How do we create economic opportunities for people that are rewarding, that, you know, signify dignity? Because we see a lot of unemployment and we don’t see a lot of... anyway, I think you understood the basic approach. Thank you a lot.

Jovan Kurbalija:
Thank you for the excellent questions, all inspiring; let’s probably start with the third one. This is exactly what I mentioned when I said that instead of discussing what may happen with generative AI, artificial general intelligence basically killing us, which you can hear from Sam Altman and his gurus, there are things that are happening now. People are losing jobs. And there is a risk that a whole generation, if I can use the slang, could be basically thrown under the bus. Not only blue-collar jobs anymore, but white-collar jobs: lawyers, accountants, I would say many of us in this room. That’s a big, big problem. And how to deal with it, here and now? I hope that at the IGF we can report, with AI, basically what the IGF will say about that, but it’s a huge problem. Our argument, and a strong argument, is that a job is not only about universal income. It’s a question of dignity. It’s a question of the realization of your potential. It cannot be reduced to, oh, you will get the money at the end of the day and go fishing, or do whatever you want to do, whatever makes you happy. No. The job has been, throughout civilization, the way of realizing our potential and appreciating our core human dignity. Now, it’s a big issue. This is why this is a social contract discussion of utmost relevance. And, for example, Ubuntu civilization, African traditions, are interesting: you are because I am. And there are different ways of seeing it, not just optimization, optimization, optimization. I don’t have an answer, but I would say that this should be at the top of the agenda of whoever discusses policy and other issues. Do we always need to optimize? In some cases, we may step back. It will be counterintuitive. It would be difficult to promote, but we should introduce this right, a human right, to be imperfect. We have that right because it defines us as humans. Sorina, if you want to add anything on that.

Sorina Teleanu:
No, no, no. Shall we take the other two questions? So we had the other one on how bottom-up AI might be able to help better representation of underserved communities, I guess. There are multiple ways. First of all, as Jovan was saying earlier, making sure that we do use knowledge from these communities when developing these AI systems. And then, as in the examples we were giving of small missions and similar smaller entities, that would be a way to help them be better represented in the discussion. But what I didn’t understand from your question was whether you’re talking about representation in governance discussions or representation in the development of AI. The example we were giving of following the reporting, for instance, from the UNGA, would make it possible to alert smaller countries: okay, this is something that might be of interest to you; this is a country that you might want to build an alliance with. So in this way, it can help foster more meaningful engagement where these countries cannot follow everything and anything. And the other example we were giving was how it can help build the position to get to that meaningful engagement. As we usually say, if you’re not at the table, you’re on the menu; AI, in these examples, can help avoid that very unpleasant situation, especially for the smaller countries that cannot afford to follow everything because of limited resources. So we do see these issues, and it’s not only us; again, it’s countries seeing it themselves. We have had quite a few discussions in Geneva with smaller missions.

Jovan Kurbalija:
What Sorina said about small countries, let’s say in Geneva, applies to small NGOs or civil society coming from Brazil, I guess, or any other country. You don’t have human resources. I mean, the Diplo delegation at this place is three of us in the room, and Anastasia will come. Compared to other delegations, it’s basically a statistical error. But we will contribute to the public good with this reporting. And now, practically, what can be done, and this is the most important part: we are starting a project, supported by the European Union, where we will try to push some of these ideas on the engagement and inclusion of civil society. How would it work? Your organization deals with jobs or? Child rights, okay. You will make your map, say a knowledge graph, based on your documents, based on your Zoom meetings, whatever you want to put in it; it will be your knowledge graph. You will then apply it to the whole analysis of the IGF, and you will say, aha, here is a similar problem that people face in Uganda, or in Romania, or in whatever place. Therefore, suddenly, out of the transcripts you will get hints on how to do it, or how to frame the discussion next time, for the next IGF, to be more persuasive. Because you realize that this argument on child protection didn’t fly at this IGF; people just brushed it aside and said, next question, you know how it works. But somebody’s rhetorical approach worked wonders, and we will get really deep insights into this. And, what is beautiful, through the process you develop AI. Because by commenting on what worked and what didn’t work, you have reinforcement learning, and your system is stronger and stronger at every stage. Therefore, in two or three IGFs, even with a delegation of two people, you can sometimes have the impact of an organization of 200. Because you know what your focus is, you know what your strengths are, what sessions you will follow, and what you will do practically. That’s powerful. Now, how to do it? The best way is, Pavlina, my colleague, can brief you later on, or you can exchange details about this project that is starting in January, which will have as one of its elements how to use AI to enhance the participation of local communities and other actors. And, as Sorina said, by developing your knowledge graph, you will capture the specificities of Brazil, and it will be an element which won’t be the generic child safety or child rights developed by a big system. No, it will be specific to Brazil or even to local communities. I don’t know, Rio, I don’t know Brazil very well, but the specific problems that exist in communities. Therefore, from the problem of the future of work and jobs, which is a big issue, to what Sorina explained about developing the system, to practicalities: contact Pavlina and you can join some activities or the project. I think we have a partner from Brazil as well, and it could work in this way, practically. And it’s very important that we are practical on AI, otherwise the discussion will be too theoretical. Let’s see if this inspires some other questions or comments. Critical ones, challenges, we need to, we have some, or you’re just playing with your hair, you know. Good. No questions, everything is clear? Or extremely
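
The feedback loop just described, commenting on which arguments worked and which did not, can be captured in a very simple data shape. A minimal sketch follows; the record format and the examples are invented for illustration, and a real system would feed such labelled examples back into model training or retrieval ranking.

```python
import json

feedback_log: list[dict] = []

def record(argument: str, venue: str, worked: bool, note: str = "") -> None:
    # Each comment on what worked or failed becomes a labelled example.
    feedback_log.append(
        {"argument": argument, "venue": venue, "worked": worked, "note": note}
    )

record("child safety framed as a privacy issue", "IGF 2023", False,
       "brushed aside; try reframing around platform accountability")
record("child rights framed via SDG 16", "IGF 2023", True)

# Exported as training or evaluation data for the next iteration.
print(json.dumps(feedback_log, indent=2))
```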

Audience:
I’m just wondering what you’re learning about the bigger systems. Are there ways in which you are giving them feedback, or ways in which you are noticing systemic problems that really ought to be addressed in the models themselves?

Jovan Kurbalija:
Well, bigger systems are big, and they’re big not only in the amount of data they process or the money they attract, but also in that they basically don’t listen to small guys like us. They have important things to finish, to go to the US Congress or the EU Parliament or wherever they discuss this issue; therefore there is a bit of arrogance, an element of hubris, I would say, which could be dangerous, because it’s not only their business, it’s also our business, about our future and our knowledge. We found it a bit, you know, in any technology you have magic. I still remember when I was first using a mobile phone; it was magic. Technology is a bit magical, the internet and other things. For us, we are now typing, but when you think about it, there is an element of magic. Now, AI brings magic on steroids, and Sam Altman can go, I’m mentioning him very often because I’m very critical about this usage, and say, oh guys, AI will eat us for breakfast. I said, okay, but why, how, when? Give us something. We cannot trust you just on these words. First, let’s discuss jobs today. Let’s discuss disinformation. Let’s discuss the destruction of public spaces, online spaces, to which AI contributes. It’s not only AI. We found that a problematic discussion, and especially the non-explainability, or partial explainability, of neural networks adds to the magic. We put something in, AI does something, and you get something out. This is why we always insist on having the source of the answer to the question. Yes, here is a source, and this is the first step. We don’t know how AI got this answer, but we know, and ChatGPT can know that, and Bard, and Baidu, and the others, they can know what the sources for that answer were. This is already the first step. Therefore, we see a lot of lack of transparency, confusion, and I’m afraid to say that it will be fertile ground for conspiracy theories. Because when you are just saying, well, trust us, we want to regulate you, and don’t ask questions, just trust what we are telling you, then, for me personally, I have a problem with that. I don’t think that things cannot be explained, at least the source of your conclusion. I know a neural network is not easy to explain technically. I have a colleague who is into AI, and he said, listen, be careful when you go to these IGFs of the UN; if you introduce explainability of neural networks, half of us will be in jail. And I said, okay, there are realistic concerns, but there are things that can be done. That’s my sort of criticism of the big, big systems. And to add on that a bit,

Sorina Teleanu:
if I may just reinforce one of your points: in all these discussions about AI governance, you’ve probably followed Sam Altman and a few of the other big guys saying, yes, it’s a huge mess that we’ve created. Well, they don’t really say we’ve created, but: AI is coming with all these challenges, and it’s going to break the world and destroy us, and this and that, and we need to regulate. But if you look carefully at this discussion on we need to regulate, what they’re saying is we need to regulate future AI: not the AI we as big companies have developed, but future AI. So let us do our thing, we’ll continue doing our best, and you should worry about the future. And I think that’s problematic, and I think we should hold them more to account for what’s happening right now. As Jovan was saying, we have problems right now with AI that we should be solving before looking at the future. Not saying we shouldn’t worry about the future and what might happen, but maybe put more resources into what’s happening right now and how we address today’s challenges. And that would be it.
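
The first transparency step argued for above, always returning the sources an answer was built from, is as much a data-shape decision as a modelling one. A minimal sketch, with invented names and example content, might look like this:

```python
from dataclasses import dataclass

@dataclass
class SourcedAnswer:
    answer: str
    sources: list[str]  # the speech excerpts the answer was built from

    def render(self) -> str:
        cited = "\n".join(f"  [{i + 1}] {s}" for i, s in enumerate(self.sources))
        return f"{self.answer}\nSources:\n{cited}"

# The pipeline refuses to emit an answer without its supporting passages.
a = SourcedAnswer(
    answer="Several delegations linked climate action to digital commerce.",
    sources=["Bangladesh statement, para 4", "Slovakia statement, para 2"],
)
print(a.render())
```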

Jovan Kurbalija:
I’m looking for one presentation which we may share later on, where we are basically, ah, here it is. I was recently in Brussels; obviously they’re preparing the new regulation, and we said, okay, let’s see what it means to regulate AI. You regulate hardware, you regulate data, you regulate algorithms, and you are the first to see it publicly; we didn’t show it because there was some problem with the PowerPoint during that session. And we regulate apps. What does it mean practically? What do you regulate? For example, as Sorina said, you can’t hear Sam Altman saying regulate apps, or even data. Why are they not showing sources? Obviously, if you find a book which is copyrighted as a source, there will be a problem; as you know, there are already court cases in the United States against OpenAI. Or hardware computing power, where things are happening with NVIDIA and the GPUs. What do you regulate? Read carefully. Next time you listen to Sam Altman, you can’t find anything except, oh, regulate AI capabilities. What does it mean? Basically: we created these capabilities, let us stop the other developments, and, I’m now being a bit cynical, let’s have a monopoly on this. I said, no, that’s against the competitive market. It’s against creativity. And it’s against other things. But there are problems that we have to deal with. How can apps be misused? How can people be thrown out of their jobs? How can disinformation be generated? You know the whole story; it’s part of the public discussion. But where do we regulate? You can’t hear companies talking about data; that’s non-existent. They are already concentrated on this blue one, which is basically vague. They avoid apps, the red one, because this is very concrete, you know. And hardware is more of a geopolitical discussion these days, between the US, China, and these big players, about who is going to have the hardware capability to process data. We’ll soon be publishing an article on this to bring clarity, as I said at the start: winter of excitement, spring of metaphors, summer of reflections, autumn of clarity. There could be disagreements. But let us not misuse the magic of the technology of AI. Magic is important. It can inspire. But let’s not misuse it. Let’s basically keep the magic of the technology while discussing governance issues where they are. That’s it. Looking again at the room. It’s always this sort of tension in the air. We want questions. No, we don’t want to force you to ask questions. But we have 10 minutes more? No, we have 25 more minutes. OK. Let’s listen. Let’s chat in the corridors if there are no other questions or comments. Oh, you have two questions on this side. I hope it is not a forced question because we are asking for questions. No? No, no, go ahead. Go ahead.

Audience:
Sorry, let me introduce myself. I'm Julia, a youth from Brazil, here with my delegation. I was thinking, when you were talking about Ubuntu and other societal aspects of the philosophy behind AI, or what the philosophy behind AI could be, and it got me wondering: is there any initiative to use AI as a means to preserve and develop small communities' history and culture, so that they are not lost in the transition we are experiencing? We are losing practical and physical knowledge and ways of sharing knowledge; families are being estranged by recent modern changes, moving too much, displaced by technology and job opportunities, and so on. Is there an initiative, a group, or an entity working to preserve small cultures, or at least to bring access to small cities and small communities so they can update and upload their knowledge? That knowledge of villages and small cities can comprise physical practices, agricultural practices, stories, mythology, and so on. It's also a personal question for me, because I think about how much we're losing, I'm from Brazil, from being away from the countryside, with the cities expanding and the countryside shrinking, even though the countryside is the majority of our landmass.

Jovan Kurbalija:
Great. I think it's an excellent question, and the short answer is yes. Let me give you an example; I always try to start with Diplo itself. We are a small organization. But let's say we are a small community somewhere in Amazonia, living along the river, with our own culture. We had to deal with the questions every human has to deal with: family, love, the purpose of life, what happens after you die, what you do with your kids, and these things. This is knowledge, very valuable knowledge, maybe not codified in the books of the big philosophers, but core knowledge. Can it be saved? Yes. Should it be saved? Yes. Are there initiatives to save it? No. Why is that the case? I can't tell you, but it's very sad, because we are losing this diversity of humanity. And I don't think there is a hierarchy of knowledge and experience. Money and power may not be equally distributed, but the human capability to innovate is. Now, is there an initiative? No. Can it be done with open-source tools? Yes. Is it easy to do technically? Yes. Organizationally? No, because you have to change habits and quite a few other things, but it's not undoable. Is there interest to support it? No. You will hear a lot here about inclusion and cultural diversity, but when it comes to concrete things, there is no action. I think countries like Brazil, especially the new government, which I think is keen on diversity, should push organizations like UNESCO to do something to preserve this knowledge by using AI. And, what is your name? Julia. It could be Julia's initiative. We have a question from a colleague here. Could you, well, the process is that you have to stand next to the mic. Please.
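For what "easy technically" could look like in practice, here is a minimal sketch of such a pipeline using only open-source tools: record oral histories, transcribe them locally, and archive them with metadata. The file name and metadata are hypothetical, and real projects would need consent records, the original language, and community-controlled access.

```python
# A minimal sketch of an open-source oral-knowledge archiving pipeline.
# File names and metadata are hypothetical placeholders.
# Requires: pip install openai-whisper (plus ffmpeg on the system).

import json
import whisper  # open-source speech-to-text model

model = whisper.load_model("base")  # small model, runs on a laptop

recording = "elder_interview_2023.mp3"   # hypothetical field recording
result = model.transcribe(recording)     # returns a dict with a "text" key

entry = {
    "community": "example river community",  # hypothetical metadata
    "speaker": "anonymised",
    "topic": "agricultural practices and local stories",
    "source_audio": recording,
    "transcript": result["text"],
}

# Append to a simple local archive, one JSON record per line.
with open("community_archive.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```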

Audience:
Yes, thank you. My name is Nicodemus Nyakundi, I'm from Kenya, and I've come under the Dynamic Coalition on Accessibility and Disability. I work at KICTANet on digital accessibility, more specifically for persons with disabilities. There's something that has been disturbing my mind that I really need to understand when it comes to AI. AI has not deviated so much from the normal approach to machines and computers; it is still based on input-output models. So we have AI that is mostly trained on what I call perfect data, perfect because it is predetermined data that is considered normal. But we want that AI to work with imperfect humans, humans who make errors. The good thing is that when we make mistakes, as humans we go back and correct them. So my question is: what approach should we take to ensure that AI is as human as us, that it can work with persons with disabilities and contribute to their basic life needs, so that it does not create more marginalization by defining another form of "perfect" which not all of us fit? Thank you.

Jovan Kurbalija:
Sorina? OK. Let me unpack a few issues, starting with persons with disabilities. AI offers serious possibilities; we are seeing it with transcription and other tools for persons with disabilities. Yet persons with disabilities are not prominent in AI debates. Here again, smaller communities could ask actors like the UN to check how accessible they are to persons with disabilities. We recently did a study, and we are going to check diplomatic websites for how disability-friendly they are. That push has to come strongly from bottom-up communities and other actors. That's the first question.

Should we make AI look like us? That's a philosophical issue, and I'm not sure. I would preserve AI as a tool in our mindset. It will be a powerful tool, but always a tool, as Sorina used it during the course this summer to enhance learning. Keep it always as a good tool, not as a master but as our servant. That's very important mentally. It will be a powerful servant, which may revolt and say, OK, I want to have some power. But that's basically where I would keep it. Obviously, we will try to mimic ourselves; it excites us. Mary Shelley's Frankenstein is the best example. Dr. Frankenstein wanted to create the perfect creature, and that creature, if you recall the book, was created to be good. Then it went out of the lab, people were afraid and became aggressive, and the creature reacted and started getting nasty, which is basically how we now perceive Frankenstein's creature. This is why I am very uneasy with anthropomorphizing AI, presenting it as human. It is exciting, you can have a nice event, people are excited: oh, Sophia, or whatever all these robots are called. Fortunately, I don't see any Sophia at the IGF to answer your questions. What we did instead was have a coffee machine as AI; for those of you who were at IGF Berlin 2019, it was a participant in one session. You can search for "IGF coffee machine". That's an element we have to be very careful about; otherwise we will end up as in the story of Dr. Frankenstein's creature, thinking the creature is creating problems for us.

But let me make one suggestion. If the IGF gives you a chance to be a bit imperfect, and I share it here on the screen, you can go to the Philosopher's Path here in Kyoto. I heard it's a nice walk. Run away, be a bit imperfect, don't attend all the sessions, though thank you for coming to ours. A leading Japanese philosopher used to walk along this Philosopher's Path, reflecting on society, purpose, happiness, and other issues. I don't know if you will meet somebody from the philosophy department at Kyoto University, which was one of the best in Japan, but that would be an interesting discussion. And back to the tradition of Ubuntu, coming from Kenya, OK, it's more towards the south, but all that tradition of us belonging to a collective and being empowered by the collective, by family, by our surroundings. That's the practical answer: if the weather is nice, go there. We don't have cherry blossoms; I will criticize the IGF organizers for not holding this in April, but we can come back to Kyoto for that.
The Philosopher's Path is an interesting place where these thinkers walked, like Kant used to walk in what is now Kaliningrad, at that time a Prussian city. The famous Immanuel Kant walked the same route every day; he was late only once, and why remains a small mystery of philosophical discussion. I had forgotten the name of the Japanese philosopher: it is Nishida Kitaro, basically one of the best Japanese philosophers. I plan to read him more carefully, see what we can learn from him about AI, and develop this discussion further. Hence my call for imperfection: try to discover this lovely city. You will have Diplo's reporting anyway, so you can read what happened. But should this be official at this point? No, I'll get in trouble with the secretariat. Thank you for coming. Let's walk the talk, enjoy the corridors and the chats, and continue this interesting debate about bottom-up AI and our right to be humanly imperfect. Thank you. Thank you, Sorina.
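As a side note on the website study Jovan mentions, a rough sketch of an automated disability-friendliness check might look like the following. The URL is a placeholder, and a real audit would need far more than this (for example axe-core and manual testing with screen readers).

```python
# A rough sketch of a basic automated accessibility check: scan a page
# for a few simple WCAG-related signals. The URL is a placeholder.
# Requires: pip install requests beautifulsoup4

import requests
from bs4 import BeautifulSoup

url = "https://example.org"  # placeholder, not an actual diplomatic site
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Images without alt text are unreadable to screen readers.
images = soup.find_all("img")
missing_alt = [img for img in images if not img.get("alt")]

report = {
    "url": url,
    "images": len(images),
    "images_missing_alt_text": len(missing_alt),
    # A lang attribute on <html> helps screen readers pick a voice.
    "has_lang_attribute": bool(soup.html and soup.html.get("lang")),
    # "Skip to content" links help keyboard-only navigation.
    "has_skip_link": any("skip" in (a.get_text() or "").lower()
                         for a in soup.find_all("a")),
}
print(report)
```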

Audience
Speech speed: 148 words per minute
Speech length: 1035 words
Speech time: 420 seconds

Jovan Kurbalija
Speech speed: 150 words per minute
Speech length: 8952 words
Speech time: 3590 seconds

Sorina Teleanu
Speech speed: 207 words per minute
Speech length: 1649 words
Speech time: 477 seconds