Day 0 Event #1 IGF LAC Space

Session at a Glance

Summary

This discussion focused on various aspects of internet governance and digital issues in Latin America and the Caribbean. Representatives from regional organizations like ICANN, LACNIC, and Internet Society shared updates on their work in areas such as policy development, technical training, and promoting multi-stakeholder participation. Key themes included efforts to strengthen regional cooperation, address cybersecurity challenges, and increase internet access and digital skills.


The session also featured presentations from researchers on topics including online child protection, indigenous youth and technology adoption, and cybersecurity policies in Brazil. These studies highlighted issues such as the risks of online grooming in gaming platforms, disparities in internet access between urban and rural indigenous communities, and the need for clearer cybersecurity frameworks.


Additionally, researchers presented work on combating misinformation through AI-assisted fact-checking tools for journalists, as well as the adoption of AI in judicial systems. The discussions emphasized the importance of responsible AI implementation that respects human rights and maintains human oversight.


Throughout the session, participants stressed the value of multi-stakeholder dialogue and regional cooperation in addressing internet governance challenges. They also noted the evolving nature of digital issues, with new topics emerging alongside longstanding concerns. The discussion underscored the ongoing need for spaces that facilitate cross-sector collaboration and knowledge sharing on internet policy in Latin America and the Caribbean.


Key points

Major discussion points:


– Updates from various Latin American and Caribbean internet governance organizations on their recent activities and initiatives


– Reflections on current global internet governance processes like WSIS+20, IGF, and the Global Digital Compact


– Presentations of research projects on topics like online child protection, indigenous youth and technology, cybersecurity policy in Brazil, and AI tools for journalism and the judicial sector


Overall purpose:


The goal of this discussion was to provide a space for Latin American and Caribbean internet governance stakeholders to share updates, discuss regional perspectives on global processes, and highlight relevant research being conducted in the region. It aimed to foster collaboration and knowledge-sharing among regional actors.


Tone:


The overall tone was informative and collaborative. Speakers shared updates and research findings in a professional manner. There was an underlying sense of enthusiasm about regional cooperation and contributions to global internet governance dialogues. The tone became slightly more academic during the research presentations but remained accessible overall.


Speakers

– LITO IBARRA: Moderator


– FEDERICA TORTORELLA: Co-host


– LIDIA ANCHAMORO: Member of Colnodo, a Colombian organization; participates in the LAC IGF Secretariat


– OLGA CAVALLI: Organizer of South School on Internet Governance


– RODRIGO DE LA PARRA: ICANN Latin America representative


– SEBASTIAN BELAGAMBA: Internet Society representative


– BASILIO RODRIGUEZ PEREZ: LAC-ISP representative


– ROCIO DE LA FUENTE: LAC-TLD representative


– LIA SOLIS: LACNOG representative


– MARIA FERNANDA MARTINEZ: CETIS representative


– PAULA OTEGUY: LACNIC representative, moderator for research presentations


Additional speakers:


– JOSE ROJAS: Lawyer, expert on civil crime, researcher on child grooming


– CAMILO ARATIA: Sociologist from Bolivia, researcher on indigenous youth and technology


– THAIS AGUIAR: Lawyer from Brazil, researcher on cybersecurity policies


– SOLEDAD ARENGUEZ: Expert in new technologies and education, researcher on Trust Editor project


– MARIA PILAR CHORENZ: Doctor in law and social rights, expert on technology and rights, researcher on AI adoption in judicial sector


Full session report

Internet Governance in Latin America and the Caribbean: A Multi-Stakeholder Dialogue


This session focused on the Latin America and Caribbean Internet Governance Forum (LAC IGF) space, a regional initiative that brings together diverse stakeholders to discuss internet governance issues. As explained by moderator Lito Ibarra and co-host Federica Tortorella, the LAC IGF aims to foster collaboration, knowledge-sharing, and multi-stakeholder dialogue on internet governance in the region.


LAC IGF Structure and Purpose


The LAC IGF serves as a platform for discussing regional perspectives on global internet governance processes and highlighting relevant research. It operates through a multi-stakeholder model, involving participants from government, civil society, the private sector, and the technical community. The forum’s structure includes a steering committee and working groups focused on various aspects of internet governance.


Regional Organizational Updates


Representatives from key regional internet governance organizations provided updates on their recent activities:


1. Lidia Anchamoro (IGF Latin America and Caribbean Secretariat): Implementing new statutes and collaborating with Colnodo on various initiatives.


2. Olga Cavalli (South School on Internet Governance): Organizing the 17th edition in Mexico and celebrating their WSIS prize and champion status.


3. Rodrigo de la Parra (ICANN Latin America): Focusing on increasing regional participation in policy development processes and reaffirming multi-stakeholder principles.


4. Sebastian Belagamba (Internet Society): Implementing a new 5-year strategic plan and addressing challenges related to the WSIS+20 review and IGF mandate renewal.


5. Basilio Rodriguez Perez (LAC-ISP): Advocating for 6 GHz frequency allocation for Wi-Fi and expressing concerns about “fair share” proposals impacting network neutrality for small ISPs.


6. Lito Ibarra (LAC-IX): Deploying new infrastructure across 34 internet exchange points in the region.


7. Paula Oteguy (LACNIC): Supporting local internet governance initiatives through programs like FRIDA and detailing the Líderes program for developing internet governance leaders.


8. Rocio de la Fuente (LAC-TLD): Developing a single server for domain name queries to improve efficiency and reduce costs for ccTLDs in the region.


Líderes Program


Paula Oteguy introduced the Líderes program, an initiative by LACNIC to develop internet governance leaders in Latin America and the Caribbean. The program aims to strengthen regional participation in global internet governance discussions and foster local expertise.


Research Presentations on Internet Governance Issues


The session featured presentations from researchers on various internet governance topics relevant to Latin America:


1. José Rojas: Presented research on child grooming risks in online gaming environments, highlighting the paradox between real-world and online safety practices for children. His study emphasized the need for better education and awareness about online risks in gaming platforms.


2. Camilo Aratia: Shared findings on technological appropriation among indigenous youth in Bolivia, revealing significant disparities in internet access between urban and rural indigenous communities. His research employed ethnographic methods to understand how limited access affects digital literacy and cultural practices.


3. Thais Aguiar: Discussed the evolution of cybersecurity policies and frameworks in Brazil, focusing on the complex governance structure involving multiple stakeholders. Her research analyzed policy documents and interviewed key actors to map the cybersecurity ecosystem in Brazil.


4. Soledad Arenguez: Presented on the development of an AI tool called Trust Editor for detecting misinformation in news articles. Her work involved collaboration with computer scientists and journalists to create and test the tool’s effectiveness in identifying false or misleading information.


5. Maria Pilar Chorenz: Explored the adoption of generative AI in judicial systems in Argentina, raising questions about the implications of AI in legal decision-making. Her research methodology included surveys and interviews with legal professionals to gauge attitudes and concerns about AI integration in the justice system.


Conclusion


The LAC IGF space continues to play a crucial role in facilitating dialogue and collaboration on internet governance issues in Latin America and the Caribbean. This session demonstrated the diverse range of activities undertaken by regional organizations and the depth of research being conducted on topics ranging from cybersecurity to digital inclusion. By bringing together various stakeholders and perspectives, the LAC IGF contributes to a more inclusive and informed approach to addressing the complex challenges of internet governance in the region.


Session Transcript

FEDERICA TORTORELLA: Perfect, all set. You can stop making me host, you can remove me as host. Thank you very much, Lito.


LITO IBARRA: Perfect, I'll say it now. Okay, you can… Shall we begin?


FEDERICA TORTORELLA: Yes, we're starting now, and I give the floor to Federica. Thank you.


LITO IBARRA: Hello. Hello, good afternoon. Can everyone here in the room hear me? Lito speaking. Hello everyone. Can you hear me in the room? Thank you. Can you hear me on Zoom as well? We can hear you as well. Thank you very much. So, let’s begin with today’s session of the LAC space. We have interpretation into English and into Spanish. If anyone needs interpretation into Spanish or into English, you can choose the language of your preference on Zoom. We had a technical issue, which is why we started a few minutes late, but it has now been solved, so you can choose, as I said, the language of your preference, either Spanish or English. So, welcome to this space that traditionally takes place at international events, where we try to catch up with everything that Latin American and Caribbean organizations, the organizations that operate in our region, are doing. For today’s program, today’s agenda, we have several items. We have Federica Tortorella on Zoom. You can see her on the screen. She and Rocío de la Fuente from LAC-TLD will be co-hosts online for this session. Without further ado, we will start. We will have three main segments. One for the reports of the organizations of the region; each organization has three minutes for their presentation, and we will try to stick to our schedule. Then we will have a second round with some organizations of the region, who will share their reflections regarding the main current processes, the GDC, the WSIS+20, the IGF, and so on and so forth. And finally, the reports on recent research on internet governance that is relevant for the region. Federica, would you like to say something?


FEDERICA TORTORELLA: Federica speaking. Thank you, Lito, for the support. Greetings, everyone. Thank you for being here in this eighth edition of the IGF LAC space. As Lito said, I would like to go over the dynamics of the session. First of all, we will have the regional organizations and two rounds of interventions of three minutes each, so we ask you to please stick to the agenda so that we can deal with all the topics. In the second part of the session, we will have our researchers, those who are part of the Líderes program, who will share with us their findings, research, and projects. Lito, would you like to take the floor once again, or would you like me to continue? Lito speaking. Unless you need to add something, I can call upon people to participate, following the agenda you provided. If you see that I’m missing something, please do not hesitate to interrupt me. So we will call Lidia. Lidia, please go ahead. Please turn on the mic.


LIDIA ANCHAMORO: Can you hear me? This is Lidia speaking. Thank you, everyone. It’s a pleasure for me to see you here in Riyadh, and those of you who are online. I am Lidia Anchamorro. I am part of Colnodo, which is a Colombian organization, and I also participate in the Secretariat of the LAC IGF. From Colnodo, we accompany processes such as the Colombian Internet Governance Roundtable, which has participated in 11 editions of the forum, with the participation of different sectors of society, to discuss, to draft documents, and to make recommendations to the government on everything regarding digital policies. Based on this experience, in 2023 we applied to become the Secretariat of the LAC IGF, and we were chosen by the stakeholders of the forum. This is what I will share with you today. This year, we organized the 17th edition of the forum, which took place in Santiago de Chile. Last year, we had the 16th edition in Bogotá, which was the reactivation of these in-person events after the pandemic. We had organized these events virtually for three years, so it was a real challenge to foster once again the gathering and the interest of people to participate in the multi-stakeholder, cross-sector debates of the forum. We had several sessions in a room where the program committee proposed panels that took place with the participation of different organizations. The challenge for this year was to implement the new statutes of the forum. These statutes were adopted in 2021, after a consultation process organized with the Latin American community, and their goal was to specify more clearly the roles of the different bodies of the forum. The bodies established in these statutes were the Multistakeholder Committee, that is to say the strategic committee, the Committee for the Selection of Workshops, and a secretariat that, until that time, was run by LACNIC.
From the beginning of the LAC IGF, the secretariat was run by LACNIC until 2023, when the application process was opened and Colnodo was assigned as secretariat. With this new structure, and with these roles more clearly defined, the idea was to strengthen the levels of representation and participation of the different actors, and throughout this year we have implemented these groups. This is Federica speaking, I’m sorry to interrupt. This is Lidia speaking again. I will try to discuss the workshops later on.


LITO IBARRA: Lito speaking. We have just three minutes, OK? So I will give the floor to Olga Cavalli.


OLGA CAVALLI: Can you hear me? This is Olga Cavalli speaking. Thank you. Thank you for inviting us, and thank you for being here. I will do my best for you to hear me properly. At our school, we are organizing the 17th edition in Mexico in the second week of May, from the 6th to the 9th. The program has evolved and grown throughout the years. It started as one week of training, and now it’s a six-month program: it has a prior virtual course of two months, followed by a course in a hybrid format. We have also signed an agreement with the University of Mendoza, where I studied and received my engineering diploma: after an evaluation covering the first two weeks, fellows are able to do research and can then obtain a diploma in internet governance. We’ve already had four cohorts of this course of studies. We have trained 8,000 fellows, and this training is free of charge; fellows are also entitled to accommodation and meals. We’ve received the WSIS prize and WSIS champion status for the impact on internet training. We’ve had 50 scholars or fellows with accommodation and meals, and more than 100 speakers from all over the world; in-person fellows came from Latin America, and virtual fellows participated from all over the world, from North America, Asia, and Europe, among others. We have also organized the eighth edition of an internet governance training, a three-day course similar to the one organized in a hybrid manner, which took place last month, in November. And I think that my three minutes are up. Thank you, Lito.


LITO IBARRA: Lito speaking. Thank you, Olga. Thank you for sticking to the three minutes. We will give the floor to ICANN, with Rodrigo de la Parra, who is connected online. Please, go ahead.


RODRIGO DE LA PARRA: Rodrigo de la Parra speaking. Thank you, Lito. Good afternoon. Greetings, everyone, both to those of you who are in person and to those of you who are connected online. Congratulations, and thank you for organizing this space. It’s very important to keep updated regarding the work of the organizations and what we do regarding internet governance, its importance all over the world, and the importance of the IGF. As you know, this work is always carried out in a very coordinated manner. ICANN’s Latin American and Caribbean team continues to work on two main issues. The first one is to foster greater participation of actors from Latin America and the Caribbean in ICANN’s policy development processes. This work goes on, and many activities are organized at our in-person meetings, which this year took place in three different places in the world, as you know: San Juan, Kigali, and Istanbul, where we met with the LAC Space and had the participation of several of our colleagues who work with ICANN. Something very important that I would like to focus on is that we centered many of our efforts on training, at the technical level, those actors that are involved in the operation of the DNS in the region. We have to remember that part of internet governance lies in these operational activities, and I think we’ve had very good results: these activities helped us understand, from the operational point of view, how the internet can be interoperable and safe. So thank you for this opportunity. We will continue working along these lines next year, and I hope to meet you all real soon. Thank you very much.


LITO IBARRA: Okay, let’s now go with ISOC. Sebastián is speaking. Thank you,


SEBASTIAN BELAGAMBA: Thank you, Lito. I am Sebastián Belagamba. Let me tell you that this is a very particular year for our organization, because we have a five-year strategic plan and we are starting to implement it next year. So let me share with you some news on our strategic plan and the work we will do with the community. Our strategy was defined in March this year. The Internet Society Board approved the strategy based on two main challenges that were identified. One of them is global inequality: we understand there is an issue of global inequality that we need to address. The other is the lack of trust in the Internet: people lack trust in the Internet. These two challenges are big issues that we can address. For global inequality, we need to see how we can connect those people who are not connected to the Internet, and we also need to see how we can improve connectivity in countries that are already connected; we need to make that connection more efficient. These two global challenges are translated into the strategic goals we have set for next year. One of them is that people all over the world may have access to a resilient and affordable Internet. Secondly, people need to have a safe and robust Internet experience; they need to feel protected in their daily life. These are the two main goals we have for this next five-year plan. Implementation will be carried out through a number of programs. I am circulating a PowerPoint presentation where you’ll see more details on these programs, and I would like to invite you all to visit our website, where you’ll find our five-year strategic plan and its implementation during 2026.


LITO IBARRA: Lito speaking. Thank you so much, Sebastian. And thanks for sticking to the time. We are okay with the time and the agenda. Now we will give the floor to Esteban Lezcano. He will be speaking on behalf of LAC-ISP. Esteban, please go ahead.


ESTEBAN LEZCANO: Thank you, Lito. This is Esteban Lezcano speaking. And Basilio will be the one presenting today.


BASILIO RODRIGUEZ PEREZ: Basilio speaking. Good afternoon, everyone. LAC-ISP is an association of Latin American and Caribbean ISPs. We work on the development of the ISP market and on the regulatory asymmetries that allow ISPs to provide their services, their network maintenance services, particularly to regions that are underserved. In Brazil, thanks to these regulatory asymmetries, we can say that ISPs cover 52% of the fixed broadband market. We have two main issues in Latin America that we need to advocate for. One of them is the 6 GHz frequency for Wi-Fi. This is really important for small ISPs to be able to deliver their services with quality. This frequency needs to be usable outdoors in order to improve the service in areas that may otherwise not be able to use a wireless service. There is another issue we’re facing, and this is the talks and discussions we see from some regulators about “fair share”. We are against the idea of fair share, because there is nothing fair about it. It implies a huge risk for the whole Internet, impacting network neutrality, and it will lead to problems for small ISPs. In South Korea, they started to charge for content, and this cannot be applied in Latin America, with the thousands and thousands of ISPs we have here. I am almost finishing my intervention. So, this is something that would cause huge problems for ISPs in Latin America.


LITO IBARRA: Lito speaking. Thank you, Basilio, and thanks, LAC-ISP. Now Lito Ibarra, this is me, will speak about LAC-IX. LAC-IX is the Latin American and Caribbean association of internet exchange points. In case you don’t know, an exchange point is where the local traffic of a country is exchanged among providers, in order to minimize the time, the cost, and the delay in communications. The IXs, or exchange points, are growing with the passing of time; together with the DNS, they are part of the internet’s critical infrastructure. In the case of LAC-IX, we’re deploying new infrastructure. LAC-IX updates its database once at the end of each year, and this year we had four new exchange points; we now have 34 traffic exchange points in the region that are part of LAC-IX. These are members of our organization; they are not the total number in the region, but they are the ones that are part of the organization. We continue working with other organizations, particularly with the Internet Society, LACNIC, and other technical organizations in the region, and we are also coordinating and working together with the technical community. We hold a general assembly at the LACNIC event in May, and we hold a virtual meeting during the year. This is really relevant for participants. We held four training sessions for technicians. Bear in mind that these exchange points host CDNs, copies of content from large internet content providers, in order to provide more efficient access for users. They also work with technicians. There are working groups: one of them is a public policy working group, and another follows up on the LACNIC policy development process, which, as you know, is open and public. They report on any policy proposal concerning the critical infrastructure of the Internet, particularly those affecting IXPs.
And we have communications on LinkedIn and on our website for our members and for those of you who are interested. And this is the end of my intervention. Now, I would like to give the floor to ALAI, Raúl Echeverria. Are you connected?


ROCIO DE LA FUENTE: Rocio speaking. We don’t see Raúl online, but he can take the floor at the end of the session.


LITO IBARRA: Lito speaking. Okay, we are now going to give the floor to LACNOG, Lia Solis. Please go ahead.


LIA SOLIS: Lia speaking. Hello, can you hear me? Okay, good morning, everyone. Let me begin with my intervention. LACNOG is the Latin American and Caribbean network operators group. It is a not-for-profit organization based in Montevideo, Uruguay, and it has been operating for 14 years now. Our mission is to bring operators together, meaning the technical people operating the networks, helping us to communicate, and we aim to be a reference association. Our mission is to strengthen the relationships among operators all around the region by fostering knowledge and by promoting the work of our working groups. We want to foster discussions, exchange information, and collaborate with our community. Our organizational culture is based on volunteers: we have more than 50 volunteers working in an inclusive manner. As for our structure, we have a program committee that calls for technical proposals related to internet operations, and we have a board, working groups, and the community. For next year, we are working on different programs. We held webinars and interviews with members of the community. We have recorded podcasts, a sort of talks and discussions where technicians can get information, and we also published relevant content and spread that content through our discussion list. We also have alliances with different organizations, such as the Internet Society, LACNIC, and ICANN, among others, and we try to strengthen our institutional image; we would like to position our organization as a technical organization. We have an annual event at the LACNIC meeting, and we hold an event in each of the countries. This year, within the framework of strengthening the technical community, we are working on creating a training working group, and we deliver trainings on IPv6, security, and peering, among other topics.
Through our working groups, we foster the participation of the Latin American and Caribbean region in the IETF. And I think this is the end of my intervention. Thank you.


LITO IBARRA: Thank you, Lia, for this report. Now, I would like to give the floor to CETIS. Fernanda Martinez is here. Fernanda speaking.


MARIA FERNANDA MARTINEZ: Thank you, Lito. CETIS is the Center for the Study of Technology and Society at the University of San Andres. It is an interdisciplinary academic center whose goal is to promote and train people on different topics related to internet policies. We were able to meet our goals this year. We have three programs, and we held two events throughout the year. We participated in 19 events, with over 450 people taking part in our different activities. I will share the annual report of our activities via a link that I will be posting in the chat, but let me give you a very brief summary. This year, we are happy because one of our most important reports has been published. We had been working on this report throughout 2020 and 2021. This is the Internet Universality Indicators report, a project led by UNESCO. The goal of this project is to map each of the countries based on five internet-related categories: rights, openness, accessibility, multi-stakeholder participation, and cross-cutting issues such as gender. It is a very relevant report, and even though some elements may have changed, the methodology used is really useful, because it allows us to map the situation in each of the countries. This is also very relevant because we believe that in order to create public policies, we need to base ourselves on evidence. So this report covers one of the aspects that is very important for us: the creation of evidence, in order to provide that evidence to decision makers so that they are able to craft robust policies. The approach is really enlightening, and the project includes an advisory board.
It is a multi-stakeholder body, and this gave us the opportunity to have a very relevant dialogue with the stakeholders in the ecosystem, particularly in Argentina, meaning the private sector, the public sector, and civil society, and we were able to draft recommendations for each of these areas. We highly recommend reading this report. I will end my intervention now; there is another very interesting project, and I will talk about it in my second intervention. It’s a pleasure for me to be able to participate here. And thanks again, Lito.


LITO IBARRA: Lito speaking. Thank you, Fernanda. Let’s move to LACNIC. We will give the floor to Paula Oteguy. Thank you, Paula.


PAULA OTEGUY: Paula Oteguy speaking. Thank you, Lito. Greetings, everyone. Good morning, and good afternoon to those of you who are in Riyadh. It’s a pleasure to present, on behalf of LACNIC, the work that we’ve been doing in this space, the IGF LAC space. I would like to comment on the support program for local internet governance initiatives, to tell you a little bit about what it is, and also to seize this opportunity to emphasize the importance and relevance of these initiatives in the multi-stakeholder model. This program provides support to the internet governance initiatives in our region so that they can organize their events. It is addressed to regional and national initiatives, young people’s initiatives at the local level, and internet governance schools. To give you an idea of what it involves, this support mainly translates into funds for the organization, implementation, and execution of these spaces, and also, when the initiative requires it, a webinar with up to 500 participants, as well as the possibility for LACNIC experts to contribute to the topics on the agenda of those initiatives, participating actively in panels and discussions with technical experts dealing with cybersecurity and DNS security, among other topics. And some numbers for you: in 2024, we supported 10 local initiatives in the region and two youth initiatives at the local level. Here, I’d like to mention that these youth initiatives at the country level are emerging across the region. We also supported the LAC IGF in its 17th edition, the youth IGF, and highly recognized internet governance schools, including the virtual school on internet governance and a recent initiative of the Chilean University for Internet Governance and International Relationships.
To apply for this support, you have to visit our website, in the opportunities section. There, under internet governance, you will find a form that you need to complete, and we will contact you in order to make this support possible. Yes, I’m nearly finished. I’d just like to highlight the role of the RIRs and their importance in the multi-stakeholder model.


LITO IBARRA: Lito speaking, thank you, Paula. We will close this section with the LAC-TLD presentation in charge of Rocio de la Fuente.


ROCIO DE LA FUENTE: Rocio de la Fuente speaking. Thank you, Lito. Thank you for your support at the in-person event, and thank you everyone for participating in another edition of the IGF LAC space. I’d like to tell you about the progress we’ve made on the single search server, which enables a single query for a domain name under multiple ccTLDs of countries and territories in the region. In this manner, we provide an additional channel to check which domains are available for registration and which are already registered. For those already registered, it can direct users to the websites of the ccTLDs to consult the available information. This year it has been operating in beta mode, with a lot of effort from the ccTLDs, so we invite you all to use it and to spread the word, because it is a way to promote the use of domain names in the region. I’d like to take this opportunity as well to tell you about the activities and events we’ve been developing with other organizations and with the technical community over the last two years, and mainly in the last year, where our goal was to strengthen the relationship with other actors of the ecosystem and also with governmental bodies. I think these efforts have been very effective, because they have enabled us to bring together different organizations, among them ICANN organizations. We’re very happy with this work, and our intention is to continue consolidating the technical community as a sector. I would like to give Federica the floor for this last minute that I have left. Federica,


FEDERICA TORTORELLA: Federica speaking. Greetings, everyone. I'd like to share some information, as Rocio said. Thanks to this session, we have built a repository at the regional level, and the idea is to consolidate in a single document the main information about the organizations in the region that deal with internet governance issues. So we invite you to look at this repository, and those of you who would like to contribute are most welcome to do so. We will share the link in the chat. You can write to me or to Rocio, and we will tell you how you can cooperate. The main idea is to consolidate the information so that it is easily accessible and we know which organizations are out there and what they are doing. That will be it on my side. Lito, you have the floor.


LITO IBARRA: Thank you very much, Lito speaking. Thank you for sticking to your three minutes. Yes, we will be waiting for that link; I think it will be very useful for many of us, because we've heard a lot of information and we've made notes, so it's a great initiative to have this information online. Thank you very much. Let's move to the next block of the LAC Space. The question was to share reflections from the organization you belong to on the processes that are taking place. We have five organizations, rather six organizations, sorry, that are registered, and you have three minutes each. We will start with the Internet Society. Sebastian, please.


SEBASTIAN BELLAGAMBA: Sebastian speaking. I know that three minutes is not a long time, but just to give you an idea, I think we're going through a very important year for internet governance. Next year, the IGF mandate will end, and so will the WSIS mandate: not only the IGF, which is the mechanism we have to gather all this information, but also the elements regarding WSIS, the action lines and so forth, that emerged in 2005 in Tunis. We had a mandate for 10 years that was renewed in 2015, and now, in 2025, we will reach the end of that mandate. This is happening in parallel to other events. The Global Digital Compact, approved last year, has some points of contact with the WSIS action lines, and we need to understand, in the intergovernmental process, how they interact. It's not very clear how the implementation of the GDC will go, what the WSIS+20 review will look like, and how the GDC and the WSIS action lines will interact, if they do at all. So what's important is to bear this in mind; all of this is on the agenda to be discussed in the next 12 months. We, as a community devoted to internet governance, either directly or indirectly, have to have a position and a line of action. So in the next few weeks and months, we have to define this coordinated action. I think the most important thing to highlight is that those of us who are part of the technical community of the internet are trying to build coordination and collaboration, a position that is maybe not a single position but a consolidated one, at least in the context of the internet, in order to submit a productive proposal and have an effective outcome emerge from these processes. So it's a year full of challenges. Thank you very much.


LITO IBARRA: This is Lito. We will give the floor to Rodrigo de la Parra with ICANN Latin America. You have three minutes, please.


RODRIGO DE LA PARRA: Rodrigo de la Parra speaking. Thank you, Lito. Yes, of course, it is a challenge to share in these three minutes so many thoughts about processes that are so different. But these three processes give us the opportunity to reaffirm the principles that we agreed upon throughout the WSIS process, when discussing internet governance and everything that has been going on in this process, with the participation and the consensus of the multi-stakeholder model. In order to reaffirm these principles and the bodies that have been created around this consensus, NETmundial was a process that reminded us of them, and the Global Digital Compact contains many reaffirmations of these main principles regarding internet governance. So I think there are some implementation challenges, and it's very important, as Sebastian said, that we be coordinated in order to verify that any implementation made around these topics is based on these principles. So we have WSIS+20 next year. I think that in our region we can point to the examples of collaboration that we have, some important cases of how we've been working in the last few years, not only within the technical community, but of how we have integrated ourselves in a very practical and effective manner with other sectors, such as the governmental sector and intergovernmental organizations at the regional level. So it's a huge task as a region, I believe. We need to foster this collaborative mindset in the region. So once again, thank you for this opportunity.


LITO IBARRA: Lito speaking. Thank you, Rodrigo. So Sebastian and Rodrigo have defined the context we are in, and also the one we will have next year. It's been a very critical year with big changes, and I think many of us would like the decision to continue with the IGF to be taken at that WSIS+20 meeting, along with a greater budget to sustain these debates that take place at the global level. We need to think about something in the Latin American region. Again, as the Secretariat will discuss, we have some examples at the regional level, and we also have to look at the national IGF examples. If, in a very pessimistic scenario, the IGF were to change or be suspended by a decision of the states, the national and regional events wouldn't have to be suspended as well. We have developed a culture and an environment in which we can go on discussing these topics at the national and regional level. Of course, we'll have to have more financing; this is always a challenge. But the machine is running. So I think that this is a process that we have copied, so to speak, from the WSIS summits of 2003 and 2005, but we can contribute a Latin American and Caribbean flavor, and we have to go on with these efforts. We have a very important community of events and people in the region, in our countries, some stronger or with more capabilities than others, but we can go on supporting one another. This is in the event that the IGF were suspended or given a different form. But as Rodrigo and Sebastian said, we have to pay close attention to what happens with this. I believe we have a very strong and solid ecosystem of organizations at the regional level, many of them physically present at the House of the Internet in Montevideo, and we can go on working in this manner. Thank you very much. Let's give the floor to Olga Cavalli. Thank you.


OLGA CAVALLI: Olga speaking. I will echo Sebastian, Rodrigo, and other colleagues. What can I tell you about the South School of Internet Governance and the IGF? Well, this was our place; this is where we were born. What should we do, as part of these different processes, to strengthen ourselves? We need to be very engaging, we need to be diverse, and this takes time and resources. But we also need to be inclusive. The most important example was the IGF this year, and I would like to highlight Lillian's role. She has been key, because we were able to reinvigorate a space based on very intense coordination work. This is an example to replicate, and something I said at the ICANN meeting, where I had a similar question: we need to understand that stakeholders are different, and being on an equal footing doesn't mean that we are the same. Governments have their own processes; they need to take their own decisions and they have their own responsibilities, but we need to work together anyway, and that is our responsibility. Reinforcing the space has to do with this. If the mandate of the IGF is renewed, at the school we expect to have an IGF; we are a very collaborative community, and we know that we can work together, but the school is always open to offer the space.

LITO IBARRA: Thank you, Olga; you had one minute left. But now we will give the floor to Lillian Chamorro from Colnodo. And, as at the IGF, I think there is an open mic.


LILLIAN CHAMORRO: Lillian speaking. To close this idea and to open the next one, let me add the following. In this IGF edition, we had 99 proposals from different Latin American countries and from different organizations to be here. We selected 15 sessions that were of really high quality. This was a very complex process: recommendations were being made, and defining the sessions was a very dynamic process as well. We had over 120 panelists from different countries and over 300 face-to-face participants, and all sessions were streamed. When it comes to the second question, we need to make the most of these platforms and of everything we have done in the Latin American and Caribbean region. We need to add our own flavor, as Lito said, and this is what we do at the LAC IGF: we give our own flavor to internet governance. Internet governance is not only crafted in these spaces, where we meet everyone from all over the world; governance is also crafted in our local spaces, and these regional governance topics are also discussed at the national level. This is where realities, problems, and even good aspects are reflected. I loved seeing the IGF in El Salvador because they had a rock session, a heavy metal music session. And this is what we have in Latin America: we need to show ourselves as we are. We need to participate in the WSIS and in the GDC; we need to show what we do, the power we have, and how we can help to relate those spaces. We are addressing topics that are important for the GDC, and these topics are being discussed in our communities; they are part of our regional processes. We also need to understand that the discussion doesn't need to be technology-centered: we need to center ourselves not on technology, but on human and environmental aspects, because this is something that we need to take into account in our region.
We have the Amazonia, we have other regions and things to take care of, and this needs to be reflected in our discussions. The IGF has been a great platform for working with different ecosystems and stakeholders, but it is also an opportunity to create synergies and to show what we are doing. I think that we have been appropriating this ecosystem, and we have new opportunities to strengthen our spaces and to give our flavor to these spaces in order to work together. We are really learning a lot. I am always learning from the communities I work with, and I think we can replicate this in other discussion spaces.


LITO IBARRA: Thank you, Lillian. And to close this second block of the LAC Space, I would like to give the floor to the LAC-ISP representative. Basilio, please go ahead.


BASILIO RODRIGUEZ PEREZ: Thank you so much. At LAC-ISP, we are always aware of and participating in different discussions, such as the IGF, NETmundial and WSIS discussions, and this is thanks to the support we receive in order to participate in those spaces. The multi-stakeholder mechanism of the internet is really important for us; the internet itself would not make sense without the whole mechanism that was created over time. And I like what Lito said on the possibility that, even if the IGF did not continue, the local IGFs would. This is what we have to do: we need to focus and work to keep this multistakeholder model and this multistakeholder mechanism. But we also need to work as required. Thank you.


LITO IBARRA: Lito speaking. Thank you, Basilio. So we will now close this second block of the LAC Space. We have a third block, or a third session, in which we will hear from researchers from CETI and LACNIC. So now I would like to give the floor to Paula Oteguy. She is online and she will be the moderator of this part of the session. Paula, please go ahead.


PAULA OTEGUY: Paula Oteguy speaking. Thank you very much, Lito. We will now begin with the second part of the session. This is a space we started three years ago, and we would like to keep promoting it. The idea is to share research projects on internet development that are relevant to our region. This space will be devoted to researchers from the Líderes program at LACNIC and researchers supported by CETI, who will present an overview of the work they have been doing so far. Having said this, we will start with LACNIC and the Líderes program, and then we will give the floor to the CETI representative. The Líderes program at LACNIC supports researchers, with a particular focus on local projects, over a period of three months; they have the support of well-known mentors in our region. The results of the research… from LACNIC we support researchers… sorry, can you hear me okay?


ESTEBAN LEZCANO: Esteban speaking. Yes, Paula, we are hearing you. Please go ahead.


PAULA OTEGUY: Paula speaking. So, as I said, the results of the research remain the property of the authors, but we help with the promotion of these results. There will be three presenters today, from the research projects carried out last year. They will have eight minutes each, so I would kindly ask our presenters to stick to their time. I would like to introduce José Alberto Rojas from Peru. He is a lawyer, an expert on cybercrime, and he will be presenting research on child grooming in online gaming and the protection of children in Latin America. José, if you are there, you have the floor.


JOSE ROJAS: José Rojas speaking. Thank you, everyone. Today I have the honor of presenting an investigation into child grooming and online gaming, one of the most concerning aspects of cybercrime. This research aims to describe this phenomenon and offer recommendations that could be useful for educators, parents, legislators, and the industry itself. Grooming is understood as sexual proposals made to children or adolescents, either face to face or through information and communication technologies. This is a well-known problem, but it takes on new dimensions when we speak about gaming. To give you a different overview, consider this paradox: we teach children not to talk to strangers on the street, but in the digital world they talk to strangers without knowing who is behind the avatars or usernames. Online games are essential spaces for socializing, but they also represent a very risky environment. Platforms such as Minecraft, Roblox, or Fortnite bring young people together, and they are always interacting. Part of the research carried out in Chile reveals that 82 percent of children recognize the risk of sexual harassment online, yet 40 percent of these children have permission from their parents to play in virtual spaces with strangers. They have no supervision, and they have their parents' consent to interact in these video games with people they don't really know. This gives us an idea of the risk and of the state of supervision. Grooming is facilitated in video games because groomers may adopt false identities, and interaction in chats using real-time communication tools allows groomers to establish trust relationships very quickly. One of the main issues, something we saw in each of the studies, is the sending of gifts or virtual coins; these are used as manipulation tools.
The main goal of this research is to see how grooming operates in online gaming and the implications it has in Latin America. Based on this, I set the specific goals of the research: to identify the most vulnerable platforms and video games, to analyze the behavior patterns of groomers in this environment, to assess the level of risk and the awareness of this risk among parents and educators, and to propose measures to mitigate the issue. The research combines qualitative and quantitative analysis. We gathered different cases through interviews, and we also interviewed digital experts. We analyzed the legislation in Latin America to assess its effectiveness in fighting grooming in online gaming. One of the main findings is that 180 cases on Roblox and other platforms were reported to law enforcement agencies. I consulted law enforcement agencies in different Latin American countries, and this is detailed in the research. The most vulnerable platforms are those that offer online, real-time chat; avatars are widely used by groomers, and the lack of awareness among parents is also important to take into account. During the COVID pandemic there was an increase in this risk, and the cultural acceptance of these video games is something that we saw in our research in Chile. When it comes to victims, minors said that they experienced anxiety, fear, and social isolation, among other consequences; sometimes these experiences create problems in the long term. Among the examples we gathered, one case in Peru, which gave rise to the investigation, happened in 2023, when there was a first judgment: a man had contacted children through a platform and asked them for pictures. In Argentina, one adolescent was manipulated through a video game platform.
He was then contacted via WhatsApp and asked to provide sexual content. During the pandemic, grooming reports increased 81 percent, and this has to do with the time that minors spend online. Among the proposals delivered in the research are strengthening digital education, creating campaigns for parents and educators in order to prevent grooming, and teaching minors to recognize risky or suspicious behavior online. It is also important to foster multi-stakeholder participation, to engage governments and child protection organizations in designing protection measures, and to create agreements among countries to prosecute cases. We should also promote innovation, such as the use of artificial intelligence to moderate content and identify behavioral patterns, and improve verification and parental control features on platforms. It is likewise important to promote laws in Latin America allowing the real prosecution of grooming in all its forms. One of the biggest issues we identified in our research is that there are no concrete or proper protocols for dealing with grooming online. The video gaming support teams were the ones addressing these issues, but they were not able to share information once a criminal investigation started. So, to finish my intervention, let me add the following: grooming in online gaming is a growing threat, and it requires a coordinated response. My research aims to shed light on this issue and to promote protection. The protection of children in online environments is a collective task involving families, governments, companies, and society as a whole. Thank you so much for your attention.


PAULA OTEGUY: Paula Oteguy speaking. Thank you, José, and thank you for sticking to the time. Congratulations on that and for sharing the main findings of your research. Your research is very important for identifying vulnerabilities regarding this topic, and its recommendations are of high value to all of us. So thank you very much. We will go on with our next researcher, Camilo Aratia, a sociologist from Bolivia who is online with us. His research is on young indigenous people and technological appropriation. Camilo, welcome.


CAMILO ARATIA: Camilo speaking. Good morning or good afternoon, depending on where you are connecting from. Let me tell you what this investigation is about. My research is related to the appropriation of technology among young people who identify as indigenous. I worked within a framework of what we understand by technological appropriation, a model that involves four aspects: access, learning, integration, and transformation. Technological appropriation should move through these stages so that we can say that these young people have appropriated the technology. We first have to analyze their access to technology; then how they are learning once they have access; then how they have integrated it, because, particularly regarding self-identified indigenous populations, we have to understand how this is integrated into their culture; and finally transformation: how much technology has transformed their perception as indigenous people and their communities, but also in a broader sense. That is what we are talking about when we talk about technological appropriation. Based on this, I interviewed many young indigenous people and ran focus groups, not across the whole territory of Bolivia, but in some specific regions. I worked with Aymara and Quechua young people, with young people from Afro-Bolivian communities and from Chiquitano communities, and also with Chaqueño young people. The Chaqueños are not one of the indigenous peoples of Bolivia, but in the Bolivian Chaco, on the border with Argentina, there are 20 indigenous peoples, and the young people there identify themselves as Chaqueños; they have roots in these indigenous peoples. When we did the focus groups, we included them because they identify themselves as indigenous people.
So the first point was to understand that young indigenous people, at least in Bolivia, often inhabit nearly urban spaces; they are not populations isolated from technology. In this sense, we asked them about access, taking this into account, and there were many interesting answers. Depending on the population and the geographical location where the research was carried out, we found very different realities. Take internet access, meaning not only that the cables exist, but that they have devices to connect to the internet, or that they have access in schools, local and regional government offices, and their communities. In the more urban spaces with larger populations, such as the Aymara, Chaqueño, and Afro-Bolivian communities, they had a good connection, good access to the internet, but it was not broadband; it was mobile. That is to say, they were connected to the internet, but mostly through mobile devices or mobile data, so they depended on the three companies that we have in Bolivia to connect to the internet. But there was a huge contrast with the lowlands, with the Chiquitano young people who lived in a community between Concepción, in Bolivia, and the border with Brazil, next to the Amazonian territory. There, internet access was scarce. That is the first contrast: there is only Entel, one of the telecommunications companies; they could not choose, because with another provider they wouldn't have internet access at all, and internet was billed by the hour. They told me that they didn't have internet in their homes; they had to go to certain places in their town or village, or travel to a more urban area, in order to connect to the internet.
So the first impression is that these young people who identify as indigenous have very different possibilities of accessing the internet. When we started asking about learning, for instance about digital tools such as ChatGPT, they didn't have this in mind; they couldn't access it because they had more basic problems in accessing the internet. There we could see the difference between those who lived in the capital city and those who lived in more isolated or rural areas. This is one of the first differences that was very interesting to study. We could have discussed ChatGPT and the internet, but their access was too limited for that. Regarding transformation and integration in these communities, there were many answers showing that many of the people who are connected are trying to understand that this is a reality that is here to stay. There was a collective of Aymara and Afro-Bolivian people in Coroico, in La Paz, devoted to artistic activities; they have a new technology law in their community, and they discussed problems such as digital violence and grooming, and there was a broader awareness of all these matters there, whereas in the more isolated spaces the reality was different. Still, there is an effort to integrate technology in all these processes; they see technology as a bridge. That is what they told us. Something interesting as well is that in the Aymara community, technology boomed during the pandemic because it was their connection to civilization, and that created an interest in migrating to urban areas in order to study, because many of these young people cannot pursue these technological courses of study in their communities. That is why they want to move to the city.
And regarding transformation, it was very complicated, because seeing technology as an element of transformation in these communities is not yet very visible in their cases. I think that this is the most difficult aspect to grasp when we discuss technological appropriation and all of this.


PAULA OTEGUY: Paula Oteguy speaking. Sorry to interrupt you. You need to wrap up.


CAMILO ARATIA: OK, Camilo speaking. The conclusion would be that indigenous people who live in rural areas or small urban areas still have problems accessing technology, because in many cases there is no fixed network, only mobile data. So we are still in these preliminary stages. Thank you very much.


PAULA OTEGUY: Paula Oteguy speaking. Thank you, Camilo, for sharing your research and your conclusions. This enables us to learn about these specific perspectives and points of view. Thank you very much. To close the research from the Líderes program, I will give the floor to Thais Aguiar, a lawyer from Brazil and a researcher on digital topics. Her research is called Cybersecurity Policies in Brazil: Where We Come From and Where We Are Going.


THAIS AGUIAR: Thais speaking. First of all, greetings, everyone. Thank you very much for this opportunity to present my research alongside these very interesting researchers; I am very happy to participate in this forum. I would like to thank Paula and my tutor, and also those of you who participate in the Líderes program. It's an honor for me to present my research, which analyzes the regulatory framework of cybersecurity in the country. The goal was to investigate the gaps and the progress in Brazil in the implementation of these policies. Regarding methodology, it is a qualitative, exploratory, documentary study that analyzes public policies and case studies, tracing the history of cybersecurity policies in Brazil and their future. This is a very brief summary, and I invite you to read the whole work on the internet. Where do we come from? In the last few decades, the arrival of the internet in society has been a great change for Brazil. The challenge is to promote the use of technology in a safe manner, in order to preserve an open and secure internet that promotes human rights. The path of Brazilian cybersecurity is marked by a complex evolution of bodies and by the need to balance individual rights and cybersecurity. This is a very complex study that involves many bodies and actors, and this structure of cybersecurity and internet governance faces many challenges, such as the need for greater clarity of roles and for cooperation among the different bodies. In 1995, we had the Internet Steering Committee, which promoted the multi-stakeholder model and served as a model for different bodies, not only in the country but also worldwide.
There were several bodies, like NIC.br among others, and many initiatives across the states. We have the Cybersecurity Committee, and also, as I said before, we need more clarity and cooperation among the stakeholders. Several events gave shape to cybersecurity in the country, such as the 2014 FIFA World Cup, the 2016 Olympic Games in Rio, and the increase in cyber threats. These events demanded more participation from the bodies that needed to guarantee cybersecurity and also pushed for cybersecurity to be treated as part of fundamental rights. Brazil has a history of multistakeholder governance in spite of the challenges, and my research covers the approval of the National Cybersecurity Strategy in 2020. This strategy's main goal is to strengthen cybersecurity in the country with a multi-sector approach that involves civil society and the public and private sectors. The strategy reaches toward a mature model of cybersecurity in different dimensions, such as the political one, but there are also limitations and threats: transparency problems and the securitization of cyberspace. So, coming to the conclusion on where we come from and where we are heading: Brazil faces the need for a more solid and effective cybersecurity framework, promoting cooperation among stakeholders. That is to say, we need a unified and effective strategy to protect infrastructure, services, and individuals in the digital space.
Policies need to follow evidence-based approaches and to involve technical actors from the public and private sectors and civil society, so that they are in line with democratic values and respect fundamental rights. Brazil has to draw on its experience with the multistakeholder model in order to be a digitally sovereign nation while protecting individual rights and promoting an inclusive and safe digital environment. Brazil has the possibility of becoming a leader in this sense, guaranteeing cybersecurity and becoming an example in the region by improving public policies and consolidating a more solid multistakeholder model toward a sovereign digital nation. Thank you for your time.


PAULA OTEGUY: Thank you very much, Thais. Without a doubt, cybersecurity faces huge and complex challenges. You mentioned some of them, namely cooperation among different institutions and organizations. So I invite you all to have a look at all the research projects that we have been sharing today. I will be posting the link in the chat for you to be able to access them. I would especially like to thank the presenters, who represent a group of 16 great researchers that we had in our 2023 edition. And now I would like to give the floor to Fernanda.


MARIA FERNANDA MARTINEZ: Fernanda speaking. Thank you, Paula. Let me echo your comments in congratulating the presenters on their research. I will read the full studies later on, and congratulations on the Líderes program for incentivizing research in Latin America and the Caribbean. Now I will introduce the next two researchers, who are going to share their findings with us: Soledad Arenguez and Maria Pilar Chorenz. Let me say that the idea of this space is to introduce some of CETI's ongoing research, but also to bring in new researchers. We do have different research projects ongoing at CETI, but we would like to bring colleagues in and introduce them to such an important space as the IGF. So, Soledad Arenguez will be presenting first. She is an expert in new technologies and education, a graduate and postgraduate teacher at different universities in Argentina, coordinator of the communication department at the USCA, and a researcher on misinformation on social media and media literacy. Today she will be sharing with us the progress of her research on the Trust Editor. Soledad, please go ahead.


SOLEDAD ARENGUEZ: Thank you so much for the introduction. It's a pleasure to be part of this LAC Space session and to speak about our work on the Trust Editor. Let me tell you about the project. This is an organization that was born in Argentina. We are committed to fighting misinformation, and we want an ecosystem based on information we can trust. We have different initiatives ongoing, such as media literacy and the development of ideas, products, and solutions. One of the solutions we have been working on is the Trust Editor. What is the Trust Editor about? We know we are facing a trust crisis that the media are undergoing, and this is not the case only in Argentina. This is a worldwide issue, because there is a situation of news disconnection, to which we need to add fake news and misinformation. Misinformation is not new, but it has new dimensions and new complexities based on the advancements that we see in media. So, given this situation and the quick proliferation of new pieces of misinformation, we see new challenges, including challenges posed by artificial intelligence. This is affecting journalism and communication media because there is no time to check or verify information. So, having said all this, the question of how we can face misinformation from within the media led us to create our Trust Editor prototype. What is this development about? It is a prototype that uses artificial intelligence to detect inconsistencies in news articles before they are published. The idea is to alert editors so they can adjust the information before publishing that piece. The goal is to intervene in a timely manner, that is, to work in the prebunking stage in order to avoid sharing misinformation or generating content with fake or false information. The solution is focused on two key aspects.
On the one hand, the idea is to reduce the possibility of sharing fake news, using artificial intelligence as one of the tools, and, on the other, to increase trust in the media and the news being published, reducing biases and polarization. This project is part of the LEAP project, with the support of Trusting News, an organization working on the creation of indicators to raise trust in the media. The prototype aims at working within a CMS, that is to say, within the publishing systems of media outlets, to help editors in the news-making process. Let me give you some background information. We have the journalist and the editing unit. Journalists start producing or drafting their article and add it to the CMS, where the Trust Editor is already working. When the draft is sent to editors, the system gets activated, so you can read the article and get some indicators. This is what we call quality indicators, and when I say quality, I know this concept may be quite complex. What do I mean by this, and why are we emphasizing this complexity? Well, we have been working with a group of publishers, editors, and journalists in order to understand what makes an article trustworthy. Let me give you an example: the article needs to cite its sources or authoritative voices, or, for example, to provide diversity in the sources it cites. These indicators have been created together with professionals. The development is now in Spanish. We would like to scale the project and to keep working in the prebunking space. There is a user-friendly visual interface, so in the text you can identify, in colors, the inconsistencies or the paragraphs or phrases that need to be improved, and this can be analyzed on a dashboard.
In a nutshell, let me also share with you in the chat the presentation, so you can see the demonstration of how this dashboard would look and work, with one particular article as an example. The Trust Editor identifies, for example, adjectives or entities, and we are working on the other indicators, though this requires training that we still need to carry out. What we want to see is how inconsistencies are reduced and translated into certain indicators. So we work with the sources and the expressions being used, for example. The Trust Editor, and that is the reason for the name, will analyze the companies that are mentioned, the people, the entities, and the number of times these names appear, because this may reveal certain biases; the adjectives being used and their amount; and words or terms that allow us to differentiate between information and opinion. This is an ongoing project. We want to help publishers. That is why the Trust Editor will deliver red flags for the human eye, for journalists to be able to review the information. This is not automated; I mean, the point is not to eliminate journalists, but to strengthen their work. We want to show the red flags, and the editor will check every indicator or red flag, which will give room for improvement. Thank you so much for the time, and we expect to have further news in future sessions.
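[Editorial aside, not part of the presentation: the kind of indicator logic Soledad describes — counting how often entities are mentioned and flagging opinion-laden adjectives — could be sketched roughly as follows. The word list, threshold, and function names are illustrative assumptions, not the actual Trust Editor implementation.]

```python
from collections import Counter
import re

# Illustrative word list: adjectives that blur the line between
# information and opinion. A real system would learn or curate these.
OPINION_ADJECTIVES = {"outrageous", "brilliant", "disastrous", "heroic"}

def entity_mentions(text, entities):
    """Count occurrences of each known entity name in the text."""
    counts = Counter()
    for name in entities:
        counts[name] = len(re.findall(re.escape(name), text))
    return counts

def red_flags(text, entities, max_mentions=3):
    """Return human-readable flags for an editor to review.

    The editor, not the tool, decides what to change: the output is
    a list of warnings, never an automatic rewrite.
    """
    flags = []
    for name, n in entity_mentions(text, entities).items():
        if n > max_mentions:
            flags.append(f"'{name}' mentioned {n} times (possible bias)")
    for word in OPINION_ADJECTIVES:
        if re.search(rf"\b{word}\b", text, re.IGNORECASE):
            flags.append(f"opinion adjective: '{word}'")
    return flags
```

The design mirrors the point made above: the system only surfaces red flags for a human editor to review, keeping the journalist in control.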


PAULA OTEGUY: Thank you, Soledad. This is really interesting. Please share the information in the chat so that we can check it when it is published. It is very important to highlight that the human eye is there: there is human review behind this tool. Now I am going to give the floor to Maria Pilar Chorenz. She holds a doctorate in law and social rights, is an expert on technology and rights, a professor in Cordoba, and a CETI researcher. She will talk about her research on supporting the judicial sector in the responsible adoption of generative artificial intelligence, covering cases from Argentina, Brazil, Colombia, and Mexico. Pilar is leading the research in Argentina. Pilar, please go ahead.


MARIA PILAR CHORENZ: Pilar speaking. Thank you, everyone, and thanks to those of you who are still in this session. All the presentations have been really interesting. Today, I will very briefly summarize the findings we have gathered throughout the year on the adoption of generative AI, particularly in the judicial sector of Argentina, because that is my area of expertise, within the framework of this project led by CETI. The idea is to understand why the different judicial branches in Latin America are adopting artificial intelligence tools, particularly in decision-making processes. The questions are: are these branches open to these processes, or are they using them explicitly? What risks are associated with the use of these tools? Are judges and legal operators taking other elements into account? Is the tool being used only for adopting certain measures, and not for final decisions? The idea of this project, as Fernanda said before, is to gather evidence that we can provide to the legal ecosystem in order to develop products that allow the use of generative artificial intelligence, implemented in a responsible manner that respects human rights and other legal standards. The methodology of this project is based on interviews and document analysis, and the idea is to understand how legal operators and the judicial ecosystem embrace the use of generative artificial intelligence and related tools in decision-making. Argentina has a federal system, and therefore we have different jurisdictions with different realities in terms of resources and the cases they manage. For example, only four jurisdictions handle 60% of the cases in the country. This is quite important to take into account because it has a direct impact on the legal system when implementing these tools.
The judicial branches that have adopted these tools are not the ones with the highest volume of cases, but rather those with a low volume of cases, and this calls our attention, because it does not solve the problems that all jurisdictions face. All jurisdictions have something in common in the country: there is low trust, or a perception of low trust, in the judicial sector. That is to say, society does not trust the judicial system, because people believe that justice is slow and that decisions are not fair. So judicial operators see these tools as a way of improving the delivery of justice. The emergence of generative AI tools created interest among legal operators, because they were able to adopt resolutions, make decisions, or even draft resolutions in a short time, and this translated into an improvement in the number of cases they could address. In this context, there are no specific use cases in Argentina implementing generative AI, but we can identify three large universes where these tools are being tested. The first comprises uses with the support of the superior courts: the provinces of San Luis, San Juan, and Rio Negro have their own protocols for the use of AI tools. The second comprises uses supported by academic institutions and some state-based institutions, through a program being developed throughout the country whose idea is to identify the use that legal operators are making of these tools; there are no published results, so we cannot really assess the impact in this context. The largest universe has to do with the individual uses that legal operators, say judges or others, are making in order to facilitate certain tasks, such as judgment summaries, case summaries, or looking for case-law elements.
There is one particular case where a judge mentions the use of AI in a specific resolution, but it had no public repercussions, which leads me to analyze the reactions. There is no institutionalized reaction and no official standpoint from the bar associations; there is a wait-and-see position, if you will, from the judicial branch and from the representatives of the ecosystem, who are waiting to see what these tools will bring. However, there is some consensus in the interviews: the introduction of AI is a fact, and the judicial branch has to embrace it and adapt to its use. Legal actors understand that some tasks, for example summarizing case law or a judgment, can be done by artificial intelligence, although they could also be done with other tools and not necessarily with AI. But they know that in legal practice and in decision-making these tools should not be used, because there are certain responsibilities that must be met by those making decisions. These functional responsibilities have to do with data protection and the way personal data are protected and managed in legal processes. Human control is another element that needs to be taken into account when working on resolutions or when using these tools, and there is consensus that personnel need to be trained in order to use generative AI. To wrap up, so far we have seen that most respondents believe they need to move beyond their current reluctance in some processes and adapt to generative AI, but that the lack of regulation may lead to problems when using these tools. Multi-stakeholder dialogue is also necessary when discussing the use of generative artificial intelligence, and data management is another aspect to be taken into account.


MARIA FERNANDA MARTINEZ: Thank you very much. This research will be published in mid-March on the CETI website. This is Fernanda Martinez speaking. Before listening to Federica for the closing, I would like to thank all the researchers for their work and for the large scope of themes that fall under the umbrella of internet governance. Many of the themes we discussed ten years ago are still being discussed, and there are many new topics. Today, academia is in dialogue with technical experts, and there is a dialogue across different sectors and across professionals with very different backgrounds, which is very enriching. Going back to the question about the first and second parts of the session, it shows how invigorating and vital these dialogues among the different sectors are in practice, in the field. Then we will see what happens with those spaces and those sectors, but I think this is a way to see the very rich dialogue that takes place among them. Something we say at CETI when we start an activity or a project that needs continuity is that it is difficult to get it started, but once that space disappears, something about that debate goes away as well. So I am calling on you to maintain and keep up these dialogue spaces, because afterwards it is very hard to bring them back. Federica, you have the floor.


FEDERICA TORTORELLA: Federica speaking. Yes, thank you. Thank you very much to all the researchers and the regional organizations, and to our remote participants. Thank you to our interpreters for helping us with this very valuable task. With this, we close this edition of the LAC Space at the IGF, and see you in Norway very soon. Thank you very much. Have a wonderful rest of the day. Thank you, Dito. Bye, everyone.


L

LIDIA ANCHAMORO

Speech speed

108 words per minute

Speech length

426 words

Speech time

235 seconds

IGF Latin America and Caribbean Secretariat implementing new statutes

Explanation

The IGF Latin America and Caribbean Secretariat is implementing new statutes adopted in 2021. These statutes aim to clarify the roles of different bodies within the IGF structure.


Evidence

New bodies established include the Multi-stakeholder Committee, the Committee for the Selection of Workshops, and a new secretariat organized by Colnodo.


Major Discussion Point

Updates from Regional Internet Governance Organizations


O

OLGA CAVALLI

Speech speed

118 words per minute

Speech length

563 words

Speech time

285 seconds

South School on Internet Governance organizing 17th edition in Mexico

Explanation

The South School on Internet Governance is organizing its 17th edition in Mexico. The program has evolved from a week-long training to a six-month program with various components.


Evidence

The program now includes a two-month virtual course, a hybrid format course, and an agreement with the University of Mendoza for a diploma in internet governance.


Major Discussion Point

Updates from Regional Internet Governance Organizations


Agreed with

RODRIGO DE LA PARRA


PAULA OTEGUY


LITO IBARRA


Agreed on

Need for regional cooperation and knowledge sharing


R

RODRIGO DE LA PARRA

Speech speed

113 words per minute

Speech length

573 words

Speech time

303 seconds

ICANN Latin America focusing on regional policy development participation

Explanation

ICANN Latin America is focusing on fostering greater participation of regional actors in ICANN’s policy development processes. They are also emphasizing technical training for DNS operators in the region.


Evidence

Activities organized in ICANN’s in-person meetings in San Juan, Kigali, and Istanbul, with participation from regional colleagues.


Major Discussion Point

Updates from Regional Internet Governance Organizations


Agreed with

OLGA CAVALLI


PAULA OTEGUY


LITO IBARRA


Agreed on

Need for regional cooperation and knowledge sharing


Need to reaffirm multi-stakeholder principles in global processes

Explanation

There is a need to reaffirm the principles of multi-stakeholder governance in global internet processes. This includes processes like the Global Digital Compact and WSIS+20 review.


Major Discussion Point

Current Internet Governance Processes and Challenges


Agreed with

SEBASTIAN BELAGAMBA


LITO IBARRA


LILLIAN CHAMORRO


Agreed on

Importance of multi-stakeholder model in internet governance


Differed with

BASILIO RODRIGUEZ PEREZ


Differed on

Approach to internet regulation and governance


S

SEBASTIAN BELAGAMBA

Speech speed

130 words per minute

Speech length

638 words

Speech time

294 seconds

Internet Society implementing new 5-year strategic plan

Explanation

The Internet Society is implementing a new 5-year strategic plan starting next year. The plan focuses on addressing global inequality and lack of trust in the Internet.


Evidence

Two main goals: ensuring people worldwide have access to a resilient and affordable Internet, and providing a safe and robust internet experience.


Major Discussion Point

Updates from Regional Internet Governance Organizations


Challenges of WSIS+20 review and IGF mandate renewal

Explanation

The upcoming year presents challenges with the WSIS+20 review and the renewal of the IGF mandate. These processes will shape the future of internet governance discussions.


Major Discussion Point

Current Internet Governance Processes and Challenges


Agreed with

RODRIGO DE LA PARRA


LITO IBARRA


LILLIAN CHAMORRO


Agreed on

Importance of multi-stakeholder model in internet governance


B

BASILIO RODRIGUEZ PEREZ

Speech speed

92 words per minute

Speech length

380 words

Speech time

246 seconds

LAC-ISP advocating for 6 GHz frequency for Wi-Fi

Explanation

LAC-ISP is advocating for the use of 6 GHz frequency for Wi-Fi, especially for outdoor use. This is seen as important for small ISPs to deliver quality services.


Major Discussion Point

Updates from Regional Internet Governance Organizations


Concerns about “fair share” proposals impacting network neutrality

Explanation

LAC-ISP expresses concerns about “fair share” proposals, arguing they could negatively impact network neutrality. They believe such proposals could cause significant problems for small ISPs in Latin America.


Evidence

Example of South Korea starting to charge for content, which LAC-ISP argues cannot be applied in Latin America.


Major Discussion Point

Current Internet Governance Processes and Challenges


L

LITO IBARRA

Speech speed

122 words per minute

Speech length

1346 words

Speech time

661 seconds

LAC-IX deploying new internet exchange point infrastructure

Explanation

LAC-IX is deploying new internet exchange point infrastructure in the region. They now have 34 traffic exchange points that are members of the organization.


Evidence

Four new exchange points added this year, bringing the total to 34 members.


Major Discussion Point

Updates from Regional Internet Governance Organizations


Agreed with

OLGA CAVALLI


RODRIGO DE LA PARRA


PAULA OTEGUY


Agreed on

Need for regional cooperation and knowledge sharing


Importance of regional examples of collaboration

Explanation

Lito Ibarra emphasizes the importance of regional examples of collaboration in internet governance. He suggests that these examples can contribute to global discussions.


Major Discussion Point

Current Internet Governance Processes and Challenges


Agreed with

RODRIGO DE LA PARRA


SEBASTIAN BELAGAMBA


LILLIAN CHAMORRO


Agreed on

Importance of multi-stakeholder model in internet governance


P

PAULA OTEGUY

Speech speed

118 words per minute

Speech length

1074 words

Speech time

544 seconds

LACNIC supporting local internet governance initiatives

Explanation

LACNIC is providing support to local internet governance initiatives in the region. This support includes funding, webinar services, and expert participation in events.


Evidence

In 2024, LACNIC supported 10 local initiatives, 2 youth initiatives, and several internet governance schools.


Major Discussion Point

Updates from Regional Internet Governance Organizations


Agreed with

OLGA CAVALLI


RODRIGO DE LA PARRA


LITO IBARRA


Agreed on

Need for regional cooperation and knowledge sharing


R

ROCIO DE LA FUENTE

Speech speed

128 words per minute

Speech length

294 words

Speech time

137 seconds

LAC-TLD developing single server for domain name queries

Explanation

LAC-TLD is developing a single server for domain name queries across multiple ccTLDs in the region. This tool aims to provide an additional channel for checking domain availability and registration information.


Evidence

The tool is currently operating in beta mode with participation from various ccTLDs.


Major Discussion Point

Updates from Regional Internet Governance Organizations


L

LILLIAN CHAMORRO

Speech speed

143 words per minute

Speech length

451 words

Speech time

188 seconds

Opportunity to showcase Latin American internet governance model

Explanation

Lillian Chamorro argues that there is an opportunity to showcase the Latin American internet governance model in global discussions. She emphasizes the importance of regional and national governance spaces in shaping internet governance.


Evidence

Examples of local initiatives like the IGF in El Salvador incorporating cultural elements like heavy metal music.


Major Discussion Point

Current Internet Governance Processes and Challenges


Agreed with

RODRIGO DE LA PARRA


SEBASTIAN BELAGAMBA


LITO IBARRA


Agreed on

Importance of multi-stakeholder model in internet governance


J

JOSE ROJAS

Speech speed

119 words per minute

Speech length

915 words

Speech time

460 seconds

Child grooming risks in online gaming environments

Explanation

Jose Rojas presents research on the risks of child grooming in online gaming environments. The research aims to describe this phenomenon and offer recommendations for various stakeholders.


Evidence

Study in Chile showing 82% of children recognize the risk of sexual harassment online, but 40% have permission to play with strangers without supervision.


Major Discussion Point

Research on Internet Governance Issues in Latin America


C

CAMILO ARATIA

Speech speed

104 words per minute

Speech length

1037 words

Speech time

596 seconds

Technological appropriation among indigenous youth in Bolivia

Explanation

Camilo Aratia presents research on technological appropriation among indigenous youth in Bolivia. The study examines access, learning, integration, and transformation aspects of technology use among different indigenous communities.


Evidence

Findings show varying levels of internet access and use among different indigenous communities, with urban areas having better access than rural areas.


Major Discussion Point

Research on Internet Governance Issues in Latin America


T

THAIS AGUIAR

Speech speed

102 words per minute

Speech length

735 words

Speech time

430 seconds

Evolution of cybersecurity policies and frameworks in Brazil

Explanation

Thais Aguiar presents research on the evolution of cybersecurity policies and frameworks in Brazil. The study analyzes the regulatory framework and identifies gaps and progress in implementing cybersecurity policies.


Evidence

Approval of the National Strategy of Cybersecurity in 2020, which aims to strengthen cybersecurity with a multi-sector approach.


Major Discussion Point

Research on Internet Governance Issues in Latin America


S

SOLEDAD ARENGUEZ

Speech speed

107 words per minute

Speech length

960 words

Speech time

536 seconds

Development of AI tool to detect misinformation in news articles

Explanation

Soledad Arenguez presents the development of an AI tool called Trust Editor to detect inconsistencies in news articles before publication. The tool aims to alert editors to potential misinformation and improve trust in media.


Evidence

The tool uses quality indicators developed with publishers and editors to identify potential issues in articles.


Major Discussion Point

Research on Internet Governance Issues in Latin America


M

MARIA PILAR CHORENZ

Speech speed

106 words per minute

Speech length

983 words

Speech time

553 seconds

Adoption of generative AI in judicial systems in Argentina

Explanation

Maria Pilar Chorenz presents research on the adoption of generative AI in judicial systems in Argentina. The study examines how legal operators are embracing AI tools in decision-making processes and the associated risks and challenges.


Evidence

Findings show varying levels of AI adoption across different jurisdictions, with some courts developing protocols for AI use and others relying on individual use by legal operators.


Major Discussion Point

Research on Internet Governance Issues in Latin America


Agreements

Agreement Points

Importance of multi-stakeholder model in internet governance

speakers

RODRIGO DE LA PARRA


SEBASTIAN BELAGAMBA


LITO IBARRA


LILLIAN CHAMORRO


arguments

Need to reaffirm multi-stakeholder principles in global processes


Challenges of WSIS+20 review and IGF mandate renewal


Importance of regional examples of collaboration


Opportunity to showcase Latin American internet governance model


summary

Multiple speakers emphasized the importance of maintaining and strengthening the multi-stakeholder model in internet governance, both at regional and global levels.


Need for regional cooperation and knowledge sharing

speakers

OLGA CAVALLI


RODRIGO DE LA PARRA


PAULA OTEGUY


LITO IBARRA


arguments

South School on Internet Governance organizing 17th edition in Mexico


ICANN Latin America focusing on regional policy development participation


LACNIC supporting local internet governance initiatives


LAC-IX deploying new internet exchange point infrastructure


summary

Several speakers highlighted initiatives aimed at fostering regional cooperation, knowledge sharing, and capacity building in various aspects of internet governance.


Similar Viewpoints

Both speakers emphasized the importance of improving internet access and infrastructure in the region, albeit through different approaches.

speakers

SEBASTIAN BELAGAMBA


BASILIO RODRIGUEZ PEREZ


arguments

Internet Society implementing new 5-year strategic plan


LAC-ISP advocating for 6 GHz frequency for Wi-Fi


Both researchers focused on using technology to address online risks and improve trust in digital environments, particularly for vulnerable groups like children and news consumers.

speakers

JOSE ROJAS


SOLEDAD ARENGUEZ


arguments

Child grooming risks in online gaming environments


Development of AI tool to detect misinformation in news articles


Unexpected Consensus

Integration of cultural elements in internet governance discussions

speakers

LILLIAN CHAMORRO


CAMILO ARATIA


arguments

Opportunity to showcase Latin American internet governance model


Technological appropriation among indigenous youth in Bolivia


explanation

Both speakers, despite focusing on different aspects of internet governance, highlighted the importance of integrating local cultural elements into discussions and research on internet governance in Latin America.


Overall Assessment

Summary

The main areas of agreement among speakers included the importance of the multi-stakeholder model, regional cooperation in internet governance, and the need to address both infrastructure development and sociocultural aspects of internet use in Latin America.


Consensus level

There was a moderate to high level of consensus among speakers on the importance of regional collaboration and the multi-stakeholder approach. This consensus suggests a strong foundation for continued cooperation in addressing internet governance challenges in Latin America. However, speakers also presented diverse research topics and organizational focuses, indicating a rich and varied approach to internet governance issues in the region.


Differences

Different Viewpoints

Approach to internet regulation and governance

speakers

BASILIO RODRIGUEZ PEREZ


RODRIGO DE LA PARRA


arguments

LAC-ISP expresses concerns about “fair share” proposals, arguing they could negatively impact network neutrality. They believe such proposals could cause significant problems for small ISPs in Latin America.


Need to reaffirm multi-stakeholder principles in global processes


summary

While LAC-ISP emphasizes concerns about specific regulatory proposals like ‘fair share’, ICANN focuses on broader multi-stakeholder principles in global processes. This indicates a difference in approach to internet governance, with LAC-ISP focusing on specific industry concerns and ICANN emphasizing broader governance principles.


Unexpected Differences

Focus of technological development efforts

speakers

BASILIO RODRIGUEZ PEREZ


ROCIO DE LA FUENTE


arguments

LAC-ISP advocating for 6 GHz frequency for Wi-Fi


LAC-TLD developing single server for domain name queries


explanation

While both speakers represent technical organizations, their focus on technological development differs unexpectedly. LAC-ISP is advocating for specific frequency allocation for Wi-Fi, while LAC-TLD is developing a centralized domain name query system. This highlights the diverse technical priorities within the region’s internet governance ecosystem.


Overall Assessment

summary

The main areas of disagreement revolve around specific regulatory approaches, priorities in technological development, and the focus of multi-stakeholder involvement in global processes.


difference_level

The level of disagreement among speakers is moderate. While there are differences in specific approaches and priorities, there seems to be a general consensus on the importance of multi-stakeholder involvement and regional cooperation in internet governance. These differences reflect the diverse interests and perspectives within the Latin American internet governance ecosystem, which could lead to rich discussions and potentially comprehensive solutions that address various stakeholder needs.


Partial Agreements

Partial Agreements

All speakers agree on the importance of multi-stakeholder involvement in global internet governance processes. However, they differ in their specific approaches: Sebastian Belagamba focuses on the challenges of WSIS+20 review and IGF mandate renewal, Rodrigo de la Parra emphasizes reaffirming existing principles, while Lillian Chamorro suggests showcasing the Latin American model as a unique contribution.

speakers

SEBASTIAN BELLAGAMBA


RODRIGO DE LA PARRA


LILLIAN CHAMORRO


arguments

Challenges of WSIS+20 review and IGF mandate renewal


Need to reaffirm multi-stakeholder principles in global processes


Opportunity to showcase Latin American internet governance model


Similar Viewpoints

Both speakers emphasized the importance of improving internet access and infrastructure in the region, albeit through different approaches.

speakers

SEBASTIAN BELLAGAMBA


BASILIO RODRIGUEZ PEREZ


arguments

Internet Society implementing new 5-year strategic plan


LAC-ISP advocating for the 6 GHz frequency band for Wi-Fi


Both researchers focused on using technology to address online risks and improve trust in digital environments, particularly for vulnerable groups like children and news consumers.

speakers

JOSE ROJAS


SOLEDAD ARENGUEZ


arguments

Child grooming risks in online gaming environments


Development of AI tool to detect misinformation in news articles


Takeaways

Key Takeaways

Regional internet governance organizations in Latin America and the Caribbean are actively working on various initiatives to strengthen internet governance in the region


There are ongoing challenges and opportunities related to global internet governance processes like WSIS+20 review and IGF mandate renewal


Research on internet governance issues in Latin America covers a wide range of topics, from child online protection to AI adoption in judicial systems


Multi-stakeholder collaboration and dialogue remain crucial for addressing internet governance challenges in the region


Resolutions and Action Items

Continue promoting and supporting local and regional internet governance initiatives


Prepare for upcoming global internet governance processes like WSIS+20 review


Further develop and implement tools to combat misinformation and enhance cybersecurity


Expand research on emerging technologies and their impact on internet governance


Unresolved Issues

How to effectively address the digital divide, particularly for indigenous communities


Balancing cybersecurity needs with protection of individual rights and freedoms


Regulatory frameworks for emerging technologies like generative AI in various sectors


Long-term sustainability of the multi-stakeholder internet governance model


Suggested Compromises

Adopting a balanced approach to AI implementation in judicial systems, maintaining human oversight


Developing region-specific solutions for internet governance challenges while aligning with global principles


Fostering collaboration between technical experts and policymakers to address complex internet governance issues


Thought Provoking Comments

We teach children not to talk to strangers outside, but in the digital world, they talk to strangers without knowing who is the person behind avatars or usernames.

speaker

José Rojas


reason

This comment highlights the paradox between real-world and online safety practices for children, drawing attention to a critical issue in online child protection.


impact

It set the stage for a deeper discussion on the risks of online gaming platforms and the need for better digital education and protection measures for children.


There were several bodies like NIC.br among others, and there are many findings around the states. We have the Committee of Cybersecurity, and also, as I said before, we need more clarity and cooperation among the stakeholders.

speaker

Thais Aguiar


reason

This comment emphasizes the complexity of cybersecurity governance and the need for better coordination among various stakeholders.


impact

It led to a discussion on the challenges of implementing effective cybersecurity policies and the importance of multi-stakeholder cooperation.


The idea is to understand how legal operators and the judicial ecosystem embraces the use of generative artificial intelligence and related tools in decision making.

speaker

Maria Pilar Chorenz


reason

This comment introduces the important topic of AI adoption in the judicial system, raising questions about its implications for legal decision-making.


impact

It sparked a discussion on the potential benefits and risks of using AI in the judicial sector, as well as the need for responsible implementation and human oversight.


Internet access was scarce. There we have the first contrast. There’s just Entel, one of the telecommunication companies. They could not choose, because if they had another company as a provider, they wouldn’t have access to the internet.

speaker

Camilo Aratia


reason

This comment highlights the stark digital divide that exists even within a single country, particularly affecting indigenous communities.


impact

It broadened the discussion to include issues of digital inequality and the challenges of technological appropriation in marginalized communities.


Overall Assessment

These key comments shaped the discussion by highlighting critical issues in internet governance across various domains – from child online safety to cybersecurity policy, AI in judicial systems, and digital inequality. They broadened the scope of the conversation beyond technical aspects to include social, legal, and ethical considerations. The comments also emphasized the need for multi-stakeholder cooperation and context-specific approaches in addressing these complex challenges.


Follow-up Questions

How can the multi-stakeholder model of internet governance be preserved and strengthened at national and regional levels if the global IGF were to be suspended or significantly changed?

speaker

Lito Ibarra


explanation

This is important to ensure continued dialogue and collaboration on internet governance issues in Latin America and the Caribbean, even if global structures change.


What are the specific implementation challenges for the Global Digital Compact and how can they be addressed while maintaining core internet governance principles?

speaker

Rodrigo de la Parra


explanation

Understanding these challenges is crucial for effectively implementing the GDC while preserving the multi-stakeholder model and other key principles.


How can Latin American and Caribbean countries better coordinate their positions and contributions to global internet governance processes like WSIS+20?

speaker

Rodrigo de la Parra


explanation

Improved regional coordination could strengthen the voice and influence of LAC countries in shaping global internet governance.


What are the most effective strategies for improving digital literacy and awareness of online risks like grooming among parents, educators, and children in Latin America?

speaker

José Rojas


explanation

This is critical for protecting children from online exploitation and ensuring safe use of technology.


How can policies and infrastructure development be tailored to address the significant disparities in internet access and use between urban and rural indigenous communities?

speaker

Camilo Aratia


explanation

Addressing these disparities is essential for ensuring equitable digital inclusion of indigenous populations.


What are the best practices for fostering cooperation and clear role definition among the various institutions involved in Brazil’s cybersecurity framework?

speaker

Thais Aguiar


explanation

Improving institutional cooperation is key to developing a more effective and cohesive national cybersecurity strategy.


How can the Trust Editor tool be further developed and implemented to effectively combat misinformation while respecting journalistic integrity and freedom of expression?

speaker

Soledad Arenguez


explanation

Balancing technological solutions for misinformation with fundamental press freedoms is crucial for maintaining trust in media.


What ethical guidelines and regulatory frameworks are needed to ensure responsible adoption of generative AI in judicial decision-making processes across Latin America?

speaker

Maria Pilar Chorenz


explanation

Developing appropriate guidelines is essential to harness the benefits of AI in the justice system while protecting rights and maintaining public trust.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #174 Human Rights Impacts of AI on Marginalized Populations

Session at a Glance

Summary

This panel discussion focused on the impacts of artificial intelligence (AI) on marginalized populations, exploring both the opportunities and risks presented by AI technologies. Experts from government, industry, and civil society organizations shared insights on how AI can advance equity in areas like healthcare and education, while also potentially exacerbating existing biases and inequalities.


Key concerns raised included the lack of diverse representation in AI development, biases in training data that reinforce discrimination, and the misuse of AI for surveillance and censorship by authoritarian governments. Panelists emphasized the need for human rights impact assessments, increased transparency from companies and governments, and the inclusion of marginalized voices in AI governance discussions.


Specific recommendations included conducting regular bias audits of AI systems, strengthening data protection regulations, creating inclusive digital spaces, and establishing clear accountability mechanisms. The importance of addressing AI use in military contexts was also highlighted as a critical area requiring more attention and safeguards.


The discussion underscored the necessity of a multi-stakeholder approach to AI governance, with governments, companies, and civil society working together to ensure AI benefits all of society. Panelists stressed that protecting human rights must be central to AI development and deployment, particularly for the most vulnerable populations.


The conversation concluded by emphasizing the timeliness of these issues in light of ongoing UN processes and the need for continued international collaboration to shape inclusive and equitable AI systems that respect the rights of all individuals, regardless of their background or identity.


Keypoints

Major discussion points:


– The risks and opportunities of AI for marginalized populations


– The need for diverse representation in AI development and governance


– Challenges of AI use in military/defense contexts and lack of transparency


– The importance of human rights considerations in AI policy and regulation


– The role of multistakeholder collaboration in addressing AI challenges


The overall purpose of the discussion was to examine how artificial intelligence impacts marginalized populations, identify key risks and opportunities, and explore ways that governments, companies, and civil society can work together to ensure AI benefits all of society while mitigating potential harms.


The tone of the discussion was largely serious and concerned, with speakers highlighting significant challenges and risks posed by AI to vulnerable groups. However, there were also notes of cautious optimism about AI’s potential benefits if developed responsibly. The tone became more action-oriented toward the end, with concrete suggestions for next steps and collaborations to address the issues raised.


Speakers

– Alisson Peters: Deputy Assistant Secretary of State for the Bureau of Democracy, Human Rights, and Labor in the U.S. State Department


– Desirée Cormier Smith: Special Representative for Racial Equity and Justice at the U.S. State Department


– Dr. Geeta Rao Gupta: Special Representative for Gender Equity and Equality at the U.S. State Department


– Jessica Stern: U.S. Special Envoy to Advance the Human Rights of LGBTQI+ Persons


– Kelly M. Fay Rodriguez: Special Representative for International Labor Affairs at the U.S. State Department


– Sara Minkara: U.S. Special Advisor on International Disability Rights


– Nicol Turner Lee: Expert on the intersection of race and technology, digital divide, and digital equality


– Nighat Dad: Founder and Executive Director of the Digital Rights Foundation, expert on online harassment and digital security for women


– Rasha Younes: Human rights advocate and researcher at Human Rights Watch, expert on LGBTQI+ rights


– Amy Colando: Head of Microsoft’s Responsible Business Practice, expert on technology and human rights


– Guus Van Zwoll: Representative from the Netherlands government, involved in the Freedom Online Coalition


Additional speakers:


– Dr. Lee: Audience member asking a question


– Usama Khilji: Representative from Bolo Bhi, a digital rights organization in Pakistan


– Khaled Mansour: Member of Meta Oversight Board


Full session report

Expanded Summary of Panel Discussion on AI’s Impact on Marginalised Populations


Introduction


This panel discussion brought together experts from government, industry, and civil society to explore the impacts of artificial intelligence (AI) on marginalised populations. Alisson Peters, Deputy Assistant Secretary of State for the Bureau of Democracy, Human Rights, and Labor, opened the discussion by emphasizing the importance of a multistakeholder approach to understanding AI’s societal impacts and the U.S. government’s commitment to addressing AI governance.


Key Themes and Discussion Points


1. Risks and Opportunities of AI for Marginalised Populations


The panel acknowledged the dual nature of AI’s potential impact. Nicol Turner Lee, an expert on the intersection of race and technology, highlighted both opportunities and challenges. She noted AI’s potential to improve healthcare outcomes and educational access for underserved communities, while also cautioning that AI systems often reinforce historical discrimination against marginalised groups. Turner Lee provided specific examples, such as AI-driven hiring tools potentially discriminating against women and minorities, and facial recognition systems misidentifying people of color at higher rates.


Jessica Stern, U.S. Special Envoy to Advance the Human Rights of LGBTQI+ Persons, offered a nuanced perspective, stating that “Computers might be binary, but people are not.” She suggested that generative AI could help reimagine inclusive futures and allow for safe and authentic self-expression, while also cautioning about the need to address biases in AI training data.


Sara Minkara, U.S. Special Advisor on International Disability Rights, pointed out that AI development often leaves out the disability community entirely, highlighting the need for inclusive design and development processes.


2. Addressing Biases and Harms in AI Systems


A significant portion of the discussion focused on strategies to mitigate biases and potential harms in AI systems. Nicol Turner Lee emphasised the need to interrogate AI models for bias and question whether automation is appropriate in various contexts. Nighat Dad, Founder and Executive Director of the Digital Rights Foundation, called on companies to do more to address harms on their platforms.


Rasha Younes, a human rights advocate and researcher at Human Rights Watch, proposed concrete steps, suggesting that “developers should conduct regular bias audits and build diverse representative data sets.” She also recommended that policymakers require independent testing of AI systems for biases, particularly when deployed in public-facing roles.


Amy Colando, Head of Microsoft’s Responsible Business Practice, shared insights on how her company employs various approaches to combat societal biases in AI systems. She discussed Microsoft’s efforts to increase transparency while maintaining customer confidentiality, highlighting the tension between these two objectives. Colando also detailed Microsoft’s responsible AI development practices, including ethical guidelines, diverse team composition, and ongoing research into AI safety and fairness.


3. Ensuring Inclusive AI Governance


The need for more inclusive approaches to AI governance was a recurring theme throughout the discussion. Alisson Peters highlighted U.S. government efforts, including a national security memorandum on AI that emphasizes human rights considerations. She noted that the U.S. government has policies to ensure human rights assessments in AI procurement.


Nighat Dad pointed out that AI governance conversations are heavily concentrated in Global North countries, often excluding perspectives from regions where these technologies are deployed. Rasha Younes stressed the need to strengthen protections against digital targeting of vulnerable groups.


Guus Van Zwoll, representing the Netherlands government and the Freedom Online Coalition (FOC), discussed efforts to keep human rights central in AI governance discussions. He mentioned that the FOC would be updating its 2020 statement on AI and human rights in the coming year, organizing workshops to educate policymakers on AI challenges for human rights, and spotlighting examples of AI that can advance human rights for marginalised groups.


4. Transparency and Accountability in AI Development


Khaled Mansour, a member of Meta’s Oversight Board, highlighted the challenge of transparency in human rights impact assessments. The discussion also touched on the need for more conversation on the embedded militarisation of everyday AI tools, a concern raised by audience member Usama Khilji, who called for more discussions and safeguards around military use of AI.


Unresolved Issues and Future Directions


Despite the comprehensive nature of the discussion, several issues remained unresolved. These included questions about how to effectively include marginalised voices in AI governance discussions, balancing transparency in human rights impact assessments with customer confidentiality, addressing AI use in military settings and its potential humanitarian impacts, and closing the digital divide to ensure equitable access to AI benefits.


The panel suggested several areas for further research and action, including:


1. Revising and updating the 2020 FOC statement on AI and human rights to reflect current AI developments and emphasise the disproportionate impact on marginalised communities.


2. Identifying and highlighting the disproportionate impact of AI on marginalised groups through tools like Stanford’s AI MISUSE tracker.


3. Conducting community-rooted AI research that prioritises diversity and addresses AI impacts on marginalised groups.


4. Exploring how AI can be leveraged to empower marginalised groups while ensuring accountability and ethical development.


Conclusion


The discussion underscored the necessity of a multi-stakeholder approach to AI governance, with governments, companies, and civil society working together to ensure AI benefits all of society. Panellists stressed that protecting human rights must be central to AI development and deployment, particularly for the most vulnerable populations. The conversation highlighted the ongoing challenges and opportunities in shaping inclusive and equitable AI systems that respect the rights of all individuals, regardless of their background or identity.


Session Transcript

Alisson Peters: All right, good afternoon, good evening, everyone, we’re going to get started, forgive us for some technical difficulties. Can everyone hear us? Hopefully everyone online can hear us as well. Well, thank you all for joining us, both in person and online. I’m going to try to talk as loudly as I possibly can, because I know it’s been a bit challenging for folks online today to hear every session. It’s my pleasure to be here on behalf of the United States government, where I serve as Deputy Assistant Secretary of State for our Bureau of Democracy, Human Rights, and Labor in the State Department. Before we get started in today’s session, we have the esteemed honor of welcoming virtually a number of our special envoys and representatives to the United States government representing various different marginalized populations, and they wanted to send their greetings as well.


Desirée Cormier Smith: So AI offers incredible potential to advance equity by increasing access to health care, education, and economic opportunity for those who need them the most. However, too often marginalized populations bear the worst harms of AI. There is vast evidence showing how AI systems can reinforce historical patterns of discrimination that disproportionately impact people of African descent, indigenous peoples, Roma people, and other marginalized racial and ethnic communities. And the risks of harm are the most pronounced for people who experience multiple


Dr. Geeta Rao Gupta: That’s right. AI tools are aiding the creation and dissemination of technology facilitated gender-based violence, or TFGBV, especially against women and children. This especially pernicious form of harassment and abuse is already threatening the ability of women and girls to participate in all spaces, online and offline, and has grave consequences for democracy.


Jessica Stern: Yes, computers might be binary, but people are not. Generative AI can help us reimagine inclusive futures and express ourselves safely and authentically. However, we need to be mindful about biases in the data that AI tools and systems are built on and how they translate into individuals’ lives. Nuanced data about LGBTQI plus people with appropriate privacy protections can help ensure that recommendation algorithms governing, for example, our shopping habits or content we consume on social media don’t entrench harmful social stereotypes or censor the beautiful diversity of humanity. AI is an exciting set of technologies that have the potential across all sectors to help us consider and integrate diverse perspectives.


Kelly M. Fay Rodriguez: Humans are the future of work, and freedom of association and collective bargaining are central to safeguarding workers’ rights and standards amid the rapid expansion of AI technologies, including and in particular for marginalized populations. Unions play an essential role in advocating for practices that can increase the meaningful representation of women and diverse groups and marginalized populations in AI. They advocate for safe work environments, limiting invasive and unsafe workplace monitoring. They ensure fair employment practices, secure equitable compensation, and ensure that benefits are shared.


Sara Minkara: AI is a reality of our present and our future, but also what is a reality is that a lot of time AI is built in a way that leaves us behind. And when I say us, I’m saying the disability community. We need to ensure that AI in development, in design, in the testing, in implementation is accessible for everyone, including the disability community. And not just on the assistive technology side of things, but for all technology.


Alisson Peters: Thank you very much to all of our special representatives and envoys. I think as you heard here at the start of our session, our shared goal in the United States government is really to harness the opportunities of artificial intelligence, whether that be on economic growth, increased access to quality education and advancement in medical care, while mitigating the risks. And we know all too often some of AI’s most egregious harms fall on marginalized populations and those experiencing multiple and intersecting forms of discrimination, including algorithmic biases, increased surveillance, and online harassment. We’re witnessing around the globe an unfortunate trend of governments misusing artificial intelligence in ways that significantly impact marginalized populations, such as through social media monitoring and other forms of surveillance, censorship, and other harassing information manipulation. To counter these abuses over the last four years, the United States government has taken several steps to encourage safeguards at the national level. We’ve introduced a number of executive orders and memos into our government system to safeguard the use and deployment and development of artificial intelligence for human rights. And at the international level, we’re working closely with the Freedom Online Coalition and other key like-minded partners through the UN and other multilateral systems to lay the groundwork for continued international and multi-stakeholder collaboration for years to come. But there’s more work to be done, and that’s where today’s discussion really comes in. The only way that governments can work to ensure that marginalized populations aren’t disproportionately harmed by technological advancements is in partnership with you all around the room, around IGF and those that are online.
We’re focused heavily on ensuring that we can create safeguards in our systems so that we do not see the dissemination of disinformation and harmful synthetic imagery that can harm marginalized populations, how AI systems can exacerbate existing digital and real-world divides, and how they can reinforce stereotypes that further stigmatization, especially when these systems are not accessible for all their users. So we’re quite fortunate today to be joined by an esteemed panel of experts who have gathered to work to fight back against these worrying threats and trends. We have a number of our panelists online and we’re fortunate to be joined here in the room by two of them as well. First, we’ll hear from Dr. Nicol Turner Lee, who’s a leading voice on the intersection of race and technology and the digital divide and is a recognized expert on issues of digital equality and inclusion. Her work ensures that all communities, particularly marginalized ones, benefit from technological advancements. We’re also joined by Amy Colando, who’s a lawyer with deep expertise on the intersection of technology and human rights. As the head of Microsoft’s Responsible Business Practice, she leads a team dedicated to advancing Microsoft’s commitment to human rights norms and a responsible value chain that respects and advances human rights. Nighat Dad, our friend and partner, is a globally recognized lawyer and advocate for women’s rights and digital privacy. She’s the founder and executive director of the Digital Rights Foundation, which focuses on issues of online harassment, data protection, and digital security for women and marginalized populations in Pakistan, and is a member of Meta’s Oversight Board. And Rasha Younes is a prominent human rights advocate and researcher at Human Rights Watch.
Her work has highlighted the systemic discrimination and violence faced by LGBTQI plus individuals in the MENA region and beyond, and her efforts have been instrumental in bringing international attention to these issues and pushing for legal reforms. So first I wanted to start out by setting the scene a little bit in terms of both the risks and opportunities that come from AI and the threats to marginalized populations. Let’s kick things off with Nicol, I’ll turn to you first. Some of these issues have benefited from extensive international conversations, from the recognition in the engineering community over the past decade that it is critical to address harmful biases in AI, to efforts to curb the misuse of artificial intelligence and generative AI tools for image-based sexual abuse. Help us set the stage. Where do you think important progress has been made over the past several years, and what current challenges do you think need to be addressed or elevated on the agenda, particularly as we’re all gathered here this week at IGF to address critical internet governance discussions? I think it’s really important that you help us think a little bit thoughtfully about where the current gaps and opportunities exist that we can leverage.


Nicol Turner Lee: Well thank you so much for the kind introduction and also thank you to all of you and the IGF for hosting this conversation. Before we start though, I do also want to say that I am the author of a new book, Digitally Invisible, to ensure that people know that there is content that I’ve written about this disconnect between the opportunities of technology and those who are marginalized or impacted by it. So I want to lean into this conversation on where we have seen some opportunity and where we have challenges. And in particular, in my few short moments just answering this question, I do want to point out that one of the opportunities that has become most prominent is our ability to engage in artificial intelligence, given the distributed compute power that we have. So I think it’s really important to have this conversation, because the opportunities also pose their own threats. But what we are seeing is the ability to distribute networks in a way we have not seen before, because we are building compute power that has real capacity. I’ve been doing this for about 30 years in terms of technology and its accessibility by people of color in particular, and we’ve not seen this very distributed network evolve as it has done today with chips and power. The other thing that has been an opportunity of AI has been the way it’s been integrated into a variety of verticals. At the Brookings Institution, we started what’s called an AI Equity Lab, which allows us to workshop journalism and AI, health care and AI, criminal justice and AI. And why we do that, first and foremost, by putting the name of the sector and then AI, is that we’ve seen an incredible influence of technology tools on these verticals that in essence determine quality of life on the social welfare side, as well as the economic opportunity side. And so I think we’ve come a long way, for example, in health care. We’re actually seeing personalized medicine.
We’re seeing more efficiency among doctors when it comes to personalized medicine and the management of health. We’re seeing a lot more contemporary reaction and quick reaction. We saw that during the COVID vaccine development, when it comes to pinpointing things that would have taken a very long time in our intellectual discovery and are now happening through AI. And I think another area where we’ve seen a lot of promise has been in climate, where we’re able to use drone-enabled surveillance to look at where we have thermal outputs or throughputs that have potential danger for natural disaster or wildfires. We’re also seeing for agriculture, for example, because many of these are very intersectional, the ability to look at climate as it relates to watering times or when we’re able to be most productive in crop development. So I wanna put that out there because I often sound like a pessimist, which I will sound like now when it comes to AI and marginalized communities. So where we see these efficiency growth spurts, one of the areas where we’re seeing a lot of bias, as it’s already been indicated by many of the speakers, is when it comes to flipping these opportunities into challenges or hurdles. So I’ll just close with a couple of thoughts that will frame, hopefully, the rest of the conversation. Obviously, there’s demographic bias. In the United States, that demographic bias is profoundly defined by race and ethnicity, and gender has become more of a human rights concern. In other countries outside of the United States, class has also found its way into the demographic biases, and both in the United States and outside the United States, geography has become a bias. Where you live, who you are, and what you do matters because it is reflected in what we call, at the Brookings Institution, the traumatized nature of the data which is training these models.
It comes with those historical biases, and those historical biases are often traumatized, meaning if there are systemic inequalities that point to the unequal access to education, for example, they will show up in the training data, and as a result, have a consequential outcome of either greater surveillance or less utility for students that may be in that category or impacted. The other area where we actually have challenges is not just who’s commoditized by AI, those who are impacted by these systems, but who’s creating them. The lack of representation of who sits at the table to design the model, absent the people who are actually impacted by it, creates, I think, an over-concentration of power that has consequences that can foreclose on the economic and social opportunities that AI models can offer, the ones that I just spoke about. For example, when we think about who is developing models for the health of black women, let’s just take that for example, people may not understand that the lack of participation of black women in clinical trials may mean that they may not show up, particularly when it comes to breast cancer diagnosis, in training models. This was actually recently put out by the Journal of the American Medical Association, that black women disproportionately experience breast cancer because their data is not represented in major data sets. That actually shows up in AI because AI is not divorced from the market-based data that is actually training these systems. The other thing when it comes to the challenges that we have with AI is the fact that, as it’s been mentioned and as my book suggests, we have a digital divide. We’re creating AI systems, and in many respects, we haven’t closed the accessibility divide. That creates its own set of challenges as to who will be able to benefit. And when you also think about generative AI, and I’ll sort of close here to provide enough time for my colleagues to chime in as well.
When we think about the global majority, we do a lot of work at the Brookings Institution on how these systems show up, not only for marginalized populations in the US, but all over the world. In the African Union, for example, we know that there’s a digital language divide: generative AI is primarily English-based, and it is not necessarily trained on the plethora of dialects that come out of a variety of global majority countries. As a result, we see challenges when it comes to representation, not only in training data, but in whether populations actually see themselves in these tools, particularly generative AI that is meant and designed to be, again, a lever for economic and social mobility in those areas. That, along with the rights of workers, who is taking those jobs to annotate the data. I could go on and on, but there are so many structural, behavioral, as well as output or consequential outcomes that occur when we don’t have the right people at the table, when we continue to commoditize data from marginalized populations to fuel the AI models that we’re developing. Third, we don’t interrogate these models. And I’ll just say this: we don’t interrogate them for bias. We also don’t interrogate whether they should be used at all, or whether a decision should be automated in the first place. So I will stop here and look forward to this conversation. Hopefully I gave you enough to talk about as we go into the next speakers. And thank you so much for having me.


Alisson Peters: Thank you so much, Nicole. I think you did a really phenomenal job, first and foremost, plugging your book, which I encourage everyone to buy, but also both laying out the real tangible opportunities that we see from AI, everything from journalism to healthcare to addressing the impacts of climate change, and then laying out in detail some of the tremendous risks that we see for marginalized populations. So you addressed issues around the accessibility divide and exacerbating biases that exist in our societies through the use of big data. You talked about who gets a seat at the table in the design, deployment, and use of these technologies, and beyond. So I next wanted to turn to Nighat. Your organization has really been on the front lines of documenting, I think, some of the risks that Nicole just laid out, the exact impact on marginalized populations, whether that be women and girls, and I know you’ve done a lot of work on tech-facilitated gender-based violence, or impacts on human rights defenders or religious minorities. I’m hoping you can build off of what Nicole was talking about in terms of the broader risks that she laid out and give us a tangible example or two of where you’ve seen both the benefits and risks of AI tools for marginalized populations. And then, because we do have many different stakeholders at the table this week in the IGF conversations, whether that be from governments or the private sector, where do you think there are gaps that require more attention in our international discussions?


Nighat Dad: At Digital Rights Foundation, we have been doing a lot of work on addressing tech-facilitated gender-based violence, and I feel that talking about AI or AI tools is an extension of what we have been talking about for years around digital tools and digital rights. All the harms that we are now connecting with AI are actually an extension of those harms, made more sophisticated and advanced by the use of AI. That’s the case with tech-facilitated gender-based violence, where we are now seeing how deepfake images of women and young girls are creating more risks for them, specifically when they are from regions and cultures that are more conservative, where the honor of families or of society is connected to women’s bodies. One challenge that we are witnessing is verifying whether these deepfakes are real or not. That was not the case before AI-generated images and videos. I think another challenge is regulating this space. Tech companies really have to do a lot, and sitting on Meta’s Oversight Board, we actually framed our own experience as a board in terms of what companies like Meta can do to use automation in dealing with the harms on their platforms, and released a paper on this. When it comes to governments, I feel that there is a huge gap in governing AI. These conversations, and I always say this, even while sitting on the UN Secretary-General’s AI high-level body, are very much concentrated in some global North countries. And in the past, we have seen how technology that is developed, designed, and built elsewhere is mostly dumped in our regions, and we have no say in how these technologies are designed for the marginalized groups in our regions. Now, that is exactly the case with AI tools as well.
I mean, there are some benefits: AI is also being used in healthcare and climate monitoring, and AI-powered translation tools are breaking down language barriers for marginalized groups. But I feel that all these opportunities are still connected to the entire cycle of how AI is developed, designed, processed, and deployed. I think there are lots of things to say, but there is a huge responsibility on AI companies and tech platforms, where all these harms are being increased by the use of AI. But there is also a responsibility on governments: how can we bring more accountability and oversight into the regulations they are framing, which right now are framed without including civil society voices and without having a conversation on human rights violations when it comes to AI tools?


Alisson Peters: Thanks so much, Nighat. I think you raise a really important point that I suspect we will have a lot of additional conversations about this week at IGF, which is that if we don’t protect this multistakeholder model of Internet governance, a multistakeholder model of conversations around the regulation and governance of AI and emerging technologies, then we will be missing an entire part of the conversation: how are these tools being deployed and used in ways that impact the whole of society, not just the governments and the people representing them. I think that’s a good pivot over to you, Rasha, as you’ve done a lot of work looking at the impacts of AI tools through government misuse of these technologies. And I know you’ve done an incredible amount of work documenting the ways in which autocratic governments have used technology to repress marginalized populations, particularly LGBTQI+ persons. I’m hoping you could share a little insight on how policymakers and AI developers should be thinking about these issues in relation to the governance and regulation of artificial intelligence, particularly reflecting on the years of research that you’ve done.


Rasha Younes: Thank you so much, and thank you for having me today. In 2023, we published a report on the digital targeting of LGBTQI+ people across the Middle East and North Africa region, particularly in Iraq, Lebanon, Egypt, Jordan, and Tunisia. What we found is that governments are using monitoring tools, usually manual monitoring rather than sophisticated tools, to target and harass LGBTQI+ people. And the significant finding that we had is that these abuses do not end in the instance of online harm; they are not transient, but reverberate through individual lives in ways that often ruin those lives entirely. In our report, and in our follow-up campaign published in 2024, we urged technology platforms in particular, such as Meta’s platforms, Grindr, same-sex dating apps, etc., to address some of the structural issues related to content moderation and to biases that facilitate and allow these abuses to take place, especially when these tools are in the wrong hands, that is, when they are exploited for malicious purposes, such as government targeting of LGBTQI+ people in contexts where they already face criminalization, whether direct criminalization of same-sex relations or other laws, such as cybercrime legislation and morality, indecency, and debauchery laws that are used to target LGBTQI+ people simply for expressing themselves online. In developing this work, I also want to acknowledge that we are building off of work that Article 19 has done for many years on this specific issue, as well as the framework that Afsaneh Rigot introduced, which is designing from the margins, specifically in technology and AI systems: being able to design technologies with the interests, impacts, and rights of the most marginalized in mind.
In our recommendations, we really want to strengthen protections against digital targeting, while acknowledging that technology can always be used for malicious purposes. There are many ways that regulation and addressing biases in algorithms, for example, can help mitigate some of the abuses that take place offline as a result of online targeting. For example, AI systems often amplify historical biases, as my co-panelists have said, embedded in the data they are trained on, which leads to discriminatory outcomes for LGBTQI+ individuals. To mitigate these biases, developers should conduct regular bias audits and build diverse, representative data sets, and policymakers should require independent testing of AI systems for biases, particularly when deployed in public-facing tools. Incentives for inclusive algorithm design that incorporates the input of LGBTQI+ advocates and civil society experts should be central to requiring and enhancing these systems to better protect the most vulnerable users. When it comes to content moderation systems, we saw and investigated that automated systems frequently misidentify LGBTQI+ content as harmful or inappropriate, especially in languages other than English, such as the many dialects of the Arabic language, as we found in our reporting. This inadvertently silences advocacy around LGBTQI+ rights, especially in contexts where advocates, activists, and community organizers resort to technology in order to empower, connect, and build community around their rights, when public discourse and any offline organizing around gender and sexuality is either prohibited or could lead to criminalization and arbitrary harassment of these activists.
So particularly in content moderation, there must be training of moderation algorithms on inclusive data sets that recognize the diversity of LGBTQI+ discourse, and incorporation of human oversight, particularly for sensitive content, ensuring a nuanced understanding of this context. And finally, establishing appeal mechanisms that allow an effective remedy for users to challenge automated moderation decisions that unfairly remove LGBTQI+ content, or that otherwise leave content online that could be harmful and lead to the arbitrary arrest, harassment, torture, detention, and other abuses of LGBTQI+ individuals that, as I said before, reverberate throughout their lives. Finally, I definitely think that this should happen with the privacy and data security of individuals in mind, enforcing robust data protection regulations that allow penalties for misuse of sensitive data, especially when it comes to the outing of LGBTQI+ individuals on public platforms, online harassment, doxxing, and the resulting discrimination and violence that people face offline in their daily lives. As I said earlier, centering LGBTQI+ voices in the design of AI tools is extremely important. So engaging directly with organizers, activists, and experts to understand the unique needs and challenges of LGBTQI+ individuals, and also for tech platforms to prioritize the creation of inclusive digital spaces that actively counter the discrimination and harassment that can happen in tandem. Human rights impact assessments are extremely important. We already know that comprehensive evaluation of the risks associated with content moderation, government surveillance, and other issues is incredibly important in informing the changes and upgrading of these tools to safeguard the human rights of those most impacted by these technology-facilitated harms.
Establishing accountability platforms both for governments, for developers, and establishing clear grievance mechanisms for individuals and groups affected by AI-driven decisions is central to beginning to address these harms and the offline consequences of these harms across the globe. Thank you.


Alisson Peters: Thank you so much, Rasha. I think you gave us some really tangible recommendations on how to address the harms from automated systems. You talked a bit about doing bias audits; I heard human rights impact assessments, providing access to grievance mechanisms, access to remedy. A number of the recommendations you raised are actually expectations set out in the UN Guiding Principles on Business and Human Rights. Earlier this year, the United States led, and the full UN General Assembly agreed to, a resolution on safe, secure, and trustworthy artificial intelligence, which encourages and calls for increased implementation of the UN Guiding Principles. So certainly, all governments of the UN have agreed with a number of the recommendations that you laid out in terms of expectations, both for governments and for private industry. I think that’s a good pivot over to you, Amy, as we’ve heard some really tangible recommendations that Rasha has laid out, building off of some of the risks that both Nicole and Nighat outlined. I’m hoping you can share a little self-reflection from Microsoft’s perspective. What do you think companies should be doing more of to mitigate the harms that have just been laid out by our speakers? And if there are particular steps that you feel governments can and should be taking with respect to industry to help promote this action, I think that would be quite helpful as well. So, over to you, Amy, and thank you for joining us.


Amy Colando: Thank you so much, and thank you so much for having me. Oops, let me see whether the audio will work out. I’m just going to keep on talking, and we’ll hope it works out. So, thank you so much for inviting me, and I’m learning a lot already from this engagement. These multi-stakeholder conversations are incredibly important to shine a light on our practices and to help us think of additional steps we can and should be taking to deliver on the promise of AI. So, let me start by sharing some examples from Microsoft, with the understanding that these are just examples, and that the multi-stakeholder process is incredibly important in terms of getting feedback and scrutiny on areas where we can do better. My team coordinates Microsoft’s corporate-level human rights due diligence, including human rights impact assessments, under our commitment to respecting human rights and providing remedy under the UN Guiding Principles. That process includes, and is very intentional about, interviewing marginalized populations, and it allows us to understand the needs of diverse groups among our users, our supply chain, and our employees, so we can enhance our respect for the rights of marginalized populations. Turning to AI, we recognize there are particular areas of promise and potential, as well as particular areas that might exacerbate existing divides and harms. AI, at its foundation, as Nicole said, requires infrastructure and connectivity, and we’ve established our Global Data Center Community Pledge, which commits us to building and operating infrastructure that addresses societal challenges and creates benefits for communities. This forms the basis of how we engage with stakeholders during all steps of the data center process, including after a center is up and operational, and it is tailored to every location so that it is respectful of local cultures, contexts, and environmental needs.
For example, in Australia, this meant weekly meetings over an eight-month period to incorporate traditional indigenous practices into our design process. Through engagement, we introduce the project and gather insights that help inform our data center design, respecting our neighbors and the environmental resources around them. Next, for the development and deployment of AI, Microsoft’s Office of Responsible AI has partnered with the Stimson Center to bring a greater diversity of voices from the global majority into the conversation on responsible AI through our Global Perspectives Responsible AI Fellowship Program. The fellowship convenes a multidisciplinary group of AI fellows from around the world, including Africa, Latin America, Asia, and Eastern Europe, across a series of facilitated activities. The activities the fellows take part in are intended to foster a deeper understanding of the impact of AI in the global majority, to exchange best practices on the responsible development and use of AI, and to inform an approach to responsible AI. To combat societal biases in AI systems, we employ a variety of approaches and are constantly learning from dialogues exactly like the one we’re having here. In 2018, we identified our six responsible AI principles, including fairness. Our policies are designed to clarify how fairness issues may arise and who may be harmed by them, and we take active steps to implement them through tactical controls and our code of conduct. For generative AI systems, we’ve leveraged the U.S. National Institute of Standards and Technology Risk Management Framework to develop tools and practices to map, measure, and manage bias issues, which involve the risk of generating stereotyping and demeaning outputs. In alignment with the goal of minimizing representational harms, we’ve made significant investments in red teaming to identify areas of harm across different demographic groups.
We use manual and automated measurements to understand the prevalence of stereotyping and demeaning outputs, and mitigations to flag and block those outputs. We look forward to working with governments, multilateral institutions, and multi-stakeholder processes to continue to develop these frameworks, including through OECD due diligence conversations, to help build a consistent and aligned approach to improving the offering of AI and its potential to serve marginalized populations. For our own generative services, we’ve established a customer code of conduct which prohibits the use of Microsoft generative services for processing, generating, classifying, or filtering content in ways that can inflict harm on individuals or society. We have developed and deployed a framework for customer use of sensitive AI features, including facial recognition and neural voice. Customers must register for these services, a process that includes defining proposed use cases, and they may not use the service for other use cases. And we institute technical controls for abuse monitoring and detection. The classifier models that we’ve developed detect harmful text and/or images in user prompts (inputs) and completions (outputs). The abuse monitoring system also looks at usage patterns and employs algorithms and heuristics to detect and score indicators of potential abuse. Detected patterns consider, for example, the frequency and severity at which harmful content is detected. These prompts and completions are flagged through content classification and, where identified as part of a potentially abusive pattern, are subject to additional review processes to help confirm the system’s analysis and inform actioning decisions. That review is conducted through both human and AI review. And then we have a feedback loop with customers, which in turn drives improvements to our own systems.
Finally, I’d like to close on a theme that has been identified by my fellow panelists: the need for more representative data, and the need to ensure that marginalized populations are able to see themselves in the promise of AI. Recently, we identified that our generative AI services could be improved in terms of the representation of people with disabilities, a population of one billion people around the world. We then partnered with Be My Eyes, a service and app that uses video to allow vision-impaired individuals to communicate with others on a crowdsourced platform in order to visualize the items that they’re looking at. Licensing the Be My Eyes content allows us to advance the representation of people with disabilities in our service. In short, or not in short, because I’m closing now, I appreciate the opportunity to be here and to learn from others on the panel about how we can improve our processes and continue to work with government and civil society to advance AI. Thank you.


Alisson Peters: Thanks so much, Amy. I know I have a bunch more questions for you all. I think we just heard from you, Amy, the amount of work that Microsoft is doing to develop effective safeguards, work that is happening amongst industry in this space. And yet what we’ve heard from Nighat, from Nicole, and from Rasha is that there are real challenges in developing effective safeguards. We know that, I think, Nighat, you talked about the need to also ensure that we’re not concentrating a lot of these discussions in specific regions or specific countries or specific companies. And I think all of you across the board talked a bit about ensuring that we have more representative data, and that we recognize that AI is exacerbating biases and discrimination that occur in our society, or, in the case of online harms like tech-facilitated gender-based violence, exacerbating gender-based violence that already exists in our societies. And so, I do want to go to the audience for questions, but in reflecting on the questions that we might get, it would also be helpful to hear a little more about your recommendations on how we overcome some of the challenges that we’re seeing in developing effective safeguards, if we have time. So, let me go over to the audience. I know we have folks online as well; if I could just ask our IT friends to pull up any questions. Please put them in the chat, and I see folks are having problems hearing as well, so hopefully you can hear us. If you have any questions, please put them in the chat, and are there any questions in the room?


Audience Member: Hi, thank you so much. Dr. Lee, I’m a big fan. Thank you all for taking the time. Amy just mentioned infrastructure and data centers, and I have a question. As the U.S. government is integrating AI more and more into public systems, what is the government doing to ensure that patterns of environmental racism, and the issues with pollution that have affected marginalized communities in the U.S., will not be replicated with more and more AI use?


Nicol Turner Lee: I guess I can jump in. I think that’s a great question. The type of power generation that’s going to be required for data centers is definitely going to, in many respects, lead us into areas where there is either more land or less respect for the dignity of the land that some people have. So I think we have to, and I like the way Amy talked about it with Microsoft, come up with criteria and some values on where we decide to put those data centers. Because in the United States, the gigawatt-plus power that is required not just to keep these systems operating but also to keep them cool will have a disproportionate effect on communities that are of color or indigenous, or communities for which we used to have this term a long time ago in economic development, brownfields, where there’s a possibility to go in and exploit the land for the type of potential nuclear reactor projects that are going to be needed for data centers. And so I urge, and I’m not a government employee, but I urge more conversation on this, right? Because it is an area that is becoming increasingly important as nuclear power becomes more distributed, and I hope that we can find the same type of reputational accountability as well as harm reduction that we’ve spoken about today for the models themselves when it comes to how we deal with this physical infrastructure.


Alisson Peters: Amy, is there anything you wanted to add to that as well?


Amy Colando: No, Nicole, that was such an excellent comment. I think it’s about recognizing the kind of continuing trends that we see. In other words, it’s not as if AI is a brand new issue. There are many new aspects to it, but the trends in terms of power and discrimination continue. Again, like many aspects of AI, I’d say there are advantages and disadvantages. We are using AI, in fact, to develop new types of concrete that are less impactful on the environment. We have our own sustainability pledge; other companies do as well, of course. We are continuing to uphold the pledge on carbon outputs that we made prior to the advances in AI of the last couple of years, and we’ll continue to uphold it as we move forward and look for carbon-free sources of power.


Alisson Peters: And I will just say, from the U.S. government perspective, over the last four years under the Biden administration we have rolled out a number of new policies, executive orders, and memos from our White House that are really focused on ensuring that as our own government is purchasing artificial intelligence systems, using automated systems for decision-making, deploying AI in different ways, and providing AI to other governments, human rights is a core element of the risk assessment that we’re doing, and a component of a lot of the new actions and regulations that we have rolled out. One of the things that I will note is that we as a government are currently working in the Council of Europe on a new convention on artificial intelligence, human rights, rule of law, and democracy. This framework convention is the globe’s first-ever legally binding treaty on artificial intelligence, and one of the key things that process is doing is building out a risk assessment framework that has human rights at its core. So as a government, we have a framework that we can actually look to that helps us assess what the risks are, whether to environmental rights, to environmental defenders, or to other fundamental freedoms, freedom of expression and beyond, and that is core to everything that we’re working on. So this is a key piece of a lot of the work that we’re doing as it relates to safe, secure, and trustworthy AI in the US, and I know I speak for other governments that are here at IGF on that as well. And if we could just pull up the questions online, I just want to make sure that we’re not missing those.


Usama Khilji: Thank you very much for a very insightful discussion. I’m Usama Khilji. I’m with Bolo Bhi, which is a digital rights organization in Pakistan. My question is specifically around AI use in the military and in war. Around the world, we’ve seen increasing use of AI and facial recognition technologies in conflict and in war, but we’re seeing that a lot of these conversations leave out the military use of AI, which has acute human rights impacts. So I’m wondering: what can governments and companies do to have more conversations around military use, and what safeguards can they put in place? Because currently in conflicts, we’re seeing very bad consequences for civilian populations.


Alisson Peters: One more question.


Khaled Mansour: Thank you. My name is Khaled Mansour. I am with the Meta Oversight Board. This is a follow-up to Usama’s question, because I bet we will get the answer from you that you do all these checks and human rights impact assessments. Our challenge here is transparency. So what is preventing you from publishing at least a portion of these reports, so that people who are affected by AI technologies, especially clients of Microsoft or of the U.S., can see what is actually happening?


Alisson Peters: Thanks so much. So maybe I’ll turn it over to the panelists first. We have two questions. One is: how do we better address AI use in military settings, with the recognition that quite often, as we’re having conversations around safeguards for automated systems, we’re excluding the defense sector from those discussions. What more could we be doing there? And the second question is on transparency reporting. And I know I saw another question back here; I’ll see if we get time. But maybe I’ll turn it over to our online colleagues first, if anyone wants to jump in on either of those questions.


Amy Colando: Sure, I can jump in a little bit. And this is an area on which I welcome feedback, because one of the cornerstones of how my team operates is our commitment to accountability and transparency in how we uphold Microsoft’s responsibility to respect human rights. At the same time, of course, there are confidentiality commitments to our customers, and those commitments are the same regardless of the customer. Let me just put that out there; those are the cornerstones of how we operate. I mentioned briefly during my opening remarks that we designate certain offerings as potentially sensitive AI services, including facial recognition and neural voice. For those services, we do require defined use cases, regardless of customer, and we review those defined use cases against our own responsible AI commitments, which are grounded in respect for human rights. We are endeavoring to increase transparency. For example, during this last year, my team worked directly on updating some of our transparency around data center operations and the types of services we offer in data centers. But as you note, I’m sure there’s more we can do, more we can do as an industry, and in terms of an industry-accepted level of due diligence, I think that’s going to be enormously helpful. So there’s a floor, and rather than a race to the bottom, it’s a race to the top in terms of how the private sector can work with governments and with civil society to ensure that we’re upholding universal human rights.


Nicol Turner Lee: And I’ll jump in on that question about militarization. The challenge that we have with AI is that we see a militarization when it comes to human rights and civil rights. But we’ve also seen, and I like the way the audience member talked about this, an integration of a variety of technologies embedded for the use of militarization. What do I mean by that? We’re seeing facial recognition embedded into other AI-enabled technologies that are being used for force. We’re seeing less accountability and transparency about that integration in many respects. And I think, for the United States in particular, and other countries that have an ongoing AI race with China, these create certain vulnerabilities and national security concerns that we have to pay attention to. So that’s the first thing I want to say. The other thing I think is really important, and I love the way we’re talking about the United States government’s integration of diplomacy with human rights and AI security, is something I once heard someone say, and I’ll share it because it was so profound: in the absence of data privacy or an international data governance strategy, we are also contributing to a national security concern. Really thinking about the ways in which we’re not handling data privacy, which Rasha also spoke about, lends itself to greater militarization, because it allows governments, particularly authoritarian governments, to obstruct the type of transparency and accountability that we need when it comes to these systems of weaponization. And so I think we’re probably going to see a shift to more national security conversations in the United States; the National Security Memo is an example of that. I just served with Secretary Mayorkas on the AI Safety Board on critical infrastructure and AI protections.
And I think across the world, I was just in Barcelona at the Smart City Expo, we’re seeing a lot of conversations about embedded militarization of everyday AI tools and how they can be repurposed for that type of application. So I think it’s a conversation we definitely need to have, and the U.N. needs to continue it.


Alisson Peters: Thanks so much, Nicole. And I will just say, on the really important question of how we address the use of automated systems in our military apparatuses, and not just use, but also development and design, there are two things that we’re working on, at least in the U.S. government context. (Test, test, it may work. Apologies, all, for the usual IT issues.) First and foremost, I think we agree with you on the importance of these conversations. It’s why we started a political declaration to actually start a global conversation on the use of AI in the military. And we would encourage governments that have not joined that declaration to do so, not just because of the importance of the declaration itself, but because of the importance of the policy conversations around it. And we’re happy to talk to any governments that are here at IGF and beyond. And then the second piece, which I think Nicole talked about, is our national security memorandum on AI use in our national security systems. We fully recognize that we can’t address everything from the human rights impacts of AI to how our government is designing and deploying AI itself unless we also address this in our national security institutions. And so we issued a pretty groundbreaking national security memorandum, and to the point on transparency, that’s all public. And it addresses how all elements of our national security system are deploying these tools. So if you have not already had a chance to take a look at that national security memorandum, I’m happy to share it offline with you. But I think that is certainly an approach that we’re quite proud of as it relates to government transparency and accountability. Before I close this session, I wanted to invite our friends and colleagues from the government of the Netherlands.
We have Guus van Zwoll, who has been, I’ll say, a partner in crime in all of our efforts to address the human rights impacts of artificial intelligence. The Netherlands is going to be the chair of the Freedom Online Coalition Task Force on Artificial Intelligence and Human Rights next year. For those that are not familiar with the FOC, it’s a coalition of over 40 governments dedicated to addressing and ensuring the protection of human rights online, of which the Netherlands serves as chair this year. So I’m hoping to turn it over to you, Guus, to close us out and really just share some concrete ideas, particularly in reflection on some of the great questions that we’ve received, on how the FOC can work with other governments next year to address these challenges under your leadership and in partnership with my government and other governments around the room.


Guus Van Zwoll: Thank you so much, Alison. Can you hear me? Yeah, okay. So it will be the TFAIR next year, as we call it, the Task Force on AI and Human Rights. And human rights are, of course, a very different thing than humanitarian law, but I do want to briefly touch on the issue of the military and AI. In 2023, we started REAIM, the summit on Responsible AI in the Military Domain. We did it as the Netherlands, and this year the conference was hosted by South Korea. And together with South Korea, we launched a First Committee resolution last month in the UN on exactly this issue: what is responsible use of AI by the military. And that resolution had broad support. We had 165 countries in favor, only two against, and six countries abstained. So I think that is a pretty good start, at least for this conversation. I’m happy to discuss this later after the sessions as well. So I made some quick notes on what our plans are for next year. And basically, we want to continue this discussion, this fabulous discussion. Thank you so much, Alison and the US, for organizing this. Because we see that the Freedom Online Coalition must be practically engaged on AI governance now, as critical global norms and standards are being shaped in the upcoming months. It will not take years; it will be literally months. And this is why we as the Netherlands want to co-lead the TFAIR next year. Our responsibility is to ensure that human rights remain central to these frameworks, protecting vulnerable populations and shaping inclusive and equitable AI systems. In 2020, the FOC already published a joint statement on AI and human rights. But 2020 is, in AI terms, a couple of centuries ago, basically ancient history. It was before image generation and before large language models. So we know that AI systems have really changed.
And I think it’s our task next year to revise this successful 2020 FOC statement, emphasizing as well the disproportionate impact of AI on marginalized communities. The updated statement will aim to provide clear guidelines for embedding human rights principles into AI governance globally. How do we do this? Well, for example, by collaborating with Stanford’s AI MISUSE tracker, we will try to identify and highlight the disproportionate impact of AI on marginalized groups, such as through biased surveillance or exclusionary practices. This tool will ensure transparency and accountability while driving advocacy for equitable AI practices. We will also organize practical workshops and simulations to equip policymakers and diplomats with the tools and knowledge needed to address these AI challenges and opportunities for human rights, with a focus on marginalized communities and women. We will try to bring leading voices, very much like the ones we have heard today, to educate us diplomats and policymakers on the challenges that they see as most daunting. Another focus would be on community-rooted AI research that prioritizes diversity and addresses AI impacts on marginalized groups. These contributions would offer valuable perspectives for fostering inclusive and rights-based AI governance. We will also spotlight examples of AI that can advance human rights, such as tools for bypassing censorship or supporting civic engagement. These studies will demonstrate how AI can be leveraged to empower marginalized groups while ensuring accountability and ethical development. A great example of this is the Signpost project by the International Rescue Committee. This initiative leverages AI to provide critical information to displaced populations via mobile apps and social media, delivering content in multiple languages.
The choices that we make now and that we are discussing at the IGF this week will determine whether AI helps create a fairer world or deepens current inequalities. Through TFAIR, we will aim to keep human rights at the center of the AI governance discussion, supporting marginalized communities and building a future based on fairness and accountability. As the upcoming chair of TFAIR, but also as the chair of the Freedom Online Coalition this year, we are convinced that the FOC, the Freedom Online Coalition, provides a great networking platform to advance this goal. Thank you.


Alisson Peters: Thank you so much, Guus. The Netherlands has been such an incredible leader of the Freedom Online Coalition this year, and I know I speak for my government when I say we’re really eager and excited to work with you all and really build on today’s discussion next year in the task force. I know there’s never enough time for these conversations, especially when we have such incredible panelists, but I really do want to thank you all for joining on your weekends, wherever you’re located, and everyone for joining us in the room. I will say, in concluding this discussion: at IGF throughout this week and beyond, as we look ahead to WSIS Plus 20 and other UN processes, we will have continued debates around the future of artificial intelligence. How do we leverage the opportunities from AI, the opportunities that Nicole talked about, and how do we also ensure that we are mitigating the risks? So the rewards and the risks are a continuing conversation in AI policy debates. And I think what you have heard from each of our panelists is this question about who is actually setting the table for those debates. Who is at the table? Are they representative of the populations that we as governments are tasked with protecting? Are they representative of the communities that industry is actually working with and has access to? And are they representative of the populations that will be most impacted by how these technologies are being designed, deployed, and used in their societies? We have a lot of conversations in the UN around ensuring that we’re advancing AI for good. But we know that we can only advance AI for good when the basic human rights of all people, no matter where they’re located, no matter their faith, no matter their gender, no matter their sexual orientation and beyond, are respected. So this is a really timely conversation for IGF. It’s a timely conversation given we just had Human Rights Day this last week.
I thank you all for coming, and I hope that we can continue these discussions throughout this week at IGF and beyond. So on behalf of the United States government and my Bureau of Democracy, Human Rights, and Labor, I want to thank you all, and thank you all online. And we look forward to being in touch through the Freedom Online Coalition to continue these important discussions. Thank you.


Desirée Cormier Smith

Speech speed

123 words per minute

Speech length

82 words

Speech time

39 seconds

AI can advance equity in healthcare, education, and economic opportunity

Explanation

AI has the potential to increase access to healthcare, education, and economic opportunities for those who need them most. This could help reduce inequalities and promote equity in these crucial areas.


Major Discussion Point

Risks and Opportunities of AI for Marginalized Populations


Agreed with

Nicol Turner Lee


Jessica Stern


Agreed on

AI can both advance equity and reinforce discrimination


Nicol Turner Lee

Speech speed

175 words per minute

Speech length

1906 words

Speech time

653 seconds

AI systems often reinforce historical discrimination against marginalized groups

Explanation

AI systems can exacerbate existing biases and inequalities in society. This is because they are often trained on historical data that reflects past discriminatory practices and societal inequities.


Evidence

Example of breast cancer diagnosis models not accurately representing black women due to lack of participation in clinical trials.


Major Discussion Point

Risks and Opportunities of AI for Marginalized Populations


Agreed with

Desirée Cormier Smith


Jessica Stern


Agreed on

AI can both advance equity and reinforce discrimination


Need to interrogate AI models for bias and whether automation is appropriate

Explanation

It is crucial to examine AI models for potential biases and discriminatory outcomes. Additionally, there should be careful consideration of whether certain decisions should be automated at all.


Major Discussion Point

Addressing Biases and Harms in AI Systems


Agreed with

Sara Minkara


Rasha Younes


Agreed on

Need for diverse representation in AI development


Differed with

Amy Colando


Differed on

Approach to addressing AI biases


Dr. Geeta Rao Gupta

Speech speed

113 words per minute

Speech length

53 words

Speech time

28 seconds

AI tools are enabling technology-facilitated gender-based violence

Explanation

AI technologies are being used to create and spread technology-facilitated gender-based violence (TFGBV). This form of harassment and abuse particularly targets women and children, threatening their ability to participate in online and offline spaces.


Major Discussion Point

Risks and Opportunities of AI for Marginalized Populations


Jessica Stern

Speech speed

124 words per minute

Speech length

112 words

Speech time

54 seconds

AI can help reimagine inclusive futures but biases in data must be addressed

Explanation

Generative AI has the potential to create more inclusive futures and allow for safe self-expression. However, it’s crucial to address biases in the data used to train AI systems to prevent reinforcing harmful stereotypes.


Major Discussion Point

Risks and Opportunities of AI for Marginalized Populations


Agreed with

Desirée Cormier Smith


Nicol Turner Lee


Agreed on

AI can both advance equity and reinforce discrimination


Kelly M. Fay Rodriguez

Speech speed

120 words per minute

Speech length

86 words

Speech time

43 seconds

Unions play a key role in safeguarding workers’ rights amid AI expansion

Explanation

Labor unions are essential in protecting workers’ rights as AI technologies rapidly expand. They advocate for fair employment practices, safe work environments, and equitable compensation in the context of AI implementation.


Major Discussion Point

Risks and Opportunities of AI for Marginalized Populations


Sara Minkara

Speech speed

123 words per minute

Speech length

79 words

Speech time

38 seconds

AI development often leaves out the disability community

Explanation

AI is frequently developed without considering the needs of the disability community. It’s crucial to ensure that AI is accessible for everyone, including people with disabilities, in all aspects of its development and implementation.


Major Discussion Point

Risks and Opportunities of AI for Marginalized Populations


Agreed with

Nicol Turner Lee


Rasha Younes


Agreed on

Need for diverse representation in AI development


Nighat Dad

Speech speed

130 words per minute

Speech length

474 words

Speech time

218 seconds

Companies must do more to address harms on their platforms

Explanation

Tech companies need to take more responsibility in addressing the harms caused by AI on their platforms. This includes issues like deep fake images and videos that disproportionately affect women and girls.


Evidence

Experience from Meta’s Oversight Board in framing how companies like Meta can use automation to deal with harms on their platforms.


Major Discussion Point

Addressing Biases and Harms in AI Systems


AI governance conversations are concentrated in Global North countries

Explanation

Discussions about AI governance are primarily taking place in developed countries. This leads to a lack of input from regions where these technologies are often deployed, particularly affecting marginalized groups in those areas.


Major Discussion Point

Ensuring Inclusive AI Governance


Rasha Younes

Speech speed

114 words per minute

Speech length

878 words

Speech time

458 seconds

Developers should conduct regular bias audits and build diverse datasets

Explanation

To mitigate biases in AI systems, developers need to regularly audit their systems for bias and ensure they are using diverse, representative datasets. This is particularly important for protecting LGBTQI+ individuals from discriminatory outcomes.


Evidence

Findings from a report on digital targeting of LGBTQI+ people across the Middle East and North Africa region.


Major Discussion Point

Addressing Biases and Harms in AI Systems


Agreed with

Nicol Turner Lee


Sara Minkara


Agreed on

Need for diverse representation in AI development


Need to strengthen protections against digital targeting of vulnerable groups

Explanation

There is a pressing need to enhance safeguards against the digital targeting of vulnerable populations, particularly LGBTQI+ individuals. This includes addressing biases in content moderation systems and ensuring privacy protections.


Evidence

Examples of government targeting of LGBTQI+ people using monitoring tools and cybercrime legislation.


Major Discussion Point

Ensuring Inclusive AI Governance


Amy Colando

Speech speed

153 words per minute

Speech length

1472 words

Speech time

576 seconds

Microsoft employs various approaches to combat societal biases in AI systems

Explanation

Microsoft has implemented multiple strategies to address societal biases in their AI systems. This includes policies on fairness, tools to map and manage bias issues, and investments in identifying areas of harm across different demographic groups.


Evidence

Examples include the Global Data Center Community Pledge, partnership with the Stimson Center for the Global Perspectives Responsible AI Fellowship Program, and development of a customer code of conduct for generative AI services.


Major Discussion Point

Addressing Biases and Harms in AI Systems


Differed with

Nicol Turner Lee


Differed on

Approach to addressing AI biases


Alisson Peters

Speech speed

157 words per minute

Speech length

3211 words

Speech time

1220 seconds

US government has policies to ensure human rights assessments in AI procurement

Explanation

The US government has implemented policies requiring human rights assessments when purchasing or deploying AI systems. This includes executive orders and memos aimed at safeguarding human rights in the development and use of AI.


Evidence

Mention of executive orders and memos introduced into the US government system over the last four years.


Major Discussion Point

Addressing Biases and Harms in AI Systems


Multistakeholder model needed to understand AI’s societal impacts

Explanation

A multistakeholder approach is crucial for comprehensively understanding how AI tools are impacting society. This model ensures that discussions include perspectives beyond just governments and the people representing them.


Major Discussion Point

Ensuring Inclusive AI Governance


US government working on frameworks for risk assessment with human rights at core

Explanation

The US government is developing risk assessment frameworks that prioritize human rights considerations in AI development and deployment. This includes work on international conventions and national security memoranda.


Evidence

Mention of the Council of Europe convention on AI, human rights, rule of law, and democracy, and the US national security memorandum on AI use in national security systems.


Major Discussion Point

Ensuring Inclusive AI Governance


Khaled Mansour

Speech speed

137 words per minute

Speech length

82 words

Speech time

35 seconds

Challenge is transparency in human rights impact assessments

Explanation

There is a lack of transparency in the human rights impact assessments conducted by companies and governments. Publishing at least portions of these reports would allow people affected by AI technologies to understand how their rights are being considered.


Major Discussion Point

Transparency and Accountability in AI Development


Guus Van Zwoll

Speech speed

152 words per minute

Speech length

710 words

Speech time

279 seconds

Freedom Online Coalition working to keep human rights central in AI governance

Explanation

The Freedom Online Coalition, through its Task Force on AI and Human Rights, is working to ensure that human rights remain at the center of AI governance frameworks. This includes updating previous statements to address the evolving AI landscape and its impact on marginalized communities.


Evidence

Plans for collaborating with Stanford’s AI MISUSE tracker, organizing workshops for policymakers, and spotlighting examples of AI that advance human rights.


Major Discussion Point

Ensuring Inclusive AI Governance


Agreements

Agreement Points

AI can both advance equity and reinforce discrimination

speakers

Desirée Cormier Smith


Nicol Turner Lee


Jessica Stern


arguments

AI can advance equity in healthcare, education, and economic opportunity


AI systems often reinforce historical discrimination against marginalized groups


AI can help reimagine inclusive futures but biases in data must be addressed


summary

The speakers agree that while AI has the potential to advance equity and create inclusive futures, it can also reinforce existing biases and discrimination if not properly addressed.


Need for diverse representation in AI development

speakers

Nicol Turner Lee


Sara Minkara


Rasha Younes


arguments

Need to interrogate AI models for bias and whether automation is appropriate


AI development often leaves out the disability community


Developers should conduct regular bias audits and build diverse datasets


summary

The speakers emphasize the importance of including diverse perspectives, particularly from marginalized communities, in the development and auditing of AI systems to mitigate biases and ensure inclusivity.


Similar Viewpoints

These speakers emphasize the need for more inclusive and diverse participation in AI governance discussions, particularly to address the needs and vulnerabilities of marginalized groups.

speakers

Nighat Dad


Rasha Younes


Alisson Peters


arguments

AI governance conversations are concentrated in Global North countries


Need to strengthen protections against digital targeting of vulnerable groups


Multistakeholder model needed to understand AI’s societal impacts


Unexpected Consensus

Importance of unions in AI governance

speakers

Kelly M. Fay Rodriguez


Alisson Peters


arguments

Unions play a key role in safeguarding workers’ rights amid AI expansion


Multistakeholder model needed to understand AI’s societal impacts


explanation

While most discussions focused on government and tech company roles, there was unexpected consensus on the importance of labor unions in shaping AI governance and protecting workers’ rights in the context of AI expansion.


Overall Assessment

Summary

The main areas of agreement include the dual nature of AI in both advancing equity and potentially reinforcing discrimination, the need for diverse representation in AI development and governance, and the importance of addressing biases and harms in AI systems.


Consensus level

There is a moderate to high level of consensus among the speakers on the key challenges and necessary steps for ensuring inclusive and responsible AI development and governance. This consensus suggests a growing recognition of the need for multistakeholder approaches and increased attention to the impacts of AI on marginalized communities, which could potentially influence future policy and industry practices in AI development and deployment.


Differences

Different Viewpoints

Approach to addressing AI biases

speakers

Nicol Turner Lee


Amy Colando


arguments

Need to interrogate AI models for bias and whether automation is appropriate


Microsoft employs various approaches to combat societal biases in AI systems


summary

While both speakers acknowledge the need to address biases in AI systems, they differ in their approaches. Turner Lee emphasizes the need for critical examination of AI models and questioning the appropriateness of automation, while Colando focuses on Microsoft’s implemented strategies and tools to manage bias issues.


Overall Assessment

summary

The main areas of disagreement revolve around the specific approaches to addressing AI biases and ensuring inclusive AI governance.


difference_level

The level of disagreement among the speakers appears to be relatively low. Most speakers agree on the fundamental issues surrounding AI’s impact on marginalized populations and the need for more inclusive governance. The differences mainly lie in the specific strategies and focus areas each speaker emphasizes. This level of disagreement suggests a general consensus on the importance of addressing AI’s risks for marginalized groups, but highlights the need for further discussion and collaboration on the most effective approaches to tackle these issues.


Partial Agreements

Partial Agreements

Both speakers agree on the need for more inclusive AI governance discussions. However, Dad emphasizes the lack of input from regions where these technologies are deployed, particularly affecting marginalized groups, while Peters focuses on the importance of a multistakeholder approach to comprehensively understand AI’s societal impacts.

speakers

Nighat Dad


Alisson Peters


arguments

AI governance conversations are concentrated in Global North countries


Multistakeholder model needed to understand AI’s societal impacts



Takeaways

Key Takeaways

AI offers opportunities to advance equity but also risks reinforcing discrimination against marginalized groups


Effective safeguards and inclusive governance are needed to mitigate AI harms to vulnerable populations


Multistakeholder collaboration is crucial to ensure AI development considers diverse perspectives


More transparency and accountability are needed in AI development, especially regarding human rights impacts


AI governance must center human rights and protect marginalized communities


Resolutions and Action Items

The Freedom Online Coalition will update its 2020 statement on AI and human rights in the coming year


The FOC will organize workshops to educate policymakers on AI challenges for human rights


The FOC will spotlight examples of AI that can advance human rights for marginalized groups


Unresolved Issues

How to effectively include marginalized voices in AI governance discussions


Balancing transparency in human rights impact assessments with customer confidentiality


Addressing AI use in military settings and its potential humanitarian impacts


Closing the digital divide to ensure equitable access to AI benefits


Suggested Compromises

Developing industry-accepted standards for due diligence and transparency in AI development


Creating inclusive digital spaces that actively counter discrimination while protecting privacy


Thought Provoking Comments

AI offers incredible potential to advance equity by increasing access to health care, education, and economic opportunity for those who need them the most. However, too often marginalized populations bear the worst harms of AI.

speaker

Desirée Cormier Smith


reason

This comment succinctly captures the core tension at the heart of AI’s impact on marginalized groups – its potential for both benefit and harm.


impact

It set the stage for the entire discussion by framing the key issues around AI and marginalized populations that subsequent speakers explored in more depth.


Computers might be binary, but people are not. Generative AI can help us reimagine inclusive futures and express ourselves safely and authentically. However, we need to be mindful about biases in the data that AI tools and systems are built on and how they translate into individuals’ lives.

speaker

Jessica Stern


reason

This comment insightfully highlights how AI systems can reinforce or challenge existing social constructs around identity, particularly for LGBTQI+ individuals.


impact

It broadened the conversation to consider AI’s impact on gender and sexual identity expression, which was further explored in later comments about LGBTQI+ rights.


We’re creating AI systems, and in many respects, we haven’t closed the accessibility divide. That creates its own set of challenges as to who will be able to benefit.

speaker

Nicol Turner Lee


reason

This comment draws attention to how existing digital divides can be exacerbated by AI, potentially widening inequality.


impact

It shifted the discussion to consider not just the design of AI systems, but also who has access to them, leading to further exploration of global inequities in AI development and deployment.


The concentration of these conversations are very much concentrated in some global North countries. And in the past, we have seen how technology that is being developed, designed, built, mostly, you know, like dumped in our regions, you know, and we have no say into how, you know, these are these technologies are designed for the marginalized groups in our regions.

speaker

Nighat Dad


reason

This comment highlights the global power imbalances in AI development and governance, raising important questions about representation and self-determination.


impact

It prompted further discussion about the need for more inclusive, global approaches to AI governance and development.


To mitigate these biases, developers should conduct regular bias audits and build diverse, representative data sets, and policymakers should also require independent testing of AI systems for biases, particularly when deployed in public-facing tools.

speaker

Rasha Younes


reason

This comment offers concrete, actionable steps to address AI bias, moving the conversation from problem identification to potential solutions.


impact

It shifted the discussion towards more practical considerations of how to implement safeguards and protections in AI development and deployment.


Overall Assessment

These key comments shaped the discussion by progressively deepening the analysis of AI’s impact on marginalized populations. The conversation moved from identifying broad tensions and challenges to exploring specific impacts on different groups (e.g. LGBTQI+, Global South populations) and finally to proposing concrete actions and governance approaches. This progression allowed for a comprehensive exploration of the complex interplay between AI, human rights, and marginalized communities, while also highlighting the urgent need for more inclusive and equitable approaches to AI development and governance.


Follow-up Questions

How can we ensure patterns of environmental racism and pollution affecting marginalized communities in the U.S. are not replicated with increased AI use?

speaker

Audience member (Dr. Lee)


explanation

This question addresses the potential environmental impacts of AI infrastructure on marginalized communities, which is an important consideration as AI becomes more integrated into public systems.


What can governments and companies do to have more conversations around military use of AI and what safeguards can they put in place?

speaker

Usama Kilji


explanation

This question highlights the need for more discussion and safeguards around AI use in military and conflict situations, which can have severe human rights impacts on civilian populations.


What is preventing companies and governments from publishing at least a portion of their human rights impact assessment reports related to AI technologies?

speaker

Khaled Mansour


explanation

This question addresses the need for greater transparency in how companies and governments assess the human rights impacts of AI technologies, which is crucial for accountability and public trust.


How can we revise and update the 2020 FOC statement on AI and human rights to reflect current AI developments and emphasize the disproportionate impact on marginalized communities?

speaker

Guus Van Zwoll


explanation

This area for further research is important to ensure that human rights principles are embedded in AI governance globally, reflecting the rapid changes in AI technology since 2020.


How can we identify and highlight the disproportionate impact of AI on marginalized groups through tools like Stanford’s AI MISUSE tracker?

speaker

Guus Van Zwoll


explanation

This research area is crucial for ensuring transparency, accountability, and advocacy for equitable AI practices that don’t disproportionately harm marginalized communities.


How can we conduct community-rooted AI research that prioritizes diversity and addresses AI impacts on marginalized groups?

speaker

Guus Van Zwoll


explanation

This research direction is important for fostering inclusive and rights-based AI governance by incorporating diverse perspectives and experiences.


How can AI be leveraged to empower marginalized groups while ensuring accountability and ethical development?

speaker

Guus Van Zwoll


explanation

This area of research focuses on identifying and developing AI applications that can advance human rights and support marginalized communities, balancing the potential benefits with ethical considerations.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #165 From Policy to Practice: Gender, Diversity and Cybersecurity


Session at a Glance

Summary

This panel discussion focused on gender diversity and cybersecurity, addressing the underrepresentation of women in the cybersecurity workforce and the gender-specific impacts of cyber threats. Participants highlighted the need for gender mainstreaming in cybersecurity policies and capacity-building initiatives. They discussed various national and international efforts to increase women’s participation in cybersecurity, such as Canada’s feminist foreign policy and Chile’s incorporation of gender perspectives in its national cybersecurity strategy.


The panelists emphasized the importance of addressing gender-specific cyber threats, including online harassment, deep fakes, and AI-driven job displacement risks for women. They stressed the need for inclusive technology development and policy formulation that considers diverse perspectives. Capacity-building programs like ITU’s HerCyberTracks and the Women in Cyber Fellowship were highlighted as effective ways to empower women in cybersecurity.


The discussion also touched on the intersection of gender with other aspects of diversity, emphasizing the need for inclusive approaches that consider factors like neurodiversity and visual impairment. Panelists called for increased funding and support for successful regional programs addressing gender imbalances in cybersecurity education and training.


The panel concluded with plans to produce a compendium of good practices for mainstreaming gender into cybersecurity efforts, emphasizing the importance of multistakeholder collaboration in addressing these challenges. Overall, the discussion underscored the critical need for gender-responsive approaches in cybersecurity to ensure a more inclusive and secure digital future.


Keypoints

Major discussion points:


– The underrepresentation of women in the global cybersecurity workforce (only about 25%)


– The need to address gender-specific cyber threats and harms


– The importance of gender mainstreaming in cybersecurity policies and capacity building programs


– The role of emerging technologies like AI in exacerbating gender-based online threats


– Strategies for increasing women’s participation in cybersecurity, including targeted training programs


Overall purpose:


The goal of this discussion was to explore ways to increase gender diversity in cybersecurity and address gender-specific cyber threats. The panel aimed to gather insights and recommendations to inform future policy and capacity building efforts.


Tone:


The tone was largely constructive and solution-oriented. Panelists spoke candidly about challenges but focused on sharing positive initiatives and proposing ways to make progress. There was a sense of urgency about addressing these issues, but also optimism about potential solutions. The tone remained consistent throughout, with panelists building on each other’s points collaboratively.


Speakers

– Shimona Mohan: Moderator, from UN Institute for Disarmament Research


– Aaryn Yunwei Zhou: Deputy Director of the International Cyber and Emerging Technology Policy Division at Global Affairs Canada


– Yasmine Idrissi Azzouzi: Cybersecurity Program Officer at the International Telecommunication Union in Geneva


– Hoda Al Khzaimi: Director of the Center for Cybersecurity at New York University Abu Dhabi, Founder and Director of the Emerging Advanced Research Acceleration for Technologies, Security, and Cryptology Research Lab and Center


– Catalina Vera Toro: Alternate Representative, Permanent Mission of Chile to the Organization of American States


– Luanda Domi: Gender Mainstreaming and Cyber Skills Development Manager, Global Forum on Cyber Expertise


Additional speakers:


– Pavel Mraz: Cybersecurity Researcher at UNIDIR


– Jocelyn Meliza: MAG member and co-facilitator for the BPF on cybersecurity


– Paula: Audience member, cybersecurity policy advisor


– Kosi: Audience member, student from Benin, chair of an NGO called Women Be Free


Full session report

Gender Diversity and Cybersecurity: Addressing Challenges and Opportunities


This panel discussion, moderated by Shimona Mohan from the UN Institute for Disarmament Research (UNIDIR), brought together experts from various organisations to explore the critical issue of gender diversity in cybersecurity. The conversation centred on the underrepresentation of women in the cybersecurity workforce, gender-specific cyber threats, and strategies to promote inclusivity and equality in the field.


Underrepresentation and Policy Initiatives


The discussion began by highlighting the stark gender imbalance in the global cybersecurity workforce, with women representing only about 25% of professionals in the field. Panellists agreed on the urgent need to increase this representation and shared various national and international efforts to address the issue.


Aaryn Yunwei Zhou, from Global Affairs Canada, emphasised the importance of gender-responsive policies, citing Canada’s feminist foreign policy and the use of Gender-Based Analysis Assessments for all policies. Zhou also mentioned Canada’s involvement in the Global Partnership for Action on Gender-Based Online Harassment and Abuse, demonstrating a commitment to addressing gender-specific cyber threats at an international level.


Catalina Vera Toro from Chile’s Permanent Mission to the Organization of American States highlighted her country’s gender-responsive national cybersecurity policy, which aims to increase women’s participation in cybersecurity to 35% by 2030. Toro also discussed Chile’s feminist foreign policy and its focus on promoting gender equality in international forums, including the Open-Ended Working Group on cyber issues.


Gender-Specific Cyber Threats


The panel delved into the unique challenges women face in the digital sphere, emphasising the need to address gender-specific cyber threats. Aaryn Yunwei Zhou pointed out the disproportionate impact of internet shutdowns on women, while Hoda Al Khzaimi, from New York University Abu Dhabi, presented alarming statistics on deepfake content, revealing that 96% of all deepfake content online is non-consensual sexual content targeting women. Al Khzaimi noted, “These attacks often aim to silence journalists, politicians, and activists,” highlighting the broader societal implications of these threats.


The discussion also touched on the potential exacerbation of gender disparities due to emerging technologies. Luanda Domi, from the Global Forum on Cyber Expertise, shared a concerning statistic: “3.7% of women’s jobs globally are now at risk of being replaced by AI compared to 1.4% of men’s jobs.” This underscored the importance of training women for AI-driven roles and considering the gendered impact of technological advancements.


Capacity Building and Education Initiatives


A significant portion of the discussion focused on strategies to increase women’s participation in cybersecurity through targeted training programmes and capacity-building initiatives. Yasmine Idrissi Azzouzi from the International Telecommunication Union (ITU) in Geneva highlighted the organisation’s Women in Cyber Mentorship Program, which has transitioned into the HerCyberTracks programme. This initiative aims to empower women in the public sector through a holistic approach combining training, mentorship, role modeling, community building, and real-world exposure.


Aaryn Yunwei Zhou mentioned Canada’s Women in Cyber Fellowship for diplomats, demonstrating the importance of specialised programmes targeting specific sectors. Hoda Al Khzaimi stressed the need for deeper technical education beyond surface-level awareness, while Luanda Domi highlighted the importance of tailoring programmes to address specific needs, including neurodiversity.


The panellists agreed on the necessity of comprehensive and tailored capacity-building programmes to address the diverse needs of women and other underrepresented groups in cybersecurity. They also emphasised the importance of localisation and sustainability in these programmes, ensuring that they are adapted to regional contexts and can continue to have long-term impact.


Challenges and Future Directions


The discussion touched on broader challenges in promoting gender diversity in cybersecurity. Luanda Domi pointed out that only 0.05% of development budgets target gender initiatives, highlighting the need for increased funding and support for successful regional programmes. Domi also mentioned the Cyber Safe Foundation’s program in Africa as an example of effective regional initiatives addressing gender imbalances in cybersecurity education and training.


Catalina Vera Toro discussed the ethical considerations and challenges associated with emerging technologies, emphasising the need for responsible development and deployment of AI and other advanced systems.


The panel addressed the digital gender divide in Africa, recognising the unique challenges faced by women in the region and the importance of tailored solutions. Audience members also raised the need for multilingual training programs and a centralised platform listing available fellowships and programs for cybersecurity capacity building.


Conclusion


The discussion highlighted the critical need for gender-responsive approaches in cybersecurity to ensure a more inclusive and secure digital future. While there was a high level of consensus among speakers on the importance of addressing gender disparities in cybersecurity, the conversation also revealed the complexity of the issue and the need for multifaceted solutions.


The panel emphasised the importance of gender mainstreaming in cybersecurity policies, the development of targeted capacity-building programmes, and the need to address gender-specific cyber threats. The discussion also highlighted the potential of emerging technologies like AI to exacerbate gender-based online threats, emphasising the need for proactive measures to ensure that technological advancements benefit all genders equally.


As the field of cybersecurity continues to evolve, the insights and recommendations from this panel provide valuable guidance for policymakers, educators, and industry leaders working towards a more diverse and inclusive cybersecurity workforce. The panel concluded with plans to produce a compendium of good practices for mainstreaming gender into cybersecurity efforts, as mentioned by Pavel Mraz from UNIDIR, underscoring the importance of sharing knowledge and successful strategies across different regions and organisations.


Session Transcript

Shimona Mohan: Catalina on the screen. I hope you can hear us. So we don’t hear you yet, but if you hear me, please give me a thumbs up, or a nod, or something. OK, perfect. You can hear me. Fantastic. So the reason why I can’t hear you is because I don’t have my earpiece on yet. So I should be able to hear you in a couple of minutes as well. OK. So I hope that’s all good to go. And I hope our colleagues in the room can hear me loud and clear. Fantastic. So thank you all for joining us, and for those of us who are joining us online as well, very nice to have you with us to discuss something that we’ve heard a lot about this year, which is gender diversity and cybersecurity. We know that, the world over, there’s only about 25% of women in the cybersecurity workforce. So this is a very important conversation for us to have. And I’m very, very glad that we, as the United Nations Institute for Disarmament Research, along with the organizations that have collaborated on this event with us, which are the International Telecommunication Union, Global Affairs Canada, and the Stimson Center, were able to convene this conversation and further the discussions around gender diversity and cybersecurity. So just to give you a little bit of a brief about today’s session, we know that there is a growing acknowledgment of the gender dimension of cyber threats, as well as the persistent digital gender divide, with women representing only maybe about 25% of the global cybersecurity workforce. However, specific gender-differentiated impacts of cyber threats have continued to increase in the global cyberspace at the current moment, and this hinders the multi-stakeholder efforts to enhance cyber resilience and promote inclusive international peace and security in governance models.
So today, with our fantastic panel, which is drawn from all kinds of diversities around geographies, around stakeholders, and around substantive expertises, we’ll discuss a little bit more about how to make sure that these gender-specific harms and threats are countered in an effective manner, both through substantive measures as well as through governance measures. So joining us today, just to give you a brief introduction, is, firstly, Ms. Aaryn Yunwei Zhou, who is the Deputy Director of the International Cyber and Emerging Technology Policy Division at Global Affairs Canada. We also have Professor Hoda Al Khzaimi, who is joining us online for now, and hopefully also in person, who is the Director of the Center for Cybersecurity, New York University Abu Dhabi, and also the Founder and Director of the Emerging Advanced Research Acceleration for Technologies, Security, and Cryptology Research Lab and Center. We also have Ms. Yasmine Idrissi Azzouzi, who is the Cybersecurity Program Officer at the International Telecommunication Union in Geneva. We’ll also have Ms. Luanda Domi, who is the Gender Mainstreaming and Cyber Skills Development Manager, Global Forum on Cyber Expertise. And we also have online with us Ms. Catalina Vera Toro, who is the Alternate Representative, Permanent Mission of Chile to the Organization of American States. Myself, I am Shimona Mohan from the UN Institute for Disarmament Research. And joining us online is also my colleague, Mr. Pavel Mraz, who is a Cybersecurity Researcher, also at UNIDIR. So a couple of housekeeping announcements. I’ll ask our fantastic panelists to give us a little bit of a brief on the questions I’d like to ask them about the issue of gender and cybersecurity. We’ll have a round of introductions for about five to six minutes each, and then we’ll open up the floor for discussions. So please come prepared with your questions after you hear from our panelists.
I would like to also flag that this discussion is part of an ongoing process of collecting recommendations for a compendium of good practices around gender and cybersecurity, mainstreaming gender in cybersecurity, that we as UNIDIR are undertaking with the help and collaboration and contribution of our partner organizations who are also on this panel. So with that, I think we’ll start off with the interventions for the day. And I will first invite, perhaps, since we have Aaryn in the room, her to kind of give us a little bit of a brief around how can governments tackle gendered cyber threats and attacks, and what kind of policy imperatives are required, and how can a government sort of mainstream these gender considerations in their cyber and digital policy?


Aaryn Yunwei Zhou: Thank you so much for inviting me to this panel. I’m pleased to be here. So for Canada, our approach is to mainstream gender considerations in all aspects of our work, specifically with threats and attacks. We take gender into consideration for both assessments of the threats and our responses to them. I’m sure this audience knows there’s a gender dimension to every aspect of cybersecurity, so not taking gender into consideration actually makes our responses far less effective. So a couple of examples I just wanted to share include internet shutdowns, specifically in Iran. For example, women tended to use Instagram much more, and the internet shutdown had a much more disproportionate effect on their both social and economic participation. And we need to get rid of this arbitrary divide between online and offline, as offline violence is often preceded by online violence. So this is not only the right thing to do, but again, will make our responses more effective. In terms of policy imperatives in Canada, all of our policy initiatives and programs have to go through what is called the Gender-Based Analysis Assessment, GBA+. This isn’t only about how policies affect women, but also how policies affect men and other gender-diverse people. And it’s not only about gender, it also takes into account race, class, ethnicity, cultural background, to really understand how specific policy responses affect people in all their diversity, and to make sure that the programs that we’re designing are fit for purpose. Yeah, and I’ll stop there.


Shimona Mohan: Perfect, thank you so much. I’m also wondering, since you’re here as a representative of Canada, and we know that Canada has a feminist foreign policy with a specific Women, Peace and Security Agenda National Action Plan, that also has several mentions of online harassment and abuse against women and people of diverse gender identities. How have these helped in the prioritization of the inclusion of tech considerations when it comes to gendered harms?


Aaryn Yunwei Zhou: So I think the thing that we do that’s top of mind for us is to create a taboo against online gender-based violence and harassment. So Canada was one of the founding members of the Global Partnership for Action on Gender-Based Online Harassment and Abuse. It’s grown to about 15 countries to form a community of like-minded countries that are building norms against online gender-based harassment and violence at a time when this is increasingly controversial, unfortunately. And what this has meant is we have a community of practice amongst different governments that can share experiences and learnings on how to do this both domestically and in multilateral contexts, notably around the Commission on the Status of Women every year. And another important thing that we do is sponsor the Women in Cyber Fellowship. Because of this program, we’ve been able to train over 50 women diplomats from around the world. Not only has this meant more diverse voices around the table at the Open-Ended Working Group, it’s the first time that any First Committee process has reached gender parity at the UN, and it also creates a community for women diplomats, and they can turn to each other and share learning and support each other to bring more diverse voices to those processes.


Shimona Mohan: Fantastic. Thank you so much. In fact, the Women in Cyber Fellowship has been a beacon of hope for all of us. And on the basis of the Women in Cyber Fellowship, we’ve also kind of, at UNIDIR, established something called… the Women in AI Fellowship to replicate the same kind of success in AI-related conversations for women diplomats as well. Speaking of the OEWG, if I may turn to Yasmine now, I think the OEWG on cyber has often also focused on reducing the gender digital divide to ensure that women get access and equal opportunities in the online space. And this is also, I know, something that ITU has worked on extensively. So how are we currently faring with the divide? And how do you think this gap can be sort of closed?


Yasmine Idrissi Azzouzi: Thank you for that, Shimona. So indeed, the ITU for more than 20 years now, we’ve been very active in closing the digital gender divide, in particular by equipping women with digital skills. So the most sort of flagship initiatives when it comes to that are EQUALS, which is a global partnership, and also Girls in ICT, which is very much focused on inspiring younger women to pursue careers in technology. In the specific context of cybersecurity, we are mandated by a specific resolution to promote the growth and development of a skilled and diverse cybersecurity workforce, and in particular, to address the skills shortage by including more women and promoting their employment. So based on that, in 2021, we first launched the Women in Cyber Mentorship Program, which is different from the Fellowship Program, in three regions, so the Arab region, Asia-Pacific, and Africa. And one of the cornerstones of this program was really the soft skills development, the mentorship aspect. And as we were running it for three, four years, we received some feedback from participants that when they often participate in Women in Cyber-related programs or Women in Tech-related programs, there’s a strong focus on soft skills and a strong focus on developing leadership and whatnot. And they wanted to go a little bit beyond that and really focus on the technical skills and the hard skills in parallel to this. So then we decided to take this experience and we launched HerCyberTracks, which is a highly specialized and tailored training. So this program puts into practice what I believe to be a holistic approach to capacity building, and why holistic? Basically, to me, I think that capacity building is not just about training. Training is, of course, very important, but it’s also focusing on other elements. One, promotion of role models. 
So definitely elevating individuals that are from underrepresented communities as successful women in cybersecurity, shedding light on their successes, and showcasing that basically it is possible to have somebody that looks like me be able to be in a leadership position in cybersecurity. Second is community building. So facilitating peer exchange and support networks is definitely key to help individuals navigate challenges when it comes to being a woman in cybersecurity in a male-dominated field. Third is exposure to the reality of the field. So we do so through offering study visits to CERTs of other countries, for example, where women can see how things are being done and learn on the job, basically job placements. And this basically shows career pathways and provides also practical advice on the reality of the field. And then, last but not least, of course, mentorship is still very key. So we connect aspiring professionals with mentors from a bit all over the world and basically guide them through career pathways that have to do with their professional life, but also their personal life, because, of course, we are, let’s say, multifaceted beings as well. So with HerCyberTracks, we’ve tailored curricula to be specifically for women in the public sector. And these are across three tracks, three cybertracks. One is policy and diplomacy, the second is incident response, and the third is criminal justice or cybercrime. And basically, we have decided to go, let’s say, beyond traditional training, incorporate study visits, incorporate networking opportunities, incorporate mentorship, and also focusing very much on inter-regional exchange. So our cohorts have participants from Africa, from Eastern Europe in particular, very different contexts, where actually people were surprised to be facing similar challenges, even if the context is quite different. 
So to sum up, basically, in order to really close the gender digital divide and really promote what we call the equal, full, and meaningful representation of women in cybersecurity, not just as a checkbox being checked, but really meaningful and skilled representation, we must adopt a holistic approach such as this when it comes to capacity building, combining training, combining mentorship, combining role modeling, community building, and real-world exposure. Thank you.


Shimona Mohan: Perfect. Thanks so much, Yasmin. And this sounds like a fantastic sort of program for women of all walks of life to sort of join and make sure that they can contribute and learn a lot. And I took notes when you were talking, and I really like the fact that you focused on all the different ways of engaging them as tracks, and then sort of making sure that all of those tracks are coming together as one holistic program to make sure that they’re learning as much as they can. But I think going forward from here, I would also love to talk a little bit more about the tech side of things. How are we seeing these harms come up? Because we’ve heard a lot about these harms and how they have specific impacts or adverse effects on women and other gender-diverse individuals. But we’re lucky to have Professor Hoda on the call with us. So I will make use of her expertise and perhaps also ask her, what kind of gendered threats do you see sort of existing in the cyberspace? And who do you think these affect the most? And if you could give us some examples, that would be fantastic. I don’t see them on the chat, sorry, on the call, but I hope you can hear me.


Hoda Al Khzaimi: Can you hear me?


Shimona Mohan: Yes, I hear you, I hear you, I hear you, I hear you.


Hoda Al Khzaimi: Perfect. Thank you, Shimona, for inviting me to this important discussion. I think gendered cybersecurity threats are becoming increasingly urgent, especially as emerging technologies such as AI, deepfakes, and quantum computing continue to shape the digital landscape and today’s reality. Today, we’ll address two key areas, which are examples of these gender-specific threats, including threats that can be amplified by emerging technologies, and strategies for designing next-generation solutions to mitigate these harms. The problem in this space for us is how do we define harm? When it comes to gender and women and vulnerable groups, harm is defined in a magnified aspect, including intangible harm that comes with reputational risk. We have seen technologies such as deepfakes, for example, which are one of the significant emerging threats, particularly for women, where it was found by DeepTrace in 2019 that 96% of all deepfake content online is non-consensual sexual content targeting women. These attacks often aim to silence journalists, politicians, and activists, an example of which we’ve seen recently within the Indian media, where one of the Indian journalists was targeted with a deepfake video designed to discredit her work and incite harassment. Emerging technologies as well, like AI-based content verification tools that are being developed, for example, by Microsoft, where I’m talking about a video authenticator, and blockchain initiatives, like the one that Adobe has been developing in the Content Authenticity Initiative, are promising solutions for those kinds of deepfake aspects, as well as other advanced machine learning and cryptographic signature schemes that aim to identify the origins of content and flag manipulated media online. 
We really need to encourage platforms to start authenticating every type of messaging that’s being created online, and also flagging non, I would say, integral or non-authentic material that is being exchanged. When we talk as well about biometrics and AI biases within, for example, facial recognition, we know very well that facial recognition systems often exhibit systematic bias against gender in general, but against women of color in particular. A study from 2018 showed that the error rate for those systems is up to 34.7% for darker-skinned women in comparison to less than 1% for lighter-skinned men. Those biases can lead to wrongful surveillance and enforcement action, which often targets marginalized groups. So emerging technologies, such as federated learning, for example, and privacy-preserving AI models can reduce biases by ensuring diverse, decentralized data training is being deployed. Additionally, initiatives by the ITU and UNESCO are pushing for global standards and ethical AI development, for example, emphasizing that inclusivity is a must, and it should be considered in the design premise of those kinds of technology. Talking as well about cyber harassment and IoT devices: we have emerging technologies within the Internet of Things, for example, which introduce new vulnerabilities. Smart home devices and wearables can be exploited for stalking and harassment. There have been multiple incidents in the United States where lone females have been tracked by tracker devices and subjected to targeted attacks when they are in a specific area, affecting women not just in abusive relationships, but affecting women in general. According to UNESCO, such misuse has already impacted 10% of women in developed nations. 
Security by design principles are a must for us when we are developing and designing the next generation solutions, not just in cybersecurity, but in general and the technology platforms.


Shimona Mohan: Thank you so much, Professor. I would also like to sort of, because you mentioned so many threats and especially the ones exacerbated by AI now, from more of a technical perspective, I would really love to understand what kind of measures can be taken, both proactive and reactive.


Hoda Al Khzaimi: I’m sorry, I can’t hear you, Shimona.


Shimona Mohan: Oh, sorry. Are you able to hear me now? So our technical colleagues in the room, if you could please ensure that Professor can hear us, that would be great. I understand we did a little bit of a technical thing to counter the glitch that was occurring in terms of the audio. So maybe in the meanwhile, until we’re resolving this glitch, and then I’ll come back to you, Professor. I assume you can’t hear me, so I’ll come back to you either way. But I actually wanted to ask the Professor first if we could have an overview of the kind of technical solutions that are possible to employ, both proactive and reactive. But I’ll save that question for when Professor Hoda can hear us. And in the meanwhile, because Yasmine spoke a lot about capacity building, I’d like to come back to her. Shimona, if you can hear us, we can’t hear you. Oh, not at all? Okay. I see. So maybe, could we unmute? We can’t hear you. Maybe you are on mute. Yes. Can you hear me now? Yes. I guess too much. Okay. Okay. Okay. Could you confirm if you’re able to hear me, with maybe like a thumbs up? No, you can’t?


Luanda Domi: Unmute the microphone.


Shimona Mohan: We see it as mute online, the room microphone. Could we unmute the room, please? Okay. Can you hear me now? Yes. Yes. Okay, perfect. Thank you. So, perfect. Okay. So, there’s always a saying in tech panels, that there’s no tech panels without tech issues. So, we’re living up to that. But, professor, if you can hear me, I would like to come back to you on a follow-up question that I had. Thank you for giving us a background, and especially the exacerbated threats by AI. From a technical perspective, I’d really like to understand what kind of measures we can take, both proactive and reactive, to sort of lessen these harms, and who should employ these measures. So, if you could speak a little bit to that, that would be fantastic. Okay. So, not sure if… No. Okay. Could we unmute, please? Okay. I’m sorry to ask this again, but can you hear me? So, professor Hoda, I’m not sure if you can hear me.


Luanda Domi: I think because she’s on the move, maybe she’s having challenges. Okay. Shimona, you can get back to her later.


Shimona Mohan: Sure. Okay. Sounds good. So we’ll go back to my previous arrangement of questions, then. I just wanted to get a sense of the solutions that we can employ to counter these gender-specific harms against women and people of diverse gender identities. And moving on from the technical perspective, I wanted to come back to Yasmine and ask: since we want to effectively combat these gendered threats and harms, a variety of internet governance communities perhaps need to be cognizant of these threats and then be well trained to respond to them. You spoke a little about this in your intervention about HerCybertracks, which ITU is doing. I’d also like to understand how capacity-building interventions, such as HerCybertracks or, as Aaron mentioned, the Women in Cyber Fellowship, help in this effort.


Yasmine Idrissi Azzouzi: Thanks for that, Shimona. I hope there won’t be any unexpected sound effects. But I think that including more women in the workforce would be key. Because, of course, as we know, in the cybersecurity field there’s still this persistent challenge. There are issues beyond just recruitment: also retention and meaningful representation of women in cybersecurity. And since women are disproportionately affected by online risks, this can have one of two effects. It can either encourage them to go into the field to be able to face these risks or, on the other hand, discourage them. But even more so, the programs that we run are with women in the public sector. So women that are often politicians, diplomats, women in the public eye, which, of course, makes them a higher target for gendered harms online, harassment, doxing, and whatnot. So through the holistic capacity-building approach that we use for HerCybertracks that I mentioned earlier, we also run these peer exchange platforms or sessions on the challenges of being a woman in cybersecurity. These are intergenerational and interregional. And now participants affectionately call them group therapy, because they’ve actually been platforms for them to share experiences and the difficulty of overcoming obstacles in the workplace and in the workforce at large. Tears have been shed and hugs have been given at these kinds of things, even though participants come from very different contexts, again, from a cybersecurity and maybe socioeconomic perspective as well. The reason I’m telling you this story is that the key conclusion that comes from these exchange sessions is the power of communities. Women are often left out of formal processes, these boys’ clubs oftentimes, so they have found strength in actually creating informal communities parallel to the formal ones.
And programs like HerCybertracks and others actually have the ability to help create these communities. So now past participants stay in contact, they share with each other, they ask questions, they learn from common experiences, and basically this ensures that today’s informal communities of women in cyber, whether in government, in incident response, or in cybercrime, tomorrow become the formal networks of women in leadership, which can ultimately result in real confidence-building measures and interregional cooperation. So, all in all, being able to respond to these threats, I think, is not simply a question of having well-trained people, but also about creating environments where diverse perspectives are encouraged, respected, heard, and leveraged to ideally create these communities of support. Thank you.


Shimona Mohan: Fantastic. Thank you so much. It’s very eye-opening to get an insight into how all of this plays out from start to finish. And with this, I would also like to understand how this happens on more of a national level. We’re glad to have Catalina with us on the screen, and I hope, Catalina, you can hear me, just checking because of the tech PTSD. I wanted to get a sense of how Chile, which is actually one of the few countries to have a specific focus on gender in its national cybersecurity policy, approaches this. How does this kind of focus in the cybersecurity policy help both substantively, say, in terms of gendered harms, and also participatorily, in terms of the ratio of women cyber professionals in the country? So over to you, Catalina.


Catalina Vera Toro: Thank you, Shimona. Hopefully, you can hear me as well without an echo. Yes, all good. So, first of all, thank you. I want to say hi to my distinguished fellow panelists and also to the audience, and express my appreciation for the opportunity to share a national experience on this important issue. As you mentioned, Chile has integrated a gender perspective into its national cybersecurity policy. So I will go very briefly over how such a focus helps substantively, how it helps the meaningful participation of women and gender-diverse people, and how there are broader implications beyond that for international cybersecurity discussions and governance. Back in 2023, I want to mention, we became the first South American country to have a feminist foreign policy. That means we joined Canada, the Netherlands, Mexico, and others that were leading the way on this. And thus, at our core, Chile has advanced its commitments on human rights and equality. Through this vision of feminist foreign policy, we want to achieve, through policy, a more inclusive country and a more egalitarian society, you could say. Particularly in regard to gender perspectives in cyber policy, there are very few countries, as you mentioned, that have this in their national strategies. For instance, in Latin America and the Caribbean, 14 countries include some reference to human rights, but only four incorporate a gender perspective: Argentina, Colombia, Ecuador, and us. Chile updated its cybersecurity policy in 2024; the 2017 policy already had gender as a reference, but now we have incorporated a gender-responsive approach, mainly by establishing the obligation of the state to protect and promote the rights of people on the internet through the strengthening of existing institutions in cybersecurity matters, through capacity building and updating legal frameworks, and through gender mainstreaming.
All these initiatives must preferentially consider women, both in terms of their protection and inclusion and through positive action aimed at correcting the inequalities that continue to exist in our society, as my fellow panelists covered very well, while also mainstreaming the protection of children and youth, the elderly, and the environment. Such a focus achieves two major objectives, you could say: first, addressing gender-specific harms in cyberspace, and second, promoting the inclusion of women in the cybersecurity workforce. How do we address harms? Basically, we ensure that these risks are not only acknowledged but systematically addressed, by incorporating specific measures for gender-sensitive threat analysis and response mechanisms. That means we have updated our policies and are also working to broaden policies and laws that mitigate online gender harms by developing tailored strategies for protection and victim support, empowering affected groups to report incidents without fear or stigma, and building safer, more inclusive online environments where everybody can engage freely. These measures are not only ethical imperatives, you could say; they also contribute directly to more robust, inclusive cybersecurity systems that protect all citizens. When it comes to participation and encouraging women to go into cybersecurity, as was previously stated, women represent around 20 to 24% of the global cybersecurity workforce. So policies that prioritize gender inclusivity have a transformative potential, and we have created initiatives such as scholarships, mentorship programs, and leadership opportunities for women in cybersecurity. Through the national cybersecurity ecosystem that we’re building, next year we will have a cybersecurity agency with nationwide coverage that will collaborate with the MFA on cybersecurity as a whole. We’re also paving the way for more equitable talent pipelines.
This inclusivity not only improves the ratio of women cyber professionals, but also enhances cybersecurity outcomes. And I want to mention something more. Nowadays, we are all talking about technology and how we have to get our people, our workforce, ready for it. When it comes to cybersecurity, it’s very interesting because it’s a very broad ecosystem of professionals, you could say: for instance, cyber diplomats and cyber politicians, not only cyber tech people. So then we need the data, actually: how many people are getting into cybersecurity nowadays, not only in the technical field but also in those broader aspects? Cybersecurity is also expanding its scope, so we need more professionals. And on this, Chile is trying not only to get young women into cybersecurity, to go into STEM at early stages and so on, but also to help women later in their careers to reconvert into cybersecurity through customized programs, you could say, targeted to a specific need: women that have already been 10 years in the workforce and want to move into the cyber field through short programs with certification, helping them leverage their experience to get into the cyber field, because most of the time the biggest block, you could say, is that they don’t have cyber-specific experience. That is where the government can come in and help incentivize companies to bring in this women workforce being reconverted into the field. And for that, we have implemented, not only for cybersecurity but for women in STEM and beyond, the Iguala certification, you could say, that promotes, through government incentives, parity in the workforce in the public sector, but also in the private sector, so that women can also venture into these new cybersecurity opportunities at hand and make the most out of them. So I’ll leave it at that. Thank you.


Shimona Mohan: Thank you so much for… Could we unmute, please? Yes. Thank you so much for giving us that snapshot, Catalina. I also know that Chile has often talked, as you did in your intervention just now, about promoting a gendered approach to cybersecurity and gender-sensitive cyber capacity building at the policy level. I’m really interested in figuring out how that prioritization also contributes effectively to your international discussions around cybersecurity. And I know Chile is also very active in the Organization of American States, so perhaps there’s a regional mainstreaming aspect to this as well. So if you could speak a little to that, that would be fantastic.


Catalina Vera Toro: Yeah, sure. Thank you for that question. Well, we are firm believers in multilateralism, you could say, and in regard to cyberspace, I think there is a great opportunity at hand to do it more broadly and inclusively. Chile has been a big promoter, along with Canada and like-minded countries, of including references to human rights and also to the need for a gender perspective when it comes to the internet, but also to cybersecurity, because harms are felt differently by vulnerable groups, and those need to be represented in those discussions. And therefore, I must say, I’m also a Women in Cyber fellow, so I’m living testament to how that fellowship can help women come into leadership roles. Now I’m the head of delegation to the Open-Ended Working Group for Chile, but I also have that sort of network where you can work with women, bring those concepts into the room, and negotiate language. So, for instance, in the Open-Ended Working Group we have consensus language in the annual progress reports that incorporates a gender perspective. I think this is a good way forward for how we build a universal framework, you could say, whether voluntary or eventually legally binding, that incorporates a human-centric approach but also a gender-responsive approach. It’s the best way forward, and it’s an opportunity that we have nowadays that we need to take very practically. When it comes to the Organization of American States, yes, we do have a great program through CICTE, our security pillar at the OAS, that is basically focused on recommendations for strengthening gender and cybersecurity through the regional organization, you could say. They do this first by institutionalizing gender in national cyber strategies, so member states are encouraged to follow OAS guidance to formally integrate gender perspectives into national cybersecurity policies and action plans.
They also, through the organization, promote gender-sensitive capacity building, with programs that expand training specifically targeted at women, particularly in underserved and marginalized communities, to close that gender gap in cybersecurity expertise. There is also enhanced regional collaboration, whether through the CSIRTs of the Americas or other programs where we foster partnerships among OAS member states to share resources, best practices, and experiences in gender-sensitive cybersecurity initiatives. For instance, we have 11 confidence-building measures, and one of those is specifically on gender in the region. We also have programs that combat online gender-based violence. We developed a regional framework to address online harassment, exploitation, and abuse, ensuring coordinated responses to these cross-border issues, and there are also programs targeted at increasing the representation of women by creating affirmative policies to ensure women occupy leadership and technical roles in cybersecurity at the national and regional level. So there is a lot of work that we have done regionally as well, and we are collaboratively trying to promote the incorporation of human-centric and gender perspectives into negotiations. This is not only something that Chile is doing; many countries in the region are also promoting the inclusion of these references, because having consensus language can also be a way to build through other negotiations on other specific topics, like, for instance, the UN Cybercrime Convention, to incorporate human rights language, the Convention on the Rights of the Child, and gender perspective and gender-based violence concepts as we build a safer cyberspace for all. Thank you.


Shimona Mohan: Thank you so much, Catalina. That was a very good snapshot of how we’re seeing this play out, not just in Chile but in Latin America, and then, zooming out, internationally as well. And speaking of gender mainstreaming, we’re also lucky to have Luanda on the panel with us, who does this as her entire job, in a sense. So I was wondering, Luanda, if you could also kindly help us draw a clearer picture of how this can be achieved in the area of cyber capacity building. What does gender mainstreaming look like in practice? What kind of elements, for example, are particularly important for policy audiences to consider? And here I’m thinking more along the lines of, perhaps, DEIA principles or intersectionality, which might also be of interest for policy professionals to consider. But over to you, Luanda.


Luanda Domi: Thank you, Shimona, and thanks for the invite. I actually think that my fellow colleagues did the job for me by giving some really good examples of their current work on how gender can be mainstreamed in national policies, like Chile, or through mentorship programs, like ITU’s, and then Canada’s example of the Women in Cyber Fellowship and reaching gender parity in the open-ended working group, a fellowship which we at the GFCE are also very, very thrilled to facilitate. If I can put it in simple steps: to understand what gender mainstreaming is about in cybersecurity, we need to understand that cybersecurity is not just a technical issue, it is also a social issue, which means that gender significantly influences individuals’ or users’ experience and perception of cybersecurity. We heard from Dr. Hoda specific examples of gender-specific threats towards, for example, women or marginalized groups, threats that might significantly change the experience of participating in public discourse, or even taking jobs, or lead people to remove themselves completely from online forums. So when we talk about gender mainstreaming, it’s important to say that it has to happen in two streams: the policies and the technologies that govern our digital world. And I think we now have so many examples of this with AI; we have to make sure that these technologies do not take on the biases that exist in our real world. As for the foundation of gender mainstreaming, it’s very, very important that it is systematically integrated into every step of capacity building: design, implementation, and also evaluation. That’s the only way it can be successful. These are the three key things.
For example, we all know that right now in the cybersecurity workforce, as was mentioned today, the percentage of women in cybersecurity professions is quite low globally, and I think it also differs regionally. So we need to be intentional about the gender-specific goals that we want to reach, like in Chile’s case, for example, aiming to increase women’s employment in cybersecurity roles to 35% by 2030. This gives us a clear policy, clear steps for how to implement it, and also a way to evaluate later on whether we actually succeeded in reaching that goal or not. Then, when it comes to capacity building, as was mentioned here today one way or another, we have to see what the systematic barriers are, beyond women’s leadership, for those currently in cybersecurity who want to upskill, but also what the systematic barriers are to women’s participation in the education or training available for capacity building in cybersecurity. And this is where we talk about why it’s important to intentionally develop capacity programs that address gender-specific needs: how to teach professionals to counter online harassment and gendered disinformation, what common tactics are used to silence women and marginalized voices online, and how to address social engineering attacks, which we heard about today, that clearly disproportionately target women. And then I believe Yasmine mentioned women in politics or leadership positions, who are more visible to the public. So I think this is something that is quite practical and quite easy to do. However, I did want to mention why we are where we are today. I recently came across a UN Women report that was just launched, and it says that only 0.05% of development budgets are targeted at gender initiatives.
This is simply too low for us to be able to carry out successful policies, policy implementation, and programming to address gender and gender parity at a global level.


Shimona Mohan: Yes, thank you. Thank you so much, Luanda. I think that was a very good spotlight on the UN Women report, and the statistic of 0.05% is very worrying; it’s truly, truly terrible. I have another question for you, but before I turn to you, maybe I can come back to Professor Hoda, who has joined us in the room. And this actually takes a thread from what Luanda said about how it’s important to have both tech and policy perspectives around this so that we can push for meaningful change. Professor Hoda, you earlier spoke about the kinds of threats we face that are gendered in nature. I’d now like to flip the coin and ask what measures we can take from a technical perspective, both proactive and reactive, to lessen these harms. And who should take these measures? Over to you.


Hoda Al Khzaimi: Thank you so much. I think it’s a multifaceted approach. We definitely need to tackle the technological barrier, because we’re trying to address the fact that women are targeted on digital platforms. And for that kind of measure, we need to change the way we develop the technology. At the moment, technology is predominantly developed for functional technicalities, rather than taking into consideration the different groups being profiled on the platform for different reasons, and trying to diffuse or eliminate harm on the platform. It’s not easy. We have been working with ITU on multiple global initiatives for child protection and for women’s protection. We’ve been working on multiple levels with the different platform makers, so I’m talking about Meta and other platform makers as well, to build technological solutions and functionalities within the platform, so they can address security and safety by design for different genders. And this required us sitting together and understanding whether resilience by design, and not just security by design, is being considered and taken care of in the form of different functionalities on the platform. I’m not going to go through the rigorous examples, because at the moment you can see that on different platforms, censorship and mass analytics are being considered on multiple levels. But also, if we develop the policies and regulations for inclusive access to digital platforms, are those policies being translated into practice by the big industry partners or not? So I think it’s very important to bring everybody on board and make sure that we have a foolproof solution when we are developing these solutions.
And inclusivity for women, and women’s education on these platforms, is very important, which means I don’t only have to educate them on accessing digital platforms; they also have to know about research and development and the niche aspects pertaining to the science and development of new technologies. We’re talking about AI, cryptography, security, and cybersecurity. Most of these elements are cross-sectoral and also require deep analysis of the sector, which means you can’t just do a generic, surface-scratching awareness program for women, or women-in-leadership or women-in-cyber sessions; you really need to educate them on a deeper, sometimes academic, level about the power of developing new technologies. I think what we are lacking at the moment is the power of the collectives. So I would say the power of providing ecosystem champions across the board, as well as the power of funding and the power of the collectives in terms of knowledge capital, bringing everybody else to the same table. And maybe have a co-creation lab for women to design their own solutions within a digital platform, and figure out whether those solutions could be championed by different industrial partners to be implemented on the different platforms and within the industry. I mean creating disruptors within the industry that come from innovators in the space who understand those needs and are focused on solving those gaps and needs, not on commercially building a massive solution for generic use by the public. And maybe then we will start to solve all these kinds of issues that we have, not just for women but for all other gendered groups, and for children as well at the same time.
So this is one aspect, in terms of technology development and in terms of education, but also in terms of policy formulation. I think there is a huge gap between the recipients of the policy, the women in this case, and the policy developers, who are developing these policies sometimes in the Global North while the impacted populations are in the Global South. So how can we redistribute policy creation so that we have policy labs within the affected zones? If we know that, as was said, around 25% of those in cyber are women, what regions are predominantly high in the number of women in cyber and which are low? I know, for example, that for us in this region, we have quite a high percentage of women in technology, STEM and STEAM, and women in cyber as well. So what would the learning curve look like in this respect? Would we be able to bring together co-learning labs that are globally developed by those indigenous groups themselves, not just by specific entities within the global narratives? I think trying to solve this from a grassroots perspective is a powerful tool that we should exercise, not just at the technology, education, or awareness level, but also on the policy side. Thanks.


Shimona Mohan: Thank you, Professor. I think you covered a lot in terms of what kinds of solutions we can look at, and we’ve been talking about solutions throughout this entire panel. I hope you can hear me. Yes, okay. So we’ve been talking a lot about solutions on this panel, and I think that’s particularly important, because we’ve heard a lot about the problems, and we continue to hear about them in the context of emerging tech like AI as well. A couple of things that you said are really interesting, especially about combining the convening power of audiences from different fields, from different walks of life, to come together and contribute to these efforts. And I’d like to go back to Luanda, who was earlier talking about a very similar thing when it came to both tech and policy audiences coming together and making sure that these mainstreaming efforts take place in conjunction. So Luanda, I’d like to come back to you and ask whether you already have any work streams, projects, or ideas in place around this, perhaps where cyber capacity-building programs target both tech professionals and policy professionals in a combined manner, and whether this kind of hybrid model of capacity building might be a bridge between technologists and policy audiences, who, as we’ve seen, do have a bit of a gap in understanding the same issue or how to tackle it. The Professor also pointed out that there’s a disconnect between the people at the policy level and at the recipient level. Perhaps, Luanda, you can speak a little more to this when it comes to gender mainstreaming and cyber capacity building. Thank you.


Luanda Domi: I think you called my name. We’re having a bit of a challenge hearing you. But please intervene if it’s not my turn. Excuse me, in the room, can they hear us? If I heard correctly, and given what Dr. Hoda mentioned, I wanted to say one thing about AI, because she mentioned a lot of the risks. One thing that is quite interesting about gender, if we compare women’s education in STEM and cybersecurity versus employment, is that this is where we see huge differences globally. Now, this is not the case in the Middle East; the Middle East is actually doing really great in this area. But what I wanted to say in terms of statistics is that a big issue now is AI and gendered employment risk. Again, in that report from UN Women, they quantified what we actually knew: 3.7% of women’s jobs globally are now at risk of being replaced by AI, compared to 1.4% of men’s jobs. So this only goes to echo the importance of training women for AI-driven roles. Only with capacity-building programs can we help bridge this gap, which I assume will only grow in the years to come, by offering specialized trainings like ethical hacking, secure coding, and AI governance. I think this is quite important if we’re trying to see what some of the solutions are for a growing problem. And I’m really happy to also hear of UNIDIR’s Women Fellowship on AI; I think this is the way we need to approach it. Now, when we talk about gender and its intersections, we really must go beyond gender alone and adopt the principles of diversity, equity, inclusion, and accessibility to ensure that everyone, including women, neurodiverse individuals, and those with disabilities, such as the visually impaired, can really, really thrive in cybersecurity. And why am I mentioning this? Because we’re talking about what type of training models there should be: separate or hybrid models.
But when we look at what type of model to use, it has to be adapted to the learners we are targeting, right? Every learner has different requirements, different needs, a different pace, and we have to adapt to their unique strengths. I’m actually lobbying for both, but separate programs need to be tailored to specific needs, because they provide a focused, safe environment where individuals can learn without distractions. Take, for example, individuals from neurodiverse communities: they can learn much more, and at an excellent pace, if they have highly structured schedules, clear instructions, and, for example, appropriate sensory inputs. These are the things we need to check. And such programs have been quite, quite successful for roles like ethical hacking, especially ethical hacking. I think programs like cybersecurity neurodiversity talent pipelines are programs we need to further scale up at the regional and global level. Then you have, for example, programs that leverage adaptive tools like screen readers, tactile diagrams, or braille-friendly resources. Very good workshops of this kind have been run for visually impaired individuals on topics such as secure coding or cryptography, and they help achieve successful graduation rates because, through these tools and educational aids, participants also get to do a lot of hands-on activities. Now, when we look at hybrid programs, I would lobby for these too, but for very specific purposes. For example, if we want to do exchanges for collaboration, bringing together policymakers and technologists, I think in these cases hybrid cybersecurity training can help in understanding and adapting to regulatory implications.
So this could be a very cross-sector capacity-building workshop in cybersecurity. Another example would be pairing neurodiverse individuals with traditional learners, or visually impaired with fully sighted participants. There have been very good examples of joint exercises where visually impaired individuals had an environment in which they could use the tools easily, and sighted participants contributed other aspects that allowed better interaction with the visually impaired participants. All of this fosters a kind of teamwork without challenging the diversity and the unique strengths of each player or learner. So those are some of the pros and cons of each. But I would say that in cyber capacity building, both are very important, and they have to be carefully planned so that they are inclusive and create unique teams that are successful.


Shimona Mohan: Perfect. Thanks so much, Luanda. Can you hear me? Yes. Thanks so much, Luanda, for that. I think you answered a question that I hadn't even thought to ask, and I should have. Thank you for that very exhaustive snapshot into the diversity aspect of things, which I think we sometimes skip over or overlook. So thank you for bringing it up in this panel again. I think now we'll have a very quick lightning round of questions. I know that there are one or two questions on the chat, but I'd also like to open the floor if there are any questions in the room. Okay. Could you please come up to the mic here and take the questions? And I'd urge you to keep the questions very brief, because we have the room for a limited time. Thank you.


Jocelyn Meliza: Hello, can you hear me? Fantastic. My name is Jocelyn Meliza. I am a MAG member and also a co-facilitator for the BPF on cybersecurity. This year we were focusing on mapping out cybersecurity initiatives, but then we realized that several other organizations are also mapping, and the gap really came in terms of cross-collaboration. I really liked what Professor Hoda mentioned on building the power of the collective. Something else that we noted while mapping: we did not see any map of women in cyber capacity building programs. So do those maps exist anywhere? And beyond that, how do we harness what you mentioned on the power of the collective? And lastly, an opportunity to also welcome you to the BPF on cybersecurity. It's on Tuesday at 4.45, because it will be an extension of this, and I think this topic will be very valuable in building into that mapping exercise, as well as building into next year. Thank you.


Shimona Mohan: Perfect, thanks so much. I think we'll take all the questions together, and then perhaps we can go to the panel. Yes, please.


AUDIENCE: Okay, I hope you can hear me. Thank you for that lovely and insightful presentation. My name is Paula, and I have a question, perhaps for Yasmine. The ITU's CyberTracks program is fantastic, and I speak as a beneficiary of the program. I am currently a cybersecurity policy advisor, and a lot of the work that I do is based on what I've learned from the program. What I wanted to find out is this: of course, the program can only take a certain number of people per cohort, and there are so many people who want to be part of it. So is there a way that the program could be run in collaboration with governments, for instance, where learners can access the platform freely, so that we don't have to wait for a new cohort to start for people to join the program, but it will be more distributed? Is there a possibility of that happening? And then, as an afterthought: if I'm interested in building my capacity in cybersecurity, is there a platform where I can find a list of all the fellowships or programs that are open, so that all of them can be accessed from one place? Thank you.


Shimona Mohan: Perfect, thank you. And then the gentleman in the back.


AUDIENCE: Hello, good afternoon. I'm Kosi, a student from Benin. I work for the government, and I also chair an NGO called Women Be Free. I want to know: are your capacity-building programs available in other languages, like French, for example? Do you have tools somewhere? Is it possible to have the different kinds of training you provide in French as well as English, and so on? Now, a last question: is it possible to have a partnership with your organization directly, to provide training for people locally? Is it possible for you to come, for example, to Benin and provide in-person training for people there? What is the plan? What is the process? Thank you.


Shimona Mohan: Perfect, thank you. I will also just read out a couple of questions that we have from the chat online. One question is: what are the ethical considerations and challenges associated with emerging tech, and how can we prepare the next generations to navigate them? And a related question, which we can perhaps club together with it: how can we address the digital gender divide in Africa to ensure that women and girls benefit equally from AI advancements? So I will invite our panelists to do lightning responses, and perhaps club them together with any closing remarks, final words, or comments. To avoid tech issues, we'll start in the room and then proceed online, back to Catalina and to Luanda. So I'd give the floor to Professor Hoda first, and then we'll continue along here.


Hoda Al Khzaimi: Thank you so much for your questions. This has been amazing. Paula and Kosi, and, what's your name, I'm sorry? Jocelyn? Jocelyn as well. I think you have a very interesting collective reflection on how we can provide online tools as long-term assets of these programs. We always invest in cyber capacity building, and capacity building at large, in different regions of the world, but we also want to make sure that it's not just a repetitive problem that is exhaustively complex to solve. I think a digital platform, a LinkedIn-like platform where you can put in people's profiles, the different capacities they have developed, and access to courses and material, would be very powerful, where the contributors could be different countries from different parts of the world. So building that kind of platform effect is a must at the moment, I would say. Thank you, Paula, for bringing that up, and you as well, Jocelyn. Considering the different nuances and languages, as well as the details that come from different parts of the world, is also very important. This is exactly what I meant when I said that if we start solving the capacity development problem, for example in cybersecurity, from within the regions, the solutions and the nudges are quite simple, and developing those solutions is not very exhaustive. I think we should put in some effort to leave some traces, some kind of legacy, that would benefit the public. One of them, I think, is the co-creation lab, the funding platform, the co-development platform where we can have the courses and the people; matching challenges with needs is very important.


Yasmine Idrissi Azzouzi: If I may also take those questions together: I have a feeling that they are related to the sustainability of programs, and to localization as well, so really taking advantage of the local ecosystem. At the ITU, we piloted a similar approach last year, where we wanted a program that we had been running for a while, the Women in Cyber Mentorship Programme, to be handed over, quote unquote, and run by a local organization. So we partnered with an organization called Women in Cybersecurity Middle East, which is a very, very active network in that region, and they basically ran the program in the region under the guidance and umbrella of the ITU. This pilot has also shown us that, as we've mentioned, funding is an issue in cybersecurity capacity building, as in other capacity building fields. So having this kind of multiplier initiative, where knowledge and resources are given to local organizations, can be part of the solution there.


Shimona Mohan: Perfect. Thank you so much, Yasmine. And I'll go back to the online room, perhaps first to Catalina and then to Luanda, if in a couple of minutes you'd like to respond to any questions or offer any concluding remarks. Thank you.


Catalina Vera Toro: Thank you. Apologies, I'll just go ahead. Basically, from a governance and MFA standpoint, what I can say is that we are trying to come together collaboratively within the Open-Ended Working Group on capacity building. I think that is going to be a huge negotiation and a huge pillar, because specifically for the Global South it is one of those structural elements that need to be well addressed within the Open-Ended Working Group, and in how we transition to the permanent mechanism. So we are trying to come up with something that is inclusive and that takes into account the needs specifically of developing countries. That will probably entail some sort of repository and portals, for instance for the voluntary norms or a demand-focused offer of capacity building, and also for best experiences and how we share them throughout the world. So I think eventually we are going to have something on a global scale that will bring in all the experiences, hopefully the successful ones, and that will be available not only in English but in all the official languages, which also includes French, for instance: resources that can help us build that safer cyberspace for all. Regional instances and organizations also do great work. Within the Open-Ended Working Group, for instance, we don't want to duplicate the efforts that regional organizations are doing. We know that not every country is part of a regional organization, but if you are, there are great programs that do rotating workshops. For instance, the Organization of American States rotates throughout countries to do mentorship programs or gender capacity building programs for each of its member states. 
So there are great resources out there as of now, and we're hopeful that by the end of 2025, when we move to that permanent mechanism in the Open-Ended Working Group, we will have something similar on a global scale. So please continue to help us on how to address this. Very briefly, on Raby's question on ethical considerations: many countries are coming up with AI policy frameworks, so you have great resources on how countries are approaching this. We have a lot of work to do in terms of addressing ethical issues when it comes to AI. Of course, the main one is respect for human rights: ensuring that AI systems align with and uphold fundamental rights such as privacy, equality, and non-discrimination. Luanda went in depth on this and on the high risks involved. We also have to ensure that AI is transparent to the user, that there is fairness and equity, so there are no biases, and also environmental sustainability. There are great efforts globally on this, and of course there is still work to do on ethical AI. I just wanted to mention very briefly that back in 2023, Chile was the first host of the high-level ministerial meeting of authorities on ethical AI for Latin America and the Caribbean, where we had a joint declaration on how we should be doing this regionally. So there is a lot of work to be done to make sure it also aligns with the UNESCO framework. Great work is being done there, and of course the next step, and a responsibility of every state, is how we bring that back home and provide programs and education specifically for our younger generations, who are, of course, immersed in technology nowadays. So I think that's it.


Luanda Domi: Thank you. I'll be very, very brief, and this was a lovely conversation. I don't want to be grim, but I kind of have to; it's my job. Just to point out a couple of things for the participants who asked the online question. Last year's survey, I believe, says that the global population without access to the internet is about 2.6 billion people. This is a huge number. So I think we also have to be realistic, as a global community, about what we have to invest to address not just gender parity but also this regional challenge of people without access to the internet. And how do we do this? Accessibility, digital literacy, and the very basic one: providing affordable digital tools. We need to do this as a global community. This is not just a strategic thing; it's a moral imperative to include populations everywhere on the internet, and then also teach them about the risks, right? The second thing I would say very quickly: in Africa specifically, there is the CyberSafe Foundation, which has an amazing CyberGirls fellowship program. About 67% of the women who graduate from this program get employed, and their salaries increase by 200% to 400%. This year, due to funding, and this goes back to the report, the fellowship almost closed. This is where we have to speak out, and this is what we do at the GFCE through our Women in Cyber Capacity Building network: lobby for a regional women's network to get funding for successful programs that address gender imbalance in training and education in cyber, STEM, or AI. We really use this as a call to action to support successful programs, because they are out there. We just need to support them and scale them up. Thank you.


Shimona Mohan: Thank you so much, Luanda. And thanks to everybody on the panel: to Professor Hoda, to Aaryn, to Yasmine, and to Catalina and Luanda for joining us online, on a Sunday evening in December, for this very interesting conversation. I will give the floor now to my colleague, Pavel, for concluding remarks, after which we will close the panel. So thank you again for joining us, everybody. And over to you, Pavel.


Pavel Mraz: Hello, Shimona, and hello, everyone. Thank you for the floor. I really want to thank all partners, including the governments of Canada and Chile, the ITU, the GFCE, and the Stimson Center, for supporting this event with their contributions and recommendations. I think together today we have identified at least four key takeaways: the urgent need to increase gender diversity in the global cybersecurity workforce; the necessity of mainstreaming gender considerations both into cybersecurity policy and into existing and future capacity-building initiatives; and the need to address and research gender-based threats amplified by emerging technologies. Rest assured, we have listened very carefully today to all your insights, questions, and recommendations. In order to ensure that these recommendations are properly captured for policymakers who can take action, UNIDIR, together with partners, plans to produce a compendium of good practices for mainstreaming gender into cybersecurity efforts. In terms of next steps, we will convene a series of online workshops over the year 2025 with the aim of discussing each of those issues separately: gender-based threats, international obligations, women's participation in the cyber workforce, and mainstreaming gender into policy and capacity building. And of course, if these workshops are of interest, please share your contact details either in the chat or with Shimona in the room. We want to capture as broad and diverse a range of perspectives as possible, and we would be more than happy to invite you to continue these conversations. Very lastly, I would be remiss not to thank the IGF Secretariat for allowing us to have this conversation in a truly multistakeholder fashion; by bringing together governments, international organizations, civil society, industry, and technical experts, we can truly ensure that our approaches to gender mainstreaming will not be forgotten. 
So thank you to all of our speakers and participants who have contributed to this dialogue. And rest assured, your insights and recommendations are invaluable, and they will inform our efforts going forward.


Shimona Mohan: So thank you. Back over to the room, and have a wonderful evening in Saudi Arabia. Thank you so much.



Shimona Mohan

Speech speed

147 words per minute

Speech length

3161 words

Speech time

1282 seconds

Need to increase women’s representation beyond current 25%

Explanation

There is a need to increase the representation of women in the cybersecurity workforce beyond the current global average of 25%. This low percentage highlights the importance of addressing gender diversity in the field.


Evidence

World over, there’s only about 25% of women in the cybersecurity workforce.


Major Discussion Point

Gender diversity in cybersecurity workforce


Agreed with

Aaryn Yunwei Zhou


Yasmine Idrissi Azzouzi


Catalina Vera Toro


Agreed on

Need to increase women’s representation in cybersecurity



Aaryn Yunwei Zhou

Speech speed

125 words per minute

Speech length

469 words

Speech time

224 seconds

Canada’s Gender-Based Analysis Assessment for policies

Explanation

Canada requires all policy initiatives and programs to undergo a Gender-Based Analysis Assessment (GBA+). This assessment considers how policies affect women, men, and gender-diverse people, as well as other factors such as race, class, and ethnicity.


Evidence

All of our policy initiatives and programs have to go through what is called the Gender-Based Analysis Assessment, GBA+.


Major Discussion Point

Policy and governance approaches


Agreed with

Catalina Vera Toro


Luanda Domi


Agreed on

Importance of gender-responsive policies and frameworks


Disproportionate impact of internet shutdowns on women

Explanation

Internet shutdowns can have a disproportionate effect on women’s social and economic participation. This highlights the need to consider gender-specific impacts when assessing cyber threats and responses.


Evidence

Internet shutdowns in Iran had a much more disproportionate effect on women’s social and economic participation, particularly through the use of Instagram.


Major Discussion Point

Gender-specific cyber threats and harms


Women in Cyber Fellowship for diplomats

Explanation

Canada sponsors the Women in Cyber Fellowship program, which has trained over 50 women diplomats from around the world. This initiative aims to increase diverse voices in international cybersecurity discussions.


Evidence

Over 50 women diplomats from around the world have been trained through the program, leading to gender parity in the First Committee process at the UN.


Major Discussion Point

Capacity building and education initiatives


Canada’s feminist foreign policy and national action plan

Explanation

Canada has implemented a feminist foreign policy with a specific Women, Peace and Security Agenda National Action Plan. This policy addresses online harassment and abuse against women and people of diverse gender identities.


Evidence

Canada was one of the founding members of the Global Partnership for Action on Gender-Based Online Harassment and Abuse.


Major Discussion Point

Policy and governance approaches



Yasmine Idrissi Azzouzi

Speech speed

162 words per minute

Speech length

1300 words

Speech time

480 seconds

ITU’s HerCyberTracks program for women in public sector

Explanation

The International Telecommunication Union (ITU) launched HerCyberTracks, a specialized training program for women in the public sector. The program focuses on policy and diplomacy, incident response, and criminal justice or cybercrime.


Evidence

HerCyberTracks offers tailored curricula across three tracks: policy and diplomacy, incident response, and criminal justice or cybercrime.


Major Discussion Point

Capacity building and education initiatives


Agreed with

Shimona Mohan


Aaryn Yunwei Zhou


Catalina Vera Toro


Agreed on

Need to increase women’s representation in cybersecurity


ITU’s holistic approach combining training, mentorship and networking

Explanation

ITU’s capacity building approach combines training, mentorship, networking, and real-world exposure. This holistic method aims to promote meaningful representation of women in cybersecurity.


Evidence

The program incorporates study visits, networking opportunities, mentorship, and inter-regional exchange.


Major Discussion Point

Capacity building and education initiatives


Agreed with

Hoda Al Khzaimi


Luanda Domi


Agreed on

Necessity of targeted capacity building and education initiatives


Differed with

Luanda Domi


Differed on

Approach to capacity building



Hoda Al Khzaimi

Speech speed

130 words per minute

Speech length

1759 words

Speech time

806 seconds

96% of deepfake content targets women non-consensually

Explanation

Deepfake technology poses a significant threat to women, with the vast majority of deepfake content being non-consensual sexual content targeting women. This technology is often used to silence journalists, politicians, and activists.


Evidence

DeepTrace found in 2019 that 96% of all deepfake content online is non-consensual sexual content targeting women.


Major Discussion Point

Gender-specific cyber threats and harms


AI systems exhibit bias against women, especially women of color

Explanation

Facial recognition systems and other AI technologies often show systematic bias against women, particularly women of color. This bias can lead to wrongful surveillance and enforcement actions targeting marginalized groups.


Evidence

A 2018 study showed that the error rate for facial recognition systems is up to 34.7% for darker-skinned women compared to less than 1% for lighter-skinned men.


Major Discussion Point

Gender-specific cyber threats and harms


Need for deeper technical education beyond surface-level awareness

Explanation

There is a need for deeper, more comprehensive technical education for women in cybersecurity, beyond surface-level awareness programs. This includes education on research and development aspects of new technologies like AI and cryptography.


Major Discussion Point

Capacity building and education initiatives


Agreed with

Yasmine Idrissi Azzouzi


Luanda Domi


Agreed on

Necessity of targeted capacity building and education initiatives



Catalina Vera Toro

Speech speed

134 words per minute

Speech length

2182 words

Speech time

975 seconds

Chile’s goal to increase women in cybersecurity to 35% by 2030

Explanation

Chile has set a specific goal to increase women’s employment in cybersecurity roles to 35% by 2030. This target is part of their efforts to address gender imbalance in the field.


Major Discussion Point

Gender diversity in cybersecurity workforce


Agreed with

Shimona Mohan


Aaryn Yunwei Zhou


Yasmine Idrissi Azzouzi


Agreed on

Need to increase women’s representation in cybersecurity


Chile’s gender-responsive national cybersecurity policy

Explanation

Chile has updated its national cybersecurity policy to incorporate a gender-responsive approach. This includes establishing state obligations to protect and promote the rights of people on the internet, with a focus on women’s protection and inclusion.


Evidence

Chile updated its cybersecurity policy in 2024 to incorporate a gender-responsive approach.


Major Discussion Point

Policy and governance approaches


Agreed with

Aaryn Yunwei Zhou


Luanda Domi


Agreed on

Importance of gender-responsive policies and frameworks


OAS rotating workshops on gender capacity building

Explanation

The Organization of American States (OAS) conducts rotating workshops throughout member countries to provide mentorship programs and gender capacity building in cybersecurity. This approach helps distribute resources and knowledge across the region.


Major Discussion Point

Capacity building and education initiatives


Developing ethical AI policy frameworks

Explanation

Many countries, including Chile, are developing AI policy frameworks to address ethical considerations. These frameworks focus on respecting human rights, ensuring transparency, fairness, and environmental sustainability in AI systems.


Evidence

Chile hosted the first high-level ministerial meeting on ethical AI for Latin America and the Caribbean in 2023, resulting in a joint declaration.


Major Discussion Point

Policy and governance approaches



Luanda Domi

Speech speed

113 words per minute

Speech length

1808 words

Speech time

953 seconds

Only 0.05% of development budgets target gender initiatives

Explanation

A recent UN Women report revealed that only 0.05% of development budgets are targeted to gender initiatives. This extremely low percentage highlights the need for increased funding and support for gender-focused programs in cybersecurity and other fields.


Evidence

UN Women report finding that only 0.05% of development budgets are targeted to gender initiatives.


Major Discussion Point

Gender diversity in cybersecurity workforce


3.7% of women’s jobs at risk from AI vs 1.4% of men’s

Explanation

Artificial Intelligence poses a greater risk to women’s employment compared to men’s. This disparity highlights the need for targeted training programs to help women adapt to AI-driven roles in the workforce.


Evidence

UN Women report showing 3.7% of women’s jobs globally are at risk of being replaced by AI compared to 1.4% of men’s jobs.


Major Discussion Point

Gender-specific cyber threats and harms


Importance of tailored programs for specific needs like neurodiversity

Explanation

Capacity building programs should be tailored to address the specific needs of diverse learners, including those with neurodiversity or visual impairments. This approach ensures that cybersecurity education is inclusive and accessible to all.


Evidence

Examples of successful programs for neurodivergent individuals in ethical hacking and visually impaired individuals in secure coding or cryptography.


Major Discussion Point

Capacity building and education initiatives


Agreed with

Yasmine Idrissi Azzouzi


Hoda Al Khzaimi


Agreed on

Necessity of targeted capacity building and education initiatives


Differed with

Yasmine Idrissi Azzouzi


Differed on

Approach to capacity building


Need for gender mainstreaming in cyber policies and technologies

Explanation

Gender mainstreaming should be systematically integrated into every step of capacity building in cybersecurity. This includes the design, implementation, and evaluation of policies and technologies that govern the digital world.


Major Discussion Point

Policy and governance approaches


Agreed with

Aaryn Yunwei Zhou


Catalina Vera Toro


Agreed on

Importance of gender-responsive policies and frameworks



Pavel Mraz

Speech speed

195 words per minute

Speech length

336 words

Speech time

103 seconds

UNIDIR’s planned compendium of gender mainstreaming practices

Explanation

The United Nations Institute for Disarmament Research (UNIDIR) plans to produce a compendium of good practices for mainstreaming gender into cybersecurity efforts. This initiative aims to capture recommendations for policymakers to take action on gender issues in cybersecurity.


Evidence

UNIDIR plans to convene a series of online workshops over the year 2025 to discuss various aspects of gender in cybersecurity.


Major Discussion Point

Policy and governance approaches


Agreements

Agreement Points

Need to increase women’s representation in cybersecurity

speakers

Shimona Mohan


Aaryn Yunwei Zhou


Yasmine Idrissi Azzouzi


Catalina Vera Toro


arguments

Need to increase women’s representation beyond current 25%


Canada’s Gender-Based Analysis Assessment for policies


ITU’s HerCyberTracks program for women in public sector


Chile’s goal to increase women in cybersecurity to 35% by 2030


summary

Speakers agree on the importance of increasing women’s representation in the cybersecurity workforce through various policy and educational initiatives.


Importance of gender-responsive policies and frameworks

speakers

Aaryn Yunwei Zhou


Catalina Vera Toro


Luanda Domi


arguments

Canada’s Gender-Based Analysis Assessment for policies


Chile’s gender-responsive national cybersecurity policy


Need for gender mainstreaming in cyber policies and technologies


summary

Speakers emphasize the need for gender-responsive policies and frameworks in cybersecurity at national and international levels.


Necessity of targeted capacity building and education initiatives

speakers

Yasmine Idrissi Azzouzi


Hoda Al Khzaimi


Luanda Domi


arguments

ITU’s holistic approach combining training, mentorship and networking


Need for deeper technical education beyond surface-level awareness


Importance of tailored programs for specific needs like neurodiversity


summary

Speakers agree on the importance of comprehensive and tailored capacity building programs to address the diverse needs of women and other underrepresented groups in cybersecurity.


Similar Viewpoints

Both speakers highlight the disproportionate impact of AI on women, particularly in terms of bias and job displacement risks.

speakers

Hoda Al Khzaimi


Luanda Domi


arguments

AI systems exhibit bias against women, especially women of color


3.7% of women’s jobs at risk from AI vs 1.4% of men’s


Both speakers emphasize the importance of specialized programs to train and empower women in cybersecurity roles, particularly in the public sector and diplomacy.

speakers

Aaryn Yunwei Zhou


Yasmine Idrissi Azzouzi


arguments

Women in Cyber Fellowship for diplomats


ITU’s HerCyberTracks program for women in public sector


Unexpected Consensus

Importance of addressing neurodiversity in cybersecurity education

speakers

Luanda Domi


arguments

Importance of tailored programs for specific needs like neurodiversity


explanation

While most discussions focused on gender, Luanda Domi unexpectedly highlighted the importance of considering neurodiversity in cybersecurity education, broadening the conversation on inclusivity beyond gender.


Overall Assessment

Summary

The speakers generally agreed on the need to increase women’s representation in cybersecurity, the importance of gender-responsive policies, and the necessity of targeted capacity building initiatives. There was also consensus on the disproportionate impact of emerging technologies like AI on women and the need for more inclusive approaches in cybersecurity education and workforce development.


Consensus level

High level of consensus among speakers, with complementary perspectives on addressing gender disparities in cybersecurity. This strong agreement suggests a clear direction for policy makers and stakeholders in prioritizing gender mainstreaming and inclusive approaches in cybersecurity initiatives.


Differences

Different Viewpoints

Approach to capacity building

speakers

Yasmine Idrissi Azzouzi


Luanda Domi


arguments

ITU’s holistic approach combining training, mentorship and networking


Importance of tailored programs for specific needs like neurodiversity


summary

While both speakers emphasize the importance of capacity building, they differ in their approach. Yasmine advocates for a holistic approach combining various elements, while Luanda emphasizes the need for tailored programs addressing specific needs of diverse learners.


Unexpected Differences

Overall Assessment

summary

The main areas of disagreement were subtle and primarily focused on different approaches to implementing gender mainstreaming in cybersecurity, rather than fundamental disagreements on goals or principles.


difference_level

The level of disagreement among the speakers was relatively low. Most speakers shared similar goals and principles, with differences mainly in specific implementation strategies or focus areas. This low level of disagreement suggests a general consensus on the importance of addressing gender issues in cybersecurity, which could facilitate more effective collaboration and policy development in this area.


Partial Agreements


Both speakers agree on the need for gender-responsive policies, but they differ in their specific approaches. Canada uses a Gender-Based Analysis Assessment for all policies, while Chile has incorporated a gender-responsive approach specifically in its national cybersecurity policy.

speakers

Aaryn Yunwei Zhou


Catalina Vera Toro


arguments

Canada’s Gender-Based Analysis Assessment for policies


Chile’s gender-responsive national cybersecurity policy


Similar Viewpoints

Both speakers highlight the disproportionate impact of AI on women, particularly in terms of bias and job displacement risks.

speakers

Hoda Al Khzaimi


Luanda Domi


arguments

AI systems exhibit bias against women, especially women of color


3.7% of women’s jobs at risk from AI vs 1.4% of men’s


Both speakers emphasize the importance of specialized programs to train and empower women in cybersecurity roles, particularly in the public sector and diplomacy.

speakers

Aaryn Yunwei Zhou


Yasmine Idrissi Azzouzi


arguments

Women in Cyber Fellowship for diplomats


ITU’s HerCyberTracks program for women in public sector


Takeaways

Key Takeaways

Resolutions and Action Items

Unresolved Issues

Suggested Compromises

Thought Provoking Comments

We take gender into consideration for both assessments of the threats and our responses to them. I’m sure this audience knows there’s a gender dimension to every aspect of cybersecurity, so not taking gender into consideration actually makes our responses far less effective.

speaker

Aaryn Yunwei Zhou


reason

This comment highlights the critical importance of incorporating gender considerations comprehensively in cybersecurity, not just as an add-on but as a core element that improves effectiveness.


impact

It set the tone for the discussion by emphasizing the practical benefits of gender mainstreaming in cybersecurity, beyond just ethical considerations. This led to further exploration of specific ways gender considerations can be integrated into policies and programs.


To sum up, basically, in order to really close the gender-digital divide and really promote what we call the equal, full, and meaningful representation of women in cybersecurity, not just as a checkbox being checked, but really meaningful and skilled representation, we must adopt a holistic approach such as this when it comes to capacity building, combining training, combining mentorship, combining role modeling, community building, and real-world exposure.

speaker

Yasmine Idrissi Azzouzi


reason

This comment provides a comprehensive framework for addressing the gender gap in cybersecurity, emphasizing the need for a multi-faceted approach.


impact

It shifted the conversation from discussing the problem to exploring concrete solutions, leading to more detailed discussions about specific programs and initiatives that embody this holistic approach.


96% of all deepfake content online is non-consensual sexual content targeting women. These attacks often aim to silence journalists, politicians, and activists

speaker

Hoda Al Khzaimi


reason

This statistic starkly illustrates the gendered nature of certain cyber threats and their broader societal implications.


impact

It brought attention to the severity and specificity of gender-based cyber threats, leading to discussions about the need for targeted technological and policy solutions to address these issues.


3.7% of women’s jobs globally are now at risk of being replaced by AI, compared to 1.4% of men’s jobs. So this only goes to echo the importance of training women in AI-driven roles.

speaker

Luanda Domi


reason

This comment introduces a new dimension to the discussion by highlighting the gendered impact of AI on employment, connecting cybersecurity issues to broader economic concerns.


impact

It broadened the scope of the conversation to include the intersection of gender, cybersecurity, and emerging technologies like AI, emphasizing the need for forward-looking capacity building programs.


Overall Assessment

These key comments shaped the discussion by progressively expanding its scope and depth. The conversation evolved from establishing the importance of gender considerations in cybersecurity to exploring specific challenges, comprehensive solutions, and future implications. The comments highlighted the multifaceted nature of the issue, touching on policy, education, technology, and economic aspects. This led to a rich discussion that emphasized the need for holistic, collaborative approaches to address gender disparities in cybersecurity, while also considering the impacts of emerging technologies.


Follow-up Questions

Do maps of women in cyber capacity building programs exist?

speaker

Jocelyn Meliza


explanation

This information could help identify gaps and opportunities in existing programs.


How can we harness the power of the collective in cybersecurity initiatives?

speaker

Jocelyn Meliza


explanation

Collaboration across organizations could enhance the impact of cybersecurity efforts.


Is there a way to make the ITU’s CyberTracks program more widely accessible, possibly through collaboration with governments?

speaker

Paula


explanation

Expanding access to this program could benefit more individuals interested in cybersecurity.


Is there a centralized platform listing all available cybersecurity fellowships and programs?

speaker

Paula


explanation

Such a resource would make it easier for individuals to find and access relevant opportunities.


Are capacity-building programs available in multiple languages, such as French?

speaker

Kosi


explanation

Offering programs in various languages would increase accessibility for non-English speakers.


Is it possible to establish partnerships for providing local, in-person training?

speaker

Kosi


explanation

Local training could better address specific regional needs and challenges.


What are the ethical considerations and challenges associated with emerging tech, and how can we prepare the next generations to navigate them?

speaker

Online participant (Raby)


explanation

Understanding and addressing ethical challenges is crucial for responsible technology development and use.


How can we address the digital gender divide in Africa to ensure that women and girls benefit equally from AI advancements?

speaker

Online participant


explanation

Closing this divide is essential for inclusive technological progress in the region.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance


Session at a Glance

Summary

This discussion focused on the development of an ethical AI governance framework and tool by the Digital Cooperation Organization (DCO) and Access Partnership. The session began with an introduction to the importance of addressing AI as a societal challenge, emphasizing the need for ethical, responsible, and human-centric AI development. Ahmad Bhinder from DCO explained their approach to ethical AI governance from a human rights perspective, highlighting the organization’s diverse membership and goals.


Chris Martin from Access Partnership presented DCO’s view on ethical AI governance, outlining six key principles: accountability and oversight, transparency and explainability, fairness and non-discrimination, privacy, sustainability and environmental impact, and human-centeredness. The discussion then introduced a prototype tool designed to assess AI systems’ compliance with these ethical principles and human rights considerations.


Matthew Sharp detailed the tool’s functionality, explaining how it provides risk assessments and actionable recommendations for both AI developers and deployers. The tool aims to be comprehensive, practical, and interactive, focusing on human rights impacts across various industries.


The session included a practical exercise where participants were divided into groups to analyze AI risk scenarios using the framework. Groups identified potential ethical risks, scored their severity and likelihood, and proposed mitigation strategies. This activity demonstrated the tool’s application in real-world scenarios.


The discussion concluded with remarks from Alaa Abdulaal of DCO, emphasizing the importance of a multi-stakeholder approach in addressing ethical AI challenges and the organization’s commitment to providing actionable solutions for countries and developers. The session highlighted the ongoing efforts to create practical tools for ensuring ethical AI development and deployment on a global scale.


Keypoints

Major discussion points:


– Introduction of the Digital Cooperation Organization (DCO) and its work on ethical AI governance


– Presentation of DCO’s human rights-centered approach to AI ethics and governance


– Overview of an AI ethics evaluation tool being developed by DCO and Access Partnership


– Interactive exercise for participants to apply the tool’s framework to AI risk scenarios


Overall purpose:


The goal of this discussion was to introduce DCO’s work on ethical AI governance, present their new AI ethics evaluation tool, and gather feedback from participants on the tool’s framework through an interactive exercise.


Tone:


The tone was primarily informative and collaborative. The speakers provided detailed information about DCO’s approach and the new tool in a professional manner. The tone shifted to become more interactive and engaging during the group exercise portion, as participants were encouraged to apply the concepts and provide input. Overall, the discussion maintained a constructive and forward-looking atmosphere focused on addressing ethical challenges in AI development and deployment.


Speakers

– Chris Martin: Head of policy innovation at Access Partnership


– Ahmad Bhinder: Representative of the Digital Cooperation Organization


– Matthew Sharp: Senior manager at Access Partnership


– Thiago Moraes:


– Alaa Abdulaal: Chief of Digital Economy Foresight at the DCO


Additional speakers:


– Kevin: Colleague mentioned as handing out worksheets


Full session report

Summary of Discussion on Ethical AI Governance


Introduction


This discussion, led by representatives from the Digital Cooperation Organization (DCO) and Access Partnership, focused on the development of an ethical AI governance framework and assessment tool. The session emphasized the critical importance of addressing AI as a societal challenge, highlighting the need for ethical, responsible, and human-centric AI development.


Key Speakers and Their Roles


1. Chris Martin: Head of policy innovation at Access Partnership


2. Ahmad Bhinder: Representative of the Digital Cooperation Organization


3. Matthew Sharp: Senior manager at Access Partnership


4. Thiago Moraes: Facilitator of the interactive exercise


5. Alaa Abdulaal: Chief of Digital Economy Foresight at the DCO


Discussion Overview


1. Importance of Ethical AI Governance


Chris Martin opened the discussion by framing AI as a societal challenge rather than merely a technical one. He emphasized the monumental stakes involved in AI development and deployment, stressing the need to “get this right” by ensuring AI is ethical, responsible, and human-centric. Martin highlighted the uneven global diffusion of AI technologies, noting the concentration in Asia Pacific, North America, and Europe, while identifying growth opportunities in the Middle East and North Africa. He also mentioned concerns about the increasing energy consumption related to AI development.


2. DCO’s Approach to AI Governance


Ahmad Bhinder introduced the DCO, explaining that it is an intergovernmental organization of 16 member states with a network of over 40 private-sector observers. He elaborated on DCO’s human rights-centered approach to AI governance, noting that the organization had identified which human rights are most impacted by AI and reviewed how AI policies, regulations, and governance intersect with these rights across their diverse membership and globally.


3. Key Principles for Ethical AI


Chris Martin presented DCO’s view on ethical AI governance, outlining six key principles:


a) Accountability and oversight in AI decision-making


b) Transparency and explainability of AI systems


c) Fairness and non-discrimination in AI outcomes


d) Privacy protection and data safeguards


e) Sustainability and environmental impact considerations


f) Human-centered design focused on social benefit


Martin provided specific examples and explanations for each principle, emphasizing their importance in ethical AI development and deployment.


4. DCO’s AI Ethics Evaluation Tool


Matthew Sharp provided a detailed overview of the AI ethics evaluation tool being developed by DCO and Access Partnership. Key features of the tool include:


– Separate risk questionnaires for AI developers and deployers


– Assessment of severity and likelihood of human rights risks


– Interactive visualizations to help prioritize actions


– Practical, actionable recommendations based on risk assessment


Sharp explained the tool’s workflow, highlighting differences between developer and deployer questionnaires. He emphasized that the tool is designed to be comprehensive, practical, and interactive, focusing on human rights impacts across various industries. Sharp also noted how this tool differs from existing frameworks, particularly in its focus on human rights and inclusion of both developers and deployers in the assessment process.
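The tool’s code has not been published, so purely as an illustration of the scoring flow described above (average score per risk category, then ranking to surface priority areas), the sketch below uses hypothetical category names, a hypothetical 1-5 scale, and made-up questionnaire answers:

```python
# Illustrative sketch only: the DCO/Access Partnership tool is not public,
# so the category names, the 1-5 scale, and the answers are hypothetical.
from statistics import mean

# Questionnaire answers grouped by risk category, each scored 1 (low) to 5 (high).
answers = {
    "accountability_oversight": [4, 5, 3],
    "transparency_explainability": [2, 3, 2],
    "fairness_non_discrimination": [5, 4, 5],
    "privacy": [3, 2, 4],
    "sustainability": [1, 2, 1],
    "human_centeredness": [3, 3, 2],
}

# Average score per risk area, as the tool reportedly does before
# plotting the radar chart.
category_scores = {cat: mean(vals) for cat, vals in answers.items()}

# Rank categories from highest to lowest average risk to identify
# priority areas for action.
priorities = sorted(category_scores.items(), key=lambda kv: kv[1], reverse=True)
for cat, score in priorities:
    print(f"{cat}: {score:.2f}")
```

In the actual tool the ranked categories feed an interactive radar chart and category-specific recommendations; here the sorted list simply stands in for that prioritization step.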


5. Interactive Exercise


Thiago Moraes led an interactive exercise where participants were divided into groups to analyze AI risk scenarios using the framework. The exercise involved:


– Identifying potential ethical risks in AI systems for medical diagnosis and job application screening


– Scoring risks based on severity and likelihood


– Developing actionable recommendations to mitigate identified risks


Participants engaged with real-world scenarios, applying the framework to weigh the ethical considerations involved in AI deployment.
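The session materials do not spell out the formula behind the worksheet’s combined “impact score”; a common convention in risk matrices is severity multiplied by likelihood. A minimal sketch under that assumption, with invented example risks and a 1-5 scale:

```python
# Illustrative sketch of the worksheet's risk-scoring exercise; the example
# risks, the 1-5 scales, and the multiplication rule are assumptions,
# not the actual worksheet content.

# Each identified risk gets a severity and a likelihood score (here 1-5).
risks = [
    {"risk": "biased screening of job applicants", "severity": 5, "likelihood": 4},
    {"risk": "opaque medical diagnosis outputs", "severity": 4, "likelihood": 3},
]

# Combined impact score, then rank from most to least critical.
for r in risks:
    r["impact"] = r["severity"] * r["likelihood"]

ranked = sorted(risks, key=lambda r: r["impact"], reverse=True)
for r in ranked:
    print(f'{r["risk"]}: impact {r["impact"]}')
```

The final sort reproduces the “most to least critical” ordering the exercise asks for, from which participants then draft mitigation recommendations.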


6. DCO’s Future Plans and Closing Remarks


Alaa Abdulaal concluded the session by emphasizing DCO’s commitment to a multi-stakeholder approach in addressing ethical AI challenges. Key points included:


– DCO’s belief in collaborative digital transformation


– The organization’s aim to provide actionable solutions for ethical AI deployment


– Plans to share the final AI ethics tool publicly


– DCO’s mission to enable digital prosperity for all through cooperation


– The importance of ethical use of AI in various sectors


Conclusion


The discussion presented a comprehensive overview of the DCO’s efforts to develop an ethical AI governance framework and assessment tool. By emphasizing a human rights-centered approach and providing practical tools for risk assessment, the DCO aims to address the complex challenges posed by AI development and deployment on a global scale.


Additional Notes


– Matthew Sharp mentioned a QR code survey for participants to provide feedback on the session.


– A feedback form was shared at the end of the session for further input from participants.


Session Transcript

Chris Martin: Hiya, how are you doing? Check, check. Is that better? Cool. Again, hello. Welcome. My name is Chris Martin. I’m head of policy innovation at Access Partnership. We’re a global tech policy and regulatory consulting firm. So pleased to be here with all of you and with our partners at the Digital Cooperation Organization. Perhaps we can get started with, I think, a little bit of an acknowledgement that artificial intelligence is no longer really a technical challenge. It’s a societal one. Every decision that AI systems make, and everything they power, is going to impact and shape our lives, how we work, and how we interact. And the stakes are monumental. They demand that we get this right, and at the same time, key questions remain. Most especially, how do we ensure that AI is both a powerful tool, but also ethical, responsible, and human-centric? Today we stand at a pivotal moment. Policymakers, technologists, and civil society are coming together to navigate the complex intersection of innovation and ethics, and together we need to develop frameworks that both anticipate the risks inherent in these systems, but also seize the transformative potential of AI for global good. Now, this session isn’t just about policies. It’s about principles in action, defining who we are, what we as a global community value, and how we protect those values, especially in the face of rapid change. I invite you to take this opportunity to explore these possibilities with us, to ask some hard questions, and build pathways to ensure that AI serves humanity and not the other way around. With that, please let me introduce my colleague, Mr. Ahmad Bhinder from the Digital Cooperation Organization.


Ahmad Bhinder: Hello. Good afternoon, everybody. I see a lot of faces from all around the world, and it is really, really fortunate for us to be able to gather you all here together and showcase some of our work, actually tell you who we are as the Digital Cooperation Organization, and discuss some of the work that we are doing and seek your inputs. We really meant this to be a very interactive discussion, a roundtable session, so let’s see how we can convert this into a roundtable discussion going forward. So my name is Ahmad Bhinder, and I represent the Digital Cooperation Organization. We are an intergovernmental organization, represented by the Ministers of Digital Economy and ICT of the 16 member states who we represent. And the member states come from, as you would see, the Middle East, Europe, Africa, and South Asia, and we are very rapidly expanding. We have a whole network of private sector partners that we call observers, as you would see in other intergovernmental organizations, and we have over 40 observers who are already with us now. We are quite young; we came into being at the end of 2020, so we’re in our fourth year. So that’s a bit about our organization, what DCO is and how it works. The work we started this year was to look at ethical AI, the ethical governance of AI. And we thought that while a lot of work is being done on ethical and responsible AI governance, we wanted to look at it from a human rights perspective. So we identified which human rights are most impacted by artificial intelligence, and then we reviewed, across our membership and across the globe, how AI policies, regulation, and governance intersect with those human rights, and what needs to be done to ensure that we have a human-rights-protective, ethical AI governance approach.
There are a couple of reports that we are going to publish on that, and we are developing a policy tool, which will be the crux of our discussion today. We have developed a framework on the human rights risks associated with AI and the ethical principles that need to be taken care of, and the tool is going to serve our member states and beyond. (Yeah, can you hear me all right? Okay, sorry.) So we’ll provide AI system developers and deployers with a tool where they can assess their systems’ compliance, per se, or their closeness to human rights and ethical principles. And then the tool is going to recommend improvements to the systems. So again, I don’t want to kill it in my opening remarks. We have our colleagues from Access Partnership, with whom we are developing this tool. So I will give it back to Chris to take us through this, and I look forward to all your inputs into the discussions today. Thank you so much.


Chris Martin: Thanks, Ahmad. Well, everyone, I’ll walk through a little bit of this presentation here on what DCO’s view is on ethical AI governance. And then my colleague, Matt Sharp, will walk us through the tool itself. And as Ahmad previewed, we’ll then break out to do a little bit of scenarios and get a chance to play with the tool yourselves. I think the first question is, why is this important for DCO? Well, it’s a big deal, I think everywhere. And DCO, working with members and other stakeholders, wants to be an active participant at the forefront of this debate. So these are two of the objectives to start. And then the tool can be seen as one way to instill alignment and interoperability between regulatory frameworks. I think we all recognize there’s a real wide divergence right now in AI readiness and regulatory approach. And once you start to see that, actually proposing impactful, actionable initiatives is critical. DCO feels that’s important. And lastly, facilitating interactive dialogues like the one we’re here today to have. So, a bit deeper on what a human rights approach looks like in AI governance for DCO. Well, it starts with four things. First, looking to prioritize protection and promotion of human rights. Name of the session, that’s why we’re here. Second, to design and uphold human dignity, privacy, equity, and freedom from discrimination. Third, to create systems that are transparent, accountable, and inclusive, and ones that don’t exacerbate inequalities. And lastly, to ensure advancement and contribute to the common good while mitigating all the potential harms I think we’re starting to see evolve with AI. So the toolkit that we’re developing will take a human rights-centered approach across four different areas.
Again, looking at inclusive design and ensuring that there’s participation from diverse communities, especially marginalized ones. It will look to integrate human rights principles like dignity, equality, non-discrimination privacy at each stage of the AI lifecycle. It will seek to recommend the use of human rights impact assessments as a way to get in front of AI deployments and ensure that you mitigate those potential problems early. And then lastly, promote transparency and looking at disclosure of how AI makes its decisions. Taking it a little bit of a step back, and I think illustrating the moment we’re in, is that AI diffusion is pretty uneven across the world. This looks at the market for AI concentrated across Asia Pacific and North America and Europe to a greater degree, but still a lot of opportunity for growth in the Middle East and North Africa, where currently a lot of DCO member states reside. So this is an important moment to get involved at an early stage. On the governance side, DCO sees really seven different areas where global best practice can be leveraged to advance AI governance. The first looks at institutional mechanisms. Typically these involve how do nation states govern artificial intelligence within their jurisdictions? Do they develop an AI regulator? Do they do it sector by sector? These are questions that are live at the moment across every country. How are they going to plan for that at the government level? Is there an AI strategy or an AI policy that helps dictate the different stages? And then beyond AI specifically, where are they in policy readiness? Cybersecurity frameworks, privacy frameworks, intellectual property, the whole range of different areas that impact AI and are important to consider. And then shifting beyond just the government specific places, but how do you build an innovation ecosystem? On the government side, can you foster investment and entrepreneurship in AI? But also how do you build a culture around that? 
And how do you do that in a way that also brings in that diversity of participants and voices? So that’s really critical to getting it right. The seventh, or sixth area, I’m sorry, is future-proofing the population. And by this we mean getting a population ready for AI. There are going to be displacements in the workforce, there are going to be educational requirements, and countries have to address those as they build these into their societies. And then lastly, international cooperation is so fundamental, I think that’s why we’re all here at IGF today. And there are a lot of different processes that are underway to allow international collaboration to happen, and being a part of that is important. I think some of the findings across DCO member states are interesting in the sense that it’s a, I think, a unique pairing of different types of nation-states. And we see that it has a lot of varying levels of AI governance across it. It’s not to be unexpected when you have both regionally diverse and economically diverse countries within a single group. And that’s, I think, reflective of the case that we face globally. That feeds into the diverse definitions and approaches to AI, and it also feeds into the potential for further engagement and international cooperation, both within the DCO’s membership itself, but also in events and engagements like the one we’re doing. So there’s a view that we are building around the generic ethical considerations of AI, but part of our conversation today is to help us think about this, and are we getting it right? And there are, right now, very limited recommendations and practical implications to address human rights in DCO member states. And so this tool and this exercise are part of creating that for DCO and potentially beyond. I’m going to walk through these ethical principles very quickly, and then I’m going to pass it to my colleague, Matt, to pick up the tool itself.
But the ethical principles that govern this tool are six-fold. The first deals with accountability and oversight: we want to ensure there’s clear responsibility for AI decision-making, addressing gaps in things like verification, audit trails, and incident response. Second, we’ll want to look at transparency and explainability, as already discussed. Promoting clarity in how these decisions are made is important, and you don’t want the complexity to undermine a user’s understanding. We’ve got fairness and non-discrimination as our third principle: protecting against bias, unfair treatment, and unfair outcomes, and mitigating demographic disparities in how these systems perform. Fourth will be privacy. We all care about our privacy, and we’re all concerned about this, as our uses of different technologies now feed the AI ecosystem. We want to make sure there are those personal safeguards in place and a respect for privacy rights. Fifth, sustainability and environmental impact. I was on the panel right before this one in this room, and they talked about how AI is going to require the equivalent of another Japan in terms of energy use and consumption, and that’s going to put a strain on resources, so we’ve got to address that, and the development of AI has to comport with environmental goals. And then lastly, it’s got to be human-centered. It’s got to be looking at social benefit and ensuring that it’s meeting societal needs while respecting individuality, and aligning these capabilities with those needs. So with that, I’m going to pass it to Matt. He can walk you through the tool itself in a little further detail, and then we’ll pick up the exercise.


Matthew Sharp: Hi everyone. I’m Matt Sharp, a senior manager at Access Partnership. Yeah, so the six principles are based on extensive research of frameworks around the world, which we tried to distill into these six areas to focus on. And this is a brief description of the tool that we’ve developed, which is still in its prototype phase. But the idea is that this will be available online and publicly accessible for everyone. The tool provides a detailed risk questionnaire, which is different for developers and deployers. And there are questions to ascertain both the severity and likelihood of risks related to human rights. Based on the way that the questions are answered, there will be an interactive radar graph, which basically helps the user prioritize their actions. An average score will be calculated for each risk area, and this will lead to actionable recommendations being given based on the specific way that the questions were answered. OK, if you go to the next slide. Yeah, and so the tool is designed to be comprehensive, practical, and interactive. It takes a human-rights-first approach, mapping AI systems to universal human rights, and it’s designed to be very practical, so it accommodates various AI systems across diverse industries, and organizations will get comprehensive risk insights and practical guidance on how to mitigate risks related to human rights. As for how our tool compares, there are a few other frameworks and tools out there. A lot of these are developed by national governments and tend to focus on their own national contexts, for example the UAE and New Zealand frameworks, and a lot of them focus on verification of actions rather than risk assessments. And a few of the existing tools focus only on AI developers and not AI deployers as well. And generally ours is the one that’s most focused on human rights. So we think this tool offers a unique contribution to advancing ethical AI.
So I mean, I already talked about this, but basically the way that the tool works, the workflow: users will register on the website, they’ll provide some information about their AI systems and their industries, and they’ll complete the questionnaire covering six risk categories. The questions will be different for developers and deployers. And then based on how the risks are assessed, they will see this risk radar chart, identifying priority areas for action, and they’ll receive advice on targeted mitigation strategies. OK, so this is our framework that underlies the tool. Basically, the diagram shows the principles at the top, and underneath those are more specific risks related to those principles. For example, in the case of privacy, there’s a focus on data protection and unauthorized collection and processing of sensitive information. Related to accountability and oversight, the risks there are insufficient involvement of humans and inadequate incident handling, for example. Then there are detailed recommendations below this, though there’s no one-to-one mapping between the principles and the recommendation areas. When a risk category is high risk, it’ll quite often be the case that there’ll be specific recommendations related to each of these risk areas. But of course, these cover data management, validation, and testing of AI systems. The integration of stakeholder feedback is, of course, very important as well. Then just to say that there are two distinct stakeholder groups in the AI lifecycle, the developers and deployers. Of course, they will receive slightly different questionnaires. Developers are focusing on the design and the construction of AI systems. They need to predict AI risks in advance. They need to think about technical architecture. Deployers are focused on the implementation of these AI systems, and for them the focus is on operational vulnerabilities and actual impacts on users and stakeholders.
Yeah, I mean, this slide is perhaps a bit detailed, but just to say that for each of the recommendation categories, because of their different positions in the AI lifecycle, there’ll be slightly different guidance given to developers and deployers, but the six recommendation categories are consistently used for both. So yeah, if you wouldn’t mind just using the QR code to answer a couple of quick questions. So, once you’ve answered those two questions, we have a breakout activity which is designed to basically help you understand the logic of the AI ethics tool that we’ve developed. So, Kevin, I think, will be handing out worksheets that you could fill in. There’ll be different AI risk scenarios, and the idea here is to review the framework that we presented for the AI ethics evaluator tool, and then identify two ethical risks related to the scenario that you’ve been given, and then do a risk scoring exercise where you score both the severity and likelihood of the risks you’ve identified. So, you can pick two of the principles that are relevant for your particular scenario. You score the severity and likelihood; their definitions are on the worksheets. You calculate a combined score, an impact score, for each risk, and then you’re able to rank them from most to least critical, and then you develop actionable recommendations, trying to come up with two recommendations for the two risks for the developer. And this whole exercise should take 15 minutes.


Ahmad Bhinder: Well, sorry to put you through this. We intended to make it an interactive discussion, and we really wanted, selfishly, to get your inputs on brainstorming some of these scenarios. So, I do apologize in advance to the organizers for mixing up the chairs, but I think we should convert how we are sitting into, I think, three breakout groups and have the discussion. Please move your chairs around, and let’s go through this exercise so we can have a more interactive discussion. We are well within the time for this session. We have half an hour to go. So for 15 minutes, let’s go through this exercise and then we would love to hear your thoughts on this. Thank you.


Chris Martin: And guys, I know this seems daunting. It is not. I promise, I did it myself last week. It’s actually kind of fun. And it gives you a real sense of how to actually start putting yourself in the mindset of assessing AI risk. So we were thinking maybe this side of the room could be one group, and then maybe split this side of the room in two. Those of you in the back, one group, and then those of you up front, here another. We’ve got these sheets that my colleague Kevin is gonna start passing out. I think we’ll hand out one set on this side and then one set there and one set here. And I’m happy to go around and check in with you guys as we take this forward and see how we can actually pull this together. Yeah. Thank you. … A high likelihood, high risk. The discriminatory impact on vulnerable groups, same thing. And I think, kind of working backwards then, the recommendations we had were: you’re going to need valid testing of these thresholds to understand what is going to be correct for your platform. So validation and testing is going to be one remediation measure, and then continuous evaluation to improve that. For the harmful or the inadequate human verification, we saw that you have to have a human in the loop. And then for the last one, we really… Actually, I think that’s all we have. Thank you very much. Sorry for this. We have 30 minutes and I’ll give my mic to Mr. Thiago. Please, yeah.


Thiago Moraes: So yeah, well, our case is the use of AI for… okay, diagnosis systems for a critical rare disease, right?


Chris Martin: So the first risk was related to explainability, since we’re talking about a rare disease and inaccurate answers can cause issues here. Also discriminatory issues, more specifically we talked about gender-based discrimination, if the population statistics are not well used. And privacy risks, like data leaks of very highly sensitive data. So for scoring: for the first one, explainability, we did a six in the end; for discrimination, a four; and for privacy, which we think is the most sensitive here, we gave a nine, because from a leak many other issues may happen. And following the DCO recommendations, we suggest for explainability to enhance comprehensive documentation, so documentation and reporting; for the discriminatory impact, we suggest validating and testing; and for privacy, we suggest data management.


Ahmad Bhinder: Thank you so much. And again, let’s have 30 seconds here. Are you going to?


Chris Martin: Okay, so our scenario is we have a multinational corporation that is deploying an AI system for screening job applications, and to do that they are using historical data to rank the candidates based on predicted performance. So for us, we thought it’s a risk on discrimination, because we’re looking at it from the perspective that historically, the people who worked in the engineering field were men, and mostly white males. And now you’re using that historical data to make an assessment on people who may be applying who look like me. So already we said fairness and non-discrimination, that’s a risk, especially discriminatory impact on vulnerable groups. And performance. The scoring was quite high: likelihood three, severity three, everything quite high. And then here’s one. Thank you. Thank you so much.


Ahmad Bhinder: Before we are kicked out, I would pass the mic to Alaa Abdulaal. She is our Chief of Digital Economy Foresight at the DCO for some closing remarks. And sorry to rush through the whole thing.


Alaa Abdulaal: So hello, everyone. I was honored to join the session, and I have seen a lot of amazing conversation. At DCO, as our name says, we are the Digital Cooperation Organization. We believe in a multi-stakeholder approach, and we believe that this is the only approach that will help in the acceleration of digital transformation. And the topic of the ethical use of AI is an important one, because AI is now one of the main emerging technologies offering a lot of advancement and efficiency in the digital transformation of governments and different sectors. This is why it was very important for us as a digital cooperation organization to provide actionable solutions to help countries and even developers have the right tool to make sure that whatever systems are being deployed have had the right risk assessment from a human rights perspective, and to have that tool available for everyone. And this is why we wanted to have this session: to get the feedback, to really understand if what we are developing is going in the right direction. And thank you so much for being here and allocating the time and effort to join this discussion and provide your valuable inputs. We are looking forward to sharing with you the final deliverable and the ethical tool, hopefully soon. And hopefully together we are building a future to enable digital prosperity for all. Thank you very much for your time and for being here.


Chris Martin: Thanks, everybody. We also just put this up. If you want to provide feedback, we certainly welcome it on this session. Take a picture. It shouldn’t take long. And thanks, all. We really appreciate your participation.


C

Chris Martin

Speech speed

123 words per minute

Speech length

2136 words

Speech time

1034 seconds

AI is now a societal challenge, not just a technical one

Explanation

Chris Martin emphasizes that AI has evolved beyond being merely a technical issue and now impacts society as a whole. He stresses the importance of addressing the societal implications of AI systems.


Evidence

Every decision that AI systems make, what they power, are going to impact and shape our lives, how we work, and how we interact.


Major Discussion Point

Major Discussion Point 1: The importance of ethical AI governance


Agreed with

Ahmad Bhinder


Matthew Sharp


Agreed on

Importance of ethical AI governance


Need to develop frameworks that anticipate risks and seize AI’s potential for good

Explanation

Martin argues for the development of comprehensive frameworks to address potential risks associated with AI while also harnessing its positive potential. He emphasizes the need for a balanced approach in AI governance.


Evidence

Together we need to develop frameworks that both anticipate the risks inherent in these systems, but also seize the transformative potential of AI for global good.


Major Discussion Point

Major Discussion Point 1: The importance of ethical AI governance


Agreed with

Ahmad Bhinder


Matthew Sharp


Agreed on

Importance of ethical AI governance


AI diffusion is uneven globally, creating an opportunity to get involved early

Explanation

Martin points out that AI adoption is not uniform across the world, with some regions lagging behind. He suggests this presents an opportunity for early involvement in shaping AI governance in these areas.


Evidence

This looks at the market for AI concentrated across Asia Pacific and North America and Europe to a greater degree, but still a lot of opportunity for growth in the Middle East and North Africa, where currently a lot of DCO member states reside.


Major Discussion Point

Major Discussion Point 1: The importance of ethical AI governance


Accountability and oversight in AI decision-making

Explanation

Martin emphasizes the importance of clear responsibility and oversight in AI decision-making processes. He highlights the need for mechanisms to ensure accountability in AI systems.


Evidence

We want to ensure there’s clear responsibility for AI decision-making, addressing those gaps in things like verification, audit trails, incident response.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Transparency and explainability of AI systems

Explanation

Martin stresses the need for AI systems to be transparent and explainable. He argues that the complexity of AI should not undermine users’ understanding of how decisions are made.


Evidence

Things that promote clarity in how you make these decisions is important, and you don’t want the complexity to undermine a user’s understanding.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Fairness and non-discrimination in AI outcomes

Explanation

Martin highlights the importance of ensuring fairness and preventing discrimination in AI outcomes. He emphasizes the need to protect against bias and unfair treatment in AI systems.


Evidence

Protecting against bias, unfair treatment and outcomes, and mitigating demographic disparities in how these systems perform.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Privacy protection and data safeguards

Explanation

Martin emphasizes the importance of privacy protection in AI systems. He argues for the implementation of personal safeguards and respect for privacy rights in the AI ecosystem.


Evidence

We want to make sure there are those personal safeguards in place, and a respect for privacy rights.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Sustainability and environmental impact considerations

Explanation

Martin highlights the need to consider the environmental impact of AI systems. He points out the significant energy consumption associated with AI and the need to align AI development with environmental goals.


Evidence

They talked about how AI is going to require the equivalent of another Japan in terms of energy use and consumption, and that’s going to put a strain on resources, so we’ve got to address that.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


Human-centered design focused on social benefit

Explanation

Martin emphasizes the importance of human-centered AI design that prioritizes social benefits. He argues that AI should meet societal needs while respecting individual rights and aligning with human values.


Evidence

It’s got to be looking at social benefit and ensuring that it’s meeting societal needs while respecting individuality, and aligning these capabilities with those needs.


Major Discussion Point

Major Discussion Point 2: Key principles for ethical AI


A

Ahmad Bhinder

Speech speed

136 words per minute

Speech length

695 words

Speech time

305 seconds

DCO is taking a human rights-centered approach to AI governance

Explanation

Ahmad Bhinder explains that the Digital Cooperation Organization (DCO) is focusing on ethical AI governance from a human rights perspective. They are developing tools and frameworks to ensure AI systems respect and protect human rights.


Evidence

We wanted to look at it from a human rights perspective. So we identified which human rights are more, most impacted by the artificial intelligence, and then we reviewed across our membership and across the globe, how does AI policies and regulation and governance intersects with those human rights.


Major Discussion Point

Major Discussion Point 1: The importance of ethical AI governance


Agreed with

Chris Martin


Matthew Sharp


Agreed on

Importance of ethical AI governance


M

Matthew Sharp

Speech speed

109 words per minute

Speech length

890 words

Speech time

485 seconds

Tool provides risk questionnaires for both AI developers and deployers

Explanation

Matthew Sharp describes the DCO’s AI ethics evaluation tool, which includes separate questionnaires for AI developers and deployers. This approach recognizes the different roles and responsibilities in the AI lifecycle.


Evidence

The tool provides a detailed risk questionnaire, which is different for both developers and deployers.


Major Discussion Point

Major Discussion Point 3: DCO’s AI ethics evaluation tool


Agreed with

Ahmad Bhinder


Thiago Moraes


Agreed on

Need for practical tools to assess AI risks


Assesses severity and likelihood of human rights risks

Explanation

Sharp explains that the tool evaluates both the severity and likelihood of human rights risks associated with AI systems. This comprehensive assessment helps users understand the potential impact of their AI applications.


Evidence

And there are questions to ascertain both the severity and likelihood of risks related to human rights.


Major Discussion Point

Major Discussion Point 3: DCO’s AI ethics evaluation tool


Agreed with

Ahmad Bhinder


Thiago Moraes


Agreed on

Need for practical tools to assess AI risks


Generates interactive visualizations to help prioritize actions

Explanation

The tool creates interactive visualizations, such as radar graphs, to help users prioritize their actions. This feature aids in identifying the most critical areas for improvement in AI systems.


Evidence

There will be an interactive radar graph, which basically helps the user prioritize their actions.


Major Discussion Point

Major Discussion Point 3: DCO’s AI ethics evaluation tool


Agreed with

Ahmad Bhinder


Thiago Moraes


Agreed on

Need for practical tools to assess AI risks


Offers practical, actionable recommendations based on risk assessment

Explanation

Sharp highlights that the tool provides specific, actionable recommendations based on the risk assessment results. These recommendations are tailored to the user’s responses and help guide improvements in AI systems.


Evidence

And this will lead to actionable recommendations being given based on the specific way that the questions were answered.


Major Discussion Point

Major Discussion Point 3: DCO’s AI ethics evaluation tool


Agreed with

Ahmad Bhinder


Thiago Moraes


Agreed on

Need for practical tools to assess AI risks


T

Thiago Moraes

Speech speed

109 words per minute

Speech length

18 words

Speech time

9 seconds

Participants engaged in scenario-based risk assessment exercise

Explanation

Thiago Moraes describes a practical exercise where participants applied the AI ethics tool to specific scenarios. This hands-on approach allowed attendees to understand the tool’s functionality and the process of ethical risk assessment in AI.


Evidence

Our case is the use of AI for… okay, diagnosis systems for a critical rare disease, right?


Major Discussion Point

Major Discussion Point 4: Practical application of the AI ethics tool


Identified ethical risks in AI systems for medical diagnosis and job application screening

Explanation

Moraes reports that participants identified various ethical risks in AI systems used for medical diagnosis and job application screening. This exercise highlighted the diverse range of potential ethical issues in different AI applications.


Evidence

The first risk was related to explainability, since we’re talking about a rare disease and inaccurate answers can cause issues here. Also discriminatory issues, more specifically gender-based discrimination if the population statistics are not well used, and privacy risks like data leaks of very highly sensitive data.


Major Discussion Point

Major Discussion Point 4: Practical application of the AI ethics tool


Scored risks based on severity and likelihood

Explanation

Moraes explains that participants scored the identified risks based on their severity and likelihood. This quantitative approach helps prioritize which ethical issues need the most urgent attention.


Evidence

So for scoring: for the first one, explainability, we did a six in the end; for discrimination, a four; and for privacy, which we think is the most sensitive here, we gave a nine, because from a leak many other issues may happen.


Major Discussion Point

Major Discussion Point 4: Practical application of the AI ethics tool


Developed actionable recommendations to mitigate identified risks

Explanation

Moraes reports that participants developed actionable recommendations to address the identified risks. This step demonstrates how the tool can guide users towards practical solutions for ethical AI implementation.


Evidence

Following the DCO recommendations, we suggest for explainability to enhance comprehensive documentation, so documentation and reporting; for the discriminatory impact, we suggest validating and testing; and for privacy, we suggest data management.


Major Discussion Point

Major Discussion Point 4: Practical application of the AI ethics tool


A

Alaa Abdulaal

Speech speed

139 words per minute

Speech length

250 words

Speech time

107 seconds

DCO believes in a multi-stakeholder approach to digital transformation

Explanation

Alaa Abdulaal emphasizes DCO’s commitment to a multi-stakeholder approach in addressing digital transformation challenges. This approach involves collaboration between various sectors and stakeholders to ensure comprehensive solutions.


Evidence

At DCO, as our name says, we are the Digital Cooperation Organization. We believe in a multi-stakeholder approach.


Major Discussion Point

Major Discussion Point 5: DCO’s approach and future plans


Aims to provide actionable solutions for ethical AI deployment

Explanation

Abdulaal highlights DCO’s goal of developing practical, actionable solutions to support ethical AI deployment. This includes tools and frameworks that can be used by countries and developers to assess and mitigate risks in AI systems.


Evidence

This is why it was very important for us as a digital cooperation organization to provide actionable solutions to help countries and even developers have the right tool to make sure that whatever systems are being deployed have had the right risk assessment from a human rights perspective.


Major Discussion Point

Major Discussion Point 5: DCO’s approach and future plans


Plans to share the final AI ethics tool publicly

Explanation

Abdulaal announces DCO’s intention to make their AI ethics evaluation tool publicly available. This commitment to open access aims to promote widespread adoption of ethical AI practices.


Evidence

And we are looking forward to sharing with you the final deliverable and the ethical tool, hopefully soon.


Major Discussion Point

Major Discussion Point 5: DCO’s approach and future plans


Seeks to enable digital prosperity for all through cooperation

Explanation

Abdulaal emphasizes DCO’s overarching goal of promoting digital prosperity for all through international cooperation. This vision underscores the organization’s commitment to inclusive and ethical digital development.


Evidence

And hopefully together we are building a future to enable digital prosperity for all.


Major Discussion Point

Major Discussion Point 5: DCO’s approach and future plans


Agreements

Agreement Points

Importance of ethical AI governance

speakers

Chris Martin


Ahmad Bhinder


Matthew Sharp


arguments

AI is now a societal challenge, not just a technical one


Need to develop frameworks that anticipate risks and seize AI’s potential for good


DCO is taking a human rights-centered approach to AI governance


summary

All speakers emphasized the critical need for ethical AI governance, focusing on societal impacts and human rights considerations.


Need for practical tools to assess AI risks

speakers

Ahmad Bhinder


Matthew Sharp


Thiago Moraes


arguments

Tool provides risk questionnaires for both AI developers and deployers


Assesses severity and likelihood of human rights risks


Generates interactive visualizations to help prioritize actions


Offers practical, actionable recommendations based on risk assessment


summary

The speakers agreed on the importance of developing and using practical tools to assess and mitigate AI-related risks, particularly in relation to human rights.


Similar Viewpoints

Both speakers emphasized the importance of key ethical principles in AI governance, including accountability, transparency, and fairness.

speakers

Chris Martin


Matthew Sharp


arguments

Accountability and oversight in AI decision-making


Transparency and explainability of AI systems


Fairness and non-discrimination in AI outcomes


Unexpected Consensus

Environmental impact of AI

speakers

Chris Martin


arguments

Sustainability and environmental impact considerations


explanation

While most discussions focused on societal and ethical impacts, Chris Martin unexpectedly highlighted the significant environmental concerns related to AI energy consumption, which wasn’t echoed by other speakers but is an important consideration.


Overall Assessment

Summary

The speakers demonstrated strong agreement on the importance of ethical AI governance, the need for practical assessment tools, and the focus on human rights in AI development and deployment.


Consensus level

High level of consensus among speakers, particularly on the need for human-centric, ethical AI governance. This agreement implies a shared vision for the future of AI regulation and development, which could facilitate more coordinated and effective approaches to addressing AI-related challenges.


Differences

Different Viewpoints

Unexpected Differences

Overall Assessment

summary

There were no significant areas of disagreement identified among the speakers.


difference_level

The level of disagreement was minimal to non-existent. The speakers presented a unified approach to ethical AI governance, focusing on human rights, practical tools for risk assessment, and multi-stakeholder collaboration. This alignment suggests a cohesive strategy within the DCO for addressing ethical challenges in AI development and deployment.


Partial Agreements


Takeaways

Key Takeaways

AI governance is now a critical societal challenge requiring ethical frameworks and human rights protections


The Digital Cooperation Organization (DCO) is developing an AI ethics evaluation tool focused on human rights


The tool assesses risks for both AI developers and deployers across six key ethical principles


Practical application of ethical AI principles requires careful risk assessment and mitigation strategies


A multi-stakeholder, cooperative approach is essential for responsible AI development and deployment


Resolutions and Action Items

DCO to finalize and publicly release their AI ethics evaluation tool


Participants to provide feedback on the session and tool prototype via the provided QR code


Unresolved Issues

Specific implementation details of the AI ethics tool across different contexts and industries


How to address the uneven global diffusion of AI technologies and governance frameworks


Balancing innovation with ethical considerations in AI development


Suggested Compromises

None identified


Thought Provoking Comments

Every decision that AI systems make, what they power, are going to impact and shape our lives, how we work, and how we interact. And the stakes are monumental. They demand that we get this right, and at the same time, key questions remain. Most especially, how do we ensure that AI is both a powerful tool, but also ethical, responsible, and human-centric?

speaker

Chris Martin


reason

This comment sets the stage for the entire discussion by emphasizing the far-reaching impact of AI and the critical importance of ethical governance.


impact

It framed the subsequent conversation around the ethical implications of AI and the need for responsible development and deployment.


We wanted to look at it from a human rights perspective. So we identified which human rights are more, most impacted by the artificial intelligence, and then we reviewed across our membership and across the globe, how does AI policies and regulation and governance intersects with those human rights, and what needs to be done to ensure that we have a human rights protective, ethical AI governance approach.

speaker

Ahmad Bhinder


reason

This comment introduces a unique approach to AI governance by centering it on human rights, which is not commonly seen in other frameworks.


impact

It shifted the focus of the discussion towards considering AI’s impact on specific human rights, leading to a more nuanced conversation about ethical AI governance.


AI diffusion is pretty uneven across the world. This looks at the market for AI concentrated across Asia Pacific and North America and Europe to a greater degree, but still a lot of opportunity for growth in the Middle East and North Africa, where currently a lot of DCO member states reside.

speaker

Chris Martin


reason

This observation highlights the global disparities in AI development and adoption, bringing attention to the need for inclusive approaches.


impact

It broadened the scope of the discussion to consider the global context and the importance of supporting AI development in regions that are currently underrepresented.


Our tool, there are a few other frameworks and tools similar to it. A lot of these are developed by national governments and tend to focus on their own national contexts, for example the UAE and the New Zealand frameworks. A lot of them focus on verification of actions rather than risk assessments. And a few of the existing tools focus only on AI developers and not AI deployers as well. And generally ours is the one that’s most focused on human rights.

speaker

Matthew Sharp


reason

This comment provides a comparative perspective on existing AI governance tools, highlighting the unique features of the DCO’s approach.


impact

It helped participants understand the distinctive aspects of the DCO’s tool, particularly its focus on human rights and inclusion of both developers and deployers.


Overall Assessment

These key comments shaped the discussion by establishing the critical importance of ethical AI governance, introducing a human rights-centered approach, highlighting global disparities in AI development, and differentiating the DCO’s tool from existing frameworks. They collectively steered the conversation towards a more comprehensive, globally-aware, and human-centric consideration of AI ethics and governance.


Follow-up Questions

How can we ensure AI systems are transparent and their decision-making processes are explainable?

speaker

Chris Martin


explanation

Transparency in AI decision-making is crucial for building trust and ensuring accountability.


What are the best practices for conducting human rights impact assessments for AI systems?

speaker

Chris Martin


explanation

Human rights impact assessments are important for mitigating potential problems early in AI deployment.


How can countries address workforce displacement and educational requirements resulting from AI adoption?

speaker

Chris Martin


explanation

Preparing populations for AI-driven changes in the job market is crucial for future-proofing societies.


What are effective strategies for fostering investment and entrepreneurship in AI while ensuring diversity and inclusivity?

speaker

Chris Martin


explanation

Building a diverse and inclusive AI innovation ecosystem is critical for ethical AI development.


How can we address the increasing energy consumption requirements of AI systems to align with environmental goals?

speaker

Chris Martin


explanation

The growing energy demands of AI pose significant environmental challenges that need to be addressed.


What are the most effective ways to integrate stakeholder feedback in AI system development and deployment?

speaker

Matthew Sharp


explanation

Incorporating diverse perspectives is crucial for developing ethical and human-centered AI systems.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #170 2024 Year of All Elections: Did Democracy Will Survive?


Session at a Glance

Summary

This discussion focused on efforts to combat disinformation and protect election integrity across different regions, particularly in light of recent and upcoming elections. Speakers from European institutions highlighted initiatives like the European Digital Media Observatory (EDMO) and legislative measures such as the Digital Services Act to monitor and counter disinformation. The European Parliament’s efforts included pre-bunking videos and media literacy campaigns to empower voters.


In contrast, the United States was described as regressing in its approach, with platforms reducing content moderation and a rise in polarization and hate speech. The speaker from Africa Check emphasized unique challenges in Africa, including limited internet access, language barriers, and concerns about potential misuse of anti-disinformation laws. Despite fears about the impact of generative AI on elections in 2024, the final speaker noted that while AI-generated content played a role in elections, it did not have the catastrophic effects some had predicted. However, continued vigilance and research on AI’s impact on elections were stressed as crucial.


The discussion highlighted regional differences in approaches to combating disinformation, with Europe taking a more regulatory stance, while other regions face distinct challenges in implementing similar measures. Overall, the speakers emphasized the importance of collaboration, media literacy, and ongoing monitoring to address the evolving landscape of online disinformation and its potential impact on democratic processes.


Keypoints

Major discussion points:


– European efforts to combat disinformation, including legislation, rapid response systems, and initiatives like EDMO


– US challenges with disinformation, including platform inaction and polarization


– African experiences with disinformation in elections, including targeting of journalists and electoral bodies


– The impact of generative AI on elections in 2024 and looking ahead to 2025


– Differing views on the value of legislative approaches to combating disinformation in different regions


The overall purpose of the discussion was to examine approaches to combating disinformation and protecting election integrity across different regions, with a focus on recent and upcoming elections. Speakers shared experiences and initiatives from Europe, the US, and Africa.


The tone was largely informative and analytical, with speakers providing overviews of the situation in their respective regions. There was a sense of concern about the challenges posed by disinformation, but also some cautious optimism about efforts to address it, particularly in Europe. The tone became slightly more urgent when discussing the US situation and the potential impacts of AI going forward.


Speakers

– GIACOMO MAZZONE: Moderator


– ALBERTO RABBACHIN: Representative from the European Commission


– GIOVANNI ZAGNI: Representative from the European Digital Media Observatory (EDMO)


– PAULA GORI: Representative from EDMO


– DELPHINE COLARD: Spokesperson at the European Parliament


– BENJAMIN SHULTZ: Representative from American Sunlight Project


– PHILILE NTOMBELA: Researcher at Africa Check (South African office)


– CLAES H. DE VREESE: University of Amsterdam, Member of Executive Board of EDMO


Additional speakers:


– VIDEO: Narrator in a pre-recorded video about disinformation


Full session report

The discussion focused on efforts to combat disinformation and protect election integrity across different regions, particularly in light of recent and upcoming elections. Speakers from various institutions and organizations shared insights on the challenges and strategies employed in Europe, the United States, and Africa.


European Efforts and the Role of EDMO


Giovanni Zagni, from the European Digital Media Observatory (EDMO), explained EDMO’s crucial role in monitoring disinformation across the European Union. This initiative brings together fact-checkers, researchers, and other stakeholders to provide a comprehensive overview of the disinformation landscape in Europe. EDMO’s work includes coordinating national hubs, conducting research, and providing policy recommendations.


Delphine Colard, spokesperson for the European Parliament, outlined additional measures taken by the institution. These include the creation of a website explaining election integrity measures and the production of pre-bunking videos designed to educate voters about disinformation techniques. Colard emphasized the importance of media literacy campaigns in empowering voters to critically evaluate information. She also mentioned the potential establishment of a special committee on European democracy shields by the European Parliament.


Alberto Rabbachin, representing the European Commission, highlighted the activation of a rapid response system for European elections. This system aims to quickly identify and address disinformation threats as they emerge, demonstrating a coordinated approach to tackling disinformation in Europe.


Challenges in the United States


Benjamin Shultz, representing the American Sunlight Project, painted a concerning picture of the situation in the United States. He described a regression in efforts to combat disinformation, characterized by platforms “giving up” on content moderation. This has led to a rise in far-right narratives claiming censorship and a proliferation of deepfakes targeting politicians.


Shultz highlighted specific examples, including a recent report by the American Sunlight Project on sexually explicit deepfakes targeting members of Congress. He also noted the lack of regulation on election integrity measures in the US, contrasting sharply with the European approach. Shultz expressed concern about platforms like Meta’s third-party fact-checking program and issues with content moderation on YouTube.


African Perspective on Disinformation


Philile Ntombela, a researcher at Africa Check’s South African office, provided insights into the unique challenges faced in combating disinformation in Africa. These include:


1. Targeting of journalists and judiciary bodies by disinformation campaigns


2. A significant digital divide limiting access to fact-checking resources


3. Language barriers that complicate fact-checking efforts


4. Concerns about the potential misuse of anti-disinformation laws for censorship


Ntombela shared an example of how fact-checkers and journalists in South Africa faced accusations of bias when attempting to fact-check politicians’ statements. This led to the formation of an Elections Coalition, which included journalists and media houses working together on fact-checking efforts.


She also highlighted the Africa Facts Network declaration, an initiative to increase collaboration among African fact-checking organizations. Ntombela emphasized the need for context-specific solutions that take into account local challenges and potential risks associated with strict regulations.


Impact of AI on Elections


Claes H. de Vreese, from the University of Amsterdam and a member of EDMO’s Executive Board, addressed the role of generative AI in recent elections. He described the current situation as being “between relief and high alert.” While AI-generated content played a role in the 2024 elections, it did not have the catastrophic effects that some had feared.


However, de Vreese emphasized the importance of continued vigilance and research on AI’s impact on elections. He suggested that observatories like EDMO should continue monitoring how these technologies are deployed across various aspects of the electoral process in future elections, such as those in 2025.


Unresolved Issues and Future Directions


Several key issues remain unresolved and warrant further attention:


1. Balancing free speech concerns with the need to combat disinformation, particularly in the US


2. Addressing the digital divide and language barriers in combating disinformation in Africa


3. Understanding the long-term impact of AI-generated content on election integrity


4. Assessing the effectiveness of current platform policies in addressing disinformation globally


The speakers discussed whether a legislative framework similar to Europe’s would be helpful in their regions. While some saw potential benefits, others expressed concerns about the potential for misuse or unintended consequences.


The discussion concluded with a question from the moderator about the most pressing issues for the coming year. Responses varied by region but included the need for continued monitoring, improved collaboration between stakeholders, and addressing the challenges posed by emerging technologies.


The discussion highlighted the evolving nature of the disinformation landscape and the importance of ongoing research, collaboration, and adaptive strategies to protect election integrity in diverse global contexts.


Session Transcript

ALBERTO RABBACHIN: were committed to analyze the flags and decide, on the basis of their terms of service, if an action needs to be taken. This system was activated for the European election, for the French election, and for the Romanian election. And there was also a similar mechanism that was put in place for the Moldovan election. I’m basically going towards the end of my presentation. I just wanted to have a focus on the Romanian election. We know that it has been very difficult. There was a lot of foreign interference in this election, and this was shown also by the number of flags that the rapid response system had seen, with more than a thousand flags exchanged, coming from civil society organizations and fact-checkers, and going to the major platforms. Alberto? Yes, I’m concluding. Very quickly, I mentioned the European Digital Media Observatory. This is a bottom-up initiative financed by the Commission. We have put more than 30 million euros into this initiative. It has 14 hubs, and it has a system of monitoring disinformation across the EU and doing investigations. I’m sure that Giovanni will give you all the details on this. I will stop here, and I’m happy to take any questions if you need.


GIACOMO MAZZONE: Thank you very much. The question will be eventually at the end, because now we are very tight. Giovanni, you have been asked to continue and to complement the picture.


GIOVANNI ZAGNI: Yes, thank you. Thank you, Alberto. I’ll try and share my screen, because I have my presentation there. We’ll have a couple of seconds of embarrassed silence. Something is happening, yes, that’s magic, ok, that’s great. So, good afternoon. It is great for me to present our work with the European Digital Media Observatory in such an important venue as the 2024 Internet Governance Forum. My time is short, so I will dive right in. First of all, I would like to present a couple of key pieces of information about the observatory, which is usually known through its acronym, EDMO. Alberto mentioned it briefly, so since I have a couple of minutes more, I’ll try to present it. The observatory was established in 2020 as a project co-funded by the European Union. Its core consortium is composed of universities, research centres, fact-checkers, technology companies and media literacy experts. Besides the coordinating core, there are currently 14 hubs connected to EDMO, covering sometimes just one country and sometimes a larger area in Europe. The concept behind the hubs is to cover the variety of languages and the specificity of media landscapes across the Union, since the challenges posed by disinformation are clearly very different from Slovakia to Portugal and from Finland to Greece. The general scope of EDMO is to obtain a comprehensive coverage of the European media ecosystem, mainly with regard to disinformation and all the connected issues, and to propose, as well as enact, new and effective strategies to tackle them. For the broadness of its scope and its multi-stakeholder approach, it is a unique experience in the European landscape for its ability to carry out many different efforts, such as: one, to monitor disinformation across the continent through a network of 50-plus fact-checking organisations working together on a regular basis in monthly briefs and investigations. 
Two, to provide analysis and suggestions in the policy area, with a special focus on the code of practice on disinformation that was mentioned by Alberto right before. Three, to coordinate and to promote media literacy activities such as the one that I’ll present in a minute. And four, to contribute to the research landscape in the field, with a special effort in promoting data access to researchers; but on this we have Claes here, who will tell us more in a minute. Let’s try and be very practical and specific about what we have done and what we are currently doing. I will give only a couple of examples of the many activities we are carrying out, and I invite you to visit our website edmo.eu to know more. First example: ahead of the 2024 European elections we decided to set up a task force specifically to monitor the media ecosystem and to provide insight, analysis and action around the issue of disinformation. As part of this effort we set up a daily newsletter that, thanks to the hubs and the day-to-day work of fact-checkers in the field, updated policy makers, journalists and experts about the main disinformation narratives and the most important issues of the day, providing a connecting work which is usually made difficult at the European level, for example because of the great linguistic variety of Europe. Let me show you an issue I selected almost randomly. This is issue number 46, on June 10, 2024. It had three main items: one about how disinformation tried to exploit problems with polling stations in Spain on election day, as detected by Spanish fact-checkers; another one about how a top Swedish candidate’s campaign was likely boosted by coordinated behaviour by X accounts, done by linking a Swedish report in the newsletter; and the third one about the rise of disinformation targeting the 2024 Paris Olympics, which was spotted by the big French newswire agency AFP and its Factuel department. 
At the same time we published reports and weekly insights about the main trends in disinformation in Europe, and we promoted a Europe-wide media literacy effort with the hashtag BeElectionSmart. The BeElectionSmart campaign was an EDMO initiative to support citizens in finding reliable information about the elections and recognizing false or manipulative content ahead of the elections themselves. Each Monday, from the 29th of April to June 3rd of 2024, a new message along with practical tips was published on the websites and social media accounts of all 14 EDMO hubs, covering all EU member states. Most of the activities were new ideas never tried out at the European scale, and some of these activities turned out to be successful pilot projects that we have since adapted to new scenarios. For example, the Romanian elections of late November 2024 that were mentioned before made headlines in Europe and beyond when a previously little-known candidate was able to finish first in the popular vote. Citing foreign interference and tampering with the regularity of the electoral campaign, on December 6th, 2024 the country’s constitutional court annulled the results and ordered the first round of the elections to be held again. Drawing from a pilot during the last few weeks before the EU elections in June that Alberto mentioned, in the context of the Romanian elections we activated a rapid response system, a mechanism through which members of the EDMO community can proactively flag to very large online platforms suspicious and troublesome cases. It is then up to the platforms to take action or not, according to their terms of service. Moreover, on the EDMO website, we translated and made available to the community at least three analyses of the role of social media in the Romanian elections. 
Just yesterday, we made progress in an effort to provide a technological tool based on AI, currently in its testing phase, to Romanian fact-checkers, a tool which is apparently quite good at spotting networks of social media accounts carrying out coordinated campaigns of dissemination of suspicious content. This is the result of cooperation between many actors: researchers involved in developing the tool, who are part of the EDMO community; the Bulgarian-Romanian Observatory of Digital Media, or BROD, which is one of the most active regional hubs in the EDMO ecosystem; and EDMO-EU acting as coordinator and facilitator of these exchanges. It is tiresome, intensive, but also very rewarding work. And let me conclude with some actionable advice, if this short presentation gave you some food for thought. First, disinformation is an issue that naturally crosses disciplines and fields; it is crucial to build a multidisciplinary network of practitioners. Second, it is also necessary to find means to connect those practitioners: if not a big and expensive project like EDMO, then just a newsletter, or a comms group, or an app. Three, to help you communicate the results and attract new forces for your effort, you will need to produce easily readable, easily shareable outputs; there are many for EDMO, and you can find all of them on the edmo.eu website. Finally, I invite you to get in contact with us at EDMO to know more about our experience, our difficulties, our few successes, and many challenges. We are very happy to share what we know and to learn about what we do not. And thank you for your attention.


GIACOMO MAZZONE: Thank you very much, Giovanni. I hand over to Paula, please.


PAULA GORI: Thank you, Giovanni, for sharing the work that we are doing at EDMO. I think you did a very impressive presentation of the work we did and I like how you concluded: we are trying to learn, but we are open to learn even more. And that’s why events like today are actually very important, also to learn from other experiences. But now I would like to give the floor to Delphine Colard. She is the spokesperson at the European Parliament and she will tell us what the European Parliament actually did. Because, as you know, in June we had the EU elections and that was quite an important moment for the Parliament. So the floor is yours, Delphine.


DELPHINE COLARD: Well, thank you, and thank you for the opportunity to join you remotely and to talk here today. Indeed, the Parliament has been active in this area since 2015. As co-legislator, it has been pushing forward legislation, the legislation that my colleague from the Commission outlined, legislation to protect citizens from the detrimental and harmful effects of the Internet, but also promoting freedom of speech and ensuring consumers have access to trustworthy information. This was at the core of the priorities during the past legislature and it will remain at the core in the next legislature that just started. And if we take stock now of the European elections that took place last June, well, the European elections are conducted hand in hand with the 27 EU member states. The European Parliament was adding a layer, deploying a go-to-vote campaign to mobilize as many people as possible by showing the added value of European democracy. And an important part of this democracy campaign, of this communication strategy, was to counter and prevent disinformation from harming the electoral process. And the idea was to anticipate potential disruptions that could be expected in connection with the European elections. And we cooperated to this end with the other EU institutions, from the colleagues of the Commission that you just heard, to the European member states in a rapid alert system, and the fact-checking community, to get a complete picture. And I have to say that the in-depth analysis that was provided throughout the period by the European Digital Media Observatory, which we just heard about from Giovanni, was really, really worthwhile. This was instrumental to have this whole-of-society approach. 
From our internal analysis in the Parliament of national elections in the member states, we knew that we could expect attempts to try and sow distrust in electoral processes, alleging that elections are fraudulent or rigged, or spreading false voting instructions, or sowing polarization, especially around controversial topics. So what we wanted is to make sure that European citizens were exposed to factual and trustworthy information, to empower them to recognize the signs of disinformation and also to give them some tools to tackle it. We were inspired by many good practices in the different member states, and following similar examples from Estonia and the Nordic countries, we set up a website about the European elections, explaining the technicalities of the elections. I hope you see the slide, because I don’t. The measures, perfect. The measures put in place by the European member states were explained, and the website also explained at length how the EU ensures free and fair elections. So the idea of the website was to inform about the different aspects of election integrity, from information manipulation to data security. The website was also equipping voters with tools to tackle disinformation. This is one example. But of course, we also developed a series of pre-bunking videos explaining how to avoid common disinformation techniques: for example, taking advantage of strong emotions to manipulate, or polarizing attempts, or flooding the information space with contradictory versions of the same event. And thanks to the External Action Service, our partners in foreign policy, the videos were also available in non-EU languages, including Ukrainian, Chinese, Arabic and Russian. And I think I have a short version that we can show for 40 seconds now. Thank you. And it’s coming.


VIDEO: Disinformation can be a threat to our democracy. People who want to manipulate us with disinformation often use content with strong emotions, such as anger, fear or excitement. When we feel strong emotions, we are more likely to hit the like or share button without checking if the content is true. By doing this, we help spread disinformation to our friends and families. What can you do? Watch out for very emotional content, such as sensational headlines, strong language and dramatic pictures. Question what you see and read before you share. Things you would question if somebody told you face to face. You should also question online. Take a step back. Pause and resist the reflex to react without thinking.


DELPHINE COLARD: So you see, this was an example. This is what was spread throughout the period before the elections, and it was shared via social media and TikTok. In addition, we organized briefings and workshops for civil society organizations, for youth, for educators, for journalists, for content creators. The idea there was really to engage different audiences, providing tips and tricks on how to detect and avoid disinformation techniques. We also reached members of the European Parliament with a specific guide. One element we are particularly proud of is establishing contact with several youngsters across Europe, especially first-time voters, via Euroscola and what we call the European Parliament’s Ambassador School Programme. Those are flagship programmes of the Parliament for students. As you know, raising awareness of the threat is a key and long-lasting solution that requires the involvement of society as a whole, and it starts with education. These were a few examples. One element that I want to highlight is also the importance we placed on having strong relationships with private entities and civil society organisations to convey the importance of voting, and this was spread as widely as possible. Tech companies and other companies, as the Code of Practice also showed, were instrumental to have on board. I want to mention the importance of strong, independent, pluralistic media that were instrumental in this fight. During the legislature, several pieces of legislation were passed at the European level, such as the European Media Freedom Act or a directive to protect journalists against abusive lawsuits, and we also tried to support media in their work, through briefings, invitations or grants. 
We saw a lot of things during the last elections, maybe not the tsunami that we potentially feared, but there was an increase in information manipulation attempts targeting the European elections. Until now, we have not detected any that seemed capable of seriously distorting the conduct of these elections, and this is an assessment that we have shared with the other EU institutions and the European Digital Media Observatory; Giovanni can of course give you more information if you need. But we have to remain vigilant beyond the European elections and continue monitoring, because the effect of disinformation is not a one-off. It’s not only during the European elections or those big moments; it’s a slow dripping that hollows out the stone. Look at what recently happened with the Romanian elections. It is something the Parliament is really scrutinizing at the moment, asking the Commission for information about it. In this legislature, we see that the Parliament is very eager to have more information; it has passed several texts calling for stepped-up efforts in this area. There is a new dimension this week: there will be a special committee that will focus on the European democracy shields, really to assess existing and planned legislation related to the protection of democracy. They are also asking to deal with the question of addictive designs in social media platforms. So there is a lot of activity. And next week in the plenary, there will be two debates specifically on disinformation, especially disinformation during electoral periods. Maybe to conclude, as Giovanni did, with some learnings, three main ones from our end. First, information manipulators really see elections as an opportunity to advance their own goals by smearing leaders and exploiting existing political issues. 
Such distrust erodes the credibility of the democratic system and its institutions. Second, good intentions and voluntary actions are not enough; legislation and regulation play an instrumental part, so the Parliament has been and will remain a key actor there as co-legislator to shape laws that are fit for the digital age. And third, as Giovanni already underlined, it is really important to continue to implement a whole-of-society approach, learning from each other’s practices and programs to double our efforts to make society more resistant to destabilization attempts. This was a bit of what we wanted to transmit for the European Parliament. Thank you.


PAULA GORI: Thank you so much, Delphine, and indeed, as you said, maybe there wasn’t a tsunami, but as you rightly underlined, and as we also underlined at EDMO, disinformation works rather drop after drop, so we cannot just focus on the period ahead of elections; it is rather a longer process. Right after these two presentations, there are so many keywords: media, journalism, fact-checking, media literacy, emotions, addictive design, digital platforms and so on. So you understand why the whole-of-society approach, but also why the multidisciplinary approach: for example, if we know that emotions play a certain role, that’s thanks to specific research in the field; if we know about fact-checking, it’s another field of research. That’s why institutions like EDMO, in collaboration with other organizations like the Parliament and the Commission, but platforms as well, and civil society organizations and so on, are so important. And now, after all these words about the EU, we thought that it would be interesting to focus also on other parts of the world, because, as we were saying, this was a year of elections across the whole globe. So I’m very happy to hand over to Benjamin Shultz from the American Sunlight Project, who will focus on the US. The floor is yours.


BENJAMIN SHULTZ: Awesome, thanks so much, Paula. I’m just going to share my screen real quick if it wants to go. Okay, it looks like that was a success. Can everyone see? Yes. Okay, lovely. Well, thank you all very much. It’s wonderful to be here speaking with such a geographically, and in so many other ways, diverse crowd from all over the world, from different backgrounds, all here to talk about disinformation and how we make the internet a better, safer place. So thank you for having me. Obviously, in the US, we just had an election. Putting it bluntly, I think it’s a result that surprised a lot of people. And really, this year of elections in the States was kind of a weird one. In many ways, we kind of regressed, and I’m going to explain this a bit further over my seven or eight minutes, but this is just a taste of what’s to come. We’re in a very different place in the States than we were four years ago, and even eight years ago, in terms of taking action on disinformation, in terms of platforms playing an active role in content moderation, trust, and safety across the board. We’ve seen things regress, which I think is the direct opposite of how things have gone in Europe. And so it’s a very interesting phenomenon taking place. So with that, I will jump on into the slides here. This GIF is one of my favorite GIFs, and I think, despite it being funny, it really accurately describes the current US approach to disinformation and online harms of all kinds. Whack-a-troll is what we at the American Sunlight Project call it; that’s a play on whack-a-mole. As you can see, the cat is just trying to tap the fingers as they’re popping up, and as the cat tries to tap them, they go away, and the cat continues to do this for as long as I have this slide up here. And I think, again, this sums up where we’re at. In the last year, really two years, we have seen platforms pretty much just give up entirely. 
And this is not specific to any particular platform, although I will say that some certainly are doing more giving up than others. Not wanting to impugn anyone’s integrity, I won’t name that platform, but I think we can all take a guess. And this is really problematic, because we have seen a massive proliferation of hate speech and of false information of all kinds, from false polling place locations to attacks against elected officials, doxing, things like this. This has become commonplace in the States in the last year or two. We’ve seen political polarization reach really unprecedented levels, and the political system is about as toxic as it’s been, certainly in my lifetime. And even though that’s anecdotal and qualitative, it has certainly become worse in the last couple of years. This has been buoyed by the rise of the far right in the States. There’s been a significant populist turn, as there has been in many other countries in the world, and we’ve seen platforms sort of go along with this, which I think is very different from how things have played out in Europe. In the States we do not have regulation such as the Digital Services Act, or really anything of the sort, and platforms, as you can see just from these headlines, have surrendered and given up. Termination of trust and safety teams has taken place. And we’ve also seen the rise of a narrative of censorship, from the far right primarily, also limited sections of the far left, but really the political fringe on both sides have started to claim that any content moderation, any action against false and malicious claims, is tantamount to censorship. In the States, of course, we have the First Amendment, which protects the right to freedom of speech, expression and assembly. 
And this is of course a very American right; this is something that I think every single American supports, the right to freely express yourself. But where we’re seeing some conflict, politically, in the States, and with platforms, the government and the incoming administration, is where that right ends, where the boundary is. For instance, in the States right now, my organization, the American Sunlight Project, just released a new report last week examining how malicious, sexually explicit deepfakes have actually affected members of Congress. What we found was that one in six women in Congress in the States are currently subject, right now on the internet, to sexually explicit deepfakes. And what we’ve seen is that the Senate has already passed bills that would regulate these deepfakes and make it a criminal offense to spread them non-consensually, because you’re using someone’s likeness to denigrate them and portray them as being in pornographic material. But we’ve seen pushback in the House from the far right, who claim that this is a violation of free speech. So this is kind of the situation in the States right now, and I want to preface everything that I’ve said here: I’m not taking a position one way or another, just sort of painting a picture of where we’re at. And, sorry, I should have skipped ahead a little bit earlier on the slide here, but we’ve seen this kind of democratization, in the most extreme way, of artificial intelligence and deepfake technology, but also other types of malicious text-based content too. We’ve found plenty of evidence of foreign bot networks that have played a significant role on various social media platforms in this election cycle. To the extent they changed voting behavior, I think that’s really kind of impossible to measure. 
But certainly we have plenty of evidence, and not just us but pretty much every organization working in this field in the States, that this type of content has been pervasive and getting into people’s feeds, whether it’s on TikTok or X or Facebook or Instagram or whatever. We’ve seen that algorithms, just as my European colleagues mentioned, favor content that is emotional, that gets people riled up and makes them want to click more and scroll more. And we’ve certainly seen plenty of malicious, fake, and actually also illegal content making its way into people’s feeds. So again, to the extent that changed voting behavior, not sure, but certainly people have been increasingly exposed to false and malicious content in this election cycle. Going back to the deepfake issue, and not just in terms of sexually explicit or image-based deepfakes, we’ve seen numerous instances in the States of election officials, and even Joe Biden himself, being spoofed and imitated by deepfakes. And again, in the States right now, just to paint the picture, without taking a political position, just accurately describing where we’re at: a lot of this material is completely legal to make, create and disperse. Now, in this headline, the third one on the right here, New Hampshire officials investigate robocalls, et cetera, there was a criminal investigation, because it’s illegal to interfere this blatantly in an election. But there are plenty of other instances of this type of behavior which haven’t been prosecuted. And this is incredibly damaging. And again, in the States, we have pretty much zero regulation on deepfakes or really any kind of election integrity measures. We have pretty much none. Which, again, is pretty much the direct opposite of how Europe has approached this issue. I think certainly in the States our institutions are structured differently; I think it’s much harder for us to implement these kinds of measures, the DSA, GDPR, et cetera. But nonetheless, this really just highlights the issue.


GIACOMO MAZZONE: Thank you, Benjamin. Can you go to a close, please?


BENJAMIN SHULTZ: Yes, we’ll close it up. So one thing, and I’ll wrap it up very quickly: Kamala Harris ran for president in 2020, and again, not getting political, but she has pretty much been the most attacked person in terms of gender and identity in the last four to five years in the States. Numerous studies, actually one study that my boss authored, found that of all gender- and identity-based attacks against any politician in the United States, Kamala Harris received roughly 80% of those attacks. And this, again, going back to the polarization and the toxicity of the American political system, highlights that, and it makes it incredibly difficult to get people to agree on a set of regulations, a set of rules for technologies or platforms, et cetera. And so with that, I will wrap it up and say that for the immediate term, the outlook in the States is not so good. And hopefully we get through this tough period and are able to be united, as Europe has been, on this issue, and improve our feeds, improve our political system and go from there.


PAULA GORI: Thank you very much, Benjamin. As we are running a little late, I will hand over immediately to Philile Ntombela from Africa Check. We are moving again geographically. Philile, the floor is yours.


PHILILE NTOMBELA: Good day, everybody. I’m going to quickly share my screen and then get started. Can you see my screen at all? Not yet. OK. Now, yes, you can see it now. Fantastic. Great. First and foremost, thank you for having me. My name is Philile Ntombela. I’m a researcher at the South African office of Africa Check. Africa Check is the continent’s first independent fact-checking organization, and we have offices in South Africa, Kenya, Nigeria, and Senegal. I’m sure everybody in the room is aware, but we often like to point out the distinction: misinformation is shared by well-meaning people who are trying to inform you and have no idea that the information is false, while disinformation is shared with an intent to mislead; it has a goal in mind, often a political one, and the people who share it know that the information is false. As for patterns we have seen across the continent through this election year, we found that targeting journalists, the judiciary, and other institutions was a very powerful tactic. Journalists were accused of bias whenever they tried to fact-check. We had something called the Elections Coalition in South Africa, which included journalists and media houses who would either do a quick fact-check themselves, after we trained them beforehand as part of our organization’s training programs, or we helped them to fact-check. And often, whenever they fact-checked a specific politician, they were accused of bias and told that they support the opposition. We also had rumors of electoral bodies favoring parties.
A certain party in South Africa actually took our Independent Electoral Commission to the Constitutional Court, which is the highest court in our country, stating that they were being marginalized and receiving unfair treatment. The Constitutional Court ruled against them, of course, because it wasn’t actually true. It was more a publicity move, designed to make people wonder whether the Constitutional Court and the electoral commission are really independent. And this leads to the last point, which ties back to that court case: people are more connected, but voters are still vulnerable to misinformation. Media literacy is not widespread; a lot of people simply don’t have it. News media is increasingly putting important information behind paywalls, and on a continent like ours, with a huge amount of economic inequality and poverty, it is very difficult for people to get past those paywalls. Language remains a barrier: Africa has, I think, more than 2,000 languages across 53 states, and those are just the official ones. On top of the connectivity question, we found a report by the International Telecommunication Union which showed that, even in 2022, Africa still suffered from what we call the digital divide: we had the lowest level of internet connectivity. That means that whatever information people receive, they don’t have the opportunity to look it up, double-check it, or send it to a fact-checking organization like Africa Check, so these issues remain a problem. Finally, there is platform accountability. We found that on some platforms, YouTube especially, I don’t know if I’m supposed to mention it, people were able to share disinformation unchecked.
One of the platforms has put notes with a sort of miniature fact-check, but that doesn’t actually curb the spread of the same posts carrying disinformation. We are part of the Meta third-party fact-checking program, and on that side we found that we were actually able to help curb the spread, because when we add a fact-check, it downgrades the post or even removes it. However, there is still far more to be done on social media, particularly in places like Africa where, as I said, it is very difficult for people to access that information in other ways. If I come across information that is incorrect, but I believe it and share it with somebody who has no way of finding out whether it is true, that information spreads even faster. And platforms’ algorithms definitely need to be able to pick up common phrases used for disinformation, particularly in election years, but in general as well. The biggest piece of disinformation this year was fraud allegations, particularly in South Africa; claims of vote rigging were by far the worst. A specific politician started these. He is a very charismatic character, but he has also had a lot of legal problems and problems with the IEC, so of course he started saying that the vote would be rigged even before the election season began, even the year before. The media then just shared this in a parroting fashion; they did not actually fact-check it or even frame it as something this person had said. It was shared more or less verbatim. This then trended on social media, and these are some of the examples. On my left, you have a report by a nationwide newspaper, which took the statement made by the politician and simply made it the headline, which of course reinforces the idea rather than conveying that this is merely what somebody said.
On the right, the conspiracy was then named “the big lie”. And we found that between 25 May and June, this claim basically took over social media, with the biggest drivers being the ones in purple and then, later, the ones in turquoise.


GIACOMO MAZZONE: Sorry, we are running long. Could you go to a close? We have the last speaker waiting and the next session starting soon. Thank you very much.


PHILILE NTOMBELA: Okay, sure. Then I’ll just speak about our stance on anti-disinformation regulation. We found that in Africa, the backfire can be quite damaging. Such laws can be used to stifle people, to censor, and people can turn to covert means, for example platforms like WhatsApp, which have end-to-end encryption, and then we won’t have access to the content. Regulation also runs the risk of penalizing misinformation instead of disinformation. So instead we decided on a combined accord, created this year through the Africa Facts network: 55 fact-checking organizations in more than 30 countries committed to reaching offline and vulnerable communities, expanding access to reliable information, protecting fact-checkers from harassment, and collaborating with tech partners to innovate. For us this was a collaborative space rather than a legislative or legal one. And that is all. Thank you so much.


GIACOMO MAZZONE: Thank you, Philile. Sorry for that. The last speaker now is Claes. Please, Claes.


CLAES H. DE VREESE: Yes. I will suggest something radical, seeing that we have taken a lot of time and the next session is beginning soon. I’m Claes de Vreese. I work at the University of Amsterdam and am a member of the Executive Board of EDMO. I was going to talk about a specific risk, from a broad perspective, around generative AI and elections in 2024. Let me just give you my take-home message rather than going through the whole presentation at this point in the panel. I think we are somewhere between relief and high alert. If you look across the elections of 2024, it is hard to identify an AI-generated risk that really flipped an election in its final days. On the one hand, you could say that is a big relief, because it is very different from the expectations going into 2024, when there were true and genuine fears about elections being overturned through generative AI. That didn’t happen. At the same time, all the evidence, the evidence that would be in the slides I will now skip, shows there has also not been a single election worldwide in 2024 where generative AI and AI-generated material has not played a role. And I think that is the real take-home message of this discussion on AI: there is a certain sense of relief that 2024 did not become the absolutely catastrophic year, even with regulation still absent in this area and a technology that was available and deployed, but without the detrimental effects that were expected. Does that mean that the AI discussion is over as we move into a big election year in 2025? Absolutely not. It is important to keep looking at the impact of generative AI, and EDMO will continue to do so in 2025 as elections take place that year.
It is important to look not only at the persuasion of voters, but at what role AI plays in the entire ecosystem of elections: in the donation phase, in the mobilization phase, in spreading disinformation about political opponents, and in igniting and fueling already existing conflict lines and emotions in particular elections and societies. So let that be the take-home message for 2025: while 2024 did not become the AI catastrophe that was in many ways predicted by a lot of observers, also in this space, I believe that as we move into 2025 there is every reason for an observatory like EDMO to continue the work, to see how these technologies are being deployed across elections. And this is something we should do collaboratively, also with centers, researchers, and civil society from outside the European Union, to really get a better grasp on the impact of AI on elections.


GIACOMO MAZZONE: Thank you very much, Claes. Thank you for sacrificing your time. Just one question to the two non-European speakers. Philile and Ben, do you think that if you had a legislative framework like the one in Europe, it would make your life easier or not?


PHILILE NTOMBELA: Okay, I’ll go first. For us, no, it wouldn’t, for the reasons I mentioned in the presentation. There is a long history of censorship and suppression here, and because everything works on precedent, once one person is able to use such a law against people who are actually trying to spread proper information, it can go wrong in so many ways. That is why we came up with that declaration at the Africa Facts Network: more than 50 fact-checking organizations around the continent all signed up to say that we would rather collaborate, including with responsive governments, so that we can fight disinformation and misinformation from a media-literacy perspective, an outreach perspective, and a cross-border but still continental perspective, rather than through laws, because of how laws can be manipulated, as we have already seen with other laws in our countries.


BENJAMIN SHULTZ: Yeah, I think yes, actually. I think it would help us in the States. Implementing something totally identical to the DSA, or any general law regulating platforms, would be difficult to defend legally and would face a lot of chopping down. But measures around the protection of researchers, and data access for researchers, would be extremely helpful and would enable civil society to do what we were doing in 2020, which was analyzing content from platforms and reporting on online harms, which we cannot do today.


GIACOMO MAZZONE: Thank you very much. Thank you to all the speakers, and thank you to the people in the room for your patience. Apologies to the speakers of the next session; we hand over to them now. Sorry for not taking questions, but we will be outside the room if you need help or want to raise any point. Thank you.


A

ALBERTO RABBACHIN

Speech speed

134 words per minute

Speech length

210 words

Speech time

94 seconds

Rapid response system activated for European elections

Explanation

A rapid response system was implemented to combat disinformation during European elections. This system involved analyzing flags and deciding on actions based on platforms’ terms of service.


Evidence

The system was activated for the European election, French election, Romanian election, and Moldovan election.


Major Discussion Point

Efforts to combat disinformation in Europe


Agreed with

GIOVANNI ZAGNI


DELPHINE COLARD


BENJAMIN SHULTZ


PHILILE NTOMBELA


CLAES H. DE VREESE


Agreed on

Importance of monitoring disinformation in elections


G

GIOVANNI ZAGNI

Speech speed

145 words per minute

Speech length

1217 words

Speech time

503 seconds

European Digital Media Observatory monitors disinformation across EU

Explanation

The European Digital Media Observatory (EDMO) is a project that monitors disinformation across the European Union. It involves a network of fact-checkers, researchers, and media literacy experts working together to tackle disinformation.


Evidence

EDMO has 14 hubs covering different countries and regions in Europe, and it monitors disinformation through a network of 50-plus fact-checking organizations.


Major Discussion Point

Efforts to combat disinformation in Europe


Agreed with

DELPHINE COLARD


PHILILE NTOMBELA


Agreed on

Need for multi-stakeholder approach to combat disinformation


D

DELPHINE COLARD

Speech speed

147 words per minute

Speech length

1271 words

Speech time

518 seconds

European Parliament website explaining election integrity measures

Explanation

The European Parliament created a website to explain the technicalities of elections and measures to ensure free and fair elections. The website aimed to inform voters about various aspects of election integrity and equip them with tools to tackle disinformation.


Evidence

The website explained measures put in place by European member states and how the EU ensured free and fair elections.


Major Discussion Point

Efforts to combat disinformation in Europe


Agreed with

ALBERTO RABBACHIN


GIOVANNI ZAGNI


BENJAMIN SHULTZ


PHILILE NTOMBELA


CLAES H. DE VREESE


Agreed on

Importance of monitoring disinformation in elections


Differed with

BENJAMIN SHULTZ


PHILILE NTOMBELA


Differed on

Approach to regulating disinformation


Pre-bunking videos to explain disinformation techniques

Explanation

The European Parliament developed a series of pre-bunking videos to explain common disinformation techniques. These videos aimed to educate voters on how to avoid manipulation and recognize disinformation tactics.


Evidence

The videos covered topics such as emotional manipulation, polarization attempts, and flooding of information with contradictory versions of events.


Major Discussion Point

Efforts to combat disinformation in Europe


Agreed with

GIOVANNI ZAGNI


PHILILE NTOMBELA


Agreed on

Need for multi-stakeholder approach to combat disinformation


B

BENJAMIN SHULTZ

Speech speed

174 words per minute

Speech length

1361 words

Speech time

467 seconds

Platforms giving up on content moderation

Explanation

In the United States, social media platforms have largely abandoned content moderation efforts. This has led to a proliferation of hate speech, false information, and attacks against elected officials on these platforms.


Evidence

Termination of trust and safety teams at various platforms, and headlines indicating platforms’ surrender on content moderation.


Major Discussion Point

Disinformation challenges in the United States


Differed with

DELPHINE COLARD


PHILILE NTOMBELA


Differed on

Approach to regulating disinformation


Rise of far-right narratives claiming censorship

Explanation

There has been an increase in narratives from the far-right in the US claiming that content moderation is a form of censorship. This has made it difficult to implement measures against false and malicious claims on social media platforms.


Evidence

Pushback in the House of Representatives against bills regulating deepfakes, with claims that such regulation violates free speech.


Major Discussion Point

Disinformation challenges in the United States


Proliferation of deepfakes targeting politicians

Explanation

There has been a significant increase in the use of deepfake technology to create false or misleading content targeting politicians in the US. This includes sexually explicit deepfakes of female politicians and voice imitations of election officials.


Evidence

A study found that one in six women in Congress are subject to sexually explicit deepfakes. There were also instances of deepfake voice imitations of Joe Biden and other election officials.


Major Discussion Point

Disinformation challenges in the United States


Agreed with

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


PHILILE NTOMBELA


CLAES H. DE VREESE


Agreed on

Importance of monitoring disinformation in elections


P

PHILILE NTOMBELA

Speech speed

147 words per minute

Speech length

1347 words

Speech time

549 seconds

Targeting of journalists and judiciary bodies

Explanation

In African elections, there has been a trend of targeting journalists and judiciary bodies with disinformation. Journalists attempting to fact-check were often accused of bias, while rumors spread about electoral bodies favoring certain parties.


Evidence

A political party in South Africa took the independent electoral commission to the Constitutional Court, claiming unfair treatment.


Major Discussion Point

Disinformation issues in Africa


Agreed with

GIOVANNI ZAGNI


DELPHINE COLARD


Agreed on

Need for multi-stakeholder approach to combat disinformation


Differed with

DELPHINE COLARD


BENJAMIN SHULTZ


Differed on

Approach to regulating disinformation


Digital divide limiting access to fact-checking

Explanation

Africa suffers from a significant digital divide, with the lowest level of internet connectivity. This limits people’s ability to fact-check information or access reliable sources, making them more vulnerable to misinformation.


Evidence

A report by the International Telecommunication Union showed that Africa had the lowest level of internet connectivity in 2022.


Major Discussion Point

Disinformation issues in Africa


Fraud allegations spreading rapidly on social media

Explanation

In African elections, particularly in South Africa, fraud allegations were a major form of disinformation. These claims spread rapidly on social media, often initiated by charismatic politicians and amplified by uncritical media coverage.


Evidence

A specific politician in South Africa started fraud allegations even before the election season began, which then trended on social media.


Major Discussion Point

Disinformation issues in Africa


Agreed with

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


BENJAMIN SHULTZ


CLAES H. DE VREESE


Agreed on

Importance of monitoring disinformation in elections


C

CLAES H. DE VREESE

Speech speed

175 words per minute

Speech length

491 words

Speech time

167 seconds

Generative AI played a role but did not overturn elections in 2024

Explanation

While generative AI was used in elections worldwide in 2024, it did not have the catastrophic impact that was initially feared. There was no evidence of AI-generated content flipping election results in the final days of campaigns.


Evidence

No single election in 2024 was overturned due to AI-generated material, despite its presence in every election.


Major Discussion Point

Impact of AI on elections


Agreed with

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


BENJAMIN SHULTZ


PHILILE NTOMBELA


Agreed on

Importance of monitoring disinformation in elections


Need to monitor AI’s impact across entire election ecosystem

Explanation

It’s important to continue monitoring the impact of AI on elections beyond just voter persuasion. AI’s role in various aspects of the election process, including donation, mobilization, and spreading disinformation about opponents, needs to be studied.


Major Discussion Point

Impact of AI on elections


Agreements

Agreement Points

Importance of monitoring disinformation in elections

speakers

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


BENJAMIN SHULTZ


PHILILE NTOMBELA


CLAES H. DE VREESE


arguments

Rapid response system activated for European elections


European Digital Media Observatory monitors disinformation across EU


European Parliament website explaining election integrity measures


Proliferation of deepfakes targeting politicians


Fraud allegations spreading rapidly on social media


Generative AI played a role but did not overturn elections in 2024


summary

All speakers emphasized the importance of monitoring and addressing disinformation in elections, whether through rapid response systems, observatories, or analysis of AI-generated content.


Need for multi-stakeholder approach to combat disinformation

speakers

GIOVANNI ZAGNI


DELPHINE COLARD


PHILILE NTOMBELA


arguments

European Digital Media Observatory monitors disinformation across EU


Pre-bunking videos to explain disinformation techniques


Targeting of journalists and judiciary bodies


summary

These speakers highlighted the importance of involving various stakeholders, including fact-checkers, researchers, media literacy experts, and government bodies in combating disinformation.


Similar Viewpoints

These speakers shared a positive view of the European Union’s efforts to combat disinformation through various initiatives and tools.

speakers

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


arguments

Rapid response system activated for European elections


European Digital Media Observatory monitors disinformation across EU


European Parliament website explaining election integrity measures


Both speakers highlighted challenges in their respective regions (US and Africa) related to the spread of disinformation, particularly due to platform issues or lack of access to fact-checking resources.

speakers

BENJAMIN SHULTZ


PHILILE NTOMBELA


arguments

Platforms giving up on content moderation


Digital divide limiting access to fact-checking


Unexpected Consensus

Impact of AI on 2024 elections

speakers

CLAES H. DE VREESE


BENJAMIN SHULTZ


arguments

Generative AI played a role but did not overturn elections in 2024


Proliferation of deepfakes targeting politicians


explanation

Despite concerns about AI’s potential to significantly disrupt elections, both speakers noted that while AI and deepfakes were present in elections, they did not have the catastrophic impact that was initially feared.


Overall Assessment

Summary

The main areas of agreement included the importance of monitoring disinformation in elections, the need for a multi-stakeholder approach to combat disinformation, and the recognition that while AI and deepfakes were present in elections, they did not have the catastrophic impact initially feared.


Consensus level

There was a moderate level of consensus among the speakers, particularly on the importance of addressing disinformation. However, there were notable differences in approaches and challenges faced in different regions (EU vs. US vs. Africa). This implies that while there is a shared recognition of the problem, solutions may need to be tailored to specific regional contexts and legal frameworks.


Differences

Different Viewpoints

Approach to regulating disinformation

speakers

DELPHINE COLARD


BENJAMIN SHULTZ


PHILILE NTOMBELA


arguments

European Parliament website explaining election integrity measures


Platforms giving up on content moderation


Targeting of journalists and judiciary bodies


summary

The speakers disagreed on the effectiveness and appropriateness of regulatory approaches to combat disinformation. While the European approach favors strong regulation and platform accountability, the US has seen a retreat from content moderation, and the African perspective warns against potential misuse of regulations.


Unexpected Differences

Impact of AI on elections

speakers

CLAES H. DE VREESE


BENJAMIN SHULTZ


arguments

Generative AI played a role but did not overturn elections in 2024


Proliferation of deepfakes targeting politicians


explanation

While both speakers addressed AI’s role in elections, there was an unexpected difference in their assessment of its impact. De Vreese suggested a sense of relief that AI didn’t cause catastrophic effects in 2024, while Shultz highlighted significant concerns about deepfakes targeting politicians in the US.


Overall Assessment

summary

The main areas of disagreement centered around regulatory approaches to disinformation, the role of platforms in content moderation, and the impact of technological advancements like AI on elections.


difference_level

The level of disagreement was moderate to high, with significant variations in approaches and experiences across different regions. These differences highlight the complexity of addressing disinformation globally and the need for context-specific solutions.


Partial Agreements

All speakers agreed on the need to combat disinformation, but disagreed on the methods. While the European approach involves a coordinated observatory, the US faces challenges with platform cooperation, and Africa struggles with digital access issues.

speakers

GIOVANNI ZAGNI


BENJAMIN SHULTZ


PHILILE NTOMBELA


arguments

European Digital Media Observatory monitors disinformation across EU


Platforms giving up on content moderation


Digital divide limiting access to fact-checking


Takeaways

Key Takeaways

Europe has implemented coordinated efforts to combat disinformation, including rapid response systems, monitoring by EDMO, and pre-bunking campaigns


The US has seen a regression in platform content moderation and a rise in disinformation, particularly deepfakes


Africa faces unique challenges with disinformation due to the digital divide, language barriers, and risks of censorship


Generative AI played a role in 2024 elections but did not have the catastrophic impact some feared


There are significant differences in regulatory approaches to disinformation between Europe, the US, and Africa


Resolutions and Action Items

EDMO to continue monitoring disinformation across EU elections


European Parliament to establish a special committee on European democracy shields


African fact-checking organizations to collaborate through the Africa Facts Network declaration


EDMO to continue monitoring AI’s impact on elections in 2025


Unresolved Issues

How to balance free speech concerns with the need to combat disinformation, particularly in the US


How to address the digital divide and language barriers in combating disinformation in Africa


Long-term impact of AI-generated content on election integrity


Effectiveness of current platform policies in addressing disinformation globally


Suggested Compromises

In Africa, focus on collaborative efforts and media literacy rather than strict regulation to avoid potential misuse of laws


In the US, consider adopting some aspects of European regulations, particularly around researcher access to data, while respecting First Amendment concerns


Thought Provoking Comments

We found that journalists were accused of bias whenever they tried to fact-check. We had something called the Elections Coalition in South Africa, which included journalists and media houses who would either do a quick fact-check themselves, after we trained them beforehand as part of our organization’s training programs, or we helped them to fact-check. And often, whenever they fact-checked a specific politician, they were accused of bias and told that they support the opposition.

speaker

Philile Ntombela


reason

This comment highlights the challenges faced by fact-checkers and journalists in Africa, revealing how attempts to combat misinformation can be weaponized against them.


impact

It shifted the discussion to consider the unique challenges faced in different regions and the potential backlash against fact-checking efforts.


In the States we do not have regulation such as the Digital Services Act, or really anything of the sort. And platforms, as you can see just from these headlines, have surrendered and given up.

speaker

Benjamin Shultz


reason

This comment provides a stark contrast between the regulatory approaches in the US and Europe, highlighting the lack of oversight on platforms in the US.


impact

It prompted a comparison of different regulatory approaches and their effects on platform behavior across regions.


So let that be the take-home message for 2025, that while 2024 did not become the AI catastrophe, which was in many ways predicted by a lot of observers also in this space, I believe that as we move into 2025, there’s all the reason for an observatory like Aetmo to continue the work, to see how these technologies are being deployed across elections.

speaker

Claes H. de Vreese


reason

This comment provides a balanced perspective on the impact of AI in elections, acknowledging both the relief that catastrophic scenarios didn’t materialize and the ongoing need for vigilance.


impact

It shifted the discussion towards a more nuanced view of AI’s role in elections and emphasized the importance of continued monitoring and research.


Overall Assessment

These key comments shaped the discussion by highlighting the diverse challenges faced in different regions when combating disinformation, from accusations of bias in Africa to lack of regulation in the US. They also emphasized the evolving nature of threats, particularly regarding AI in elections. The discussion moved from specific regional experiences to broader comparisons of approaches and the need for ongoing vigilance and research. This led to a more nuanced understanding of the global landscape of disinformation and the varying strategies needed to address it effectively.


Follow-up Questions

How can we improve media literacy efforts to combat disinformation?

speaker

Delphine Colard


explanation

Delphine emphasized the importance of education and media literacy in combating disinformation, suggesting this is a key area for ongoing work and research.


What are the impacts of addictive design in social media platforms on the spread of disinformation?

speaker

Delphine Colard


explanation

Delphine mentioned that the European Parliament is interested in addressing this issue, indicating it’s an important area for further investigation.


How can we better measure the impact of disinformation on voting behavior?

speaker

Benjamin Shultz


explanation

Benjamin noted that it’s difficult to measure how disinformation changes voting behavior, suggesting this is an area that needs more research and better methodologies.


What are effective strategies for combating disinformation in contexts where media literacy is low and internet access is limited?

speaker

Philile Ntombela


explanation

Philile highlighted these challenges in the African context, indicating a need for research on strategies that work in these conditions.


How can we improve platform accountability and content moderation without risking censorship or suppression of free speech?

speaker

Philile Ntombela


explanation

Philile expressed concerns about potential negative consequences of strict regulations, suggesting a need for research on balanced approaches.


What are the long-term impacts of AI-generated content on elections and democratic processes?

speaker

Claes H. de Vreese


explanation

Claes emphasized the need for continued monitoring and research on AI’s role in elections beyond immediate persuasion effects.


How can we improve international collaboration in researching and combating disinformation?

speaker

Claes H. de Vreese


explanation

Claes suggested the need for collaborative efforts across countries to better understand the impact of AI on elections globally.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #185 Universities impact in accelerating the adoption of free, open-source government software towards supporting the Blue Ocean eco-system

Session at a Glance

Summary

This discussion focused on the adoption and promotion of open-source software in Saudi universities. Representatives from King Khaled University, Imam Muhammad Bin Saud University, Al-Jouf University, and Al-Qasim University shared their experiences and strategies for implementing open-source solutions in academic and administrative processes.


The universities highlighted their efforts to integrate open-source programming into educational curricula, research initiatives, and innovation projects. They emphasized the importance of building sustainable technology ecosystems, strengthening local content, and encouraging open innovation. Several universities reported success in developing in-house solutions using open-source tools, which have been shared with other government agencies.


Partnerships with the private sector and government bodies were identified as crucial for advancing open-source adoption. The universities discussed various initiatives to support student projects, hackathons, and the creation of specialized research centers focused on open-source development. They also addressed the challenges of cultural change, financial sustainability, and the need for specialized training.


The participants agreed on the potential of open-source software to enhance operational efficiency, promote innovation, and support the digital economy. They called for a clear national roadmap for open-source software adoption and emphasized the role of universities in developing human capital and fostering a culture of open collaboration.


The discussion concluded with a recognition of the progress made in establishing a digital warehouse for government open-source software and the potential for Saudi Arabia to become a global leader in this field. The participants expressed optimism about the future of open-source initiatives in supporting the country’s digital transformation goals.


Keypoints

Major discussion points:


– Universities’ strategies and initiatives for adopting open-source software


– Successful experiences and partnerships in developing open-source solutions


– Impact of open-source adoption on academic, research and administrative processes


– Challenges and opportunities in promoting open-source culture in universities


– Future vision for open-source software development in Saudi Arabia


Overall purpose/goal:


The discussion aimed to explore how Saudi universities are implementing open-source software strategies, share successful experiences, and discuss ways to further promote open-source adoption in alignment with national digital transformation goals.


Tone:


The overall tone was positive and collaborative. Participants spoke enthusiastically about their universities’ open-source initiatives and expressed optimism about the future potential. There was a sense of shared purpose in advancing open-source adoption nationally. The tone became increasingly forward-looking towards the end as participants discussed future visions and recommendations.


Speakers

Speakers from the provided list:


– Dr. Muneerah Badr Almahasheer


Role: Electronic Education Dean at Imam Abdul Rahman bin Faisal University


Area of expertise: Electronic Education


– Dr. Hamed Saleh Alqahtani


Role: Dean of the Department of Electronic Services at the University of Al-Malik Khaled


– Dr. Sultan Alqahtani


Role: Dean of the Department of Information Technology and Electronic Learning at the University of Imam Muhammad Bin Saud


Additional speakers:


– Dr. Badr Al-Daghifig


Role: Deputy Dean of the Department of Electronic Education and Digital Transformation at the University of Al-Jawf


– Dr. Abdul Latif Al-Abdul Latif


Role: Deputy Dean of the Department of Electronic Education and Information Technology at the University of Al-Qasim


– Khaled Al-Ghamdi


Role: Engineer and consultant


– Dr. Ahmed Al-Swayyan


Role: Representative of the Digital Government Board


Full session report

Expanded Summary: Open-Source Software Adoption in Saudi Universities


This discussion focused on the adoption and promotion of open-source software in Saudi universities, bringing together representatives from various institutions to share their experiences and strategies. The conversation covered a wide range of topics, from implementation strategies to future visions for open-source development in Saudi Arabia.


University Strategies and Initiatives


The universities represented in the discussion have been developing comprehensive digital transformation strategies that incorporate open-source software. Dr. Hamed Saleh Alqahtani from King Khaled University highlighted their “Marina Path Initiative” and products like Sprint, Wasl, and A-plus. Dr. Sultan Alqahtani from Imam Muhammad Bin Saud University shared that their implementation process has been ongoing for about a year and five months, with the university contributing systems like the employment gate, access system, link service, and field training system to the digital warehouse. Notably, they have shared 7 million lines of code and 657 files.


Al-Jouf University, represented by Dr. Saleh Albahli, is focusing on agriculture-related applications and AI projects. Dr. Abdul Latif from Al-Qasim University mentioned their goal of achieving 40% of services built on open-source software by the end of the year.


Integration into Education and Research


Universities are integrating open-source software into educational programmes and curricula. This integration extends beyond just teaching the use of open-source software to actively involving students in its development and application. Universities are also leveraging open-source solutions to support academic and research processes, developing in-house solutions that enhance operational efficiency and provide valuable learning opportunities.


Talent Development and Community Building


The discussion highlighted the importance of recruiting and supporting young talent in open-source development. Universities are creating open-source programming factories and communities within their institutions to nurture this talent. They are also organizing hackathons and training initiatives to further develop skills and foster innovation.


Partnerships and Ecosystem Development


Collaboration with industry sectors and government bodies was identified as crucial for building sustainable open-source ecosystems. Universities are working to establish partnerships with the private sector and government agencies to support the long-term viability of open-source initiatives. These collaborations are seen as essential for promoting innovation, strengthening local content, and encouraging open innovation.


Challenges and Opportunities


Several challenges were identified, including the need for cultural change, ensuring financial sustainability, maintaining consistency, and providing specialized training. The Digital Government Agency’s role in creating licenses for open-source government programs was noted as a positive step.


The adoption of open-source software was seen as a way to reduce reliance on commercial software, potentially leading to cost savings for universities. Dr. Abdul Latif mentioned that government spending on commercial software is expected to increase by 24% by 2025, equivalent to 16 billion Riyals, highlighting the potential for cost savings through open-source adoption.


There was also discussion about the potential for monetization of open-source solutions, although this topic revealed some differences in perspective among the participants. Some viewed monetization as a way to ensure sustainability, while others emphasized the importance of maintaining the open and collaborative nature of open-source development.


Future Vision and National Goals


The participants expressed a shared vision for Saudi Arabia to become a leader in open-source software development. Dr. Hamed Saleh Alqahtani called for the development of a clear national roadmap for open-source software adoption in universities. Dr. Sultan Alqahtani spoke about the ambition to become a world-class open-source software repository.


Dr. Muneerah Badr Almahasheer raised the importance of improving the classification and filtering of Saudi government open-source software in the digital warehouse. There were also suggestions to expand the digital warehouse to include international open-source products.


Conclusion and Next Steps


The discussion concluded with recognition of the progress made in establishing a digital warehouse for government open-source software and the potential for Saudi Arabia to become a global leader in this field. Participants agreed on the need for continued collaboration between universities, industry, and government to advance open-source adoption.


Key action items identified include:


1. Developing a clear national roadmap for open-source software adoption in Saudi universities


2. Improving the classification and filtering of government open-source software


3. Expanding partnerships between universities, industry, and government agencies


4. Increasing the percentage of university services built on open-source software


5. Focusing on user experience and the user journey in developing open-source software


The participants expressed optimism about the future of open-source initiatives in supporting the country’s digital transformation goals, while acknowledging the ongoing nature of this transition and the need for continued effort and innovation. The success of universities in digital transformation was celebrated, with a commitment to further progress in open-source adoption and development.


Session Transcript

Dr. Muneerah Badr Almahasheer: In the name of God, the most gracious, the most merciful. Welcome, Your Excellencies, leaders of e-education in Saudi universities. First of all, we welcome Dr. Hamed Saleh Al-Qahtani, Dean of Electronic Services at King Khaled University. Welcome. Secondly, Dr. Sultan Al-Qahtani, Dean of Information Technology and E-Learning at Imam Muhammad Bin Saud University. Welcome. Next, Dr. Badr Al-Daghifig, Deputy Dean of Electronic Education and Digital Transformation at Al-Jouf University. Welcome, doctor. And finally, Dr. Abdul Latif Al-Abdul Latif, Deputy Dean of Electronic Education and Information Technology at Al-Qasim University. Welcome. Our workshop today, held in the Digital Government area, is about the impact of universities in accelerating the adoption of open-source government software. The workshop relates to the Kingdom of Saudi Arabia's strategy for free and open-source software and its six pillars: building sustainable technology, strengthening local content, encouraging open innovation, enabling human resources, improving cybersecurity, and supporting the digital economy. The main title of our first dialogue is a strategy that paves the way for universities to advance open-source software, and how universities can play a leading role in promoting the Blue Ocean ecosystem for the adoption of free and open-source software. We take this opportunity to talk with you, Dr. Hamed, about how King Khaled University has developed a comprehensive strategy for digital transformation that adopts free and open-source software as a main tool for innovation and technology, taking into account the role of this strategy and its relationship with educational programs, promoting innovation, and building communities and institutions within King Khaled University. Please tell us about this experience. May God bless you, Doctor.


Dr. Hamed Saleh Alqahtani: May God bless you, dear colleagues and distinguished guests. Of course, we thank the Digital Government for hosting us, and we congratulate everyone on the awards they have received on this special and rare occasion. King Khaled University, during the short period in which it has been under the care of the Governor of the Asir region, Prince Turki bin Talal, has launched itself to the world. Through this launch the university's strategy was set out, with qualitative goals that include improving educational output, achieving institutional excellence, promoting research and innovation, diversifying revenue sources, improving quality of life, and promoting voluntary and social participation. In electronic services, and in the digital transformation strategy that supports these strategic goals of King Khaled University, we made sure there is a fundamental component, which is the adoption of open-source programming. We all know that we share one goal: to have, in this precious Kingdom, one programmer for every thousand residents. After the qualitative partnership signed a year ago, in this place, at the Ritz, we at King Khaled University activated this partnership and started the real work. We sat with the various university agencies, starting with Educational Affairs, to implement decisions on these programs. We also talked with the Faculty of Computing about adopting these programs, and we reviewed many plans and curricula in technical educational materials that have a practical side. 
We made sure that these programs are a fundamental part of enhancing research, innovation, and digital sustainability at King Khaled University and in our precious country. In electronic services, and through promoting this culture of adopting open-source programming, we have worked at every level, applications, infrastructure, networks, and digital transformation, to put these programs into practice, for many reasons: spending efficiency, enhancing cybersecurity, technical participation, and increased productivity, which, God willing, extends to productivity in research. Dear colleagues, at King Khaled University's Electronic Services Agency we have built different platforms on these open-source programs. We now have an initiative called the Marina Path Initiative, in cooperation with the National Center for Electronic Education, and through this initiative we make sure to genuinely support the adoption of open-source programming. Today we cooperate with all departments: with students, through the talent unit and the training unit, we train students on open-source programming. With the Deanship of Graduate Studies, we have discussed at length how to offer research grants in fields related to innovative technologies and support the adoption of open-source programming. And, God willing, we have a number of initiatives underway at King Khaled University in this field. 
Also, there is a direct channel between the departments we mentioned and the Entrepreneurship Center, for integrating these programs, or the solutions built on open-source programming. Today, with colleagues in the Faculty of Applications, we are working hard to offer remote-learning diplomas in such programs, or to employ these programs in many diplomas. At King Khaled University we are keen to enhance innovation in the adoption of open-source programming through the many hackathons held recently. We held an innovation hackathon, and one of the most important requirements of these hackathons was that the work be based on open-source programming. There is much more to say on this side, but we want to conclude this talk about King Khaled University and its strategy,


Dr. Muneerah Badr Almahasheer: and all its actions related to developing the educational process around open-source programming. We are still at the beginning of the journey of adopting open-source programming, and we promise you that next year you will see many products based on open-source programming. This is a promise? Yes, it is a promise. So it is a commitment to the workshop? Yes, it is a Southern promise. By God's will, by God's will. Thank you, Dr. Hamed. I would like to pass the floor to my colleague, Dr. Sultan Al-Qahtani of Imam Muhammad Bin Saud University. We are sure that all universities share the same passion and creative spirit. We would like to ask Dr. Sultan to share his vision of how Imam Muhammad Bin Saud University can contribute to establishing and supporting an open-source technology platform to enhance small and medium-sized projects.


Dr. Sultan Alqahtani: Thank you, Dr. Muneerah, and I would like to thank my colleagues at the Digital Government for hosting this wonderful event. At Imam Muhammad Bin Saud University, one of the goals of the university's digital transformation strategy is to promote research and innovation. Under this element we have launched a number of initiatives for the development of open-source software, in cooperation with the Innovation and Entrepreneurship Center, including the initiative to support small projects and graduation projects. Through this initiative, the university's digital transformation strategy works in conjunction with the Digital Government, the digital warehouse, and the faculties concerned to support this ecosystem in adopting free and open-source software. Today, through this strategy and the initiative to support small projects and graduation projects, the Faculty of Computer Science and Information and the Faculty of Applications are working on establishing research groups, and the Deanship of Scientific Research is establishing a research group specializing in free and open-source software, with pilot projects in this field. Hopefully I will be able to tell a number of success stories here, also involving students interested in learning and exchanging skills among students, teaching staff, and others interested in free and open-source software. Today, through the Blue Ocean initiative and the Digital Government, we aim to establish specialized research groups and specialized student clubs in the fields of free and open-source software and information security. 
We now have approximately 12 small projects adopted at Imam University; all of them are technical projects, and all of them are based on free and open-source software. A number of research projects have been transformed from research laboratories into startup companies. For example, one of the students, in his master's thesis, developed software using free and open-source tools, specialized in DevSecOps, and this software is now marketed commercially: it has become a small company, supported by the Plug and Play business incubator and the National Cybersecurity Authority. Another graduation project has become not just a bachelor's-level graduation project but one that produces solid scientific results. The idea was to use artificial intelligence tools, such as FastText, as an open-source component, used by students in a research lab in the college. The idea became popular, and it became an open-source tool to analyze bug reports, or what are called software defect reports, and classify whether or not they are security reports. If a report is security-related, it is classified by type of vulnerability, such as buffer overflow, denial of service, and so on. All the examples I mentioned are results of initiatives under a strategic plan to support small projects and adopt them in the university's business incubator. These are student projects that have become research projects, and, as I mentioned earlier, ShieldOps.net has now become a startup company, supported by the National Cybersecurity Authority and the Plug and Play business incubator; this is one of the results of the initiative. 
Today, the methodology we follow at Imam University in this field is still underway; implementation has now been running for about a year and five months, and, God willing, the process will develop into a complete adoption of open-source software, whether at the level of scientific research or at the level of development work in information technology. So, God willing, in the second phase I will talk in detail about the methodology we follow at Imam University, and how we support the Blue Ocean initiative and the Digital Government's repository of free and open-source software. Thank you.
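The bug-report tool Dr. Sultan describes is a two-stage classifier: first decide whether a report is security-related, then assign a vulnerability category such as buffer overflow or denial of service. The sketch below only illustrates that pipeline shape; it uses simple keyword heuristics as stand-ins for the trained FastText models he mentions, and none of the names below come from the actual ShieldOps tool.

```python
# Illustrative two-stage bug-report triage. A real system would replace each
# stage with a trained text classifier (e.g. fastText supervised models);
# the keyword rules here are placeholders for those models.

SECURITY_KEYWORDS = {"overflow", "exploit", "injection", "denial", "crash", "leak"}

CATEGORY_KEYWORDS = {
    "buffer overflow": ["overflow", "out-of-bounds"],
    "denial of service": ["denial", "hang", "crash"],
    "information leak": ["leak", "disclosure"],
}

def is_security_report(text: str) -> bool:
    """Stage 1: is this report security-related at all?"""
    words = text.lower().split()
    return any(w.strip(".,") in SECURITY_KEYWORDS for w in words)

def classify_vulnerability(text: str) -> str:
    """Stage 2: assign a vulnerability category to a security report."""
    lower = text.lower()
    for category, cues in CATEGORY_KEYWORDS.items():
        if any(cue in lower for cue in cues):
            return category
    return "other security issue"

def triage(report: str) -> str:
    if not is_security_report(report):
        return "not security"
    return classify_vulnerability(report)

print(triage("Stack overflow when parsing long headers"))  # buffer overflow
print(triage("Typo in the settings dialog"))               # not security
```

In a deployment of the kind described, each stage would be a fastText supervised classifier trained on labeled defect reports, but the control flow, a binary gate followed by a multi-class step, stays the same.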


Dr. Muneerah Badr Almahasheer: Thank you, Dr. Sultan. This is truly an inspiring account of the strategic relationship between software and projects, and its connection to student clubs, research communities, community building, and the transformation of some graduation projects into startup companies. This is a kind of vision we rely on a lot: just as there is an employment rate for graduates, we believe universities should also have a rate of projects that turn into small companies or institutions with a great impact, including an impact on the university's strategy. Technology, for its part, moves fast. For this reason, the initiatives and strategies that are launched need not stay on paper; they may be implemented within the first months. As we heard from Dr. Hamed, there is an initiative they started last year, and now, God willing, the university has reached the creative category in this area. Right. So, God willing, good luck to everyone. With God's will. We now move to His Excellency our colleague Dr. Badr, to talk about Al-Jouf University. We know, Doctor, that Al-Jouf University is interested in increasing its ability to recruit and support young talent for the open-source software ecosystem. We don't want you to speak in generalities; we want you to tell us the university's secrets and its specific goals in this regard. Peace be upon you. Thank you, Doctor, and thank you to the members of the Digital Government Board and the IGF meeting. At the university, perhaps I should start with the general strategy, and then elaborate on some of the things we have done. No doubt universities today, in their current state, are not only educational institutions: they nurture talent, they support entrepreneurship, and they also have an investment role. This opens doors for recruiting talent from all over the Kingdom,


Dr. Badr Al-Daghifig: and even from outside the Kingdom. As for the strategy we have pursued internally, in cooperation with the Ministry of Education in the region we went to public education, and the Deanship of Electronic Education and Digital Transformation adopted it. Several software projects were adopted by students, by talented people, and their research was directed toward open-source software so that these projects could deliver results. The program has not ended yet and the results have not been announced; perhaps, God willing, next week or the week after we will find out. Another thing is training and introductory sessions. Without a doubt, students, before enrolling in the university, should be informed about open-source software, so that we can direct them, at least, toward the software field or the programs specialized in it. Also, regarding international students: since we in the Kingdom did not previously recruit international students from abroad, we have now provided, in cooperation with the Ministry of Education, a small number of scholarships for international students in software and software engineering. Thank God, we have a brother and a sister from Yemen studying software engineering now, and, God willing, they will contribute to the initiative. Among the things we have targeted are the hackathons organized by the university. Last year the university organized two hackathons, one on digital health, for women, and another on agriculture. Schools, public education, and other universities were invited, and participation, thank God, was good. In these hackathons there may be a seed for recruiting and sponsoring talent interested in open-source software engineering. 

One of the projects adopted by the university, through the Deanship of Electronic Education, comes from a group of students: an application for agriculture, to read and identify plants through artificial intelligence. We invited them and provided them with a server and the technical equipment to develop this application further, God willing. The work is still in progress, and, in cooperation with the Deanship of Electronic Education, we have recruited one of the students. Dr. Sultan touched on the subject of programs and curricula. These curricula must be expanded, especially the programming curricula: they should include sections on open-source programming, as well as e-learning programs or short-term programs, adopted to increase awareness and attract those interested, so that we can communicate with them later and they can become trainers in open-source programming. Last summer we held a cooperative training session at the deanship, focused on the development of open-source programming. We also launched workshops by our colleagues at the deanship to enhance the care of talented people and attract them to work with us on open-source programming. Thank you. It's clear that nurturing, attracting, and supporting talented people is a focus at Al-Jouf University, and that Al-Jouf University will be one of the fastest-growing open-source programming institutions, in your university and in the country. We now move to my colleague, Dr. Abdul Latif Al-Abdul Latif, from Al-Qasim University, to talk about Al-Qasim University's role in promoting change towards a Blue Ocean ecosystem through open-source programming, as a strategy and as a working system at Al-Qasim University. I congratulate you on the university's win in digital transformation. Thank you, doctor. Thank you very much. 
It’s a great opportunity to be here with my colleagues and we’re happy to welcome the Digital Governance Board. No doubt, our goal today is to promote open-source and free programming. No doubt, the success story began with the Board at the end of last year, when there was an agreement signed to promote open-source programming. Alhamdulillah, Al-Qasim University had three main goals when it came to promoting open-source programming through education, research, innovation, as well as strategic partnerships with different parties. In the field of education, Alhamdulillah, Al-Qasim University’s Faculty of Accounting has been working hard to implement a number of resolutions that aim to promote open-source programming. Last year, we were encouraged by the idea of managing projects in different departments through open-source tools and software, whether it’s AI, cyber security, or software development in general. Alhamdulillah, in the field of education, we’re taking steady steps, and with our colleagues in the Faculty, we’re providing the necessary support through lectures and courses. In the field of research, Alhamdulillah, we’re optimistic that at the beginning of 2025, a number of initiatives will be launched through the Faculty of Science and Research to target students and teachers. One of the goals is to build ideas in various technical fields based on open-source programming tools. We hope that this initiative will be strengthened in the coming year. As we mentioned, the idea of strategic partnerships started with the Faculty, and Alhamdulillah, the Faculty is grateful for that. It’s working with a number of external parties, and Mr. Khaled Al-Ghamdi is an example of that. Thank you. We’re always bothered by this, but it’s all good and blessed by Allah. Alhamdulillah, we believe that one of the challenges of strategic partnerships is the issue of existing experiences. Students and teachers need to connect with the private sector in this regard. 
Alhamdulillah, we see these three main pillars as very important. I’d like to add that, in the field of information technology as well, we tried to benefit students and teachers and their ideas by adding internal services in our university. With Allah’s blessing, we’ll be able to take further steps, and the students who participated in this initiative will benefit from it. Thank you very much, Doctor.


Dr. Muneerah Badr Almahasheer: You’re welcome. Thank you, Dr. Abdul Latif. We’ll now move to the second topic of our discussion, and we’d like to link it to the strategy of free and open-source software. It consists of a number of main initiatives that are part of the programming and university projects, including public programming centers, open-source programming societies, open-source innovation labs, and partnerships with the private sector. Dr. Hamed, I didn’t forget to congratulate King Khaled University; I was only late in congratulating you when we talked about the success stories. We consider all universities that were in the creative category today to have succeeded, and we’re sure that all Saudi universities, with the support of the Ministry of Education, the University Council, and the state, God willing, will join in, and we’ll present a unique model in the digital transformation measurement. In the second round, we’ll focus on the success stories and partnerships. We have a question for King Khaled University: what are the main successful experiences that led King Khaled University to develop open-source software to support academic and administrative processes? How did these successes affect the academic and research community? Also, what is the secret of this success, and what are the local and international partnerships in this context? May God bless you, Dr. Hamed. Congratulations to all of you. The success of King Khaled University and Al-Qasim University is a success for Saudi universities as a whole. Dr. Hamed, your question is related to the success stories and partnerships. Before that: universities now have roles beyond teaching; they also have roles in development. All of these opportunities can be achieved easily if there is a digital environment that enables them, alongside the national goals of Vision 2030. 
Today’s success stories would not exist without a cornerstone. That cornerstone, laid over a short recent period, was the establishment of a factory for open-source programming, and also a community of open-source programmers. That work was mostly related to big data and artificial intelligence, but we also worked in other areas related to cloud computing, institutional systems, and many other things related to web and mobile applications. We started by laying the cornerstone: we established the factory and chose a great team from the best faculty of King Khaled University


AUDIENCE: to lead this factory. After that, we gave them extensive training and supported them logistically and financially, so that the success stories of King Khaled University could be produced. Among the successes we will mention later, we chose for the factory one of the most beautiful locations at King Khaled University, with Khaled Dar as the architect of the place, together with the students. We made this factory an innovative environment: the main hub for the digital talents that will be reflected in the products we will mention in a few moments. At King Khaled University, open-source programming has had a big impact on all six goals of the university’s strategy, in terms of achieving and improving the quality of teaching and learning. King Khaled University students, at the academic level, in terms of programming and digital skills, are now competing with many universities in the world, whether local or international. Also, at the academic level, we notice that students are ready to enter the job market through the type of training that we offer. We have a great experience at King Khaled University: we take the best 30 students from the Faculty of Computer Science, and they are trained intensively in open-source programming in the field of e-services. We found that this very intensive training had excellent results; many of them are now working in ministries and large institutions. At the research and innovation level, as my brother Khaled mentioned in this dialogue session, which I hope everyone will benefit from, I asked my colleagues in research and graduate studies about the increase in the share of scientific research using open-source software in innovative technologies and research papers related to artificial intelligence. 
We found that 36% of the published scientific papers used open-source software. This is a great achievement. Another great achievement, the large center that King Khaled University has today, is the result of very long work, which had an impact on improving the quality of university life. Today, administratively and operationally, more than 50% of processes at King Khaled University run on these systems. We have stability, transparency, and operational and financial efficiency in our work. This is the result of adopting open-source programming in the field of e-services. At the level of volunteering and social participation, we have had great success in using hackathons for open-source programming. We had an initiative under the supervision of the regional governor, called Ajawid. Ajawid is a digital platform through which we conducted 10 practical training courses using open-source software, with about 9,000 participants. Last week, King Khaled University was recognized as the first university in promoting open source and social participation. This result is also a collective effort, including the digital work. Now, King Khaled University has a new strategic goal: diversifying our initiatives and making them sustainable. We are seriously thinking about how to commercialize many of the products based on open-source programming. So, will no one participate in the hackathons? No. Today, we have products like QX, one of our great platforms; King Khaled University has been running this product for 7 to 8 years, and it is a great source of income for the university. We don’t want to close the door on universities. As we mentioned, there could be participation in some of the programs included in the digital government, and we could also include the idea of marketing university products. This is not prohibited. Why not? It is not prohibited. 
It is the future. It can be marketed in the Kingdom or abroad. It is an opportunity to have a different kind of work in open-source programming, and we can say this is one of the recommendations. The engineer and consultant Khalid Al-Ghamdi always says that universities should think about the social impact, which is a right for everyone and a right for the state, as well as the economic impact and the sustainability of work in universities. I say yes. Great. On the contrary, we support this point. During the past 6 months, relying on open-source programming, three products have been built at the university. One of them is called Sprint.


Dr.Hamed Saleh Alqahtani: The goal is to use it in digital hackathons. It was a great experience and, I think, a unique solution that is not available elsewhere. We have another platform called Wasl, whose goal is to improve the employee experience at King Khaled University, and we succeeded in this. We also have A-plus, a platform that supports students in exchanging knowledge and experiences. I do not object to this; as Engineer Khalid agreed after the interview, it should be available soon. Great. Last but not least, all the success stories we mentioned are matched by our colleagues in other universities. Right. But now, it is up to all parties to realize institutional excellence. Today, open-source programming must be adopted by all of us, and the culture of programming must be implemented. There are many challenges, such as consistency, financial support, and digital capabilities. But hand in hand, and with the support of the Digital Government, we can reach a high degree of adoption, not only at the national level but also at the global level. I agree with you, Dr. Hamed, that universities have infrastructure rich in human minds, the ability to build different programs, and the ability to participate at the university level. Also, as one country, there are gaps in some universities and needs in others. We encourage transforming the gaps between universities into integration among universities, to build a strong national university educational system.


Dr. Muneerah Badr Almahasheer: Thank you, Dr. Hamed. If you allow me, I would like to move to Dr. Sultan, at Imam Muhammad Bin Saud University. My question to you is: how did the university use open-source programming to develop solutions that support academic and research processes? And could you give us examples of technical projects that have succeeded in cooperation with government agencies? We know that you have a number of practices at Imam Muhammad Bin Saud University, and that you have also taken a step forward in the field of government software.


muhammed bin saud: Thank you, Doctor. To be honest, the essence of free and open-source software lies in sharing it and in how we deal with it. I do not agree with what Dr. Hamed said about the commercialization of products being an obstacle for any party. On the contrary, open-source software opens wide and deep horizons for sharing between parties. Sharing is not only sharing the source code and then it is over. It may be sharing knowledge, sharing relationships, and direct contact between government agencies and development teams. It is a set of joint partnerships, even without a formal agreement. At Imam Muhammad Bin Saud University, from the first day of the launch of the digital government’s warehouse, the warehouse specializing in free and open-source government software, we have been contributing a number of products developed fully in-house, by national hands specialized in this field. I have printed a piece of paper so that I do not forget the systems: the employment gate, the access system, the link service, and field training, a comprehensive system that connects the student and the private sector for field training and completes its procedures. Also the specialization system for medical students, which completes their procedures between government agencies such as hospitals and the private sector, together with their academic system inside the university; the academic advising system; the communication system, of which I will show you some details; the financial system; the social service; and the electronic gate. If we count, a number of these systems amount to more than 7 million lines of code downloaded, across many files and shares. 
This warehouse has more than 657 files and programs that were shared. We found very strong interaction from the information technology community in other government agencies: direct communication with us to inquire about some systems, and some systems are now fully used in other government agencies. For example, the CRM communication system. Most government agencies have CRM systems, but not all of them. The communication system is a technical system that serves the student and the faculty member, as well as the employee, in communication between departments within the university. It also serves a large number of university visitors, who can use the system without a dedicated user account, through direct access on the phone. The system is now being used by 10 government agencies, and these 10 agencies interact with us directly. I have a development team, and there are development teams in those agencies. Most of the reports that we received from these agencies have been applied in our system and developed in our current environment; we are using the latest version. This may take a long time at the warehouse level. As I said, the source code is not only a part of a program or a system used by government agencies. No, there is a success story with other government agencies: the tools are being developed further in other agencies. There is a lot of work, and communication is a very big thing. The warehouse serves one of the most important programs of Vision 2030, which is human capability development. There are a number of assessments that we have received from Atecna through this warehouse. This is the first requirement for all technical projects for Atecna: if there is no system that can be used directly through the warehouse, a project is launched, as is known in other government agencies. Today, the success story is being told through this warehouse.


AUDIENCE: We are grateful to our colleagues at the Digital Government Authority for their support and empowerment. We see this warehouse as an interactive environment between government agencies, in which we share resources. We achieve spending efficiency, and we also enhance the knowledge of developers in other government agencies. Through this warehouse, we can attract other agencies at the technical level. There are security challenges, as Dr. Hamid pointed out in a recent presentation. For this ecosystem to be complete, there must be a number of procedures, starting with the Digital Government Authority. There must be a license, and this is one of the discussions that took place between the Digital Government Authority and the Intellectual Property Authority, in which I took part. There must be a license, like the open-source licenses available worldwide, but also at the level of the Digital Government Authority, covering open-source government programs. Such a license gives me confidence that the systems being shared, this very large number of systems shared with other government agencies, are used in the correct way, with no violations of the policies governing these systems. Today, we extend our welcome to everyone; we are not closed off. Of course, the goal is bigger: we are building an electronic system shared with most government agencies, and there is nothing preventing us from making it cloud-based, as SaaS, with subscriptions for government agencies, if there is support in this regard. You changed your mind. I said there is no problem. As you thought. There is no problem. On the contrary, to complete the ecosystem, there must be serious steps by the Digital Government Authority to support this. We mentioned that payment is for sustainability. Very natural. Yes. This is the idea. I am defending myself against Dr. Hamid. 
You may be among a group of people who are passionate about open source. No, we are happy with your support. On the contrary, I say it is possible for sustainability, even partial sustainability, and it is no secret to colleagues. Today, instead of contracting with a technology company to develop an internal system — honestly, the CRM is currently being developed with other government agencies, and I trust I can name these agencies, can’t I, Doctor? Yes: the Eastern Region Health Cluster, Health Affairs in the National Guard, the National Center for E-Learning, King Abdulaziz University, and the Technical and Vocational Training Corporation. Until yesterday, there was communication between them and the development team in the Deanship, asking about some files and how to test the system in their own environment. This is a kind of sharing: we provide them with a free service, and at the same time we benefit from their technical expertise. Absolutely. And there is no financial incentive to develop this product; on the contrary, the product develops itself. That’s right, Doctor. There is no doubt that the user experience, and building the user journey inside a free-software system, increase the maturity of the project, its efficiency, and also its inclusiveness, progress, and development. I thank you, really, for the rich intervention. And if possible, Dr. Badr from Al-Jouf University, could you share with us, Doctor, the most prominent experiences in cooperation with other sectors? You, as a university, mentioned the industrial sectors, as if there is an orientation toward the industrial sectors at Al-Jouf University, and I expect all of this is related to regional strengths and the interests of the university. How can these partnerships build a sustainable ecosystem for open-source software? 
Ali, I will start from the beginning. Okay. I am not in a hurry; they will not invite us again. Yes. Development and sharing start from the principle of open-source software. That does not prevent a nominal fee for licensing or maintenance, so that the open-source ecosystem can be sustained, because in the end, where will the expenses come from? There must be a specific budget. The university’s partnerships in open-source software center on partnerships with technology companies, specifically in training and development, and on opening academies inside the university for these companies. Today, I was happy to sign a memorandum of understanding here at the forum, alhamdulillah, with one of the companies that will open an academy, God willing. This academy will be directed to open-source software, with the help of this company, by highlighting the important role of open-source software and establishing a working group of experts, even from outside the Kingdom, remotely or in person at the university. We also have two partnerships with technology companies to build specialized laboratories, which may be used for open-source work on networks and network security. Through a social partnership with one of the companies, we were able to build this factory completely, with the latest technologies concerning networks and network security, and it will be directed to open-source work. Without a doubt, the most prominent partnership is with the Authority, with the support of Mr. Khaled, may he be thanked, who brought us one of the expert companies in the field of open-source software. Soon, we will announce the factory, which will also support students’ graduation projects. 
Now, we move to institutional systems. Currently, the software that handles our institutional systems is the ERP system. As the ERP is a large system that exists in most agencies, we are trying to turn it into open-source software, with only nominal fees; if there are fees at all, or if we share it through the warehouse, we are happy to share it with everyone for free. These partnerships with external sectors are very important. In the government sector, there are partnerships to exchange knowledge and adopt the software being developed, so that there is a complete user experience for certain software, continued development, a sustainable development ecosystem, and then reuse that replaces what already exists in government systems. This is about partnerships with universities. Finally, I would like to invite my colleagues, some of whom may be from companies; we are representatives of universities, there are three or four universities here, actually five, I’m sorry, Dr. Ali. You can start with a social partnership in developing open-source software, or build this partnership through workshops, awareness-raising, or opening academies, or you can adopt what already exists in universities’ software and develop it, as our colleagues mentioned with the excellent experiences they have at King Khaled University and Al-Imam University. Thank you, doctor. Thank you, doctor. We pass the floor to Dr. Abdul Latif from Al-Qasim University to talk about the most significant experience the university has achieved in adopting open-source software to support its existing projects. Thank you, doctor. Let me start with the pain point. One of the reports issued by the Digital Government Authority, in 2021 I think, expected government spending on commercial software to increase by 24% by the end of 2025, equivalent to more than 16 billion riyals. 
There is no doubt that everyone, especially universities, suffers from the cost of commercial software; this is a problem for everyone. The university, thank God, started in 2020 adopting the idea of developing open-source software internally: services offered directly to beneficiaries, whether students, teaching staff, or employees. Thank God, by the end of this year, approximately 40% of the services offered inside the university will be built on open-source software, across all forms of services for students and employees. For this, I think, thank God, we have achieved success. Achievements like this need an initiative from the top of the pyramid; adopting open-source software is never easy, as it requires resources, capabilities, and training. Therefore, we believe this is our most important success story. We still have a vision, God willing, of long-term independence in developing products based on open-source software. Three weeks ago, we shared with colleagues in the government sectors one of our systems, for monitoring the technical infrastructure; thank God, God willing, it will have a positive impact. On the contrary, we are open to cooperation; this is the goal of our presence today in this meeting. I also believe that one of the advantages we have as a university is benefiting from students’ capabilities: a large part of the graduation projects are aimed at developing our internal services. After graduation, we often retain students for six months to a year, until they get a better opportunity outside the university, to develop our internal software and services, and we benefit from them directly. 
Thank God, we are proud of this development at our university, and we are always honored to exchange experiences with everyone. Thank you, doctor. Thank you, doctor. We would like to conclude with a few words from each of our colleagues, in whatever form they like, whether a word of thanks or an idea for the future of open-source software. Thank you, doctor. I hope that there will be a roadmap for open-source software. Universities have not yet internalized the national direction in software like this; we want a clear and sustainable roadmap. Thank God, this country is proud of its digital capabilities, which will make its warehouse one of the world’s best. God willing, we will soon be a world-class warehouse. God willing. Thank you, doctor. I would like to conclude by thanking you, doctor, for the professional management of this session, and my colleagues for their cooperation. I would like to thank our colleagues in the Digital Government for their kindness and generous invitation. God willing, open-source software will flourish in the Kingdom of Saudi Arabia, and there will be leading international products that are technologically advanced and nationally made. God willing. Thank you, doctor. Thank you, doctor. Thank you for the invitation. We hope that the warehouse will be a leader, indexing all the available software, with its classification and the parties to contact, so that it is easier for government and private parties to communicate and reach agreements to share and use this software. Thank you, doctor. Thank you, doctor. Thank you, Mr. Khaled. Also, thank you to the Authority. Thank you, DIGF, for the invitation. Thank you, doctor. 
I would like to conclude with my pride and honor at the presence of almost 4,000 products in the open-source software warehouse, from different parties at the national level. There is no doubt that this is a very big move, and we look forward, God willing, to it helping to speed up development in this area. Thank you, Mr. Khaled. Thank you, doctor, for managing the workshop. Thank you to the warehouse team and all the colleagues in the session. Thank you, Dr. Abdel Latif. Finally, as universities, we congratulate the Digital Government Authority, represented by my colleague Dr. Ahmed Al-Swayyan, on the success of the event we saw today: an effort that cannot be made in a day or an hour. This is an effort by all government agencies, a great national showcase, and a special national presence. We also thank Their Excellencies, the leaders of the Saudi universities: King Khaled University, Imam Muhammad bin Saud University, Al-Jouf University, and Al-Qasim University.


Dr. Muneerah Badr Almahasheer: I am Dr. Muneerah Al-Mahasheer, Dean of E-Learning at Imam Abdul Rahman bin Faisal University. I would like to thank His Excellency Engineer Khaled Al-Ghamdi for working with us at Imam Abdul Rahman bin Faisal University. I have known the warehouse for only a short time, but I know that its work has been going on much longer.


Dr. Muneerah Badr Almahasheer: We look forward to improving the digital warehouse and turning it, as Dr. Hamid mentioned, into a global warehouse. We also look forward to the classification of the warehouse, and the creation and curation of the Saudi government’s free software, of which we are proud. Tomorrow, we will announce that these products will soon be global. Thank you very much.


D

Dr.Hamed Saleh Alqahtani

Speech speed

130 words per minute

Speech length

244 words

Speech time

112 seconds

Developing comprehensive digital transformation strategies

Explanation

Dr. Hamed emphasizes the importance of universities developing comprehensive digital transformation strategies that incorporate open-source software. He highlights how King Khaled University has made open-source programming a fundamental component of their strategy to support their strategic goals.


Evidence

King Khaled University’s strategy includes improving educational output, achieving institutional excellence, promoting research and innovation, and improving quality of life.


Major Discussion Point

Open-source software adoption strategies in Saudi universities


Agreed with

Dr.Sultan Alqahtani


AUDIENCE


Agreed on

Importance of developing comprehensive digital transformation strategies


Creating open-source programming factories and communities within universities

Explanation

Dr. Hamed discusses the establishment of open-source programming factories and communities within King Khaled University. These initiatives focus on areas such as big data, artificial intelligence, cloud computing, and institutional systems.


Evidence

The university created a factory for open-source programming and a community of open-source programmers, choosing the best faculty to lead this initiative.


Major Discussion Point

Successful implementations and partnerships for open-source software


Agreed with

Dr.Saleh Albahli


AUDIENCE


Agreed on

Supporting and nurturing talent in open-source software development


Developing a clear national roadmap for open-source software adoption

Explanation

Dr. Hamed calls for a clear and sustainable roadmap for open-source software adoption in Saudi universities. He emphasizes the need for a national direction in this area to guide universities’ efforts.


Major Discussion Point

Future outlook for open-source software in Saudi Arabia


Differed with

Dr.Sultan Alqahtani


Differed on

Approach to monetization of open-source software


D

Dr.Sultan Alqahtani

Speech speed

139 words per minute

Speech length

710 words

Speech time

305 seconds

Integrating open-source software into educational programs and curricula

Explanation

Dr. Sultan discusses the integration of open-source software into educational programs and curricula at Imam Muhammad Bin Saud University. He emphasizes the importance of expanding programming regulations to include sections for open-source programming.


Evidence

The university has implemented decisions to adopt open-source programs in various faculties and reviewed plans and curricula in technical educational materials.


Major Discussion Point

Open-source software adoption strategies in Saudi universities


Agreed with

Dr.Hamed Saleh Alqahtani


AUDIENCE


Agreed on

Importance of developing comprehensive digital transformation strategies


Aiming to become a world-class open-source software repository

Explanation

Dr. Sultan expresses the ambition for Saudi Arabia to become a world-class repository for open-source software. He emphasizes the country’s digital capabilities and potential to achieve this goal.


Major Discussion Point

Future outlook for open-source software in Saudi Arabia


Differed with

Dr.Hamed Saleh Alqahtani


Differed on

Approach to monetization of open-source software


D

Dr.Saleh Albahli

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Recruiting and supporting young talent in open-source development

Explanation

Dr. Saleh discusses Al-Jouf University’s focus on increasing its ability to recruit and support young talents in open-source software development. He emphasizes the importance of attracting and nurturing talent from within and outside the Kingdom.


Evidence

The university has implemented programs to introduce students to open-source software before enrolling, offered scholarships for international students in software engineering, and organized hackathons to identify and sponsor talented individuals.


Major Discussion Point

Open-source software adoption strategies in Saudi universities


Agreed with

Dr.Hamed Saleh Alqahtani


AUDIENCE


Agreed on

Supporting and nurturing talent in open-source software development


Collaborating with industry sectors to build sustainable open-source ecosystems

Explanation

Dr. Saleh highlights the importance of partnerships between universities and industry sectors to build sustainable open-source ecosystems. He discusses various initiatives and collaborations aimed at promoting open-source software development.


Evidence

The university has partnerships with technical companies to build specialized e-stores and develop open-source software. They also collaborate with government sectors to exchange knowledge and adopt developed software.


Major Discussion Point

Successful implementations and partnerships for open-source software



AUDIENCE

Speech speed

143 words per minute

Speech length

4573 words

Speech time

1908 seconds

Promoting change towards open-source systems through education and partnerships

Explanation

The speaker discusses how Al-Qasim University is promoting change towards open-source systems through education, research, innovation, and strategic partnerships. They emphasize the importance of these elements in advancing open-source programming adoption.


Evidence

Al-Qasim University has implemented resolutions to promote open-source programming in various departments and is launching initiatives through the Faculty of Science and Research to target students and teachers.


Major Discussion Point

Open-source software adoption strategies in Saudi universities


Agreed with

Dr. Hamed Saleh Alqahtani


Dr. Sultan Alqahtani


Agreed on

Importance of developing comprehensive digital transformation strategies


Adopting open-source software to reduce reliance on commercial software

Explanation

The speaker highlights the financial burden of commercial software on universities and discusses Al-Qasim University’s efforts to adopt open-source software. They aim to reduce reliance on commercial software and develop internal services using open-source solutions.


Evidence

By the end of the year, Al-Qasim University expects approximately 40% of the services offered inside the university to be built on open-source software.


Major Discussion Point

Successful implementations and partnerships for open-source software


Agreed with

Dr. Hamed Saleh Alqahtani


Dr. Saleh Albahli


Agreed on

Supporting and nurturing talent in open-source software development



Muhammed Bin Saud

Speech speed

137 words per minute

Speech length

653 words

Speech time

284 seconds

Developing open-source solutions to support academic and research processes

Explanation

The speaker discusses how the University of Imam Muhammad Bin Saud has used open-source programming to develop solutions supporting academic and research processes. They emphasize the importance of sharing and collaboration in the open-source community.


Evidence

The university has contributed several products to the digital warehouse, including systems for employment, access, field training, and communication. These systems have been downloaded over 7 million times and shared through 657 files and programs.


Major Discussion Point

Successful implementations and partnerships for open-source software



Dr. Muneerah Badr Almahasheer

Speech speed

139 words per minute

Speech length

1369 words

Speech time

589 seconds

Improving classification and filtering of Saudi government open-source software

Explanation

Dr. Muneerah emphasizes the need to improve the classification and filtering process of Saudi government’s free software in the digital warehouse. She expresses pride in these products and anticipates their global recognition.


Major Discussion Point

Future outlook for open-source software in Saudi Arabia



Ammira Al-mahashir

Speech speed

186 words per minute

Speech length

62 words

Speech time

20 seconds

Expanding the digital warehouse to include international open-source products

Explanation

Ammira expresses pride that the open-source software store in Mosul holds almost 4,000 products from different parties at the international level. She looks forward to further development and expansion in this area.


Evidence

The open-source software store in Mosul contains almost 4,000 products from different parties at the international level.


Major Discussion Point

Future outlook for open-source software in Saudi Arabia


Agreements

Agreement Points

Importance of developing comprehensive digital transformation strategies

speakers

Dr. Hamed Saleh Alqahtani


Dr. Sultan Alqahtani


AUDIENCE


arguments

Developing comprehensive digital transformation strategies


Integrating open-source software into educational programs and curricula


Promoting change towards open-source systems through education and partnerships


summary

The speakers agree on the need for universities to develop comprehensive strategies that incorporate open-source software into their digital transformation efforts, educational programs, and partnerships.


Supporting and nurturing talent in open-source software development

speakers

Dr. Hamed Saleh Alqahtani


Dr. Saleh Albahli


AUDIENCE


arguments

Creating open-source programming factories and communities within universities


Recruiting and supporting young talent in open-source development


Adopting open-source software to reduce reliance on commercial software


summary

The speakers emphasize the importance of creating environments within universities to support and nurture talent in open-source software development, including establishing programming factories, communities, and initiatives to attract young talent.


Similar Viewpoints

Both speakers express a vision for Saudi Arabia to become a leader in open-source software, emphasizing the need for a clear national direction and the potential to achieve world-class status in this field.

speakers

Dr. Hamed Saleh Alqahtani


Dr. Sultan Alqahtani


arguments

Developing a clear national roadmap for open-source software adoption


Aiming to become a world-class open-source software repository


These speakers highlight the importance of collaboration between universities, industry sectors, and government agencies to develop and implement open-source solutions that support academic and research processes.

speakers

Dr. Saleh Albahli


Muhammed Bin Saud


arguments

Collaborating with industry sectors to build sustainable open-source ecosystems


Developing open-source solutions to support academic and research processes


Unexpected Consensus

Economic potential of open-source software

speakers

Dr.Hamed Saleh Alqahtani


AUDIENCE


arguments

Developing a clear national roadmap for open-source software adoption


Adopting open-source software to reduce reliance on commercial software


explanation

While the primary focus was on educational and developmental aspects, there was an unexpected consensus on the economic potential of open-source software, both in terms of reducing costs for universities and potentially creating new economic opportunities.


Overall Assessment

Summary

The main areas of agreement include the importance of developing comprehensive digital transformation strategies, supporting talent in open-source software development, collaborating with industry and government sectors, and recognizing the potential of open-source software to reduce costs and create new opportunities.


Consensus level

There is a high level of consensus among the speakers on the importance and potential of open-source software in Saudi universities. This strong agreement implies a unified vision for the future of open-source software in Saudi Arabia’s higher education system, which could lead to more coordinated efforts in implementation and development across universities.


Differences

Different Viewpoints

Approach to monetization of open-source software

speakers

Dr. Hamed Saleh Alqahtani


Dr. Sultan Alqahtani


arguments

Developing a clear national roadmap for open-source software adoption


Aiming to become a world-class open-source software repository


summary

While Dr. Hamed emphasizes the need for a clear national roadmap for open-source software adoption, Dr. Sultan focuses more on the ambition to become a world-class repository for open-source software. This suggests a difference in approach, with one focusing on national strategy and the other on international positioning.


Unexpected Differences

Monetization of open-source software

speakers

Dr. Hamed Saleh Alqahtani


Muhammed Bin Saud


arguments

Developing a clear national roadmap for open-source software adoption


Developing open-source solutions to support academic and research processes


explanation

While most speakers focused on the adoption and development of open-source software, there was an unexpected discussion about the potential monetization of these solutions. This difference in perspective on the commercial aspects of open-source software was not initially anticipated in the context of university adoption strategies.


Overall Assessment

summary

The main areas of disagreement revolve around the specific strategies for implementing open-source software in universities, the approach to talent development, and the potential for monetization of open-source solutions.


difference_level

The level of disagreement among the speakers is relatively low. Most speakers agree on the importance of adopting open-source software in universities but differ in their specific approaches and priorities. These differences are not fundamental and do not significantly impede the overall goal of promoting open-source software adoption in Saudi universities. The implications of these differences suggest a need for a more coordinated national strategy that can accommodate various approaches while maintaining a unified direction.


Partial Agreements

Partial Agreements

All speakers agree on the importance of integrating open-source software into university strategies and curricula. However, they differ in their specific approaches: Dr. Hamed focuses on comprehensive digital transformation strategies, Dr. Sultan emphasizes curriculum integration, Dr. Saleh prioritizes talent recruitment, and the AUDIENCE speaker highlights education and partnerships.

speakers

Dr. Hamed Saleh Alqahtani


Dr. Sultan Alqahtani


Dr. Saleh Albahli


AUDIENCE


arguments

Developing comprehensive digital transformation strategies


Integrating open-source software into educational programs and curricula


Recruiting and supporting young talent in open-source development


Promoting change towards open-source systems through education and partnerships


Takeaways

Key Takeaways

Saudi universities are developing comprehensive strategies to adopt open-source software in education, research, and administrative processes


Universities are creating open-source programming factories, communities, and partnerships to support development and innovation


There is a focus on integrating open-source software into curricula and supporting student talent in this area


Universities are developing open-source solutions to reduce reliance on commercial software and improve efficiency


There is a vision to make Saudi Arabia a leader in open-source software development and create a world-class repository


Resolutions and Action Items

Develop a clear national roadmap for open-source software adoption in Saudi universities


Improve classification and filtering of Saudi government open-source software in the digital warehouse


Expand partnerships between universities and industry to build sustainable open-source ecosystems


Increase the percentage of university services built on open-source software


Unresolved Issues

How to address security challenges in open-source software adoption


Determining the appropriate licensing model for government open-source software


Balancing free sharing of open-source code with potential for commercialization or sustainability


Suggested Compromises

Allowing symbolic fees for hosting or maintenance to sustain open-source ecosystems while keeping core software free


Sharing open-source code freely but potentially charging for implementation services or specialized versions


Thought Provoking Comments

Today, we are at the University of King Khaled and through the University’s Digital Transformation Strategy, one of the goals of the University’s Digital Transformation Strategy is to promote research and innovation. Through this element, we have launched a number of initiatives for the development of open-source software and initiatives that have been launched in cooperation with the Innovation and Business Leadership Center, including the initiative to support small projects and graduate projects.

speaker

Dr. Sultan Alqahtani


reason

This comment introduces the idea of universities not just adopting open-source software, but actively promoting its development through initiatives and partnerships. It shows a proactive approach to digital transformation.


impact

This shifted the discussion towards concrete examples of how universities are implementing open-source strategies, leading other participants to share their own initiatives.


Today, the methodology we follow in the Imam University, in this field, is still on the way, and the implementation is now about a year and five months, and God willing, the process will develop, and there will be a complete adoption of open source software, whether on the level of scientific research, or even on the level of development research in the field of electronic information technology.

speaker

Dr. Sultan Alqahtani


reason

This comment highlights the ongoing nature of open-source adoption and suggests that it’s a process that extends beyond just implementation to include research and development.


impact

It prompted other participants to discuss their own timelines and methodologies for adopting open-source software, broadening the conversation to include long-term strategies.


We made sure that there is a fundamental component in this strategy, which is the adoption of open-source programming. Because we all know that we all have one goal, which is to have in this precious kingdom, a programmer among a thousand residents.

speaker

Dr. Hamed Saleh Alqahtani


reason

This comment ties the adoption of open-source programming to a broader national goal, showing how university strategies align with national objectives.


impact

It elevated the discussion from university-specific strategies to the role of universities in achieving national technology goals.


Today, we are with the colleagues in the Faculty of Applications, we are working hard to have diplomas for remote learning programs in such programs, or the employment of programs in many diplomas.

speaker

Dr. Hamed Saleh Alqahtani


reason

This comment introduces the idea of integrating open-source programming into formal education programs, showing a commitment to long-term skill development.


impact

It led to further discussion about how universities are adapting their curricula and programs to support open-source adoption.


On the contrary, the open-source programming opens wide and deep horizons in the topic of participation between the parties. Participation is not only the participation of the source code, and then it is over. Participation may be the participation of the knowledge, the participation of relations, the direct contact with government agencies and development teams.

speaker

Dr. Sultan Alqahtani


reason

This comment challenges the traditional view of open-source collaboration, expanding it beyond just code sharing to include knowledge and relationship building.


impact

It broadened the discussion about the benefits of open-source adoption, leading to more comprehensive considerations of its impact.


Overall Assessment

These key comments shaped the discussion by moving it from general statements about open-source adoption to specific strategies and initiatives being implemented by universities. They highlighted the multifaceted nature of open-source adoption, including its role in education, research, national goals, and inter-institutional collaboration. The discussion evolved from simply describing open-source initiatives to exploring their broader implications for university strategies, national technology goals, and the future of education and innovation in Saudi Arabia.


Follow-up Questions

How can universities develop a comprehensive strategy for digital transformation and adoption of open-source free software?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This question is important as it addresses the need for universities to have a strategic approach to implementing open-source software and digital transformation.


How can universities establish and support an open-source technology platform to enhance small and medium-sized projects?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This area of research is crucial for understanding how universities can contribute to the growth of small and medium enterprises through open-source technology.


What specific goals does Al-Jouf University have for increasing its ability to recruit and support young talents in the open-source software system?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This question is important for understanding concrete strategies universities are employing to attract and nurture talent in the open-source field.


How can universities promote change towards a blue ocean strategy through open-source programming?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This area of research is significant for exploring how universities can lead innovation in open-source programming and create new market spaces.


What are the main successful experiences that led universities to develop open-source programming to support academic and administrative processes?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This question is important for identifying best practices and successful implementations of open-source programming in university settings.


How can universities use open-source programming to develop solutions that support academic and research processes?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This area of research is crucial for understanding the practical applications of open-source programming in enhancing university operations and research capabilities.


How can partnerships between universities and industrial sectors build a sustainable system for open-source software?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This question is important for exploring collaborative models between academia and industry to ensure the long-term viability of open-source software initiatives.


What is the roadmap for open-source software adoption in universities?

speaker

Dr. Hamed Saleh Alqahtani


explanation

This area of research is crucial for providing clear guidance and direction for universities in implementing open-source software strategies.


How can the classification and filtering of Saudi government’s free software be improved in the digital warehouse?

speaker

Dr. Muneerah Badr Almahasheer


explanation

This question is important for enhancing the usability and accessibility of open-source software resources for government and educational institutions.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.